NASA Technical Reports Server (NTRS)
Alston, D. W.
1981-01-01
The objective of this research was to design a statistical model capable of performing an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and solving each of these yielded a solution to the general research problem. The capabilities of the resulting statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased until the quadratic effect in the residuals is removed. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
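A minimal sketch of the order-selection idea described above: raise the polynomial order until a quadratic fit to the residuals no longer explains meaningful variance. The coefficient data and variable names are invented for illustration, not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.linspace(-4, 12, 33)                   # angle of attack, deg
cl = 0.1 * alpha + 0.002 * alpha**2 + rng.normal(0, 0.01, alpha.size)

for order in range(1, 5):
    coef = np.polyfit(alpha, cl, order)
    resid = cl - np.polyval(coef, alpha)
    # Check the residuals for leftover quadratic structure: once a quadratic
    # fit to the residuals explains almost no variance, stop raising the order.
    quad = np.polyval(np.polyfit(alpha, resid, 2), alpha)
    r2 = 1 - np.sum((resid - quad)**2) / np.sum((resid - resid.mean())**2)
    print(f"order {order}: residual RMS {resid.std():.4f}, quadratic R^2 {r2:.3f}")
```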
Sánchez-Jiménez, Pedro E; Pérez-Maqueda, Luis A; Perejón, Antonio; Criado, José M
2013-02-05
This paper provides some clarifications regarding the use of model-fitting methods of kinetic analysis for estimating the activation energy of a process, in response to some results recently published in Chemistry Central Journal. The model-fitting methods of Arrhenius and Šatava are used to determine the activation energy of a single simulated curve. It is shown that most kinetic models correctly fit the data, each providing a different value for the activation energy. Therefore it is not really possible to determine the correct activation energy from a single non-isothermal curve. On the other hand, when a set of curves recorded under different heating schedules is used, the correct kinetic parameters can be clearly discerned. Here, it is shown that the activation energy and the kinetic model cannot be unambiguously determined from a single experimental curve recorded under non-isothermal conditions. Thus, the use of a set of curves recorded under different heating schedules is mandatory if model-fitting methods are employed.
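A hedged sketch of the single-curve ambiguity the authors describe: one simulated non-isothermal curve (first-order kinetics) is fitted with two different reaction models through a Coats-Redfern-type integral linearization, ln[g(α)/T²] = ln[AR/(βEa)] − Ea/(RT), used here as a stand-in for the model-fitting methods named above. Both fits come out nearly linear, yet they imply very different activation energies. All parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.stats import linregress

R, Ea, A, beta = 8.314, 120e3, 1e10, 10 / 60.0   # J/mol, true Ea, 1/s, K/s
T = np.linspace(500, 750, 2000)                  # K
g_true = cumulative_trapezoid((A / beta) * np.exp(-Ea / (R * T)), T, initial=0)
alpha = 1 - np.exp(-g_true)                      # first-order: g(a) = -ln(1-a)

mask = (alpha > 0.05) & (alpha < 0.95)
models = {"F1: -ln(1-a)": -np.log(1 - alpha[mask]),
          "D1: a^2    ": alpha[mask] ** 2}
for name, g in models.items():
    fit = linregress(1 / T[mask], np.log(g / T[mask] ** 2))
    # Both models give a near-perfect line, but very different slopes (Ea).
    print(f"{name}: Ea = {-fit.slope * R / 1e3:6.1f} kJ/mol, r^2 = {fit.rvalue**2:.4f}")
```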
Bentzley, Brandon S.; Fender, Kimberly M.; Aston-Jones, Gary
2012-01-01
Rationale: Behavioral-economic demand curve analysis offers several useful measures of drug self-administration. Although generation of demand curves previously required multiple days, recent within-session procedures allow curve construction from a single 110-min cocaine self-administration session, making behavioral-economic analyses available to a broad range of self-administration experiments. However, a mathematical approach of curve fitting has not been reported for the within-session threshold procedure. Objectives: We review demand curve analysis in drug self-administration experiments and provide a quantitative method for fitting curves to single-session data that incorporates relative stability of brain drug concentration. Methods: Sprague-Dawley rats were trained to self-administer cocaine, and then tested with the threshold procedure in which the cocaine dose was sequentially decreased on a fixed ratio-1 schedule. Price points (responses/mg cocaine) outside of relatively stable brain cocaine concentrations were removed before curves were fit. Curve-fit accuracy was determined by the degree of correlation between graphical and calculated parameters for cocaine consumption at low price (Q0) and the price at which maximal responding occurred (Pmax). Results: Removing price points that occurred at relatively unstable brain cocaine concentrations generated precise estimates of Q0 and resulted in Pmax values with significantly closer agreement with graphical Pmax than conventional methods. Conclusion: The exponential demand equation can be fit to single-session data using the threshold procedure for cocaine self-administration. Removing data points that occur during relatively unstable brain cocaine concentrations resulted in more accurate estimates of demand curve slope than graphical methods, permitting a more comprehensive analysis of drug self-administration via a behavioral-economic framework. PMID:23086021
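A minimal sketch of fitting the exponential demand equation of Hursh and Silberberg, log₁₀Q = log₁₀Q₀ + k(e^(−α·Q₀·C) − 1), to threshold-procedure price/consumption points, with Pmax read off as the price of peak expenditure. The price series, k value, and starting guesses are invented assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

price = np.array([1, 2, 4, 8, 16, 32, 64, 128.0])    # unit price, responses/mg
consumption = np.array([1.1, 1.0, 0.95, 0.9, 0.7, 0.45, 0.2, 0.08])  # mg

k = 3.0  # range constant, often fixed across fits (assumption)

def log_demand(C, Q0, alpha):
    return np.log10(Q0) + k * (np.exp(-alpha * Q0 * C) - 1)

(Q0, alpha), _ = curve_fit(log_demand, price, np.log10(consumption), p0=[1.0, 0.01])

C = np.logspace(-1, 3, 2000)
Q = 10 ** log_demand(C, Q0, alpha)
Pmax = C[np.argmax(Q * C)]   # price at maximal responding (peak expenditure)
print(f"Q0 = {Q0:.2f} mg, alpha = {alpha:.4f}, Pmax = {Pmax:.1f} responses/mg")
```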
Curve fitting air sample filter decay curves to estimate transuranic content.
Hayes, Robert B; Chiou, Hung Cheng
2004-01-01
By testing industry-standard techniques for radon progeny evaluation on air sample filters, a new technique is developed to evaluate transuranic activity on air filters by curve fitting the decay curves. The industry method modified here is simply the use of filter activity measurements at different times to estimate the air concentrations of radon progeny. The primary modification was to look not for specific radon progeny values but rather for transuranic activity. By using a method that provides reasonably conservative estimates of the transuranic activity present on a filter, some credit for the decay curve shape can then be taken. Rigorous statistical analysis of the curve fits to over 65 samples with no transuranic activity, taken over a 10-month period, allowed an optimization of the fitting function and of quality tests for this purpose.
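One plausible reading of the approach, sketched under stated assumptions: gross count rates are fitted with a decaying radon-progeny term plus a constant term, and the constant is taken as the long-lived (transuranic) component, which is effectively flat on a time scale of hours. The effective progeny half-life and the count data below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([10, 30, 60, 120, 240, 480.0])            # min after sampling
counts = np.array([5100, 3600, 2100, 800, 160, 60.0])  # gross alpha cpm

def decay(t, A, lam, tru):
    # short-lived progeny (single effective exponential) + constant TRU term
    return A * np.exp(-lam * t) + tru

p0 = [5000.0, np.log(2) / 45.0, 10.0]                  # ~45 min effective T1/2
(A, lam, tru), cov = curve_fit(decay, t, counts, p0=p0)
print(f"TRU estimate: {tru:.1f} cpm (1-sigma {np.sqrt(cov[2, 2]):.1f})")
```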
Probability Density Functions of Observed Rainfall in Montana
NASA Technical Reports Server (NTRS)
Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.
1995-01-01
The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis on large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis readily allow development of distributions of radar reflectivity factor (and, by extension, rain rate). Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, one PDF would exist for all cases, or many PDFs would share the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases would validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89 percent of the observed distributions. Further analysis indicates that the Type 1 curve approximates the shape of the distributions but quantitatively does not produce a great fit.
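Since Pearson Type I is a shifted, scaled beta density, a method-of-moments fit can be sketched with a beta distribution after mapping the data onto [0, 1]; the rain-rate sample below is synthetic, standing in for the radar-derived distributions.

```python
import numpy as np
from scipy import stats

rain = stats.gamma.rvs(2.0, scale=3.0, size=5000, random_state=1)  # mm/h, toy data

lo, hi = rain.min(), rain.max()
x = (rain - lo) / (hi - lo)                 # map onto [0, 1]
m, v = x.mean(), x.var()
common = m * (1 - m) / v - 1                # beta moment estimators
a_hat, b_hat = m * common, (1 - m) * common
print(f"Pearson I (beta) shape parameters: a = {a_hat:.2f}, b = {b_hat:.2f}")

# Quantitative goodness of fit against the empirical distribution:
D, p = stats.kstest(x, "beta", args=(a_hat, b_hat))
print(f"KS D = {D:.3f}, p = {p:.3g}")
```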
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-09-01
The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. The overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower-residual result. In quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectrum of five different concentrations of CuSO4·5H2O solution were decomposed and corrected using curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of the Cu was established. The error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
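One plausible reading of the residual-feedback idea, sketched as iterative peak stripping: each peak is refitted on the data with the current estimate of the other peak removed, and the loop continues while the residual keeps dropping. Peak positions and widths are invented, not the actual Cu-Fe lines.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, c, w):
    return a * np.exp(-((x - c) / w) ** 2)

x = np.linspace(321, 327, 400)
rng = np.random.default_rng(3)
y = gauss(x, 1.0, 323.0, 0.4) + gauss(x, 0.6, 324.0, 0.5) \
    + rng.normal(0, 0.02, x.size)

p1, p2 = [0.8, 322.8, 0.5], [0.5, 324.3, 0.5]
for it in range(5):
    # refit each peak on the data with the other peak's estimate stripped out
    p1, _ = curve_fit(gauss, x, y - gauss(x, *p2), p0=p1)
    p2, _ = curve_fit(gauss, x, y - gauss(x, *p1), p0=p2)
    resid = y - gauss(x, *p1) - gauss(x, *p2)
    print(f"iteration {it}: residual SSQ = {np.sum(resid**2):.4f}")
```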
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-01-01
A sophisticated nonlinear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the chi-squared matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 sec.
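The report's key idea, treating the standards' masses as fitted quantities with known errors, is what errors-in-variables (orthogonal distance) regression does; the sketch below uses scipy.odr as a stand-in for the VA02A-based program, with invented calibration data.

```python
import numpy as np
from scipy import odr

mass = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])         # mg U, gravimetric
counts = np.array([205, 398, 810, 1185, 1620, 1985.0])  # XRF response
mass_err = 0.002 * mass                                 # 0.2% standards
counts_err = np.sqrt(counts)                            # counting statistics

# Linear calibration model; ODR weights both x and y errors consistently.
model = odr.Model(lambda beta, m: beta[0] * m + beta[1])
data = odr.RealData(mass, counts, sx=mass_err, sy=counts_err)
out = odr.ODR(data, model, beta0=[2000.0, 0.0]).run()
print("calibration slope, intercept:", out.beta)
print("parameter standard errors:  ", out.sd_beta)
```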
ERIC Educational Resources Information Center
Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.
1994-01-01
Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)
Measuring Systematic Error with Curve Fits
ERIC Educational Resources Information Center
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
Estimating the Area Under ROC Curve When the Fitted Binormal Curves Demonstrate Improper Shape.
Bandos, Andriy I; Guo, Ben; Gur, David
2017-02-01
The "binormal" model is the most frequently used tool for parametric receiver operating characteristic (ROC) analysis. The binormal ROC curves can have "improper" (non-concave) shapes that are unrealistic in many practical applications, and several tools (eg, PROPROC) have been developed to address this problem. However, due to the general robustness of binormal ROCs, the improperness of the fitted curves might carry little consequence for inferences about global summary indices, such as the area under the ROC curve (AUC). In this work, we investigate the effect of severe improperness of fitted binormal ROC curves on the reliability of AUC estimates when the data arise from an actually proper curve. We designed theoretically proper ROC scenarios that induce severely improper shape of fitted binormal curves in the presence of well-distributed empirical ROC points. The binormal curves were fitted using maximum likelihood approach. Using simulations, we estimated the frequency of severely improper fitted curves, bias of the estimated AUC, and coverage of 95% confidence intervals (CIs). In Appendix S1, we provide additional information on percentiles of the distribution of AUC estimates and bias when estimating partial AUCs. We also compared the results to a reference standard provided by empirical estimates obtained from continuous data. We observed up to 96% of severely improper curves depending on the scenario in question. The bias in the binormal AUC estimates was very small and the coverage of the CIs was close to nominal, whereas the estimates of partial AUC were biased upward in the high specificity range and downward in the low specificity range. Compared to a non-parametric approach, the binormal model led to slightly more variable AUC estimates, but at the same time to CIs with more appropriate coverage. The improper shape of the fitted binormal curve, by itself, ie, in the presence of a sufficient number of well-distributed points, does not imply unreliable AUC-based inferences. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Enhancements of Bayesian Blocks; Application to Large Light Curve Databases
NASA Technical Reports Server (NTRS)
Scargle, Jeff
2015-01-01
Bayesian Blocks are optimal piecewise linear representations (step function fits) of light curves. The simple algorithm implementing this idea, using dynamic programming, has been extended to include more data modes and fitness metrics, multivariate analysis, and data on the circle (Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations, Scargle, Norris, Jackson and Chiang 2013, ApJ, 764, 167), as well as new results on background subtraction and refinement of the procedure for precise timing of transient events in sparse data. Example demonstrations will include exploratory analysis of the Kepler light curve archive in a search for "star-tickling" signals from extraterrestrial civilizations (The Cepheid Galactic Internet, Learned, Kudritzki, Pakvasa, and Zee, 2008, arXiv:0809.0339; Walkowicz et al., in progress).
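A usage sketch of the published algorithm via its astropy implementation (the abstract itself names no particular code); the event times are simulated, with a burst injected between t = 20 and 22.

```python
import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(42)
t = np.sort(np.concatenate([rng.uniform(0, 50, 200),     # steady background
                            rng.uniform(20, 22, 80)]))   # a transient burst

# 'events' fitness for event (photon arrival) data; p0 is the false-alarm rate.
edges = bayesian_blocks(t, fitness="events", p0=0.01)
print("change points (block edges):", np.round(edges, 2))
```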
NASA Technical Reports Server (NTRS)
Elliott, R. D.; Werner, N. M.; Baker, W. M.
1975-01-01
The Aerodynamic Data Analysis and Integration System (ADAIS) is described: a highly interactive computer graphics program capable of manipulating large quantities of data such that addressable elements of a data base can be called up for graphic display, compared, curve fit, stored, retrieved, differenced, etc. The general nature of the system is evidenced by the fact that limited usage has already occurred with data bases consisting of thermodynamic, basic loads, and flight dynamics data. Productivity five times that of conventional manual methods of wind tunnel data analysis is routinely achieved with ADAIS. In wind tunnel data analysis, data from one or more runs of a particular test may be called up and displayed along with data from one or more runs of a different test. Curves may be faired through the data points by any of four methods, including cubic spline and least squares polynomial fit up to seventh order.
Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee
2013-07-01
Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
Batsoulis, A N; Nacos, M K; Pappas, C S; Tarantilis, P A; Mavromoustakos, T; Polissiou, M G
2004-02-01
Hemicellulose samples were isolated from kenaf (Hibiscus cannabinus L.). Hemicellulosic fractions usually contain a variable percentage of uronic acids. The uronic acid content (expressed in polygalacturonic acid) of the isolated hemicelluloses was determined by diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method. A linear relationship between uronic acid content and the sum of the peak areas at 1745, 1715, and 1600 cm⁻¹ was established with a high correlation coefficient (0.98). The deconvolution analysis using the curve-fitting method allowed the elimination of spectral interferences from other cell wall components. The above method was compared with an established spectrophotometric method and was found equivalent in accuracy and repeatability (t-test, F-test). This method is applicable in analysis of natural or synthetic mixtures and/or crude substances. The proposed method is simple, rapid, and nondestructive for the samples.
Waveform fitting and geometry analysis for full-waveform lidar feature extraction
NASA Astrophysics Data System (ADS)
Tsai, Fuan; Lai, Jhe-Syuan; Cheng, Yi-Hsiu
2016-10-01
This paper presents a systematic approach that integrates spline curve fitting and geometry analysis to extract full-waveform LiDAR features for land-cover classification. The cubic smoothing spline algorithm is used to fit the waveform curve of the received LiDAR signals. After that, the local peak locations of the waveform curve are detected using a second derivative method. According to the detected local peak locations, commonly used full-waveform features such as full width at half maximum (FWHM) and amplitude can then be obtained. In addition, the number of peaks, time difference between the first and last peaks, and the average amplitude are also considered as features of LiDAR waveforms with multiple returns. Based on the waveform geometry, dynamic time-warping (DTW) is applied to measure the waveform similarity. The sum of the absolute amplitude differences that remain after time-warping can be used as a similarity feature in a classification procedure. An airborne full-waveform LiDAR data set was used to test the performance of the developed feature extraction method for land-cover classification. Experimental results indicate that the developed spline curve-fitting algorithm and geometry analysis can extract helpful full-waveform LiDAR features to produce better land-cover classification than conventional LiDAR data and feature extraction methods. In particular, the multiple-return features and the dynamic time-warping index can improve the classification results significantly.
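A sketch of the processing chain under stated assumptions: fit a smoothing spline (order 4, so its derivative is cubic and exposes a roots() method), locate peaks from the derivative's roots and the sign of the second derivative, then read off per-echo amplitude (FWHM extraction follows the same pattern). The waveform samples are synthetic.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.linspace(0, 60, 121)                        # ns
wave = (120 * np.exp(-0.5 * ((t - 18) / 3) ** 2) +
        70 * np.exp(-0.5 * ((t - 35) / 4) ** 2) +
        np.random.default_rng(7).normal(0, 2, t.size))

s = UnivariateSpline(t, wave, k=4, s=t.size * 4)   # k=4 so s' is a cubic spline
d1 = s.derivative(1)
crit = d1.roots()                                  # candidate peak locations
peaks = [c for c in crit if s.derivative(2)(c) < 0 and s(c) > 10]
for p in peaks:
    print(f"peak at {p:.1f} ns, amplitude {float(s(p)):.1f}")
```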
Multimodal determination of Rayleigh dispersion and attenuation curves using the circle fit method
NASA Astrophysics Data System (ADS)
Verachtert, R.; Lombaert, G.; Degrande, G.
2018-03-01
This paper introduces the circle fit method for the determination of multi-modal Rayleigh dispersion and attenuation curves as part of a Multichannel Analysis of Surface Waves (MASW) experiment. The wave field is transformed to the frequency-wavenumber (fk) domain using a discretized Hankel transform. In a Nyquist plot of the fk-spectrum, displaying the imaginary part against the real part, the Rayleigh wave modes correspond to circles. The experimental Rayleigh dispersion and attenuation curves are derived from the angular sweep of the central angle of these circles. The method can also be applied to the analytical fk-spectrum of the Green's function of a layered half-space in order to compute dispersion and attenuation curves, as an alternative to solving an eigenvalue problem. A MASW experiment is subsequently simulated for a site with a regular velocity profile and a site with a soft layer trapped between two stiffer layers. The performance of the circle fit method to determine the dispersion and attenuation curves is compared with the peak picking method and the half-power bandwidth method. The circle fit method is found to be the most accurate and robust method for the determination of the dispersion curves. When determining attenuation curves, the circle fit method and half-power bandwidth method are accurate if the mode exhibits a sharp peak in the fk-spectrum. Furthermore, simulated and theoretical attenuation curves determined with the circle fit method agree very well. A similar correspondence is not obtained when using the half-power bandwidth method. Finally, the circle fit method is applied to measurement data obtained for a MASW experiment at a site in Heverlee, Belgium. In order to validate the soil profile obtained from the inversion procedure, force-velocity transfer functions were computed and found in good correspondence with the experimental transfer functions, especially in the frequency range between 5 and 80 Hz.
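Near a Rayleigh mode, the fk-spectrum traces a circle in the Nyquist plane, so a simple algebraic (Kasa) circle fit recovers the centre and radius from which the angular sweep is computed; this sketch fits noisy synthetic points, not a real fk-spectrum, and uses the Kasa formulation as one reasonable choice.

```python
import numpy as np

theta = np.linspace(0.3, 5.5, 40)
cx, cy, r = 0.8, -0.5, 1.7                       # ground truth circle
x = cx + r * np.cos(theta) + np.random.default_rng(0).normal(0, 0.02, 40)
y = cy + r * np.sin(theta) + np.random.default_rng(1).normal(0, 0.02, 40)

# Kasa fit: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) is linear
# in (cx, cy, c0), so one least-squares solve suffices.
A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
b = x ** 2 + y ** 2
(cx_f, cy_f, c0), *_ = np.linalg.lstsq(A, b, rcond=None)
r_f = np.sqrt(c0 + cx_f ** 2 + cy_f ** 2)
print(f"centre = ({cx_f:.3f}, {cy_f:.3f}), radius = {r_f:.3f}")
```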
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1987-01-01
New, improved curve fits for the thermodynamic properties of equilibrium air have been developed. The curve fits are for pressure, speed of sound, temperature, entropy, enthalpy, density, and internal energy. These curve fits can be readily incorporated into new or existing computational fluid dynamics codes if real gas effects are desired. The curve fits are constructed from Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits. These improvements are due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25 000 K and densities from 10⁻⁷ to 10³ amagats.
Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.
2016-01-01
Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
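A hedged sketch of the binning experiment: simulate a low-count two-component decay on 256 time bins, rebin toward 42, and fit both versions with the lifetimes fixed to known values so only the component fraction and a scale remain free. Lifetimes, photon count, and bin layout are illustrative assumptions, not the study's exact settings.

```python
import numpy as np
from scipy.optimize import curve_fit

t1, t2, frac, total = 0.4, 2.5, 0.7, 700       # ns, ns, true a1, photons/decay
edges = np.linspace(0, 12.5, 257)
centers = 0.5 * (edges[:-1] + edges[1:])
p = frac * np.exp(-centers / t1) + (1 - frac) * np.exp(-centers / t2)
counts = np.random.default_rng(5).poisson(total * p / p.sum())

def model(t, a, scale):                        # lifetimes fixed to known values
    return scale * (a * np.exp(-t / t1) + (1 - a) * np.exp(-t / t2))

for nbins in (256, 42):
    m = (counts.size // nbins) * nbins         # trim so bins divide evenly
    f = counts[:m].reshape(nbins, -1).sum(axis=1)
    tc = centers[:m].reshape(nbins, -1).mean(axis=1)
    (a, s), _ = curve_fit(model, tc, f, p0=[0.5, float(f.max())])
    print(f"{nbins} bins: alpha1 = {a:.2f} (true {frac})")
```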
Vajuvalli, Nithin N; Nayak, Krupa N; Geethanath, Sairam
2014-01-01
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is widely used in the diagnosis of cancer and is also a promising tool for monitoring tumor response to treatment. The Tofts model has become a standard for the analysis of DCE-MRI. The process of curve fitting employed in the Tofts equation to obtain the pharmacokinetic (PK) parameters is time-consuming for high resolution scans. Current work demonstrates a frequency-domain approach applied to the standard Tofts equation to speed-up the process of curve-fitting in order to obtain the pharmacokinetic parameters. The results obtained show that using the frequency domain approach, the process of curve fitting is computationally more efficient compared to the time-domain approach.
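A sketch of the speed-up idea under stated assumptions: the standard Tofts model is the convolution Ct(t) = Ktrans·Cp(t) ⊗ e^(−kep·t), so each model evaluation inside the fit can be done with FFTs instead of a time-domain integral. The arterial input function and parameter values are toy choices, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

dt = 2.0                                        # s per frame
t = np.arange(0, 300, dt)
cp = (t / 30.0) * np.exp(-t / 30.0)             # toy arterial input function

def tofts_fft(t, ktrans, kep):
    kernel = np.exp(-kep * t) * dt
    n = 2 * t.size                              # zero-pad to avoid wrap-around
    ct = np.fft.irfft(np.fft.rfft(cp, n) * np.fft.rfft(kernel, n), n)
    return ktrans * ct[: t.size]

truth = tofts_fft(t, 0.12, 0.01)
meas = truth + np.random.default_rng(2).normal(0, 0.002, t.size)
(kt, kep), _ = curve_fit(tofts_fft, t, meas, p0=[0.05, 0.02])
print(f"Ktrans = {kt:.3f}, kep = {kep:.4f} (toy units)")
```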
Mathematical and Statistical Software Index.
1986-08-01
[Index fragment] Listed routines include HMEAN (harmonic mean), MEDIAN (median), MODE (mode), QUANT (quantiles), OGIVE (distribution curve), IQRNG (interpercentile range), and RANGE (range); listed topics include multiphase pivoting algorithms, cross-classification, multiple discriminant analysis, cross-tabulation, multiple-objective models, and curve fitting; other entries include RANGEX (Correct Correlations for Curtailment of Range) and RUMMAGE II (Analysis...).
A Software Tool for the Rapid Analysis of the Sintering Behavior of Particulate Bodies
2017-11-01
bounded by a region that the user selects via cross hairs. Future plot analysis features, such as more complicated curve fitting and modeling functions... Cited: German RM. Grain growth behavior of tungsten heavy alloys based on the master sintering curve concept. Metallurgical and Materials Transactions A.
On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions.
López, S; France, J; Odongo, N E; McBride, R A; Kebreab, E; AlZahal, O; McBride, B W; Dijkstra, J
2015-04-01
Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records corresponding to 122 first, 99 second, and 92 third parity individual lactation curves. The functions were fitted using nonlinear regression procedures, and their performance was assessed using goodness-of-fit statistics (coefficient of determination, residual mean squares, Akaike information criterion, and the correlation and concordance coefficients between observed and adjusted milk yields at several days in milk). Overall, all the growth functions evaluated showed an acceptable fit to the cumulative milk production curves, with the Richards equation ranking first (smallest Akaike information criterion) followed by the Morgan equation. Differences among the functions in their goodness-of-fit were enlarged when fitted to average curves by parity, where the sigmoidal functions with a variable point of inflection (Richards and Morgan) outperformed the other 4 equations. All the functions provided satisfactory predictions of milk yield (calculated from the first derivative of the functions) at different lactation stages, from early to late lactation. The Richards and Morgan equations provided the most accurate estimates of peak yield and total milk production per 305-d lactation, whereas the least accurate estimates were obtained with the logistic equation. In conclusion, classical growth functions (especially sigmoidal functions with a variable point of inflection) proved to be feasible alternatives to fit cumulative milk production curves of dairy cows, resulting in suitable statistical performance and accurate estimates of lactation traits. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Parametric analysis of ATM solar array.
NASA Technical Reports Server (NTRS)
Singh, B. K.; Adkisson, W. B.
1973-01-01
The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides a parametric analysis of the solar panels used in Skylab. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations; the second determines average solar cell characteristics that cause the mathematical model to generate a curve matching the test data; the third is a polynomial fit program that determines the polynomial equations for the solar cell characteristics versus temperature; and the fourth uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.
NASA Astrophysics Data System (ADS)
McCraig, Michael A.; Osinski, Gordon R.; Cloutis, Edward A.; Flemming, Roberta L.; Izawa, Matthew R. M.; Reddy, Vishnu; Fieber-Beyer, Sherry K.; Pompilio, Loredana; van der Meer, Freek; Berger, Jeffrey A.; Bramble, Michael S.; Applin, Daniel M.
2017-03-01
Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical make-up of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to working with spectra may find inadequate help or documentation in the scientific literature or in the software packages available for curve fitting. This problem also extends to the parameterization of spectra and the dissemination of derived metrics. Often, when derived metrics such as band centres are reported, the discussion of exactly how the metrics were derived, or whether any systematic curve fitting was performed, is not included. Herein we provide both recommendations and methods for curve fitting and explanations of the terms and methods used. Techniques to curve fit spectral data of various types are demonstrated using simple-to-understand mathematics and equations written to be used in Microsoft Excel® software, free of macros, in a cut-and-paste fashion that allows one to curve fit spectra in a reasonably user-friendly manner. The procedures use empirical curve fitting, include visualizations, and ameliorate many of the unknowns one may encounter when using black-box commercial software. The provided framework is a comprehensive record of the curve fitting parameters used and the derived metrics, and is intended to be an example of a format for dissemination when curve fitting data.
NASA Astrophysics Data System (ADS)
Parente, Mario; Makarewicz, Heather D.; Bishop, Janice L.
2011-04-01
This study advances curve-fitting modeling of absorption bands of reflectance spectra and applies this new model to spectra of Martian meteorites ALH 84001 and EETA 79001 and data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). This study also details a recently introduced automated parameter initialization technique. We assess the performance of this automated procedure by comparing it to the currently available initialization method and perform a sensitivity analysis of the fit results to variation in initial guesses. We explore the issues related to the removal of the continuum, offer guidelines for continuum removal when modeling the absorptions, and explore different continuum-removal techniques. We further evaluate the suitability of curve fitting techniques using Gaussians/Modified Gaussians to decompose spectra into individual end-member bands. We show that nonlinear least squares techniques such as the Levenberg-Marquardt algorithm achieve results comparable to the MGM model (Sunshine and Pieters, 1993; Sunshine et al., 1990) for meteorite spectra. Finally we use Gaussian modeling to fit CRISM spectra of pyroxene- and olivine-rich terrains on Mars. Analysis of CRISM spectra of two regions shows that the pyroxene-dominated rock spectra measured at Juventae Chasma were modeled well with low-Ca pyroxene, while the pyroxene-rich spectra acquired at Libya Montes required both low-Ca and high-Ca pyroxene for a good fit.
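A sketch of Gaussian band decomposition after continuum removal, in the spirit of the approach above: a straight-line continuum anchored at the band shoulders is divided out, then two Gaussians are fitted in log-reflectance with a Levenberg-Marquardt least-squares routine. The band centres, widths, and the spectrum itself are invented, not CRISM values.

```python
import numpy as np
from scipy.optimize import curve_fit

wl = np.linspace(0.8, 1.2, 200)                          # wavelength, microns
true = np.exp(-0.30 * np.exp(-0.5 * ((wl - 0.95) / 0.05) ** 2)
              - 0.15 * np.exp(-0.5 * ((wl - 1.05) / 0.04) ** 2))
cont = 0.6 + 0.1 * (wl - wl[0])                          # sloped continuum
refl = cont * true + np.random.default_rng(8).normal(0, 0.002, wl.size)

# Straight-line continuum through the band shoulders, then log-reflectance.
line = np.polyval(np.polyfit([wl[0], wl[-1]], [refl[0], refl[-1]], 1), wl)
y = np.log(refl / line)

def bands(wl, s1, c1, w1, s2, c2, w2):
    return (-s1 * np.exp(-0.5 * ((wl - c1) / w1) ** 2)
            - s2 * np.exp(-0.5 * ((wl - c2) / w2) ** 2))

p, _ = curve_fit(bands, wl, y, p0=[0.2, 0.94, 0.05, 0.1, 1.06, 0.05])
print(f"fitted band centres: {p[1]:.3f} and {p[4]:.3f} microns")
```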
Nonlinear Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Everhart, Joel L.; Badavi, Forooz F.
1989-01-01
Nonlinear optimization algorithm helps in finding best-fit curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve-fitting routine based on a description of the quadratic expansion of the chi-square statistic. It utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function, such that chi-square is minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1986-01-01
New improved curve fits for the thermodynamic properties of equilibrium air were developed. The curve fits are for p = p(e,ρ), a = a(e,ρ), T = T(e,ρ), s = s(e,ρ), T = T(p,ρ), h = h(p,ρ), ρ = ρ(p,s), e = e(p,s), and a = a(p,s). These curve fits can be readily incorporated into new or existing Computational Fluid Dynamics (CFD) codes if real-gas effects are desired. The curve fits were constructed using Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits appearing in NASA CR-2470. These improvements were due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10⁻⁷ to 100 amagats (ρ/ρ₀).
NASA Technical Reports Server (NTRS)
Cooper, D. B.; Yalabik, N.
1975-01-01
Approximation of noisy data in the plane by straight lines or elliptic or single-branch hyperbolic curve segments arises in pattern recognition, data compaction, and other problems. The efficient search for and approximation of data by such curves were examined. Recursive least-squares linear curve-fitting was used, and ellipses and hyperbolas are parameterized as quadratic functions in x and y. The error minimized by the algorithm is interpreted, and central processing unit (CPU) times for estimating parameters for fitting straight lines and quadratic curves were determined and compared. CPU time for data search was also determined for the case of straight line fitting. Quadratic curve fitting is shown to require about six times as much CPU time as does straight line fitting, and curves relating CPU time and fitting error were determined for straight line fitting. Results are derived on early sequential determination of whether or not the underlying curve is a straight line.
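A sketch of the recursive least-squares idea for conic fitting: the conic a·x² + b·xy + c·y² + d·x + e·y = 1 is linear in its coefficients, so each new point updates the estimate in O(p²) time instead of refitting from scratch. The ellipse data are synthetic, and the update below is the standard RLS recursion rather than the paper's exact formulation.

```python
import numpy as np

def rls_update(theta, P, phi, z):
    """Standard recursive least-squares step for z ~ phi @ theta."""
    k = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + k * (z - phi @ theta)
    P = P - np.outer(k, phi @ P)
    return theta, P

rng = np.random.default_rng(4)
t = rng.uniform(0, 2 * np.pi, 300)
x = 3 * np.cos(t) + rng.normal(0, 0.02, t.size)   # ellipse, semi-axes 3 and 2
y = 2 * np.sin(t) + rng.normal(0, 0.02, t.size)

theta, P = np.zeros(5), np.eye(5) * 1e6
for xi, yi in zip(x, y):
    phi = np.array([xi**2, xi * yi, yi**2, xi, yi])
    theta, P = rls_update(theta, P, phi, 1.0)
print("conic coefficients:", np.round(theta, 3))   # expect ~[1/9, 0, 1/4, 0, 0]
```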
Runoff potentiality of a watershed through SCS and functional data analysis technique.
Adham, M I; Shirazi, S M; Othman, F; Rahman, S; Yusop, Z; Ismail, Z
2014-01-01
Runoff potentiality of a watershed was assessed based on identifying curve number (CN), soil conservation service (SCS), and functional data analysis (FDA) techniques. Daily discrete rainfall data were collected from weather stations in the study area and analyzed through the lowess method for curve smoothing. As runoff data represent a periodic pattern in each watershed, Fourier series were introduced to fit the smooth curve of eight watersheds. Seven terms of Fourier series were introduced for watersheds 5 and 8, while 8 terms were used for the rest of the watersheds for the best fit of data. Bootstrapping smooth curve analysis reveals that watersheds 1, 2, 3, 6, 7, and 8 have monthly mean runoffs of 29, 24, 22, 23, 26, and 27 mm, respectively, and these watersheds would likely contribute to surface runoff in the study area. The purpose of this study was to transform runoff data into a smooth curve representing the surface runoff pattern and mean runoff of each watershed through statistical methods. This study provides information on the runoff potentiality of each watershed and also provides input data for hydrological modeling. PMID:25152911
Multi-Filter Photometric Analysis of Three β Lyrae-type Eclipsing Binary Stars
NASA Astrophysics Data System (ADS)
Gardner, T.; Hahs, G.; Gokhale, V.
2015-12-01
We present light curve analysis of three variable stars, ASAS J105855+1722.2, NSVS 5066754, and NSVS 9091101. These objects are selected from a list of β Lyrae candidates published by Hoffman et al. (2008). Light curves are generated using data collected at the 31-inch NURO telescope at Lowell Observatory in Flagstaff, Arizona in three filters: Bessell B, V, and R. Additional observations were made using the 14-inch Meade telescope at the Truman State Observatory in Kirksville, Missouri using Baader R, G, and B filters. In this paper, we present the light curves for these three objects and generate a truncated eight-term Fourier fit to these light curves. We use the Fourier coefficients from this fit to confirm ASAS J105855+1722.2 and NSVS 5066754 as β Lyrae type systems, and NSVS 9091101 as possibly an RR Lyrae-type system. We measure the O'Connell effect observed in two of these systems (ASAS J105855+1722.2 and NSVS 5066754), and quantify this effect by calculating the "Light Curve Asymmetry" (LCA) and the "O'Connell Effect Ratio" (OER).
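A minimal sketch of a truncated eight-term Fourier fit to a phased light curve; nonzero sine coefficients signal the maximum-brightness asymmetry that indices such as the LCA and OER quantify. The phases and magnitudes below are synthetic.

```python
import numpy as np

phase = np.random.default_rng(9).uniform(0, 1, 400)
mag = (0.5 * np.cos(2 * np.pi * phase) + 0.1 * np.cos(4 * np.pi * phase)
       + 0.02 * np.sin(2 * np.pi * phase)
       + np.random.default_rng(10).normal(0, 0.01, 400))

N = 8                                        # truncated eight-term fit
cols = [np.ones_like(phase)]
for k in range(1, N + 1):
    cols += [np.cos(2 * np.pi * k * phase), np.sin(2 * np.pi * k * phase)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, mag, rcond=None)
a = coef[1::2]                               # cosine terms
b = coef[2::2]                               # sine terms; nonzero => asymmetry
print("a1..a4:", np.round(a[:4], 3), " b1..b4:", np.round(b[:4], 3))
```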
Dai, Cong; Jiang, Min; Sun, Ming-Jun; Cao, Qin
2018-05-01
Fecal immunochemical test (FIT) is a promising marker for assessment of inflammatory bowel disease activity. However, the utility of FIT for predicting mucosal healing (MH) of ulcerative colitis (UC) patients has yet to be clearly demonstrated. The objective of our study was to perform a diagnostic test accuracy meta-analysis evaluating the accuracy of FIT in predicting MH of UC patients. We systematically searched databases from inception to November 2017 for studies that evaluated MH in UC. The methodological quality of each study was assessed according to the Quality Assessment of Diagnostic Accuracy Studies checklist. The extracted data were pooled using a summary receiver operating characteristic curve model. Random-effects model was used to summarize the diagnostic odds ratio, sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio. Six studies comprising 625 UC patients were included in the meta-analysis. The pooled sensitivity and specificity values for predicting MH in UC were 0.77 (95% confidence interval [CI], 0.72-0.81) and 0.81 (95% CI, 0.76-0.85), respectively. The FIT level had a high rule-in value (positive likelihood ratio, 3.79; 95% CI, 2.85-5.03) and a moderate rule-out value (negative likelihood ratio, 0.26; 95% CI, 0.16-0.43) for predicting MH in UC. The results of the receiver operating characteristic curve analysis (area under the curve, 0.88; standard error of the mean, 0.02) and diagnostic odds ratio (18.08; 95% CI, 9.57-34.13) also revealed improved discrimination for identifying MH in UC with FIT concentration. Our meta-analysis has found that FIT is a simple, reliable non-invasive marker for predicting MH in UC patients. © 2018 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
TH-EF-207A-04: A Dynamic Contrast Enhanced Cone Beam CT Technique for Evaluation of Renal Functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Z; Shi, J; Yang, Y
Purpose: To develop a simple but robust method for the early detection and evaluation of renal functions using a dynamic contrast enhanced cone beam CT technique. Methods: Experiments were performed on an integrated imaging and radiation research platform developed by our lab. Animals (n=3) were anesthetized with 20 uL Ketamine/Xylazine cocktail, and then received 200 uL injection of iodinated contrast agent Iopamidol via tail vein. Cone beam CT was acquired following contrast injection once per minute and up to 25 minutes. The cone beam CT was reconstructed with a dimension of 300×300×800 voxels at 130×130×130 um voxel resolution. The middle kidney slices in the transverse and coronal planes were selected for image analysis. A double exponential function was used to fit the contrast enhanced signal intensity versus the time after contrast injection. Both pixel-based and region of interest (ROI)-based curve fitting were performed. Four parameters obtained from the curve fitting, namely the amplitude and flow constant for both contrast wash-in and wash-out phases, were investigated for further analysis. Results: Robust curve fitting was demonstrated for both pixel-based (with R² > 0.8 for > 85% of pixels within the kidney contour) and ROI-based (R² > 0.9 for all regions) analysis. Three different functional regions (renal pelvis, medulla, and cortex) were clearly differentiated in the functional parameter map in the pixel-based analysis. ROI-based analysis showed that the half-lives T1/2 for the contrast wash-in and wash-out phases were 0.98±0.15 and 17.04±7.16, 0.63±0.07 and 17.88±4.51, and 1.48±0.40 and 10.79±3.88 minutes for the renal pelvis, medulla, and cortex, respectively. Conclusion: A robust method based on dynamic contrast enhanced cone beam CT and double exponential curve fitting has been developed to analyze the renal functions for different functional regions. Future study will be performed to investigate the sensitivity of this technique in the detection of radiation induced kidney dysfunction.
Videodensitometric Methods for Cardiac Output Measurements
NASA Astrophysics Data System (ADS)
Mischi, Massimo; Kalker, Ton; Korsten, Erik
2003-12-01
Cardiac output is often measured by indicator dilution techniques, usually based on dye or cold saline injections. Developments of more stable ultrasound contrast agents (UCA) are leading to new noninvasive indicator dilution methods. However, several problems concerning the interpretation of dilution curves as detected by ultrasound transducers have arisen. This paper presents a method for blood flow measurements based on UCA dilution. Dilution curves are determined by real-time densitometric analysis of the video output of an ultrasound scanner and are automatically fitted by the Local Density Random Walk model. A new fitting algorithm based on multiple linear regression is developed. Calibration, that is, the relation between videodensity and UCA concentration, is modelled by in vitro experimentation. The flow measurement system is validated by in vitro perfusion of SonoVue contrast agent. The results show an accurate dilution curve fit and flow estimation with determination coefficient larger than 0.95 and 0.99, respectively.
Aleatory Uncertainty and Scale Effects in Computational Damage Models for Failure and Fragmentation
2014-09-01
larger specimens, small specimens have, on average, higher strengths. Equivalently, because curves for small specimens fall below those of larger... the material strength associated with each realization parameter R in Equation (7), and strength distribution curves associated with multiple... effects in brittle media [58], which applies micromorphological dimensional analysis to obtain a universal curve which closely fits rate-dependent...
Fong, Youyi; Yu, Xuesong
2016-01-01
Many modern serial dilution assays are based on fluorescence intensity (FI) readouts. We study optimal transformation model choice for fitting five parameter logistic curves (5PL) to FI-based serial dilution assay data. We first develop a generalized least squares-pseudolikelihood type algorithm for fitting heteroscedastic logistic models. Next we show that the 5PL and log 5PL functions can approximate each other well. We then compare four 5PL models with different choices of log transformation and variance modeling through a Monte Carlo study and real data. Our findings are that the optimal choice depends on the intended use of the fitted curves. PMID:27642502
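A minimal sketch of a five-parameter logistic (5PL) fit, f(x) = d + (a − d)/(1 + (x/c)^b)^g, fitted here on the log₁₀ FI scale as one of the transformation choices the paper compares; the concentrations, FI values, and starting guesses are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300.0])       # dilution series
fi = np.array([32, 45, 110, 420, 1800, 5200, 8900, 9800.0])  # FI readouts

def fpl(x, a, d, c, b, g):
    """5PL: a = lower asymptote, d = upper, c = mid, b = slope, g = asymmetry."""
    return d + (a - d) / (1 + (x / c) ** b) ** g

def log_fpl(x, *p):
    return np.log10(fpl(x, *p))     # fit on the log scale (one modeling choice)

p0 = [30, 10000, 20, 1.5, 1.0]
p, _ = curve_fit(log_fpl, conc, np.log10(fi), p0=p0, maxfev=20000)
print("5PL parameters (a, d, c, b, g):", np.round(p, 3))
```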
The utility of laboratory animal data in toxicology depends upon the ability to generalize the results quantitatively to humans. To compare the acute behavioral effects of inhaled toluene in humans to those in animals, dose-effect curves were fitted by meta-analysis of published...
Fitting Richards' curve to data of diverse origins
Johnson, D.H.; Sargeant, A.B.; Allen, S.H.
1975-01-01
Published techniques for fitting data to nonlinear growth curves are briefly reviewed; most techniques require knowledge of the shape of the curve. A flexible growth curve developed by Richards (1959) is discussed as an alternative when the shape is unknown. The shape of this curve is governed by a specific parameter which can be estimated from the data. We describe in detail the fitting of a diverse set of longitudinal and cross-sectional data to Richards' growth curve for the purpose of determining the age of red fox (Vulpes vulpes) pups on the basis of right hind foot length. The fitted curve is found suitable for pups less than approximately 80 days old. The curve is extrapolated to pre-natal growth and shown to be appropriate only for about 10 days prior to birth.
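A sketch of fitting one common form of Richards' curve, y(t) = A/(1 + ν·e^(−k(t−t₀)))^(1/ν), in which the shape parameter ν is estimated from the data along with the rest; the foot-length measurements are invented, not the fox-pup data.

```python
import numpy as np
from scipy.optimize import curve_fit

age = np.array([5, 10, 20, 30, 40, 55, 70, 85.0])        # days
foot = np.array([35, 48, 75, 98, 115, 130, 138, 141.0])  # hind foot length, mm

def richards(t, A, k, t0, nu):
    # nu = 1 recovers the logistic; other values change the inflection point
    return A / (1 + nu * np.exp(-k * (t - t0))) ** (1 / nu)

p, _ = curve_fit(richards, age, foot, p0=[145, 0.08, 20, 1.0],
                 bounds=([50, 0.01, 0, 0.1], [300, 1.0, 60, 5.0]))
A, k, t0, nu = p
print(f"asymptote {A:.1f} mm, rate {k:.3f}/day, shape nu = {nu:.2f}")
```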
Durtschi, Jacob D; Stevenson, Jeffery; Hymas, Weston; Voelkerding, Karl V
2007-02-01
Real-time PCR data analysis for quantification has been the subject of many studies aimed at the identification of new and improved quantification methods. Several analysis methods have been proposed as superior alternatives to the common variations of the threshold crossing method. Notably, sigmoidal and exponential curve fit methods have been proposed. However, these studies have primarily analyzed real-time PCR with intercalating dyes such as SYBR Green. Clinical real-time PCR assays, in contrast, often employ fluorescent probes whose real-time amplification fluorescence curves differ from those of intercalating dyes. In the current study, we compared four analysis methods related to recent literature: two versions of the threshold crossing method, a second derivative maximum method, and a sigmoidal curve fit method. These methods were applied to a clinically relevant real-time human herpes virus type 6 (HHV6) PCR assay that used a minor groove binding (MGB) Eclipse hybridization probe as well as an Epstein-Barr virus (EBV) PCR assay that used an MGB Pleiades hybridization probe. We found that the crossing threshold method yielded more precise results when analyzing the HHV6 assay, which was characterized by lower signal/noise and less developed amplification curve plateaus. In contrast, the EBV assay, characterized by greater signal/noise and amplification curves with plateau regions similar to those observed with intercalating dyes, gave results with statistically similar precision by all four analysis methods.
Coral-Ghanem, Cleusa; Alves, Milton Ruiz
2008-01-01
To evaluate the clinical performance of Monocurve and Bicurve (Soper-McGuire design) rigid gas-permeable contact lens fitting in patients with keratoconus. A prospective, randomized comparative clinical trial was conducted with a minimum follow-up of six months in two groups of 63 patients. One group was fitted with Monocurve contact lenses and the other with the Bicurve Soper-McGuire design. Study variables included the fluorescein pattern of the lens-to-cornea fitting relationship, location and morphology of the cone, presence and degree of punctate keratitis and other corneal surface alterations, topographic changes, distance visual acuity corrected with contact lenses, and survival analysis for remaining with the same contact lens design during the study. During the follow-up there was a decrease in the number of eyes with advanced and central cones fitted with Monocurve lenses, and an increase in those fitted with the Soper-McGuire design. In the Monocurve group, a flattening of both the steepest and the flattest keratometric curve was observed. In the Soper-McGuire group, a steepening of the flattest keratometric curve and a flattening of the steepest keratometric curve were observed. There was a decrease in best-corrected visual acuity with contact lens in the Monocurve group. Survival with the Monocurve lens was 60.32% and with the Soper-McGuire design 71.43% at a mean follow-up of six months. This study showed that, due to the changes observed in corneal topography, the same contact lens design did not provide an ideal fitting for all patients during the follow-up period. The Soper-McGuire lenses performed better than the Monocurve lenses in advanced and central keratoconus.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkin, Thomas J; Larson, Andrew; Ruth, Mark F
In light of the changing electricity resource mixes across the United States, an important question in electricity modeling is how additions and retirements of generation, including additions in variable renewable energy (VRE) generation, could impact markets by changing hourly wholesale energy prices. Instead of using resource-intensive production cost models (PCMs) or building and using simple generator supply curves, this analysis uses a 'top-down' approach based on regression analysis of hourly historical energy and load data to estimate the impact of supply changes on wholesale electricity prices, provided the changes are not so substantial that they fundamentally alter the market and dispatch-order driven behavior of non-retiring units. The rolling supply curve (RSC) method used in this report estimates the shape of the supply curve that fits historical hourly price and load data for given time intervals, such as two weeks, and then repeats this on a rolling basis through the year. These supply curves can then be modified on an hourly basis to reflect the impact of generation retirements or additions, including VRE, and then reapplied to the same load data to estimate the change in hourly electricity price. The choice of duration over which these RSCs are estimated has a significant impact on goodness of fit. For example, in PJM in 2015, moving from fitting one curve per year to 26 rolling two-week supply curves improves the standard error of the regression from $16/MWh to $6/MWh and the R-squared of the estimate from 0.48 to 0.76. We illustrate the potential use and value of the RSC method by estimating wholesale price effects under various generator retirement and addition scenarios, and we discuss potential limits of the technique, some of which are inherent. The ability to do this type of analysis is important to a wide range of market participants and other stakeholders, and it may have a role in complementing use of or providing calibrating insights to PCMs.
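A toy sketch of the top-down idea: within one two-week window, regress hourly price on a polynomial in load to get a supply-curve estimate, then shift the load axis to mimic added VRE (which lowers net load) and reprice the same hours. All data, the functional form, and the 5 GW shift are invented assumptions, not the report's specification.

```python
import numpy as np

rng = np.random.default_rng(11)
hours = 24 * 14                                          # one two-week window
load = 60 + 25 * np.sin(np.linspace(0, 14 * 2 * np.pi, hours)) \
     + rng.normal(0, 3, hours)                           # GW
price = np.exp(0.045 * load) + rng.normal(0, 2, hours)   # $/MWh

# Cubic-in-load regression as the window's supply-curve estimate.
X = np.column_stack([np.ones(hours), load, load**2, load**3])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)

vre_add = 5.0   # GW of new VRE treated as a net-load shift (assumption)
shifted = load - vre_add
new_price = np.column_stack(
    [np.ones(hours), shifted, shifted**2, shifted**3]) @ beta
print(f"mean price change: {np.mean(new_price - price):+.2f} $/MWh")
```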
NASA Astrophysics Data System (ADS)
Morlot, Thomas; Perret, Christian; Favre, Anne-Catherine; Jalbert, Jonathan
2014-09-01
A rating curve is used to indirectly estimate the discharge in rivers based on water level measurements. The discharge values obtained from a rating curve include uncertainties related to the direct stage-discharge measurements (gaugings) used to build the curves, the quality of fit of the curve to these measurements, and the constant changes in the river bed morphology. Moreover, the uncertainty of discharges estimated from a rating curve increases with the "age" of the rating curve. The level of uncertainty at a given point in time is therefore particularly difficult to assess. A "dynamic" method has been developed to compute rating curves while calculating associated uncertainties, thus making it possible to regenerate streamflow data with uncertainty estimates. The method is based on historical gaugings at hydrometric stations. A rating curve is computed for each gauging and a model of the uncertainty is fitted for each of them. The model of uncertainty takes into account the uncertainties in the measurement of the water level, the quality of fit of the curve, the uncertainty of gaugings, and the increase of the uncertainty of discharge estimates with the age of the rating curve computed with a variographic analysis (Jalbert et al., 2011). The presented dynamic method can answer important questions in the field of hydrometry such as "How many gaugings a year are required to produce streamflow data with an average uncertainty of X%?" and "When and in what range of water flow rates should these gaugings be carried out?". The Rocherousse hydrometric station (France, Haute-Durance watershed, 946 km²) is used as an example throughout the paper. Other stations are used to illustrate certain points.
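The fit underlying such a station is typically a power-law rating curve, Q = a(h − h₀)^b; a minimal sketch of fitting one gauging set follows, with invented stage-discharge pairs (in the dynamic method this fit is recomputed at every new gauging and paired with an uncertainty model).

```python
import numpy as np
from scipy.optimize import curve_fit

stage = np.array([0.42, 0.55, 0.70, 0.95, 1.30, 1.80, 2.40])  # h, m
flow = np.array([1.1, 2.0, 3.6, 7.2, 14.5, 30.0, 55.0])       # Q, m^3/s

def rating(h, a, h0, b):
    return a * (h - h0) ** b

# Bound h0 below the lowest gauged stage so (h - h0) stays positive.
p, cov = curve_fit(rating, stage, flow, p0=[10.0, 0.2, 1.7],
                   bounds=([0, -1, 0.5], [1000, 0.4, 4]))
a, h0, b = p
print(f"Q = {a:.2f} (h - {h0:.2f})^{b:.2f}")
```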
Investigation of the Failure Modes in a Metal Matrix Composite under Thermal Cycling
1989-12-01
[Report front-matter fragment] Contents: Material Characteristics; Sectioning and SEM Photographs; Residual Stress Analysis using METCAN. Figures: Specimen Fitted with Strain Gages; Modulus and Poisson's Ratio versus Thermal Cycles; Stress/Strain Curve for Uncycled Specimen; Stress/Strain Curve for Specimen 8 (5250 Cycles); Comparison of Uncycled to Cycled Stress/Strain Curves.
Tromberg, B.J.; Tsay, T.T.; Berns, M.W.; Svaasand, L.O.; Haskell, R.C.
1995-06-13
Optical measurement of turbid media, that is, media characterized by multiple light scattering, is provided through an apparatus and method for exposing a sample to a modulated laser beam. The light beam is modulated at a fundamental frequency and at a plurality of integer harmonics thereof. Modulated light is returned from the sample and preferentially detected at cross frequencies slightly higher than the fundamental frequency and at integer harmonics of the same. The received radiance at the beat or cross frequencies is compared against a reference signal to provide a measure of the phase lag of the radiance and modulation ratio relative to a reference beam. The phase and modulation amplitude are then provided as a frequency spectrum by an array processor to which a computer applies a complete curve fit in the case of highly scattering samples or a linear curve fit below a predetermined frequency in the case of highly absorptive samples. The curve fit in any case is determined by the absorption and scattering coefficients together with a concentration of the active substance in the sample. Therefore, the curve fitting to the frequency spectrum can be used both for qualitative and quantitative analysis of substances in the sample even though the sample is highly turbid. 14 figs.
Bayesian Analysis of Longitudinal Data Using Growth Curve Models
ERIC Educational Resources Information Center
Zhang, Zhiyong; Hamagami, Fumiaki; Wang, Lijuan Lijuan; Nesselroade, John R.; Grimm, Kevin J.
2007-01-01
Bayesian methods for analyzing longitudinal data in social and behavioral research are recommended for their ability to incorporate prior information in estimating simple and complex models. We first summarize the basics of Bayesian methods before presenting an empirical example in which we fit a latent basis growth curve model to achievement data…
NASA Astrophysics Data System (ADS)
Gentile, G.; Famaey, B.; de Blok, W. J. G.
2011-03-01
We present an analysis of 12 high-resolution galactic rotation curves from The HI Nearby Galaxy Survey (THINGS) in the context of modified Newtonian dynamics (MOND). These rotation curves were selected to be the most reliable for mass modelling, and they are the highest quality rotation curves currently available for a sample of galaxies spanning a wide range of luminosities. We fit the rotation curves with the "simple" and "standard" interpolating functions of MOND, and we find that the "simple" function yields better results. We also redetermine the value of a0, and find a median value very close to the one determined in previous studies, a0 = (1.22 ± 0.33) × 10^-8 cm s^-2. Leaving the distance as a free parameter within the uncertainty of its best independently determined value leads to excellent quality fits for 75% of the sample. Among the three exceptions, two are also known to give relatively poor fits in Newtonian dynamics plus dark matter. The remaining case (NGC 3198) presents some tension between the observations and the MOND fit, which might, however, be explained by the presence of non-circular motions, by a small distance, or by a value of a0 at the lower end of our best-fit interval, 0.9 × 10^-8 cm s^-2. The best-fit stellar M/L ratios are generally in remarkable agreement with the predictions of stellar population synthesis models. We also show that the narrow range of gravitational accelerations found to be generated by dark matter in galaxies is consistent with the narrow range of additional gravity predicted by MOND.
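To make the fitting procedure concrete, here is a minimal sketch that inverts the "simple" interpolating function μ(x) = x/(1+x) to predict a MOND rotation curve from baryonic components, with the stellar M/L ratio as the only free parameter; the galaxy data are invented and a0 is held at the median value quoted above.

```python
import numpy as np
from scipy.optimize import curve_fit

A0 = 1.22e-10  # m s^-2, the median a0 quoted above (1.22e-8 cm s^-2)

def v_mond(r_kpc, ml, v_disk, v_gas):
    """Rotation speed (km/s) from the 'simple' interpolating function
    mu(x) = x/(1+x); solving mu(g/a0)*g = g_N gives a closed form for g."""
    r = r_kpc * 3.086e19                       # kpc -> m
    v_b2 = (ml * v_disk**2 + v_gas**2) * 1e6   # baryonic v^2 in (m/s)^2
    g_n = v_b2 / r                             # Newtonian acceleration
    g = 0.5 * (g_n + np.sqrt(g_n**2 + 4.0 * g_n * A0))
    return np.sqrt(g * r) / 1e3                # back to km/s

# Invented rotation-curve data (radius in kpc, speeds in km/s).
r = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
v_obs = np.array([60.0, 85.0, 105.0, 112.0, 115.0, 116.0])
v_disk = np.array([40.0, 55.0, 60.0, 58.0, 55.0, 52.0])
v_gas = np.array([10.0, 20.0, 30.0, 35.0, 38.0, 40.0])

fit = lambda rr, ml: v_mond(rr, ml, v_disk, v_gas)
(ml_best,), _ = curve_fit(fit, r, v_obs, p0=[1.0], bounds=(0.1, 10.0))
print("best-fit stellar M/L: %.2f" % ml_best)
```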
[Comparison among various software for LMS growth curve fitting methods].
Han, Lin; Wu, Wenhong; Wei, Qiuxia
2015-03-01
To explore methods for fitting the skewness-median-coefficient of variation (LMS) growth curve with different software, and to identify the best statistical tool for grass-roots child and adolescent health workers. Regular physical examination data of head circumference for normal infants aged 3, 6, 9 and 12 months in Baotou City were analyzed. The statistical packages SAS, R, STATA and SPSS were used to fit the LMS growth curve, and the results were evaluated in terms of user convenience, ease of learning, user interface, display of results, and software updating and maintenance. All packages produced the same fitted values, and each had its own advantages and disadvantages. With all the evaluation aspects taken into consideration, R excelled the others in LMS growth curve fitting and is therefore recommended for grass-roots child and adolescent health workers.
On the convexity of ROC curves estimated from radiological test results.
Pesce, Lorenzo L; Metz, Charles E; Berbaum, Kevin S
2010-08-01
Although an ideal observer's receiver operating characteristic (ROC) curve must be convex, i.e., its slope must decrease monotonically, published fits to empirical data often display "hooks." Such fits are sometimes accepted on the basis of an argument that experiments are done with real, rather than ideal, observers. However, the fact that ideal observers must produce convex curves does not imply that convex curves describe only ideal observers. This article aims to identify the practical implications of nonconvex ROC curves and the conditions that can lead to empirical or fitted ROC curves that are not convex. The article views nonconvex ROC curves from historical, theoretical, and statistical perspectives, which we describe briefly. We then consider population ROC curves with various shapes and analyze the types of medical decisions that they imply. Finally, we describe how sampling variability and curve-fitting algorithms can produce ROC curve estimates that include hooks. We show that hooks in population ROC curves imply the use of an irrational decision strategy, even when the curve does not cross the chance line, and therefore are usually untenable in medical settings. Moreover, we sketch a simple approach to improve any nonconvex ROC curve by adding statistical variation to the decision process. Finally, we sketch how to test whether hooks present in ROC data are likely to have been caused by chance alone, and how some hooked ROCs found in the literature can be easily explained as fitting artifacts or modeling issues. In general, ROC curve fits that show hooks should be looked on with suspicion unless other arguments justify their presence.
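The monotone-slope criterion is straightforward to check numerically. The sketch below assumes a conventional binormal ROC model, TPR = Φ(a + b·Φ⁻¹(FPR)), and flags a "hook" wherever the slope fails to decrease; the parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def binormal_roc_slope(a, b, fpr):
    """Slope dTPR/dFPR of the binormal ROC, TPR = Phi(a + b*Phi^-1(FPR))."""
    z = norm.ppf(fpr)
    return b * norm.pdf(a + b * z) / norm.pdf(z)

def has_hook(a, b, n=2001):
    """A proper (convex) ROC curve's slope must decrease monotonically;
    any increase along the curve marks a 'hook'."""
    fpr = np.linspace(1e-4, 1 - 1e-4, n)
    slope = binormal_roc_slope(a, b, fpr)
    return bool(np.any(np.diff(slope) > 0))

print(has_hook(1.0, 1.0))  # b = 1: convex, no hook -> False
print(has_hook(1.0, 0.7))  # b != 1: the binormal fit develops a hook -> True
```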
Curve fitting methods for solar radiation data modeling
NASA Astrophysics Data System (ADS)
Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder
2014-10-01
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error is measured with goodness-of-fit statistics such as the root mean square error (RMSE) and R². The best fitting method is used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
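A minimal sketch of this kind of comparison, on an invented daily irradiance profile rather than the UTP dataset: two-term Gaussian and two-term sine models are fitted with scipy and ranked by RMSE and R².

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented hourly global solar radiation profile (W/m^2), not UTP data.
t = np.arange(7, 20)  # hour of day
g = np.array([50, 180, 380, 560, 700, 790, 820, 780, 680, 520, 330, 150, 40.0])

def gauss2(x, a1, b1, c1, a2, b2, c2):
    return a1*np.exp(-((x-b1)/c1)**2) + a2*np.exp(-((x-b2)/c2)**2)

def sine2(x, a1, b1, c1, a2, b2, c2):
    return a1*np.sin(b1*x + c1) + a2*np.sin(b2*x + c2)

def scores(y, yhat):
    rss = np.sum((y - yhat)**2)
    return np.sqrt(rss / len(y)), 1 - rss / np.sum((y - y.mean())**2)

for f, p0 in [(gauss2, [800, 13, 3, 100, 10, 5]),
              (sine2, [800, 0.25, -2.0, 100, 0.5, 0.0])]:
    popt, _ = curve_fit(f, t, g, p0=p0, maxfev=20000)
    print(f.__name__, "RMSE=%.1f R2=%.3f" % scores(g, f(t, *popt)))
```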
NASA Technical Reports Server (NTRS)
Wilson, R. M.; Reichmann, E. J.; Teuber, D. L.
1984-01-01
An empirical method is developed to predict certain parameters of future solar activity cycles. Sunspot cycle statistics are examined, and curve fitting and linear regression analysis techniques are utilized.
Characterizing the UV-to-NIR shape of the dust attenuation curve of IR luminous galaxies up to z ˜ 2
NASA Astrophysics Data System (ADS)
Lo Faro, B.; Buat, V.; Roehlly, Y.; Alvarez-Marquez, J.; Burgarella, D.; Silva, L.; Efstathiou, A.
2017-12-01
In this work, we investigate the far-ultraviolet (UV) to near-infrared (NIR) shape of the dust attenuation curve of a sample of IR-selected dust obscured (ultra)luminous IR galaxies at z ∼ 2. The spectral energy distributions (SEDs) are fitted with Code Investigating GALaxy Emission, a physically motivated spectral-synthesis model based on energy balance. Its flexibility allows us to test a wide range of different analytical prescriptions for the dust attenuation curve, including the well-known Calzetti and Charlot & Fall curves, and modified versions of them. The attenuation curves computed under the assumption of our reference double power-law model are in very good agreement with those derived, in previous works, with radiative transfer (RT) SED fitting. We investigate the position of our galaxies in the IRX-β diagram and find this to be consistent with greyer slopes, on average, in the UV. We also find evidence for a flattening of the attenuation curve in the NIR with respect to more classical Calzetti-like recipes. This larger NIR attenuation yields larger derived stellar masses from SED fitting, by a median factor of ∼1.4 and up to a factor ∼10 for the most extreme cases. The star formation rate appears instead to be more dependent on the total amount of attenuation in the galaxy. Our analysis highlights the need for a flexible attenuation curve when reproducing the physical properties of a large variety of objects.
Comparative Evaluation of Two Serial Gene Expression Experiments
Stuart G. Baker, 2014. This program fits biologically relevant response curves in a comparative analysis of two gene expression experiments involving the same genes under different scenarios, with at least 12 responses. The program outputs gene pairs with biologically relevant response curve shapes, including flat, linear, sigmoid, hockey stick, impulse and step.
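One plausible way to implement such shape selection is to fit each candidate curve and rank the fits with an information criterion; the sketch below does this for three of the named shapes on synthetic data. It is not Baker's program, and the remaining shapes (hockey stick, impulse, step) are omitted for brevity.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate response-curve shapes (a subset of those named above).
shapes = {
    "flat":    (lambda x, a: np.full_like(x, a, dtype=float), [0.5]),
    "linear":  (lambda x, a, b: a + b * x, [0.0, 1.0]),
    "sigmoid": (lambda x, a, b, c: a / (1.0 + np.exp(-b * (x - c))), [1.0, 2.0, 0.5]),
}

def aic(y, yhat, k):
    """Gaussian-likelihood AIC: n*log(RSS/n) + 2k."""
    n = len(y)
    return n * np.log(np.sum((y - yhat) ** 2) / n) + 2 * k

# Synthetic 12-point response that is truly sigmoid.
x = np.linspace(0, 1, 12)
y = 1.0 / (1.0 + np.exp(-8 * (x - 0.5))) + np.random.default_rng(0).normal(0, 0.03, 12)

for name, (f, p0) in shapes.items():
    popt, _ = curve_fit(f, x, y, p0=p0, maxfev=10000)
    print(name, "AIC = %.1f" % aic(y, f(x, *popt), len(popt)))
```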
Evaluation of the swelling behaviour of iota-carrageenan in monolithic matrix tablets.
Kelemen, András; Buchholcz, Gyula; Sovány, Tamás; Pintye-Hódi, Klára
2015-08-10
The swelling properties of monolithic matrix tablets containing iota-carrageenan were studied at different pH values, with measurement of the swelling force and characterization of the profile of the swelling curve. The swelling force meter was linked to a PC by an RS232 cable, and the measured data were evaluated with self-developed software. The monitor displayed the swelling force vs. time curve with the important parameters, which could be fitted via an Analysis menu. For the iota-carrageenan matrix tablets, it was concluded that the pH and the pressure did not influence the swelling process, and that the first section of the swelling curve could be fitted by the Korsmeyer-Peppas equation.
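The Korsmeyer-Peppas equation is a power law, F(t) = k·t^n, which is linear in log-log coordinates, so the first section of a swelling curve can be fitted with a one-line regression. A sketch on invented swelling data:

```python
import numpy as np

# Korsmeyer-Peppas power law F(t) = k * t^n fitted log-linearly to the
# first section of a swelling curve; the data points are invented.
t = np.array([5.0, 10, 20, 30, 45, 60])              # time, min
f = np.array([0.08, 0.13, 0.22, 0.29, 0.38, 0.46])   # swelling fraction

n, log_k = np.polyfit(np.log(t), np.log(f), 1)
k = np.exp(log_k)
print("k = %.4f, n = %.3f" % (k, n))  # exponent n hints at the transport mechanism
```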
Systemic Console: Advanced analysis of exoplanetary data
NASA Astrophysics Data System (ADS)
Meschiari, Stefano; Wolf, Aaron S.; Rivera, Eugenio; Laughlin, Gregory; Vogt, Steve; Butler, Paul
2012-10-01
Systemic Console is a tool for advanced analysis of exoplanetary data. It comprises a graphical tool for fitting radial velocity and transit datasets and a library of routines for non-interactive calculations. Among its features are interactive plotting of RV curves and transits, combined fitting of RV and transit timing (primary and secondary), interactive periodograms and FAP estimation, and bootstrap and MCMC error estimation. The console package includes public radial velocity and transit data.
Gambarota, Giulio; Hitti, Eric; Leporq, Benjamin; Saint-Jalmes, Hervé; Beuf, Olivier
2017-01-01
Tissue perfusion measurements using intravoxel incoherent motion (IVIM) diffusion-MRI are of interest for investigations of liver pathologies. A confounding factor in the perfusion quantification is the partial volume between liver tissue and large blood vessels. The aim of this study was to assess and correct for this partial volume effect in the estimation of the perfusion fraction. MRI experiments were performed at 3 Tesla with a diffusion-MRI sequence at 12 b-values. Diffusion signal decays in liver were analyzed using the non-negative least squares (NNLS) method and the biexponential fitting approach. In some voxels, the NNLS analysis yielded a very fast-decaying component that was assigned to partial volume with the blood flowing in large vessels. Partial volume correction was performed by biexponential curve fitting in which the first data point (b = 0 s/mm²) was eliminated in voxels with a very fast-decaying component. Biexponential fitting with partial volume correction yielded parametric maps with smaller perfusion fraction values than biexponential fitting without partial volume correction. The results of the current study indicate that NNLS analysis in combination with biexponential curve fitting makes it possible to correct for partial volume effects originating from blood flow in IVIM perfusion fraction measurements. Magn Reson Med 77:310-317, 2017.
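A minimal sketch of the correction, with invented decay parameters: the biexponential IVIM model is fitted twice, once with all 12 b-values and once with the b = 0 point dropped, as would be done in voxels where NNLS flags a very fast-decaying component.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, s0, f, d_star, d):
    """Biexponential IVIM decay: S(b) = S0*(f*exp(-b*D*) + (1-f)*exp(-b*D))."""
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

# Invented 12-b-value liver decay (b in s/mm^2); a very fast-decaying
# vascular partial-volume signal is added at b = 0 only, as a simplification.
b = np.array([0, 10, 25, 50, 75, 100, 150, 250, 400, 600, 800, 1000.0])
sig = ivim(b, 1.0, 0.25, 0.05, 0.0011)
sig[0] += 0.15  # blood flowing in a large vessel contributes mainly at b = 0

p0 = [1.0, 0.2, 0.03, 0.001]
bounds = ([0, 0, 0.003, 1e-4], [2, 1, 1, 0.003])
f_all = curve_fit(ivim, b, sig, p0=p0, bounds=bounds)[0][1]
# Correction described above: drop the b = 0 point in flagged voxels.
f_corr = curve_fit(ivim, b[1:], sig[1:], p0=p0, bounds=bounds)[0][1]
print("perfusion fraction: %.3f (all points) vs %.3f (corrected)" % (f_all, f_corr))
```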
Statistically generated weighted curve fit of residual functions for modal analysis of structures
NASA Technical Reports Server (NTRS)
Bookout, P. S.
1995-01-01
A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to the present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second order polynomial curve fit is required to obtain the residual flexibility term. A weighting function of the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
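A minimal sketch of a variance-weighted quadratic fit in this spirit, on synthetic residual-function data; the neighbor window and weighting details are assumptions, not the paper's exact scheme.

```python
import numpy as np

# Variance-weighted second-order polynomial fit of a residual function.
rng = np.random.default_rng(1)
freq = np.linspace(10, 100, 40)  # Hz
resid = 2e-6 + 1e-10 * freq**2 + rng.normal(0, 2e-7, 40) * (1 + freq / 50)

# Estimate each point's scatter from its neighbors; weight = 1/sigma.
half = 2
sigma = np.array([resid[max(0, i - half):i + half + 1].std(ddof=1)
                  for i in range(len(resid))])
coeffs = np.polyfit(freq, resid, 2, w=1.0 / sigma)
# The constant term serves here as an illustrative stand-in for the
# residual flexibility value read from the fitted curve.
print("fitted constant term: %.3e" % coeffs[2])
```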
Vaas, Lea A I; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter
2012-01-01
The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed '-omics' techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. Our results form the basis for a freely available R package for the analysis of PM data.
Comparison between two scalar field models using rotation curves of spiral galaxies
NASA Astrophysics Data System (ADS)
Fernández-Hernández, Lizbeth M.; Rodríguez-Meza, Mario A.; Matos, Tonatiuh
2018-04-01
Scalar fields have been used as candidates for dark matter in the universe, from axions with masses ∼10^-5 eV down to ultra-light scalar fields with masses ∼10^-22 eV. Axions behave as cold dark matter, while in the ultra-light scalar field model galaxies are Bose-Einstein condensate drops. The ultra-light scalar field is also called the scalar field dark matter model. In this work we study rotation curves of low surface brightness spiral galaxies using two scalar field models: the Gross-Pitaevskii Bose-Einstein condensate in the Thomas-Fermi approximation, and a scalar field solution of the Klein-Gordon equation. We also use the zero disk approximation galaxy model, where photometric data are not considered and only the scalar field dark matter contribution to the rotation curve is taken into account. From the best-fitting analysis of the galaxy catalog we use, we found the range of values of the fitting parameters: the length scale and the central density. The worst fitting results (values of χ²_red much greater than 1, on average) were for the Thomas-Fermi model, i.e., the scalar field dark matter model fits the rotation curves of the analysed galaxies better than the Thomas-Fermi approximation. To complete our analysis we compute from the fitting parameters the mass of the scalar field models and two astrophysical quantities of interest: the dynamical dark matter mass within 300 pc and the characteristic central surface density of the dark matter models. We found that the central mass within 300 pc is in agreement with previously reported results, ≈10^7 M⊙, independent of the dark matter model. On the contrary, the characteristic central surface density does depend on the dark matter model.
Protofit: A program for determining surface protonation constants from titration data
NASA Astrophysics Data System (ADS)
Turner, Benjamin F.; Fein, Jeremy B.
2006-11-01
Determining the surface protonation behavior of natural adsorbents is essential to understand how they interact with their environments. ProtoFit is a tool for analysis of acid-base titration data and optimization of surface protonation models. The program offers a number of useful features including: (1) enables visualization of adsorbent buffering behavior; (2) uses an optimization approach independent of starting titration conditions or initial surface charge; (3) does not require an initial surface charge to be defined or to be treated as an optimizable parameter; (4) includes an error analysis intrinsically as part of the computational methods; and (5) generates simulated titration curves for comparison with observation. ProtoFit will typically be run through ProtoFit-GUI, a graphical user interface providing user-friendly control of model optimization, simulation, and data visualization. ProtoFit calculates an adsorbent proton buffering value as a function of pH from raw titration data (including pH and volume of acid or base added). The data is reduced to a form where the protons required to change the pH of the solution are subtracted out, leaving protons exchanged between solution and surface per unit mass of adsorbent as a function of pH. The buffering intensity function Qads* is calculated as the instantaneous slope of this reduced titration curve. Parameters for a surface complexation model are obtained by minimizing the sum of squares between the modeled (i.e. simulated) buffering intensity curve and the experimental data. The variance in the slope estimate, intrinsically produced as part of the Qads* calculation, can be used to weight the sum of squares calculation between the measured buffering intensity and a simulated curve. Effects of analytical error on data visualization and model optimization are discussed. Examples are provided of using ProtoFit for data visualization, model optimization, and model evaluation.
A microcomputer program for analysis of nucleic acid hybridization data
Green, S.; Field, J.K.; Green, C.D.; Beynon, R.J.
1982-01-01
The study of nucleic acid hybridization is facilitated by computer-mediated fitting of theoretical models to experimental data. This paper describes a non-linear curve fitting program, using the `Patternsearch' algorithm, written in BASIC for the Apple II microcomputer. The advantages and disadvantages of using a microcomputer for local data processing are discussed. PMID:7071017
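The 'Patternsearch' algorithm is a direct-search method in the Hooke-Jeeves family, which needs no derivatives and so suits small machines. A compact restatement of the idea (not a port of the BASIC original), applied to an invented saturation-type model:

```python
import numpy as np

def pattern_search(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10000):
    """Minimal Hooke-Jeeves-style pattern search: poll each coordinate
    direction, accept improving moves, shrink the step when none improve."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
        it += 1
    return x, fx

# Invented hybridization-style data fitted to y = a*(1 - exp(-k*t)).
t = np.array([0.5, 1, 2, 4, 8, 16.0])
y = np.array([0.18, 0.33, 0.54, 0.76, 0.92, 0.97])
sse = lambda p: np.sum((y - p[0] * (1 - np.exp(-p[1] * t))) ** 2)
print(pattern_search(sse, [1.0, 0.1]))
```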
Study of the influence of Type Ia supernovae environment on the Hubble diagram
NASA Astrophysics Data System (ADS)
Henne, Vincent
2016-06-01
Observational cosmology with distant Type Ia supernovae as standard candles claims that the Universe is in accelerated expansion, caused by a large fraction of dark energy. In this report we investigated the SNe Ia environment, studying the impact of the nature of their host galaxies and of their distance to the host galactic center on the Hubble diagram fitting. The supernovae used in the analysis were extracted from the Joint Light-curve Analysis compilation of high-redshift and nearby supernovae. The analysis is based on the empirical fact that SN Ia luminosities depend on their light curve shapes and colors. No conclusive correlation between SN Ia light curve parameters and galactocentric distance was identified. Concerning the host morphology, we showed that the stretch parameter of Type Ia supernovae is correlated with the host galaxy type: supernovae with lower stretch mainly exploded in elliptical and lenticular galaxies. The studies show that in old stellar populations and low-dust environments, supernovae are fainter. We did not find any significant correlation between Type Ia supernova color and host morphology. We confirm that supernova properties depend on their environment and propose to incorporate a host galaxy term into the Hubble diagram fit in future cosmological analyses.
NASA Astrophysics Data System (ADS)
Repetto, P.; Martínez-García, E. E.; Rosado, M.; Gabbasov, R.
2018-06-01
In this paper, we derive a novel circular velocity relation for a test particle in a 3D gravitational potential, applicable to every system of curvilinear coordinates suitable to be reduced to orthogonal form. As an illustration of the potential of the derived circular velocity expression, we perform the rotation curve analysis of UGC 8490 and UGC 9753 and estimate the total and dark matter mass of these two galaxies under the assumption that their respective dark matter haloes have spherical, prolate, or oblate spheroidal mass distributions. We employ stellar population synthesis models and the total H I density map to obtain the stellar and H I+He+metals rotation curves of both galaxies. The subtraction of the stellar plus gas rotation curves from the observed rotation curves of UGC 8490 and UGC 9753 generates the dark matter circular velocity curves of both galaxies. We fit the dark matter rotation curves of UGC 8490 and UGC 9753 with the newly established circular velocity formula specialized to spherical, prolate, and oblate spheroidal mass distributions, considering the Navarro-Frenk-White, Burkert, Di Cintio, Einasto, and Stadel dark matter haloes. Our principal findings are the following: globally, the cored dark matter profiles (Burkert and Einasto) prevail over the cuspy ones (Navarro-Frenk-White and Di Cintio). Also, spherical/oblate dark matter models fit the dark matter rotation curves of both galaxies better than prolate dark matter haloes.
Cai, Jing; Li, Shan; Zhang, Haixin; Zhang, Shuoxin; Tyree, Melvin T
2014-01-01
Vulnerability curves (VCs) generally can be fitted to the Weibull equation; however, a growing number of VCs appear to be recalcitrant, that is, deviate from a Weibull but seem to fit dual Weibull curves. We hypothesize that dual Weibull curves in Hippophae rhamnoides L. are due to different vessel diameter classes, inter-vessel hydraulic connections or vessels versus fibre tracheids. We used dye staining techniques, hydraulic measurements and quantitative anatomy measurements to test these hypotheses. The fibres contribute 1.3% of the total stem conductivity, which eliminates the hypothesis that fibre tracheids account for the second Weibull curve. Nevertheless, the staining pattern of vessels and fibre tracheids suggested that fibres might function as a hydraulic bridge between adjacent vessels. We also argue that fibre bridges are safer than vessel-to-vessel pits and put forward the concept as a new paradigm. Hence, we tentatively propose that the first Weibull curve may be accounted for by vessels connected to each other directly by pit fields, while the second Weibull curve is associated with vessels that are connected almost exclusively by fibre bridges. Further research is needed to test the concept of fibre bridge safety in species that have recalcitrant or normal Weibull curves.
The curvature of sensitometric curves for Kodak XV-2 film irradiated with photon and electron beams.
van Battum, L J; Huizenga, H
2006-07-01
Sensitometric curves of Kodak XV-2 film, obtained over a period of ten years with various types of equipment, have been analyzed for both photon and electron beams. The sensitometric slope in the dataset varies by more than a factor of 2, which is attributed mainly to variations in developer conditions. In the literature, the single-hit equation has been proposed as a model for the sensitometric curve, with the sensitivity and maximum optical density as parameters. In this work, the single-hit equation has been translated into a polynomial-like function with the sensitometric slope and curvature as parameters. The model has been applied to fit the sensitometric data. If each sensitometric curve is fitted separately, a large variation is observed in both fit parameters. When sensitometric curves are fitted simultaneously, it appears that all curves can be fitted adequately with a sensitometric curvature that is related to the sensitometric slope; when fitting each curve separately, measurement uncertainty apparently hides this relation. The relation appears to depend only on the type of densitometer used. No significant differences between beam energies or beam modalities are observed. Using the intrinsic relation between slope and curvature in fitting sensitometric data, e.g., for pretreatment verification of intensity-modulated radiotherapy, will increase the accuracy of the sensitometric curve. A calibration at a single dose point, together with a predetermined densitometer-dependent parameter ODmax, is adequate to find the actual relation between optical density and dose.
ERIC Educational Resources Information Center
Lazar, Ann A.; Zerbe, Gary O.
2011-01-01
Researchers often compare the relationship between an outcome and covariate for two or more groups by evaluating whether the fitted regression curves differ significantly. When they do, researchers need to determine the "significance region," or the values of the covariate where the curves significantly differ. In analysis of covariance (ANCOVA),…
Kholeif, S A
2001-06-01
A new method belonging to the differential category for determining end points from potentiometric titration curves is presented. It uses a preprocessing step to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares validation and multifactor data analysis is presented. The new method is generally applicable to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves are also compared with those of the equivalence point category of methods, such as Gran or Fortuin.
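The inverse parabolic interpolation step has a closed form: the extremum of the parabola through three points. A sketch locating an end point from invented first-derivative values (the preliminary non-linear smoothing of the four-point inflection region is omitted):

```python
import numpy as np

def parabola_extremum(x0, x1, x2, f0, f1, f2):
    """Analytical extremum of the parabola through three points
    (the standard inverse parabolic interpolation step)."""
    num = (x1 - x0)**2 * (f1 - f2) - (x1 - x2)**2 * (f1 - f0)
    den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
    return x1 - 0.5 * num / den

# Locate a titration end point as the maximum of the first derivative
# dpH/dV. Invented derivative values near the inflection, not real data.
v = np.array([9.0, 9.5, 10.0, 10.5, 11.0])   # titrant volume, mL
dpH = np.array([0.8, 2.1, 3.9, 2.4, 0.9])    # first-derivative estimates
i = int(np.argmax(dpH))
v_end = parabola_extremum(v[i-1], v[i], v[i+1], dpH[i-1], dpH[i], dpH[i+1])
print("end point at %.3f mL" % v_end)
```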
Calvi, Andrea; Ferrari, Alberto; Sbuelz, Luca; Goldoni, Andrea; Modesti, Silvio
2016-05-19
Multi-walled carbon nanotubes (CNTs) have been grown in situ on a SiO₂ substrate and used as gas sensors. For this purpose, the voltage response of the CNTs as a function of time was used to detect H₂ and CO₂ at various concentrations by supplying a constant current to the system. The analysis of both the adsorption and desorption curves revealed two different exponential behaviours for each curve. The study of the characteristic times obtained from fitting the data allowed us to separately identify chemisorption and physisorption processes on the CNTs.
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. The improvement is in four areas: speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. The method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
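The core idea can be sketched independently of the reported algorithm: on uniformly spaced data, successive differences of y = Ae^(Bt) + C cancel C, and the ratio of consecutive differences isolates e^(BΔt); A and C then follow from a simple linear fit. A noiseless illustration:

```python
import numpy as np

# Non-iterative fit of y = A*exp(B*t) + C on a uniform grid, in the spirit
# of the difference-based approach described above (not the reported code).
t = np.linspace(0, 2, 21)
dt = t[1] - t[0]
y = 3.0 * np.exp(-1.7 * t) + 0.5             # noiseless test data

d = np.diff(y)                                # d_i = A*e^{B t_i}*(e^{B dt} - 1)
ratio = d[1:] / d[:-1]                        # each ratio equals e^{B dt}
B = np.log(np.mean(ratio)) / dt

# With B fixed, y = A*exp(B*t) + C is linear in (A, C).
basis = np.column_stack([np.exp(B * t), np.ones_like(t)])
A, C = np.linalg.lstsq(basis, y, rcond=None)[0]
print("A=%.3f B=%.3f C=%.3f" % (A, B, C))     # recovers 3, -1.7, 0.5
```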
NASA Astrophysics Data System (ADS)
Ramasahayam, Veda Krishna Vyas; Diwakar, Anant; Bodi, Kowsik
2017-11-01
To study the flow of high-temperature air in vibrational and chemical equilibrium, accurate models for the thermodynamic state and transport phenomena are required. In the present work, the performance of a state equation model and of two mixing rules for determining equilibrium air thermodynamic and transport properties is compared with that of curve fits. The thermodynamic state model considers 11 species and computes the flow chemistry by an iterative process; the mixing rules considered for viscosity are those of Wilke and Armaly-Sutton. The curve fits of Srinivasan, which are based on Grabau-type transition functions, are chosen for comparison. A two-dimensional Navier-Stokes solver is developed to simulate high-enthalpy flows, with numerical fluxes computed by AUSM+-up. The accuracy of the state equation model and curve fits for thermodynamic properties is determined using hypersonic inviscid flow over a circular cylinder. The performance of the mixing rules and curve fits for viscosity is compared using hypersonic laminar boundary layer prediction on a flat plate. It is observed that steady-state solutions from the state equation model and curve fits match each other. Though the curve fits are significantly faster, the state equation model is more general and can be adapted to any flow composition.
Testing Modified Newtonian Dynamics with Low Surface Brightness Galaxies: Rotation Curve FITS
NASA Astrophysics Data System (ADS)
de Blok, W. J. G.; McGaugh, S. S.
1998-11-01
We present modified Newtonian dynamics (MOND) fits to 15 rotation curves of low surface brightness (LSB) galaxies. Good fits are readily found, although for a few galaxies minor adjustments to the inclination are needed. Reasonable values for the stellar mass-to-light ratios are found, as well as an approximately constant value for the total (gas and stars) mass-to-light ratio. We show that the LSB galaxies investigated here lie on the one, unique Tully-Fisher relation, as predicted by MOND. The scatter on the Tully-Fisher relation can be completely explained by the observed scatter in the total mass-to-light ratio. We address the question of whether MOND can fit any arbitrary rotation curve by constructing a plausible fake model galaxy. While MOND is unable to fit this hypothetical galaxy, a normal dark-halo fit is readily found, showing that dark matter fits are much less selective in producing fits. The good fits to rotation curves of LSB galaxies support MOND, especially because these are galaxies with large mass discrepancies deep in the MOND regime.
Some applications of categorical data analysis to epidemiological studies.
Grizzle, J E; Koch, G G
1979-01-01
Several examples of categorized data from epidemiological studies are analyzed to illustrate that more informative analyses than tests of independence can be performed by fitting models. All of the analyses fit into a unified conceptual framework and can be performed by weighted least squares. The methods presented show how to calculate point estimates of parameters, asymptotic variances, and asymptotically valid chi-squared tests. The examples presented are analysis of relative risks estimated from several 2 x 2 tables, analysis of selected features of life tables, construction of synthetic life tables from cross-sectional studies, and analysis of dose-response curves. PMID:540590
NASA Astrophysics Data System (ADS)
Lee, Soojin; Cho, Woon Jo; Kim, Yang Do; Kim, Eun Kyu; Park, Jae Gwan
2005-07-01
White-light-emitting Si nanoparticles were prepared from the sodium silicide (NaSi) precursor. The photoluminescence of colloidal Si nanoparticles has been fitted by effective mass approximation (EMA). We analyzed the correlation between experimental photoluminescence and simulated fitting curves. Both the mean diameter and the size dispersion of the white-light-emitting Si nanoparticles were estimated.
Evaluating Model Fit for Growth Curve Models: Integration of Fit Indices from SEM and MLM Frameworks
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.; Taylor, Aaron B.
2009-01-01
Evaluating overall model fit for growth curve models involves 3 challenging issues. (a) Three types of longitudinal data with different implications for model fit may be distinguished: balanced on time with complete data, balanced on time with data missing at random, and unbalanced on time. (b) Traditional work on fit from the structural equation…
Quantifying and Reducing Curve-Fitting Uncertainty in Isc
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-06-14
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-05-01
A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. It considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters, and the fitting procedure weights both the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''Chi-Squared Matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 s.
AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves
NASA Astrophysics Data System (ADS)
Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.
2017-02-01
ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
Aigner, Z; Berkesi, O; Farkas, G; Szabó-Révész, P
2012-01-05
The steps of formation of an inclusion complex produced by the co-grinding of gemfibrozil and dimethyl-β-cyclodextrin were investigated by differential scanning calorimetry (DSC), X-ray powder diffractometry (XRPD) and Fourier transform infrared (FTIR) spectroscopy with curve-fitting analysis. The endothermic peak at 59.25 °C reflecting the melting of gemfibrozil progressively disappeared from the DSC curves of the products as the duration of co-grinding increased. The crystallinity of the samples also decreased gradually, and after 35 min of co-grinding the product was totally amorphous. Up to this co-grinding time, XRPD and FTIR investigations indicated a linear correlation between cyclodextrin complexation and co-grinding time; after co-grinding for 30 min, the ratio of complex formation did not increase. These studies demonstrated that co-grinding is a suitable method for the complexation of gemfibrozil with dimethyl-β-cyclodextrin. XRPD analysis revealed the amorphous state of the gemfibrozil-dimethyl-β-cyclodextrin product. FTIR spectroscopy with curve-fitting analysis may be useful as a semiquantitative analytical method for discriminating between the molecular and amorphous states of gemfibrozil.
ERIC Educational Resources Information Center
Alexander, John W., Jr.; Rosenberg, Nancy S.
This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…
Component Analysis of Remanent Magnetization Curves: A Revisit with a New Model Distribution
NASA Astrophysics Data System (ADS)
Zhao, X.; Suganuma, Y.; Fujii, M.
2017-12-01
Geological samples often consist of several magnetic components that have distinct origins. As the magnetic components are often indicative of underlying geological and environmental processes, it is desirable to identify individual components to extract the associated information. This component analysis can be achieved using the so-called unmixing method, which fits a mixture of end-member model distributions to the measured remanent magnetization curve. In previous studies, the lognormal, skew generalized Gaussian and skewed Gaussian distributions have been used as end-member distributions, with the analysis performed on the gradient of the remanent magnetization curve. However, gradient curves are sensitive to measurement noise, as differentiation of the measured curve amplifies noise, which can deteriorate the component analysis. Though smoothing or filtering can be applied to reduce the noise before differentiation, their biasing effect on the component analysis has not been clearly addressed. In this study, we investigated a new model function that can be applied directly to the remanent magnetization curve and therefore avoids the differentiation. The new model function provides a more flexible shape than the lognormal distribution, which is a merit for modeling the coercivity distribution of complex magnetic components. We applied the unmixing method to both model and measured data, and compared the results with those obtained using other model distributions to better understand their interchangeability, applicability and limitations. The analyses of model data suggest that unmixing methods are inherently sensitive to noise, especially when the number of components exceeds two. It is therefore recommended to verify the reliability of a component analysis by running multiple analyses with synthetic noise. Marine sediments and seafloor rocks are analyzed with the new model distribution. For the same number of components, the new model distribution provides closer fits than the lognormal distribution, as evidenced by reduced residuals. Moreover, the new unmixing protocol is automated, so that users are freed from the labor of providing initial parameter guesses, which also helps reduce the subjectivity of the component analysis.
Parameter setting for peak fitting method in XPS analysis of nitrogen in sewage sludge
NASA Astrophysics Data System (ADS)
Tang, Z. J.; Fang, P.; Huang, J. H.; Zhong, P. Y.
2017-12-01
Thermal decomposition is regarded as an important route for treating the increasing volume of sewage sludge, but the high nitrogen content of sludge causes serious nitrogen-related problems, so determining the existing forms and content of nitrogen in sewage sludge becomes essential. In this study, XPSpeak 4.1 was used to investigate the functional forms of nitrogen in sewage sludge; a peak fitting method was adopted and the best-optimized parameters were determined. According to the results, the N 1s spectrum can be resolved into 5 peaks: pyridine-N (398.7±0.4 eV), pyrrole-N (400.5±0.3 eV), protein-N (400.4 eV), ammonium-N (401.1±0.3 eV) and nitrogen oxide-N (403.5±0.5 eV). Based on the experimental data obtained from elemental analysis and spectrophotometry, the optimum parameters of the curve fitting method were determined: background type Tougaard, FWHM 1.2, 50% Lorentzian-Gaussian. XPS can thus be used as a practical tool to analyze the nitrogen functional groups of sewage sludge and to reflect the real content of nitrogen in its different forms.
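A minimal sketch of such a constrained fit: fixed peak positions, fixed FWHM of 1.2 eV, and a 50/50 Lorentzian-Gaussian (pseudo-Voigt) line shape, with only the peak amplitudes free. The spectrum is synthetic, the background is assumed to be already subtracted (Tougaard in the paper), and the heavily overlapping pyrrole-N and protein-N components are merged into one peak for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

FWHM = 1.2   # eV, fixed as selected above
ETA = 0.5    # 50% Lorentzian / 50% Gaussian mix

def pseudo_voigt(x, center, amp):
    """Fixed-width 50/50 Lorentzian-Gaussian line, a common XPS peak shape."""
    sigma = FWHM / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gamma = FWHM / 2.0
    g = np.exp(-0.5 * ((x - center) / sigma) ** 2)
    l = 1.0 / (1.0 + ((x - center) / gamma) ** 2)
    return amp * (ETA * l + (1.0 - ETA) * g)

# Binding energies from the text; pyrrole-N and protein-N are merged.
CENTERS = [398.7, 400.45, 401.1, 403.5]

def n1s_model(x, *amps):
    # Background assumed already subtracted (Tougaard in the paper).
    return sum(pseudo_voigt(x, c, a) for c, a in zip(CENTERS, amps))

e = np.linspace(396, 406, 200)  # synthetic, background-subtracted N 1s region
rng = np.random.default_rng(2)
spec = n1s_model(e, 0.8, 1.5, 0.6, 0.3) + rng.normal(0, 0.02, e.size)

amps, _ = curve_fit(n1s_model, e, spec, p0=[1.0, 1.0, 1.0, 1.0])
print("fitted peak amplitudes:", np.round(amps, 2))
```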
Applications of Computer Graphics in Engineering
NASA Technical Reports Server (NTRS)
1975-01-01
Various applications of interactive computer graphics to the following areas of science and engineering were described: design and analysis of structures, configuration geometry, animation, flutter analysis, design and manufacturing, aircraft design and integration, wind tunnel data analysis, architecture and construction, flight simulation, hydrodynamics, curve and surface fitting, gas turbine engine design, analysis, and manufacturing, packaging of printed circuit boards, spacecraft design.
NASA Astrophysics Data System (ADS)
Zahir, N.; Ali, A.
2015-12-01
Lake Urmiah has undergone a drastic shrinkage in size over the past few decades. The initial intention of this paper is to present an approach for determining the so-called "salient times" during which the trend of the shrinkage process accelerated or decelerated. To find these salient times, a quasi-continuous curve was optimally fitted to the Topex altimetry data within the period 1998 to 2006, and the inflection points of the fitted curve were computed using a second-derivative approach. The water volume was also computed using 16 cloud-free Landsat images of the lake within the period 1998 to 2006. In the first stage of the water volume calculation, the pixels of the lake were segmented using the Automated Water Extraction Index (AWEI), and the shorelines were extracted by a boundary-detecting operator applied to the generated binary image of the lake surface. The water volume fluctuation rate was then computed under the assumption that two successive lake surfaces and their corresponding water level difference approximately form a truncated pyramid. The analysis of the water level fluctuation rates was further extended by a sinusoidal curve fitted to the Topex altimetry data, intended to model the seasonal fluctuations of the water level. In the final stage of this article, the correlations between the fluctuation rates and the precipitation and temperature variations were also determined numerically.
Least-Squares Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Kantak, Anil V.
1990-01-01
Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes polynomial providing least-squares best fit to uniformly spaced data. Enables user to specify tolerable least-squares error in fit or degree of polynomial. AKLSQF returns polynomial and actual least-squares-fit error incurred in operation. Data supplied to routine either by direct keyboard entry or via file. Written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler.
Dynamic Analysis of Recalescence Process and Interface Growth of Eutectic Fe82B17Si1 Alloy
NASA Astrophysics Data System (ADS)
Fan, Y.; Liu, A. M.; Chen, Z.; Li, P. Z.; Zhang, C. H.
2018-03-01
By employing the glass fluxing technique in combination with cyclical superheating, the microstructural evolution of the undercooled Fe82B17Si1 alloy over the obtained undercooling range was studied. With increasing undercooling, the cooling curves were observed to change from one recalescence to two recalescences, and then back to one recalescence. The two types of cooling curves were fitted by the break equation and the Johnson-Mehl-Avrami-Kolmogorov model. Based on the cooling curves at different undercoolings, the recalescence rate was calculated with the multi-logistic growth model and the Boettinger-Coriel-Trivedi model. Both the recalescence features and the interface growth kinetics of the eutectic Fe82B17Si1 alloy were explored. The fitting results were consistent with the microstructural evolution observed by TEM (SAED), SEM and XRD. Finally, the relationship between microstructure and hardness was also investigated.
NASA Astrophysics Data System (ADS)
Meng, Xiao; Wang, Lai; Hao, Zhibiao; Luo, Yi; Sun, Changzheng; Han, Yanjun; Xiong, Bing; Wang, Jian; Li, Hongtao
2016-01-01
Efficiency droop is currently one of the most intensively studied problems for GaN-based light-emitting diodes (LEDs). In this work, a differential carrier lifetime measurement system is optimized to accurately determine the carrier lifetime (τ) of blue and green LEDs under different injection currents (I). By fitting the τ-I curves and the efficiency droop curves of the LEDs according to the ABC carrier rate equation model, the impacts of Auger recombination and carrier leakage on efficiency droop can be characterized simultaneously. For the samples used in this work, it is found that the experimental τ-I curves cannot be described by Auger recombination alone. Instead, satisfactory fitting results are obtained by taking both carrier leakage and carrier delocalization into account, which implies that carrier leakage plays a more significant role in efficiency droop at high injection levels.
ROC analysis of diagnostic performance in liver scintigraphy.
Fritz, S L; Preston, D F; Gallagher, J H
1981-02-01
Studies on the accuracy of liver scintigraphy for the detection of metastases were assembled from 38 sources in the medical literature. An ROC curve was fitted to the observed values of sensitivity and specificity using an algorithm developed by Ogilvie and Creelman. This ROC curve fitted the data better than average sensitivity and specificity values in each of four subsets of the data. For the subset dealing with Tc-99m sulfur colloid scintigraphy, performed for detection of suspected metastases and containing data on 2800 scans from 17 independent series, it was not possible to reject the hypothesis that interobserver variation was entirely due to the use of different decision thresholds by the reporting clinicians. Thus the ROC curve obtained is a reasonable baseline estimate of the performance potentially achievable in today's clinical setting. Comparison of new reports with these data is possible, but is limited by the small sample sizes in most reported series.
Goodford, P J; St-Louis, J; Wootton, R
1978-01-01
1. Oxygen dissociation curves have been measured for human haemoglobin solutions with different concentrations of the allosteric effectors 2,3-diphosphoglycerate, adenosine triphosphate and inositol hexaphosphate. 2. Each effector produces a concentration dependent right shift of the oxygen dissociation curve, but a point is reached where the shift is maximal and increasing the effector concentration has no further effect. 3. Mathematical models based on the Monod, Wyman & Changeux (1965) treatment of allosteric proteins have been fitted to the data. For each compound the simple two-state model and its extension to take account of subunit inequivalence were shown to be inadequate, and a better fit was obtained by allowing the effector to lower the oxygen affinity of the deoxy conformational state as well as binding preferentially to this conformation. PMID:722582
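The two-state model referred to in point 3 has a compact closed form for a tetramer, and the right shift produced by an effector that raises the allosteric constant L can be seen directly. A sketch with invented constants, not the paper's fitted values:

```python
import numpy as np

def mwc_saturation(po2, K_R, c, L):
    """Fractional O2 saturation of a tetramer in the two-state MWC model.
    alpha = pO2/K_R; c = K_R/K_T; L = [T]/[R] is the allosteric constant,
    which effectors such as 2,3-DPG increase by binding the T state."""
    a = po2 / K_R
    num = L * c * a * (1 + c * a) ** 3 + a * (1 + a) ** 3
    den = L * (1 + c * a) ** 4 + (1 + a) ** 4
    return num / den

po2 = np.linspace(1, 100, 100)  # mmHg
for L in (1e4, 1e6):            # larger L: right-shifted curve, lower affinity
    y = mwc_saturation(po2, K_R=2.0, c=0.01, L=L)
    p50 = po2[np.argmin(np.abs(y - 0.5))]
    print("L=%.0e -> P50 ~ %.0f mmHg" % (L, p50))
```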
Bartel, Thomas W.; Yaniv, Simone L.
1997-01-01
The 60 min creep data from National Type Evaluation Procedure (NTEP) tests performed at the National Institute of Standards and Technology (NIST) on 65 load cells have been analyzed in order to compare their creep and creep recovery responses, and to compare the 60 min creep with creep over shorter time periods. To facilitate this comparison the data were fitted to a multiple-term exponential equation, which adequately describes the creep and creep recovery responses of load cells. The use of such a curve fit reduces the effect of random error in the indicator readings on the calculated values of load cell creep. Examination of the fitted curves shows that the creep recovery responses, after inversion by a change in sign, are generally similar in shape to the creep response, but smaller in magnitude. The average ratio of the absolute value of the maximum creep recovery to the maximum creep is 0.86; however, no reliable correlation between creep and creep recovery can be drawn from the data. The fitted curves were also used to compare the 60 min creep of the NTEP analysis with the 30 min creep and other parameters calculated according to the Organisation Internationale de Métrologie Légale (OIML) R 60 analysis. The average ratio of the 30 min creep value to the 60 min value is 0.84. The OIML class C creep tolerance is less than 0.5 of the NTEP tolerance for classes III and III L. PMID:27805151
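A sketch of the kind of multiple-term exponential fit described, on synthetic indicator readings (the actual NIST equation and number of terms are not specified here):

    import numpy as np
    from scipy.optimize import curve_fit

    def creep(t, c0, a1, tau1, a2, tau2):
        # two-term exponential rise: indicator reading versus time under constant load
        return c0 + a1*(1 - np.exp(-t/tau1)) + a2*(1 - np.exp(-t/tau2))

    t = np.linspace(0, 60, 61)                                   # minutes
    y = creep(t, 0.0, 0.8, 3.0, 0.4, 25.0) \
        + np.random.default_rng(1).normal(0, 0.01, t.size)       # synthetic noisy readings

    p, _ = curve_fit(creep, t, y, p0=(0, 1, 1, 1, 10))
    print(f"30 min / 60 min creep ratio: {creep(30, *p)/creep(60, *p):.2f}")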
[Experimental study and correction of the absorption and enhancement effect between Ti, V and Fe].
Tuo, Xian-Guo; Mu, Ke-Liang; Li, Zhe; Wang, Hong-Hui; Luo, Hui; Yang, Jian-Bo
2009-11-01
The absorption and enhancement effects in X-ray fluorescence analysis of Ti, V and Fe were studied. Three simulated binary systems of Ti-V, Ti-Fe and V-Fe samples were prepared and measured by X-ray fluorescence analysis using an HPGe semiconductor detector, and curves relating the normalized count-rate coefficient R(K) of each element to its content W(K) were obtained. Analysis of the degree of absorption and enhancement between each pair of elements showed that the effect between Ti and V is relatively pronounced, while it is weaker in the Ti-Fe and V-Fe systems. An exponential-fitting correction method was then used to fit the R(K)-W(K) curves, yielding a functional relation between X-ray fluorescence count rate and content. Three groups of Ti-V binary samples were used to test the fitting method, and the relative errors for Ti and V were less than 0.2% compared to the actual values.
Analysis of Learning Curve Fitting Techniques.
1987-09-01
Fragmented full-text search snippets; recoverable content: the report applies ordinary least-squares regression to improvement (learning) curve estimation, noting that random errors are assumed to be normally distributed, and cites Neter et al., Applied Linear Regression Models (Homewood, IL: Irwin) and the SAS User's Guide: Basics, Version 5 Edition.
NASA Astrophysics Data System (ADS)
Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke
2008-08-01
A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into major-interference region and minor-interference region. Different approximating functions are then constructed for two kinds of regions respectively. For the major interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by curve-fitting method. For the minor interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.
Toward a Micro-Scale Acoustic Direction-Finding Sensor with Integrated Electronic Readout
2013-06-01
Front-matter fragments: figure captions include "measurements with curve fits" and "Failure testing" (Figure 2.10); table captions include "Sensor parameters" (Table 2.1) and "Curve fit parameters" (Table 2.2). A surviving text fragment notes that the quantity of interest is the elastic stiffness and that, in a typical nanoindentation test, the loading curve is nonlinear due to combined plastic ...
Transonic Compressor: Program System TXCO for Data Acquisition and On-Line Reduction.
1980-10-01
Fragment of the program's Fortran listing and flow chart; recoverable content: SECON is the real intercept of a linear curve fit (as returned by subroutine CURVE), and a flow chart for subroutine CALIB follows.
Non-linear Multidimensional Optimization for use in Wire Scanner Fitting
NASA Astrophysics Data System (ADS)
Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; Center Advanced Studies of Accelerators Collaboration
2014-03-01
To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle swarm. In this Python-implemented approach, augmenting the locally convergent NCG with one of the globally convergent methods ensures the quality, robustness, and automation of curve fitting. After comparing the methods, we establish that given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement of the fit to the data. NCG is the fastest method, so it is the first to attempt data fitting. The curve-fitting procedure escalates to one of the globally convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method yields an optimal signal fit and can be easily applied to similar problems.
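A condensed sketch of the escalation strategy on invented profile data; SciPy's Levenberg-Marquardt (curve_fit) stands in for NCG as the fast local method, with differential evolution as the globally convergent fallback:

    import numpy as np
    from scipy.optimize import curve_fit, differential_evolution

    def gauss(x, amp, mu, sigma, base):
        return base + amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    def chi2(p, x, y):
        return np.sum((y - gauss(x, *p)) ** 2)

    x = np.linspace(-5, 5, 200)
    y = gauss(x, 2.0, 0.4, 0.8, 0.1) + np.random.default_rng(0).normal(0, 0.05, x.size)

    p0 = (y.max() - y.min(), x[np.argmax(y)], 1.0, y.min())   # data-derived initial guess
    try:
        p, _ = curve_fit(gauss, x, y, p0=p0)                  # fast local fit first
    except RuntimeError:                                      # escalate only on failure
        bounds = [(0, 10), (-5, 5), (0.01, 5), (-1, 1)]
        p = differential_evolution(chi2, bounds, args=(x, y)).x
    print("amp=%.2f mu=%.2f sigma=%.2f base=%.2f" % tuple(p))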
Derbidge, Renatus; Feiten, Linus; Conradt, Oliver; Heusser, Peter; Baumgartner, Stephan
2013-01-01
Photographs of mistletoe (Viscum album L.) berries taken by a permanently fixed camera during their development in autumn were subjected to an outline shape analysis by fitting path curves using a mathematical algorithm from projective geometry. During growth and maturation the shape of mistletoe berries can be described by a set of such path curves, making it possible to extract changes of shape using one parameter, called Lambda, which describes the outline shape of a path curve. Here we present methods and software to capture and measure these changes of form over time. The present paper describes the software used to automate a number of tasks including contour recognition, optimization of the contour fit via hill climbing, derivation of the path curves, computation of Lambda, and blinding the pictures for the operator. The validity of the program is demonstrated by results from three independent measurements showing circadian rhythm in mistletoe berries. The program is available as open source and will be applied in a project to analyze the chronobiology of shape in mistletoe berries and the buds of their host trees. PMID:23565255
Sensitivity of Fit Indices to Misspecification in Growth Curve Models
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.
2010-01-01
This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…
Kepler Uniform Modeling of KOIs: MCMC Notes for Data Release 25
NASA Technical Reports Server (NTRS)
Hoffman, Kelsey L.; Rowe, Jason F.
2017-01-01
This document describes data products related to the reported planetary parameters and uncertainties for the Kepler Objects of Interest (KOIs) based on a Markov-Chain-Monte-Carlo (MCMC) analysis. Reported parameters, uncertainties and data products can be found at the NASA Exoplanet Archive. The codes used for this data analysis are available on the Github website (Rowe 2016). The relevant paper for details of the calculations is Rowe et al. (2015). The main differences between the model fits discussed here and those in the DR24 catalogue are that the DR25 light curves were used in the analysis, our processing of the MAST light curves took into account different data flags, the number of chains calculated was doubled to 200 000, and the parameters which are reported are based on a damped least-squares fit, instead of the median value from the Markov chain or the chain with the lowest χ² as reported in the past.
Edge detection and mathematic fitting for corneal surface with Matlab software.
Di, Yue; Li, Mei-Yan; Qiao, Tong; Lu, Na
2017-01-01
To select the optimal edge detection methods to identify the corneal surface, and to compare three curve-fitting equations with Matlab software. Fifteen subjects were recruited. The corneal images from optical coherence tomography (OCT) were imported into Matlab software. Five edge detection methods (Canny, Log, Prewitt, Roberts, Sobel) were used to identify the corneal surface. Two manual identification methods (ginput and getpts) were then applied to identify the edge coordinates, and the differences among these methods were compared. A binomial curve (y = Ax² + Bx + C), a polynomial curve [p(x) = p1x^n + p2x^(n-1) + ... + pnx + pn+1] and a conic section (Ax² + Bxy + Cy² + Dx + Ey + F = 0) were used to fit the corneal surface, and the relative merits of the three fitting curves were analyzed. Finally, the eccentricity (e) obtained by corneal topography and by the conic section were compared with a paired t-test. All five edge detection algorithms yielded continuous coordinates indicating the edge of the corneal surface. The ordinates from manual identification were close to the inside of the actual edges. The binomial curve was greatly affected by tilt angle. The polynomial curve lacked geometrical properties and was unstable. The conic section could yield the tilted symmetry axis, eccentricity, circle center, etc. There were no significant differences between the e values from corneal topography and the conic section (t = 0.9143, P = 0.3760 > 0.05). It is feasible to simulate the corneal surface with a mathematical curve in Matlab software. Edge detection has better repeatability and higher efficiency, and the manual identification approach is an indispensable complement to detection. Polynomial and conic-section fits are both viable for corneal curve fitting; the conic curve was the optimal choice based on its specific geometrical properties.
NASA Astrophysics Data System (ADS)
Pang, Liping; Goltz, Mark; Close, Murray
2003-01-01
In this note, we applied the temporal moment solutions of [Das and Kluitenberg, 1996. Soil Sci. Am. J. 60, 1724] for one-dimensional advective-dispersive solute transport with linear equilibrium sorption and first-order degradation for time pulse sources to analyse soil column experimental data. Unlike most other moment solutions, these solutions consider the interplay of degradation and sorption. This permits estimation of a first-order degradation rate constant using the zeroth moment of column breakthrough data, as well as estimation of the retardation factor or sorption distribution coefficient of a degrading solute using the first moment. The method of temporal moment (MOM) formulae was applied to analyse breakthrough data from a laboratory column study of atrazine, hexazinone and rhodamine WT transport in volcanic pumice sand, as well as experimental data from the literature. Transport and degradation parameters obtained using the MOM were compared to parameters obtained by fitting breakthrough data from an advective-dispersive transport model with equilibrium sorption and first-order degradation, using the nonlinear least-square curve-fitting program CXTFIT. The results derived from using the literature data were also compared with estimates reported in the literature using different equilibrium models. The good agreement suggests that the MOM could provide an additional useful means of parameter estimation for transport involving equilibrium sorption and first-order degradation. We found that the MOM fitted breakthrough curves with tailing better than curve fitting. However, the MOM analysis requires complete breakthrough curves and relatively frequent data collection to ensure the accuracy of the moments obtained from the breakthrough data.
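A sketch of the moment computations themselves, on an invented breakthrough curve; mapping the moments to the degradation rate constant and retardation factor follows the Das and Kluitenberg solutions, which are not reproduced here:

    import numpy as np

    # invented breakthrough curve: outlet concentration C(t) for a pulse input
    t = np.linspace(0, 50, 251)                  # h
    C = np.exp(-0.5 * ((t - 20.0) / 5.0) ** 2)   # arbitrary pulse-shaped response

    m0 = np.trapz(C, t)                          # zeroth moment (recovered mass area)
    m1 = np.trapz(t * C, t)                      # first moment
    print(f"m0 = {m0:.3f}, mean arrival time m1/m0 = {m1/m0:.2f} h")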
A Century of Enzyme Kinetic Analysis, 1913 to 2013
Johnson, Kenneth A.
2013-01-01
This review traces the history and logical progression of methods for quantitative analysis of enzyme kinetics from the 1913 Michaelis and Menten paper to the application of modern computational methods today. Following a brief review of methods for fitting steady state kinetic data, modern methods are highlighted for fitting full progress curve kinetics based upon numerical integration of rate equations, including a re-analysis of the original Michaelis-Menten full time course kinetic data. Finally, several illustrations of modern transient state kinetic methods of analysis are shown which enable the elucidation of reactions occurring at the active sites of enzymes in order to relate structure and function. PMID:23850893
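A minimal sketch of fitting a full progress curve by numerical integration of the Michaelis-Menten rate equation (synthetic product data; dedicated packages do this with far more generality):

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import curve_fit

    def progress(t, Vmax, Km, S0=1.0):
        # integrate dS/dt = -Vmax S / (Km + S); return product P(t) = S0 - S(t)
        sol = solve_ivp(lambda _, S: -Vmax*S/(Km + S), (0, t[-1]), [S0],
                        t_eval=t, rtol=1e-8)
        return S0 - sol.y[0]

    t = np.linspace(0, 30, 40)                                     # min
    P = progress(t, 0.12, 0.35) \
        + np.random.default_rng(2).normal(0, 0.005, t.size)        # synthetic data

    (Vmax, Km), _ = curve_fit(lambda tt, V, K: progress(tt, V, K), t, P, p0=(0.1, 0.5))
    print(f"Vmax = {Vmax:.3f}, Km = {Km:.3f}")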
Materials and Modulators for 3D Displays
2002-08-01
Figure-caption fragments survive: a dashed cos²(θ) fit to polarization data near 1243 nm (0, 180 and 360 deg correspond to parallel polarization); curves comparing dwell times (solid and dashed bold) with the static case (thin dashed); schematics of free-space optics (Figure 20); and a two-photon spectrum whose two peaks can be fit by two Lorentzian curves, indicating features of the rhodamine B molecule.
Rotation State Evolution of Retired Geosynchronous Satellites
NASA Astrophysics Data System (ADS)
Benson, C.; Scheeres, D. J.; Ryan, W. H.; Ryan, E. V.; Moskovitz, N.
Non-periodic light curve rotation state analysis is conducted for the retired geosynchronous satellite GOES 8. This satellite has been observed periodically at the Maui Research and Technology Center as well as Magdalena Ridge and Lowell Observatories since 2013. To extract tumbling periods from the light curves, two-dimensional Fourier series fits were used. Torque-free dynamics and the satellite's known mass properties were then leveraged to constrain the candidate periods. Finally, simulated light curves were generated using a representative shape model for further validation. Analysis of the light curves suggests that GOES 8 transitioned from uniform rotation in 2014 to continually evolving tumbling motion by 2016. These findings are consistent with previous dynamical simulations and support the hypothesis that the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect drives rotation state evolution of retired geosynchronous satellites.
Soil Water Characteristics of Cores from Low- and High-Centered Polygons, Barrow, Alaska, 2012
Graham, David; Moon, Ji-Won
2016-08-22
This dataset includes soil water characteristic curves for soil and permafrost in two representative frozen cores collected from a high-center polygon (HCP) and a low-center polygon (LCP) at the Barrow Environmental Observatory. Data include soil water content and soil water potential measured using the simple evaporation method for hydrological and biogeochemical simulations and experimental data analysis. Data can be used to generate a soil moisture characteristic curve, which can be fit to a variety of hydrological functions to infer critical soil-physics parameters. For the measured soil water properties, the van Genuchten model predicted the HCP core well, whereas the Kosugi model better fit the LCP core, which was closer to saturation.
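A sketch of fitting the van Genuchten retention function θ(ψ) = θr + (θs - θr)/[1 + (αψ)^n]^(1-1/n) to hypothetical evaporation-method data:

    import numpy as np
    from scipy.optimize import curve_fit

    def van_genuchten(psi, theta_r, theta_s, alpha, n):
        # retention curve theta(psi); psi = suction (positive), m = 1 - 1/n
        m = 1.0 - 1.0/n
        return theta_r + (theta_s - theta_r) / (1.0 + (alpha*psi)**n)**m

    psi   = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)  # kPa, invented
    theta = np.array([0.48, 0.46, 0.42, 0.35, 0.27, 0.21, 0.17])   # invented contents

    (tr, ts, a, n), _ = curve_fit(van_genuchten, psi, theta, p0=(0.1, 0.5, 0.05, 1.5),
                                  bounds=([0, 0.3, 1e-4, 1.01], [0.3, 0.7, 1.0, 5.0]))
    print(f"theta_r={tr:.3f} theta_s={ts:.3f} alpha={a:.4f} n={n:.2f}")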
Modal analysis using a Fourier analyzer, curve-fitting, and modal tuning
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.; Chung, Y. T.
1981-01-01
The proposed modal test program differs from single-input methods in that preliminary data may be acquired using multiple inputs, and modal tuning procedures may be employed to define closely spaced frequency modes more accurately or to make use of frequency response functions (FRFs) which are based on several input locations. In some respects the proposed modal test program resembles earlier sine-sweep and sine-dwell testing in that broadband FRFs are acquired using several input locations, and tuning is employed to refine the modal parameter estimates. The major tasks performed in the proposed modal test program are outlined. The data acquisition and FFT processing, curve fitting, and modal tuning phases are described, and examples are given to illustrate and evaluate them.
Tensile stress-strain behavior of graphite/epoxy laminates
NASA Technical Reports Server (NTRS)
Garber, D. P.
1982-01-01
The tensile stress-strain behavior of a variety of graphite/epoxy laminates was examined. Longitudinal and transverse specimens from eleven different layups were monotonically loaded in tension to failure. Ultimate strength, ultimate strain, and stress-strain curves were obtained from four replicate tests in each case. Polynomial equations were fitted by the method of least squares to the stress-strain data to determine average curves. Values of Young's modulus and Poisson's ratio, derived from the polynomial coefficients, were compared with laminate analysis results. While the polynomials appeared to fit the stress-strain data accurately in most cases, the use of polynomial coefficients to calculate elastic moduli appeared to be of questionable value in cases involving sharp changes in the slope of the stress-strain data or extensive scatter.
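A sketch of the approach on synthetic data: fit a least-squares polynomial to stress versus strain and read Young's modulus from the derivative at zero strain, which is exactly where sharp slope changes or scatter make the coefficients unreliable:

    import numpy as np

    strain = np.linspace(0, 0.01, 30)
    stress = 130e9*strain - 2.0e12*strain**2            # synthetic softening response, Pa

    coeffs = np.polyfit(strain, stress, 3)              # least-squares cubic fit
    E = np.polyder(np.poly1d(coeffs))(0.0)              # d(stress)/d(strain) at zero strain
    print(f"Young's modulus ~ {E/1e9:.1f} GPa")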
An Empirical Fitting Method for Type Ia Supernova Light Curves: A Case Study of SN 2011fe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, WeiKang; Filippenko, Alexei V., E-mail: zwk@astro.berkeley.edu
We present a new empirical fitting method for the optical light curves of Type Ia supernovae (SNe Ia). We find that a variant broken-power-law function provides a good fit, with the simple assumption that the optical emission is approximately the blackbody emission of the expanding fireball. This function is mathematically analytic and is derived directly from the photospheric velocity evolution. When deriving the function, we assume that both the blackbody temperature and photospheric velocity are constant, but the final function is able to accommodate these changes during the fitting procedure. Applying it to the case study of SN 2011fe gives a surprisingly good fit that can describe the light curves from the first-light time to a few weeks after peak brightness, as well as over a large range of fluxes (∼5 mag, and even ∼7 mag in the g band). Since SNe Ia share similar light-curve shapes, this fitting method has the potential to fit most other SNe Ia and characterize their properties in large statistical samples such as those already gathered and in the near future as new facilities become available.
NASA Technical Reports Server (NTRS)
Yim, John T.
2017-01-01
A survey of low energy xenon ion impact sputter yields was conducted to provide a more coherent baseline set of sputter yield data and accompanying fits for electric propulsion integration. Data uncertainties are discussed and different available curve fit formulas are assessed for their general suitability. A Bayesian parameter fitting approach is used with a Markov chain Monte Carlo method to provide estimates for the fitting parameters while characterizing the uncertainties for the resulting yield curves.
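The survey's fit formulas are not reproduced here; as an illustration of the Bayesian MCMC machinery, a sketch using the emcee sampler (assumed available) and an assumed power-threshold yield form Y(E) = a(E - Eth)^b, with invented data and uncertainties:

    import numpy as np
    import emcee   # assumed available; any MCMC sampler would do

    E_data = np.array([30.0, 50.0, 80.0, 120.0, 200.0])       # eV, invented
    Y_data = np.array([0.004, 0.02, 0.06, 0.12, 0.25])        # atoms/ion, invented
    sigma  = 0.2 * Y_data                                     # assumed 20% uncertainties

    def log_prob(p):
        a, b, Eth = p
        if not (0 < a < 10 and 0 < b < 3 and 0 < Eth < E_data.min()):
            return -np.inf                                    # flat priors with hard bounds
        model = a * (E_data - Eth) ** b
        return -0.5 * np.sum(((Y_data - model) / sigma) ** 2)

    ndim, nwalkers = 3, 32
    p0 = np.array([0.01, 1.0, 15.0]) * (1 + 0.01*np.random.randn(nwalkers, ndim))
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, 3000, progress=False)
    chain = sampler.get_chain(discard=1000, flat=True)
    print(np.percentile(chain, [16, 50, 84], axis=0))         # medians and 1-sigma bands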
Data reduction using cubic rational B-splines
NASA Technical Reports Server (NTRS)
Chou, Jin J.; Piegl, Les A.
1992-01-01
A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves including intersection or silhouette lines. The algorithm is based on the convex hull and the variation diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set and if it is impossible it subdivides the data set and reconsiders the subset. After accepting the subset the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a Bezier cubic segment. The algorithm uses this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy even in cases with large tolerances.
The prediction of acoustical particle motion using an efficient polynomial curve fit procedure
NASA Technical Reports Server (NTRS)
Marshall, S. E.; Bernhard, R.
1984-01-01
A procedure is examined whereby the acoustic modal parameters, natural frequencies and mode shapes, in the cavities of transportation vehicles are determined experimentally. The acoustic mode shapes are described in terms of the particle motion. The acoustic modal analysis procedure is tailored to existing minicomputer-based spectral analysis systems.
Investigating Convergence Patterns for Numerical Methods Using Data Analysis
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2013-01-01
The article investigates the patterns that arise in the convergence of numerical methods, particularly those in the errors involved in successive iterations, using data analysis and curve fitting methods. In particular, the results obtained are used to convey a deeper level of understanding of the concepts of linear, quadratic, and cubic…
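A small example of the idea: estimate a method's order of convergence p by fitting log|e_{k+1}| against log|e_k| (the slope approximates p), here for Newton's method on x² - 2 = 0:

    import numpy as np

    x, errs, root = 1.0, [], np.sqrt(2.0)
    for _ in range(4):                        # Newton iteration for x^2 - 2 = 0
        x = x - (x*x - 2.0) / (2.0*x)
        errs.append(abs(x - root))

    e = np.array(errs)
    p, _ = np.polyfit(np.log(e[:-1]), np.log(e[1:]), 1)
    print(f"estimated order of convergence p = {p:.2f}")   # close to 2 for Newton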
Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.
Cobbs, Gary
2012-08-16
Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a selected part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are irreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single- and double-stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except possibly for different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate-constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fits to observed qPCR data than other kinetic models in the literature. They also give better estimates of initial target concentration. Model 1 was found to be slightly more robust than Model 2, giving better estimates of initial target concentration when parameters were estimated for qPCR curves with very different initial target concentrations. Both models may be used to estimate the initial absolute concentration of target sequence when a standard curve is not available. It is argued that the kinetic approach to modeling and interpreting quantitative PCR data has the potential to give more precise estimates of the true initial target concentrations than other methods currently used for analysis of qPCR data. The two models presented here give a unified model of the qPCR process in that they explain the shape of the qPCR curve for a wide variety of initial target concentrations.
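The kinetic equilibrium models themselves are too long for a snippet, but the shared-rate-constant fitting strategy can be sketched: all curves share their shape parameters while each keeps its own initial-concentration parameter (a generic sigmoid stands in for the mechanistic model; data invented):

    import numpy as np
    from scipy.optimize import least_squares

    cycles = np.arange(1, 41, dtype=float)

    def model(c, x0, k, fmax):
        # generic qPCR-like sigmoid; x0 shifts with the initial target concentration
        return fmax / (1.0 + np.exp(-k * (c - x0)))

    # two invented curves from dilutions of one sample: shared k, fmax; unique x0 each
    y1 = model(cycles, 18.0, 0.6, 1.0) + 0.01*np.random.default_rng(3).normal(size=40)
    y2 = model(cycles, 22.0, 0.6, 1.0) + 0.01*np.random.default_rng(4).normal(size=40)

    def residuals(p):
        k, fmax, x01, x02 = p
        return np.concatenate([y1 - model(cycles, x01, k, fmax),
                               y2 - model(cycles, x02, k, fmax)])

    fit = least_squares(residuals, x0=[0.5, 1.0, 15.0, 25.0])
    print(fit.x)   # shared k and fmax, then the per-curve x0 values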
Calculating Galactic Distances Through Supernova Light Curve Analysis (Abstract)
NASA Astrophysics Data System (ADS)
Glanzer, J.
2018-06-01
(Abstract only) The purpose of this project is to experimentally determine the distance to the galaxy M101 by using data that were taken on the type Ia supernova SN 2011fe at the Paul P. Feder Observatory. Type Ia supernovae are useful for determining distances in astronomy because they all have roughly the same luminosity at the peak of their outburst. Comparing the apparent magnitude to the absolute magnitude allows a measurement of the distance. The absolute magnitude is estimated in two ways: using an empirical relationship from the literature between the rate of decline and the absolute magnitude, and using sncosmo, a PYTHON package used for supernova light curve analysis that fits model light curves to the photometric data.
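The distance step itself is the inversion of the distance modulus, d = 10^((m - M + 5)/5) pc. With illustrative SN Ia numbers (peak apparent magnitude m ≈ 10.0 and assumed absolute magnitude M ≈ -19.3):

    m, M = 10.0, -19.3                  # illustrative peak apparent/absolute magnitudes
    d_pc = 10 ** ((m - M + 5) / 5)      # distance modulus inverted
    print(f"d ~ {d_pc/1e6:.1f} Mpc")    # ~7 Mpc, the right ballpark for M101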
NASA Astrophysics Data System (ADS)
Jiang, Fulin; Tang, Jie; Fu, Dinfa; Huang, Jianping; Zhang, Hui
2018-04-01
Multistage stress-strain curve correction based on an instantaneous friction factor was studied for axisymmetric uniaxial hot compression of 7150 aluminum alloy. Experimental friction factors were calculated from continuous isothermal axisymmetric uniaxial compression tests at various deformation parameters. An instantaneous friction factor equation was then fitted by mathematical analysis. After verification by comparing single-pass flow stress correction with traditional average-friction-factor correction, the instantaneous friction factor equation was applied to correct multistage stress-strain curves. The corrected results were reasonable and were validated by multistage relative-softening calculations. This research offers broad potential for implementing axisymmetric uniaxial compression in multistage physical simulations and for friction optimization in finite element analysis.
2017-11-01
Fragments of a report on light sent from light-emitting diodes (LEDs) of 5 colors (green, red, white, amber, and blue); Experiment 1 involved controlled laboratory measurements. Appendix figures and tables (Figs. A-4 and A-5, Tables A-4 and A-5) give red and green LED calibration curves with quadratic curve fits and R² values.
Methods for the Precise Locating and Forming of Arrays of Curved Features into a Workpiece
Gill, David Dennis; Keeler, Gordon A.; Serkland, Darwin K.; Mukherjee, Sayan D.
2008-10-14
Methods for manufacturing high precision arrays of curved features (e.g. lenses) in the surface of a workpiece are described utilizing orthogonal sets of inter-fitting locating grooves to mate a workpiece to a workpiece holder mounted to the spindle face of a rotating machine tool. The matching inter-fitting groove sets in the workpiece and the chuck allow the workpiece to be precisely and non-kinematically indexed to locations defined in two orthogonal directions perpendicular to the turning axis of the machine tool. At each location on the workpiece a curved feature can then be machined on-center to create arrays of curved features on the workpiece. The averaging effect of the corresponding sets of inter-fitting grooves provides precise repeatability in determining the relative locations of the centers of each of the curved features in an array.
Non-linear Multidimensional Optimization for use in Wire Scanner Fitting
NASA Astrophysics Data System (ADS)
Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; CASA and Accelerator Ops Collaboration
2013-10-01
To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle swarm. In this Python-implemented approach, augmenting the locally convergent NCG with one of the globally convergent methods ensures the quality, robustness, and automation of curve fitting. After comparing the methods, we establish that given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement of the fit to the data. NCG is the fastest method, so it is the first to attempt data fitting. The curve-fitting procedure escalates to one of the globally convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method yields an optimal signal fit and can be easily applied to similar problems. Financial support from DoE, NSF, ODU, DoD, and Jefferson Lab.
Lo, Po-Han; Tsou, Mei-Yung; Chang, Kuang-Yi
2015-09-01
Patient-controlled epidural analgesia (PCEA) is commonly used for pain relief after total knee arthroplasty (TKA). This study aimed to model the trajectory of analgesic demand over time after TKA and explore its influential factors using latent curve analysis. Data were retrospectively collected from 916 patients receiving unilateral or bilateral TKA and postoperative PCEA. PCEA demands during 12-hour intervals for 48 hours were directly retrieved from infusion pumps. Potentially influential factors of PCEA demand, including age, height, weight, body mass index, sex, and infusion pump settings, were also collected. A latent curve analysis with 2 latent variables, the intercept (baseline) and slope (trend), was applied to model the changes in PCEA demand over time. The effects of influential factors on these 2 latent variables were estimated to examine how these factors interacted with time to alter the trajectory of PCEA demand over time. On average, the difference in analgesic demand between the first and second 12-hour intervals was only 15% of that between the first and third 12-hour intervals. No significant difference in PCEA demand was noted between the third and fourth 12-hour intervals. Aging tended to decrease the baseline PCEA demand but body mass index and infusion rate were positively correlated with the baseline. Only sex significantly affected the trend parameter and male individuals tended to have a smoother decreasing trend of analgesic demands over time. Patients receiving bilateral procedures did not consume more analgesics than their unilateral counterparts. Goodness of fit analysis indicated acceptable model fit to the observed data. Latent curve analysis provided valuable information about how analgesic demand after TKA changed over time and how patient characteristics affected its trajectory.
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived and solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
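A compact sketch of the idea NLINEAR describes: expand chi-square to quadratic order, derive the normal equations, and iterate from a meaningful initial guess. This is plain Gauss-Newton with a numerical Jacobian, not NLINEAR's actual Fortran implementation:

    import numpy as np

    def gauss_newton(f, p, x, y, n_iter=20, h=1e-6):
        # minimize chi^2 = sum (y - f(x, p))^2 via linearized normal equations
        p = np.asarray(p, dtype=float)
        for _ in range(n_iter):
            r = y - f(x, p)
            J = np.empty((x.size, p.size))
            for j in range(p.size):              # forward-difference Jacobian
                dp = np.zeros_like(p)
                dp[j] = h
                J[:, j] = (f(x, p + dp) - f(x, p)) / h
            p = p + np.linalg.solve(J.T @ J, J.T @ r)   # normal equations step
        return p

    f = lambda x, p: p[0] * np.exp(-p[1] * x)
    x = np.linspace(0, 4, 30)
    y = f(x, (2.0, 0.7)) + 0.01*np.random.default_rng(5).normal(size=30)
    print(gauss_newton(f, [1.0, 1.0], x, y))    # recovers ~ (2.0, 0.7)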
Using the MWC model to describe heterotropic interactions in hemoglobin
Rapp, Olga
2017-01-01
Hemoglobin is a classical model allosteric protein. Research on hemoglobin parallels the development of key cooperativity and allostery concepts, such as the ‘all-or-none’ Hill formalism, the stepwise Adair binding formulation and the concerted Monod-Wyman-Changeux (MWC) allosteric model. While it is clear that the MWC model adequately describes the cooperative binding of oxygen to hemoglobin, rationalizing the effects of H+, CO2 or organophosphate ligands on hemoglobin-oxygen saturation using the same model remains controversial. According to the MWC model, allosteric ligands exert their effect on protein function by modulating the quaternary conformational transition of the protein. However, data-fitting analysis of hemoglobin oxygen saturation curves in the presence or absence of inhibitory ligands persistently revealed effects on both the relative oxygen affinity (c) and the conformational transition (L), the elementary MWC parameters. The recent realization that data-fitting analysis using the traditional MWC model equation may not provide reliable estimates for L and c thus calls for a re-examination of previous data using alternative fitting strategies. In the current manuscript, we present two simple strategies for obtaining reliable estimates of MWC mechanistic parameters from hemoglobin steady-state saturation curves in cases of both evolutionary and physiological variation. Our results suggest that the simple MWC model provides a reasonable description that can also account for heterotropic interactions in hemoglobin. The results, moreover, offer a general roadmap for successful data-fitting analysis using the MWC model. PMID:28793329
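For reference, a sketch of fitting the classical two-state MWC saturation function for a tetramer (N = 4), with invented saturation data; the manuscript's point is precisely that such naive fits of L and c can be unreliable:

    import numpy as np
    from scipy.optimize import curve_fit

    def mwc_saturation(alpha, L, c):
        # two-state MWC fractional saturation for a tetramer; alpha = [O2]/KR
        N = 4
        num = alpha*(1 + alpha)**(N-1) + L*c*alpha*(1 + c*alpha)**(N-1)
        den = (1 + alpha)**N + L*(1 + c*alpha)**N
        return num / den

    alpha = np.logspace(-1.5, 1.5, 25)                      # normalized O2 activity
    Y = mwc_saturation(alpha, 1e5, 0.01) \
        + 0.01*np.random.default_rng(6).normal(size=25)     # invented saturation data

    (L, c), _ = curve_fit(mwc_saturation, alpha, Y, p0=(1e4, 0.05),
                          bounds=([1.0, 1e-4], [1e8, 1.0]))
    print(f"L = {L:.3g}, c = {c:.3g}")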
Enrollment Projection within a Decision-Making Framework.
ERIC Educational Resources Information Center
Armstrong, David F.; Nunley, Charlene Wenckowski
1981-01-01
Two methods used to predict enrollment at Montgomery College in Maryland are compared and evaluated, and the administrative context in which they are used is considered. The two methods involve time series analysis (curve fitting) and indicator techniques (yield from components). (MSE)
FIT-MART: Quantum Magnetism with a Gentle Learning Curve
NASA Astrophysics Data System (ADS)
Engelhardt, Larry; Garland, Scott C.; Rainey, Cameron; Freeman, Ray A.
We present a new open-source software package, FIT-MART, that allows non-experts to quickly get started simulating quantum magnetism. FIT-MART can be downloaded as a platform-independent executable Java (JAR) file. It allows the user to define (Heisenberg) Hamiltonians by electronically drawing pictures that represent quantum spins and operators. Sliders are automatically generated to control the values of the parameters in the model, and when the values change, several plots are updated in real time to display both the resulting energy spectra and the equilibrium magnetic properties. Several experimental data sets for real magnetic molecules are included in FIT-MART to allow easy comparison between simulated and experimental data, and FIT-MART users can also import their own data for analysis and compare the goodness of fit for different models.
NASA Astrophysics Data System (ADS)
Horvath, Sarah; Myers, Sam; Ahlers, Johnathon; Barnes, Jason W.
2017-10-01
Stellar seismic activity produces variations in brightness that introduce oscillations into transit light curves, which can create challenges for traditional fitting models. These oscillations disrupt baseline stellar flux values and can potentially mask transits. We develop a model that removes these oscillations from transit light curves by minimizing the significance of each oscillation in frequency space. By removing stellar variability, we prepare each light curve for traditional fitting techniques. We apply our model to the δ Scuti star KOI-976 and demonstrate that our variability-subtraction routine successfully allows bulk system characteristics to be measured using traditional light curve fitting. These results open a new window for characterizing bulk system parameters of planets orbiting seismically active stars.
Neutron Multiplicity: LANL W Covariance Matrix for Curve Fitting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
2016-12-08
In neutron multiplicity counting one may fit a curve by minimizing an objective function χ²_n. The objective function includes the inverse of an n × n covariance matrix W. The inverse of W has a closed-form solution, and W⁻¹ is tridiagonal. The closed form and tridiagonal structure allow a simpler expression of the objective function χ²_n, and minimizing this simpler expression provides the optimal parameters for the fitted curve.
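A sketch of why the tridiagonal structure helps: the quadratic form rᵀW⁻¹r can be evaluated in O(n) from the three diagonals rather than with a dense inverse (the diagonals below are invented; the closed-form entries are in the report):

    import numpy as np

    def quadratic_form_tridiag(d, e, r):
        # r^T Winv r for symmetric tridiagonal Winv: main diagonal d (length n),
        # off-diagonal e (length n-1); O(n) work instead of a dense inverse
        return np.sum(d * r * r) + 2.0 * np.sum(e * r[:-1] * r[1:])

    n = 6
    d = np.full(n, 2.0)           # invented main diagonal of W^-1
    e = np.full(n - 1, -1.0)      # invented off-diagonal of W^-1
    r = np.arange(1.0, n + 1)     # residuals: data minus fitted curve

    chi2 = quadratic_form_tridiag(d, e, r)
    Winv = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)      # dense cross-check
    assert np.isclose(chi2, r @ Winv @ r)
    print(chi2)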
Description Of Scoliotic Deformity Pattern By Harmonic Functions
NASA Astrophysics Data System (ADS)
Drerup, Burkhard; Hierholzer, Eberhard
1989-04-01
Frontal radiographs of scoliotic deformity of the spine reveal a characteristic pattern of lateral deviation, lateral tilt and axial rotation of vertebrae. In order to study interrelations between deformation parameters 478 radiographs of idiopathic scolioses, 23 of scolioses after Wilms-tumor treatment and 18 of scolioses following poliomyelitis were digitized. From these the curves of lateral deviation, tilt and rotation are calculated and fitted by Fourier series. By restriction to the first harmonic, analysis reduces to the analysis of a single phase and amplitude for each curve. Justification of this simplification will be discussed. Results provide a general geometric description of scoliotic deformity.
A high throughput MATLAB program for automated force-curve processing using the AdG polymer model.
O'Connor, Samantha; Gaddis, Rebecca; Anderson, Evan; Camesano, Terri A; Burnham, Nancy A
2015-02-01
Research in understanding biofilm formation depends on accurate and representative measurements of the steric forces arising from polymer brushes on bacterial surfaces. A MATLAB program to analyze force curves from an AFM efficiently, accurately, and with minimal user bias has been developed. The analysis is based on a modified version of the Alexander and de Gennes (AdG) polymer model, which is a function of equilibrium polymer brush length, probe radius, temperature, separation distance, and a density variable. Automating the analysis reduces the time required to process 100 force curves from several days to less than 2 min. The use of this program to crop and fit force curves to the AdG model will allow researchers to ensure proper processing of large amounts of experimental data and reduce the time required for analysis and comparison of data, thereby enabling higher-quality results in a shorter time. Copyright © 2014 Elsevier B.V. All rights reserved.
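For orientation, a widely used AdG-derived approximation for the steric force on a spherical probe of radius a is F(h) ≈ 50·kB·T·a·L0·Γ^(3/2)·exp(-2πh/L0), valid roughly for 0.2 < h/L0 < 0.9; the paper's modified model differs in detail. A sketch with invented parameters:

    import numpy as np
    from scipy.optimize import curve_fit

    kB, T = 1.380649e-23, 298.0

    def adg_force(h, L0, Gamma, a=2.0e-8):
        # steric force (N) vs separation h (m); a = assumed 20 nm probe radius
        return 50.0 * kB * T * a * L0 * Gamma**1.5 * np.exp(-2.0*np.pi*h/L0)

    h = np.linspace(20e-9, 90e-9, 30)                 # separations inside validity range
    F = adg_force(h, 1.0e-7, 1e17) \
        * (1 + 0.05*np.random.default_rng(7).normal(size=30))   # invented noisy forces

    (L0, Gamma), _ = curve_fit(adg_force, h, F, p0=(8e-8, 5e16))
    print(f"brush length L0 = {L0*1e9:.1f} nm, grafting density = {Gamma:.3g} m^-2")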
Can Tooth Preparation Design Affect the Fit of CAD/CAM Restorations?
Roperto, Renato Cassio; Oliveira, Marina Piolli; Porto, Thiago Soares; Ferreira, Lais Alaberti; Melo, Lucas Simino; Akkus, Anna
2017-03-01
The purpose of this study was to evaluate whether the marginal fit of computer-aided design and computer-aided manufacturing (CAD/CAM) restorations can be affected by different tooth preparation designs. Twenty-six typodont (plastic) teeth were divided into two groups (n = 13) according to the occlusal curvature of the tooth preparation: group 1 (control, flat occlusal design) and group 2 (curved occlusal design). The preparations were scanned and crowns were milled from ceramic blocks. Crowns were cemented using epoxy glue on the pulpal floor only, with finger pressure applied for 1 minute. After cementation, marginal gaps between the restoration and abutment were measured by microphotography and the silicone replica technique, using light-body silicone material, on the mesial, distal, buccal, and lingual surfaces. Two-way ANOVA did not reveal a statistical difference between flat (83.61 ± 50.72) and curved (79.04 ± 30.97) preparation designs. Buccal, mesial, lingual, and distal sites on the curved design showed smaller gaps than on the flat design. No difference was found on flat preparations among the mesial, buccal, and distal sites (P < .05). The lingual aspect did not differ from the distal side but showed a statistically significant difference from the mesial and buccal sites (P < .05). Difference in occlusal design did not significantly affect the marginal fit. Marginal fit was significantly affected by the location of the margin; lingual and distal locations exhibited greater gap values than buccal and mesial sites regardless of the preparation design.
Ten years in the library: new data confirm paleontological patterns
NASA Technical Reports Server (NTRS)
Sepkoski, J. J. Jr; Sepkoski JJ, J. r. (Principal Investigator)
1993-01-01
A comparison is made between compilations of times of origination and extinction of fossil marine animal families published in 1982 and 1992. As a result of ten years of library research, half of the information in the compendia has changed: families have been added and deleted, low-resolution stratigraphic data been improved, and intervals of origination and extinction have been altered. Despite these changes, apparent macroevolutionary patterns for the entire marine fauna have remained constant. Diversity curves compiled from the two data bases are very similar, with a goodness-of-fit of 99%; the principal difference is that the 1992 curve averages 13% higher than the older curve. Both numbers and percentages of origination and extinction also match well, with fits ranging from 83% to 95%. All major events of radiation and extinction are identical. Therefore, errors in large paleontological data bases and arbitrariness of included taxa are not necessarily impediments to the analysis of pattern in the fossil record, so long as the data are sufficiently numerous.
Biological growth functions describe published site index curves for Lake States timber species.
Allen L. Lundgren; William A. Dolid
1970-01-01
Two biological growth functions, an exponential-monomolecular function and a simple monomolecular function, have been fit to published site index curves for 11 Lake States tree species: red, jack, and white pine, balsam fir, white and black spruce, tamarack, white-cedar, aspen, red oak, and paper birch. Both functions closely fit all published curves except those for...
A new methodology for free wake analysis using curved vortex elements
NASA Technical Reports Server (NTRS)
Bliss, Donald B.; Teske, Milton E.; Quackenbush, Todd R.
1987-01-01
A method using curved vortex elements was developed for helicopter rotor free wake calculations. The Basic Curve Vortex Element (BCVE) is derived from the approximate Biot-Savart integration for a parabolic arc filament. When used in conjunction with a scheme to fit the elements along a vortex filament contour, this method has a significant advantage in overall accuracy and efficiency when compared to the traditional straight-line element approach. A theoretical and numerical analysis shows that free wake flows involving close interactions between filaments should utilize curved vortex elements in order to guarantee a consistent level of accuracy. The curved element method was implemented into a forward flight free wake analysis, featuring an adaptive far wake model that utilizes free wake information to extend the vortex filaments beyond the free wake regions. The curved vortex element free wake, coupled with this far wake model, exhibited rapid convergence, even in regions where the free wake and far wake turns are interlaced. Sample calculations are presented for tip vortex motion at various advance ratios for single and multiple blade rotors. Cross-flow plots reveal that the overall downstream wake flow resembles a trailing vortex pair. A preliminary assessment shows that the rotor downwash field is insensitive to element size, even for relatively large curved elements.
NASA Astrophysics Data System (ADS)
Mattei, G.; Ahluwalia, A.
2018-04-01
We introduce a new function, the apparent elastic modulus strain-rate spectrum E_app(ε̇), for the derivation of lumped-parameter constants for Generalized Maxwell (GM) linear viscoelastic models from stress-strain data obtained at various compressive strain rates (ε̇). The E_app(ε̇) function was derived using the tangent modulus function obtained from the GM model stress-strain response to a constant-ε̇ input. Material viscoelastic parameters can be rapidly derived by fitting experimental E_app data obtained at different strain rates to the E_app(ε̇) function. This single-curve fitting returns similar viscoelastic constants as the original ε̇ (epsilon dot) method based on a multi-curve global fitting procedure with shared parameters. Its low computational cost permits quick and robust identification of viscoelastic constants even when a large number of strain rates or replicates per strain rate are considered. This method is particularly suited for the analysis of bulk compression and nano-indentation data of soft (bio)materials.
Howard, Robert W
2014-09-01
The power law of practice holds that a power function best interrelates skill performance and amount of practice. However, the law's validity and generality are moot. Some researchers argue that it is an artifact of averaging individual exponential curves, while others question whether the law generalizes to complex skills and to performance measures other than response time. The present study tested the power law's generality to the development, over many years, of a very complex cognitive skill, chess playing, with 387 skilled participants, most of whom were grandmasters. A power or logarithmic function best fit the grouped data, but individuals showed much variability. An exponential function usually was the worst fit to individual data. Groups differing in chess talent were compared: a power function best fit the group curve for the more talented players while a quadratic function best fit that for the less talented. After extreme amounts of practice, a logarithmic function best fit the grouped data but a quadratic function best fit most individual curves. Individual variability is great, and neither the power law nor an exponential law is the best description of individual chess skill development. Copyright © 2014 Elsevier B.V. All rights reserved.
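A sketch of the kind of comparison performed, on an invented rating trajectory: fit power and exponential laws and compare goodness of fit:

    import numpy as np
    from scipy.optimize import curve_fit

    power = lambda t, a, b, c: a + b * t**(-c)
    expon = lambda t, a, b, c: a + b * np.exp(-c * t)

    t = np.arange(1, 41, dtype=float)                  # years of practice
    perf = power(t, 2600, -900, 0.5) \
        + 10*np.random.default_rng(8).normal(size=40)  # invented rating trajectory

    for name, f, p0 in [("power", power, (2500, -800, 0.5)),
                        ("exponential", expon, (2500, -800, 0.1))]:
        prm, _ = curve_fit(f, t, perf, p0=p0, maxfev=10000)
        print(f"{name}: SSE = {np.sum((perf - f(t, *prm))**2):.0f}")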
Structural-Vibration-Response Data Analysis
NASA Technical Reports Server (NTRS)
Smith, W. R.; Hechenlaible, R. N.; Perez, R. C.
1983-01-01
Computer program developed as structural-vibration-response data analysis tool for use in dynamic testing of Space Shuttle. Program provides fast and efficient time-domain least-squares curve-fitting procedure for reducing transient response data to obtain structural model frequencies and dampings from free-decay records. Procedure simultaneously identifies frequencies, damping values, and participation factors for noisy multiple-response records.
Effect Size Measure and Analysis of Single Subject Designs
ERIC Educational Resources Information Center
Society for Research on Educational Effectiveness, 2013
2013-01-01
One of the vexing problems in the analysis of SSD is in the assessment of the effect of intervention. Serial dependence notwithstanding, the linear model approach that has been advanced involves, in general, the fitting of regression lines (or curves) to the set of observations within each phase of the design and comparing the parameters of these…
Analysis of Hanle-effect signals observed in Si-channel spin accumulation devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takamura, Yota, E-mail: takamura@spin.pe.titech.ac.jp; Department of Physical Electronics, Tokyo Institute of Technology, 2-12-1, Ookayama, Meguro-ku, Tokyo 152-8552; Akushichi, Taiju
2014-05-07
We reexamined curve-fitting analysis for spin-accumulation signals observed in Si-channel spin-accumulation devices, employing widely-used Lorentz functions and a new formula developed from the spin diffusion equation. A Si-channel spin-accumulation device with a high quality ferromagnetic spin injector was fabricated, and its observed spin-accumulation signals were verified. Experimentally obtained Hanle-effect signals for spin accumulation were not able to be fitted by a single Lorentz function and were reproduced by the newly developed formula. Our developed formula can represent spin-accumulation signals and thus analyze Hanle-effect signals.
Miao, Zewei; Xu, Ming; Lathrop, Richard G; Wang, Yufei
2009-02-01
A review of the literature revealed that a variety of methods are currently used for fitting net CO2 assimilation versus chloroplastic CO2 concentration (A-Cc) curves, resulting in considerable differences in the estimated A-Cc parameters [including maximum ribulose 1,5-bisphosphate carboxylase/oxygenase (Rubisco) carboxylation rate (Vcmax), potential light-saturated electron transport rate (Jmax), leaf dark respiration in the light (Rd), mesophyll conductance (gm) and triose-phosphate utilization (TPU)]. In this paper, we examined the impacts of fitting methods on the estimates of Vcmax, Jmax, TPU, Rd and gm using grid search and non-linear fitting techniques. Our results suggest that the fitting methods significantly affected the predictions of Rubisco-limited (Ac), ribulose 1,5-bisphosphate-limited (Aj) and TPU-limited (Ap) curves and leaf photosynthesis velocities because of inconsistent estimates of Vcmax, Jmax, TPU, Rd and gm, but they barely influenced the Jmax:Vcmax, Vcmax:Rd and Jmax:TPU ratios. In terms of fitting accuracy, simplicity of fitting procedures and sample size requirements, we recommend combining grid search and non-linear techniques to directly and simultaneously fit Vcmax, Jmax, TPU, Rd and gm to the whole A-Cc curve, in contrast to the conventional method, which fits Vcmax, Rd or gm first and then solves for Vcmax, Jmax and/or TPU with Vcmax, Rd and/or gm held constant.
Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L
2010-08-05
Film calibration is time-consuming work when dose accuracy is essential and measurements are made in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel-plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variation of the model parameters (background, saturation and slope) was 1.8%, 5.7%, and 7.7% (1σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth; it decreases with increasing depth below 0.5 cm for the three field sizes and increases with increasing depth above 0.5 cm. A calibration curve with one to three dose points fitted with the model achieves 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy-dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
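A sketch of the single-target single-hit response used for the calibration curves, net optical density OD(D) = background + ODsat·(1 - e^(-bD)), with invented readings:

    import numpy as np
    from scipy.optimize import curve_fit

    def single_hit(dose, bg, od_sat, b):
        # optical density vs dose: background plus saturating single-hit term
        return bg + od_sat * (1.0 - np.exp(-b * dose))

    dose = np.array([0, 16, 32, 48, 64, 96, 128], dtype=float)    # cGy
    od   = np.array([0.20, 0.52, 0.79, 1.02, 1.21, 1.52, 1.75])   # invented readings

    (bg, od_sat, b), _ = curve_fit(single_hit, dose, od, p0=(0.2, 2.5, 0.01))
    print(f"background = {bg:.2f}, saturation = {od_sat:.2f}, slope = {b:.4f} 1/cGy")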
NASA Astrophysics Data System (ADS)
Askarimarnani, Sara; Willgoose, Garry; Fityus, Stephen
2017-04-01
Coal seam gas (CSG) is a form of natural gas that occurs in some coal seams. Coal seams have natural fractures with dual-porosity systems and low permeability. In the CSG industry, hydraulic fracturing is applied to increase the permeability and extract the gas more efficiently from the coal seam. The industry claims that it can design fracking patterns. Whether this is true or not, the public (and regulators) require assurance that once a well has been fracked, the fracking has occurred according to plan and the fracked well is safe. Thus defensible post-fracking testing methodologies for gas-generating wells are required. In 2009 a fracked well, HB02, owned by AGL, near Broke, NSW, Australia, was subjected to traditional water pump-testing as part of this assurance process. Interpretation with well type curves and simple single-phase (i.e. water only, no gas) models highlighted deficiencies in traditional water-well approaches, with a systematic deviation from the qualitative character of well drawdown curves (e.g. concavity versus convexity of drawdown with time). Accordingly, a multiphase (i.e. water and methane) model of the well was developed and compared with the observed data. This paper discusses the results of this multiphase testing using the TOUGH2 model and its EOS7C constitutive model. A key objective was to test a methodology, based on the GLUE Monte Carlo calibration technique, to calibrate the characteristics of the frack from the well-test drawdown curve. GLUE involves a sensitivity analysis of how changes in the fracture properties change the well hydraulics, through an analysis of the drawdown curve and of changes in the cone of depression. This was undertaken by changing the native coal, fracture, and gas parameters to see how changing those parameters changed the match between simulations and the observed well drawdown. Results from the GLUE analysis show how much information is contained in the well drawdown curve for estimating field-scale coal and gas generation properties, the fracture geometry, and the proppant characteristics. The results with the multiphase model show a better match to the drawdown than a single-phase model, but the differences between the best-fit drawdowns were small, and smaller than the difference between the best fit and the field data. However, the parameters derived to generate these best fits for each model were very different. We conclude that while satisfactory fits with single-phase groundwater models (e.g. MODFLOW, FEFLOW) can be achieved, the parameters derived will not be realistic, with potential implications for drawdowns and water yields in gas field modelling. Multiphase models are thus required, and we discuss some of the limitations of TOUGH2 for the CSG problem.
NASA Technical Reports Server (NTRS)
Rodrigues, C. V.; Magalhaes, A. M.; Coyne, G. V.
1995-01-01
We study the dust in the Small Magellanic Cloud using our polarization and extinction data (Paper 1) and existing dust models. The data suggest that the monotonic SMC extinction curve is related to values of lambda(sub max), the wavelength of maximum polarization, which are on average smaller than the mean for the Galaxy. On the other hand, AZV 456, a star with an extinction curve similar to that for the Galaxy, shows a value of lambda(sub max) similar to the mean for the Galaxy. We discuss simultaneous dust model fits to extinction and polarization. Fits to the wavelength-dependent polarization data are possible for stars with small lambda(sub max). In general, they imply dust size distributions which are narrower and have smaller mean sizes compared to typical size distributions for the Galaxy. However, stars with lambda(sub max) close to the Galactic norm, which also have a narrower polarization curve, cannot be fit adequately. This holds true for all of the dust models considered. The best fits to the extinction curves are obtained with a power-law size distribution by assuming that the cylindrical and spherical silicate grains have a volume distribution which is continuous from the smaller spheres to the larger cylinders. The size distribution for the cylinders is taken from the fit to the polarization. The 'typical', monotonic SMC extinction curve can be fit well with graphite and silicate grains if a small fraction of the SMC carbon is locked up in the grains. However, amorphous carbon and silicate grains also fit the data well. AZV 456, which has an extinction curve similar to that for the Galaxy, has a UV bump which is too blue to be fit by spherical graphite grains.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Sarah Josephine; Wei, Max; Sohn, Michael D.
2016-09-17
Experience curves are useful for understanding technology development and can aid in the design and analysis of market transformation programs. Here, we employ a novel approach to create experience curves, examining both global and North American compact fluorescent lamp (CFL) data for the years 1990–2007. We move away from the prevailing method of fitting a single, constant, exponential curve to data and instead search for break points where changes in the learning rate may have occurred. Our analysis suggests a learning rate of approximately 21% for the period of 1990–1997, and 51% and 79% in the global and North American datasets, respectively, after 1998. We use price data for this analysis; therefore our learning rates encompass developments beyond typical "learning by doing", including supply chain impacts such as market competition. We examine correlations between North American learning rates and the initiation of new programs, abrupt technological advances, and economic and political events, and find an increased learning rate associated with design advancements and federal standards programs. Our findings support the use of segmented experience curves for retrospective and prospective technology analysis, and may imply that investments in technology programs have contributed to an increase of the CFL learning rate.
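The segmented-fit idea can be sketched as a break-point search over piecewise log-log fits; everything below (the data, the break-point grid, and the learning-rate relation LR = 1 - 2^slope) is a minimal illustration, not the authors' code.

```python
# Sketch of a segmented experience curve: fit log(price) vs. log(cumulative
# production) in two pieces, scanning candidate break points. Data are synthetic.
import numpy as np

cum_prod = np.logspace(0, 3, 40)                        # cumulative units (arbitrary)
price = np.where(cum_prod < 100,
                 10 * cum_prod ** -0.34,                # ~21% learning rate
                 10 * 100 ** (-0.34 + 1.0) * cum_prod ** -1.0)  # ~50% after break
x, y = np.log(cum_prod), np.log(price)

def sse_two_segments(k):
    (m1, c1), (m2, c2) = np.polyfit(x[:k], y[:k], 1), np.polyfit(x[k:], y[k:], 1)
    sse = (np.sum((y[:k] - (m1 * x[:k] + c1)) ** 2)
           + np.sum((y[k:] - (m2 * x[k:] + c2)) ** 2))
    return sse, m1, m2

best = min((sse_two_segments(k) for k in range(5, len(x) - 5)), key=lambda r: r[0])
lr1, lr2 = 1 - 2 ** best[1], 1 - 2 ** best[2]           # slopes are negative
print(f"learning rates: {lr1:.0%} then {lr2:.0%}")
```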
NASA Astrophysics Data System (ADS)
Lu, Jun; Xiao, Jun; Gao, Dong Jun; Zong, Shu Yu; Li, Zhu
2018-03-01
In the production of Association of American Railroads (AAR) locomotive wheel-sets, the press-fit curve is the most important basis for judging the reliability of wheel-set assembly. In the past, most production enterprises relied mainly on manual inspection to determine the quality of assembly, and cases of misjudgment occurred. For this reason, research on the standard was carried out, and the automatic judgment of the press-fit curve was analysed and designed, so as to provide guidance for locomotive wheel-set production based on the AAR standard.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le, K. C.; Tran, T. M.; Langer, J. S.
The statistical-thermodynamic dislocation theory developed in previous papers is used here in an analysis of high-temperature deformation of aluminum and steel. Using physics-based parameters that we expect theoretically to be independent of strain rate and temperature, we are able to fit experimental stress-strain curves for three different strain rates and three different temperatures for each of these two materials. Here, our theoretical curves include yielding transitions at zero strain in agreement with experiment. We find that thermal softening effects are important even at the lowest temperatures and smallest strain rates.
2011-09-01
with the bilinear plasticity relation. We used the bilinear relation, which allowed a full range of hardening from isotropic to kinematic to be... Verification of the Weight Function Method for Single Corner Crack at a Hole in an Infinite... determine the "Young's Modulus," or the slope of the linear region of the curve, the experimental data is curve fit with
Undiagnosed Small Fiber Polyneuropathy: Is It a Component of Gulf War Illness?
2011-07-01
After informed consent, a site (10 cm above the ankle) is anesthetized and one or two 2- or 3-mm diameter skin punches are removed using sterile... results anchor the lower end of the normal biopsy curve from which the multivariate analysis is derived. Thus, their biopsies will remain part of the... the findings in the young adult subjects, and also anchor the lower end of the neurite density curve, thus providing a more accurate normative fit
Prediction Analysis for Measles Epidemics
NASA Astrophysics Data System (ADS)
Sumi, Ayako; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi; Olsen, Lars Folke; Kobayashi, Nobumichi
2003-12-01
A newly devised prediction analysis procedure, a linearized version of the nonlinear least squares method combined with the maximum entropy spectral analysis method, is proposed. The method was applied to time series data of measles case notifications in several communities in the UK, USA and Denmark. The dominant spectral lines observed in each power spectral density (PSD) can be safely assigned as fundamental periods. The optimum least squares fitting (LSF) curve calculated using these fundamental periods can essentially reproduce the underlying variation of the measles data. An extension of the LSF curve can be used to predict measles case notifications quantitatively. Some discussion, including the predictability of chaotic time series, is presented.
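Once the fundamental periods are fixed (here assumed known, e.g. from a maximum entropy spectrum), the least squares fit is linear in the sinusoidal amplitudes. The sketch below illustrates this with synthetic weekly counts; the periods and data are assumptions.

```python
# Sketch of the LSF step: a linear least-squares fit of constant + sin/cos pairs
# at assumed fundamental periods, then extension of the curve for prediction.
import numpy as np

t = np.arange(0, 520)                        # weeks
periods = [52.0, 26.0, 104.0]                # assumed fundamental periods (weeks)
rng = np.random.default_rng(1)
y = 100 + 30*np.sin(2*np.pi*t/52) + 10*np.cos(2*np.pi*t/26) + rng.normal(0, 3, t.size)

# Design matrix: one column of ones plus a sin/cos pair per fundamental period.
cols = [np.ones_like(t, dtype=float)]
for P in periods:
    cols += [np.sin(2*np.pi*t/P), np.cos(2*np.pi*t/P)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

t_future = np.arange(520, 572)               # extend the LSF curve to predict
A_future = np.column_stack([np.ones_like(t_future, dtype=float)]
                           + [f(2*np.pi*t_future/P) for P in periods
                              for f in (np.sin, np.cos)])
forecast = A_future @ coef
```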
An Algorithm for Protein Helix Assignment Using Helix Geometry
Cao, Chen; Xu, Shutan; Wang, Lincong
2015-01-01
Helices are among the most common, and were among the earliest recognized, secondary structure elements in proteins. The assignment of helices in a protein underlies the analysis of its structure and function. Though the mathematical expression for a helical curve is simple, no previous assignment program has used a genuine helical curve as a model for helix assignment. In this paper we present a two-step assignment algorithm. The first step searches for a series of bona fide helical curves, each of which best fits the coordinates of four successive backbone Cα atoms. The second step uses the best-fit helical curves as input to make the helix assignment. Application to the protein structures in the PDB (Protein Data Bank) shows that the algorithm is able to assign accurately not only regular α-helices but also 3₁₀ and π helices, as well as their left-handed versions. One salient feature of the algorithm is that the assigned helices are structurally more uniform than those from previous programs. This structural uniformity should be useful for protein structure classification and prediction, while the accurate assignment of a helix to a particular type underlies structure-function relationships in proteins. PMID:26132394
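A minimal sketch of the first step, under a simplifying assumption: the four Cα coordinates are taken to be already expressed in the helix frame, so only radius, twist, rise, and phase are fitted; the full algorithm must also solve the rigid-body alignment.

```python
# Sketch of least-squares fitting of a helical curve to four successive
# C-alpha positions (helix-frame coordinates assumed; data are synthetic).
import numpy as np
from scipy.optimize import least_squares

def helix_points(params, n=4):
    r, w, d, p = params      # radius, twist per residue, rise per residue, phase
    i = np.arange(n)
    return np.column_stack([r*np.cos(w*i + p), r*np.sin(w*i + p), d*i])

# Noisy alpha-helix-like data (~2.3 A radius, ~100 deg twist, ~1.5 A rise).
ca = helix_points([2.3, np.deg2rad(100), 1.5, 0.3]) \
     + np.random.default_rng(0).normal(0, 0.05, (4, 3))

def residuals(params):
    return (helix_points(params) - ca).ravel()

fit = least_squares(residuals, x0=[2.0, 1.5, 1.4, 0.0])
r, w, d, p = fit.x
print(f"radius={r:.2f} A, twist={np.rad2deg(w):.1f} deg/residue, rise={d:.2f} A")
```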
Calibration and accuracy analysis of a focused plenoptic camera
NASA Astrophysics Data System (ADS)
Zeller, N.; Quint, F.; Stilla, U.
2014-08-01
In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression for the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve-fitting approach based on a Taylor series approximation. Both model-based methods show significant advantages compared to the curve-fitting method. They need fewer reference points for calibration than the curve-fitting method and, moreover, supply a function which is valid beyond the range of calibration. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.
Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud
NASA Astrophysics Data System (ADS)
Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.
2018-04-01
In order to solve the problem of lacking an applicable analysis method when applying three-dimensional laser scanning technology to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vectors of a point cloud is proposed. Firstly, a kd-tree is used to establish the topological relation. Datum points are detected by tracking the normal vector of the point cloud, determined by the normal vector of the local plane. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of the radial points are calculated from the fitted curve, and the deformation information is analysed. The proposed approach was verified on a real large-scale tank data set captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain the entire information of the monitored object quickly and comprehensively, and accurately reflect the deformation of the datum features.
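A minimal sketch of the B-spline step, assuming the datum points form a 2D vertical section (radial distance vs. elevation); splprep/splev are standard SciPy routines, and the data and smoothing factor are illustrative.

```python
# Sketch of cubic B-spline fitting to extracted datum points, followed by
# evaluation of the fitted curve and its inclination angle.
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical datum points extracted from the point cloud (metres).
radial = np.array([10.00, 10.02, 10.05, 10.03, 10.01, 9.98, 9.97])
elev = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])

tck, u = splprep([radial, elev], k=3, s=1e-4)   # cubic B-spline, light smoothing
u_fine = np.linspace(0, 1, 200)
r_fit, z_fit = splev(u_fine, tck)

# Inclination angle along the fitted curve from its first derivative.
dr, dz = splev(u_fine, tck, der=1)
incl_deg = np.degrees(np.arctan2(dr, dz))       # deviation from vertical
```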
Photometric Analysis of Eclipsing Binary AZ Vir
NASA Astrophysics Data System (ADS)
Neugarten, Andrew; Akiba, Tatsuya; Gokhale, Vayujeet
2018-06-01
We present a photometric analysis of the eclipsing binary star system AZ Vir. Standard BVR filter data were obtained using the 17-inch PlaneWave Instruments CDK telescope at the Truman State University Observatory in Kirksville, MO, and the 31-inch NURO telescope at the Lowell Observatory complex in Flagstaff, AZ. We apply an eight-term truncated Fourier fit to the light curves generated from these data to confirm the classification of AZ Vir as a W Ursae Majoris-type eclipsing variable, using criteria specified by Rucinski (1997). We also calculate values for the O'Connell Effect Ratio (OER) and the Light Curve Asymmetry (LCA) to quantify the asymmetry in the BVR light curves. In addition, we use data provided by the SuperWASP mission to perform long-term O-C (observed minus calculated) analysis on the system to determine if and how its period is changing.
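The Fourier fit and an OER estimate can be sketched as follows; OER and LCA conventions vary in the literature, so the ratio-of-half-areas definition, the synthetic light curve, and the normalization used here are assumptions.

```python
# Sketch of an eight-term truncated Fourier fit to a phased light curve, with an
# O'Connell Effect Ratio computed as the ratio of areas under the two halves.
import numpy as np

def fourier_design(phase, nterms=8):
    cols = [np.ones_like(phase)]
    for k in range(1, nterms + 1):
        cols += [np.cos(2*np.pi*k*phase), np.sin(2*np.pi*k*phase)]
    return np.column_stack(cols)

phase = np.linspace(0, 1, 300, endpoint=False)          # hypothetical phased data
flux = 1 - 0.3*np.cos(2*np.pi*2*phase) + 0.02*np.sin(2*np.pi*phase)

coef, *_ = np.linalg.lstsq(fourier_design(phase), flux, rcond=None)
model = fourier_design(phase) @ coef

half1 = model[phase < 0.5] - model.min()                # area above minimum light
half2 = model[phase >= 0.5] - model.min()
oer = half1.sum() / half2.sum()                         # OER = 1 for a symmetric curve
```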
ERIC Educational Resources Information Center
Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey
2009-01-01
The purpose of this study was to assess the model fit of the two-parameter logistic (2PL) model through comparison with nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that the three nonparametric procedures implemented produced ICCs similar to those of the 2PL for items simulated to fit the 2PL. However, for misfitting items,…
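For reference, the 2PL ICC being tested has a simple closed form; the sketch below uses illustrative discrimination a and difficulty b (some formulations include an extra scaling constant D = 1.7).

```python
# The two-parameter logistic (2PL) item characteristic curve: the parametric
# form that the nonparametric ICC estimates are compared against.
import numpy as np

def icc_2pl(theta, a=1.2, b=0.5):
    """P(correct | ability theta) with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 121)
p = icc_2pl(theta)
```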
Statistical aspects of modeling the labor curve.
Zhang, Jun; Troendle, James; Grantz, Katherine L; Reddy, Uma M
2015-06-01
In a recent review by Cohen and Friedman, several statistical questions on modeling labor curves were raised. This article illustrates that asking data to fit a preconceived model or letting a sufficiently flexible model fit observed data is the main difference in principles of statistical modeling between the original Friedman curve and our average labor curve. An evidence-based approach to construct a labor curve and establish normal values should allow the statistical model to fit observed data. In addition, the presence of the deceleration phase in the active phase of an average labor curve was questioned. Forcing a deceleration phase to be part of the labor curve may have artificially raised the speed of progression in the active phase with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor because of variations in individual labor pattern and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and contemporary obstetric population into account. Copyright © 2015 Elsevier Inc. All rights reserved.
Two imaging techniques for 3D quantification of pre-cementation space for CAD/CAM crowns.
Rungruanganunt, Patchanee; Kelly, J Robert; Adams, Douglas J
2010-12-01
Internal three-dimensional (3D) "fit" of prostheses to prepared teeth is likely more important clinically than "fit" judged only at the level of the margin (i.e. marginal "opening"). This work evaluates two techniques for quantitatively defining 3D "fit", both using pre-cementation space impressions: X-ray microcomputed tomography (micro-CT) and quantitative optical analysis. Both techniques are of interest for comparing CAD/CAM system capabilities and for documenting "fit" as part of clinical studies. Pre-cementation space impressions were taken of a single zirconia coping on its die using a low-viscosity poly(vinyl siloxane) impression material. Calibration specimens of this material were fabricated between the measuring platens of a micrometer. Both calibration curves and pre-cementation space impression data sets were obtained by examination using micro-CT and quantitative optical analysis. Regression analysis was used to compare calibration curves with calibration sets. Micro-CT calibration data showed tighter 95% confidence intervals and could measure over a wider thickness range than the optical technique. Regions of interest (e.g., lingual, cervical) were more easily analysed with optical image analysis, and this technique was more suitable for extremely thin impression walls (<10-15 μm). Specimen preparation is easier for micro-CT, and segmentation parameters appeared to capture dimensions accurately. Both micro-CT and the optical method can be used to quantify the thickness of pre-cementation space impressions. Each has advantages and limitations, but either technique has the potential for use as part of clinical studies or CAD/CAM protocol optimization. Copyright © 2010 Elsevier Ltd. All rights reserved.
On the reduction of occultation light curves. [stellar occultations by planets
NASA Technical Reports Server (NTRS)
Wasserman, L.; Veverka, J.
1973-01-01
The two basic methods of reducing occultation light curves, curve fitting and inversion, are reviewed and compared. It is shown that the curve-fitting methods have severe problems of nonuniqueness. In addition, in the case of occultation curves dominated by spikes, it is not clear that such solutions are meaningful. The inversion method does not suffer from these drawbacks. Methods of deriving temperature profiles from refractivity profiles are then examined. It is shown that, although the temperature profiles are sensitive to small errors in the refractivity profile, accurate temperatures can be obtained, particularly at the deeper levels of the atmosphere. The ambiguities that arise when the occultation curve straddles the turbopause are briefly discussed.
An Apparatus for Sizing Particulate Matter in Solid Rocket Motors.
1984-06-01
accurately measured. A curve for sizing polydispersions was presented which was used by Cramer and Hansen [Refs. 2, 12]. Two-phase flow losses are often... [Remaining text is table-of-contents residue listing curve-fit and two-angle-method results for 5, 10, and 20 micron polystyrene.]
2D Bayesian automated tilted-ring fitting of disc galaxies in large H I galaxy surveys: 2DBAT
NASA Astrophysics Data System (ADS)
Oh, Se-Heon; Staveley-Smith, Lister; Spekkens, Kristine; Kamphuis, Peter; Koribalski, Bärbel S.
2018-01-01
We present a novel algorithm based on a Bayesian method for 2D tilted-ring analysis of disc galaxy velocity fields. Compared to conventional algorithms based on a chi-squared minimization procedure, this new Bayesian-based algorithm suffers less from local minima in the model parameters, even with highly multimodal posterior distributions. Moreover, the Bayesian analysis, implemented via Markov Chain Monte Carlo sampling, only requires broad ranges for the posterior distributions of the parameters, which makes the fitting procedure fully automated. This feature will be essential when performing kinematic analysis on the large number of resolved galaxies expected to be detected in neutral hydrogen (H I) surveys with the Square Kilometre Array and its pathfinders. The so-called 2D Bayesian Automated Tilted-ring fitter (2DBAT) implements Bayesian fits of 2D tilted-ring models in order to derive rotation curves of galaxies. We explore 2DBAT performance on (a) artificial H I data cubes built from representative rotation curves of intermediate-mass and massive spiral galaxies, and (b) Australia Telescope Compact Array H I data from the Local Volume H I Survey. We find that 2DBAT works best for well-resolved galaxies with intermediate inclinations (20° < i < 70°), complementing 3D techniques better suited to modelling inclined galaxies.
Sub-band denoising and spline curve fitting method for hemodynamic measurement in perfusion MRI
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Huang, Hsiao-Ling; Hsu, Yuan-Yu; Chen, Chi-Chen; Chen, Ing-Yi; Wu, Liang-Chi; Liu, Ren-Shyan; Lin, Kang-Ping
2003-05-01
In clinical research, non-invasive MR perfusion imaging is capable of investigating brain perfusion phenomena via various hemodynamic measurements, such as cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT). These hemodynamic parameters are useful in diagnosing brain disorders such as stroke, infarction and peri-infarct ischemia by further semi-quantitative analysis. However, the accuracy of quantitative analysis is usually limited by poor signal-to-noise image quality. In this paper, we propose a hemodynamic measurement method based upon sub-band denoising and spline curve fitting to improve image quality and thereby obtain better hemodynamic quantitative analysis results. Ten sets of perfusion MRI data and corresponding PET images were used to validate the performance. For quantitative comparison, we evaluated the gray/white matter CBF ratio. As a result, the semi-quantitative mean gray-to-white matter CBF ratio is 2.10 +/- 0.34. The ratio evaluated for brain tissues in perfusion MRI is comparable to the PET technique, with less than 1% difference on average. Furthermore, the method features excellent noise reduction and boundary preservation in image processing, and a short hemodynamic measurement time.
A century of enzyme kinetic analysis, 1913 to 2013.
Johnson, Kenneth A
2013-09-02
This review traces the history and logical progression of methods for quantitative analysis of enzyme kinetics from the 1913 Michaelis and Menten paper to the application of modern computational methods today. Following a brief review of methods for fitting steady state kinetic data, modern methods are highlighted for fitting full progress curve kinetics based upon numerical integration of rate equations, including a re-analysis of the original Michaelis-Menten full time course kinetic data. Finally, several illustrations of modern transient state kinetic methods of analysis are shown which enable the elucidation of reactions occurring at the active sites of enzymes in order to relate structure and function. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
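A minimal sketch of the modern progress-curve approach, assuming the irreversible Michaelis-Menten mechanism: the rate equation is integrated numerically and the parameters fitted to product-vs-time data (all values illustrative).

```python
# Sketch of full progress-curve fitting by numerical integration of the
# Michaelis-Menten rate equation, estimating Vmax and Km from one time course.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

S0 = 100.0  # initial substrate concentration (uM, illustrative)

def progress(t, vmax, km):
    rate = lambda _t, s: -vmax * s / (km + s)          # d[S]/dt
    sol = solve_ivp(rate, (0, t.max()), [S0], t_eval=t, rtol=1e-8)
    return S0 - sol.y[0]                               # product formed

t_obs = np.linspace(0, 60, 30)                         # minutes
p_obs = progress(t_obs, 5.0, 20.0) + np.random.default_rng(2).normal(0, 0.5, 30)

popt, _ = curve_fit(progress, t_obs, p_obs, p0=[3.0, 10.0])
print("Vmax, Km =", popt)
```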
Space-Based Observation Technology
2000-10-01
Conan, V. Michau, and S. Salem. Regularized multiframe myopic deconvolution from wavefront sensing. In Propagation through the Atmosphere III... specified false alarm rate PFA. Proceeding with curve fitting, one obtains a best-fit curve "10.1y14.2 - 0.2" as the detector for the target
Laurson, Kelly R; Saint-Maurice, Pedro F; Welk, Gregory J; Eisenmann, Joey C
2017-08-01
Laurson, KR, Saint-Maurice, PF, Welk, GJ, and Eisenmann, JC. Reference curves for field tests of musculoskeletal fitness in U.S. children and adolescents: The 2012 NHANES National Youth Fitness Survey. J Strength Cond Res 31(8): 2075-2082, 2017. The purpose of the study was to describe current levels of musculoskeletal fitness (MSF) in U.S. youth by creating nationally representative age-specific and sex-specific growth curves for handgrip strength (including relative and allometrically scaled handgrip), modified pull-ups, and the plank test. Participants in the National Youth Fitness Survey (n = 1,453) were tested on MSF, aerobic capacity (via submaximal treadmill test), and body composition (body mass index [BMI], waist circumference, and skinfolds). Using LMS regression, age-specific and sex-specific smoothed percentile curves of MSF were created, and existing percentiles were used to assign age-specific and sex-specific z-scores for aerobic capacity and body composition. Correlation matrices were created to assess the relationships between z-scores on MSF, aerobic capacity, and body composition. At younger ages (3-10 years), boys scored higher than girls for handgrip strength and modified pull-ups, but not for the plank. By ages 13-15, differences between the boys' and girls' curves were more pronounced, with boys scoring higher on all tests. Correlations between tests of MSF and aerobic capacity were positive and low-to-moderate in strength. Correlations between tests of MSF and body composition were negative, excluding absolute handgrip strength, which was inversely related to other MSF tests and aerobic capacity but positively associated with body composition. The growth curves herein can be used as normative reference values or a starting point for creating health-related criterion reference standards for these tests. Comparisons with prior national surveys of physical fitness indicate that some components of MSF have likely decreased in the United States over time.
Evan Brooks; Valerie Thomas; Wynne Randolph; John Coulston
2012-01-01
With the advent of free Landsat data stretching back decades, there has been a surge of interest in utilizing remotely sensed data in multitemporal analysis for estimation of biophysical parameters. Such analysis is confounded by cloud cover and other image-specific problems, which result in missing data at various aperiodic times of the year. While there is a wealth...
Białek, Marianna
2015-05-01
Physiotherapy for stabilization of the idiopathic scoliosis angle in growing children remains controversial. Notably, little data on the effectiveness of physiotherapy in children with Early Onset Idiopathic Scoliosis (EOIS) has been published. The aim of this study was to check the results of FITS physiotherapy in a group of children with EOIS. The charts of patients archived in a prospectively collected database were retrospectively reviewed. The inclusion criteria were: diagnosis of EOIS based on spine radiography, age below 10 years, both girls and boys, Cobb angle between 11° and 30°, Risser zero, FITS therapy, no other treatment (bracing), and a follow-up of at least 2 years from the initiation of treatment. The criteria for curve outcome were as follows: for progression, a Cobb angle increase of 6° or more; for stabilization, a Cobb angle within 5° of the initial radiograph; for correction, a Cobb angle decrease of 6° or more at the final follow-up radiograph. There were 41 children with EOIS, 36 girls and 5 boys, mean age 7.7±1.3 years (range 4 to 9 years), who started FITS therapy. The curve pattern was single thoracic (5 children), single thoracolumbar (22 children) or double thoracic/thoracolumbar (14 children), for a total of 55 structural curvatures. The minimum follow-up was 2 years after initiation of the FITS treatment, the maximum was 16 years (mean 4.8 years). At follow-up the mean age was 12.5±3.4 years. Of the 41 children, 10 had passed the pubertal growth spurt at the final follow-up and 31 were still immature and continued FITS therapy. Of the 41 children, 27 improved, 13 were stable, and one progressed. Of the 55 structural curves, 32 improved, 22 were stable and one progressed. For the 55 structural curves, the Cobb angle decreased significantly from 18.0°±5.4° at first assessment to 12.5°±6.3° at last evaluation (p<0.0001, paired t-test). The angle of trunk rotation decreased significantly from 4.7°±2.9° to 3.2°±2.5° at last evaluation (p<0.0001, paired t-test). FITS physiotherapy was effective in preventing curve progression in children with EOIS. Final postpubertal follow-up data are needed.
A Statistical Approach to Identify Superluminous Supernovae and Probe Their Diversity
NASA Astrophysics Data System (ADS)
Inserra, C.; Prajs, S.; Gutierrez, C. P.; Angus, C.; Smith, M.; Sullivan, M.
2018-02-01
We investigate the identification of hydrogen-poor superluminous supernovae (SLSNe I) using a photometric analysis, without imposing an arbitrary magnitude threshold. We assemble a homogeneous sample of previously classified SLSNe I from the literature, and fit their light curves using Gaussian processes. From the fits, we identify four photometric parameters that show statistically significant correlations, and combine them in a parameter space that conveys information on their luminosity and color evolution. This parameter space provides a new definition for SLSNe I, which can be used to analyze existing and future transient data sets. We find that 90% of previously classified SLSNe I meet our new definition. We also examine the evidence for two subclasses of SLSNe I, combining their photometric evolution with spectroscopic information, namely the photospheric velocity and its gradient. A cluster analysis reveals the presence of two distinct groups. "Fast" SLSNe show fast light-curve and color evolution, large expansion velocities, and a large velocity gradient. "Slow" SLSNe show slow light-curve and color evolution, small expansion velocities, and an almost non-existent velocity gradient. Finally, we discuss the impact of our analyses on the understanding of the powering engine of SLSNe, and their implementation as cosmological probes in current and future surveys.
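The Gaussian-process fitting step can be sketched with a generic GP regressor; the Matern kernel, photometry, and peak-epoch extraction below are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of Gaussian-process smoothing of a sparse supernova light curve, from
# which photometric parameters (e.g. peak epoch) can be read off the fit.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

t = np.array([-20, -10, -3, 0, 5, 12, 20, 35, 50, 70.0]).reshape(-1, 1)  # days
mag = np.array([-19.0, -20.5, -21.3, -21.5, -21.3, -20.8,
                -20.2, -19.3, -18.6, -17.9])   # absolute magnitudes (synthetic)

kernel = 1.0 * Matern(length_scale=20.0, nu=1.5) + WhiteKernel(0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, mag)

t_grid = np.linspace(-25, 80, 300).reshape(-1, 1)
mean, std = gp.predict(t_grid, return_std=True)
t_peak = t_grid[np.argmin(mean)]   # magnitudes: more negative = brighter
```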
Hu, Jiandong; Ma, Liuzheng; Wang, Shun; Yang, Jianming; Chang, Keke; Hu, Xinran; Sun, Xiaohui; Chen, Ruipeng; Jiang, Min; Zhu, Juanhua; Zhao, Yuanyuan
2015-01-01
Kinetic analyses of biomolecular interactions are widely used to quantify binding kinetic constants, determining how much of a complex is formed or dissociated within a given time span. Surface plasmon resonance biosensors provide an essential approach to the analysis of biomolecular interactions, including antigen-antibody and receptor-ligand interaction processes. The binding affinity of the antibody for the antigen (or the receptor for the ligand) reflects the biological activity of the control antibodies (or receptors) and the corresponding immune signal responses in the pathologic process. Moreover, both the association rate and the dissociation rate of the receptor to the ligand are substantial parameters for the study of signal transmission between cells. Experimental data may yield complicated real-time curves that do not fit the kinetic model well. This paper presents an analysis approach for biomolecular interactions based on the Marquardt algorithm. The algorithm was implemented in a homemade bioanalyzer to perform nonlinear curve fitting of the association and dissociation processes of the receptor to the ligand. Compared with results from the Newton iteration algorithm, the Marquardt algorithm not only reduces the dependence on the initial values, avoiding divergence, but also greatly reduces the number of iterative regressions. The association and dissociation rate constants, ka and kd, and the affinity parameters for the biomolecular interaction, KA and KD, were experimentally determined as 6.969×10⁵ mL·g⁻¹·s⁻¹, 0.00073 s⁻¹, 9.5466×10⁸ mL·g⁻¹ and 1.0475×10⁻⁹ g·mL⁻¹, respectively, from the injection of HBsAg solution at a concentration of 16 ng·mL⁻¹. The kinetic constants were evaluated distinctly using the data obtained from the curve-fitting results. PMID:26147997
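A sketch of Levenberg-Marquardt fitting of the standard 1:1 interaction model to an association-phase sensorgram (SciPy's method="lm"); the signal model, concentration, and noise level are illustrative stand-ins, not the bioanalyzer's actual implementation.

```python
# Sketch of nonlinear fitting of a 1:1 binding association phase with the
# Levenberg-Marquardt algorithm; KD = kd/ka follows from the fitted rates.
import numpy as np
from scipy.optimize import least_squares

C = 16e-9  # analyte concentration (g/mL, matching the units quoted above)

def association(t, ka, kd, rmax):
    kobs = ka * C + kd
    return rmax * ka * C / kobs * (1.0 - np.exp(-kobs * t))

t = np.linspace(0, 600, 200)                     # seconds
rng = np.random.default_rng(3)
r_obs = association(t, 7e5, 7e-4, 120.0) + rng.normal(0, 0.5, t.size)

def residuals(p):
    return association(t, *p) - r_obs

fit = least_squares(residuals, x0=[1e5, 1e-3, 100.0], method="lm")
ka, kd, rmax = fit.x
print(f"ka={ka:.3g}, kd={kd:.3g}, KD={kd/ka:.3g}")
```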
The effect of dimethylsulfoxide on the water transport response of rat hepatocytes during freezing.
Smith, D J; Schulte, M; Bischof, J C
1998-10-01
Successful improvement of cryopreservation protocols for cells in suspension requires knowledge of how such cells respond to the biophysical stresses of freezing (intracellular ice formation, water transport) while in the presence of a cryoprotective agent (CPA). This work investigates the biophysical water transport response in a clinically important cell type, isolated hepatocytes, during freezing in the presence of dimethylsulfoxide (DMSO). Sprague-Dawley rat liver hepatocytes were frozen in Williams E media supplemented with 0, 1, and 2 M DMSO, at rates of 5, 10, and 50 degrees C/min. The water transport was measured by cell volumetric changes as assessed by cryomicroscopy and image analysis. Assuming that water is the only species transported under these conditions, a water transport model of the form dV/dT = f(Lpg([CPA]), ELp([CPA]), T(t)) was curve-fit to the experimental data to obtain the biophysical parameters of water transport, the reference hydraulic permeability (Lpg) and the activation energy of water transport (ELp), for each DMSO concentration. These parameters were estimated two ways: (1) by curve-fitting the model to the average volume of the pooled cell data, and (2) by curve-fitting individual cell volume data and averaging the resulting parameters. The experimental data showed that less dehydration occurs during freezing at a given rate in the presence of DMSO at temperatures between 0 and -10 degrees C. However, dehydration was able to continue at lower temperatures (< -10 degrees C) in the presence of DMSO. The values of Lpg and ELp obtained using the individual cell volume data both decreased from their non-CPA values of 4.33 × 10⁻¹³ m³/(N·s) (2.69 μm/min-atm) and 317 kJ/mol (75.9 kcal/mol), respectively, to 0.873 × 10⁻¹³ m³/(N·s) (0.542 μm/min-atm) and 137 kJ/mol (32.8 kcal/mol), respectively, in 1 M DMSO, and 0.715 × 10⁻¹³ m³/(N·s) (0.444 μm/min-atm) and 107 kJ/mol (25.7 kcal/mol), respectively, in 2 M DMSO. The trends in the pooled volume values for Lpg and ELp were very similar, but the overall fit was considered worse than for the individual volume parameters. A unique way of presenting the curve-fitting results supports a clear trend of reduction of both biophysical parameters in the presence of DMSO, and no clear trend in cooling rate dependence of the biophysical parameters. In addition, these results suggest that close proximity of the experimental cell volume data to the equilibrium volume curve may significantly reduce the efficiency of the curve-fitting process.
NASA Technical Reports Server (NTRS)
Suttles, J. T.; Sullivan, E. M.; Margolis, S. B.
1974-01-01
Curve-fit formulas are presented for the stagnation-point radiative heating rate, cooling factor, and shock standoff distance for inviscid flow over blunt bodies at conditions corresponding to high-speed Earth entry. The data which were curve fitted were calculated using a technique which utilizes a one-strip integral method and a detailed nongray radiation model to generate a radiatively coupled flow-field solution for air in chemical and local thermodynamic equilibrium. The ranges of free-stream parameters considered were altitudes from about 55 to 70 km and velocities from about 11 to 16 km/s. Spherical bodies with nose radii from 30 to 450 cm and elliptical bodies with major-to-minor axis ratios of 2, 4, and 6 were treated. Power-law formulas are proposed, and a least-squares logarithmic fit is used to evaluate the constants. It is shown that the data can be described in this manner with an average deviation of about 3 percent (or less) and a maximum deviation of about 10 percent (or less). The curve-fit formulas provide an effective and economic means for making preliminary design studies for situations involving high-speed Earth entry.
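Taking logarithms makes the power-law fit linear in the unknown exponents, so the least-squares logarithmic fit reduces to one matrix solve. The sketch below uses synthetic heating-rate data with assumed exponents; the predictor variables stand in for the study's free-stream density, velocity, and nose radius.

```python
# Sketch of a least-squares logarithmic fit of a power law
# q = C * rho^a * v^b * Rn^c, linearized by taking logs of both sides.
import numpy as np

rng = np.random.default_rng(4)
rho = rng.uniform(1e-4, 1e-3, 50)     # free-stream density (arbitrary units)
vel = rng.uniform(11, 16, 50)         # entry velocity (km/s)
rn = rng.uniform(30, 450, 50)         # nose radius (cm)
q = 3.0e3 * rho**1.2 * vel**8.5 * rn**0.5 * rng.lognormal(0, 0.03, 50)

# log q = log C + a log rho + b log v + c log Rn  (linear in the unknowns)
A = np.column_stack([np.ones(50), np.log(rho), np.log(vel), np.log(rn)])
coef, *_ = np.linalg.lstsq(A, np.log(q), rcond=None)
logC, a, b, c = coef

pred = np.exp(A @ coef)
avg_dev = np.mean(np.abs(pred - q) / q)   # average relative deviation of the fit
```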
Mamen, Asgeir; Fredriksen, Per Morten
2018-05-01
As children's fitness continues to decline, frequent and systematic monitoring of fitness is important. Easy-to-use and low-cost methods with acceptable accuracy are essential in screening situations. This study aimed to investigate how measurements of body mass index (BMI), waist circumference (WC) and waist-to-height ratio (WHtR) relate to selected measurements of fitness in children. A total of 1731 children from grades 1 to 6 were selected who had a complete set of height, body mass, running performance, handgrip strength and muscle mass measurements. A composite fitness score was established from the sum of sex- and age-specific z-scores for running performance, handgrip strength and muscle mass. This fitness z-score was compared to z-scores and quartiles of BMI, WC and WHtR using analysis of variance, linear regression and receiver operating characteristic analysis. The regression analysis showed that z-scores for BMI, WC and WHtR were all linearly related to the composite fitness score, with WHtR having the highest R² at 0.80. The correct classification of fit and unfit was relatively high for all three measurements. WHtR gave the best prediction of fitness of the three, with an area under the curve of 0.92 (p < 0.001). BMI, WC and WHtR were all found to be feasible measurements, but WHtR had higher precision in its classification into fit and unfit in this population.
NASA Astrophysics Data System (ADS)
Metzger, Robert; Riper, Kenneth Van; Lasche, George
2017-09-01
A new method for analysis of uranium and radium in soils by gamma spectroscopy has been developed using VRF ("Visual RobFit") which, unlike traditional peak-search techniques, fits full-spectrum nuclide shapes with non-linear least-squares minimization of the chi-squared statistic. Gamma efficiency curves were developed for a 500 mL Marinelli beaker geometry as a function of soil density using MCNP. Collected spectra were then analyzed using the MCNP-generated efficiency curves and VRF to deconvolute the 90 keV peak complex of uranium and obtain 238U and 235U activities. 226Ra activity was determined either from the radon daughters if the equilibrium status is known, or directly from the deconvoluted 186 keV line. 228Ra values were determined from the 228Ac daughter activity. The method was validated by analysis of radium, thorium and uranium soil standards and by inter-comparison with other methods for radium in soils. The method allows for a rapid determination of whether a sample has been impacted by a man-made activity by comparison of the uranium and radium concentrations to those that would be expected from a natural equilibrium state.
Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian
2016-01-01
Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model, e.g. a Gaussian or other bell-shaped curve, to the measured responses to a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to reliably determine the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well-fitted, that the best model generally varies between neurons, and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without the need to fit any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features, such as preferred stimulus direction or attentional gain modulations, which are in agreement with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus-specific. Based on these proofs-of-concept, we conclude that our data-driven methods can reliably extract relevant tuning information from neuronal recordings, including cells whose seemingly haphazard response curves defy conventional fitting approaches. PMID:26785378
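A minimal sketch of the model-free idea for two example features, under assumed definitions (preferred direction as the rate-weighted circular mean, gain as peak-to-trough modulation); the paper's actual feature set may differ.

```python
# Sketch of model-free feature extraction from a measured tuning curve: no
# bell-shaped model is fitted; features come directly from the responses.
import numpy as np

directions = np.deg2rad(np.arange(0, 360, 30))   # 12 tested motion directions
rates = np.array([5, 8, 14, 25, 38, 30, 18, 9, 6, 4, 3, 4.0])  # spikes/s

# Rate-weighted circular mean gives a preferred direction estimate.
z = np.sum(rates * np.exp(1j * directions))
preferred_deg = np.degrees(np.angle(z)) % 360

gain = rates.max() - rates.min()                 # peak-to-trough modulation
```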
Analysis of ALTAIR 1998 Meteor Radar Data
NASA Technical Reports Server (NTRS)
Zinn, J.; Close, S.; Colestock, P. L.; MacDonell, A.; Loveland, R.
2011-01-01
We describe a new analysis of a set of 32 UHF meteor radar traces recorded with the 422 MHz ALTAIR radar facility in November 1998. Emphasis is on the velocity measurements, and on inferences that can be drawn from them regarding the meteor masses and mass densities. We find that the velocity vs. altitude data can be fitted as quadratic functions of the path integrals of the atmospheric densities vs. distance, and deceleration rates derived from those fits all show the expected behavior of increasing with decreasing altitude. We also describe a computer model of the coupled processes of collisional heating, radiative cooling, evaporative cooling and ablation, and deceleration, for meteors composed of defined mixtures of mineral constituents. For each of the cases in the data set we ran the model starting with the measured initial velocity and trajectory inclination, and with various trial values of the quantity mρs² (the initial mass times the mass density squared), and then compared the computed deceleration vs. altitude curves with the measured ones. In this way we arrived at best-fit values of mρs² for each of the measured meteor traces. Then, assuming various trial values of the density ρs, we compared the computed mass vs. altitude curves with similar curves for the same set of meteors determined previously from the measured radar cross sections and an electrostatic scattering model. In this way we arrived at estimates of the best-fit mass densities ρs for each of the cases.
Dust in the Small Magellanic Cloud
NASA Technical Reports Server (NTRS)
Rodrigues, C. V.; Coyne, G. V.; Magalhaes, A. M.
1995-01-01
We discuss simultaneous dust model fits to our extinction and polarization data for the Small Magellanic Cloud (SMC) using existing dust models. Dust model fits to the wavelength dependent polarization are possible for stars with small lambda(sub max). They generally imply size distributions which are narrower and have smaller average sizes compared to those in the Galaxy. The best fits for the extinction curves are obtained with a power law size distribution. The typical, monotonic SMC extinction curve can be well fit with graphite and silicate grains if a small fraction of the SMC carbon is locked up in the grains. Amorphous carbon and silicate grains also fit the data well.
NASA Astrophysics Data System (ADS)
Graur, Or; Zurek, David R.; Rest, Armin; Seitenzahl, Ivo R.; Shappee, Benjamin J.; Fisher, Robert; Guillochon, James; Shara, Michael M.; Riess, Adam G.
2018-06-01
The late-time light curves of Type Ia supernovae (SNe Ia), observed >900 days after explosion, present the possibility of a new diagnostic for SN Ia progenitor and explosion models. First, however, we must discover what physical process (or processes) leads to the slow-down of the light curve relative to a pure 56Co decay, as observed in SNe 2011fe, 2012cg, and 2014J. We present Hubble Space Telescope observations of SN 2015F, taken ≈600–1040 days past maximum light. Unlike those of the three other SNe Ia, the light curve of SN 2015F remains consistent with being powered solely by the radioactive decay of 56Co. We fit the light curves of these four SNe Ia in a consistent manner and measure possible correlations between the light-curve stretch (a proxy for the intrinsic luminosity of the SN) and the parameters of the physical model used in the fit. We propose a new, late-time Phillips-like correlation between the stretch of the SNe and the shape of their late-time light curves, which we parameterize as the difference between their pseudo-bolometric luminosities at 600 and 900 days: ΔL₉₀₀ = log(L₆₀₀/L₉₀₀). Our analysis is based on only four SNe, so a larger sample is required to test the validity of this correlation. If true, this model-independent correlation provides a new way to test which physical process lies behind the slow-down of SN Ia light curves >900 days after explosion, and, ultimately, fresh constraints on the various SN Ia progenitor and explosion models.
Shu-Jiang, Liu; Zhan-Ying, Chen; Yin-Zhong, Chang; Shi-Lian, Wang; Qi, Li; Yuan-Qing, Fan
2013-10-11
Multidimensional gas chromatography is widely applied to atmospheric xenon monitoring for the Comprehensive Nuclear-Test-Ban Treaty (CTBT). To improve the capability of xenon sampling from the atmosphere, sampling techniques have been investigated in detail. The sampling techniques are designed from xenon outflow curves, which are influenced by many factors; the injection condition is one of the key factors influencing the xenon outflow curves. In this paper, the xenon outflow curve for single-pulse injection in two-dimensional gas chromatography has been measured and fitted as a function of the exponentially modified Gaussian distribution. An inference formula for the xenon outflow curve for six-pulse injection is derived, and the inference formula is tested against the fitted formula of the measured xenon outflow curve. As a result, the curves of both the one-pulse and six-pulse injections obey the exponentially modified Gaussian distribution when the activated carbon column's temperature is 26°C and the carrier-gas flow rate is 35.6 mL·min⁻¹. The retention time of the xenon peak for one-pulse injection is 215 min, and the peak width is 138 min. For the six-pulse injection, however, the retention time is delayed to 255 min, and the peak width broadens to 222 min. According to the inferred formula for the six-pulse injection, the inferred retention time is 243 min (relative deviation 4.7%) and the inferred peak width is 225 min (relative deviation 1.3%). Copyright © 2013 Elsevier B.V. All rights reserved.
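A sketch of fitting an exponentially modified Gaussian to an outflow curve; the EMG parameterization below (area, centre, Gaussian width, exponential time constant) is the standard one, and the data are synthetic stand-ins for the measured elution profile.

```python
# Sketch of an exponentially modified Gaussian (EMG) fit to an elution curve,
# with the retention time read off as the peak of the fitted profile.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg(t, A, mu, sigma, tau):
    """EMG with area A, centre mu, Gaussian width sigma, time constant tau."""
    lam = 1.0 / tau
    return (A * lam / 2.0 * np.exp(lam / 2.0 * (2*mu + lam*sigma**2 - 2*t))
            * erfc((mu + lam*sigma**2 - t) / (np.sqrt(2) * sigma)))

t = np.linspace(0, 600, 400)                   # minutes
y = emg(t, 100.0, 190.0, 25.0, 60.0) + np.random.default_rng(5).normal(0, 0.01, t.size)

popt, _ = curve_fit(emg, t, y, p0=[80.0, 180.0, 20.0, 50.0])
peak_time = t[np.argmax(emg(t, *popt))]        # retention time of the fitted peak
```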
Zhai, Xuetong; Chakraborty, Dev P
2017-06-01
The objective was to design and implement a bivariate extension of the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets, possibly degenerate, with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension of the binormal model, implemented in the CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit-variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit-variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ₁; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition (contributing two peaks), and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language, yielding parameter estimates, the covariance matrix of the parameters, and other statistics. A limited simulation validation of the method was performed. CORCBM and CORROC2 were applied to two datasets containing nine readers each contributing paired interpretations. CORCBM successfully fitted the data for all readers, whereas CORROC2 failed to fit a degenerate dataset. All fits were visually reasonable. All CORCBM fits were proper, whereas all CORROC2 fits were improper. CORCBM and CORROC2 were in agreement (a) in declaring only one of the nine readers as having significantly different performance in the two modalities; (b) in estimating higher correlations for diseased cases than for nondiseased ones; and (c) in finding that the intermodality correlation estimates for nondiseased cases were consistent between the two methods. All CORCBM fits yielded higher area under the curve (AUC) than the CORROC2 fits, consistent with the fact that a proper ROC model like CORCBM is based on a likelihood-ratio-equivalent decision variable and consequently yields higher performance than the binormal-model-based CORROC2. The method gave satisfactory fits to four simulated datasets. CORCBM is a robust method for fitting paired ROC datasets, always yielding proper ROC curves, and able to fit degenerate datasets. © 2017 American Association of Physicists in Medicine.
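The single-condition CBM that CORCBM generalizes can be sketched directly from the definitions above; the closed-form AUC expression below follows from the unit-variance mixture and is a standard result for this model.

```python
# Sketch of a single-condition contaminated binormal model ROC curve:
# diseased ratings mix a visible peak at mu (weight alpha) with an
# invisible peak at 0; nondiseased ratings are N(0, 1).
import numpy as np
from scipy.stats import norm

def cbm_roc(mu, alpha, zeta):
    """Operating points swept over decision thresholds zeta."""
    fpf = norm.sf(zeta)                                      # nondiseased: N(0,1)
    tpf = (1 - alpha) * norm.sf(zeta) + alpha * norm.sf(zeta - mu)
    return fpf, tpf

mu, alpha = 2.5, 0.7
zeta = np.linspace(-5, 8, 400)
fpf, tpf = cbm_roc(mu, alpha, zeta)
auc = (1 - alpha) / 2 + alpha * norm.cdf(mu / np.sqrt(2))    # closed-form CBM AUC
```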
Milky Way Kinematics. II. A Uniform Inner Galaxy H I Terminal Velocity Curve
NASA Astrophysics Data System (ADS)
McClure-Griffiths, N. M.; Dickey, John M.
2016-11-01
Using atomic hydrogen (H I) data from the VLA Galactic Plane Survey, we measure the H I terminal velocity as a function of longitude for the first quadrant of the Milky Way. We use these data, together with our previous work on the fourth Galactic quadrant, to produce a densely sampled, uniformly measured rotation curve of the northern and southern Milky Way between 3 kpc < R < 8 kpc. We determine a new joint rotation curve fit for the first and fourth quadrants, which is consistent with the fit we published in McClure-Griffiths & Dickey and can be used for estimating kinematic distances interior to the solar circle. Structure in the rotation curves is now exquisitely well defined, showing significant velocity structure on lengths of ~200 pc, which is much greater than the spatial resolution of the rotation curve. Furthermore, the shape of the rotation curves for the first and fourth quadrants, even after subtraction of a circular rotation fit, shows a surprising degree of correlation with a roughly sinusoidal pattern between 4.2 < R < 7 kpc.
Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka
2016-01-01
Previously, we proposed a model for ordinal scale scoring in which the individual thresholds for each item constitute a distribution by item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and the boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.
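The curve-type comparison can be sketched as three fits scored by the coefficient of determination; the boundary-curve data below are hypothetical.

```python
# Sketch of comparing linear, quadratic, and exponential fits to a boundary
# curve by coefficient of determination (R^2). Data are synthetic.
import numpy as np

total_score = np.arange(5, 45)                        # total depressive symptom score
log_y = (np.log(2000 * np.exp(-0.12 * total_score))   # synthetic boundary frequencies
         + np.random.default_rng(8).normal(0, 0.05, total_score.size))
y = np.exp(log_y)

def r_squared(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

fits = {}
for name, deg in [("linear", 1), ("quadratic", 2)]:
    coef = np.polyfit(total_score, y, deg)
    fits[name] = r_squared(y, np.polyval(coef, total_score))
# "Exponential" fit = linear fit in log space, mapped back to the original scale.
coef = np.polyfit(total_score, log_y, 1)
fits["exponential"] = r_squared(y, np.exp(np.polyval(coef, total_score)))
print(fits)
```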
Fitting milk production curves through nonlinear mixed models.
Piccardi, Monica; Macchiavelli, Raúl; Funes, Ariel Capitaine; Bó, Gabriel A; Balzarini, Mónica
2017-05-01
The aim of this work was to fit and compare three non-linear models (Wood, MilkBot and diphasic) for lactation curves using two approaches: with and without a cow random effect. Knowing the behaviour of lactation curves is critical for decision-making on a dairy farm. Knowledge of how milk production progresses along each lactation is necessary not only at the mean population level (dairy farm), but also at the individual level (cow-lactation). The fits were made for a group of high-production, high-reproduction dairy farms, in first and third lactations in cool seasons. A total of 2167 complete lactations were involved, of which 984 were first lactations and the remainder third lactations (19,382 milk yield tests). PROC NLMIXED in SAS was used to make the fits and estimate the model parameters. The diphasic model proved computationally complex and barely practical. Regarding the classical Wood and MilkBot models, although the information criteria suggest selecting MilkBot, the differences in the estimation of production indicators did not show a significant improvement. The Wood model was found to be a good option for fitting the expected value of lactation curves. Furthermore, all three models fitted better when the subject (cow) random effect was considered, which is related to the magnitude of production. The random effect improved the predictive potential of the models, but it did not have a significant effect on the production indicators derived from the lactation curves, such as milk yield and days in milk to peak.
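The Wood curve itself is simple enough to sketch outside of SAS. Below is a minimal illustration in Python (not the authors' NLMIXED code); the days-in-milk and daily-yield values are invented for the example, and the peak location follows from setting the derivative of the Wood form to zero, giving t = b/c.

    import numpy as np
    from scipy.optimize import curve_fit

    def wood(dim, a, b, c):
        # Wood lactation curve: yield = a * t**b * exp(-c*t), t = days in milk
        return a * dim**b * np.exp(-c * dim)

    dim = np.array([10, 30, 60, 90, 150, 210, 270], dtype=float)
    milk = np.array([24.0, 31.0, 33.5, 32.0, 28.0, 24.0, 20.5])  # kg/day, illustrative

    (a, b, c), _ = curve_fit(wood, dim, milk, p0=(15.0, 0.2, 0.005))
    peak_dim = b / c                                  # days in milk at peak yield
    print(f"peak at {peak_dim:.0f} DIM, peak yield {wood(peak_dim, a, b, c):.1f} kg/day")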
Zou, Kelly H; Resnic, Frederic S; Talos, Ion-Florin; Goldberg-Zimring, Daniel; Bhagwat, Jui G; Haker, Steven J; Kikinis, Ron; Jolesz, Ferenc A; Ohno-Machado, Lucila
2005-10-01
Medical classification accuracy studies often yield continuous data based on predictive models for treatment outcomes. A popular method for evaluating the performance of diagnostic tests is receiver operating characteristic (ROC) curve analysis. The main objective was to develop a global statistical hypothesis test for assessing the goodness-of-fit (GOF) of parametric ROC curves via the bootstrap. A simple log (or logit) transformation and a more flexible Box-Cox normality transformation were applied to untransformed or transformed data from two clinical studies: predicting complications following percutaneous coronary interventions (PCIs), and predicting image-guided neurosurgical resection results from tumor volume. We compared a non-parametric with a parametric binormal estimate of the underlying ROC curve. To construct such a GOF test, we used the non-parametric and parametric areas under the curve (AUCs) as the metrics, with a resulting p value reported. In the interventional cardiology example, logit and Box-Cox transformations of the predictive probabilities led to satisfactory AUCs (AUC=0.888; p=0.78, and AUC=0.888; p=0.73, respectively), while in the brain tumor resection example, log and Box-Cox transformations of the tumor size also led to satisfactory AUCs (AUC=0.898; p=0.61, and AUC=0.899; p=0.42, respectively). In contrast, significant departures from GOF were observed without applying any transformation prior to assuming a binormal model (AUC=0.766; p=0.004, and AUC=0.831; p=0.03, respectively). In both studies the p values suggested that transformations are important to consider before applying any binormal model to estimate the AUC. Our analyses also demonstrated and confirmed the predictive values of different classifiers for determining interventional complications following PCIs and resection outcomes in image-guided neurosurgery.
NASA Astrophysics Data System (ADS)
Marra, Francesco; Morin, Efrat
2015-12-01
Intensity-Duration-Frequency (IDF) curves are widely used in flood risk management because they provide an easy link between the characteristics of a rainfall event and the probability of its occurrence. Weather radars provide distributed rainfall estimates with high spatial and temporal resolutions and overcome the limited representativeness of point-based rainfall for regions characterized by large gradients in rainfall climatology. This work explores the use of radar quantitative precipitation estimation (QPE) for the identification of IDF curves over a region with steep climatic transitions (Israel), using a unique radar data record (23 yr) and combined physical and empirical adjustment of the radar data. IDF relationships were derived by fitting a generalized extreme value distribution to the annual maximum series for durations of 20 min, 1 h and 4 h. Arid, semi-arid and Mediterranean climates were explored using 14 study cases. IDF curves derived from the study rain gauges were compared to those derived from radar and from nearby rain gauges characterized by similar climatology, taking into account the uncertainty linked with the fitting technique. Radar annual maxima and IDF curves were generally overestimated, but in 70% of the cases (60% for a 100 yr return period) they lay within the rain gauge IDF confidence intervals. Overestimation tended to increase with return period, and this effect was enhanced in arid climates. This was mainly associated with radar estimation uncertainty, even if other effects, such as rain gauge temporal resolution, cannot be neglected. Climatological classification remained meaningful for the analysis of rainfall extremes, and radar was able to discern climatology from rainfall frequency analysis.
Gaudion, Sarah L; Doma, Kenji; Sinclair, Wade; Banyard, Harry G; Woods, Carl T
2017-07-01
Gaudion, SL, Doma, K, Sinclair, W, Banyard, HG, and Woods, CT. Identifying the physical fitness, anthropometric and athletic movement qualities discriminant of developmental level in elite junior Australian football: implications for the development of talent. J Strength Cond Res 31(7): 1830-1839, 2017-This study aimed to identify the physical fitness, anthropometric and athletic movement qualities discriminant of developmental level in elite junior Australian football (AF). From a total of 77 players, 2 groups were defined according to their developmental level: under 16 (U16) (n = 40, 15.6 to 15.9 years) and under 18 (U18) (n = 37, 17.1 to 17.9 years). Players performed a test battery consisting of 7 physical fitness assessments, 2 anthropometric measurements, and a fundamental athletic movement assessment. A multivariate analysis of variance tested the main effect of developmental level (2 levels: U16 and U18) on the assessment criteria, whilst binary logistic regression models and receiver operating characteristic (ROC) curves were built to identify the qualities most discriminant of developmental level. A significant effect of developmental level was evident on 9 of the assessments (d = 0.27-0.88; p ≤ 0.05). However, it was a combination of body mass, dynamic vertical jump height (nondominant leg), repeat sprint time, and the score on the 20-m multistage fitness test that provided the greatest association with developmental level (Akaike's information criterion = 80.84). The ROC curve was maximized with a combined score of 180.7, successfully discriminating 89 and 60% of the U18 and U16 players, respectively (area under the curve = 79.3%). These results indicate that there are distinctive physical fitness and anthropometric qualities discriminant of developmental level within the junior AF talent pathway. Coaches should consider these differences when designing training interventions at the U16 level to assist with the development of prospective U18 AF players.
Asteroid (367943) 2012 DA14 Flyby Spin State Analysis
NASA Astrophysics Data System (ADS)
Benson, Conor; Scheeres, Daniel J.; Moskovitz, Nicholas
2017-10-01
On February 15, 2013, asteroid 2012 DA14 experienced an extremely close Earth encounter, passing within 27700 km altitude. This flyby gave observers the chance to directly detect flyby-induced changes to the asteroid’s spin state and physical properties. The strongest shape and spin state constraints were provided by Goldstone delay-Doppler radar and visible-wavelength photometry taken after closest approach. These data indicated a roughly 40 m x 20 m object in non-principal axis (NPA) rotation. NPA states are described by two fundamental periods: Pφ is the average precession period of the long/short axis about the angular momentum vector, and Pψ is the rotation period about the long/short axis. WindowCLEAN (Belton & Gandhi 1988) power spectrum analysis of the post-flyby light curve showed three prominent frequencies, two of which were 1:2 multiples of each other. Mueller et al. (2002) suggest peaks with this relationship are 1/Pφ and 2/Pφ, implying that Pφ = 6.35 hr. Likely values for Pψ were then 8.72, 13.95, or 23.39 hr. These Pφ, Pψ pairs yielded six candidate spin states in total, one long-axis mode (LAM) and one short-axis mode (SAM) per pair. Second- to fourth-order, two-dimensional Fourier series fits to the light curve were best for periods of 6.359 and 8.724 hr. The two other candidate pairs were also in the top ten fits. Inertia constraints of a roughly 2:1 uniform density ellipsoid eliminated two of the three SAM states. Using JPL Horizons ephemerides and Lambertian ellipsoids, simulated light curves were generated. The simulated and observed power spectra were then compared for all angular momentum poles and reasonable ellipsoid elongations. Only the Pφ = 6.359 hr and Pψ = 8.724 hr LAM state produced light curves consistent with the observed frequency structure. All other states were clearly incompatible. With two well-fitting poles found, phasing the initial attitude and angular velocity yielded plausible matches to the observed light curve. Neglecting gravitational torques, neither pole agreed with the observed pre-flyby light curve, suggesting that the asteroid’s spin state changed during the encounter, consistent with numerical simulation predictions. The consistency between the pre-flyby observations and simulated states will be discussed.
J. Chris Toney; Karen G. Schleeweis; Jennifer Dungan; Andrew Michaelis; Todd Schroeder; Gretchen G. Moisen
2015-01-01
The North American Forest Dynamics (NAFD) project's Attribution Team is completing nationwide processing of historic Landsat data to provide a comprehensive annual, wall-to-wall analysis of US disturbance history, with attribution, over the last 25+ years. Per-pixel time series analysis based on a new nonparametric curve fitting algorithm yields several metrics useful...
Bayesian inference in an item response theory model with a generalized student t link function
NASA Astrophysics Data System (ADS)
Azevedo, Caio L. N.; Migon, Helio S.
2012-10-01
In this paper we introduce a new item response theory (IRT) model with a generalized Student t link function with unknown degrees of freedom (df), named the generalized t-link (GtL) IRT model. In this model we consider only the difficulty parameter in the item response function. GtL is an alternative to the two-parameter logit and probit models, since the degrees of freedom play a role similar to that of the discrimination parameter. However, the behavior of the GtL curves differs from those of the two-parameter models and the usual Student t link, since in GtL the curves obtained for different df's can cross the probit curves at more than one latent trait level. The GtL model has properties similar to those of generalized linear mixed models, such as the existence of sufficient statistics and easy parameter interpretation. Also, many techniques of parameter estimation, model fit assessment and residual analysis developed for those models can be used for the GtL model. We develop fully Bayesian estimation and model fit assessment tools through a Metropolis-Hastings step within a Gibbs sampling algorithm. We consider a prior sensitivity analysis concerning the degrees of freedom. The simulation study indicates that the algorithm recovers all parameters properly. In addition, some Bayesian model fit assessment tools are considered. Finally, a real data set is analyzed using our approach and other usual models. The results indicate that our model fits the data better than the two-parameter models.
Díaz Alonso, Fernando; González Ferradás, Enrique; Sánchez Pérez, Juan Francisco; Miñana Aznar, Agustín; Ruiz Gimeno, José; Martínez Alonso, Jesús
2006-09-21
A number of models have been proposed to calculate overpressure and impulse from accidental industrial explosions. When the blast is produced by ignition of a vapour cloud, the TNO Multi-Energy model is widely used. From the curves given by this model, data are fitted to obtain equations showing the relationship between overpressure, impulse and distance. These equations, referred to herein as characteristic curves, can be fitted by means of power equations, which depend on the explosion energy and charge strength. Characteristic curves allow the determination of overpressure and impulse at each distance.
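As a rough illustration of the characteristic-curve idea, the snippet below fits a power equation to invented (distance, overpressure) pairs of the kind one might digitize from a Multi-Energy chart; the numbers are purely illustrative, not TNO data.

    import numpy as np
    from scipy.optimize import curve_fit

    def power_curve(r, a, b):
        # Characteristic-curve form: overpressure = a * r**b
        return a * r**b

    distance = np.array([50.0, 100.0, 200.0, 400.0, 800.0])   # m, illustrative
    overpressure = np.array([120.0, 42.0, 15.0, 5.2, 1.8])    # kPa, illustrative

    (a, b), _ = curve_fit(power_curve, distance, overpressure, p0=(1e5, -1.5))
    print(f"P(r) ~= {a:.3g} * r**{b:.2f}")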
Analysis on the vegetation phenology of tropical seasonal rain forest in South America
NASA Astrophysics Data System (ADS)
Liang, B.; Chen, X.
2016-12-01
Using Global Land Surface Satellite (GLASS) LAI data from 1982 to 2003, we analyzed spatial and temporal variations of vegetation phenology in the tropical seasonal rain forest of South America. Several methods were used to fit seasonal LAI curves and extract the start (SOS) and end (EOS) of the growing season. The results show that the Fourier function fits LAI curves most effectively, with yearly RMSEs for the differences between observed and fitted LAI values of less than 0.01. The SOS ranged from day 250 to 350 of the year and occurred earlier in the west than in the east. Conversely, the EOS fell between day 120 and 180 of the year and appeared earlier in the east than in the west. Thus, the growing season was longer in the west than in the east. With regard to linear trends, SOS shows a significant advancement at 7% of pixels and a significant delay at 13% of pixels, whereas EOS advanced significantly at 16% of pixels and was delayed significantly at 18% of pixels. Preseason precipitation is the main factor influencing SOS and EOS in the tropical seasonal rain forest of South America.
Non-linear Growth Models in Mplus and SAS
Grimm, Kevin J.; Ram, Nilam
2013-01-01
Non-linear growth curves or growth curves that follow a specified non-linear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
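The paper's examples use Mplus and SAS NLMIXED; as a language-neutral sketch of the same idea, the snippet below fits a Gompertz curve to invented longitudinal achievement scores with SciPy (fixed effects only, none of the mixed-effects machinery), so it illustrates the functional form rather than the authors' procedure.

    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, asym, disp, rate):
        # Gompertz growth: y(t) = A * exp(-b * exp(-c*t))
        return asym * np.exp(-disp * np.exp(-rate * t))

    age = np.array([4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])        # years, illustrative
    score = np.array([21.0, 35.0, 52.0, 66.0, 75.0, 80.0, 82.0])

    (asym, disp, rate), _ = curve_fit(gompertz, age, score, p0=(85.0, 5.0, 0.5))
    print(f"asymptote {asym:.1f}, displacement {disp:.2f}, rate {rate:.2f}")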
Revisiting the Estimation of Dinosaur Growth Rates
Myhrvold, Nathan P.
2013-01-01
Previous growth-rate studies covering 14 dinosaur taxa, as represented by 31 data sets, are critically examined and reanalyzed by using improved statistical techniques. The examination reveals that some previously reported results cannot be replicated by using the methods originally reported; results from new methods are in many cases different, in both the quantitative rates and the qualitative nature of the growth, from results in the prior literature. Asymptotic growth curves, which have been hypothesized to be ubiquitous, are shown to provide best fits for only four of the 14 taxa. Possible reasons for non-asymptotic growth patterns are discussed; they include systematic errors in the age-estimation process and, more likely, a bias toward younger ages among the specimens analyzed. Analysis of the data sets finds that only three taxa include specimens that could be considered skeletally mature (i.e., having attained 90% of maximum body size predicted by asymptotic curve fits), and eleven taxa are quite immature, with the largest specimen having attained less than 62% of predicted asymptotic size. The three taxa that include skeletally mature specimens are included in the four taxa that are best fit by asymptotic curves. The totality of results presented here suggests that previous estimates of both maximum dinosaur growth rates and maximum dinosaur sizes have little statistical support. Suggestions for future research are presented. PMID:24358133
NASA Astrophysics Data System (ADS)
Khondok, Piyoros; Sakulkalavek, Aparporn; Suwansukho, Kajpanya
2018-03-01
A simplified yet powerful set of image processing procedures to separate paddy of the KHAW DOK MALI 105 (Thai jasmine rice) variety from paddy of the RD6 sticky rice variety is proposed. The procedures consist of image thresholding, image chain coding and curve fitting using a polynomial function. From the fitting, three parameters of each variety (perimeter, area, and eccentricity) were calculated. Finally, the overall parameters were determined using principal component analysis. The results show that these procedures can effectively separate the two varieties.
A curve fitting method for solving the flutter equation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Cooper, J. L.
1972-01-01
A curve fitting approach was developed to solve the flutter equation for the critical flutter velocity. The psi versus nu curves are approximated by cubic and quadratic equations. The curve fitting technique utilized the first and second derivatives of psi with respect to nu. The method was tested for two structures, one structure being six times the total mass of the other structure. The algorithm never showed any tendency to diverge from the solution. The average time for the computation of a flutter velocity was 3.91 seconds on an IBM Model 50 computer for an accuracy of five per cent. For values of nu close to the critical root of the flutter equation the algorithm converged on the first attempt. The maximum number of iterations for convergence to the critical flutter velocity was five with an assumed value of nu relatively distant from the actual crossover.
Analysis of mixed model in gear transmission based on ADAMS
NASA Astrophysics Data System (ADS)
Li, Xiufeng; Wang, Yabin
2012-09-01
The traditional methods of mechanical gear driving simulation include the gear pair method and the solid-to-solid contact method. The former has higher solving efficiency but lower result accuracy; the latter usually obtains higher precision, but the calculation process is complex and does not converge easily. Currently, most research focuses on the description of geometric models and the definition of boundary conditions, but none of it solves these problems fundamentally. To improve simulation efficiency while ensuring high accuracy, a mixed model method is presented that uses gear tooth profiles in place of the solid gear to simulate gear movement. In the modeling process, the solid models of the mechanism are first built in SolidWorks; the point coordinates of the outline curves of the gear are then collected using the SolidWorks API, and fitted curves are created in Adams from those coordinates; next, the positions of the fitted curves are adjusted according to the position of the contact area; finally, the loading conditions, boundary conditions and simulation parameters are defined. The method provides gear shape information through tooth profile curves, simulates the meshing process through tooth-profile curve-to-curve contact, and supplies mass and inertia data via the solid gear models. The simulation process combines the two models to complete the gear driving analysis. To verify the validity of the presented method, both theoretical derivation and numerical simulation of a runaway escapement were conducted. The results show that the computational efficiency of the mixed model method is 1.4 times that of the traditional solid-to-solid contact method, while the simulation results agree more closely with theoretical calculations. Consequently, the mixed model method has high application value for the study of the dynamics of gear mechanisms.
76 FR 9696 - Equipment Price Forecasting in Energy Conservation Standards Analysis
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-22
... for particular efficiency design options, an empirical experience curve fit to the available data may be used to forecast future costs of such design option technologies. If a statistical evaluation indicates a low level of confidence in estimates of the design option cost trend, this method should not be...
ERIC Educational Resources Information Center
Krausert, Christopher R.; Ying, Di; Zhang, Yu; Jiang, Jack J.
2011-01-01
Purpose: Digital kymography and vocal fold curve fitting are blended with detailed symmetry analysis of kymograms to provide a comprehensive characterization of the vibratory properties of injured vocal folds. Method: Vocal fold vibration of 12 excised canine larynges was recorded under uninjured, unilaterally injured, and bilaterally injured…
A direct potential fitting RKR method: Semiclassical vs. quantal comparisons
NASA Astrophysics Data System (ADS)
Tellinghuisen, Joel
2016-12-01
Quantal and semiclassical (SC) eigenvalues are compared for three diatomic molecular potential curves: the X state of CO, the X state of Rb2, and the A state of I2. The comparisons show higher levels of agreement than generally recognized, when the SC calculations incorporate a quantum defect correction to the vibrational quantum number, in keeping with the Kaiser modification. One particular aspect of this is better agreement between quantal and SC estimates of the zero-point vibrational energy, supporting the need for the Y00 correction in this context. The pursuit of a direct-potential-fitting (DPF) RKR method is motivated by the notion that some of the limitations of RKR potentials may be innate, from their generation by an exact inversion of approximate quantities: the vibrational energy Gυ and rotational constant Bυ from least-squares analysis of spectroscopic data. In contrast, the DPF RKR method resembles the quantal DPF methods now increasingly used to analyze diatomic spectral data, but with the eigenvalues obtained from SC phase integrals. Application of this method to the analysis of 9500 assigned lines in the I2 A ← X spectrum fails to alter the quantal-SC disparities found for the A-state RKR curve from a previous analysis. On the other hand, the SC method can be much faster than the quantal method in exploratory work with different potential functions, where it is convenient to use finite-difference methods to evaluate the partial derivatives required in nonlinear fitting.
NASA Astrophysics Data System (ADS)
Madi, Raneem; Huibert de Rooij, Gerrit; Mielenz, Henrike; Mai, Juliane
2018-02-01
Few parametric expressions for the soil water retention curve are suitable for dry conditions. Furthermore, expressions for the soil hydraulic conductivity curves associated with parametric retention functions can behave unrealistically near saturation. We developed a general criterion for water retention parameterizations that ensures physically plausible conductivity curves. Only 3 of the 18 tested parameterizations met this criterion without restrictions on the parameters of a popular conductivity curve parameterization. A fourth required one parameter to be fixed. We estimated parameters by shuffled complex evolution (SCE) with the objective function tailored to various observation methods used to obtain retention curve data. We fitted the four parameterizations with physically plausible conductivities as well as the most widely used parameterization. The performance of the resulting 12 combinations of retention and conductivity curves was assessed in a numerical study with 751 days of semiarid atmospheric forcing applied to unvegetated, uniform, 1 m freely draining columns for four textures. Choosing different parameterizations had a minor effect on evaporation, but cumulative bottom fluxes varied by up to an order of magnitude between them. This highlights the need for a careful selection of the soil hydraulic parameterization that ideally does not only rely on goodness of fit to static soil water retention data but also on hydraulic conductivity measurements. Parameter fits for 21 soils showed that extrapolations into the dry range of the retention curve often became physically more realistic when the parameterization had a logarithmic dry branch, particularly in fine-textured soils where high residual water contents would otherwise be fitted.
Ernst, Dominique; Köhler, Jürgen
2013-01-21
We provide experimental results on the accuracy of diffusion coefficients obtained by a mean squared displacement (MSD) analysis of single-particle trajectories. We have recorded very long trajectories comprising more than 1.5 × 10^5 data points and decomposed these long trajectories into shorter segments, providing us with ensembles of trajectories of variable lengths. This enabled a statistical analysis of the resulting MSD curves as a function of the lengths of the segments. We find that the relative error of the diffusion coefficient can be minimized by taking an optimum number of points into account for fitting the MSD curves, and that this optimum does not depend on the segment length. Yet, the magnitude of the relative error for the diffusion coefficient does, and achieving an accuracy on the order of 10% requires the recording of trajectories with about 1000 data points. Finally, we compare our results with theoretical predictions and find very good qualitative and quantitative agreement between experiment and theory.
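A minimal sketch of the underlying procedure (ours, not the authors' code): simulate a Brownian trajectory, compute the MSD curve, and estimate D from a short linear fit; the trajectory length and the number of fitted points are illustrative.

    import numpy as np

    def msd(track, max_lag):
        # Mean squared displacement of a 2-D trajectory (N x 2 array)
        return np.array([np.mean(np.sum((track[lag:] - track[:-lag])**2, axis=1))
                         for lag in range(1, max_lag + 1)])

    rng = np.random.default_rng(0)
    dt, d_true = 0.05, 0.25                       # s, um^2/s (illustrative)
    steps = rng.normal(0.0, np.sqrt(2 * d_true * dt), size=(1000, 2))
    track = np.cumsum(steps, axis=0)

    n_fit = 4                                     # use only the first few MSD points
    lags = dt * np.arange(1, n_fit + 1)
    slope = np.polyfit(lags, msd(track, n_fit), 1)[0]
    print("D estimate:", slope / 4.0)             # MSD = 4*D*t in two dimensions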
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waszczak, Adam; Chang, Chan-Kao; Cheng, Yu-Chi
We fit 54,296 sparsely sampled asteroid light curves in the Palomar Transient Factory survey to a combined rotation plus phase-function model. Each light curve consists of 20 or more observations acquired in a single opposition. Using 805 asteroids in our sample that have reference periods in the literature, we find that the reliability of our fitted periods is a complicated function of the period, amplitude, apparent magnitude, and other light-curve attributes. Using the 805-asteroid ground-truth sample, we train an automated classifier to estimate (along with manual inspection) the validity of the remaining ∼53,000 fitted periods. By this method we find that 9033 of our light curves (of ∼8300 unique asteroids) have “reliable” periods. Subsequent consideration of asteroids with multiple light-curve fits indicates a 4% contamination in these “reliable” periods. For 3902 light curves with sufficient phase-angle coverage and either a reliable fit period or low amplitude, we examine the distribution of several phase-function parameters, none of which are bimodal though all correlate with the bond albedo and with visible-band colors. Comparing the theoretical maximal spin rate of a fluid body with our amplitude versus spin-rate distribution suggests that, if held together only by self-gravity, most asteroids are in general less dense than ∼2 g cm⁻³, while C types have a lower limit of between 1 and 2 g cm⁻³. These results are in agreement with previous density estimates. For 5–20 km diameters, S types rotate faster and have lower amplitudes than C types. If both populations share the same angular momentum, this may indicate the two types’ differing ability to deform under rotational stress. Lastly, we compare our absolute magnitudes (and apparent-magnitude residuals) to those of the Minor Planet Center’s nominal (G = 0.15, rotation-neglecting) model; our phase-function plus Fourier-series fitting reduces asteroid photometric rms scatter by a factor of ∼3.
Analysis of S2QA- charge recombination with the Arrhenius, Eyring and Marcus theories.
Rantamäki, Susanne; Tyystjärvi, Esa
2011-01-01
The Q band of photosynthetic thermoluminescence, measured in the presence of a herbicide that blocks electron transfer from PSII, is associated with recombination of the S2QA- charge pair. The same charge recombination reaction can be monitored with chlorophyll fluorescence. It has been shown that the recombination occurs via three competing routes, of which one produces luminescence. In the present study, we measured the thermoluminescence Q band and the decay of chlorophyll fluorescence yield after a single turnover flash at different temperatures from spinach thylakoids. The data were analyzed using the commonly used Arrhenius theory, the Eyring rate theory and the Marcus theory of electron transfer. The fitting error was minimized for both thermoluminescence and fluorescence by adjusting the global, phenomenological constants obtained when the reaction rate theories were applied to the multi-step recombination reaction. For chlorophyll fluorescence, all three theories give decent fits. The peak position of the thermoluminescence Q band is correct under all theories, but the form of the Q band differs somewhat between the curves predicted by the three theories. The Eyring and Marcus theories give good fits for the decreasing part of the thermoluminescence curve, and the Marcus theory gives the closest fit for the rising part.
Modeling two strains of disease via aggregate-level infectivity curves.
Romanescu, Razvan; Deardon, Rob
2016-04-01
Well-formulated models of disease spread, and efficient methods to fit them to observed data, are powerful tools for aiding the surveillance and control of infectious diseases. Our project considers the problem of the simultaneous spread of two related strains of disease in a context where spatial location is the key driver of disease spread. We start our modeling work with the individual-level models (ILMs) of disease transmission, and extend these models to accommodate the competing spread of the pathogens in a two-tier hierarchical population (whose levels we refer to as 'farm' and 'animal'). The postulated interference mechanism between the two strains is a period of cross-immunity following infection. We also present a framework for speeding up the computationally intensive process of fitting the ILM to data, typically done using Markov chain Monte Carlo (MCMC) in a Bayesian framework, by turning the inference into a two-stage process. First, we approximate the number of animals infected on a farm over time by infectivity curves. These curves are fit to data sampled from farms using maximum likelihood estimation; then, conditional on the fitted curves, Bayesian MCMC inference proceeds for the remaining parameters. Finally, we use posterior predictive distributions of salient epidemic summary statistics to assess the fitted model.
NASA Astrophysics Data System (ADS)
Ji, Zhong-Ye; Zhang, Xiao-Fang
2018-01-01
The mathematical relation between the beam quality β factor of a high-energy laser and the wavefront aberration of the laser beam is important in beam quality control theory for high-energy laser weapon systems. To obtain this relation, numerical simulation was used. First, Zernike representations of typical distorted atmospheric wavefront aberrations caused by Kolmogoroff turbulence are generated. Then, the corresponding beam quality β factors of the different distorted wavefronts are calculated numerically through fast Fourier transform. Thus, the statistical distribution rule relating the beam quality β factors of the high-energy laser to the wavefront aberrations of the beam can be established from the calculated results. Finally, curve fitting is used to establish the mathematical relationship between these two parameters. The result of the curve fitting shows that there is a quadratic relation between the beam quality β factor of a high-energy laser and the wavefront aberration of the laser beam. In this paper, three fitted curves, in which the wavefront aberrations are composed of Zernike polynomials of orders 20, 36 and 60 respectively, are established to express the relationship between the beam quality β factor and atmospheric wavefront aberrations with different spatial frequencies.
Bancalari, Elena; Bernini, Valentina; Bottari, Benedetta; Neviani, Erasmo; Gatti, Monica
2016-01-01
Impedance microbiology is a method that enables tracing microbial growth by measuring the change in electrical conductivity. Different systems able to perform this measurement are commercially available and are commonly used for food control analysis by means of measuring one point of the impedance curve, defined as the “time of detection.” With this work we wanted to find an objective way to interpret the metabolic significance of impedance curves and propose it as a valid approach to evaluate the potential acidifying performance of starter lactic acid bacteria to be employed in milk transformation. To do this, we first investigated the possibility of using the Gompertz equation to describe the data coming from the impedance curve obtained by means of the BacTrac 4300®. Lag time (λ), maximum specific M% rate (μmax), and maximum value of M% (Yend) were calculated and, given the similarity of the fitted impedance curve to the bacterial growth curve, their meaning was interpreted. The potential acidifying performance of eighty strains belonging to the Lactobacillus helveticus, Lactobacillus delbrueckii subsp. bulgaricus, Lactococcus lactis, and Streptococcus thermophilus species was evaluated using the kinetic parameters obtained from the Excel add-in DMFit version 2.1. The novelty and importance of our findings, obtained by means of the BacTrac 4300®, is that they can also be applied to data obtained from other devices. Moreover, the meanings of λ, μmax, and Yend that we have extrapolated from the modified Gompertz equation and discussed for lactic acid bacteria in milk can also be extended to other food environments or other bacteria, assuming that they yield a curve that can be properly fitted with the Gompertz equation. PMID:27799925
NASA Astrophysics Data System (ADS)
Li, Xin; Tang, Li; Lin, Hai-Nan
2017-05-01
We compare six models (including the baryonic model, two dark matter models, two modified Newtonian dynamics models and one modified gravity model) in accounting for galaxy rotation curves. For the dark matter models, we assume NFW profile and core-modified profile for the dark halo, respectively. For the modified Newtonian dynamics models, we discuss Milgrom’s MOND theory with two different interpolation functions, the standard and the simple interpolation functions. For the modified gravity, we focus on Moffat’s MSTG theory. We fit these models to the observed rotation curves of 9 high-surface brightness and 9 low-surface brightness galaxies. We apply the Bayesian Information Criterion and the Akaike Information Criterion to test the goodness-of-fit of each model. It is found that none of the six models can fit all the galaxy rotation curves well. Two galaxies can be best fitted by the baryonic model without involving nonluminous dark matter. MOND can fit the largest number of galaxies, and only one galaxy can be best fitted by the MSTG model. Core-modified model fits about half the LSB galaxies well, but no HSB galaxies, while the NFW model fits only a small fraction of HSB galaxies but no LSB galaxies. This may imply that the oversimplified NFW and core-modified profiles cannot model the postulated dark matter haloes well. Supported by Fundamental Research Funds for the Central Universities (106112016CDJCR301206), National Natural Science Fund of China (11305181, 11547305 and 11603005), and Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y5KF181CJ1)
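For reference, the two criteria used here take a simple form under Gaussian errors (up to an additive constant). A quick sketch, where chi2 is the fit's chi-squared, k the number of free parameters, and n the number of data points; the numbers in the example call are invented.

    import numpy as np

    def aic(chi2, k):
        # Akaike Information Criterion (Gaussian-likelihood form)
        return chi2 + 2 * k

    def bic(chi2, k, n):
        # Bayesian Information Criterion; penalty grows with sample size
        return chi2 + k * np.log(n)

    # illustrative comparison of two fits to the same 30-point rotation curve
    print(aic(45.2, 2), bic(45.2, 2, 30))   # e.g. a 2-parameter halo fit
    print(aic(58.9, 0), bic(58.9, 0, 30))   # e.g. a 0-extra-parameter MOND fit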
UNSUPERVISED TRANSIENT LIGHT CURVE ANALYSIS VIA HIERARCHICAL BAYESIAN INFERENCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanders, N. E.; Soderberg, A. M.; Betancourt, M., E-mail: nsanders@cfa.harvard.edu
2015-02-10
Historically, light curve studies of supernovae (SNe) and other transient classes have focused on individual objects with copious and high signal-to-noise observations. In the nascent era of wide field transient searches, objects with detailed observations are decreasing as a fraction of the overall known SN population, and this strategy sacrifices the majority of the information contained in the data about the underlying population of transients. A population level modeling approach, simultaneously fitting all available observations of objects in a transient sub-class of interest, fully mines the data to infer the properties of the population and avoids certain systematic biases. We present a novel hierarchical Bayesian statistical model for population level modeling of transient light curves, and discuss its implementation using an efficient Hamiltonian Monte Carlo technique. As a test case, we apply this model to the Type IIP SN sample from the Pan-STARRS1 Medium Deep Survey, consisting of 18,837 photometric observations of 76 SNe, corresponding to a joint posterior distribution with 9176 parameters under our model. Our hierarchical model fits provide improved constraints on light curve parameters relevant to the physical properties of their progenitor stars relative to modeling individual light curves alone. Moreover, we directly evaluate the probability for occurrence rates of unseen light curve characteristics from the model hyperparameters, addressing observational biases in survey methodology. We view this modeling framework as an unsupervised machine learning technique with the ability to maximize scientific returns from data to be collected by future wide field transient searches like LSST.
Dynamic Analysis of Sounding Rocket Pneumatic System Revision
NASA Technical Reports Server (NTRS)
Armen, Jerald
2010-01-01
The recent fusion of decades of advancements in mathematical models, numerical algorithms and curve fitting techniques marked the beginning of a new era in the science of simulation, which is becoming indispensable to the study of rockets and aerospace analysis. In the pneumatic system, which is the main focus of this paper, particular emphasis is placed on the effects of compressible flow in the attitude control system of a sounding rocket.
AKLSQF - LEAST SQUARES CURVE FITTING
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least square fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Sterling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fitting up to a 100 degree polynomial. All computations in the program are carried out under Double Precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
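AKLSQF itself is a Quick Basic program built on orthogonal factorial polynomials; the sketch below reproduces only its error-driven mode in Python with ordinary polynomial least squares, raising the degree until a user tolerance is met (an illustration, not the original algorithm).

    import numpy as np

    def lsq_fit_to_tolerance(x, y, tol, max_degree=100):
        # Raise the polynomial degree until the least squares
        # fit error drops below the user-specified tolerance.
        for degree in range(1, max_degree + 1):
            coeffs = np.polyfit(x, y, degree)
            rms = np.sqrt(np.mean((np.polyval(coeffs, x) - y)**2))
            if rms <= tol:
                break
        return coeffs, degree, rms

    x = np.linspace(0.0, 1.0, 50)        # uniformly spaced, as AKLSQF expects
    y = np.sin(2 * np.pi * x)
    coeffs, degree, rms = lsq_fit_to_tolerance(x, y, tol=1e-3)
    print(f"degree {degree}, rms error {rms:.2e}")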
Students' Models of Curve Fitting: A Models and Modeling Perspective
ERIC Educational Resources Information Center
Gupta, Shweta
2010-01-01
The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…
NASA Technical Reports Server (NTRS)
Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.
1991-01-01
The computer codes developed here provide self-consistent thermodynamic and transport properties for equilibrium air for temperatures from 500 to 30000 K over a pressure range of 10^-4 to 10^-2 atm. These properties are computed through the use of temperature-dependent curve fits for discrete values of pressure. Interpolation is employed for intermediate values of pressure. The curve fits are based on mixture values calculated from an 11-species air model. Individual species properties used in the mixture relations are obtained from a recent study by the present authors. A review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1260.
Zhang, Gang-Chun; Lin, Hong-Liang; Lin, Shan-Yang
2012-07-01
The cocrystal formation of indomethacin (IMC) and saccharin (SAC) by mechanical cogrinding or thermal treatment was investigated, and the formation mechanism and stability of IMC-SAC cocrystals prepared by the cogrinding process were explored. A typical IMC-SAC cocrystal was also prepared by the solvent evaporation method. All samples were identified and characterized using differential scanning calorimetry (DSC) and Fourier transform infrared (FTIR) microspectroscopy with curve-fitting analysis. The physical stability of different IMC-SAC ground mixtures before and after storage for 7 months was examined. Stepwise measurements carried out at specific intervals over a continuous cogrinding process showed continuous growth of cocrystal formation between IMC and SAC. The main IR spectral shifts from 3371 to 3347 cm(-1) and 1693 to 1682 cm(-1) for IMC, as well as from 3094 to 3136 cm(-1) and 1718 to 1735 cm(-1) for SAC, suggested that the OH and NH groups in both chemical structures took part in hydrogen bonding, leading to the formation of the IMC-SAC cocrystal. The melting at 184 °C for the 30-min IMC-SAC ground mixture was almost the same as that for the solvent-evaporated IMC-SAC cocrystal. The 30-min IMC-SAC ground mixture was also confirmed, by curve-fitting analysis of the IR spectra, to have components and contents similar to those of the solvent-evaporated IMC-SAC cocrystal. Thermally induced IMC-SAC cocrystal formation was also found to depend on the treatment temperature. Different IMC-SAC ground mixtures stored at 25 °C/40% RH for 7 months showed an increased tendency toward IMC-SAC cocrystallization.
Growthcurver: an R package for obtaining interpretable metrics from microbial growth curves.
Sprouffske, Kathleen; Wagner, Andreas
2016-04-19
Plate readers can measure the growth curves of many microbial strains in a high-throughput fashion. The hundreds of absorbance readings collected simultaneously for hundreds of samples create technical hurdles for data analysis. Growthcurver summarizes the growth characteristics of microbial growth curve experiments conducted in a plate reader. The data are fitted to a standard form of the logistic equation, and the parameters have clear interpretations on population-level characteristics, like doubling time, carrying capacity, and growth rate. Growthcurver is an easy-to-use R package available for installation from the Comprehensive R Archive Network (CRAN). The source code is available under the GNU General Public License and can be obtained from Github (Sprouffske K, Growthcurver sourcecode, 2016).
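Growthcurver is an R package; purely for illustration, the same logistic form and the derived metrics it reports can be sketched in Python as follows (synthetic optical-density data, invented parameter values).

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, k, n0, r):
        # Standard logistic: N(t) = K / (1 + ((K - N0)/N0) * exp(-r*t))
        return k / (1 + ((k - n0) / n0) * np.exp(-r * t))

    t = np.arange(0.0, 24.0, 0.5)                       # hours
    od = logistic(t, 1.2, 0.02, 0.45)                   # synthetic "plate reader" ODs
    od += np.random.default_rng(1).normal(0.0, 0.01, t.size)

    (k, n0, r), _ = curve_fit(logistic, t, od, p0=(1.0, 0.05, 0.3))
    print(f"carrying capacity {k:.2f}, growth rate {r:.2f}/h, "
          f"doubling time {np.log(2)/r:.2f} h")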
Study on peak shape fitting method in radon progeny measurement.
Yang, Jinmin; Zhang, Lei; Abdumomin, Kadir; Tang, Yushi; Guo, Qiuju
2015-11-01
Alpha spectrum measurement is one of the most important methods for measuring radon progeny concentration in the environment. However, the accuracy of this method is affected by peak tailing due to the energy losses of alpha particles. This article presents a peak shape fitting method that can overcome the peak tailing problem in most situations. On a typical measured alpha spectrum curve, consecutive peaks overlap even when their energies are not close to each other, and it is difficult to calculate the exact count of each peak. The peak shape fitting method uses a combination of Gaussian and exponential functions, which can depict the features of those peaks, to fit the measured curve. It provides the net counts of each peak explicitly, which are used in the Kerr calculation procedure for radon progeny concentration measurement. The results show that the fitted curve agrees well with the measured curve, and the influence of peak tailing is reduced. The method was further validated by the agreement between radon equilibrium equivalent concentrations based on this method and the measured values of some commercial radon monitors, such as the EQF3220 and WLx. In addition, this method improves the accuracy of individual radon progeny concentration measurements; in particular, after eliminating the influence of peak tailing, the calculated (218)Po concentration was reduced by 21 %.
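One common way to combine a Gaussian with an exponential tail is the exponentially modified Gaussian; the sketch below is our illustration, not necessarily the authors' exact function. It fits a sum of two such peaks to a synthetic spectrum and reads the net counts off the fitted areas. The 6.00 and 7.69 MeV energies correspond to the (218)Po and (214)Po alpha lines; all other numbers are invented.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    def alpha_peak(e, area, mu, sigma, tau):
        # Gaussian peak with an exponential low-energy tail (EMG form);
        # tau sets the tailing caused by partial energy loss.
        arg = (e - mu) / tau + sigma**2 / (2 * tau**2)
        return (area / (2 * tau)) * np.exp(arg) * erfc(
            ((e - mu) / sigma + sigma / tau) / np.sqrt(2))

    def two_peaks(e, a1, m1, a2, m2, s, t):
        return alpha_peak(e, a1, m1, s, t) + alpha_peak(e, a2, m2, s, t)

    e = np.linspace(5.0, 8.5, 700)                         # MeV grid
    spectrum = two_peaks(e, 900.0, 6.00, 600.0, 7.69, 0.03, 0.08)

    p, _ = curve_fit(two_peaks, e, spectrum,
                     p0=(800, 6.0, 500, 7.7, 0.05, 0.10),
                     bounds=([0, 5.5, 0, 7.2, 0.01, 0.02],
                             [1e4, 6.5, 1e4, 8.0, 0.10, 0.30]))
    print("net counts per peak:", p[0], p[2])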
[An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].
Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu
2016-04-01
The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. First, the first-order derivative of the original ECG signal is calculated, and the maximum and minimum points of each beat are obtained; these are treated as the positions of the fiducial points. The original ECG is then fed into a high-pass filter with a 1.5 Hz cutoff frequency. The difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. A cubic spline interpolation curve is then fitted through the fiducial points, and the fitted curve is the baseline drift curve. For the two simulated test cases, the correlation coefficients between the fitted curve from the presented algorithm and the simulated curve were increased by 0.242 and 0.13 compared with those from the traditional cubic spline interpolation algorithm. For the clinical baseline drift data, the average correlation coefficient from the presented algorithm reached 0.972.
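A compact sketch of the correction step (the fiducial-point detection from the derivative extrema is assumed done and passed in as sorted indices; the function name and sampling rate are illustrative, not from the paper):

    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import butter, filtfilt

    def baseline_drift(ecg, fiducial_idx, fs):
        # Fiducial amplitudes = original ECG minus its 1.5 Hz high-passed
        # version; a cubic spline through them estimates the drift curve.
        b, a = butter(2, 1.5 / (fs / 2.0), btype="highpass")
        diff = ecg - filtfilt(b, a, ecg)
        spline = CubicSpline(fiducial_idx, diff[fiducial_idx])
        return spline(np.arange(len(ecg)))

    # usage: corrected = ecg - baseline_drift(ecg, sorted_fiducial_idx, fs=360)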
NASA Astrophysics Data System (ADS)
Brewick, Patrick T.; Smyth, Andrew W.
2016-12-01
The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectra of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jennings, Elise; Wolf, Rachel; Sako, Masao
2016-11-09
Cosmological parameter estimation techniques that robustly account for systematic measurement uncertainties will be crucial for the next generation of cosmological surveys. We present a new analysis method, superABC, for obtaining cosmological constraints from Type Ia supernova (SN Ia) light curves using Approximate Bayesian Computation (ABC) without any likelihood assumptions. The ABC method works by using a forward model simulation of the data where systematic uncertainties can be simulated and marginalized over. A key feature of the method presented here is the use of two distinct metrics, the 'Tripp' and 'Light Curve' metrics, which allow us to compare the simulated data to the observed data set. The Tripp metric takes as input the parameters of models fit to each light curve with the SALT-II method, whereas the Light Curve metric uses the measured fluxes directly without model fitting. We apply the superABC sampler to a simulated data set of ~1000 SNe corresponding to the first season of the Dark Energy Survey Supernova Program. Varying Ωm, w0, α and β and a magnitude offset parameter, with no systematics we obtain Δ(w0) = w0(true) − w0(best fit) = −0.036 ± 0.109 (a ~11% 1σ uncertainty) using the Tripp metric and Δ(w0) = −0.055 ± 0.068 (a ~7% 1σ uncertainty) using the Light Curve metric. Including 1% calibration uncertainties in four passbands, adding 4 more parameters, we obtain Δ(w0) = −0.062 ± 0.132 (a ~14% 1σ uncertainty) using the Tripp metric. Overall we find a 17% increase in the uncertainty on w0 with systematics compared to without. We contrast this with an MCMC approach where systematic effects are approximately included. We find that the MCMC method slightly underestimates the impact of calibration uncertainties for this simulated data set.
NASA Astrophysics Data System (ADS)
Lai, Chia-Lin; Lee, Jhih-Shian; Chen, Jyh-Cheng
2015-02-01
Energy-mapping, the conversion of linear attenuation coefficients (μ) calculated at the effective computed tomography (CT) energy to those corresponding to 511 keV, is an important step in CT-based attenuation correction (CTAC) for positron emission tomography (PET) quantification. The aim of this study was to implement the energy-mapping step using the curve fitting ability of an artificial neural network (ANN). Eleven digital phantoms simulated by the Geant4 application for tomographic emission (GATE) and 12 physical phantoms composed of various volume concentrations of iodine contrast were used in this study to generate energy-mapping curves by acquiring the average CT values and linear attenuation coefficients at 511 keV of these phantoms. The curves were built with the ANN toolbox in MATLAB. To evaluate the effectiveness of the proposed method, another two digital phantoms (liver and spine-bone) and three physical phantoms (volume concentrations of 3%, 10% and 20%) were used to compare the energy-mapping curves built by the ANN and by bilinear transformation, and a semi-quantitative analysis was performed by injecting 0.5 mCi FDG into an SD rat for micro-PET scanning. The results showed that the percentage relative difference (PRD) values of the digital liver and spine-bone phantoms are 5.46% and 1.28% based on the ANN, versus 19.21% and 1.87% based on bilinear transformation. For the 3%, 10% and 20% physical phantoms, the PRD values of the ANN curve are 0.91%, 0.70% and 3.70%, and those of the bilinear transformation are 3.80%, 1.44% and 4.30%, respectively. Both digital and physical phantoms indicated that the ANN curve achieves better performance than bilinear transformation. The semi-quantitative analysis of the rat PET images showed that the ANN curve can reduce the inaccuracy caused by the attenuation effect from 13.75% to 4.43% in brain tissue, and from 23.26% to 9.41% in heart tissue. In contrast, the inaccuracy remained at 6.47% and 11.51% in brain and heart tissue when the bilinear transformation was used. Overall, it can be concluded that the bilinear transformation method resulted in considerable bias, while the newly proposed calibration curve built by the ANN achieves better results with acceptable accuracy.
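The study used the MATLAB ANN toolbox; a rough Python equivalent of the energy-mapping regression might look like the following, where the training pairs (mean CT number versus μ at 511 keV) are illustrative values of roughly the right magnitude for air, soft tissue and bone, not the paper's data.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical calibration pairs: CT number (HU) -> mu at 511 keV (1/cm)
    hu = np.array([-1000.0, -100.0, 0.0, 60.0, 300.0, 800.0, 1500.0])
    mu_511 = np.array([0.000, 0.087, 0.096, 0.101, 0.113, 0.135, 0.165])

    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=50000, random_state=0)
    net.fit(hu.reshape(-1, 1) / 1000.0, mu_511)       # crude input scaling
    print(net.predict(np.array([[120.0]]) / 1000.0))  # mu estimate for 120 HU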
NASA Astrophysics Data System (ADS)
Ye, L.; Xu, X.; Luan, D.; Jiang, W.; Kang, Z.
2017-07-01
Crater-detection approaches can be divided into four categories: manual recognition, shape-profile fitting algorithms, machine-learning methods and geological information-based analysis using terrain and spectral data. The mainstream approach is shape-profile fitting: many scholars use illumination gradient information to fit standard circles by the least squares method. Although this approach has achieved good results, it is difficult to identify craters with poor "visibility" or complex structure and composition, and the accuracy of recognition is difficult to improve because of multiple solutions and noise interference. To address this problem, we propose a method for the automatic extraction of impact craters based on the spectral characteristics of lunar rocks and minerals: 1) Under sunlight conditions, impact craters are extracted from MI by condition matching, and the positions and diameters of the craters are obtained. 2) Regolith is ejected when the lunar surface is impacted, and one of the elements of lunar regolith is iron; therefore, incorrectly extracted impact craters can be removed by judging whether a crater contains "non-iron" elements. 3) Correctly extracted craters are divided into two types, simple and complex, according to their diameters. 4) The titanium information is obtained, the titanium distribution of the complex craters is matched against a normal distribution curve, the goodness of fit is calculated and a threshold is set; the complex craters are thus divided into two types, those whose titanium distribution follows a normal curve and those whose distribution does not. We validated the proposed method with MI acquired by SELENE. Experimental results demonstrate that the proposed method performs well in the test area.
2011-01-01
Background: Conservative scoliosis therapy according to the FITS Concept is applied as a unique treatment or in combination with corrective bracing. The aim of the study was to present the authors' method of diagnosis and therapy for idiopathic scoliosis, FITS (Functional Individual Therapy of Scoliosis), and to analyze the early results of FITS therapy in a series of consecutive patients. Methods: The analysis comprised separately: (1) single structural thoracic, thoracolumbar or lumbar curves, and (2) double structural scoliosis (thoracic and thoracolumbar or lumbar curves). The Cobb angle and Risser sign were analyzed at the initial stage and at the 2.8-year follow-up. The percentages of patients improved (defined as a decrease of the Cobb angle of more than 5 degrees), stable (+/- 5 degrees), and progressed (an increase of the Cobb angle of more than 5 degrees) were calculated. The clinical assessment comprised: the initial and follow-up Angle of Trunk Rotation (ATR), the plumb line imbalance, the scapulae level, and the distance from the apical spinous process of the primary curve to the plumb line. Results: In Group A, (1) in single structural scoliosis 50.0% of patients improved, 46.2% were stable and 3.8% progressed, while (2) in double scoliosis 50.0% of patients improved, 30.8% were stable and 19.2% progressed. In Group B, (1) in single scoliosis 20.0% of patients improved, 80.0% were stable, and no patient progressed, while (2) in double scoliosis 28.1% of patients improved, 46.9% were stable and 25.0% progressed. Conclusion: The best results were obtained in 10-25 degree scoliosis, which is a good indication to start therapy before more structural changes within the spine become established. PMID:22122964
NASA Astrophysics Data System (ADS)
Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde
2006-03-01
European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However, this is time-consuming and subject to large inter-observer error. To overcome these problems, a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting its output is not defined. This study evaluates methods of determining threshold contrast from the program and compares these to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding, (B) psychometric curve fitting, (C) smoothing and interpolation, and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility, with a standard error in threshold contrast of 18.1 +/- 0.7%. This was reduced to 8.4% by using a contrast-detail curve-fitting procedure. Method (D) had the best reproducibility, with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of 3 human observers had an error of 4.4%, reduced to 2.9% by curve fitting. All automatic methods led to threshold contrasts that were lower than those for humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 +/- 0.04 (SEM) at 0.1 mm and 1.82 +/- 0.06 at 0.25 mm for method (D). There were good correlations between the threshold contrasts determined by humans and by the automated methods.
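For illustration, a sketch of method (B), psychometric curve fitting, is given below; the logistic form with a 25% guess rate and the contrast-detection data are assumptions for the sketch, not the CDCOM output.

```python
# Sketch: fit a psychometric function to detection probability vs disc contrast.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(c, c_t, s, f=0.25):
    """Detection probability vs contrast; f is the guess rate assumed for a
    4-alternative forced-choice reading, so c_t is the 62.5% point."""
    return f + (1.0 - f) / (1.0 + np.exp(-(c - c_t) / s))

contrast = np.array([0.05, 0.08, 0.13, 0.20, 0.32, 0.50])   # illustrative
p_detect = np.array([0.28, 0.35, 0.55, 0.78, 0.92, 0.99])

(c_t, s), _ = curve_fit(psychometric, contrast, p_detect, p0=[0.15, 0.05])
print("threshold contrast ~", c_t)
```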
Statistical analysis of landing contact conditions for three lifting body research vehicles
NASA Technical Reports Server (NTRS)
Larson, R. R.
1972-01-01
The landing contact conditions for the HL-10, M2-F2/F3, and the X-24A lifting body vehicles are analyzed statistically for 81 landings. The landing contact parameters analyzed are true airspeed, peak normal acceleration at the center of gravity, roll angle, and roll velocity. Ground measurement parameters analyzed are lateral and longitudinal distance from intended touchdown, lateral distance from touchdown to full stop, and rollout distance. The results are presented in the form of histograms for frequency distributions and cumulative frequency distribution probability curves with a Pearson Type 3 curve fit for extrapolation purposes.
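As a sketch of the distribution-fitting step, the snippet below fits a Pearson Type 3 distribution to synthetic touchdown airspeeds and reads off an exceedance probability for extrapolation; the numbers are invented, not the flight data.

```python
# Sketch: Pearson Type 3 fit to landing-contact data for extrapolation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
airspeed = rng.gamma(shape=9.0, scale=20.0, size=81) + 120.0   # 81 landings, knots

skew, loc, scale = stats.pearson3.fit(airspeed)
# Probability that touchdown true airspeed exceeds 350 knots:
p_exceed = stats.pearson3.sf(350.0, skew, loc=loc, scale=scale)
print(p_exceed)
```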
Hybrid Micro-Electro-Mechanical Tunable Filter
2007-09-01
[Fragmentary excerpt from the report: the developers used surface-micromachining techniques to build the micromirror structure over the CMOS addressing circuitry; the design flow specifies the DBRs, microcavity composition, initial air gap, contact layers and substrate, and either curve-fits measured dispersion data or generates a continuous, wavelength-dependent representation of material dispersion before manual design.]
Consideration of Wear Rates at High Velocities
2010-03-01
[Fragmentary front matter from the report (figure and symbol listings), including entries for a single-asperity wear-rate integral, third-stage slipper accumulated frictional heating, surface temperature and melt depth, and the A3S/B3S coefficients of the frictional-heat curve fit for the third-stage slipper.]
Analyser-based phase contrast image reconstruction using geometrical optics.
Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A
2007-07-21
Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 μm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(111) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser.
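A sketch of the rocking-curve fit is given below, using the standard symmetric Pearson VII form; the parameter values and synthetic data are illustrative assumptions, not the ELETTRA measurements.

```python
# Sketch: fit a symmetric Pearson type VII function to a rocking curve.
import numpy as np
from scipy.optimize import curve_fit

def pearson_vii(theta, i0, theta0, w, m):
    # w: half-width at half-maximum; m: shape exponent (m -> inf approaches a
    # Gaussian, m = 1 is Lorentzian), flexible enough for rocking curves.
    return i0 / (1.0 + ((theta - theta0) / w) ** 2 * (2 ** (1.0 / m) - 1.0)) ** m

theta = np.linspace(-20, 20, 81)                         # microradians
data = pearson_vii(theta, 1.0, 0.0, 5.0, 1.8)
data += 0.01 * np.random.default_rng(0).normal(size=theta.size)

popt, _ = curve_fit(pearson_vii, theta, data, p0=[1.0, 0.0, 4.0, 2.0])
slope = np.gradient(pearson_vii(theta, *popt), theta)    # used in phase retrieval
```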
ERIC Educational Resources Information Center
Savalei, Victoria
2012-01-01
The fit index root mean square error of approximation (RMSEA) is extremely popular in structural equation modeling. However, its behavior under different scenarios remains poorly understood. The present study generates continuous curves where possible to capture the full relationship between RMSEA and various "incidental parameters," such as…
ERIC Educational Resources Information Center
Campo, Antonio; Rodriguez, Franklin
1998-01-01
Presents two alternative computational procedures for solving the modified Bessel equation of zero order: the Frobenius method, and the power series method coupled with a curve fit. Students in heat transfer courses can benefit from these alternative procedures; a course on ordinary differential equations is the only mathematical background that…
Analysis of Wien filter spectra from Hall thruster plumes.
Huang, Wensheng; Shastry, Rohit
2015-07-01
A method for analyzing the Wien filter spectra obtained from the plumes of Hall thrusters is derived and presented. The new method extends upon prior work by deriving the integration equations for the current and species fractions. Wien filter spectra from the plume of the NASA-300M Hall thruster are analyzed with the presented method and the results are used to examine key trends. The new integration method is found to produce results slightly different from the traditional area-under-the-curve method. The use of different velocity distribution forms when performing curve-fits to the peaks in the spectra is compared. Additional comparison is made with the scenario where the current fractions are assumed to be proportional to the heights of peaks. The comparison suggests that the calculated current fractions are not sensitive to the choice of form as long as both the height and width of the peaks are accounted for. Conversely, forms that only account for the height of the peaks produce inaccurate results. Also presented are the equations for estimating the uncertainty associated with applying curve fits and charge-exchange corrections. These uncertainty equations can be used to plan the geometry of the experimental setup.
Verstraeten, B.; Sermeus, J.; Salenbien, R.; Fivez, J.; Shkerdin, G.; Glorieux, C.
2015-01-01
The underlying working principle of detecting impulsive stimulated scattering signals in a differential configuration of heterodyne diffraction detection is unraveled by involving optical scattering theory. The feasibility of the method for the thermoelastic characterization of coating-substrate systems is demonstrated on the basis of simulated data containing typical levels of noise. Besides the classical analysis of the photoacoustic part of the signals, which involves fitting surface acoustic wave dispersion curves, the photothermal part of the signals is analyzed by introducing thermal wave dispersion curves to represent and interpret their grating wavelength dependence. The intrinsic possibilities and limitations of both inverse problems are quantified by making use of least and most squares analysis. PMID:26236643
Using quasars as standard clocks for measuring cosmological redshift.
Dai, De-Chang; Starkman, Glenn D; Stojkovic, Branislav; Stojkovic, Dejan; Weltman, Amanda
2012-06-08
We report hitherto unnoticed patterns in quasar light curves. We characterize segments of the quasars' light curves by the slopes of straight lines fitted through them. These slopes appear to be directly related to the quasars' redshifts. Alternatively, using only global shifts in time and flux, we are able to find significant overlaps between the light curves of different pairs of quasars by fitting the ratio of their redshifts. We are then able to reliably determine the redshift of one quasar from another. This implies that one can use quasars as standard clocks, as we explicitly demonstrate by constructing two independent methods of finding the redshift of a quasar from its light curve.
AN ANALYSIS OF THE SHAPES OF INTERSTELLAR EXTINCTION CURVES. VI. THE NEAR-IR EXTINCTION LAW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fitzpatrick, E. L.; Massa, D.
We combine new observations from the Hubble Space Telescope's Advanced Camera for Surveys with existing data to investigate the wavelength dependence of near-IR (NIR) extinction. Previous studies suggest a power-law form for NIR extinction, with a 'universal' value of the exponent, although some recent observations indicate that significant sight line-to-sight line variability may exist. We show that a power-law model for the NIR extinction provides an excellent fit to most extinction curves, but that the value of the power, β, varies significantly from sight line to sight line. Therefore, it seems that a 'universal NIR extinction law' is not possible. Instead, we find that as β decreases, R(V) ≡ A(V)/E(B-V) tends to increase, suggesting that NIR extinction curves which have been considered 'peculiar' may, in fact, be typical for different R(V) values. We show that the power-law parameters can depend on the wavelength interval used to derive them, with β increasing as longer wavelengths are included. This result implies that extrapolating power-law fits to determine R(V) is unreliable. To avoid this problem, we adopt a different functional form for NIR extinction. This new form mimics a power law whose exponent increases with wavelength, has only two free parameters, can fit all of our curves over a longer wavelength baseline and to higher precision, and produces R(V) values which are consistent with independent estimates and commonly used methods for estimating R(V). Furthermore, unlike the power-law model, it gives R(V)s that are independent of the wavelength interval used to derive them. It also suggests that the relation R(V) = -1.36 E(K-V)/E(B-V) - 0.79 can estimate R(V) to ±0.12. Finally, we use model extinction curves to show that our extinction curves are in accord with theoretical expectations, and demonstrate how large samples of observational quantities can provide useful constraints on the grain properties.
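The power-law fit discussed above can be sketched as follows; the wavelengths are the usual J, H, K effective wavelengths, and the A(λ)/A(V) values are invented for illustration.

```python
# Sketch: power-law fit to a NIR extinction curve, A(lambda)/A(V) = k*lambda**(-beta).
import numpy as np
from scipy.optimize import curve_fit

def powerlaw(lam, k, beta):
    return k * lam ** (-beta)

lam = np.array([1.22, 1.63, 2.19])        # J, H, K effective wavelengths (um)
a_ratio = np.array([0.28, 0.18, 0.12])    # illustrative A(lambda)/A(V) values

(k, beta), _ = curve_fit(powerlaw, lam, a_ratio, p0=[0.4, 1.8])
print(beta)   # sight line-to-sight line variability shows up as varying beta
```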
NASA Astrophysics Data System (ADS)
He, Shiyuan; Wang, Lifan; Huang, Jianhua Z.
2018-04-01
With growing data from ongoing and future supernova surveys, it is possible to empirically quantify the shapes of SNIa light curves in more detail, and to quantitatively relate the shape parameters with the intrinsic properties of SNIa. Building such relationships is critical in controlling systematic errors associated with supernova cosmology. Based on a collection of well-observed SNIa samples accumulated in the past years, we construct an empirical SNIa light curve model using a statistical method called the functional principal component analysis (FPCA) for sparse and irregularly sampled functional data. Using this method, the entire light curve of an SNIa is represented by a linear combination of principal component functions, and the SNIa is represented by a few numbers called “principal component scores.” These scores are used to establish relations between light curve shapes and physical quantities such as intrinsic color, interstellar dust reddening, spectral line strength, and spectral classes. These relations allow for descriptions of some critical physical quantities based purely on light curve shape parameters. Our study shows that some important spectral feature information is being encoded in the broad band light curves; for instance, we find that the light curve shapes are correlated with the velocity and velocity gradient of the Si II λ6355 line. This is important for supernova surveys (e.g., LSST and WFIRST). Moreover, the FPCA light curve model is used to construct the entire light curve shape, which in turn is used in a functional linear form to adjust intrinsic luminosity when fitting distance models.
Surface fitting three-dimensional bodies
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.
1974-01-01
The geometry of general three-dimensional bodies is generated from coordinates of points in several cross sections. Since these points may not be smooth, they are divided into segments and general conic sections are curve fit in a least-squares sense to each segment of a cross section. The conic sections are then blended in the longitudinal direction by fitting parametric cubic-spline curves through coordinate points which define the conic sections in the cross-sectional planes. Both the cross-sectional and longitudinal curves may be modified by specifying particular segments as straight lines and slopes at selected points. Slopes may be continuous or discontinuous and finite or infinite. After a satisfactory surface fit has been obtained, cards may be punched with the data necessary to form a geometry subroutine package for use in other computer programs. At any position on the body, coordinates, slopes and second partial derivatives are calculated. The method is applied to a blunted 70 deg delta wing, and it was found to generate the geometry very well.
Focusing of light through turbid media by curve fitting optimization
NASA Astrophysics Data System (ADS)
Gong, Changmei; Wu, Tengfei; Liu, Jietao; Li, Huijuan; Shao, Xiaopeng; Zhang, Jianqi
2016-12-01
The construction of wavefront phase plays a critical role in focusing light through turbid media. We introduce the curve fitting algorithm (CFA) into the feedback control procedure for wavefront optimization. Unlike the existing continuous sequential algorithm (CSA), the CFA locates the optimal phase by fitting a curve to the measured signals. Simulation results show that, similar to the genetic algorithm (GA), the proposed CFA technique is far less susceptible to the experimental noise than the CSA. Furthermore, only three measurements of feedback signals are enough for CFA to fit the optimal phase while obtaining a higher focal intensity than the CSA and the GA, dramatically shortening the optimization time by a factor of 3 compared with the CSA and the GA. The proposed CFA approach can be applied to enhance the focus intensity and boost the focusing speed in the fields of biological imaging, particle trapping, laser therapy, and so on, and might help to focus light through dynamic turbid media.
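The three-measurement idea can be sketched in closed form if one assumes the feedback intensity varies cosinusoidally with the phase of a single SLM segment; the abstract does not specify the fitted functional form, so the cosine model below is our assumption.

```python
# Sketch: if I(phi) = A + B*cos(phi - phi0), three probe phases at
# 0, 2*pi/3 and 4*pi/3 determine the optimal phase phi0 in closed form.
import numpy as np

def optimal_phase(i1, i2, i3):
    """Feedback intensities measured at probe phases 0, 2*pi/3, 4*pi/3."""
    c = (2.0 * i1 - i2 - i3) / 3.0            # B*cos(phi0)
    s = (i2 - i3) / np.sqrt(3.0)              # B*sin(phi0)
    return np.arctan2(s, c) % (2.0 * np.pi)   # phase maximizing I(phi)

# Example with a true optimum at 1.0 rad:
phis = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
meas = 5.0 + 2.0 * np.cos(phis - 1.0)
print(optimal_phase(*meas))                   # ~1.0
```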
Robotic partial nephrectomy - Evaluation of the impact of case mix on the procedural learning curve.
Roman, A; Ahmed, K; Challacombe, B
2016-05-01
Although robotic partial nephrectomy (RPN) is an emerging technique for the management of small renal masses, the approach is technically demanding, and to date there are limited data on the nature and progression of the learning curve in RPN. Our aims were to analyse the impact of case mix on the RPN learning curve (LC) and to model that curve. The records of the first 100 RPNs performed at our institution by a single surgeon (B.C.) were analysed (June 2010-December 2013). Cases were split, based on their Preoperative Aspects and Dimensions Used for an Anatomical (PADUA) score, into the following groups: 6-7, 8-9 and >10. Using split-group (20 patients per group) and incremental analyses, the mean, the curve of best fit and R² values were calculated for each group. Of 100 patients (F: 28, M: 72), the mean age was 56.4 ± 11.9 years. The numbers of patients in the PADUA score groups 6-7, 8-9 and >10 were 61, 32 and 7, respectively. An increase in the incidence of more complex cases throughout the cohort was evident within the 8-9 group (2010: 1 case; 2013: 16 cases). The learning process did not significantly affect the proxies used to assess surgical proficiency in this study (operative time and warm ischaemia time). Case difficulty is an important parameter that should be considered when evaluating procedural learning curves. No single well-fitting model can be used to model the learning curve. With increasing experience, clinicians tend to operate on more difficult cases. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
A LIGHT CURVE ANALYSIS OF CLASSICAL NOVAE: FREE-FREE EMISSION VERSUS PHOTOSPHERIC EMISSION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hachisu, Izumi; Kato, Mariko, E-mail: hachisu@ea.c.u-tokyo.ac.jp, E-mail: mariko@educ.cc.keio.ac.jp
2015-01-10
We analyzed light curves of seven relatively slower novae, PW Vul, V705 Cas, GQ Mus, RR Pic, V5558 Sgr, HR Del, and V723 Cas, based on an optically thick wind theory of nova outbursts. For fast novae, free-free emission dominates the spectrum in optical bands rather than photospheric emission, and nova optical light curves follow the universal decline law. Faster novae blow stronger winds with larger mass-loss rates. Because the brightness of free-free emission depends directly on the wind mass-loss rate, faster novae show brighter optical maxima. In slower novae, however, we must take into account photospheric emission because of their lower wind mass-loss rates. We calculated three model light curves of free-free emission, photospheric emission, and their sum for various white dwarf (WD) masses with various chemical compositions of their envelopes, and fitted them reasonably with observational data in optical, near-IR (NIR), and UV bands. From light curve fittings of the seven novae, we estimated their absolute magnitudes, distances, and WD masses. In PW Vul and V705 Cas, free-free emission still dominates the spectrum in the optical and NIR bands. In the very slow novae, RR Pic, V5558 Sgr, HR Del, and V723 Cas, photospheric emission dominates the spectrum rather than free-free emission, which makes a deviation from the universal decline law. We have confirmed that the absolute brightnesses of our model light curves are consistent with the distance moduli of four classical novae with known distances (GK Per, V603 Aql, RR Pic, and DQ Her). We also discussed the reason why the very slow novae are about ∼1 mag brighter than the proposed maximum magnitude versus rate of decline relation.
ERIC Educational Resources Information Center
Ferrer, Emilio; Hamagami, Fumiaki; McArdle, John J.
2004-01-01
This article offers different examples of how to fit latent growth curve (LGC) models to longitudinal data using a variety of different software programs (i.e., LISREL, Mx, Mplus, AMOS, SAS). The article shows how the same model can be fitted using both structural equation modeling and multilevel software, with nearly identical results, even in…
ERIC Educational Resources Information Center
St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane
2009-01-01
To date, there have been no studies comparing parametric and nonparametric Item Characteristic Curve (ICC) estimation methods on the effectiveness of Person-Fit Statistics (PFS). The primary aim of this study was to determine if the use of ICCs estimated by nonparametric methods would increase the accuracy of item response theory-based PFS for…
Improvements in Spectrum's fit to program data tool.
Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John
2017-04-01
The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived either by fitting to seroprevalence surveillance and survey data or by generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections or people living with HIV, and AIDS-related deaths. This article describes the development and application of the fit to program data (FPD) tool in the Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV, or AIDS-related deaths. Inputs can be adjusted for the proportion undiagnosed or for misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best-fitting curve. Asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries, where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.
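As a sketch of the double-logistic idea, the snippet below fits a rising-times-falling logistic to an invented series of newly reported cases; the FPD tool's exact parameterization may differ, and the asymptotic-covariance step mirrors the uncertainty approach described above only loosely.

```python
# Sketch: fit a double-logistic incidence trend to program data.
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, peak, k1, t1, k2, t2):
    rise = 1.0 / (1.0 + np.exp(-k1 * (t - t1)))   # epidemic growth
    fall = 1.0 / (1.0 + np.exp(k2 * (t - t2)))    # subsequent decline
    return peak * rise * fall

years = np.arange(1990, 2016, dtype=float)
rng = np.random.default_rng(2)
cases = rng.poisson(double_logistic(years, 5000, 0.8, 1995.0, 0.3, 2005.0)).astype(float)

p0 = [cases.max(), 0.5, 1994.0, 0.2, 2004.0]
popt, pcov = curve_fit(double_logistic, years, cases, p0=p0)
perr = np.sqrt(np.diag(pcov))   # asymptotic standard errors of the fit
```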
Qin, Qin; Huang, Alan J; Hua, Jun; Desmond, John E; Stevens, Robert D; van Zijl, Peter C M
2014-02-01
Measurement of the cerebral blood flow (CBF) with whole-brain coverage is challenging in terms of both acquisition and quantitative analysis. In order to fit arterial spin labeling-based perfusion kinetic curves, an empirical three-parameter model which characterizes the effective impulse response function (IRF) is introduced, which allows the determination of CBF, the arterial transit time (ATT) and T(1,eff). The accuracy and precision of the proposed model were compared with those of more complicated models with four or five parameters through Monte Carlo simulations. Pseudo-continuous arterial spin labeling images were acquired on a clinical 3-T scanner in 10 normal volunteers using a three-dimensional multi-shot gradient and spin echo scheme at multiple post-labeling delays to sample the kinetic curves. Voxel-wise fitting was performed using the three-parameter model and other models that contain two, four or five unknown parameters. For the two-parameter model, T(1,eff) values close to tissue and blood were assumed separately. Standard statistical analysis was conducted to compare these fitting models in various brain regions. The fitted results indicated that: (i) the estimated CBF values using the two-parameter model show appreciable dependence on the assumed T(1,eff) values; (ii) the proposed three-parameter model achieves the optimal balance between the goodness of fit and model complexity when compared among the models with explicit IRF fitting; (iii) both the two-parameter model using fixed blood T1 values for T(1,eff) and the three-parameter model provide reasonable fitting results. Using the proposed three-parameter model, the estimated CBF (46 ± 14 mL/100 g/min) and ATT (1.4 ± 0.3 s) values averaged from different brain regions are close to the literature reports; the estimated T(1,eff) values (1.9 ± 0.4 s) are higher than the tissue T1 values, possibly reflecting a contribution from the microvascular arterial blood compartment. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Fu, W.; Gu, L.; Hoffman, F. M.
2013-12-01
The photosynthesis model of Farquhar, von Caemmerer & Berry (1980) is an important tool for predicting the response of plants to climate change. So far, the critical parameters required by the model have been obtained from leaf-level measurements of gas exchange, namely curves of net CO2 assimilation against intercellular CO2 concentration (A-Ci curves), made at saturating light. With such measurements, most points are likely in the Rubisco-limited state, for which the model is structurally overparameterized (the model is also overparameterized in the TPU-limited state). To reliably estimate photosynthetic parameters, there must be a sufficient number of points in the RuBP regeneration-limited state, which has no structural overparameterization. To improve the accuracy of A-Ci data analysis, we investigate the potential of using multiple A-Ci curves at subsaturating light intensities to generate some important parameter estimates more accurately. Using subsaturating light intensities allows more RuBP regeneration-limited points to be obtained. In this study, simulated examples are used to demonstrate how this method can eliminate the errors of conventional A-Ci curve-fitting methods. Some fitted parameters, like the photocompensation point and day respiration, impose a significant limitation on modeling leaf CO2 exchange. Fitting multiple A-Ci curves can also improve on the so-called Laisk (1977) method, which recent publications have shown to produce incorrect estimates of the photocompensation point and day respiration. We also test the approach with actual measurements, along with suggested measurement conditions that constrain measured A-Ci points to maximize the occurrence of RuBP regeneration-limited photosynthesis. Finally, we use our measured gas-exchange datasets to quantify the magnitude of the resistances of the chloroplast and cell wall-plasmalemma and explore the effect of variable mesophyll conductance. Variable mesophyll conductance takes into account the influence of CO2 from mitochondria, in contrast to the commonly used constant value of mesophyll conductance. We show that after considering this effect the other parameters of the photosynthesis model can be re-estimated. Our results indicate that variable mesophyll conductance has most effect on the estimation of the maximum electron transport rate (Jmax), but has a negligible impact on the estimated day respiration (Rd) and photocompensation point (<2%).
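A sketch of fitting the RuBP regeneration-limited branch of the model is shown below; the A-Ci points are invented to lie near the model, and treating all points as regeneration-limited is an assumption made for the sketch.

```python
# Sketch: fit the RuBP-regeneration-limited FvCB branch,
#   A_j = J*(Ci - Gamma*) / (4*Ci + 8*Gamma*) - Rd,
# to A-Ci points measured at subsaturating light.
import numpy as np
from scipy.optimize import curve_fit

def a_rubp(ci, j, gamma_star, rd):
    return j * (ci - gamma_star) / (4.0 * ci + 8.0 * gamma_star) - rd

ci = np.array([200., 300., 400., 600., 800., 1000.])     # ubar, illustrative
a  = np.array([10.4, 12.7, 14.0, 15.5, 16.3, 16.8])      # umol m-2 s-1

popt, _ = curve_fit(a_rubp, ci, a, p0=[80.0, 40.0, 1.0])
j, gamma_star, rd = popt   # electron transport rate, photocompensation point, Rd
```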
Photoelectric photometry of the RS CVn binary EI Eridani = HD 26337
NASA Technical Reports Server (NTRS)
Hooten, J. T.; Strassmeier, K. G.; Hall, D. S.; Barksdale, W. S., Jr.; Bertoglio, A.
1989-01-01
Differential UBV(RI)_KC and UBVRI photometry of the RS CVn binary EI Eridani obtained during December 1987 and January 1988 at fourteen different observatories is presented. A combined visual-bandpass light curve, corrected for systematic errors of the different observatories, utilizes the photometric period of 1.945 days to produce useful results. The analysis shows the visual light curve to have twin maxima, separated by about 0.4 in phase, and a full amplitude of approximately 0.06 mag for the period of observation, a smaller amplitude than reported in the past. The decrease in amplitude may be due to a decrease or homogenization of spot coverage. To fit the asymmetrical light curve, a starspot model would have to employ at least two spotted regions separated in longitude.
The ontogeny of tolerance curves: habitat quality vs. acclimation in a stressful environment.
Nougué, Odrade; Svendsen, Nils; Jabbour-Zahab, Roula; Lenormand, Thomas; Chevin, Luis-Miguel
2016-11-01
Stressful environments affect life-history components of fitness through (i) instantaneous detrimental effects, (ii) historical (carry-over) effects and (iii) history-by-environment interactions, including acclimation effects. The relative contributions of these different responses to environmental stress are likely to change along life, but such an ontogenetic perspective is often overlooked in studies of tolerance curves, precluding a better understanding of the causes of costs of acclimation, and more generally of fitness in temporally fine-grained environments. We performed an experiment in the brine shrimp Artemia to disentangle these different contributions to environmental tolerance, and to investigate how they unfold along life. We placed individuals from three clones of A. parthenogenetica over a range of salinities for a week, before transferring them to a (possibly) different salinity for the rest of their lives. We monitored individual survival at repeated intervals throughout life, instead of measuring survival or performance at a given point in time, as commonly done in acclimation experiments. We then designed a modified survival analysis model to estimate phase-specific hazard rates, accounting for the fact that individuals may share the same treatment for only part of their lives. Our approach allowed us to distinguish effects of salinity on (i) instantaneous mortality in each phase (habitat quality effects), (ii) mortality later in life (history effects) and (iii) their interaction. We showed clear effects of early salinity on late survival, and interactions between the effects of past and current environments on survival. Importantly, analysis of the ontogenetic dynamics of the tolerance curve reveals that acclimation affects different parts of the curve at different ages. Adopting a dynamical view of the ontogeny of the tolerance curve should prove useful for understanding niche limits in temporally changing environments, where the full sequence of environments experienced by an individual determines its overall environmental tolerance, and how it changes throughout life. © 2016 The Authors. Journal of Animal Ecology © 2016 British Ecological Society.
NASA Astrophysics Data System (ADS)
Witek, M.; van der Lee, S.; Kang, T. S.; Chang, S. J.; Ning, J.; Ning, S.
2017-12-01
We have measured Rayleigh wave group velocity dispersion curves from one year of station-pair cross-correlations of continuous vertical-component broadband data from 1082 seismic stations in regional networks across China, Korea, Taiwan, and Japan for the year 2011. We use the measurements to map local dispersion anomalies for periods in the range 6-40 s. We combined our ambient noise data set with the earthquake group velocity data set of Ma et al. (2014), and then applied agglomerative hierarchical clustering to the localized dispersion curves. We find that the dispersion curves naturally organize themselves into distinct tectonic regions. For our distribution of interstation distances, only 8 distinct regions need to be defined. Additional clusters reduce the overall data misfit by increasingly smaller amounts. The size and number of clusters needed to suitably predict the data may give an indication of the resolving power of the data set. The regions that emerge from the cluster analysis include Tibet, the Sea of Japan, the South China Block and the Korean peninsula, the Ordos and Yangtze cratons, and Mesozoic rift basins such as the Songliao, Bohai Bay and Ulleung basins. We also performed a traditional inversion for 3D S-velocity structure, and the resulting model fits the data as well as the 8-cluster model, while both models fit the earthquake data and ambient noise data better than the LITHO1.0 model of Pasyanos et al. (2014). Our 3D model of the crust and upper mantle confirms many of the features seen in previous studies of the region, most notably the lithospheric thinning going from west to east and low velocity zones in the crust on the Tibetan periphery. We conclude that cluster analysis is able to greatly reduce the dimensionality of surface wave dispersion data, in the sense that a data set of over half a million dispersion curves is sufficiently predicted by appropriately averaging over a relatively small set of distinct tectonic regions. The resulting clustered model objectively quantifies the more intuitive ways in which we usually tend to interpret tomographic models.
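The clustering step can be sketched with scikit-learn on synthetic dispersion curves (two invented "regions"; real curves would first be sampled onto a common period grid):

```python
# Sketch: agglomerative clustering of local group-velocity dispersion curves,
# one row per grid cell, sampled at common periods.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(3)
periods = np.linspace(6, 40, 18)                # s
craton = 3.0 + 0.02 * periods                   # fast, flat synthetic curves
basin = 2.2 + 0.04 * periods                    # slower synthetic curves
curves = np.vstack([craton + 0.05 * rng.normal(size=(50, 18)),
                    basin + 0.05 * rng.normal(size=(50, 18))])

labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(curves)
# Each cluster's mean curve is the regionalized dispersion curve; adding
# clusters reduces the overall misfit by increasingly smaller amounts.
mean_curves = [curves[labels == k].mean(axis=0) for k in range(2)]
```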
The training and learning process of transseptal puncture using a modified technique.
Yao, Yan; Ding, Ligang; Chen, Wensheng; Guo, Jun; Bao, Jingru; Shi, Rui; Huang, Wen; Zhang, Shu; Wong, Tom
2013-12-01
As transseptal (TS) puncture has become an integral part of many types of cardiac interventional procedures, its technique, initially reported for the measurement of left atrial pressure in the 1950s, continues to evolve. Our laboratory adopted a modified technique which uses only a coronary sinus catheter as the landmark for accomplishing TS punctures under fluoroscopy. The aim of this study was to prospectively evaluate the training and learning process for TS puncture guided by this modified technique. Guided by the training protocol, TS puncture was performed in 120 consecutive patients by three trainees without previous personal experience in TS catheterization, with one experienced trainer as a controller. We analysed the following parameters: one-puncture success rate, total procedure time, fluoroscopic time, and radiation dose. The learning curve was analysed using curve-fitting methodology. The first attempt at TS crossing was successful in 74 patients (82%), a second attempt was successful in 11 (12%), and puncture of the interatrial septum ultimately failed in 5 patients. The average starting process time was 4.1 ± 0.8 min, and the estimated mean learning plateau was 1.2 ± 0.2 min. The estimated mean learning rate for process time was 25 ± 3 cases. Important aspects of the learning curve can be estimated by fitting inverse curves for TS puncture. The study demonstrated that this technique is a simple, safe, economical, and effective approach for learning TS puncture. Based on the statistical analysis, approximately 29 TS punctures will be needed for a trainee to pass the steepest part of the learning curve.
NASA Technical Reports Server (NTRS)
Rodriguez, Pedro I.
1986-01-01
A computer implementation of Prony's method of curve fitting by exponential functions is presented. The method, although more than one hundred years old, has not been utilized to its fullest capabilities because of the restriction that the time range must be given in equal increments in order to obtain the best curve fit for a given set of data. The procedure used in this paper utilizes the 3-dimensional capabilities of the Interactive Graphics Design System (I.G.D.S.) to obtain the equal time increments. The resultant information is then input into a computer program that solves directly for the exponential constants yielding the best curve fit. Once the exponential constants are known, a simple least-squares solution can be applied to obtain the final form of the equation.
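A compact sketch of the classical Prony procedure on equally spaced samples (the equal-spacing requirement the report satisfies via the graphics system) might look like the following; it is a generic textbook implementation, not the report's program.

```python
# Sketch of classical Prony fitting: y(t) = sum_i a_i * exp(s_i * t),
# sampled at equal increments dt.
import numpy as np

def prony(y, dt, p):
    """Fit p exponential terms to equally spaced samples y."""
    n = len(y)
    # 1) Linear prediction: y[k] = -c1*y[k-1] - ... - cp*y[k-p]
    a_mat = np.column_stack([y[p - 1 - j : n - 1 - j] for j in range(p)])
    coef, *_ = np.linalg.lstsq(a_mat, -y[p:], rcond=None)
    # 2) Roots of the characteristic polynomial give z_i = exp(s_i * dt)
    z = np.roots(np.concatenate(([1.0], coef)))
    s = np.log(z.astype(complex)) / dt
    # 3) Amplitudes from the Vandermonde system by linear least squares
    v = np.power.outer(z, np.arange(n)).T
    amp, *_ = np.linalg.lstsq(v, y.astype(complex), rcond=None)
    return amp, s

t = np.linspace(0.0, 2.0, 41)
y = 3.0 * np.exp(-1.5 * t) + 1.0 * np.exp(-0.2 * t)
amp, s = prony(y, t[1] - t[0], p=2)   # recovers rates ~ -1.5 and ~ -0.2
```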
Formulation of the Multi-Hit Model With a Non-Poisson Distribution of Hits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vassiliev, Oleg N., E-mail: Oleg.Vassiliev@albertahealthservices.ca
2012-07-15
Purpose: We proposed a formulation of the multi-hit single-target model in which the Poisson distribution of hits was replaced by a combination of two distributions: one for the number of particles entering the target and one for the number of hits a particle entering the target produces. Such an approach reflects the fact that radiation damage is a result of two different random processes: particle emission by a radiation source and interaction of particles with matter inside the target. Methods and Materials: The Poisson distribution is well justified for the first of the two processes. The second distribution depends on how a hit is defined. To test our approach, we assumed that the second distribution was also a Poisson distribution. The two distributions combined resulted in a non-Poisson distribution. We tested the proposed model by comparing it with previously reported data for DNA single- and double-strand breaks induced by protons and electrons, for survival of a range of cell lines, and for variation of the initial slopes of survival curves with radiation quality for heavy-ion beams. Results: Analysis of cell survival equations for this new model showed that they had realistic properties overall, such as the initial and high-dose slopes of survival curves, the shoulder, and relative biological effectiveness (RBE). In most cases tested, a better fit of survival curves was achieved with the new model than with the linear-quadratic model. The results also suggested that the proposed approach may extend the multi-hit model beyond its traditional role in analysis of survival curves to predicting effects of radiation quality and analysis of DNA strand breaks. Conclusions: Our model, although conceptually simple, performed well in all tests. The model was able to consistently fit data for both cell survival and DNA single- and double-strand breaks. It correctly predicted the dependence of radiation effects on parameters of radiation quality.
Development of a program to fit data to a new logistic model for microbial growth.
Fujikawa, Hiroshi; Kano, Yoshihiro
2009-06-01
Recently we developed a mathematical model for microbial growth in food. The model successfully predicted microbial growth under various temperature patterns. In this study, we developed a program to fit data to the model with a spreadsheet program, Microsoft Excel. Users can instantly obtain curves fitted to the model by inputting growth data and choosing the slope portion of a curve. The program can also estimate growth parameters, including the rate constant of growth and the lag period. This program would be a useful tool for analyzing growth data and further predicting microbial growth.
Evidence of the Exponential Decay Emission in the Swift Gamma-ray Bursts
NASA Technical Reports Server (NTRS)
Sakamoto, T.; Sato, G.; Hill, J.E.; Krimm, H.A.; Yamazaki, R.; Takami, K.; Swindell, S.; Osborne, J.P.
2007-01-01
We present a systematic study of the steep decay emission of gamma-ray bursts (GRBs) observed by the Swift X-Ray Telescope (XRT). In contrast to the analysis in recent literature, instead of extrapolating the data of Burst Alert Telescope (BAT) down into the XRT energy range, we extrapolated the XRT data up to the BAT energy range, 15-25 keV, to produce the BAT and XRT composite light curve. Based on our composite light curve fitting, we have confirmed the existence of an exponential decay component which smoothly connects the BAT prompt data to the XRT steep decay for several GRBs. We also find that the XRT steep decay for some of the bursts can be well fitted by a combination of a power-law with an exponential decay model. We discuss that this exponential component may be the emission from an external shock and a sign of the deceleration of the outflow during the prompt phase.
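The power-law-plus-exponential fit mentioned above can be sketched as follows; the functional form follows the abstract, while the synthetic fluxes and parameter values are invented.

```python
# Sketch: fit the XRT steep-decay flux with an exponential plus a power law.
import numpy as np
from scipy.optimize import curve_fit

def steep_decay(t, f0, tau, a, alpha):
    return f0 * np.exp(-t / tau) + a * t ** (-alpha)

t = np.logspace(1.8, 3.5, 40)                       # s since BAT trigger
flux = steep_decay(t, 50.0, 120.0, 30.0, 1.2)       # synthetic light curve
flux *= 1 + 0.05 * np.random.default_rng(4).normal(size=t.size)

popt, _ = curve_fit(steep_decay, t, flux, p0=[30.0, 100.0, 10.0, 1.0],
                    maxfev=10000)
f0, tau, a, alpha = popt   # exponential normalization/timescale, PL amplitude/index
```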
In-pile electrochemical measurements on AISI 316 L(N) IG and EUROFER 97 I: experimental results
NASA Astrophysics Data System (ADS)
Vankeerberghen, Marc; Bosch, Rik-Wouter; Van Nieuwenhoven, Rudi
2003-02-01
In-pile electrochemical measurements were performed in order to investigate the effect of radiation on the electrochemical corrosion behaviour of two materials: reduced activation ferritic-martensitic steel EUROFER 97 and stainless steel AISI 316 L(N) IG. The corrosion potential was continuously monitored during the whole irradiation period. At regular intervals and under various flux levels, polarisation resistance measurements and electrochemical impedance spectroscopy were performed. Polarisation curves were recorded at the end of the reactor cycle. Analysis showed that the corrosion potential increased and the polarisation resistance decreased with the flux level. The impedance data showed two semi-circles in the Nyquist diagram which contracted with increasing flux level. A fit of the impedance data yielded a decrease of solution and polarisation resistances with the flux level. The polarisation curves could be fitted with a standard Butler-Volmer representation after correction for the solution resistance and showed an increase in the corrosion current density with the flux level.
Ardura, J; Andrés, J; Aldana, J; Revilla, M A; Cornélissen, G; Halberg, F
1997-09-01
Lighting, noise and temperature were monitored in two perinatal nurseries. Rhythms of several frequencies were found, including prominent 24-hour rhythms with acrophases around 13:00 (light intensity) and 16:00 (noise). For light and noise, the ratio formed by dividing the amplitude of a 1-week (circaseptan) or half-week (circasemiseptan) fitted cosine curve by the amplitude of a 24-hour fitted cosine curve is smaller than unity, since 24-hour rhythms are prominent for these variables. The amplitude ratios are larger than unity for temperature in the newborns' unit but not in the infants' unit. Earlier, the origin of the about-7-day rhythms of neonatal physiologic variables was demonstrated to have, in addition to a major endogenous, also a minor exogenous component. Hence, the possibility of optimizing maturation by manipulating environmental changes can be considered, using, as gauges of development, previously mapped chronomes (time structures of biologic multifrequency rhythms, trends and noise).
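The amplitude-ratio computation described above can be sketched as a multiple-component cosinor fit (least-squares cosines at the 24-hour, half-week and one-week periods); the light-level series below is synthetic.

```python
# Sketch: multiple-component cosinor fit and circaseptan/circadian amplitude ratio.
import numpy as np

t = np.arange(0, 21 * 24.0, 0.5)                     # hours, 3 weeks of data
rng = np.random.default_rng(5)
y = (500 + 200 * np.cos(2 * np.pi * (t - 13) / 24)   # synthetic light levels,
     + 40 * np.cos(2 * np.pi * t / 168)              # acrophase near 13:00
     + 20 * rng.normal(size=t.size))

periods = [24.0, 84.0, 168.0]                        # circadian, circasemiseptan, circaseptan
cols = [np.ones_like(t)]
for p in periods:
    cols += [np.cos(2 * np.pi * t / p), np.sin(2 * np.pi * t / p)]
beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)

amps = [np.hypot(beta[1 + 2 * i], beta[2 + 2 * i]) for i in range(len(periods))]
print(amps[2] / amps[0])   # circaseptan-to-circadian amplitude ratio (<1 here)
```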
Zhu, Mingping; Chen, Aiqing
2017-01-01
This study aimed to compare within-subject blood pressure (BP) variabilities obtained from different measurement techniques. Cuff pressures from three repeated BP measurements were obtained from 30 normotensive and 30 hypertensive subjects. Automatic BPs were determined from the pulses with normalised peak amplitude larger than a threshold (0.5 for SBP, 0.7 for DBP, and 1.0 for MAP). They were also determined from the cuff pressures associated with the same thresholds on a polynomial curve fitted to the oscillometric pulse peaks. Finally, the standard deviation (SD) of the three repeats and its coefficient of variability (CV) were compared between the two automatic techniques. For the normotensive group, polynomial curve fitting significantly reduced the SD of repeats from 3.6 to 2.5 mmHg for SBP and from 3.7 to 2.1 mmHg for MAP, and reduced the CV from 3.0% to 2.2% for SBP and from 4.3% to 2.4% for MAP (all P < 0.01). For the hypertensive group, the SD of repeats decreased from 6.5 to 5.5 mmHg for SBP and from 6.7 to 4.2 mmHg for MAP, and the CV decreased from 4.2% to 3.6% for SBP and from 5.8% to 3.8% for MAP (all P < 0.05). In conclusion, polynomial curve fitting of oscillometric pulses reduced automatic BP measurement variability. PMID:28785580
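A sketch of the polynomial-envelope variant is given below; the Gaussian-shaped envelope is synthetic, and the deflation details of an actual device are not modeled.

```python
# Sketch: fit a polynomial to normalised oscillometric pulse-peak amplitude
# versus cuff pressure, then read SBP, MAP and DBP off the fitted curve
# at the stated amplitude ratios (SBP 0.5, DBP 0.7, MAP 1.0).
import numpy as np

cuff = np.linspace(160, 40, 60)                        # deflating cuff (mmHg)
amp = np.exp(-0.5 * ((cuff - 95) / 25.0) ** 2)         # synthetic envelope
amp += 0.03 * np.random.default_rng(6).normal(size=cuff.size)

coeffs = np.polyfit(cuff, amp, deg=6)
fine = np.linspace(cuff.min(), cuff.max(), 1000)
env = np.polyval(coeffs, fine) / np.polyval(coeffs, fine).max()

map_bp = fine[np.argmax(env)]                                          # ratio 1.0
sbp = fine[fine > map_bp][np.argmin(np.abs(env[fine > map_bp] - 0.5))]  # high side
dbp = fine[fine < map_bp][np.argmin(np.abs(env[fine < map_bp] - 0.7))]  # low side
print(sbp, map_bp, dbp)
```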
A Multi-year Multi-passband CCD Photometric Study of the W UMa Binary EQ Tauri
NASA Astrophysics Data System (ADS)
Alton, K. B.
2009-12-01
A revised ephemeris and updated orbital period for EQ Tau have been determined from newly acquired (2007-2009) CCD-derived photometric data. A Roche-type model based on the Wilson-Devinney code produced simultaneous theoretical fits of light curve data in three passbands by invoking cold spots on the primary component. These new model fits, along with similar light curve data for EQ Tau collected during the previous six seasons (2000-2006), provided a rare opportunity to follow the seasonal appearance of star spots on a W UMa binary system over nine consecutive years. Fixed values for q, Ω1,2, T1, T2, and i, based upon the mean of eleven separately determined model fits produced for this system, are hereafter proposed for future light curve modeling of EQ Tau. With the exception of the 2001 season, all other light curves produced since then required a spotted solution to address the flux asymmetry exhibited by this binary system at Max I and Max II. At least one cold spot on the primary appears in seven out of twelve light curves for EQ Tau produced over the last nine years, whereas in six instances two cold spots on the primary star were invoked to improve the model fit. Solutions using a hot spot were less common and involved positioning a single spot on the primary constituent during the 2001-2002, 2002-2003, and 2005-2006 seasons.
Method and apparatus for air-coupled transducer
NASA Technical Reports Server (NTRS)
Song, Junho (Inventor); Chimenti, Dale E. (Inventor)
2010-01-01
An air-coupled transducer includes an ultrasonic transducer body having a radiation end with a backing fixture at the radiation end. There is a flexible backplate conformingly fit to the backing fixture and a thin membrane (preferably a metallized polymer) conformingly fit to the flexible backplate. In one embodiment, the backing fixture is spherically curved and the flexible backplate is spherically curved. The flexible backplate is preferably patterned with pits or depressions.
Fitting integrated enzyme rate equations to progress curves with the use of a weighting matrix.
Franco, R; Aran, J M; Canela, E I
1991-01-01
A method is presented for fitting the pairs of values (product formed, time) taken from progress curves to the integrated rate equation. The procedure is applied to the estimation of the kinetic parameters of the adenosine deaminase system. Simulation studies demonstrate the capabilities of this strategy. A copy of the FORTRAN77 program used can be obtained from the authors on request. PMID:2006914
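The integrated Michaelis-Menten equation has a closed form via the Lambert W function, which makes the progress-curve fit easy to sketch (synthetic data; the original FORTRAN77 implementation presumably used a different numerical route):

```python
# Sketch: fit a product-vs-time progress curve to the integrated
# Michaelis-Menten equation, S(t) = Km * W((S0/Km)*exp((S0 - Vmax*t)/Km)).
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import lambertw

def product(t, vmax, km, s0):
    arg = (s0 / km) * np.exp((s0 - vmax * t) / km)
    s = km * lambertw(arg).real          # substrate remaining at time t
    return s0 - s                        # product formed

t = np.linspace(0, 600, 30)                           # s
p_obs = product(t, 0.5, 40.0, 100.0)                  # synthetic, uM units
p_obs += 0.5 * np.random.default_rng(7).normal(size=t.size)

popt, _ = curve_fit(product, t, p_obs, p0=[0.3, 30.0, 90.0])
vmax, km, s0 = popt
```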
Water impact analysis of space shuttle solid rocket motor by the finite element method
NASA Technical Reports Server (NTRS)
Buyukozturk, O.; Hibbitt, H. D.; Sorensen, E. P.
1974-01-01
Preliminary analysis showed that the doubly curved triangular shell elements were too stiff for these shell structures. The doubly curved quadrilateral shell elements were found to give much improved results. A total of six load cases were analyzed in this study. The load cases were either those resulting from a static test using reaction straps to simulate the drop conditions, or those under assumed hydrodynamic conditions resulting from a drop test. The latter hydrodynamic conditions were obtained through an empirical fit of available data. Results obtained from a linear analysis were found to be consistent with results obtained elsewhere with NASTRAN and BOSOR. The nonlinear analysis showed that the originally assumed loads would result in failure of the shell structures. The nonlinear analysis also showed that it was useful to apply internal pressure as a stabilizing influence on collapse. A final analysis with an updated estimate of the load conditions resulted in linear behavior up to full load.
Fitting Photometry of Blended Microlensing Events
NASA Astrophysics Data System (ADS)
Thomas, Christian L.; Griest, Kim
2006-03-01
We reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g., Woźniak & Paczyński) that this is a difficult proposition because of the degeneracy of the blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (the peak region and wings) of high-magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction, and we study the importance of non-Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate the microlensing optical depth.
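For concreteness, a sketch of a blended point-lens light-curve fit in magnitudes follows (synthetic event; the strong covariance among f, u0 and tE in the fitted covariance matrix illustrates the degeneracy discussed above):

```python
# Sketch: blended point-source point-lens (Paczynski) light-curve model;
# the blend fraction f multiplies the lensed flux, with (1 - f) unlensed.
import numpy as np
from scipy.optimize import curve_fit

def blended_mag(t, t0, tE, u0, f, m_base):
    u = np.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)
    a = (u ** 2 + 2) / (u * np.sqrt(u ** 2 + 4))   # point-lens magnification
    return m_base - 2.5 * np.log10(f * a + (1.0 - f))

t = np.linspace(-60, 60, 120)                      # days
mag = blended_mag(t, 0.0, 20.0, 0.15, 0.6, 19.0)   # synthetic event
mag += 0.01 * np.random.default_rng(8).normal(size=t.size)

popt, pcov = curve_fit(blended_mag, t, mag, p0=[0.0, 15.0, 0.3, 0.9, 19.0])
# Off-diagonal terms of pcov linking f, u0 and tE quantify the degeneracy.
```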
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
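As a sketch of one of the seven candidate models, Wood's incomplete-gamma function can be fitted to monthly FPR test-day means (values invented); the stationary point b/c of the fitted curve estimates the test time of minimum FPR.

```python
# Sketch: fit Wood's lactation-curve model, y(t) = a * t**b * exp(-c*t).
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    return a * t ** b * np.exp(-c * t)

dim = np.arange(1, 11, dtype=float)          # test month
fpr = np.array([1.32, 1.21, 1.15, 1.12, 1.10, 1.09, 1.09, 1.10, 1.11, 1.13])

popt, _ = curve_fit(wood, dim, fpr, p0=[1.3, -0.1, -0.01])
t_min = popt[1] / popt[2]   # b/c: stationary point (minimum FPR when b, c < 0)
```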
Zhou, Wu
2014-01-01
The accurate contour delineation of the target and/or organs at risk (OAR) is essential in treatment planning for image-guided radiation therapy (IGRT). Although many automatic contour delineation approaches have been proposed, few of them fulfill the necessities of applications in terms of accuracy and efficiency. Moreover, clinicians would like to analyze the characteristics of regions of interest (ROI) and adjust contours manually during IGRT. An interactive tool for contour delineation is necessary in such cases. In this work, a novel curve-fitting approach for interactive contour delineation is proposed. It allows users to quickly improve contours with a simple mouse click. Initially, a region containing the object of interest is selected in the image; the program then automatically selects important control points from the region boundary, and Hermite cubic curves are used to fit the control points. The optimized curve can then be revised by moving its control points interactively. Several curve-fitting methods are presented for comparison. Finally, to improve the accuracy of contour delineation, a curve-refinement process based on the maximum gradient magnitude is proposed: all points on the curve are revised automatically towards the positions of maximum gradient magnitude. Experimental results show that Hermite cubic curves and the maximum-gradient-magnitude curve refinement possess superior performance on the proposed platform in terms of accuracy, robustness, and computation time. Experimental results on real medical images demonstrate the efficiency, accuracy, and robustness of the proposed process in clinical applications. PACS number: 87.53.Tf PMID:24423846
NASA Astrophysics Data System (ADS)
Salim, Samir; Boquien, Médéric; Lee, Janice C.
2018-05-01
We study the dust attenuation curves of 230,000 individual galaxies in the local universe, ranging from quiescent to intensely star-forming systems, using GALEX, SDSS, and WISE photometry calibrated on the Herschel ATLAS. We use a new method of constraining SED fits with infrared luminosity (SED+LIR fitting), and parameterized attenuation curves determined with the CIGALE SED-fitting code. Attenuation curve slopes and UV bump strengths are reasonably well constrained independently from one another. We find that A_λ/A_V attenuation curves exhibit a very wide range of slopes that are on average as steep as the curve slope of the Small Magellanic Cloud (SMC). The slope is a strong function of optical opacity. Opaque galaxies have shallower curves, in agreement with recent radiative transfer models. The dependence of slopes on the opacity produces an apparent dependence on stellar mass: more massive galaxies have shallower slopes. Attenuation curves exhibit a wide range of UV bump amplitudes, from none to Milky Way (MW)-like, with an average strength one-third that of the MW bump. Notably, local analogs of high-redshift galaxies have an average curve that is somewhat steeper than the SMC curve, with a modest UV bump that can be, to first order, ignored, as its effect on the near-UV magnitude is at most ~0.1 mag. Neither the slopes nor the strengths of the UV bump depend on gas-phase metallicity. Functional forms for attenuation laws are presented for normal star-forming galaxies, high-z analogs, and quiescent galaxies. We release the catalog of associated star formation rates and stellar masses (GALEX-SDSS-WISE Legacy Catalog 2).
Analysis of the multigroup model for muon tomography based threat detection
NASA Astrophysics Data System (ADS)
Perry, J. O.; Bacon, J. D.; Borozdin, K. N.; Fabritius, J. M.; Morris, C. L.
2014-02-01
We compare different algorithms for detecting a 5 cm tungsten cube using cosmic ray muon technology. In each case, a simple tomographic technique was used for position reconstruction, but the scattering angles were used differently to obtain a density signal. Receiver operating characteristic curves were used to compare images made using average angle squared, median angle squared, average of the squared angle, and a multi-energy group fit of the angular distributions for scenes with and without a 5 cm tungsten cube. The receiver operating characteristic curves show that the multi-energy group treatment of the scattering angle distributions is the superior method for image reconstruction.
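A sketch of the ROC comparison for a single density signal follows (synthetic scores; in the paper, each algorithm's reconstructed scene statistic plays the role of the score):

```python
# Sketch: compare density signals with ROC curves; labels encode whether
# the scene contains the tungsten cube.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(9)
signal_bkg = rng.normal(1.0, 0.3, 500)    # scenes without the cube
signal_cube = rng.normal(1.5, 0.3, 500)   # scenes with the 5 cm tungsten cube

scores = np.concatenate([signal_bkg, signal_cube])
labels = np.concatenate([np.zeros(500), np.ones(500)])

fpr, tpr, _ = roc_curve(labels, scores)
print("area under ROC:", auc(fpr, tpr))   # higher = better discrimination
```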
NASA Astrophysics Data System (ADS)
Hajdu, Gergely; Dékány, István; Catelan, Márcio; Grebel, Eva K.; Jurcsik, Johanna
2018-04-01
RR Lyrae variables are widely used tracers of Galactic halo structure and kinematics, but they can also serve to constrain the distribution of the old stellar population in the Galactic bulge. With the aim of improving their near-infrared photometric characterization, we investigate their near-infrared light curves, as well as the empirical relationships between their light curve and metallicities using machine learning methods. We introduce a new, robust method for the estimation of the light-curve shapes, hence the average magnitudes of RR Lyrae variables in the K S band, by utilizing the first few principal components (PCs) as basis vectors, obtained from the PC analysis of a training set of light curves. Furthermore, we use the amplitudes of these PCs to predict the light-curve shape of each star in the J-band, allowing us to precisely determine their average magnitudes (hence colors), even in cases where only one J measurement is available. Finally, we demonstrate that the K S-band light-curve parameters of RR Lyrae variables, together with the period, allow the estimation of the metallicity of individual stars with an accuracy of ∼0.2–0.25 dex, providing valuable chemical information about old stellar populations bearing RR Lyrae variables. The methods presented here can be straightforwardly adopted for other classes of variable stars, bands, or for the estimation of other physical quantities.
Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis
2014-01-01
The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175
Phytoplankton productivity in relation to light intensity: A simple equation
Peterson, D.H.; Perry, M.J.; Bencala, K.E.; Talbot, M.C.
1987-01-01
A simple exponential equation is used to describe photosynthetic rate as a function of light intensity for a variety of unicellular algae and higher plants, where photosynthesis is proportional to (1 - e^(-αI)). The parameter α (= I_k^(-1)) is derived by a simultaneous curve-fitting method, where I is incident quantum-flux density. The exponential equation is tested against a wide range of data and is found to adequately describe P vs. I curves. The errors associated with the photosynthetic parameters are calculated. A simplified statistical model (Poisson) of photon capture provides a biophysical basis for the equation and for its ability to fit a range of light intensities. The exponential equation provides a non-subjective simultaneous curve-fitting estimate for photosynthetic efficiency (a) which is less ambiguous than subjective methods: subjective methods assume that a linear region of the P vs. I curve is readily identifiable. Photosynthetic parameters α and a are used widely in aquatic studies to define photosynthesis at low quantum flux. These parameters are particularly important in estuarine environments where high suspended-material concentrations and high diffuse-light extinction coefficients are commonly encountered.
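A minimal sketch of this simultaneous fit, assuming hypothetical P vs. I measurements and using scipy's curve_fit (not the authors' original code):

```python
import numpy as np
from scipy.optimize import curve_fit

def p_vs_i(I, p_max, alpha):
    # P = Pmax * (1 - exp(-alpha * I)), with alpha = 1 / I_k
    return p_max * (1.0 - np.exp(-alpha * I))

# Hypothetical quantum-flux densities and photosynthetic rates.
I = np.array([10, 25, 50, 100, 200, 400, 800, 1600], float)
P = np.array([0.9, 2.1, 3.8, 6.2, 8.6, 9.6, 9.9, 10.1])

popt, pcov = curve_fit(p_vs_i, I, P, p0=(10.0, 0.01))
perr = np.sqrt(np.diag(pcov))        # standard errors of the fitted parameters
print(f"Pmax  = {popt[0]:.2f} +/- {perr[0]:.2f}")
print(f"alpha = {popt[1]:.4f} +/- {perr[1]:.4f}   (I_k = {1.0 / popt[1]:.0f})")
```

Fitting both parameters at once, with their covariance, is what makes the estimate of photosynthetic efficiency non-subjective: no linear region of the P vs. I curve has to be identified by eye.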
Data Analysis & Statistical Methods for Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained with these. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting
NASA Astrophysics Data System (ADS)
Tao, Zhang; Li, Zhang; Dingjun, Chen
On the basis of quadratic (second-order) curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A restrained-growth model is introduced in this paper, and the model parameters are solved with the Matlab software. The validity of the restrained-growth model is confirmed through numerical experiments. The experimental results show that the precision of the restrained-growth model is good.
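Assuming the restrained-growth model is of the logistic type (an assumption; the abstract does not give the functional form), a minimal Python analogue of the Matlab parameter-solving step, with invented yearly site counts, might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    # Restrained growth: expansion slows as the number of sites approaches K.
    return K / (1.0 + np.exp(-r * (t - t0)))

# Invented yearly counts of e-commerce sites (arbitrary units).
year = np.arange(2000, 2010, dtype=float)
sites = np.array([1.2, 1.9, 3.0, 4.6, 6.8, 9.3, 11.6, 13.2, 14.2, 14.8])

popt, _ = curve_fit(logistic, year, sites, p0=(15.0, 0.5, 2004.0))
print("capacity K, rate r, midpoint t0:", np.round(popt, 3))
print("extrapolated value for 2011: %.2f" % logistic(2011.0, *popt))
```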
Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting
NASA Technical Reports Server (NTRS)
Badavi, F. F.; Everhart, Joel L.
1987-01-01
This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine that is based on P. R. Bevington's description of the quadratic expansion of the chi-square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of chi-square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived, then solved by matrix algebra. A brief description of the code is presented, along with the limited number of changes that are required to customize the program for a particular task. To evaluate the performance of the method and the goodness of the nonlinear curve fits, two typical engineering problems are examined and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.
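The core of the method, retaining the first term of the Taylor expansion of chi-square and solving the resulting n simultaneous linear equations, can be sketched as follows; the model, data, and iteration limits are illustrative and are not Bevington's or the report's code:

```python
import numpy as np

def model(x, a):
    # Example nonlinear model: y = a0 * exp(-a1 * x) + a2
    return a[0] * np.exp(-a[1] * x) + a[2]

def jacobian(x, a):
    # Analytic partial derivatives dy/da_k.
    e = np.exp(-a[1] * x)
    return np.column_stack([e, -a[0] * x * e, np.ones_like(x)])

rng = np.random.default_rng(2)
x = np.linspace(0.0, 5.0, 60)
sigma = np.full_like(x, 0.05)
y = model(x, [4.0, 1.3, 0.5]) + sigma * rng.standard_normal(x.size)

a = np.array([3.0, 1.0, 0.0])                 # initial guess
for _ in range(20):
    r = (y - model(x, a)) / sigma             # weighted residuals
    J = jacobian(x, a) / sigma[:, None]
    beta = J.T @ r                            # -1/2 gradient of chi-square
    alpha = J.T @ J                           # curvature from the retained term
    da = np.linalg.solve(alpha, beta)         # n simultaneous linear equations
    a += da
    if np.max(np.abs(da)) < 1e-10:
        break

chi2 = np.sum(((y - model(x, a)) / sigma) ** 2)
print("fitted parameters:", np.round(a, 4), " chi-square: %.1f" % chi2)
```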
NASA Technical Reports Server (NTRS)
Johnson, T. J.; Harding, A. K.; Venter, C.
2012-01-01
Pulsed gamma rays have been detected with the Fermi Large Area Telescope (LAT) from more than 20 millisecond pulsars (MSPs), some of which were discovered in radio observations of bright, unassociated LAT sources. We have fit the radio and gamma-ray light curves of 19 LAT-detected MSPs in the context of geometric, outer-magnetospheric emission models, assuming the retarded vacuum dipole magnetic field and using a Markov chain Monte Carlo maximum likelihood technique. We find that, in many cases, the models are able to reproduce the observed light curves well and provide constraints on the viewing geometries that are in agreement with those from radio polarization measurements. Additionally, for some MSPs we constrain the altitudes of both the gamma-ray and radio emission regions. The best-fit magnetic inclination angles are found to cover a broader range than those of non-recycled gamma-ray pulsars.
Observational evidence of dust evolution in galactic extinction curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cecchi-Pestellini, Cesare; Casu, Silvia; Mulas, Giacomo
Although structural and optical properties of hydrogenated amorphous carbons are known to respond to varying physical conditions, most conventional extinction models are basically curve fits with modest predictive power. We compare an evolutionary model of the physical properties of carbonaceous grain mantles with their determination by homogeneously fitting observationally derived Galactic extinction curves with the same physically well-defined dust model. We find that a large sample of observed Galactic extinction curves are compatible with the evolutionary scenario underlying such a model, requiring physical conditions fully consistent with standard density, temperature, radiation field intensity, and average age of diffuse interstellar clouds. Hence, through the study of interstellar extinction we may, in principle, understand the evolutionary history of the diffuse interstellar clouds.
UTM, a universal simulator for lightcurves of transiting systems
NASA Astrophysics Data System (ADS)
Deeg, Hans
2009-02-01
The Universal Transit Modeller (UTM) is a light-curve simulator for all kinds of transiting or eclipsing configurations between arbitrary numbers of several types of objects, which may be stars, planets, planetary moons, and planetary rings. Applications of UTM to date have mainly been in the generation of light curves for the testing of detection algorithms. For the preparation of such tests for the Corot Mission, a special version was used to generate multicolour light curves in Corot's passbands. A separate fitting program, UFIT (Universal Fitter), is part of the UTM distribution and may be used to derive best fits to light curves for any set of continuously variable parameters. UTM/UFIT is written in IDL and its source is released in the public domain under the GNU General Public License.
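UTM itself is written in IDL; purely as a flavor of what a transit light-curve simulator does, here is a toy Python generator for a box-shaped transit (no limb darkening, grazing geometry, moons, or rings, all of which UTM handles):

```python
import numpy as np

def box_transit(t, t0, period, depth, duration):
    """Unit flux with a box-shaped dip of the given fractional depth each orbit."""
    phase = ((t - t0 + 0.5 * period) % period) - 0.5 * period
    flux = np.ones_like(t)
    flux[np.abs(phase) < 0.5 * duration] -= depth
    return flux

t = np.arange(0.0, 30.0, 0.01)                             # time in days
flux = box_transit(t, t0=1.2, period=3.7, depth=0.012, duration=0.15)
flux += 2e-4 * np.random.default_rng(3).standard_normal(t.size)  # photometric noise
print("samples in transit:", int((flux < 0.995).sum()))
```

Synthetic curves of this kind, injected into realistic noise, are exactly what detection-algorithm tests consume.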
Saturn Ring Data Analysis and Thermal Modeling
NASA Technical Reports Server (NTRS)
Dobson, Coleman
2011-01-01
CIRS, VIMS, UVIS, and ISS (Cassini's Composite Infrared Spectrometer, Visual and Infrared Mapping Spectrometer, Ultraviolet Imaging Spectrometer, and Imaging Science Subsystem, respectively) have each operated in a multidimensional observation space and have acquired scans of the lit and unlit rings at multiple phase angles. To better understand physical and dynamical ring-particle parametric dependence, we co-registered profiles from these instruments, taken at a wide range of wavelengths from the ultraviolet through the thermal infrared, to associate changes in ring-particle temperature with changes in observed brightness, specifically with albedos inferred by ISS, UVIS, and VIMS. We work in a parameter space where the solar elevation range is constrained to 12 deg - 14 deg and the chosen radial region is the B3 region of the B ring; this region is the most optically thick region in Saturn's rings. From this compilation of multiple-wavelength data, we construct and fit phase curves and color ratios using independent dynamical thermal models for ring structure, and overplot Saturn, Saturn-ring, and solar spectra. Analysis of the phase curves and color ratios reveals thermal emission to fall within the extrema of the ISS bandwidth and a geometrical dependence of reddening on phase angle, respectively. Analysis of the spectra reveals that the Cassini CIRS Saturn spectrum dominates the Cassini CIRS B3 ring spectrum from 19 to 1000 microns, while the Earth-based B-ring spectrum dominates the Earth-based Saturn spectrum from 0.4 to 4 microns. From our fits we test our dynamical thermal models; from the phase curves we derive ring albedos and non-Lambertian properties of the ring-particle surfaces; and from the color ratios we examine multiple scattering within the regolith of ring particles.
Evapotranspiration Controls Imposed by Soil Moisture: A Spatial Analysis across the United States
NASA Astrophysics Data System (ADS)
Rigden, A. J.; Tuttle, S. E.; Salvucci, G.
2014-12-01
We spatially analyze the control over evapotranspiration (ET) imposed by soil moisture across the United States using daily estimates of satellite-derived soil moisture and data-driven ET over a nine-year period (June 2002-June 2011) at 305 locations. The soil moisture data are developed using 0.25-degree resolution satellite observations from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E), where the 9-year time series for each 0.25-degree pixel was selected from three potential algorithms (VUA-NASA, U. Montana, and NASA) based on the maximum mutual information between soil moisture and precipitation (Tuttle & Salvucci (2014), Remote Sens Environ, 114: 207-222). The ET data are developed independently of soil moisture using an emergent relationship between the diurnal cycle of the relative humidity profile and ET. The emergent relation is that the vertical variance of the relative humidity profile is less than what would occur for increased or decreased ET rates, suggesting that land-atmosphere feedback processes minimize this variance (Salvucci and Gentine (2013), PNAS, 110(16): 6287-6291). The key advantage of using this approach to estimate ET is that no measurements of surface limiting factors (soil moisture, leaf area, canopy conductance) are required; instead, ET is estimated from meteorological data measured at 305 common weather stations that are approximately uniformly distributed across the United States. The combination of these two independent datasets allows for a unique spatial analysis of the control on ET imposed by the availability of soil moisture. We fit evaporation efficiency curves at each of the 305 sites during the summertime (May-September). Spatial patterns are visualized by mapping the optimal curve-fitting coefficients across the United States. An analysis of the efficiency curves and their spatial patterns will be presented.
The effect of semirigid dressings on below-knee amputations.
MacLean, N; Fick, G H
1994-07-01
The effect of using semirigid dressings (SRDs) on the residual limb of individuals who have had below-knee amputations as a consequence of peripheral vascular disease was investigated, with the primary question being: Does the time to readiness for prosthetic fitting for patients treated with the SRDs differ from that of patients treated with soft dressings? Forty patients entered the study and were alternately assigned to one of two groups. Nineteen patients were assigned to the SRD group, and 21 patients were assigned to the soft dressing group. The time from surgery to readiness for prosthetic fitting was recorded for each patient. Kaplan-Meier survival curves were generated for each group, and the results were analyzed with the log-rank test. There was a difference between the two curves, and an examination of the curves suggests that the expected time to readiness for prosthetic fitting for patients treated with the SRDs would be less than half that of patients treated with soft dressings. The results suggest that a patient may be ready for prosthetic fitting sooner if treated with SRDs instead of soft dressings.
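A minimal sketch of this analysis pattern (Kaplan-Meier curves per group plus a log-rank comparison) using the lifelines package; the times-to-readiness below are invented and uncensored:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented days from surgery to readiness for prosthetic fitting (no censoring).
srd = np.array([18, 22, 25, 27, 30, 31, 34, 38, 40, 45], float)
soft = np.array([40, 48, 55, 60, 66, 70, 78, 85, 90, 120], float)

km = KaplanMeierFitter()
km.fit(srd, event_observed=np.ones_like(srd), label="semirigid")
median_srd = km.median_survival_time_
km.fit(soft, event_observed=np.ones_like(soft), label="soft")
median_soft = km.median_survival_time_

result = logrank_test(srd, soft,
                      event_observed_A=np.ones_like(srd),
                      event_observed_B=np.ones_like(soft))
print(f"median time to readiness: {median_srd} vs {median_soft} days")
print(f"log-rank p-value: {result.p_value:.4f}")
```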
Medvedev, Emile S.; Couch, Vernon A.
2014-01-01
Recently, Euro et al. [Biochem. 47, 3185 (2008)] have reported titration data for seven of nine FeS redox centers of complex I from E. coli. There is a significant uncertainty in the assignment of the titration data. Four of the titration curves were assigned to N1a, N1b, N6b, and N2 centers; one curve either to N3 or N7; one more either to N4 or N5; and the last one denoted Nx could not be assigned at all. In addition, the assignment of the titration data to N6b/N6a pair is also uncertain. In this paper, using our calculated interaction energies [Couch et al. BBA 1787, 1266 (2009)], we perform statistical analysis of these data, considering a variety of possible assignments, find the best fit, and determine the intrinsic redox potentials of the centers. The intrinsic potentials could be determined with uncertainty of less than ±10 mV at 95% confidence level for best fit assignments. We also find that the best agreement between theoretical and experimental titration curves is obtained with the N6b-N2 interaction equal to 71±14 or 96±26 mV depending on the N6b/N6a titration data assignment, which is stronger than was expected and may indicate a close distance of N2 center to the membrane surface. PMID:20513348
Optical Characterization of Paper Aging Based on Laser-Induced Fluorescence (LIF) Spectroscopy.
Zhang, Hao; Wang, Shun; Chang, Keke; Sun, Haifeng; Guo, Qingqian; Ma, Liuzheng; Yang, Yatao; Zou, Caihong; Wang, Ling; Hu, Jiandong
2018-06-01
Paper aging and degradation are growing concerns for those who are responsible for the conservation of documents, archives, and libraries. In this study, paper aging was investigated using laser-induced fluorescence spectroscopy (LIFS), and the fluorescence properties of 47 paper samples of different ages were explored. The paper exhibits fluorescence in the blue-green spectral region, with two peaks at about 448 nm and 480 nm under 405 nm laser excitation. Both fluorescence peaks changed in absolute intensity, and the ratio of peak intensities also varied with increasing age. By applying principal component analysis (PCA) and the k-means clustering algorithm, all 47 paper samples were classified into nine groups based on the differences in paper age. First-derivative fluorescence spectral curves were then used to relate the spectral characteristics to paper age, and two quantitative models were established based on the changes of the first-derivative spectral peak at 443 nm: one is an exponential fitting curve with an R-squared value of 0.99, and the other is a linear fitting curve with an R-squared value of 0.88. The results demonstrated that the combination of fluorescence spectroscopy and PCA can be used for the classification of paper samples of different ages. Moreover, the first-derivative fluorescence spectral curves can be used to quantitatively evaluate the age-related changes of paper samples.
Combinatorial approach toward high-throughput analysis of direct methanol fuel cells.
Jiang, Rongzhong; Rong, Charles; Chu, Deryn
2005-01-01
A 40-member array of direct methanol fuel cells (with stationary fuel and convective air supplies) was generated by electrically connecting the fuel cells in series. High-throughput analysis of these fuel cells was realized by fast screening of the voltages between the two terminals of a fuel cell at constant-current discharge. A large number of voltage-current curves (200) were obtained by screening the voltages through multiple small-current steps. A Gaussian distribution was used to statistically analyze the large number of experimental data. The standard deviation (σ) of the voltages of these fuel cells increased linearly with discharge current. The voltage-current curves at various fuel concentrations were simulated with an empirical equation of voltage versus current and a linear equation of σ versus current. The simulated voltage-current curves fitted the experimental data well. With increasing methanol concentration from 0.5 to 4.0 M, the Tafel slope of the voltage-current curves (at σ = 0.0) changed from 28 to 91 mV dec⁻¹, the cell resistance from 2.91 to 0.18 Ω, and the power output from 3 to 18 mW cm⁻².
ERIC Educational Resources Information Center
Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan
2016-01-01
This study investigated the multiple-choice test of understanding of vectors (TUV), by applying item response theory (IRT). The difficulty, discriminatory, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the parscale program. The TUV ability is an ability parameter, here estimated assuming…
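The parscale program estimates 3PL parameters by marginal maximum likelihood; as a simplified stand-in, the sketch below fits the same three-parameter logistic item response function to invented binned proportion-correct data by least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

def three_pl(theta, a, b, c):
    # P(correct | theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Invented proportion-correct values at binned ability levels for one item.
theta = np.linspace(-3, 3, 13)
p_obs = np.array([0.22, 0.20, 0.25, 0.24, 0.30, 0.38, 0.52,
                  0.66, 0.78, 0.88, 0.93, 0.96, 0.97])

popt, _ = curve_fit(three_pl, theta, p_obs, p0=(1.0, 0.0, 0.2),
                    bounds=([0.2, -3.0, 0.0], [3.0, 3.0, 0.5]))
print("discrimination a, difficulty b, guessing c:", np.round(popt, 3))
```

The lower asymptote c captures guessing on a multiple-choice item, b locates the item on the ability scale, and a controls how sharply the item separates low from high ability.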
Uncertainty Analysis Principles and Methods
2007-09-01
This report describes the stages or modules involved in the measurement process, identifies all relevant error sources, and develops the corresponding mathematical models. For example, the Data Processor converts binary-coded numbers to values, performs D/A curve fitting, and applies any correction factors that may be needed.
The Carnegie Supernova Project I. Analysis of stripped-envelope supernova light curves
NASA Astrophysics Data System (ADS)
Taddia, F.; Stritzinger, M. D.; Bersten, M.; Baron, E.; Burns, C.; Contreras, C.; Holmbo, S.; Hsiao, E. Y.; Morrell, N.; Phillips, M. M.; Sollerman, J.; Suntzeff, N. B.
2018-02-01
Stripped-envelope (SE) supernovae (SNe) include H-poor (Type IIb), H-free (Type Ib), and He-free (Type Ic) events thought to be associated with the deaths of massive stars. The exact nature of their progenitors is a matter of debate, with several lines of evidence pointing towards intermediate-mass (M_init < 20 M⊙) stars in binary systems, while in other cases they may be linked to single massive Wolf-Rayet stars. Here we present the analysis of the light curves of 34 SE SNe published by the Carnegie Supernova Project (CSP-I) that are unparalleled in terms of photometric accuracy and wavelength range. Light-curve parameters are estimated through fits of an analytical function, and trends are searched for among the resulting fit parameters. Detailed inspection of the dataset suggests a tentative correlation between the peak absolute B-band magnitude and Δm15(B), while the post-maximum light curves reveal a correlation between the late-time linear slope and Δm15. Making use of the full set of optical and near-IR photometry, combined with robust host-galaxy extinction corrections, comprehensive bolometric light curves are constructed and compared to both analytic and hydrodynamical models. This analysis finds consistent results between the two modeling techniques, and from the hydrodynamical models we obtained ejecta masses of 1.1-6.2 M⊙, 56Ni masses of 0.03-0.35 M⊙, and explosion energies (excluding two SNe Ic-BL) of 0.25-3.0 × 10⁵¹ erg. Our analysis indicates that adopting κ = 0.07 cm² g⁻¹ as the mean opacity is a suitable assumption when comparing Arnett-model results to those obtained from hydrodynamical calculations. We also find that adopting He I and O I line velocities to infer the expansion velocity in He-rich and He-poor SNe, respectively, provides ejecta masses relatively similar to those obtained by using the Fe II line velocities, although the use of Fe II as a diagnostic does imply higher explosion energies. The inferred range of ejecta masses is compatible with intermediate-mass (M_ZAMS ≤ 20 M⊙) progenitor stars in binary systems for the majority of SE SNe. Furthermore, our hydrodynamical modeling of the bolometric light curves suggests that a significant fraction of the sample may have experienced significant mixing of 56Ni, particularly in the case of SNe Ic. Based on observations collected at Las Campanas Observatory. Bolometric light curve tables are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/609/A136
Peng, Rong-fei; He, Jia-yao; Zhang, Zhan-xia
2002-02-01
The performance of a self-constructed visible AOTF spectrophotometer is presented. The wavelength calibration of AOTF1 and AOTF2 is performed with a didymium glass using a fourth-order polynomial curve-fitting method. The absolute error of the peak position is usually less than 0.7 nm. Compared with the commercial UV1100 spectrophotometer, the scanning speed of the AOTF spectrophotometer is much faster, but the resolution depends on the quality of the AOTF. The absorption spectra and the calibration curves of copper sulfate and alizarin red obtained with AOTF1 (Institute for Silicate, Shanghai, China) and AOTF2 (Brimrose, USA), respectively, are presented; the corresponding correlation coefficients of the calibration curves are 0.9991 and 0.9990, respectively. Preliminary results show that the self-constructed AOTF spectrophotometer is feasible.
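A minimal sketch of such a fourth-order polynomial wavelength calibration, with invented drive-frequency/wavelength pairs standing in for the didymium-glass peak positions:

```python
import numpy as np

# Invented calibration pairs: AOTF RF drive frequency (MHz) versus the known
# wavelength (nm) of a didymium-glass absorption peak.
freq = np.array([58.0, 66.0, 75.0, 85.0, 96.0, 108.0, 121.0])
wavelength = np.array([741.0, 684.0, 628.0, 573.0, 522.0, 478.0, 440.0])

coeffs = np.polyfit(freq, wavelength, deg=4)   # fourth-order polynomial calibration
residuals = wavelength - np.polyval(coeffs, freq)
print("max absolute calibration error: %.2f nm" % np.abs(residuals).max())
```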
Derivation of error sources for experimentally derived heliostat shapes
NASA Astrophysics Data System (ADS)
Cumpston, Jeff; Coventry, Joe
2017-06-01
Photogrammetry data representing the surface and structure of a heliostat mirror panel are investigated in detail. A curve-fitting approach is presented that allows the retrieval of four distinct mirror error components while prioritizing the best possible fit to the paraboloidal terms in the curve-fitting equation. The angular errors associated with each of the four surfaces are calculated, and the relative magnitude of each is given. It is found that in this case the mirror had a significant structural twist, and an estimate of the improvement to the mirror surface quality in the case of no twist is made.
Kukke, Sahana N; Paine, Rainer W; Chao, Chi-Chao; de Campos, Ana C; Hallett, Mark
2014-06-01
The purpose of this study is to develop a method to reliably characterize multiple features of the corticospinal system in a more efficient manner than typically done in transcranial magnetic stimulation studies. Forty transcranial magnetic stimulation pulses of varying intensity were given over the first dorsal interosseous motor hot spot in 10 healthy adults. The first dorsal interosseous motor-evoked potential size was recorded during rest and activation to create recruitment curves. The Boltzmann sigmoidal function was fit to the data, and parameters relating to maximal motor-evoked potential size, curve slope, and stimulus intensity leading to half-maximal motor-evoked potential size were computed from the curve fit. Good to excellent test-retest reliability was found for all corticospinal parameters at rest and during activation with 40 transcranial magnetic stimulation pulses. Through the use of curve fitting, important features of the corticospinal system can be determined with fewer stimuli than typically used for the same information. Determining the recruitment curve provides a basis to understand the state of the corticospinal system and select subject-specific parameters for transcranial magnetic stimulation testing quickly and without unnecessary exposure to magnetic stimulation. This method can be useful in individuals who have difficulty in maintaining stillness, including children and patients with motor disorders.
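A minimal sketch of the Boltzmann-sigmoid recruitment-curve fit, with invented stimulus intensities and MEP amplitudes:

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(s, mep_max, s50, k):
    # MEP(s) = MEPmax / (1 + exp((S50 - s) / k)); the peak slope is MEPmax / (4k).
    return mep_max / (1.0 + np.exp((s50 - s) / k))

# Invented stimulus intensities (% maximum stimulator output) and MEP sizes (mV).
intensity = np.array([35, 40, 45, 50, 55, 60, 65, 70, 75, 80], float)
mep = np.array([0.05, 0.08, 0.21, 0.56, 1.10, 1.80, 2.30, 2.55, 2.62, 2.66])

(mep_max, s50, k), _ = curve_fit(boltzmann, intensity, mep, p0=(2.7, 55.0, 5.0))
print(f"MEPmax = {mep_max:.2f} mV, S50 = {s50:.1f} %MSO, "
      f"peak slope = {mep_max / (4 * k):.3f} mV per %MSO")
```

The three fitted quantities map directly onto the parameters the study reports: maximal MEP size, the intensity giving a half-maximal response, and the curve slope.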
Nakagawa, Yoshihide; Amino, Mari; Inokuchi, Sadaki; Hayashi, Satoshi; Wakabayashi, Tsutomu; Noda, Tatsuya
2017-04-01
Amplitude spectral area (AMSA), an index for analysing ventricular fibrillation (VF) waveforms, is thought to predict the return of spontaneous circulation (ROSC) after electric shocks, but its validity is unconfirmed. We developed an equation to predict ROSC, where the change in AMSA (ΔAMSA) is added to AMSA measured immediately before the first shock (AMSA1). We examine the validity of this equation by comparing it with the conventional AMSA1-only equation. We retrospectively investigated 285 VF patients given prehospital electric shocks by emergency medical services. ΔAMSA was calculated by subtracting AMSA1 from the last AMSA immediately before the last prehospital electric shock. Multivariate logistic regression analysis was performed using post-shock ROSC as a dependent variable. Analysis data were subjected to receiver operating characteristic curve analysis, goodness-of-fit testing using a likelihood ratio test, and the bootstrap method. AMSA1 (odds ratio (OR) 1.151, 95% confidence interval (CI) 1.086-1.220) and ΔAMSA (OR 1.289, 95% CI 1.156-1.438) were independent factors influencing ROSC induction by electric shock. Area under the curve (AUC) for predicting ROSC was 0.851 for AMSA1-only and 0.891 for AMSA1+ΔAMSA. Compared with the AMSA1-only equation, the AMSA1+ΔAMSA equation had significantly better goodness-of-fit (likelihood ratio test P<0.001) and showed good fit in the bootstrap method. Post-shock ROSC was accurately predicted by adding ΔAMSA to AMSA1. AMSA-based ROSC prediction enables application of electric shock only to those patients with a high probability of ROSC, instead of interrupting chest compressions and delivering unnecessary shocks to patients with a low probability of ROSC.
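A minimal sketch of the model comparison (AMSA1-only versus AMSA1 + ΔAMSA) using scikit-learn; the data, effect sizes, and in-sample AUC evaluation are all invented stand-ins for the study's analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 285
amsa1 = rng.gamma(4.0, 3.0, n)              # invented AMSA before the first shock
delta = rng.normal(0.0, 3.0, n)             # invented change in AMSA
logit = -4.0 + 0.14 * amsa1 + 0.25 * delta  # assumed "true" effect sizes
rosc = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

for name, X in [("AMSA1 only", amsa1[:, None]),
                ("AMSA1 + dAMSA", np.column_stack([amsa1, delta]))]:
    clf = LogisticRegression().fit(X, rosc)
    auc = roc_auc_score(rosc, clf.predict_proba(X)[:, 1])
    print(f"{name}: in-sample AUC = {auc:.3f}")
```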
Kargarian-Marvasti, Sadegh; Rimaz, Shahnaz; Abolghasemi, Jamileh; Heydari, Iraj
2017-01-01
Cox proportional hazards model is the most common method for analyzing the effects of several variables on survival time. However, under certain circumstances, parametric models give more precise estimates of survival data than Cox regression. The purpose of this study was to investigate the comparative performance of Cox and parametric models in a survival analysis of factors affecting the event time of neuropathy in patients with type 2 diabetes. This study included 371 patients with type 2 diabetes without neuropathy who were registered at the Fereydunshahr diabetes clinic. Subjects were followed up for the development of neuropathy from 2006 to March 2016. To investigate the factors influencing the event time of neuropathy, variables significant in the univariate model (P < 0.20) were entered into the multivariate Cox and parametric models (P < 0.05). In addition, the Akaike information criterion (AIC) and the area under the ROC curve were used to evaluate the relative goodness of fit of the models and the efficiency of each procedure, respectively. Statistical computing was performed using R software version 3.2.3 (UNIX platforms, Windows, and MacOS). Using Kaplan-Meier, the survival time to neuropathy was computed as 76.6 ± 5 months after the initial diagnosis of diabetes. After multivariate analysis of the Cox and parametric models, ethnicity, high-density lipoprotein, and family history of diabetes were identified as predictors of the event time of neuropathy (P < 0.05). According to the AIC, the log-normal model, with the lowest AIC, was the best-fitted model among the Cox and parametric models. According to the comparison of survival receiver operating characteristic curves, the log-normal model was considered the most efficient and best-fitted model.
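A minimal sketch of the AIC-based comparison of parametric survival models, here using the Python lifelines package rather than R; the simulated times-to-neuropathy and the censoring scheme are invented:

```python
import numpy as np
from lifelines import WeibullFitter, LogNormalFitter, LogLogisticFitter

rng = np.random.default_rng(5)
# Invented months to neuropathy with administrative censoring at 120 months.
t_true = rng.lognormal(mean=4.3, sigma=0.5, size=371)
observed = t_true <= 120.0
t = np.minimum(t_true, 120.0)

for fitter in (WeibullFitter(), LogNormalFitter(), LogLogisticFitter()):
    fitter.fit(t, event_observed=observed)
    print(f"{type(fitter).__name__}: AIC = {fitter.AIC_:.1f}")
```

With data generated from a log-normal distribution, the log-normal fitter should return the lowest AIC, mirroring the selection logic the study describes.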
Nuclear reactor descriptions for space power systems analysis
NASA Technical Reports Server (NTRS)
Mccauley, E. W.; Brown, N. J.
1972-01-01
For the small, high-performance reactors required for space electric applications, adequate neutronic analysis is of crucial importance, but in terms of computational time consumed, nuclear calculations probably yield the least amount of detail for a mission analysis study. It has been found possible, after generating only a few designs of a reactor family in elaborate thermomechanical and nuclear detail, to use simple curve-fitting techniques to assure the desired neutronic performance while still performing the thermomechanical analysis in explicit detail. The resulting speed-up in computation time permits a broad, detailed examination of constraints by the mission analyst.
NASA Astrophysics Data System (ADS)
Möginger, B.; Kehret, L.; Hausnerova, B.; Steinhaus, J.
2016-05-01
3D printing is an efficient method in the field of additive manufacturing. In order to optimize the properties of manufactured parts, it is essential to adapt the curing behavior of the resin systems to the requirements. Thus, the effects of resin composition, e.g. due to different additives such as thickeners and curing agents, on the curing behavior have to be known. As the resin transforms from a liquid into a solid glass, the time-dependent ion viscosity was measured using DEA with flat IDEX sensors. This allows for a sensitive measurement of resin changes, as the ion viscosity changes by two to four decades. The investigated resin systems are based on the monomers styrene and HEMA. To account for the effects of copolymerization in the calculation of the reaction kinetics, it was assumed that the reaction can be treated as a homo-polymerization with a reaction order n ≠ 1. The measured ion viscosity curves, for times exceeding the initiation phase and representing the primary curing, are then fitted with the solution of the reaction kinetics: the time-dependent degree of conversion (DC-function). The measured ion viscosity curves are fitted well by the DC-function, and the determined fit parameters distinguish distinctly between the investigated resin compositions.
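For n ≠ 1, the rate law dα/dt = k(1 − α)ⁿ integrates to α(t) = 1 − [1 + (n − 1)kt]^(−1/(n−1)). A minimal sketch of fitting this DC-function to conversion data (synthesized here, standing in for values derived from the measured ion viscosity):

```python
import numpy as np
from scipy.optimize import curve_fit

def dc_function(t, k, n):
    # Degree of conversion for an nth-order homo-polymerization (n != 1):
    # alpha(t) = 1 - (1 + (n - 1) * k * t) ** (-1 / (n - 1))
    return 1.0 - (1.0 + (n - 1.0) * k * t) ** (-1.0 / (n - 1.0))

# Synthetic conversion data standing in for values derived from ion viscosity
# after the initiation phase.
t = np.linspace(0.0, 600.0, 25)                      # seconds
rng = np.random.default_rng(6)
alpha = dc_function(t, 0.01, 1.6) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(dc_function, t, alpha, p0=(0.005, 1.5),
                    bounds=([1e-5, 1.05], [1.0, 3.0]))
print("rate constant k = %.4g 1/s, reaction order n = %.2f" % tuple(popt))
```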
A nonintrusive temperature measuring system for estimating deep body temperature in bed.
Sim, S Y; Lee, W K; Baek, H J; Park, K S
2012-01-01
Deep body temperature is an important indicator that reflects a human being's overall physiological state. Existing deep body temperature monitoring systems are too invasive to apply to awake patients for a long time. Therefore, we propose a nonintrusive deep body temperature measuring system. To estimate deep body temperature nonintrusively, a dual-heat-flux probe and double-sensor probes were embedded in a neck pillow. When a patient uses the neck pillow to rest, the deep body temperature can be assessed using one of the thermometer probes embedded in the pillow. We could estimate deep body temperature in three different sleep positions. Also, to reduce the initial response time of the dual-heat-flux thermometer, which measures body temperature in the supine position, we applied a curve-fitting method to one subject, and could thereby obtain the deep body temperature within a minute. This result shows that the system could be used as a practical temperature monitoring system with an appropriate curve-fitting model. In the next study, we will try to establish a general fitting model that can be applied to all subjects. In addition, we plan to extract meaningful health information, such as sleep structure analysis, from the deep body temperature data acquired with this system.
Mattucci, Stephen F E; Cronin, Duane S
2015-01-01
Experimental testing of cervical spine ligaments provides important data for advanced numerical modeling and injury prediction; however, accurate characterization of individual ligament response and determination of average mechanical properties for specific ligaments have not been adequately addressed in the literature. Existing methods are limited by a number of arbitrary choices made during the curve fits that often misrepresent the characteristic shape response of the ligaments, which is important for incorporation into numerical models to produce a biofidelic response. A method was developed to represent the mechanical properties of individual ligaments using a piecewise curve fit with first-derivative continuity between adjacent regions. The method was applied to published data for cervical spine ligaments and preserved the shape response (toe, linear, and traumatic regions) up to failure, for strain rates of 0.5 s⁻¹, 20 s⁻¹, and 150-250 s⁻¹, to determine the average force-displacement curves. Individual ligament coefficients of determination were 0.989 to 1.000, demonstrating excellent fit. This study produced a novel method in which a set of experimental ligament material-property data exhibiting scatter was fit using a characteristic curve approach with toe, linear, and traumatic regions, as often observed in ligaments and tendons, and the method could be applied to other biological material data with a similar characteristic shape. The resultant average cervical spine ligament curves provide an accurate representation of the raw test data and the expected material-property effects corresponding to varying deformation rates.
Forgetting Curves: Implications for Connectionist Models
ERIC Educational Resources Information Center
Sikstrom, Sverker
2002-01-01
Forgetting in long-term memory, as measured in a recall or a recognition test, is faster for items encoded more recently than for items encoded earlier. Data on forgetting curves fit a power function well. In contrast, many connectionist models predict either exponential decay or completely flat forgetting curves. This paper suggests a…
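The power-versus-exponential comparison at issue can be sketched in a few lines; the recall proportions below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented recall proportions at increasing retention intervals (days).
t = np.array([0.1, 0.5, 1, 2, 5, 10, 30, 60], float)
recall = np.array([0.95, 0.80, 0.72, 0.64, 0.52, 0.44, 0.32, 0.26])

power = lambda t, a, b: a * t ** (-b)          # power-law forgetting
expo = lambda t, a, b: a * np.exp(-b * t)      # exponential decay

for name, f, p0 in [("power law", power, (0.7, 0.2)),
                    ("exponential", expo, (0.9, 0.1))]:
    popt, _ = curve_fit(f, t, recall, p0=p0)
    sse = np.sum((recall - f(t, *popt)) ** 2)
    print(f"{name}: parameters = {np.round(popt, 3)}, SSE = {sse:.4f}")
```

Data of this shape, rapid early loss with a long slow tail, favor the power function, which is the empirical pattern the paper contrasts with connectionist-model predictions.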
Nonlinear Growth Models in M"plus" and SAS
ERIC Educational Resources Information Center
Grimm, Kevin J.; Ram, Nilam
2009-01-01
Nonlinear growth curves or growth curves that follow a specified nonlinear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this article we describe how a variety of sigmoid curves can be fit using the M"plus" structural modeling program and the nonlinear…
On the Early-Time Excess Emission in Hydrogen-Poor Superluminous Supernovae
NASA Technical Reports Server (NTRS)
Vreeswijk, Paul M.; Leloudas, Giorgos; Gal-Yam, Avishay; De Cia, Annalisa; Perley, Daniel A.; Quimby, Robert M.; Waldman, Roni; Sullivan, Mark; Yan, Lin; Ofek, Eran O.;
2017-01-01
We present the light curves of the hydrogen-poor superluminous supernovae (SLSNe I) PTF 12dam and iPTF 13dcc, discovered by the (intermediate) Palomar Transient Factory. Both show excess emission at early times and a slowly declining light curve at late times. The early bump in PTF 12dam is very similar in duration (approximately 10 days) and brightness relative to the main peak (2-3 mag fainter) compared to that observed in other SLSNe I. In contrast, the long-duration (greater than 30 days) early excess emission in iPTF 13dcc, whose brightness competes with that of the main peak, appears to be of a different nature. We construct bolometric light curves for both targets, and fit a variety of light-curve models to both the early bump and main peak in an attempt to understand the nature of these explosions. Even though the slope of the late-time decline in the light curves of both SLSNe is suggestively close to that expected from the radioactive decay of 56Ni and 56Co, the amount of nickel required to power the full light curves is too large considering the estimated ejecta mass. The magnetar model including an increasing escape fraction provides a reasonable description of the PTF 12dam observations. However, neither the basic nor the double-peaked magnetar model is capable of reproducing the light curve of iPTF 13dcc. A model combining a shock breakout in an extended envelope with late-time magnetar energy injection provides a reasonable fit to the iPTF 13dcc observations. Finally, we find that the light curves of both PTF 12dam and iPTF 13dcc can be adequately fit with the model involving interaction with the circumstellar medium.
NASA Technical Reports Server (NTRS)
Morabito, D. D.; Skjerve, L.
1995-01-01
This article reports on the analysis of the Ka-band Antenna Performance Experiment tipping-curve data acquired at the DSS-13 research and development beam-waveguide (BWG) antenna. By measuring the operating system temperatures as the antenna is moved from zenith to low elevation angles and fitting a model to the data, one can obtain information on how well the overall temperature model behaves at zenith and approximate the contribution due to the atmosphere. The atmospheric contribution estimated from the data can be expressed in the form of (1) atmospheric noise temperatures that provide weather statistics and can be compared against those estimated from other methods, and (2) the atmospheric loss factor used to refer efficiency measurements to zero atmosphere. This article reports on an analysis performed on a set of 68 8.4-GHz and 67 32-GHz tipping-curve data sets acquired between December 1993 and May 1995 and compares the results with those inferred from a surface model using input meteorological data and from water vapor radiometer (WVR) data. The general results are that, for a selected subset of tip curves, (1) the BWG tipping-curve atmospheric temperatures are in good agreement with those determined from WVR data (the average difference is 0.06 +/- 0.64 K at 32 GHz) and (2) the surface-model average values are biased 3.6 K below those of the BWG and WVR at 32 GHz.
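A minimal sketch of reducing one tipping curve, assuming a flat-atmosphere sec(z) airmass model and an effective atmospheric physical temperature of roughly 265 K (both common simplifications, not necessarily the article's exact procedure):

```python
import numpy as np

# Hypothetical 32-GHz operating temperatures (K) at several elevation angles (deg).
elev = np.array([90, 60, 45, 30, 20, 15, 10], float)
t_op = np.array([70.1, 71.6, 73.1, 76.4, 80.9, 84.9, 92.5])

airmass = 1.0 / np.sin(np.radians(elev))     # flat-atmosphere (sec z) approximation
# Linear tip model: T_op = T_0 + T_atm * airmass, where the slope T_atm
# approximates the zenith atmospheric noise-temperature contribution.
t_atm, t_0 = np.polyfit(airmass, t_op, 1)
print(f"zenith atmosphere ~ {t_atm:.2f} K; elevation-independent terms ~ {t_0:.1f} K")

# Atmospheric loss factor for referring efficiencies to zero atmosphere, from
# T_atm = T_phys * (1 - 1/L) with an assumed T_phys of ~265 K.
loss = 265.0 / (265.0 - t_atm)
print(f"zenith atmospheric loss factor ~ {loss:.4f}")
```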
NASA Astrophysics Data System (ADS)
Ghnimi, Thouraya; Hassini, Lamine; Bagane, Mohamed
2016-12-01
The aim of this work is to determine the desorption isotherms and the drying kinetics of bay laurel leaves (Laurus nobilis L.). The desorption isotherms were measured at three temperature levels (50, 60 and 70 °C) and at water activities ranging from 0.057 to 0.88, using the static gravimetric method. Five sorption models were used to fit the experimental desorption isotherm data. It was found that the Kuhn model offers the best fit to the experimental moisture isotherms over the investigated ranges of temperature and water activity. The net isosteric heat of water desorption was evaluated using the Clausius-Clapeyron equation and was then best correlated to equilibrium moisture content by the empirical Tsami equation. Thin-layer convective drying curves of bay laurel leaves were obtained for temperatures of 45, 50, 60 and 70 °C, relative humidities of 5, 15, 30 and 45 %, and air velocities of 1, 1.5 and 2 m/s. A nonlinear Levenberg-Marquardt regression procedure was used to fit the drying curves with five semi-empirical mathematical models available in the literature; R² and χ² were used to evaluate the goodness of fit of the models to the data. Based on the experimental drying curves, the drying characteristic curve (DCC) was established and fitted with a third-degree polynomial function. It was found that the Midilli-Kucuk model was the best semi-empirical model describing the thin-layer drying kinetics of bay laurel leaves. The effective moisture diffusivity and activation energy of bay laurel leaves were also identified.
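A minimal sketch of the Midilli-Kucuk fit, with invented moisture-ratio data standing in for one drying run:

```python
import numpy as np
from scipy.optimize import curve_fit

def midilli_kucuk(t, a, k, n, b):
    # Midilli-Kucuk thin-layer model: MR(t) = a * exp(-k * t**n) + b * t
    return a * np.exp(-k * t ** n) + b * t

# Invented moisture-ratio data for one drying run (minutes, dimensionless MR).
t = np.array([0, 10, 20, 40, 60, 90, 120, 180, 240], float)
mr = np.array([1.00, 0.82, 0.67, 0.46, 0.32, 0.19, 0.12, 0.05, 0.02])

popt, _ = curve_fit(midilli_kucuk, t, mr, p0=(1.0, 0.05, 1.0, 0.0),
                    bounds=([0.5, 1e-4, 0.3, -0.01], [1.5, 1.0, 2.5, 0.01]))
pred = midilli_kucuk(t, *popt)
r2 = 1.0 - np.sum((mr - pred) ** 2) / np.sum((mr - mr.mean()) ** 2)
print("a, k, n, b =", np.round(popt, 4), " R^2 = %.4f" % r2)
```

Here scipy's bounded least squares stands in for the Levenberg-Marquardt procedure the study used, and R² plays the same goodness-of-fit role.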
The relationship between offspring size and fitness: integrating theory and empiricism.
Rollinson, Njal; Hutchings, Jeffrey A
2013-02-01
How parents divide the energy available for reproduction between size and number of offspring has a profound effect on parental reproductive success. Theory indicates that the relationship between offspring size and offspring fitness is of fundamental importance to the evolution of parental reproductive strategies: this relationship predicts the optimal division of resources between size and number of offspring, it describes the fitness consequences for parents that deviate from optimality, and its shape can predict the most viable type of investment strategy in a given environment (e.g., conservative vs. diversified bet-hedging). Many previous attempts to estimate this relationship and the corresponding value of optimal offspring size have been frustrated by a lack of integration between theory and empiricism. In the present study, we draw from C. Smith and S. Fretwell's classic model to explain how a sound estimate of the offspring size-fitness relationship can be derived with empirical data. We evaluate what measures of fitness can be used to model the offspring size-fitness curve and optimal size, as well as which statistical models should and should not be used to estimate offspring size-fitness relationships. To construct the fitness curve, we recommend that offspring fitness be measured as survival up to the age at which the instantaneous rate of offspring mortality becomes random with respect to initial investment. Parental fitness is then expressed in ecologically meaningful, theoretically defensible, and broadly comparable units: the number of offspring surviving to independence. Although logistic and asymptotic regression have been widely used to estimate offspring size-fitness relationships, the former provides relatively unreliable estimates of optimal size when offspring survival and sample sizes are low, and the latter is unreliable under all conditions. We recommend that the Weibull-1 model be used to estimate this curve because it provides modest improvements in prediction accuracy under experimentally relevant conditions.
NASA Astrophysics Data System (ADS)
Lyon, Richard F.
2011-11-01
A cascade of two-pole-two-zero filters with level-dependent pole and zero dampings, with few parameters, can provide a good match to human psychophysical and physiological data. The model has been fitted to data on detection threshold for tones in notched-noise masking, including bandwidth and filter shape changes over a wide range of levels, and has been shown to provide better fits with fewer parameters compared to other auditory filter models such as gammachirps. Originally motivated as an efficient machine implementation of auditory filtering related to the WKB analysis method of cochlear wave propagation, such filter cascades also provide good fits to mechanical basilar membrane data, and to auditory nerve data, including linear low-frequency tail response, level-dependent peak gain, sharp tuning curves, nonlinear compression curves, level-independent zero-crossing times in the impulse response, realistic instantaneous frequency glides, and appropriate level-dependent group delay even with minimum-phase response. As part of exploring different level-dependent parameterizations of such filter cascades, we have identified a simple sufficient condition for stable zero-crossing times, based on the shifting property of the Laplace transform: simply move all the s-domain poles and zeros by equal amounts in the real-s direction. Such pole-zero filter cascades are efficient front ends for machine hearing applications, such as music information retrieval, content identification, speech recognition, and sound indexing.
The dark matter distribution of NGC 5921
NASA Astrophysics Data System (ADS)
Ali, Israa Abdulqasim Mohammed; Hashim, Norsiah; Abidin, Zamri Zainal
2018-04-01
We used neutral atomic hydrogen data from the Very Large Array for the spiral galaxy NGC 5921, with z = 0.0045 at a distance of 22.4 Mpc, to investigate the nature of dark matter. The investigation was based on two theories, namely dark matter and Modified Newtonian Dynamics (MOND). We present the kinematic analysis of the rotation curve with two models of dark matter, namely the Burkert and NFW profiles. The results revealed that the NFW halo model can reproduce the observed rotation curve, with χ²_red ≈ 1, while the Burkert model is unable to fit the observational data. Therefore, the dark matter density profile of NGC 5921 can be represented as a cuspy halo. We also investigated the observed rotation curve of NGC 5921 with MOND, along with plausible assumptions about the baryonic matter and the distance. We note that MOND is still incapable of reproducing the observed rotation curve of the galaxy.
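A minimal sketch of fitting an NFW rotation curve to invented H I rotation-curve points; the halo is parameterized by the characteristic density ρ0 and scale radius rs:

```python
import numpy as np
from scipy.optimize import curve_fit

G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def v_nfw(r, rho0, rs):
    # NFW enclosed mass: M(r) = 4 pi rho0 rs^3 * [ln(1 + x) - x / (1 + x)], x = r/rs
    x = r / rs
    m = 4.0 * np.pi * rho0 * rs ** 3 * (np.log(1.0 + x) - x / (1.0 + x))
    return np.sqrt(G * m / r)

# Invented H I rotation-curve points (radius in kpc, speed in km/s).
r = np.array([2, 4, 6, 8, 10, 14, 18, 22], float)
v = np.array([95, 120, 135, 142, 147, 150, 151, 150], float)
v_err = np.full_like(v, 5.0)

popt, _ = curve_fit(v_nfw, r, v, p0=(1e7, 10.0), sigma=v_err, absolute_sigma=True)
chi2_red = np.sum(((v - v_nfw(r, *popt)) / v_err) ** 2) / (r.size - 2)
print("rho0 = %.2e Msun/kpc^3, rs = %.1f kpc, reduced chi^2 = %.2f"
      % (popt[0], popt[1], chi2_red))
```

(A full analysis would also subtract the baryonic contribution before fitting the halo; this toy fit treats the observed curve as halo-dominated.)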
On the Methodology of Studying Aging in Humans
1961-01-01
Prediction of death rates: the relation of death rate to age has been extensively studied for over 100 years. As an illustration, recent death rates for… log death rates appear to be linear, and the simpler Gompertz curve fits closely. While on this subject of the Makeham-Gompertz function, it should be… fitting the Makeham-Gompertz curve to 5-year age-specific death rates. Each fitting provided estimates of the parameters a, β, and log c for each of the five-year…
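A minimal sketch of fitting the Makeham-Gompertz hazard to invented 5-year age-specific death rates:

```python
import numpy as np
from scipy.optimize import curve_fit

def makeham(age, a, b, c):
    # Makeham-Gompertz hazard: death rate = a + b * c**age
    return a + b * c ** age

# Invented 5-year age-specific death rates (per 1000 person-years).
age = np.array([32.5, 37.5, 42.5, 47.5, 52.5, 57.5, 62.5, 67.5])
rate = np.array([1.6, 2.1, 3.0, 4.6, 7.4, 12.1, 19.8, 32.4])

(a, b, c), _ = curve_fit(makeham, age, rate, p0=(0.5, 0.05, 1.1))
print(f"a = {a:.3f}, b = {b:.5f}, log c = {np.log10(c):.4f}")
```

When the age-independent term a is negligible, log death rates are linear in age and the simpler Gompertz curve suffices, as the report notes.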
NASA Astrophysics Data System (ADS)
Julianto, E. A.; Suntoro, W. A.; Dewi, W. S.; Partoyo
2018-03-01
Climate change has been reported to exacerbate land resource degradation, including soil fertility decline. Appropriate validation of soil fertility evaluation models could reduce the risk of climate change effects on plant cultivation. This study aims to assess the validity of soil fertility evaluation models using a graphical approach. The models evaluated were the Indonesian Soil Research Center (PPT) version, the FAO Unesco version, and the Kyuma version. Each model was then correlated with rice production (dry grain weight, GKP). The goodness of fit of each model can be tested to evaluate its quality and validity, along with the coefficient of determination (R²). This research used the Eviews 9 programme with a graphical approach. The analysis produces three curves: actual, fitted, and residual. If the actual and fitted curves are widely separated or irregular, the quality of the model is poor, or many other factors are still not included in the model (large residuals), and conversely. Indeed, if the actual and fitted curves show exactly the same shape, all factors have been included in the model. Modification of the standard soil fertility evaluation models can improve the quality and validity of a model.
Measurement of regional cerebral blood flow with copper-62-PTSM and a three-compartment model.
Okazawa, H; Yonekura, Y; Fujibayashi, Y; Mukai, T; Nishizawa, S; Magata, Y; Ishizu, K; Tamaki, N; Konishi, J
1996-07-01
We evaluated quantitatively 62Cu-labeled pyruvaldehyde bis(N4-methylthiosemicarbazone) copper II (62Cu-PTSM) as a brain perfusion tracer for positron emission tomography (PET). For quantitative measurement, the octanol extraction method is needed to correct the arterial radioactivity when estimating the lipophilic input function, but the procedure is not practical for clinical studies. To measure regional cerebral blood flow (rCBF) with 62Cu-PTSM and simple arterial blood sampling, a standard curve of the octanol extraction ratio and a three-compartment model were applied. We performed both 15O-labeled water PET and 62Cu-PTSM PET with dynamic data acquisition and arterial sampling in six subjects. Data obtained in 10 subjects studied previously were used for the standard octanol extraction curve. Arterial activity was measured and corrected to obtain the true input function using the standard curve. Graphical analysis (Gjedde-Patlak plot), with the data for each subject fitted by a straight regression line, suggested that 62Cu-PTSM can be analyzed with a three-compartment model with negligible K4. Using this model, K1-K3 were estimated from curve fitting of the cerebral time-activity curve and the corrected input function. The fractional uptake of 62Cu-PTSM was corrected to rCBF with the individual steady-state extraction calculated from K1-K3. The influx rates (Ki) obtained from the three-compartment model and graphical analyses were compared to validate the model. A comparison of rCBF values obtained from the 62Cu-PTSM and 15O-water studies demonstrated excellent correlation. The results suggest the potential feasibility of quantifying cerebral perfusion with 62Cu-PTSM using dynamic PET and simple arterial sampling.
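A minimal sketch of the Gjedde-Patlak graphical analysis on synthetic dynamic data; the input function, vascular term, and true Ki are invented:

```python
import numpy as np

# Synthetic dynamic PET data: corrected arterial input Ca(t) and tissue Ct(t).
t = np.linspace(0.25, 10.0, 40)                 # minutes
ca = 100.0 * np.exp(-0.3 * t) + 20.0            # invented lipophilic input function
ki_true = 0.35                                  # invented influx rate
int_ca = np.cumsum(ca) * (t[1] - t[0])          # running integral of the input
ct = ki_true * int_ca + 5.0 * ca                # irreversible uptake + vascular term

# Gjedde-Patlak plot: Ct/Ca versus integral(Ca)/Ca becomes linear when efflux
# (K4) is negligible, and the slope of that line is the influx rate Ki.
x = int_ca / ca
y = ct / ca
late = x > 0.3 * x.max()                        # use the late, linear portion
ki, intercept = np.polyfit(x[late], y[late], 1)
print(f"estimated Ki = {ki:.3f} (true value {ki_true})")
```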
GLOBAL ANALYSIS OF KOI-977: SPECTROSCOPY, ASTEROSEISMOLOGY, AND PHASE-CURVE ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirano, Teruyuki; Sato, Bun'ei; Kobayashi, Atsushi
2015-01-20
We present a global analysis of KOI-977, one of the planet host candidates detected by Kepler. The Kepler Input Catalog (KIC) reports that KOI-977 is a red giant, for which few close-in planets have been discovered. Our global analysis involves spectroscopic and asteroseismic determinations of stellar parameters (e.g., mass and radius) and radial velocity (RV) measurements. Our analyses reveal that KOI-977 is indeed a red giant, possibly in the red clump, but its estimated radius (≳ 20 R_☉ = 0.093 AU) is much larger than KOI-977.01's orbital distance (∼0.027 AU) estimated from its period (P_orb ∼ 1.35 days) and the host star's mass. RV measurements show a small variation, which also contradicts the amplitude of the ellipsoidal variations seen in the light curve folded with KOI-977.01's period. Therefore, we conclude that KOI-977.01 is a false positive, meaning that the red giant, for which we measured the radius and RVs, is different from the object that produces the transit-like signal (i.e., an eclipsing binary). On the basis of this assumption, we also perform a light-curve analysis including the modeling of transits/eclipses and phase-curve variations, adopting various values for the dilution factor D, which is defined as the flux ratio between the red giant and the eclipsing binary. Fitting the whole folded light curve as well as individual transits in the short-cadence data simultaneously, we find that the estimated mass and radius ratios of the eclipsing binary are consistent with those of a solar-type star and a late-type star (e.g., an M dwarf) for D ≳ 20.
Physical fitness reference standards in fibromyalgia: The al-Ándalus project.
Álvarez-Gallardo, I C; Carbonell-Baeza, A; Segura-Jiménez, V; Soriano-Maldonado, A; Intemann, T; Aparicio, V A; Estévez-López, F; Camiletti-Moirón, D; Herrador-Colmenero, M; Ruiz, J R; Delgado-Fernández, M; Ortega, F B
2017-11-01
We aimed (1) to report age-specific physical fitness levels in people with fibromyalgia from a representative sample in Andalusia; and (2) to compare the fitness levels of people with fibromyalgia with non-fibromyalgia controls. This cross-sectional study included 468 (21 men) patients with fibromyalgia and 360 (55 men) controls. The fibromyalgia sample was geographically representative of southern Spain. Physical fitness was assessed with the Senior Fitness Test battery plus the handgrip test. We applied the Generalized Additive Model for Location, Scale and Shape to calculate percentile curves for women and fitted mean curves using linear regression for men. Our results show that people with fibromyalgia performed worse in all fitness tests than controls (P < 0.001) in all age ranges (P < 0.001). This study provides a comprehensive description of age-specific physical fitness levels among patients with fibromyalgia and controls in a large sample of patients with fibromyalgia from the south of Spain. Physical fitness levels of people with fibromyalgia from Andalusia are very low in comparison with age-matched healthy controls. This information could be useful for correctly interpreting physical fitness assessments and for helping health care providers identify individuals at risk of losing physical independence.
[Experimental research and analysis on dielectric properties of blood in anemia mice].
Shen, Ben; Liang, Quiyan; Gao, Weiqi; You, Chu; Hong, Mengqi; Ma, Qing
2013-12-01
The conductivity and permittivity of blood in mice were measured by the AC electrical impedance method over the frequency range 0.1-100 MHz, and the changes in the Cole-Cole parameters of the dielectric spectra of blood from phenylhydrazine-induced anemia mice were then determined by numerical calculation and curve-fitting residual analysis of the Cole-Cole equation. The results showed that the hematocrit (Hct) of the mice injected with phenylhydrazine was significantly reduced; the permittivity (epsilon) spectrum of blood moved toward the low insulating region and the permittivity decreased; the conductivity (kappa) spectrum of blood moved toward the high conductivity zone and the conductivity increased; and the second characteristic frequency was lower than that in the normal group. The changes in the Cole-Cole parameters of the dielectric spectra of blood were phenylhydrazine dose dependent.
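The Cole-Cole fit referred to above can be reproduced in outline with standard tools. Below is a minimal sketch, assuming the single-dispersion Cole-Cole form and illustrative parameter values rather than the measured blood spectra; real and imaginary parts are stacked so an off-the-shelf least-squares routine can handle the complex-valued model.

```python
# Minimal sketch of a Cole-Cole fit (illustrative values, not the mouse data).
import numpy as np
from scipy.optimize import curve_fit

def cole_cole(omega, eps_inf, d_eps, tau, alpha):
    # Complex permittivity of the single-dispersion Cole-Cole model
    return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

def stacked(omega, eps_inf, d_eps, tau, alpha):
    # Stack real and imaginary parts so curve_fit works on real numbers
    eps = cole_cole(omega, eps_inf, d_eps, tau, alpha)
    return np.concatenate([eps.real, eps.imag])

f = np.logspace(5, 8, 60)                  # 0.1-100 MHz
omega = 2.0 * np.pi * f
rng = np.random.default_rng(0)
data = stacked(omega, 60.0, 2000.0, 1e-7, 0.15)   # assumed "true" parameters
data += 0.01 * np.abs(data) * rng.standard_normal(data.size)

popt, _ = curve_fit(stacked, omega, data, p0=(50.0, 1500.0, 5e-8, 0.1))
print("eps_inf, d_eps, tau, alpha =", popt)
```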
Andrzejewska, Anna; Kaczmarski, Krzysztof; Guiochon, Georges
2009-02-13
The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N=500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
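As a simple illustration of the fitting step, the sketch below generates noisy points from a Langmuir isotherm and recovers its parameters by nonlinear least squares; the saturation capacity and equilibrium constant used are arbitrary illustrative values, not those of the study.

```python
# Minimal sketch: fitting isotherm data to the Langmuir model.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, qs, b):
    # q = qs * b * c / (1 + b * c); qs = saturation capacity, b = eq. constant
    return qs * b * c / (1.0 + b * c)

c = np.linspace(0.1, 10.0, 25)             # concentrations (arbitrary units)
rng = np.random.default_rng(1)
q = langmuir(c, 10.0, 0.5) + 0.05 * rng.standard_normal(c.size)

(qs_fit, b_fit), _ = curve_fit(langmuir, c, q, p0=(5.0, 1.0))
print(f"qs = {qs_fit:.3f} (true 10), b = {b_fit:.3f} (true 0.5)")
```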
NASA Astrophysics Data System (ADS)
Qianxiang, Zhou
2012-07-01
It is very important to clarify the geometric characteristics of human body segments and to construct analysis models for ergonomic design and for the application of ergonomic virtual humans. The typical anthropometric data of 1122 Chinese men aged 20-35 years were collected using a three-dimensional laser scanner for the human body. According to the correlations between different parameters, curve fits were made between seven trunk parameters and ten body parameters with the SPSS 16.0 software. It can be concluded that hip circumference and shoulder breadth are the most important parameters in the models, and these two parameters correlate highly with the other body parameters. By comparison with conventional regression curves, the present regression equations based on the seven trunk parameters forecast the geometric dimensions of the head, neck, height and the four limbs with higher precision. This is therefore greatly valuable for the ergonomic design and analysis of man-machine systems, and the result will be very useful for astronaut body model analysis and application.
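A least-squares regression of the kind described, predicting a body dimension from hip circumference and shoulder breadth, can be sketched as follows; the coefficients and synthetic sample below are placeholders, not the survey data.

```python
# Minimal sketch: multiple linear regression on two trunk predictors.
import numpy as np

rng = np.random.default_rng(2)
hip = rng.normal(95.0, 5.0, 200)           # hip circumference, cm (synthetic)
shoulder = rng.normal(43.0, 2.0, 200)      # shoulder breadth, cm (synthetic)
height = 0.8 * hip + 1.5 * shoulder + 30.0 + rng.normal(0.0, 2.0, 200)

X = np.column_stack([hip, shoulder, np.ones_like(hip)])
coef, *_ = np.linalg.lstsq(X, height, rcond=None)
print("height = %.3f*hip + %.3f*shoulder + %.3f" % tuple(coef))
```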
Nongaussian distribution curve of heterophorias among children.
Letourneau, J E; Giroux, R
1991-02-01
The purpose of this study was to measure the distribution curves of horizontal and vertical phorias among children. Kolmogorov-Smirnov goodness-of-fit tests showed that these distribution curves were not Gaussian among (N = 2048) 6- to 13-year-old children. The distribution curves of horizontal phoria at far and of vertical phorias at far and at near were leptokurtic; the distribution curve of horizontal phoria at near was platykurtic. No variation of the distribution curve of heterophorias with age was observed. Comparisons of any individual findings with the general distribution curve should take the non-Gaussian distribution curve of heterophorias into account.
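The normality check and kurtosis classification can be illustrated with SciPy, as in the sketch below; the simulated heavy-tailed sample stands in for the phoria measurements, and testing against a normal with parameters estimated from the same sample is a simplification.

```python
# Minimal sketch: Kolmogorov-Smirnov normality test plus kurtosis sign.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
phoria = rng.standard_t(df=4, size=2048)   # heavy tails -> leptokurtic

z = (phoria - phoria.mean()) / phoria.std(ddof=1)
stat, p = stats.kstest(z, "norm")
print(f"KS statistic = {stat:.4f}, p = {p:.4g}")
# positive excess kurtosis -> leptokurtic, negative -> platykurtic
print(f"excess kurtosis = {stats.kurtosis(phoria):.3f}")
```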
Direct Analysis of JV-Curves Applied to an Outdoor-Degrading CdTe Module (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, D; Kurtz, S.; Ulbrich, C.
2014-03-01
We present the application of a phenomenological four-parameter equation to fit and analyze regularly measured current density-voltage (JV) curves of a CdTe module during 2.5 years of outdoor operation. The parameters are physically meaningful, i.e., the short-circuit current density Jsc, the open-circuit voltage Voc, and the differential resistances Rsc and Roc. For the chosen module, the fill factor (FF) degradation outweighs the degradation of Jsc and Voc. Interestingly, with outdoor exposure, not only the conductance at short circuit, Gsc, increases but also the Gsc(Jsc)-dependence. This is well explained by an increase in voltage-dependent charge carrier collection in CdTe.
On cyclic yield strength in definition of limits for characterisation of fatigue and creep behaviour
NASA Astrophysics Data System (ADS)
Gorash, Yevgen; MacKenzie, Donald
2017-06-01
This study proposes cyclic yield strength as a potential characteristic of safe design for structures operating under fatigue and creep conditions. Cyclic yield strength is defined on a cyclic stress-strain curve, while monotonic yield strength is defined on a monotonic curve. Both strength values are identified using a two-step procedure of fitting the experimental stress-strain curves with the Ramberg-Osgood and Chaboche material models. A typical S-N curve in the stress-life approach for fatigue analysis has a distinctive minimum stress lower bound, the fatigue endurance limit. Comparison of cyclic strength and fatigue limit reveals that they are approximately equal. Thus, safe fatigue design is guaranteed in the purely elastic domain defined by the cyclic yielding. A typical long-term strength curve in the time-to-failure approach for creep analysis has two inflections corresponding to the cyclic and monotonic strengths. These inflections separate three domains on the long-term strength curve, which are characterised by different creep fracture modes and creep deformation mechanisms. Therefore, safe creep design is guaranteed in the linear creep domain with brittle failure mode defined by the cyclic yielding. These assumptions are confirmed using three structural steels for normal and high-temperature applications. The advantage of using cyclic yield strength for characterisation of fatigue and creep strength is the relatively quick experimental identification. The total duration of cyclic tests for identifying a cyclic stress-strain curve is much less than the typical durations of fatigue and creep rupture tests at stress levels around the cyclic yield strength.
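The first step of the two-step identification, fitting a monotonic stress-strain curve with a Ramberg-Osgood law, can be sketched as below; the modulus and hardening constants are illustrative assumptions, and the cyclic (Chaboche) step is omitted.

```python
# Minimal sketch: Ramberg-Osgood fit, strain = s/E + (s/K)**(1/n).
import numpy as np
from scipy.optimize import curve_fit

E = 200e3                                   # assumed Young's modulus, MPa

def ramberg_osgood(stress, K, n):
    return stress / E + (stress / K) ** (1.0 / n)

stress = np.linspace(50.0, 600.0, 40)       # MPa
rng = np.random.default_rng(4)
strain = ramberg_osgood(stress, 900.0, 0.12)
strain *= 1.0 + 0.01 * rng.standard_normal(stress.size)

(K_fit, n_fit), _ = curve_fit(ramberg_osgood, stress, strain, p0=(800.0, 0.1))
print(f"K = {K_fit:.1f} MPa, n = {n_fit:.4f}")
```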
Slime Analysis of Painted Steel Panels Immersed in Biscayne Bay, Miami Beach, Florida.
1981-03-30
site exploratory tests on materials under consideration for water. Since the sample panels are curved to fit the heat exchangers was undertaken to... tested in this program were standard and experimental Navy materials and a selection of proprietary coatings supplied by commercial manufacturers. Navy...of marine microbial slime fouling films. Application is described to fouling of metal heat exchanger pipe in the Ocean Thermal Energy Conversion
UTM: Universal Transit Modeller
NASA Astrophysics Data System (ADS)
Deeg, Hans J.
2014-12-01
The Universal Transit Modeller (UTM) is a light-curve simulator for all kinds of transiting or eclipsing configurations between arbitrary numbers of several types of objects, which may be stars, planets, planetary moons, and planetary rings. A separate fitting program, UFIT (Universal Fitter) is part of the UTM distribution and may be used to derive best fits to light-curves for any set of continuously variable parameters. UTM/UFIT is written in IDL code and its source is released in the public domain under the GNU General Public License.
Investigation of skin structures based on infrared wave parameter indirect microscopic imaging
NASA Astrophysics Data System (ADS)
Zhao, Jun; Liu, Xuefeng; Xiong, Jichuan; Zhou, Lijuan
2017-02-01
Detailed imaging and analysis of skin structures are becoming increasingly important in modern healthcare and clinical diagnosis. Nanometer-resolution imaging techniques such as SEM and AFM can damage the sample and cannot measure the whole skin structure from the surface through the epidermis and dermis to the subcutaneous layer. Conventional optical microscopy has the highest imaging efficiency, flexibility in on-site applications, and lowest cost in manufacturing and usage, but its image resolution is too low to be accepted for biomedical analysis. Infrared parameter indirect microscopic imaging (PIMI) uses an infrared laser as the light source because of its high transmission in skin. The polarization of the optical wave through the skin sample was modulated while the variation of the optical field was observed at the imaging plane. The intensity variation curve of each pixel was fitted to extract the near-field polarization parameters and form indirect images. During the through-skin light modulation and image retrieval process, the curve fitting removes the blurring scattering from neighboring pixels and keeps only the field variations related to local skin structures. By using infrared PIMI, we can break the diffraction limit and bring wide-field optical image resolution to sub-200 nm, while taking advantage of the high transmission of infrared waves in skin structures.
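The per-pixel fitting step can be illustrated with a simple sinusoidal modulation model; the functional form, angles and parameter values below are assumptions for illustration, not the authors' exact PIMI model.

```python
# Minimal sketch: fitting one pixel's intensity-vs-polarization curve.
import numpy as np
from scipy.optimize import curve_fit

def modulation(theta, a, b, phi):
    # assumed model: I(theta) = a + b*sin(2*theta + phi)
    return a + b * np.sin(2.0 * theta + phi)

theta = np.linspace(0.0, np.pi, 18)         # modulation angles, rad
rng = np.random.default_rng(5)
pixel = modulation(theta, 0.6, 0.2, 0.8) + 0.01 * rng.standard_normal(theta.size)

(a, b, phi), _ = curve_fit(modulation, theta, pixel, p0=(0.5, 0.1, 0.0))
print(f"offset = {a:.3f}, amplitude = {b:.3f}, phase = {phi:.3f} rad")
```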
Guillermina Socías, María; Van Nieuwenhove, Guido; Murúa, María Gabriela; Willink, Eduardo; Liljesthröm, Gerardo Gustavo
2016-04-01
The soybean stalk weevil, Sternechus subsignatus Boheman 1836 (Coleoptera: Curculionidae), is a very serious soybean pest in the Neotropical region. Both adults and larvae feed on soybean, causing significant yield losses. Adult survival was evaluated during three soybean growing seasons under controlled environmental conditions. A survival analysis was performed using a parametric survival fit approach in order to generate survival curves and obtain information that could help optimize integrated management strategies for this weevil pest. Sex of the weevils, crop season, fortnight in which weevils emerged, and their interaction were studied regarding their effect on adult survival. The results showed that females lived longer than males, but both genders were actually long-lived, reaching 224 and 176 d, respectively. Mean lifetime (l50) was 121.88±4.56 d for females and 89.58±2.72 d for males. Although variations were observed in adult longevities among emergence fortnights and soybean seasons, only in December and January fortnights of the 2007–2008 season and December fortnights of 2009–2010 did the statistically longest and shortest longevities occur, respectively. Survivorship data (lx) of adult females and males were fitted to the Weibull frequency distribution model. The survival curve was type I for both sexes, which indicated that mortality corresponded mostly to old individuals.
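Fitting survivorship data to a Weibull survival function can be sketched as follows; the ages and parameters are illustrative, with a shape parameter above 1 producing the type I curve reported.

```python
# Minimal sketch: Weibull fit to survivorship (lx) data.
import numpy as np
from scipy.optimize import curve_fit

def weibull_surv(t, scale, shape):
    # S(t) = exp(-(t/scale)**shape); shape > 1 -> type I (old-age) mortality
    return np.exp(-((t / scale) ** shape))

t = np.linspace(1.0, 200.0, 40)             # age, days
rng = np.random.default_rng(6)
lx = np.clip(weibull_surv(t, 120.0, 3.0) + 0.02 * rng.standard_normal(t.size), 0.0, 1.0)

(scale, shape), _ = curve_fit(weibull_surv, t, lx, p0=(100.0, 2.0))
print(f"scale = {scale:.1f} d, shape = {shape:.2f}")
```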
FTIR Analysis of Functional Groups in Aerosol Particles
NASA Astrophysics Data System (ADS)
Shokri, S. M.; McKenzie, G.; Dransfield, T. J.
2012-12-01
Secondary organic aerosols (SOA) are suspensions of particulate matter composed of compounds formed from chemical reactions of organic species in the atmosphere. Atmospheric particulate matter can have impacts on climate, the environment, and human health. Standardized techniques to analyze the characteristics and composition of complex secondary organic aerosols are necessary to further investigate the formation of SOA and provide a better understanding of the reaction pathways of organic species in the atmosphere. While Aerosol Mass Spectrometry (AMS) can provide detailed information about the elemental composition of a sample, it reveals little about the chemical moieties which make up the particles. This work probes aerosol particles deposited on Teflon filters using FTIR, based on the protocols of Russell et al. (Journal of Geophysical Research - Atmospheres, 114, 2009) and the spectral fitting algorithm of Takahama et al. (submitted, 2012). To validate the necessary calibration curves for the analysis of complex samples, primary aerosols of key compounds (e.g., citric acid, ammonium sulfate, sodium benzoate) were generated, and the accumulated masses of the aerosol samples were related to their IR absorption intensity. These validated calibration curves were then used to classify and quantify functional groups in SOA samples generated in chamber studies by MIT's Kroll group. The fitting algorithm currently quantifies the following functionalities: alcohols, alkanes, alkenes, amines, aromatics, carbonyls and carboxylic acids.
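A calibration curve of the sort validated here is, in outline, a linear fit of integrated band absorbance against deposited mass, as in the sketch below with placeholder numbers.

```python
# Minimal sketch: linear mass-vs-absorbance calibration and its inverse use.
import numpy as np

mass = np.array([10.0, 20.0, 40.0, 80.0, 160.0])       # ug on filter
area = np.array([0.021, 0.043, 0.079, 0.165, 0.318])   # integrated absorbance

slope, intercept = np.polyfit(mass, area, 1)
unknown_area = 0.120
print(f"estimated mass = {(unknown_area - intercept) / slope:.1f} ug")
```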
A JOINT CHANDRA AND SWIFT VIEW OF THE 2015 X-RAY DUST-SCATTERING ECHO OF V404 CYGNI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinz, S.; Corrales, L.; Neilsen, J.
2016-07-01
We present a combined analysis of the Chandra and Swift observations of the 2015 X-ray echo of V404 Cygni. Using a stacking analysis, we identify eight separate rings in the echo. We reconstruct the soft X-ray light curve of the 2015 June outburst using the high-resolution Chandra images and cross-correlations of the radial intensity profiles, indicating that about 70% of the outburst fluence occurred during the bright flare at the end of the outburst on MJD 57199.8. By deconvolving the intensity profiles with the reconstructed outburst light curve, we show that the rings correspond to eight separate dust concentrations with precise distance determinations. We further show that the column density of the clouds varies significantly across the field of view, with the centroid of most of the clouds shifted toward the Galactic plane, relative to the position of V404 Cyg, invalidating the assumption of uniform cloud column typically made in attempts to constrain dust properties from light echoes. We present a new XSPEC spectral dust-scattering model that calculates the differential dust-scattering cross section for a range of commonly used dust distributions and compositions and use it to jointly fit the entire set of Swift echo data. We find that a standard Mathis–Rumpl–Nordsieck model provides an adequate fit to the ensemble of echo data. The fit is improved by allowing steeper dust distributions, and models with simple silicate and graphite grains are preferred over models with more complex composition.
Takehira, Rieko; Momose, Yasunori; Yamamura, Shigeo
2010-10-15
A pattern-fitting procedure using an X-ray diffraction pattern was applied to the quantitative analysis of a binary system of crystalline pharmaceuticals in tablets. Orthorhombic crystals of isoniazid (INH) and mannitol (MAN) were used for the analysis. Tablets were prepared under various compression pressures using a direct compression method with various compositions of INH and MAN. Assuming that the X-ray diffraction pattern of the INH-MAN system consists of diffraction intensities from the respective crystals, the observed diffraction intensities were fitted to an analytic expression based on X-ray diffraction theory and separated into two intensities from the INH and MAN crystals by a nonlinear least-squares procedure. After separation, the INH contents were determined using the optimized normalization constants for INH and MAN. A correction parameter including all the factors that are beyond experimental control was required for quantitative analysis without a calibration curve. The pattern-fitting procedure made it possible to determine crystalline phases in the range of 10-90% (w/w) of the INH contents. Further, certain characteristics of the crystals in the tablets, such as preferred orientation, crystallite size, and lattice disorder, were determined simultaneously. This method can be adopted to analyze compounds whose crystal structures are known. It is a potentially powerful tool for the quantitative phase analysis and characterization of crystals in tablets and powders using X-ray diffraction patterns. Copyright 2010 Elsevier B.V. All rights reserved.
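The separation of an observed pattern into contributions from two known phases can be sketched with non-negative least squares; the Gaussian "reference patterns" below are synthetic stand-ins for the INH and MAN diffraction profiles.

```python
# Minimal sketch: two-phase pattern decomposition by non-negative least squares.
import numpy as np
from scipy.optimize import nnls

two_theta = np.linspace(5.0, 40.0, 700)

def peak(center, width=0.25):
    return np.exp(-0.5 * ((two_theta - center) / width) ** 2)

ref_a = peak(12.0) + 0.7 * peak(25.5)       # synthetic phase-A pattern
ref_b = peak(17.3) + 0.5 * peak(29.0)       # synthetic phase-B pattern
rng = np.random.default_rng(7)
observed = 0.6 * ref_a + 0.4 * ref_b + 0.005 * rng.standard_normal(two_theta.size)

coef, _ = nnls(np.column_stack([ref_a, ref_b]), observed)
print(f"phase A fraction ~ {coef[0] / coef.sum():.2%}")
```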
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angelini, G.; Lanza, E.; Rozza Dionigi, A.
1983-05-01
The measurement of cerebral blood flow (CBF) by extracranial detection of the radioactivity of 133Xe injected into an internal carotid artery has proved to be of considerable value for the investigation of cerebral circulation in conscious rabbits. Methods are described for calculating CBF from the 133Xe clearance curves, including exponential analysis (two-component model), the initial slope, and the stochastic method. The different methods of curve analysis were compared in order to evaluate their fit to the theoretical model. The initial slope and stochastic methods, compared with the biexponential model, underestimate the CBF by 35% and 46%, respectively. Furthermore, the validity of recording the clearance curve for 10 min was tested by comparing these CBF values with those obtained from the whole curve. CBF values calculated with the shortened procedure are overestimated by 17%. A correlation exists between the "10 min" CBF values and the CBF calculated from the whole curve; in spite of that, the values are not accurate for limited animal populations or for single animals. The extent of the two main compartments into which the CBF is divided was also measured. There is no correlation between CBF values and the extent of the relative compartment. This fact suggests that these two parameters correspond to different biological entities.
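The comparison of methods can be illustrated by fitting a two-compartment (biexponential) clearance curve and contrasting it with an initial-slope estimate; rate constants and weights below are illustrative, not the rabbit data.

```python
# Minimal sketch: biexponential clearance fit vs. initial-slope estimate.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2):
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

t = np.linspace(0.0, 15.0, 90)              # minutes
rng = np.random.default_rng(8)
counts = biexp(t, 0.7, 0.8, 0.3, 0.08) + 0.005 * rng.standard_normal(t.size)

(a1, k1, a2, k2), _ = curve_fit(biexp, t, counts, p0=(0.5, 0.5, 0.5, 0.05))
mean_rate = (a1 * k1 + a2 * k2) / (a1 + a2)           # weight-averaged rate

slope0 = np.polyfit(t[:6], np.log(counts[:6]), 1)[0]  # initial-slope method
print(f"biexponential mean rate {mean_rate:.3f}/min, initial slope {-slope0:.3f}/min")
```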
Gao, C Q; Yang, J X; Chen, M X; Yan, H C; Wang, X Q
2016-04-01
Two experiments were conducted to fit growth curves and determine age-related changes in carcass characteristics, organs, serum biochemical parameters, and gene expression of intestinal nutrient transporters in the domestic pigeon (Columba livia). In experiment 1, the body weight (BW) of 30 pigeons was determined at 1, 3, 7, 14, 21, 28, and 35 days of age to fit growth curves and describe the growth of pigeons. In experiment 2, eighty-four 1-day-old squabs were grouped by weight into 7 groups. On d 1, 3, 7, 14, 21, 28, and 35, twelve birds from each group were randomly selected for slaughter and post-slaughter analysis. The results showed that BW of pigeons increased rapidly from d 1 to d 28 (a 25.7-fold increase), and then changed little until d 35. The Logistic, Gompertz, and Von Bertalanffy functions could all be fitted well to the growth curve of domestic pigeons (R^2 > 0.90), and the Gompertz model showed the highest R^2 value among the models (R^2 = 0.9997). The equation of the Gompertz model was Y = 507.72 × e^(-3.76·exp(-0.17t)) (Y = BW of pigeon (g); t = time (days)). In addition, breast meat yield (%) increased with age throughout the experiment, whereas leg meat yield (%) peaked on d 14. Serum total protein, albumin, globulin, and glucose concentrations increased with age, whereas serum uric acid concentration decreased (P < 0.05). Furthermore, the gene expression of nutrient transporters (y+LAT2, LAT1, B0AT1, PepT1, and NHE2) in the jejunum of pigeons increased with age. Correlation analysis showed that the gene expressions of B0AT1, PepT1, and NHE2 had positive correlations with BW (0.73
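The reported Gompertz fit can be reproduced in outline as below; the synthetic weights are generated near the published equation (A = 507.72, b = 3.76, k = 0.17), so only the fitting mechanics are illustrated.

```python
# Minimal sketch: Gompertz growth fit, BW(t) = A * exp(-b * exp(-k * t)).
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, b, k):
    return A * np.exp(-b * np.exp(-k * t))

t = np.array([1.0, 3.0, 7.0, 14.0, 21.0, 28.0, 35.0])  # age, days
rng = np.random.default_rng(9)
bw = gompertz(t, 507.72, 3.76, 0.17) * (1.0 + 0.02 * rng.standard_normal(t.size))

(A, b, k), _ = curve_fit(gompertz, t, bw, p0=(500.0, 3.0, 0.1))
print(f"BW(t) = {A:.1f} * exp(-{b:.2f} * exp(-{k:.3f} t))")
```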
NASA Technical Reports Server (NTRS)
Chamberlain, D. M.; Elliot, J. L.
1997-01-01
We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10^-4 of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10^-4, with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.
VizieR Online Data Catalog: Photometric analysis of contact binaries (Lapasset+ 1996)
NASA Astrophysics Data System (ADS)
Lapasset, E.; Gomez, M.; Farinas, R.
1996-09-01
We present BV light-curve synthetic analyses of three short-period contact (W UMa) binaries: HY Pavonis (P ≈ 0.35 days), AW Virginis (P ≈ 0.35 days), and BP Velorum (P ≈ 0.26 days). Different possible configurations for a wide range of mass ratios were explored in each case using the Wilson-Devinney code. The photometric parameters of the systems were determined from the synthetic light-curve solutions that best fit the observations. AW Vir has two components of very similar temperatures, so the subtype (A or W) remains undetermined. HY Pav and BP Vel are best modeled by W-type configurations, and the asymmetries in the light curves are reproduced by introducing cool spots on the more massive secondary components. Although BP Vel lies in the region of the open cluster Cr 173, its distance modulus, in principle, rules it out as a cluster member. (6 data files).
Direct Measurements of Interplanetary Dust Particles in the Vicinity of Earth
NASA Technical Reports Server (NTRS)
McCracken, C. W.; Alexander, W. M.; Dubin, M.
1961-01-01
The direct measurements made by the Explorer VIII satellite provide the first sound basis for analyzing all available direct measurements of the distribution of interplanetary dust particles. The model average distribution curve established by such an analysis departs significantly from that predicted by the (uncertain) extrapolation of results from meteor observations. A consequence of this difference is that the daily accretion of interplanetary particulate matter by the earth is now considered to be mainly dust particles of the direct measurements range of particle size. Almost all the available direct measurements obtained with microphone systems on rockets, satellites, and spacecraft fit directly on the distribution curve defined by Explorer VIII data. The lack of reliable datum points departing significantly from the model average distribution curve means that available direct measurements show no discernible evidence of an appreciable geocentric concentration of interplanetary dust particles.
NASA Technical Reports Server (NTRS)
Welker, Jean Edward
1991-01-01
Since the invention of maximum and minimum thermometers in the 18th century, diurnal temperature extrema have been recorded for air worldwide. At some stations, these extrema were also collected at various soil depths, and the behavior of these temperatures at a 10-cm depth at the Tifton Experimental Station in Georgia is presented. After a precipitation cooling event, the diurnal temperature maxima drop to a minimum value and then start a recovery to higher values (similar to thermal inertia). This recovery represents a measure of the response to heating as a function of soil moisture and soil properties. Eight different curves, including power and exponential forms, were fitted to a wide variety of data sets for different stations and years. Both power and exponential curve fits were consistently found to be statistically accurate least-squares representations of the raw recovery data. The predictive procedures used here were multivariate regression analyses, which are applicable to soils at a variety of depths besides the 10-cm depth presented.
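The comparison of power and exponential recovery fits can be sketched as below, with synthetic recovery values standing in for the post-precipitation temperature maxima.

```python
# Minimal sketch: comparing power-law and exponential recovery fits.
import numpy as np
from scipy.optimize import curve_fit

def power_law(d, a, b):
    return a * d ** b

def exponential(d, a, k, c):
    return c - a * np.exp(-k * d)

days = np.arange(1.0, 15.0)                 # days after the rain event
rng = np.random.default_rng(10)
recovery = exponential(days, 8.0, 0.35, 30.0) + 0.3 * rng.standard_normal(days.size)

for f, p0 in ((power_law, (20.0, 0.1)), (exponential, (5.0, 0.3, 28.0))):
    p, _ = curve_fit(f, days, recovery, p0=p0)
    sse = np.sum((recovery - f(days, *p)) ** 2)
    print(f"{f.__name__}: SSE = {sse:.2f}")
```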
Kumar, Saurabh; Amrutur, Bharadwaj; Asokan, Sundarrajan
2018-02-01
Fiber Bragg Grating (FBG) sensors have become popular for applications related to structural health monitoring, biomedical engineering, and robotics. However, for successful large-scale adoption, FBG interrogation systems are as important as sensor characteristics. Apart from accuracy, the required number of FBG sensors per fiber and the distance between the device in which the sensors are used and the interrogation system also influence the selection of the interrogation technique. For several measurement devices developed for applications in biomedical engineering and robotics, only a few sensors per fiber are required and the device is close to the interrogation system. For these applications, interrogation systems based on InGaAs linear detector arrays are a good choice. However, their resolution depends on the algorithms used for curve fitting. In this work, a detailed analysis is provided of how the choice of algorithm, using the Gaussian approximation for the FBG spectrum, and the number of pixels used for curve fitting affect the errors. The points where the maximum errors occur have been identified. All comparisons for wavelength-shift detection have been made against another interrogation system based on a tunable swept laser. It is shown that maximum errors occur when the wavelength shift is such that one new pixel is included for curve fitting. It is also shown that an algorithm with lower computational cost than the more popular methods using iterative nonlinear least-squares estimation can be used without loss of accuracy. The algorithm has been implemented on embedded hardware, and a speed-up of approximately six times has been observed.
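One low-cost, non-iterative alternative to iterative least squares, in the spirit of the algorithm discussed, fits a parabola to the logarithm of the brightest pixels: for a Gaussian peak, log intensity is quadratic, so the vertex gives the centre in closed form. The sketch below uses synthetic pixel data and is an assumption about the class of algorithm, not the authors' exact implementation.

```python
# Minimal sketch: closed-form Gaussian centre from a log-parabola fit.
import numpy as np

pixels = np.arange(512, dtype=float)
centre_true, width, amp = 203.37, 4.0, 1000.0
rng = np.random.default_rng(11)
spectrum = amp * np.exp(-0.5 * ((pixels - centre_true) / width) ** 2)
spectrum += rng.normal(0.0, 1.0, pixels.size)

peak = int(np.argmax(spectrum))
win = np.arange(peak - 3, peak + 4)         # 7 pixels around the maximum
a, b, _ = np.polyfit(win, np.log(spectrum[win]), 2)
centre_fit = -b / (2.0 * a)                 # vertex of the fitted parabola
print(f"true centre {centre_true:.3f}, fitted {centre_fit:.3f}")
```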
Joosen, Ronny V L; Kodde, Jan; Willems, Leo A J; Ligterink, Wilco; van der Plas, Linus H W; Hilhorst, Henk W M
2010-04-01
Over the past few decades seed physiology research has contributed to many important scientific discoveries and has provided valuable tools for the production of high-quality seeds. An important instrument for this type of research is the accurate quantification of germination; however, gathering cumulative germination data is a very laborious task that is often prohibitive to the execution of large experiments. In this paper we present the germinator package: a simple, highly cost-efficient and flexible procedure for high-throughput automatic scoring and evaluation of germination that can be implemented without the use of complex robotics. The germinator package contains three modules: (i) design of the experimental setup, with various options to replicate and randomize samples; (ii) automatic scoring of germination based on the color contrast between the protruding radicle and the seed coat on a single image; and (iii) curve fitting of cumulative germination data and the extraction, summary and visualization of the various germination parameters. The curve-fitting module enables analysis of general cumulative germination data and can be used for all plant species. We show that the automatic scoring system works for Arabidopsis thaliana and Brassica spp. seeds, but it is likely to be applicable to other species as well. In this paper we show the accuracy, reproducibility and flexibility of the germinator package. We have successfully applied it to evaluate natural variation for salt tolerance in a large population of recombinant inbred lines and were able to identify several quantitative trait loci for salt tolerance. Germinator is a low-cost package that allows the monitoring of several thousands of germination tests, several times a day, by a single person.
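The curve-fitting module's task, fitting a cumulative germination curve, can be sketched with a four-parameter Hill function; this functional form and the synthetic data are assumptions for illustration and may differ in detail from the package's implementation.

```python
# Minimal sketch: four-parameter Hill fit to cumulative germination data.
import numpy as np
from scipy.optimize import curve_fit

def hill(t, y0, ymax, t50, n):
    return y0 + (ymax - y0) * t ** n / (t50 ** n + t ** n)

t = np.linspace(1.0, 96.0, 30)              # hours after sowing
rng = np.random.default_rng(12)
germ = hill(t, 0.0, 0.95, 40.0, 6.0) + 0.02 * rng.standard_normal(t.size)

(y0, ymax, t50, n), _ = curve_fit(hill, t, germ, p0=(0.0, 1.0, 30.0, 4.0))
print(f"max germination = {ymax:.2f}, t50 = {t50:.1f} h, steepness n = {n:.1f}")
```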
An operational modal analysis method in frequency and spatial domain
NASA Astrophysics Data System (ADS)
Wang, Tong; Zhang, Lingmi; Tamura, Yukio
2005-12-01
A frequency and spatial domain decomposition method (FSDD) for operational modal analysis (OMA) is presented in this paper, which is an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. Finally, an enhanced power spectral density (PSD) is proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. Moreover, a simulation case and an application case are used to validate this method.
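The core of the method, singular value decomposition of the output PSD matrix at each frequency line, can be sketched as follows with a synthetic three-channel response; peaks in the first singular value indicate modes, and the corresponding singular vector approximates the mode shape. This is a generic SVD-of-PSD illustration, not the enhanced-PSD curve-fitting step itself.

```python
# Minimal sketch: SVD of the output PSD matrix over frequency (3 channels).
import numpy as np
from scipy.signal import butter, lfilter, csd

rng = np.random.default_rng(13)
fs, n, nper = 256.0, 16384, 1024
shape_vec = np.array([1.0, 0.6, -0.8])      # assumed mode shape

b, a = butter(2, [10.0 / (fs / 2), 14.0 / (fs / 2)], btype="band")
q = lfilter(b, a, rng.standard_normal(n))   # narrow-band "modal" response
y = np.outer(shape_vec, q) + 0.05 * rng.standard_normal((3, n))

G = np.empty((3, 3, nper // 2 + 1), dtype=complex)
for i in range(3):
    for j in range(3):
        f, G[i, j] = csd(y[i], y[j], fs=fs, nperseg=nper)

s1 = np.array([np.linalg.svd(G[:, :, k], compute_uv=False)[0] for k in range(f.size)])
k_pk = int(np.argmax(s1))
u, _, _ = np.linalg.svd(G[:, :, k_pk])
print(f"peak near {f[k_pk]:.1f} Hz; |mode shape| ~ {np.abs(u[:, 0]).round(2)}")
```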
Drop shape visualization and contact angle measurement on curved surfaces.
Guilizzoni, Manfredo
2011-12-01
The shape and contact angles of drops on curved surfaces are experimentally investigated. Image processing, spline fitting and numerical integration are used to extract the drop contour in a number of cross-sections. The three-dimensional surfaces which describe the surface-air and drop-air interfaces can be visualized, and a simple procedure to determine the equilibrium contact angle starting from measurements on curved surfaces is proposed. Contact angles on flat surfaces serve as a reference, and a procedure to measure them is proposed. This procedure is not as accurate as axisymmetric drop shape analysis algorithms, but it has the advantage of requiring only a side view of the drop-surface couple and no further information. It can therefore also be used for fluids with unknown surface tension, and there is no need to measure the drop volume. Examples of application of the proposed techniques to distilled water drops on gemstones confirm that they can be useful for drop shape analysis and contact angle measurement on three-dimensional sculptured surfaces. Copyright © 2011 Elsevier Inc. All rights reserved.
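The contour extraction and tangent estimation can be sketched with a parametric spline; the synthetic half-ellipse below stands in for edge points obtained from image processing.

```python
# Minimal sketch: parametric spline through contour points, slope at the end.
import numpy as np
from scipy.interpolate import splprep, splev

theta = np.linspace(0.0, np.pi, 50)
rng = np.random.default_rng(14)
x = np.cos(theta) + 0.002 * rng.standard_normal(theta.size)
y = 0.8 * np.sin(theta) + 0.002 * rng.standard_normal(theta.size)

tck, u = splprep([x, y], s=1e-4)            # smoothing spline through the contour
dx, dy = splev(1.0, tck, der=1)             # tangent at the contact-line end
angle = np.degrees(np.arctan2(abs(dy), abs(dx)))
print(f"apparent contact angle ~ {angle:.1f} deg")
```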
Zhang, Sa; Li, Zhou; Xin, Xue-Gang
2017-12-20
To achieve differential diagnosis of normal and malignant gastric tissues based on discrepancies in their dielectric properties using a support vector machine. The dielectric properties of normal and malignant gastric tissues at frequencies ranging from 42.58 to 500 MHz were measured by the coaxial probe method, and the Cole-Cole model was used to fit the measured data. Receiver-operating characteristic (ROC) curve analysis was used to evaluate the discrimination capability with respect to permittivity, conductivity, and Cole-Cole fitting parameters. A support vector machine was used for discriminating normal and malignant gastric tissues, and the discrimination accuracy was calculated using k-fold cross-validation. The area under the ROC curve was above 0.8 for permittivity at the 5 frequencies at the lower end of the measured frequency range. The support vector machine with the permittivity at all these 5 frequencies combined achieved the highest discrimination accuracy of 84.38%, with a MATLAB runtime of 3.40 s. Support vector machine-assisted diagnosis of human malignant gastric tissues based on dielectric properties is feasible.
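The classification step can be sketched with scikit-learn, using synthetic five-frequency permittivity features in place of the measured tissue data; cross-validated accuracy and an ROC AUC are computed in the spirit of the study's workflow.

```python
# Minimal sketch: SVM discrimination of two tissue classes plus ROC AUC.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(16)
normal = rng.normal(60.0, 5.0, (40, 5))     # permittivity at 5 frequencies
malignant = rng.normal(70.0, 5.0, (40, 5))
X = np.vstack([normal, malignant])
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="rbf", probability=True)
acc = cross_val_score(clf, X, y, cv=5).mean()
auc = roc_auc_score(y, clf.fit(X, y).predict_proba(X)[:, 1])  # in-sample AUC
print(f"5-fold CV accuracy = {acc:.2%}, ROC AUC (in-sample) = {auc:.3f}")
```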
Study of static and dynamic magnetic properties of Fe nanoparticles composited with activated carbon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pal, Satyendra Prakash, E-mail: sppal85@gmail.com; Department of Physical Sciences, Indian Institute of Science Education and Research, Mohali, Knowledge city, Sector81, SAS Nagar, Manauli-140306, Punjab; Kaur, Guratinder
2016-05-23
A nanocomposite of Fe nanoparticles with activated carbon has been synthesized to alter the magnetic spin-spin interaction and hence study the dilution effect on the static and dynamic magnetic properties of the Fe nanoparticle system. Transmission electron microscopy (TEM) images show spherical Fe nanoparticles dispersed in the carbon matrix with a 13.8 nm particle size. Temperature-dependent magnetization measurements do not show any blocking temperature right up to room temperature. The magnetic hysteresis curve, taken at 300 K, shows a small coercivity, and this small hysteresis indicates the presence of an energy barrier and inherent magnetization dynamics. A Langevin function fit of the hysteresis curve gives a particle size very close to that obtained from the TEM analysis. Magnetic relaxation data, taken at a temperature of 100 K, were fitted with a combination of two exponentially decaying functions. This diluted form of the nanoparticle system, which has particle sizes in the superparamagnetic limit, behaves like a dilute ensemble of superspins with a large magnetic anisotropy barrier.
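The Langevin fit mentioned above can be sketched as follows; the field range, temperature and particle moment are illustrative assumptions rather than the measured values.

```python
# Minimal sketch: Langevin fit M(B) = Ms*(coth(x) - 1/x), x = mu*B/(kB*T).
import numpy as np
from scipy.optimize import curve_fit

KB, T = 1.380649e-23, 300.0                 # J/K, K

def langevin(B, Ms, mu):
    x = mu * B / (KB * T)
    return Ms * (1.0 / np.tanh(x) - 1.0 / x)

B = np.linspace(0.001, 0.1, 60)             # applied field, T (avoid B = 0)
rng = np.random.default_rng(17)
M = langevin(B, 4.0e5, 2.0e-18) * (1.0 + 0.01 * rng.standard_normal(B.size))

(Ms, mu), _ = curve_fit(langevin, B, M, p0=(3.0e5, 1.0e-18))
print(f"Ms = {Ms:.3e} A/m, particle moment = {mu:.3e} A*m^2")
```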
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Tao; Li, Cheng; Huang, Can
Here, in order to solve the reactive power optimization problem for joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master-slave structure, and improves on traditional centralized modeling methods by alleviating the big-data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.
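The first step, fitting a cost function for each slave (distribution) model as seen from the master, can be sketched as a polynomial fit over sampled boundary conditions; the quadratic form and the voltage boundary variable are assumptions for illustration.

```python
# Minimal sketch: quadratic cost-curve fit over sampled boundary voltages.
import numpy as np

v = np.linspace(0.95, 1.05, 11)             # feeder-bus voltage samples
rng = np.random.default_rng(18)
# pretend each entry came from solving the slave optimization at that voltage
slave_cost = 3.2 - 5.9 * v + 2.9 * v ** 2 + 1e-4 * rng.standard_normal(v.size)

c2, c1, c0 = np.polyfit(v, slave_cost, 2)
print(f"cost(v) ~ {c2:.3f} v^2 {c1:+.3f} v {c0:+.3f}")
```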
NASA Astrophysics Data System (ADS)
Cai, Gaoshen; Wu, Chuanyu; Gao, Zepu; Lang, Lihui; Alexandrov, Sergei
2018-05-01
An elliptical warm/hot sheet bulging test under different temperatures and pressure rates was carried out to predict Al-alloy sheet forming limits during warm/hot sheet hydroforming. Using the relevant ultimate-strain formulas to process the experimental data, forming limit curves (FLCs) in the tension-tension state of strain (TTSS) region were obtained. Combined with the basic data obtained by uniaxial tensile tests under conditions equivalent to the bulging test, complete forming limit diagrams (FLDs) of the Al-alloy were established. Using a quadratic polynomial curve-fitting method, the material constants of the fitting function were calculated and a prediction model equation for the sheet metal forming limit was established, from which the corresponding forming limit curves in the TTSS region can be obtained. The bulging test and fitting results indicated that the sheet metal FLCs obtained were very accurate. The model equation can also be used to guide warm/hot sheet bulging tests.
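The quadratic-polynomial fit of a forming limit curve can be sketched as below; the strain pairs are illustrative placeholders, not the measured Al-alloy data.

```python
# Minimal sketch: quadratic fit of major strain vs. minor strain (FLC).
import numpy as np

minor = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])
major = np.array([0.30, 0.28, 0.27, 0.28, 0.30, 0.33])

a, b, c = np.polyfit(minor, major, 2)
print(f"FLC: e_major = {a:.3f} e_minor^2 {b:+.3f} e_minor {c:+.3f}")
```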
Yang, Xiaolan; Hu, Xiaolei; Xu, Bangtian; Wang, Xin; Qin, Jialin; He, Chenxiong; Xie, Yanling; Li, Yuanli; Liu, Lin; Liao, Fei
2014-06-17
A fluorometric titration approach was proposed for the calibration of the quantity of monoclonal antibody (mcAb) via the quenching of the fluorescence of tryptophan residues. It applies to purified mcAbs recognizing tryptophan-deficient epitopes, haptens nonfluorescent at 340 nm under excitation at 280 nm, or fluorescent haptens with excitation valleys near 280 nm and excitation peaks near 340 nm that serve as Förster-resonance-energy-transfer (FRET) acceptors of tryptophan. Titration probes were the epitopes/haptens themselves or conjugates of nonfluorescent haptens or tryptophan-deficient epitopes with FRET acceptors of tryptophan. Under excitation at 280 nm, titration curves were recorded as fluorescence specific to the FRET acceptors or to mcAbs at 340 nm. To quantify the binding site of a mcAb, a universal model considering both static and dynamic quenching by either type of probe was proposed for fitting to the titration curve. Fitting was straightforward for fluorescence specific to the FRET acceptors, but nonconvergence was encountered when fitting the fluorescence of mcAbs at 340 nm. As a solution, (a) the maximum of the absolute values of the first-order derivatives of a titration curve as fluorescence at 340 nm was estimated from the best-fit model for a probe level of zero, and (b) the molar quantity of the binding site of the mcAb was estimated via consecutive fitting to the same titration curve, utilizing such a maximum as an approximation of the slope for the linear response of fluorescence at 340 nm to quantities of the mcAb. This fluorometric titration approach proved effective with one mcAb for six-histidine and another for penicillin G.
Incorporating Nonstationarity into IDF Curves across CONUS from Station Records and Implications
NASA Astrophysics Data System (ADS)
Wang, K.; Lettenmaier, D. P.
2017-12-01
Intensity-duration-frequency (IDF) curves are widely used for the engineering design of storm-affected structures. Current practice is that IDF curves are based on observed precipitation extremes fit to a stationary probability distribution (e.g., the extreme value family). However, there is increasing evidence of nonstationarity in station records. We apply the Mann-Kendall trend test to over 1000 stations across the CONUS at a 0.05 significance level, and find that about 30% of the stations tested have significant nonstationarity for at least one duration (1-, 2-, 3-, 6-, 12-, 24-, and 48-hours). We fit the station data to a GEV distribution with time-varying location and scale parameters using a Bayesian methodology and compare the fit of stationary versus nonstationary GEV distributions to observed precipitation extremes. Within our fitted nonstationary GEV distributions, we compare distributions with a time-varying location parameter versus distributions with both time-varying location and scale parameters. For distributions with two time-varying parameters, we pay particular attention to instances where the location and scale trends have opposing directions. Finally, we use the mathematical framework based on the work of Koutsoyiannis to generate IDF curves from the fitted GEV distributions and discuss the implications that using time-varying parameters may have on simple scaling relationships. We apply the above methods to evaluate how frequency statistics based on a stationarity assumption compare to those that incorporate nonstationarity for both short- and long-term projects. Overall, we find that neglecting nonstationarity can lead to under- or over-estimates (depending on the trend for the given duration and region) of important statistics such as the design storm.
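A nonstationary GEV fit with a linearly time-varying location parameter can be sketched by direct likelihood maximization, as below with synthetic annual maxima; the trend magnitude and distribution parameters are illustrative.

```python
# Minimal sketch: nonstationary GEV fit with location mu(t) = mu0 + mu1*t.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(19)
years = np.arange(60, dtype=float)
x = genextreme.rvs(-0.1, loc=30.0 + 0.08 * years, scale=5.0, random_state=rng)

def nll(p):
    # negative log-likelihood of the time-varying-location GEV
    c, mu0, mu1, sig = p
    if sig <= 0.0:
        return np.inf
    return -genextreme.logpdf(x, c, loc=mu0 + mu1 * years, scale=sig).sum()

res = minimize(nll, x0=(-0.1, 30.0, 0.0, 5.0), method="Nelder-Mead")
print(f"fitted location trend = {res.x[2]:.3f} per year (true 0.08)")
```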
Motulsky, Harvey J; Brown, Ronald E
2006-01-01
Background Nonlinear regression, like linear regression, assumes that the scatter of data around the ideal curve follows a Gaussian or normal distribution. This assumption leads to the familiar goal of regression: to minimize the sum of the squares of the vertical or Y-value distances between the points and the curve. Outliers can dominate the sum-of-the-squares calculation and lead to misleading results. However, we know of no practical method for routinely identifying outliers when fitting curves with nonlinear regression. Results We describe a new method for identifying outliers when fitting data with nonlinear regression. We first fit the data using a robust form of nonlinear regression, based on the assumption that scatter follows a Lorentzian distribution. We devised a new adaptive method that gradually becomes more robust as the method proceeds. To define outliers, we adapted the false discovery rate approach to handling multiple comparisons. We then remove the outliers, and analyze the data using ordinary least-squares regression. Because the method combines robust regression and outlier removal, we call it the ROUT method. When analyzing simulated data, where all scatter is Gaussian, our method (falsely) detects one or more outliers in only about 1–3% of experiments. When analyzing data contaminated with one or several outliers, the ROUT method performs well at outlier identification, with an average False Discovery Rate less than 1%. Conclusion Our method, which combines a new method of robust nonlinear regression with a new method of outlier identification, identifies outliers from nonlinear curve fits with reasonable power and few false positives. PMID:16526949
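The two-stage idea, robust fit first and then outlier flagging, can be sketched with SciPy's Cauchy (Lorentzian) loss; the simple 3-sigma flag below stands in for the paper's false-discovery-rate rule.

```python
# Minimal sketch: robust (Lorentzian-loss) fit, then flag large residuals.
import numpy as np
from scipy.optimize import least_squares

def model(p, x):
    return p[0] * np.exp(-p[1] * x)

rng = np.random.default_rng(20)
x = np.linspace(0.0, 5.0, 40)
y = model((10.0, 0.8), x) + 0.1 * rng.standard_normal(x.size)
y[7] += 3.0                                 # inject one gross outlier

res = least_squares(lambda p: model(p, x) - y, x0=(5.0, 0.5), loss="cauchy")
resid = model(res.x, x) - y
scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust sigma
print("fit:", res.x.round(3), "outliers at:", np.where(np.abs(resid) > 3 * scale)[0])
```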
A search for periodicity in the x ray spectrum of black hole candidate A0620-00
NASA Technical Reports Server (NTRS)
Clark, George W.; Plaks, Kenneth
1991-01-01
The archived data from the SAS-3 observations of the X-ray nova A0620-00, the best of the stellar black hole candidates, were exhaustively examined for evidence of variable phenomena correlated with the orbital motion of the binary system of which it is a member. The original analysis of these data was completed before discovery of the binary companion and determination of the orbital period of the system. New interest was drawn to the task of a reexamination of the archive data by the recent discovery of the massive nature of the X-ray source through analysis of the Doppler variations and ellipsoidal light variations of the faint K-star companion by McClintock and Remillard. The archive research, carried out under the supervision of the principal investigator, was the topic of the thesis submitted to the MIT Department of Physics by Kenneth Plaks in partial fulfillment of the requirements for the degree of Master of Science. Plaks' effort was focused on the elimination of fluctuations in the data due to errors in attitude solutions and other extraneous causes. The first products of his work were long-term light curves of the X-ray intensities in the various energy channels as functions of time from the outbursts in August 1975 to quiescence approximately 6 months later. These curves are refined versions of the preliminary results published in 1976 (Matilsky et al. 1976). Smooth exponentials were fitted to these long-term light curves to provide the basis for detrending the data, thereby permitting a calculation of residuals derived by subtracting the fitted curve from the data. The residuals were then analyzed by Fourier analysis to search for variations with the period of the binary orbit, namely 7.75 hours. No evidence of an orbital periodicity was found. However, the refined light curve provides a much clearer picture of the outburst and subsequent decay of the X-ray luminosity. In fact, there were two outbursts, each followed by an exponential decay with similar time constants of about 25 days. Previous evidence of a three-oscillation variation with a 7.8 day period was confirmed. Substantial theoretical effort has been devoted to attempts to account for the decay characteristics as the result of the gradual consumption of an accretion disk by a stellar-mass black hole (e.g., Huang and Wheeler 1989). The improved decay curves will provide significant new constraints on the theoretical analyses.
Revisiting the Energy Budget of WASP-43b: Enhanced Day-Night Heat Transport
NASA Astrophysics Data System (ADS)
Keating, Dylan; Cowan, Nicolas B.
2017-11-01
The large day-night temperature contrast of WASP-43b has so far eluded explanation. We revisit the energy budget of this planet by considering the impact of reflected light on dayside measurements and the physicality of implied nightside temperatures. Previous analyses of the infrared eclipses of WASP-43b have assumed reflected light from the planet is negligible and can be ignored. We develop a phenomenological eclipse model including reflected light, thermal emission, and water absorption, and we use it to fit published Hubble and Spitzer eclipse data. We infer a near-infrared geometric albedo of 24% ± 1% and a cooler dayside temperature of 1483 ± 10 K. Additionally, we perform light curve inversion on the three published orbital phase curves of WASP-43b and find that each suggests unphysical, negative flux on the nightside. By requiring non-negative brightnesses at all longitudes, we correct the unphysical parts of the maps and obtain a much hotter nightside effective temperature of 1076 ± 11 K. The cooler dayside and hotter nightside suggest a heat recirculation efficiency of 51% for WASP-43b, essentially the same as for HD 209458b, another hot Jupiter with nearly the same temperature. Our analysis therefore reaffirms the trend that planets with lower irradiation temperatures have more efficient day-night heat transport. Moreover, we note that (1) reflected light may be significant for many near-IR eclipse measurements of hot Jupiters, and (2) phase curves should be fit with physically possible longitudinal brightness profiles—it is insufficient to only require that the disk-integrated light curve be non-negative.
Frøslie, Kathrine Frey; Røislien, Jo; Qvigstad, Elisabeth; Godang, Kristin; Bollerslev, Jens; Voldner, Nanna; Henriksen, Tore; Veierød, Marit B
2013-01-17
Plasma glucose levels are important measures in medical care and research, and are often obtained from oral glucose tolerance tests (OGTT) with repeated measurements over 2-3 hours. It is common practice to use simple summary measures of OGTT curves. However, different OGTT curves can yield similar summary measures, and information of physiological or clinical interest may be lost. Our main aim was to extract information inherent in the shape of OGTT glucose curves, compare it with the information from simple summary measures, and explore the clinical usefulness of such information. OGTTs with five glucose measurements over two hours were recorded for 974 healthy pregnant women in their first trimester. For each woman, the five measurements were transformed into smooth OGTT glucose curves by functional data analysis (FDA), a collection of statistical methods developed specifically to analyse curve data. The essential modes of temporal variation between OGTT glucose curves were extracted by functional principal component analysis. The resultant functional principal component (FPC) scores were compared with commonly used simple summary measures: fasting and two-hour (2-h) values, area under the curve (AUC) and simple shape indices (2-h minus 90-min values, or 90-min minus 60-min values). Clinical usefulness of FDA was explored by regression analyses of glucose tolerance later in pregnancy. Over 99% of the variation between individually fitted curves was expressed in the first three FPCs, interpreted physiologically as "general level" (FPC1), "time to peak" (FPC2) and "oscillations" (FPC3). FPC1 scores correlated strongly with AUC (r=0.999), but less with the other simple summary measures (-0.42≤r≤0.79). FPC2 scores gave shape information not captured by simple summary measures (-0.12≤r≤0.40). FPC2 scores, but not FPC1 nor the simple summary measures, discriminated between women who did and did not develop gestational diabetes later in pregnancy. FDA of OGTT glucose curves in early pregnancy extracted shape information that was not identified by commonly used simple summary measures. This information discriminated between women with and without gestational diabetes later in pregnancy.
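The decomposition step can be sketched as a principal component analysis of smoothed curves on a common grid; the synthetic OGTT-like curves below stand in for the study data, and a spline interpolant stands in for the FDA smoothing.

```python
# Minimal sketch: FPCA-style decomposition of smoothed OGTT glucose curves.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(21)
t_obs = np.array([0.0, 30.0, 60.0, 90.0, 120.0])   # measurement times, min
t_grid = np.linspace(0.0, 120.0, 121)

curves = []
for _ in range(200):
    level = rng.normal(5.5, 0.6)                   # "general level"
    shift = rng.normal(0.0, 10.0)                  # "time to peak"
    y = level + 2.5 * np.exp(-0.5 * ((t_obs - 45.0 - shift) / 35.0) ** 2)
    curves.append(CubicSpline(t_obs, y)(t_grid))
X = np.array(curves)

Xc = X - X.mean(axis=0)                            # centre the curves
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
print("variance explained by first 3 FPCs:", (S**2 / (S**2).sum())[:3].round(3))
scores = U * S                                     # per-subject FPC scores
```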
Surface characterization of LDEF carbon fiber/polymer matrix composites
NASA Technical Reports Server (NTRS)
Grammer, Holly L.; Wightman, James P.; Young, Philip R.; Slemp, Wayne S.
1995-01-01
XPS (X-ray photoelectron spectroscopy) and SEM (scanning electron microscopy) analysis of both carbon fiber/epoxy matrix and carbon fiber/polysulfone matrix composites revealed significant changes in surface composition as a result of exposure to low-Earth orbit. The carbon 1s curve-fit XPS analysis, in conjunction with the SEM photomicrographs, revealed significant erosion of the polymer matrix resins by atomic oxygen, exposing the carbon fibers of the composite samples. This erosion effect on the composites was seen after 10 months in orbit and was even more obvious after 69 months.
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
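The technique described, linearizing an exponential model by a Taylor expansion, starting from a log-linear nominal estimate, and iterating a least-squares correction, can be sketched as follows for y = A·exp(-k·t).

```python
# Minimal sketch: Gauss-Newton fit of y = A*exp(-k*t) via Taylor linearization.
import numpy as np

rng = np.random.default_rng(22)
t = np.linspace(0.0, 10.0, 30)
y = 8.0 * np.exp(-0.45 * t) * (1.0 + 0.02 * rng.standard_normal(t.size))

slope, logA = np.polyfit(t, np.log(y), 1)   # initial nominal estimates
p = np.array([np.exp(logA), -slope])        # [A, k]

for _ in range(10):
    A, k = p
    f = A * np.exp(-k * t)
    # Jacobian of the model w.r.t. [A, k] (the Taylor-series linearization)
    J = np.column_stack([np.exp(-k * t), -A * t * np.exp(-k * t)])
    dp, *_ = np.linalg.lstsq(J, y - f, rcond=None)   # correction step
    p += dp
    if np.linalg.norm(dp) < 1e-10:
        break

print(f"A = {p[0]:.4f}, k = {p[1]:.4f}")
```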
NASA Technical Reports Server (NTRS)
Shaw, J. H.
1979-01-01
Four papers are presented which discuss the following: information measures in nonlinear experimental design; information in spectra of collision-broadened absorption lines; band analysis by spectral curve fitting; and least-squares analysis of Voigt-shaped lines. Abstracts of five research papers on which the author collaborated and which were delivered at the 34th Symposium on Molecular Spectroscopy (Ohio State University, June 1979) are included, along with a subroutine for use with BMDP3R to retrieve the parameters of 10 Voigt-shaped lines.
Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S
2018-01-01
The power system always has several variations in its profile due to random load changes or environmental effects such as device-switching effects that generate further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development extends the application range of the VF algorithm to modeling not only in the frequency domain but for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function model.
Mizuno, Ju; Mohri, Satoshi; Yokoyama, Takeshi; Otsuji, Mikiya; Arita, Hideko; Hanaoka, Kazuo
2017-02-01
Varying temperature affects cardiac systolic and diastolic function and the left ventricular (LV) pressure-time curve (PTC) waveform, which carries information about LV inotropism and lusitropism. Our proposed half-logistic (h-L) time constants, obtained by fitting h-L functions to four segmental phases (Phases I-IV) of the isovolumic LV PTC, are more useful indices for estimating LV inotropism and lusitropism during the contraction and relaxation periods than the mono-exponential (m-E) time constants at normal temperature. In this study, we investigated whether the superior goodness of fit of the h-L functions persists at hypothermia and hyperthermia. Phases I-IV in the isovolumic LV PTCs of eight excised, cross-circulated canine hearts at 33, 36, and 38 °C were analyzed using h-L and m-E functions and the least-squares method. The h-L and m-E time constants for Phases I-IV shortened significantly with increasing temperature. Curve fitting using h-L functions was significantly better than that using m-E functions for Phases I-IV at all temperatures, so the superiority of the h-L fit over the m-E fit held across the temperature range. As LV inotropic and lusitropic indices, temperature-dependent h-L time constants could be more useful than m-E time constants for Phases I-IV.
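A sketch of this kind of model comparison, fitting a mono-exponential and a half-logistic form to the same relaxation segment and comparing residuals. The exact segmental h-L forms are defined in the paper; the functions, units, and parameter values below are generic stand-ins to show the workflow.

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic stand-ins for the two candidate relaxation models; the
# paper's exact segmental half-logistic forms may differ.
def mono_exp(t, A, tau, P0):
    return A * np.exp(-t / tau) + P0

def half_logistic(t, A, tau, P0):
    return 2.0 * A / (1.0 + np.exp(t / tau)) + P0

t = np.linspace(0.0, 0.2, 100)                     # s
rng = np.random.default_rng(1)
p = half_logistic(t, 80.0, 0.04, 5.0) + rng.normal(0, 0.5, t.size)

for model in (mono_exp, half_logistic):
    popt, _ = curve_fit(model, t, p, p0=(80, 0.05, 5))
    rss = np.sum((p - model(t, *popt)) ** 2)
    print(model.__name__, "tau =", popt[1], "RSS =", rss)
```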
The Biasing Effects of Unmodeled ARMA Time Series Processes on Latent Growth Curve Model Estimates
ERIC Educational Resources Information Center
Sivo, Stephen; Fan, Xitao; Witta, Lea
2005-01-01
The purpose of this study was to evaluate the robustness of estimated growth curve models when there is stationary autocorrelation among manifest variable errors. The results suggest that when, in practice, growth curve models are fitted to longitudinal data, alternative rival hypotheses to consider would include growth models that also specify…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delice, S., E-mail: sdelice@metu.edu.tr; Isik, M.; Gasanly, N.M.
2015-10-15
Highlights: • Optical and thermoluminescence properties of Ga4S3Se crystals were investigated. • Indirect and direct band gap energies were found to be 2.39 and 2.53 eV, respectively. • The activation energy of the trap center was determined as 495 meV. - Abstract: Optical and thermoluminescence properties of GaS0.75Se0.25 crystals were investigated in the present work. Transmission and reflection measurements were performed at room temperature in the wavelength range of 400–1000 nm. Analysis revealed the presence of indirect and direct transitions with band gap energies of 2.39 and 2.53 eV, respectively. TL spectra obtained at low temperatures (10–300 K) exhibited one peak with a maximum at 168 K. The observed peak was analyzed using curve fitting, initial rise, and peak shape methods to calculate the activation energy of the associated trap center; all applied methods were consistent with a value of 495 meV. The attempt-to-escape frequency and capture cross section of the trap center were determined from the curve-fitting results. Heating rate dependence studies of the glow curve in the range of 0.4–0.8 K/s showed a decrease in TL intensity and a shift of the peak maximum to higher temperatures.
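The initial-rise method mentioned above has a particularly compact form: on the low-temperature flank of a glow peak the TL intensity follows I(T) ∝ exp(-E/(k_B·T)), so the slope of ln(I) against 1/T gives -E/k_B regardless of the kinetic order. A sketch with synthetic data (the 0.495 eV value echoes the abstract; the intensity scale is arbitrary):

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

def initial_rise_energy(T, I):
    """Initial-rise estimate of the trap activation energy from the
    low-temperature side of a glow peak: slope of ln(I) vs 1/T."""
    slope, _ = np.polyfit(1.0 / T, np.log(I), 1)
    return -slope * k_B

# Synthetic rising edge for a trap with E = 0.495 eV (illustrative)
T = np.linspace(130, 150, 30)          # K, well below the 168 K peak
I = 1e12 * np.exp(-0.495 / (k_B * T))
print(initial_rise_energy(T, I))       # ~0.495 eV
```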
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganeshalingam, Mohan; Li Weidong; Filippenko, Alexei V.
We present BVRI light curves of 165 Type Ia supernovae (SNe Ia) from the Lick Observatory Supernova Search follow-up photometry program from 1998 through 2008. Our light curves are typically well sampled (cadence of 3-4 days) with an average of 21 photometry epochs. We describe our monitoring campaign and the photometry reduction pipeline that we have developed. Comparing our data set to that of Hicken et al., with which we have 69 overlapping supernovae (SNe), we find that as an ensemble the photometry is consistent, with only small overall systematic differences, although individual SNe may differ by as much as 0.1 mag, and occasionally even more. Such disagreement in specific cases can have significant implications for combining future large data sets. We present an analysis of our light curves which includes template fits of light-curve shape parameters useful for calibrating SNe Ia as distance indicators. Assuming the B - V color of SNe Ia at 35 days past maximum light can be represented as the convolution of an intrinsic Gaussian component and a decaying exponential attributed to host-galaxy reddening, we derive an intrinsic scatter of σ = 0.076 ± 0.019 mag, consistent with the Lira-Phillips law. This is the first of two papers, the second of which will present a cosmological analysis of the data presented herein.
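The stated color model, a Gaussian convolved with a decaying exponential, is exactly the exponentially modified Gaussian, so the intrinsic scatter can be recovered with an off-the-shelf fit. A sketch with made-up parameters (scipy's exponnorm parameterizes the exponential scale as K·σ):

```python
import numpy as np
from scipy.stats import exponnorm

# Illustrative B-V color sample: intrinsic Gaussian scatter plus an
# exponential reddening tail (parameters invented for the demo).
rng = np.random.default_rng(2)
intrinsic = rng.normal(1.1, 0.076, 2000)      # mag
reddening = rng.exponential(0.12, 2000)       # mag, host-galaxy tail
colors = intrinsic + reddening

# exponnorm is exactly this convolution, with shape K = tau/sigma
K, loc, scale = exponnorm.fit(colors)
print("intrinsic sigma ~", scale)             # Gaussian component
print("reddening tau   ~", K * scale)         # exponential component
```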
Feasibility of Rapid Multitracer PET Tumor Imaging
NASA Astrophysics Data System (ADS)
Kadrmas, D. J.; Rust, T. C.
2005-10-01
Positron emission tomography (PET) can characterize different aspects of tumor physiology using various tracers. PET scans are usually performed with only one tracer, since there is no explicit signal for distinguishing multiple tracers. We tested the feasibility of rapidly imaging multiple PET tracers using dynamic imaging techniques, in which the signals from each tracer are separated based upon differences in tracer half-life, kinetics, and distribution. Time-activity curve populations for FDG, acetate, ATSM, and PTSM were simulated using appropriate compartment models, and noisy dual-tracer curves were computed by shifting and adding the single-tracer curves. Single-tracer components were then estimated from the dual-tracer data using two methods: principal component analysis (PCA)-based fits of single-tracer components to multitracer data, and parallel multitracer compartment models estimating single-tracer rate parameters from multitracer time-activity curves. The PCA analysis showed that multitracer data contain sufficient information for separation, and that tracer separability depends upon tracer kinetics, injection order, and timing. Multitracer compartment modeling recovered rate parameters for individual tracers with good accuracy, though with somewhat higher statistical uncertainty than single-tracer results, when the injection delay was >10 min. These approaches to processing rapid multitracer PET data may provide a new tool for characterizing multiple aspects of tumor physiology in vivo.
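A sketch of the shift-and-add construction of a dual-tracer curve described above; the kinetic shapes are toy stand-ins, not the compartment models used in the study.

```python
import numpy as np

def shift_add(tac_a, tac_b, delay_frames):
    """Compose a dual-tracer time-activity curve: tracer B is injected
    `delay_frames` after tracer A and the two signals superpose."""
    shifted = np.zeros_like(tac_b)
    shifted[delay_frames:] = tac_b[: tac_b.size - delay_frames]
    return tac_a + shifted

t = np.arange(0.0, 60.0, 0.5)                    # min, 0.5 min frames
fdg  = 5.0 * (1 - np.exp(-t / 8.0))              # toy uptake shape
atsm = 4.0 * (t / 3.0) * np.exp(-t / 15.0)       # toy kinetic shape
dual = shift_add(fdg, atsm, delay_frames=int(10 / 0.5))  # 10 min delay
```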
HS 0705+6700: a New Eclipsing sdB Binary
NASA Astrophysics Data System (ADS)
Drechsel, H.; Heber, U.; Napiwotzki, R.; Ostensen, R.; Solheim, J.-E.; Deetjen, J.; Schuh, S.
HS 0705+6700 is a newly discovered eclipsing sdB binary system consisting of an sdB primary and a cool main-sequence secondary. CCD photometry obtained in October and November 2000 with the 2.5m Nordic Optical Telescope (NOT; La Palma) in the B passband and with the 2.2m Calar Alto telescope (CAFOS, R filter) yielded eclipse light curves with complete orbital phase coverage at high time resolution. A periodogram analysis of 12 primary minimum times distributed over the span from October 2000 to March 2001 allowed us to derive the following precise period and linear ephemeris: prim. min. = HJD 2451822.759782(22) + 0.09564665(39) · E. A total of 15 spectra taken with the 3.5m Calar Alto telescope (TWIN spectrograph) on March 11-12, 2001, were used to establish the radial velocity curve of the primary star (K1 = 85.8 km/s) and to determine its basic atmospheric parameters (Teff = 29300 K, log g = 5.47). The B and R light curves were solved using our Wilson-Devinney based light curve analysis code MORO (Drechsel et al. 1995, A&A 294, 723). The best-fit solution yielded system parameters consistent with the spectroscopic results. Detailed results will be published elsewhere (Drechsel et al. 2001, A&A, in preparation).
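As a small worked example, the quoted linear ephemeris gives the predicted time of any primary minimum directly from its cycle count E (values copied from the ephemeris above):

```python
# Predicted primary-minimum times from the linear ephemeris above:
# T_min(E) = HJD 2451822.759782 + 0.09564665 * E
T0, P = 2451822.759782, 0.09564665   # HJD epoch, period in days
for E in (0, 1, 1000):
    print(E, T0 + P * E)
```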
Ellingson, B M; Sahebjam, S; Kim, H J; Pope, W B; Harris, R J; Woodworth, D C; Lai, A; Nghiemphu, P L; Mason, W P; Cloughesy, T F
2014-04-01
Pre-treatment ADC characteristics have been shown to predict response to bevacizumab in recurrent glioblastoma multiforme. However, no studies have examined whether ADC characteristics are specific to this particular treatment. The purpose of the current study was to determine whether ADC histogram analysis is a bevacizumab-specific or treatment-independent biomarker of treatment response in recurrent glioblastoma multiforme. Eighty-nine bevacizumab-treated and 43 chemotherapy-treated recurrent glioblastoma multiformes never exposed to bevacizumab were included in this study. In all patients, ADC values in contrast-enhancing ROIs from MR imaging examinations performed at the time of recurrence, immediately before commencement of treatment for recurrence, were extracted and the resulting histogram was fitted to a mixed model with a double Gaussian distribution. Mean ADC in the lower Gaussian curve was used as the primary biomarker of interest. The Cox proportional hazards model and log-rank tests were used for survival analysis. Cox multivariate regression analysis accounting for the interaction between bevacizumab- and non-bevacizumab-treated patients suggested that the ability of the lower Gaussian curve to predict survival is dependent on treatment (progression-free survival, P = .045; overall survival, P = .003). Patients with bevacizumab-treated recurrent glioblastoma multiforme with a pretreatment lower Gaussian curve > 1.2 μm²/ms had significantly longer progression-free survival and overall survival compared with bevacizumab-treated patients with a lower Gaussian curve < 1.2 μm²/ms. No differences in progression-free survival or overall survival were observed in the chemotherapy-treated cohort. Bevacizumab-treated patients with a mean lower Gaussian curve > 1.2 μm²/ms had significantly longer progression-free survival and overall survival compared with chemotherapy-treated patients. The mean lower Gaussian curve from ADC histogram analysis is a predictive imaging biomarker for bevacizumab-treated, not chemotherapy-treated, recurrent glioblastoma multiforme. Patients with recurrent glioblastoma multiforme with a mean lower Gaussian curve > 1.2 μm²/ms have a survival advantage when treated with bevacizumab.
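A sketch of the double-Gaussian histogram decomposition, using a two-component Gaussian mixture and taking the lower component's mean as the biomarker; the ADC values are synthetic and the paper's mixed-model fitting details may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative ADC values (um^2/ms) from a contrast-enhancing ROI:
# a lower-diffusion population plus a higher one.
rng = np.random.default_rng(3)
adc = np.concatenate([rng.normal(1.0, 0.15, 600),
                      rng.normal(1.8, 0.25, 400)])

gmm = GaussianMixture(n_components=2, random_state=0)
gmm.fit(adc.reshape(-1, 1))
print("lower Gaussian mean ~", gmm.means_.min())   # the ADC_L biomarker
```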
Function approximation and documentation of sampling data using artificial neural networks.
Zhang, Wenjun; Barrion, Albert
2006-11-01
Biodiversity studies in ecology often begin with the fitting and documentation of sampling data. This study was conducted to approximate functions for sampling data and to document the sampling information using artificial neural network algorithms, based on invertebrate data sampled in an irrigated rice field. Three types of sampling data, i.e., the curve of species richness vs. sample size, the rarefaction curve, and the curve of mean abundance of newly sampled species vs. sample size, were fitted and documented using a BP (backpropagation) network and an RBF (radial basis function) network. For comparison, the Arrhenius model, the rarefaction model, and a power function were also tested for their ability to fit these data. The results show that the BP and RBF networks fit the data better than these models, with smaller errors. BP and RBF networks can fit non-linear functions (sampling data) to a specified accuracy and do not require mathematical assumptions. In addition to interpolation, the BP network can be used to extrapolate the functions, so the asymptote of the sampling data can be drawn. The BP network takes longer to train and its results are less stable than those of the RBF network; the RBF network requires more neurons to fit functions and generally cannot be used to extrapolate them. The mathematical function for sampling data can be accurately fitted using artificial neural network algorithms by adjusting the desired accuracy and maximum iterations. The total number of functional species of invertebrates in the tropical irrigated rice field was extrapolated as 140 to 149 using the trained BP network, similar to the observed richness.
NASA Astrophysics Data System (ADS)
Kuki, Ákos; Czifrák, Katalin; Karger-Kocsis, József; Zsuga, Miklós; Kéki, Sándor
2015-02-01
The prediction of shape-memory behavior is essential for the design of smart materials for different applications. This paper proposes a simple and quick method for predicting the shape-memory behavior of amorphous shape memory polymers (SMPs) on the basis of a single dynamic mechanical analysis (DMA) temperature sweep at constant frequency. All the parameters of the constitutive equations for linear viscoelasticity are obtained by fitting the DMA curves. The temperature dependence of the time-temperature superposition shift factor (a_T) is expressed by the Williams-Landel-Ferry (WLF) model near and above the glass transition temperature (T_g), and by the Arrhenius law below T_g. The constants of the WLF and Arrhenius equations can also be determined. The results of our calculations agree satisfactorily with the experimental free recovery curves from shape-memory tests.
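A sketch of the piecewise shift-factor model described above, splicing WLF above T_g to an Arrhenius law below it. The "universal" WLF constants and the activation energy below are generic placeholders; the paper fits material-specific values from the DMA sweep.

```python
import numpy as np

def log_aT(T, Tg, C1=17.44, C2=51.6, Ea=350e3):
    """log10 of the time-temperature shift factor a_T:
    WLF at and above Tg, Arrhenius below. C1, C2 are the 'universal'
    WLF constants and Ea (J/mol) is an assumed activation energy."""
    R = 8.314  # J/(mol K)
    T = np.asarray(T, dtype=float)
    wlf = -C1 * (T - Tg) / (C2 + (T - Tg))
    arrh = (Ea / (R * np.log(10))) * (1.0 / T - 1.0 / Tg)
    return np.where(T >= Tg, wlf, arrh)

print(log_aT([330, 350, 370], Tg=350.0))   # zero at Tg by construction
```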
Concentrated photovoltaics system costs and learning curve analysis
NASA Astrophysics Data System (ADS)
Haysom, Joan E.; Jafarieh, Omid; Anis, Hanan; Hinzer, Karin
2013-09-01
An extensive set of costs in $/W for the installed costs of CPV systems has been amassed from a range of public sources, including both individual company prices and market reports. Cost reductions over time are very evident, with current prices for 2012 in the range of 3.0 ± 0.7 $/W and a predicted cost of 1.5 $/W for 2020. Cost data are combined with deployment volumes in a learning curve analysis, providing a fitted learning rate of either 18.5% or 22.3% depending on the methodology. This learning rate is compared to that of PV modules and PV installed systems, and the influence of soft costs is discussed. Finally, if an annual growth rate of 39% is assumed for deployed volumes, then, using a learning rate of 20%, the analysis predicts achievement of a cost point of 1.5 $/W by 2016.
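For orientation, the standard learning-curve model behind such an analysis: cost falls by the learning rate for every doubling of cumulative deployed volume. The 2012 baseline and volume normalization below are illustrative; the paper's fit uses the actual cumulative deployment history, so absolute numbers will differ.

```python
import numpy as np

# Learning-curve model: C(V) = C0 * (V / V0)**b, with b = log2(1 - LR),
# so cost falls by LR for each doubling of cumulative volume V.
LR, growth = 0.20, 0.39          # learning rate, annual volume growth
b = np.log2(1 - LR)
C0, V = 3.0, 1.0                 # $/W in 2012; volume normalized to 2012
for year in range(2012, 2021):
    print(year, round(C0 * V ** b, 2), "$/W")
    V *= 1 + growth              # 39% annual growth in deployed volume
```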
Wyss, Thomas; Boesch, Maria; Roos, Lilian; Tschopp, Céline; Frei, Klaus M; Annen, Hubert; La Marca, Roberto
2016-12-01
Good physical fitness seems to help the individual to buffer the potential harmful impact of psychosocial stress on somatic and mental health. The aim of the present study is to investigate the role of physical fitness levels on the autonomic nervous system (ANS; i.e. heart rate and salivary alpha amylase) responses to acute psychosocial stress, while controlling for established factors influencing individual stress reactions. The Trier Social Stress Test for Groups (TSST-G) was executed with 302 male recruits during their first week of Swiss Army basic training. Heart rate was measured continuously, and salivary alpha amylase was measured twice, before and after the stress intervention. In the same week, all volunteers participated in a physical fitness test and they responded to questionnaires on lifestyle factors and personal traits. A multiple linear regression analysis was conducted to determine ANS responses to acute psychosocial stress from physical fitness test performances, controlling for personal traits, behavioural factors, and socioeconomic data. Multiple linear regression revealed three variables predicting 15 % of the variance in heart rate response (area under the individual heart rate response curve during TSST-G) and four variables predicting 12 % of the variance in salivary alpha amylase response (salivary alpha amylase level immediately after the TSST-G) to acute psychosocial stress. A strong performance at the progressive endurance run (high maximal oxygen consumption) was a significant predictor of ANS response in both models: low area under the heart rate response curve during TSST-G as well as low salivary alpha amylase level after TSST-G. Further, high muscle power, non-smoking, high extraversion, and low agreeableness were predictors of a favourable ANS response in either one of the two dependent variables. Good physical fitness, especially good aerobic endurance capacity, is an important protective factor against health-threatening reactions to acute psychosocial stress.
An astronomer's guide to period searching
NASA Astrophysics Data System (ADS)
Schwarzenberg-Czerny, A.
2003-03-01
We concentrate on the analysis of unevenly sampled time series, interrupted by periodic gaps, as often encountered in astronomy. While some of our conclusions may appear surprising, all are based on the classical statistical principles of Fisher and his successors. Except for the discussion of resolution issues, it is best for the reader to forget temporarily about Fourier transforms and to concentrate on the problem of fitting a time series with a model curve. According to their statistical content we divide the issues into several sections: (i) statistical and numerical aspects of model fitting; (ii) evaluation of fitted models as hypothesis testing; (iii) the role of orthogonal models in signal detection; (iv) conditions for the equivalence of periodograms; and (v) rating sensitivity by test power. An experienced observer working with individual objects would benefit little from a formalized statistical approach. However, we demonstrate the usefulness of this approach in evaluating the performance of periodograms and in the quantitative design of large variability surveys.
NASA Technical Reports Server (NTRS)
Jarosch, H. S.
1982-01-01
A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
Diagnostic efficiency of an ability-focused battery.
Miller, Justin B; Fichtenberg, Norman L; Millis, Scott R
2010-05-01
An ability-focused battery (AFB) is a selected group of well-validated neuropsychological measures that assess the conventional range of cognitive domains. This study examined the diagnostic efficiency of an AFB for use in clinical decision making with a mixed sample composed of individuals with neurological brain dysfunction and individuals referred for cognitive assessment without evidence of neurological disorders. Using logistic regression analyses and ROC curve analysis, a five-domain model composed of attention, processing speed, visual-spatial reasoning, language/verbal reasoning, and memory domain scores was fitted that had an AUC of .89 (95% CI = .84-.95). A more parsimonious two-domain model using processing speed and memory was also fitted that had an AUC of .90 (95% confidence interval = .84-.95). A model composed of a global ability score calculated from the mean of the individual domain scores was also fitted with an AUC of .88 (95% CI = .82-.94).
Right-sizing statistical models for longitudinal data.
Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M
2015-12-01
Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved.
Latitudinal distribution of soft X-ray flares and disparity in the butterfly diagram
NASA Astrophysics Data System (ADS)
Pandey, K. K.; Yellaiah, G.; Hiremath, K. M.
2015-04-01
We present a statistical analysis of about 63000 soft X-ray flares (class ≥ C) observed by the Geostationary Operational Environmental Satellite (GOES) during the period 1976-2008. The class-wise occurrence of soft X-ray (SXR) flares has been in a declining trend since cycle 21. The distribution pattern of cycle 21 shows the transit of hemispheric dominance of flare activity from the northern to the southern hemisphere, where it remains during cycles 22 and 23. During the three cycles, the 0-10° and 21-30° latitude belts in the southern hemisphere (SH) and the 31-40° latitude belt in the northern hemisphere (NH) are more active, and the 11-20° latitude belt of both hemispheres is the most active. The correlation coefficient between consecutive latitudes appears to increase from the equator poleward in the northern hemisphere and from the pole equatorward in the southern hemisphere. The slope of the regression line fitted to the asymmetry time series of daily flare counts is negative in all three cycles for the different classes of flares. The yearly asymmetry curve fitted by a sinusoidal function varies in period from 5.6 to 11 years, depending upon the intensity of the flares. The variation of the curve fitted to the wings of the butterfly diagram, from a first- to a second-order polynomial, suggests that the latitudinal migration of flare activity varies from cycle to cycle and from the northern to the southern hemisphere. The variation in slope of the butterfly wing for the different flare classes indicates non-uniform migration of flare activity.
Design data for radars based on 13.9 GHz Skylab scattering coefficient measurements
NASA Technical Reports Server (NTRS)
Moore, R. K. (Principal Investigator)
1974-01-01
The author has identified the following significant results. Measurements made at 13.9 GHz with the radar scatterometer on Skylab have been combined to produce median curves of the variation of scattering coefficient with angle of incidence out to 45 deg. Because of the large number of observations, and the large area averaged for each measured data point, these curves may be used as a new design base for radars. A reasonably good fit at larger angles is obtained using the theoretical expression based on an exponential height correlation function and also using Lambert's law. For angles under 10 deg, a different fit based on the exponential correlation function, and a fit based on geometric optics expressions are both reasonably valid.
Separation mechanism of nortriptyline and amitriptyline in RPLC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gritti, Fabrice; Guiochon, Georges A
2005-08-01
The single-component and competitive equilibrium isotherms of nortriptyline and amitriptyline were acquired by frontal analysis (FA) on a C18-bonded Discovery column, using a 28/72 (v/v) mixture of acetonitrile and water buffered with phosphate (20 mM, pH 2.70). The adsorption energy distributions (AEDs) of each compound were calculated from the raw adsorption data. Both the fitting of the adsorption data using multi-linear regression analysis and the AEDs are consistent with a trimodal isotherm model. The single-component isotherm data fit well to the tri-Langmuir isotherm model. The extension to a competitive two-component tri-Langmuir isotherm model based on the best parameters of the single-component isotherms does not account well for the breakthrough curves nor for the overloaded band profiles measured for mixtures of nortriptyline and amitriptyline. However, it was possible to derive adjusted parameters of a competitive tri-Langmuir model based on the fitting of the adsorption data obtained for these mixtures. Very good agreement was then found between the calculated and experimental overloaded band profiles of all the mixtures injected.
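A sketch of fitting the tri-Langmuir model named above, q(C) = Σ_i q_{s,i} b_i C / (1 + b_i C), to single-component isotherm data; the concentrations, loadings, and starting values are invented for the demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_langmuir(C, qs1, b1, qs2, b2, qs3, b3):
    """Three-site Langmuir isotherm: q(C) = sum_i qs_i*b_i*C/(1+b_i*C)."""
    return sum(qs * b * C / (1.0 + b * C)
               for qs, b in ((qs1, b1), (qs2, b2), (qs3, b3)))

# Illustrative frontal-analysis data (arbitrary concentration units)
C = np.linspace(0.01, 10, 40)
rng = np.random.default_rng(4)
q = tri_langmuir(C, 100, 0.02, 20, 0.5, 2, 10)
q *= 1 + rng.normal(0, 0.01, C.size)          # 1% multiplicative noise

p0 = (80, 0.05, 15, 0.3, 1, 5)                # rough starting guesses
popt, _ = curve_fit(tri_langmuir, C, q, p0=p0, maxfev=20000)
print(popt)
```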
Foveal Curvature and Asymmetry Assessed Using Optical Coherence Tomography.
VanNasdale, Dean A; Eilerman, Amanda; Zimmerman, Aaron; Lai, Nicky; Ramsey, Keith; Sinnott, Loraine T
2017-06-01
The aims of this study were to use cross-sectional optical coherence tomography imaging and custom curve-fitting software to evaluate and model the foveal curvature as a spherical surface, to compare the radius of curvature in the horizontal and vertical meridians, and to test the sensitivity of this technique to anticipated meridional differences. Six 30-degree foveal-centered radial optical coherence tomography cross-section scans were acquired in the right eye of 20 clinically normal subjects. Cross sections were manually segmented, and custom curve-fitting software was used to determine the foveal pit radius of curvature using the central 500, 1000, and 1500 μm of the foveal contour. The radius of curvature was compared across the different fitting distances. Root mean square error was used to determine goodness of fit. The radius of curvature was compared between the horizontal and vertical meridians for each fitting distance. The radius of curvature was significantly different when comparing each of the three fitting distances (P < .01 for each comparison). The average radii of curvature were 970 μm (95% confidence interval [CI], 913 to 1028 μm), 1386 μm (95% CI, 1339 to 1439 μm), and 2121 μm (95% CI, 2066 to 2183 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. Root mean square error was also significantly different when comparing each fitting distance (P < .01 for each comparison). The average root mean square errors were 2.48 μm (95% CI, 2.41 to 2.53 μm), 6.22 μm (95% CI, 5.77 to 6.60 μm), and 13.82 μm (95% CI, 12.93 to 14.58 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. The radius of curvature between the horizontal and vertical meridians was statistically different only at the 1000- and 1500-μm fitting distances (P < .01 for each), with the horizontal meridian being flatter than the vertical. The foveal contour can be modeled as a sphere with low curve-fitting error over a limited distance, and the technique is capable of detecting subtle foveal contour differences between meridians.
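A sketch of one way to obtain a radius of curvature from a segmented cross-section: an algebraic (Kasa) circle fit, to which a spherical fit reduces within each meridian. The contour below is an ideal circular pit floor with the paper's average 500-μm-fit radius; the authors' custom software may differ.

```python
import numpy as np

def circle_radius(x, y):
    """Algebraic (Kasa) circle fit: solve x^2 + y^2 = 2ax + 2by + c in
    the least-squares sense, then R = sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
    return np.sqrt(c + a ** 2 + b ** 2)

# Illustrative foveal contour: the central 500 um of a circle with a
# 970 um radius (echoing the paper's average for that fitting distance).
R_true, half_w = 970.0, 250.0                # um
x = np.linspace(-half_w, half_w, 101)
y = R_true - np.sqrt(R_true ** 2 - x ** 2)   # depth of the pit floor
print(circle_radius(x, y))                   # ~970
```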
Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.
2015-01-01
Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes stretched exponential and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and when appropriate a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
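A sketch of the model-comparison workflow: fit the gamma variate and a stretched-exponential extension, then rank them by AIC computed from the residual sum of squares. The functional forms and data are illustrative stand-ins for the paper's breath-test curves.

```python
import numpy as np
from scipy.optimize import curve_fit

def aic(y, yhat, k):
    """AIC for least-squares fits: n*ln(RSS/n) + 2k, k = # parameters."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Gamma variate and a stretched-exponential extension (extra exponent
# beta); forms follow common breath-test usage, not the paper verbatim.
def gamma_variate(t, A, k, tau):
    return A * t ** k * np.exp(-t / tau)

def stretched_gv(t, A, k, tau, beta):
    return A * t ** k * np.exp(-(t / tau) ** beta)

t = np.linspace(0.1, 240, 120)                    # min
rng = np.random.default_rng(5)
y = stretched_gv(t, 0.02, 1.8, 40, 0.8) + rng.normal(0, 5e-4, t.size)

for f, p0 in ((gamma_variate, (0.02, 2, 40)),
              (stretched_gv, (0.02, 2, 40, 1.0))):
    popt, _ = curve_fit(f, t, y, p0=p0, maxfev=20000)
    print(f.__name__, "AIC =", aic(y, f(t, *popt), len(popt)))
```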
IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.
Huang, Lihan
2017-12-04
The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for the determination of kinetic parameters in predictive microbiology. The algorithm is incorporated into user-friendly graphical user interfaces (GUIs) to form a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide users through the data analysis process and to help them properly select the initial parameters for different combinations of mathematical models. The software performs one-step kinetic analysis, directly constructing tertiary models by minimizing the global error between the experimental observations and the mathematical models. The current version is specifically designed for constructing tertiary models with time and temperature as the independent variables. The software was tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results of data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve the experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.
FTOOLS: A FITS Data Processing and Analysis Software Package
NASA Astrophysics Data System (ADS)
Blackburn, J. K.
FTOOLS, a highly modular collection of over 110 utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Science Archive Research Center) at NASA's Goddard Space Flight Center. Each utility performs a single simple task such as presentation of file contents, extraction of specific rows or columns, appending or merging tables, binning values in a column or selecting subsets of rows based on a boolean expression. Individual utilities can easily be chained together in scripts to achieve more complex operations such as the generation and displaying of spectra or light curves. The collection of utilities provides both generic processing and analysis utilities and utilities specific to high energy astrophysics data sets used for the ASCA, ROSAT, GRO, and XTE missions. A core set of FTOOLS providing support for generic FITS data processing, FITS image analysis and timing analysis can easily be split out of the full software package for users not needing the high energy astrophysics mission utilities. The FTOOLS software package is designed to be both compatible with IRAF and completely stand alone in a UNIX or VMS environment. The user interface is controlled by standard IRAF parameter files. The package is self documenting through the IRAF help facility and a stand alone help task. Software is written in ANSI C and Fortran to provide portability across most computer systems. The data format dependencies between hardware platforms are isolated through the FITSIO library package.
He, Jia-yao; Peng, Rong-fei; Zhang, Zhan-xia
2002-02-01
A self-constructed visible spectrophotometer using an acousto-optic tunable filter (AOTF) as the dispersing element is described. Two different AOTFs (one from the Institute for Silicate (Shanghai, China) and the other from Brimrose (USA)) were tested. The software, written in Visual C++ and operated on a Windows 98 platform, is an application with a dual database and multiple windows. Four independent windows, namely scanning, quantitative, calibration, and result, are incorporated. A Fourier self-deconvolution algorithm is also incorporated to improve the spectral resolution. The wavelengths are calibrated using the polynomial curve-fitting method. The spectra and calibration curves of soluble aniline blue and phenol red are presented to show the feasibility of the constructed spectrophotometer.
King, Christopher R
2016-11-01
To date neither the optimal radiotherapy dose nor the existence of a dose-response has been established for salvage radiotherapy (SRT). A systematic review from 1996 to 2015 and meta-analysis was performed to identify the pathologic, clinical, and treatment factors associated with relapse-free survival (RFS) after SRT (uniformly defined as a PSA > 0.2 ng/mL or rising above the post-SRT nadir). A sigmoidal dose-response curve was objectively fitted and a non-parametric statistical test used to determine significance. 71 studies (10,034 patients) satisfied the meta-analysis criteria. SRT dose (p=0.0001), PSA prior to SRT (p=0.0009), ECE+ (p=0.039), and SV+ (p=0.046) had significant associations with RFS. Statistical analyses confirmed the independence of the SRT dose-response. Omission of series with ADT did not alter the results. The dose-response is well fit by a sigmoidal curve (p=0.0001) with a TCD50 of 65.8 Gy, with a dose of 70 Gy achieving 58.4% RFS vs. 38.5% for 60 Gy. A 2.0% [95% CI 1.1-3.2] improvement in RFS is achieved for each Gy. The SRT dose-response remarkably parallels that for definitive RT of localized disease. This study provides level 2a evidence for dose-escalated SRT > 70 Gy. The presence of an SRT dose-response for microscopic disease supports the hypothesis that prostate cancer is inherently radio-resistant. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
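A sketch of a sigmoidal dose-response of the kind fitted here, anchored at the reported TCD50 of 65.8 Gy; the slope and plateau are chosen only so the curve roughly echoes the quoted 60 Gy and 70 Gy values, and are not taken from the paper.

```python
import numpy as np

def rfs(dose, tcd50=65.8, gamma=0.09, top=1.0):
    """Sigmoidal dose-response for relapse-free survival; tcd50 is the
    dose giving 50% of the maximal effect. gamma and top are assumed."""
    return top / (1.0 + np.exp(-gamma * (dose - tcd50)))

for d in (60.0, 65.8, 70.0):
    print(d, "Gy ->", round(100 * rfs(d), 1), "% RFS")
```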
Callina, Kristina Schmid; Johnson, Sara K; Tirrell, Jonathan M; Batanova, Milena; Weiner, Michelle B; Lerner, Richard M
2017-06-01
There were two purposes of the present research: first, to add to scholarship about a key character virtue, hopeful future expectations; and second, to demonstrate a recent innovation in longitudinal methodology that may be especially useful in enhancing the understanding of the developmental course of hopeful future expectations and other character virtues that have been the focus of recent scholarship in youth development. Burgeoning interest in character development has led to a proliferation of short-term, longitudinal studies on character. These data sets are sometimes limited in their ability to model character development trajectories due to low power or relatively brief time spans assessed. However, the integrative data analysis approach allows researchers to pool raw data across studies in order to fit one model to an aggregated data set. The purpose of this article is to demonstrate the promises and challenges of this new tool for modeling character development. We used data from four studies evaluating youth character strengths in different settings to fit latent growth curve models of hopeful future expectations from participants aged 7 through 26 years. We describe the analytic strategy for pooling the data and modeling the growth curves. Implications for future research are discussed in regard to the advantages of integrative data analysis. Finally, we discuss issues researchers should consider when applying these techniques in their own work.
Comparative testing of dark matter models with 15 HSB and 15 LSB galaxies
NASA Astrophysics Data System (ADS)
Kun, E.; Keresztes, Z.; Simkó, A.; Szűcs, G.; Gergely, L. Á.
2017-12-01
Context. We assemble a database of 15 high surface brightness (HSB) and 15 low surface brightness (LSB) galaxies, for which surface brightness density and spectroscopic rotation curve data are both available and representative of various morphologies. We use this dataset to test the Navarro-Frenk-White, the Einasto, and the pseudo-isothermal sphere dark matter models. Aims: We investigate the compatibility of the pure baryonic model and of baryonic plus one of the three dark matter models with observations on the assembled galaxy database. When a dark matter component improves the fit to the spectroscopic rotation curve, we rank the models according to their goodness of fit to the datasets. Methods: We constructed the spatial luminosity density of the baryonic component based on the surface brightness profile of the galaxies. We estimated the mass-to-light (M/L) ratio of the stellar component through a previously proposed color-mass-to-light ratio relation (CMLR), which yields stellar masses independent of the photometric band. We assumed an axisymmetric baryonic mass model with variable axis ratios together with one of the three dark matter models to provide the theoretical rotational velocity curves, and we compared them with the dataset. In a second attempt, we addressed the question whether the dark component could be replaced by a pure baryonic model with fitted M/L ratios, varied over ranges consistent with CMLR relations derived from the available stellar population models. We employed the Akaike information criterion to establish the performance of the best-fit models. Results: For 7 galaxies (2 HSB and 5 LSB), neither model fits the dataset within the 1σ confidence level. For the other 23 cases, one of the models with dark matter explains the rotation curve data best. According to the Akaike information criterion, the pseudo-isothermal sphere emerges as most favored in 14 cases, followed by the Navarro-Frenk-White (6 cases) and the Einasto (3 cases) dark matter models. We find that the pure baryonic model with fitted M/L ratios falls within the 1σ confidence level for 10 HSB and 2 LSB galaxies, at the price of increasing the M/L ratios on average by a factor of two, but these fits are inferior to the best-fitting dark matter model.
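For reference, circular-velocity sketches for two of the tested halo models, computed from their enclosed masses (the Einasto profile is omitted here because its enclosed mass involves the incomplete gamma function); densities and scale radii are arbitrary.

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_circ(M_enc, r):
    """Circular velocity from the enclosed mass M(<r)."""
    return np.sqrt(G * M_enc / r)

def m_nfw(r, rho0, rs):
    """NFW enclosed mass: 4*pi*rho0*rs^3 * [ln(1+x) - x/(1+x)]."""
    x = r / rs
    return 4 * np.pi * rho0 * rs ** 3 * (np.log(1 + x) - x / (1 + x))

def m_iso(r, rho0, rc):
    """Pseudo-isothermal sphere enclosed mass for rho0/(1+(r/rc)^2)."""
    return 4 * np.pi * rho0 * rc ** 2 * (r - rc * np.arctan(r / rc))

r = np.linspace(0.5, 30, 60)                      # kpc
print(v_circ(m_nfw(r, 1e7, 10.0), r)[:3])         # km/s, toy NFW halo
print(v_circ(m_iso(r, 5e7, 2.0), r)[:3])          # km/s, toy ISO halo
```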
Allen, Christian Harry; Kumar, Achint; Qutob, Sami; Nyiri, Balazs; Chauhan, Vinita; Murugkar, Sangeeta
2018-01-09
Recent findings in populations exposed to ionizing radiation (IR) indicate that dose-related lens opacification occurs at much lower doses (<2 Gy) than indicated in radiation protection guidelines. As a result, research efforts are now being directed towards identifying early predictors of lens degeneration resulting in cataractogenesis. In this study, Raman micro-spectroscopy was used to investigate the effects of varying doses of radiation, ranging from 0.01 Gy to 5 Gy, on human lens epithelial (HLE) cells that were chemically fixed 24 h post-irradiation. Raman spectra were acquired from the nucleus and cytoplasm of the HLE cells; spectra were collected from points in a 3 × 3 grid pattern and then averaged. The raw spectra were preprocessed, and principal component analysis followed by linear discriminant analysis was used to discriminate between dose and control for 0.25, 0.5, 2, and 5 Gy. Using leave-one-out cross-validation, accuracies of greater than 74% were attained for each dose/control combination. The ultra-low doses 0.01 and 0.05 Gy were included in an analysis of intensities for the Raman bands found to be significant in the linear discrimination, and an induced-repair-model survival curve was fit to a band-difference-ratio plot of these data, suggesting that HLE cells undergo a nonlinear response to low doses of IR. A survival curve was also fit to clonogenic assay data obtained from the irradiated HLE cells, showing a similar nonlinear response.
Tokuda, Junichi; Mamata, Hatsuho; Gill, Ritu R; Hata, Nobuhiko; Kikinis, Ron; Padera, Robert F; Lenkinski, Robert E; Sugarbaker, David J; Hatabu, Hiroto
2011-04-01
To investigate the impact of nonrigid motion correction on pixel-wise pharmacokinetic analysis of free-breathing DCE-MRI in patients with solitary pulmonary nodules (SPNs). Misalignment of focal lesions due to respiratory motion in free-breathing dynamic contrast-enhanced MRI (DCE-MRI) precludes obtaining reliable time-intensity curves, which are crucial for pharmacokinetic analysis for tissue characterization. Single-slice 2D DCE-MRI was obtained in 15 patients. Misalignments of SPNs were corrected using nonrigid B-spline image registration. Pixel-wise pharmacokinetic parameters Ktrans, ve, and kep were estimated from both the original and motion-corrected DCE-MRI by fitting the two-compartment pharmacokinetic model to the time-intensity curve obtained in each pixel. The goodness of fit was tested with a χ²-test on a pixel-by-pixel basis to evaluate the reliability of the parameters. The percentages of reliable pixels within the SPNs were compared between the original and motion-corrected DCE-MRI. In addition, the parameters obtained from benign and malignant SPNs were compared. The percentage of reliable pixels in the motion-corrected DCE-MRI was significantly larger than in the original DCE-MRI (P = 4 × 10⁻⁷). Both Ktrans and kep derived from the motion-corrected DCE-MRI showed significant differences between benign and malignant SPNs (P = 0.024, 0.015). The study demonstrated the impact of a nonrigid motion correction technique on pixel-wise pharmacokinetic analysis of free-breathing DCE-MRI in SPNs. Copyright © 2011 Wiley-Liss, Inc.
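A sketch of the two-compartment (standard Tofts) tissue model used in such pixel-wise fits, evaluated by discrete convolution; the arterial input function and parameter values are toy choices, not the study's.

```python
import numpy as np

def tofts(t, Cp, Ktrans, kep):
    """Standard Tofts tissue curve,
        Ct(t) = Ktrans * integral_0^t Cp(u) * exp(-kep*(t-u)) du,
    evaluated by discrete convolution on a uniform time grid."""
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return Ktrans * np.convolve(Cp, kernel)[: t.size] * dt

t = np.arange(0, 5, 0.01)                  # min
Cp = 5.0 * t * np.exp(-t / 0.35)           # toy arterial input (mM)
Ct = tofts(t, Cp, Ktrans=0.15, kep=0.6)    # Ktrans, kep in 1/min
```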
CONFIRMATION OF HOT JUPITER KEPLER-41b VIA PHASE CURVE ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quintana, Elisa V.; Rowe, Jason F.; Caldwell, Douglas A.
We present high precision photometry of Kepler-41, a giant planet in a 1.86 day orbit around a G6V star that was recently confirmed through radial velocity measurements. We have developed a new method to confirm giant planets solely from the photometric light curve, and we apply this method herein to Kepler-41 to establish the validity of this technique. We generate a full phase photometric model by including the primary and secondary transits, ellipsoidal variations, Doppler beaming, and reflected/emitted light from the planet. Third light contamination scenarios that can mimic a planetary transit signal are simulated by injecting a full range of dilution values into the model, and we re-fit each diluted light curve model to the light curve. The resulting constraints on the maximum occultation depth and stellar density, combined with stellar evolution models, rule out stellar blends and provide a measurement of the planet's mass, size, and temperature. We expect about two dozen Kepler giant planets can be confirmed via this method.
Photometric Supernova Classification with Machine Learning
NASA Astrophysics Data System (ADS)
Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.
2016-08-01
Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
Eckels, Josh; Nathe, Cory; Nelson, Elizabeth K; Shoemaker, Sara G; Nostrand, Elizabeth Van; Yates, Nicole L; Ashley, Vicki C; Harris, Linda J; Bollenbeck, Mark; Fong, Youyi; Tomaras, Georgia D; Piehler, Britt
2013-04-30
Immunoassays that employ multiplexed bead arrays produce high information content per sample. Such assays are now frequently used to evaluate humoral responses in clinical trials. Integrated software is needed for the analysis, quality control, and secure sharing of the high volume of data produced by such multiplexed assays. Software that facilitates data exchange and provides flexibility to perform customized analyses (including multiple curve fits and visualizations of assay performance over time) could increase scientists' capacity to use these immunoassays to evaluate human clinical trials. The HIV Vaccine Trials Network and the Statistical Center for HIV/AIDS Research and Prevention collaborated with LabKey Software to enhance the open source LabKey Server platform to facilitate workflows for multiplexed bead assays. This system now supports the management, analysis, quality control, and secure sharing of data from multiplexed immunoassays that leverage Luminex xMAP® technology. These assays may be custom or kit-based. Newly added features enable labs to: (i) import run data from spreadsheets output by Bio-Plex Manager™ software; (ii) customize data processing, curve fits, and algorithms through scripts written in common languages, such as R; (iii) select script-defined calculation options through a graphical user interface; (iv) collect custom metadata for each titration, analyte, run and batch of runs; (v) calculate dose-response curves for titrations; (vi) interpolate unknown concentrations from curves for titrated standards; (vii) flag run data for exclusion from analysis; (viii) track quality control metrics across runs using Levey-Jennings plots; and (ix) automatically flag outliers based on expected values. Existing system features allow researchers to analyze, integrate, visualize, export and securely share their data, as well as to construct custom user interfaces and workflows. Unlike other tools tailored for Luminex immunoassays, LabKey Server allows labs to customize their Luminex analyses using scripting while still presenting users with a single, graphical interface for processing and analyzing data. The LabKey Server system also stands out among Luminex tools for enabling smooth, secure transfer of data, quality control information, and analyses between collaborators. LabKey Server and its Luminex features are freely available as open source software at http://www.labkey.com under the Apache 2.0 license.
Mathcad in the Chemistry Curriculum: Symbolic Software in the Chemistry Curriculum
NASA Astrophysics Data System (ADS)
Zielinski, Theresa Julia
2000-05-01
Physical chemistry is such a broad discipline that the topics we expect average students to complete in two semesters usually exceed their ability for meaningful learning. Consequently, the number and kind of topics and the efficiency with which students can learn them are important concerns. What topics are essential and what can we do to provide efficient and effective access to those topics? How do we accommodate the fact that students come to upper-division chemistry courses with a variety of nonuniformly distributed skills, a bit of calculus, and some physics studied one or more years before physical chemistry? The critical balance between depth and breadth of learning in courses and curricula may be achieved through appropriate use of technology and especially through the use of symbolic mathematics software. Software programs such as Mathcad, Mathematica, and Maple, however, have learning curves that diminish their effectiveness for novices. There are several ways to address the learning curve conundrum. First, basic instruction in the software provided during laboratory sessions should be followed by requiring laboratory reports that use the software. Second, one should assign weekly homework that requires the software and builds student skills within the discipline and with the software. Third, a complementary method, supported by this column, is to provide students with Mathcad worksheets or templates that focus on one set of related concepts and incorporate a variety of features of the software that they are to use to learn chemistry. In this column we focus on two significant topics for young chemists. The first is curve-fitting and the statistical analysis of the fitting parameters. The second is the analysis of the rotation/vibration spectrum of a diatomic molecule, HCl. A broad spectrum of Mathcad documents exists for teaching chemistry. One collection of 50 documents can be found at http://www.monmouth.edu/~tzielins/mathcad/Lists/index.htm. Another collection of peer-reviewed documents is developing through this column at the JCE Internet Web site, http://jchemed.chem.wisc.edu/JCEWWW/Features/McadInChem/index.html. With this column we add three peer-reviewed and tested Mathcad documents to the JCE site. In Linear Least-Squares Regression, Sidney H. Young and Andrzej Wierzbicki demonstrate various implicit and explicit methods for determining the slope and intercept of the regression line for experimental data. The document shows how to determine the standard deviation for the slope, the intercept, and the standard deviation of the overall fit. Students are next given the opportunity to examine the confidence level for the fit through the Student's t-test. Examination of the residuals of the fit leads students to explore the possibility of rejecting points in a set of data. The document concludes with a discussion of and practice with adding a quadratic term to create a polynomial fit to a set of data and how to determine if the quadratic term is statistically significant. There is full documentation of the various steps used throughout the exposition of the statistical concepts. Although the statistical methods presented in this worksheet are generally accessible to average physical chemistry students, an instructor would be needed to explain the finer points of the matrix methods used in some sections of the worksheet. The worksheet is accompanied by a set of data for students to use to practice the techniques presented.
It would be worthwhile for students to spend one or two laboratory periods learning to use the concepts presented and then to apply them to experimental data they have collected for themselves. Any linear or linearizable data set would be appropriate for use with this Mathcad worksheet. Alternatively, instructors may select sections of the document suited to the skill level of their students and the laboratory tasks at hand. In a second Mathcad document, Non-Linear Least-Squares Regression, Young and Wierzbicki introduce the basic concepts of nonlinear curve-fitting and develop the techniques needed to fit a variety of mathematical functions to experimental data. This approach is especially important when mathematical models for chemical processes cannot be linearized. In Mathcad the Levenberg-Marquardt algorithm is used to determine the best fitting parameters for a particular mathematical model. As in linear least-squares, the goal of the fitting process is to find the values for the fitting parameters that minimize the sum of the squares of the deviations between the data and the mathematical model. Students are asked to determine the fitting parameters, use the Hessian matrix to compute the standard deviation of the fitting parameters, test for the significance of the parameters using Student's t-test, use residual analysis to test for data points to remove, and repeat the calculations for another set of data. The nonlinear least-squares procedure follows closely on the pattern set up for linear least-squares by the same authors (see above). If students master the linear least-squares worksheet content they will be able to master the nonlinear least-squares technique (see also refs 1, 2). In the third document, The Analysis of the Vibrational Spectrum of a Linear Molecule by Richard Schwenz, William Polik, and Sidney Young, the authors build on the concepts presented in the curve fitting worksheets described above. This vibrational analysis document, which supports a classic experiment performed in the physical chemistry laboratory, shows how a Mathcad worksheet can increase the efficiency by which a set of complicated manipulations for data reduction can be made more accessible for students. The increase in efficiency frees up time for students to develop a fuller understanding of the physical chemistry concepts important to the interpretation of spectra and understanding of bond vibrations in general. The analysis of the vibration/rotation spectrum for a linear molecule worksheet builds on the rich literature for this topic (3). Before analyzing their own spectral data, students practice and learn the concepts and methods of the HCl spectral analysis by using the fundamental and first harmonic vibrational frequencies provided by the authors. This approach has a fundamental pedagogical advantage. Most explanations in laboratory texts are very concise and lack mathematical details required by average students. This Mathcad worksheet acts as a tutor; it guides students through the essential concepts for data reduction and lets them focus on learning important spectroscopic concepts. The Mathcad worksheet is amply annotated. Students who have moderate skill with the software and have learned about regression analysis from the curve-fitting worksheets described in this column will be able to complete and understand their analysis of the IR spectrum of HCl.
The three Mathcad worksheets described here stretch the physical chemistry curriculum by presenting important topics in forms that students can use with only moderate Mathcad skills. The documents facilitate learning by giving students opportunities to interact with the material in meaningful ways in addition to using the documents as sources of techniques for building their own data-reduction worksheets. However, working through these Mathcad worksheets is not a trivial task for the average student. Support needs to be provided by the instructor to ease students through more advanced mathematical and Mathcad processes. These worksheets raise the question of how much we can ask diligent students to do in one course and how much time they need to spend to master the essential concepts of that course. The Mathcad documents and associated PDF versions are available at the JCE Internet WWW site. The Mathcad documents require Mathcad version 6.0 or higher and the PDF files require Adobe Acrobat. Every effort has been made to make the documents fully compatible across the various Mathcad versions. Users may need to refer to Mathcad manuals for functions that vary with the Mathcad version number. Literature Cited 1. Bevington, P. R. Data Reduction and Error Analysis for the Physical Sciences; McGraw-Hill: New York, 1969. 2. Zielinski, T. J.; Allendoerfer, R. D. J. Chem. Educ. 1997, 74, 1001. 3. Schwenz, R. W.; Polik, W. F. J. Chem. Educ. 1999, 76, 1302.
NASA Astrophysics Data System (ADS)
Dan, Wen-Yan; Di, You-Ying; He, Dong-Hua; Liu, Yu-Pu
2011-02-01
1-Decylammonium hydrochloride was synthesized by liquid-phase synthesis. Chemical analysis, elemental analysis, and X-ray single-crystal diffraction techniques were applied to characterize its composition and structure. Low-temperature heat capacities of the compound were measured with a precision automated adiabatic calorimeter over the temperature range from 78 to 380 K. Three solid-solid phase transitions were observed at peak temperatures of 307.52 ± 0.13, 325.02 ± 0.19, and 327.26 ± 0.07 K. The molar enthalpies and entropies of the three phase transitions were determined from the analysis of the heat-capacity curves. The experimental molar heat capacities were fitted by the least-squares method to two polynomial equations expressing heat capacity as a function of temperature. Smoothed heat capacities and thermodynamic functions of the compound relative to the standard reference temperature 298.15 K were calculated from the fitted polynomials and tabulated at 5 K intervals.
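The fitting step described here is a routine polynomial least-squares problem. A minimal sketch (synthetic stand-in data; the real measurements are split into separate polynomials at the phase transitions) of fitting Cp(T) and deriving a smoothed thermodynamic function by integration:

```python
import numpy as np

# Illustrative heat-capacity data (T in K, Cp in J K^-1 mol^-1);
# real measurements span 78-380 K and are fitted piecewise around
# the phase-transition regions.
T = np.linspace(80, 300, 45)
Cp = 150 + 0.9 * (T - 80) + 2e-4 * (T - 80) ** 2   # synthetic stand-in

coeffs = np.polyfit(T, Cp, deg=3)                  # least-squares polynomial
Cp_fit = np.poly1d(coeffs)

# Smoothed values at 5 K intervals, and the entropy increment
# S(300 K) - S(80 K) = integral of Cp/T dT from the fitted polynomial.
T_grid = np.arange(80, 301, 5)
dS = np.trapz(Cp_fit(T_grid) / T_grid, T_grid)
print(f"Cp(298.15 K) = {Cp_fit(298.15):.1f} J/(K mol); "
      f"S(300) - S(80) = {dS:.1f} J/(K mol)")
```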
PyFolding: Open-Source Graphing, Simulation, and Analysis of the Biophysical Properties of Proteins.
Lowe, Alan R; Perez-Riba, Albert; Itzhaki, Laura S; Main, Ewan R G
2018-02-06
For many years, curve-fitting software has been heavily utilized to fit simple models to various types of biophysical data. Although such software packages are easy to use for simple functions, they are often expensive and present substantial impediments to applying more complex models or analyzing large data sets. One field that is reliant on such data analysis is the thermodynamics and kinetics of protein folding. Over the past decade, increasingly sophisticated analytical models have been generated, but without simple tools to enable routine analysis. Consequently, users have needed to generate their own tools or otherwise find willing collaborators. Here we present PyFolding, a free, open-source, and extensible Python framework for graphing, analysis, and simulation of the biophysical properties of proteins. To demonstrate the utility of PyFolding, we have used it to analyze and model experimental protein folding and thermodynamic data. Examples include: 1) multiphase kinetic folding fitted to linked equations, 2) global fitting of multiple data sets, and 3) analysis of repeat protein thermodynamics with Ising model variants. Moreover, we demonstrate how PyFolding is easily extensible to novel functionality beyond applications in protein folding via the addition of new models. Example scripts to perform these and other operations are supplied with the software, and we encourage users to contribute notebooks and models to create a community resource. Finally, we show that PyFolding can be used in conjunction with Jupyter notebooks as an easy way to share methods and analysis for publication and among research teams.
Toward Continuous GPS Carrier-Phase Time Transfer: Eliminating the Time Discontinuity at an Anomaly
Yao, Jian; Levine, Judah; Weiss, Marc
2015-01-01
The wide application of Global Positioning System (GPS) carrier-phase (CP) time transfer is limited by the problem of boundary discontinuity (BD). The discontinuity has two categories. One is "day boundary discontinuity," which has been studied extensively and can be solved by multiple methods [1–8]. The other category of discontinuity, called "anomaly boundary discontinuity (anomaly-BD)," comes from a GPS data anomaly. The anomaly can be a data gap (i.e., missing data), a GPS measurement error (i.e., bad data), or a cycle slip. An initial study of the anomaly-BD showed that we can fix the discontinuity if the anomaly lasts no more than 20 min, using a polynomial curve-fitting strategy to repair the anomaly [9]. However, the data anomaly sometimes lasts longer than 20 min, so a better curve-fitting strategy is needed. In addition, a cycle slip, another type of data anomaly, can occur and lead to an anomaly-BD. To solve these problems, this paper proposes a new strategy: satellite-clock-aided curve fitting with cycle-slip detection. Basically, this new strategy applies the satellite clock correction to the GPS data. After that, we do the polynomial curve fitting for the code and phase data, as before. Our study shows that the phase-data residual is only ~3 mm for all GPS satellites. The new strategy also detects and finds the number of cycle slips by searching for the minimum curve-fitting residual. Extensive examples show that this new strategy enables us to repair up to a 40-min GPS data anomaly, regardless of whether the anomaly is due to a data gap, a cycle slip, or a combination of the two. We also find that interference of the GPS signal, known as "jamming", can possibly lead to a time-transfer error, and that this new strategy can compensate for jamming outages. Thus, the new strategy can eliminate the impact of jamming on time transfer. As a whole, we greatly improve the robustness of GPS CP time transfer. PMID:26958451
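The cycle-slip search described above can be illustrated with a minimal sketch: shift the post-anomaly phase by candidate integer numbers of cycles and keep the shift that minimizes the residual of a single polynomial fit through the whole arc. All values below are illustrative; only the L1 carrier wavelength is a physical constant.

```python
import numpy as np

L1_WAVELENGTH = 0.1903  # m, GPS L1 carrier wavelength (approximate)

def repair_slip(t, phase, i_resume, max_slip=50, deg=3):
    """Illustrative cycle-slip repair: shift the phase after the anomaly
    (starting at sample i_resume) by an integer number of cycles and keep
    the shift minimizing the residual of one polynomial fit to the arc."""
    best = (None, np.inf)
    for n in range(-max_slip, max_slip + 1):
        trial = phase.copy()
        trial[i_resume:] += n * L1_WAVELENGTH      # candidate correction
        coeffs = np.polyfit(t, trial, deg)
        rss = np.sum((trial - np.polyval(coeffs, t)) ** 2)
        if rss < best[1]:
            best = (n, rss)
    return best  # (estimated integer slip, residual sum of squares)

# Synthetic phase arc (metres) with a 7-cycle slip halfway through.
t = np.linspace(0, 1, 120)
phase = 0.5 + 2.0 * t - 0.3 * t**2
phase += np.random.default_rng(2).normal(0, 1e-4, t.size)
phase[60:] += 7 * L1_WAVELENGTH
print(repair_slip(t, phase, i_resume=60))   # expect slip estimate of -7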
Numerical scoring for the Classic BILAG index.
Cresswell, Lynne; Yee, Chee-Seng; Farewell, Vernon; Rahman, Anisur; Teh, Lee-Suan; Griffiths, Bridget; Bruce, Ian N; Ahmad, Yasmeen; Prabu, Athiveeraramapandian; Akil, Mohammed; McHugh, Neil; Toescu, Veronica; D'Cruz, David; Khamashta, Munther A; Maddison, Peter; Isenberg, David A; Gordon, Caroline
2009-12-01
To develop an additive numerical scoring scheme for the Classic BILAG index. SLE patients were recruited into this multi-centre cross-sectional study. At every assessment, data were collected on disease activity and therapy. Logistic regression was used to model an increase in therapy, as an indicator of active disease, by the Classic BILAG score in eight systems. As both indicate inactivity, scores of D and E were set to 0 and used as the baseline in the fitted model. The coefficients from the fitted model were used to determine the numerical values for Grades A, B and C. Different scoring schemes were then compared using receiver operating characteristic (ROC) curves. Validation analysis was performed using assessments from a single centre. There were 1510 assessments from 369 SLE patients. The currently used coding scheme (A = 9, B = 3, C = 1 and D/E = 0) did not fit the data well. The regression model suggested three possible numerical scoring schemes: (i) A = 11, B = 6, C = 1 and D/E = 0; (ii) A = 12, B = 6, C = 1 and D/E = 0; and (iii) A = 11, B = 7, C = 1 and D/E = 0. These schemes produced comparable ROC curves. Based on this, A = 12, B = 6, C = 1 and D/E = 0 seemed a reasonable and practical choice. The validation analysis suggested that although the A = 12, B = 6, C = 1 and D/E = 0 coding is still reasonable, a scheme with slightly less weighting for B, such as A = 12, B = 5, C = 1 and D/E = 0, may be more appropriate. A reasonable additive numerical scoring scheme based on treatment decision for the Classic BILAG index is A = 12, B = 5, C = 1, D = 0 and E = 0. PMID:19779027
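The derivation of integer weights from a fitted logistic regression, and the ROC comparison of candidate schemes, can be sketched as follows. The data are hypothetical: per-assessment counts of Grade A, B, and C scores stand in for the eight BILAG systems, and the coefficients are synthetic rather than the study's estimates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical data: counts of Grade A, B, C per assessment, plus an
# indicator of treatment increase (the marker of active disease).
n = 500
counts = rng.poisson([0.3, 0.8, 1.2], size=(n, 3))          # A, B, C counts
logit = -2.0 + 1.9 * counts[:, 0] + 1.0 * counts[:, 1] + 0.17 * counts[:, 2]
increase = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(counts, increase)
coef = model.coef_[0]
# Scale coefficients so Grade C has weight 1, then round to integers.
weights = np.round(coef / coef[2]).astype(int)
print("derived weights (A, B, C):", weights)

# Compare candidate additive schemes by ROC area.
for scheme in ([9, 3, 1], [12, 6, 1], list(weights)):
    score = counts @ np.array(scheme)
    print(scheme, "AUC =", round(roc_auc_score(increase, score), 3))
```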
Possible Transit Timing Variations of the TrES-3 Planetary System
NASA Astrophysics Data System (ADS)
Jiang, Ing-Guey; Yeh, Li-Chin; Thakur, Parijat; Wu, Yu-Ting; Chien, Ping; Lin, Yi-Ling; Chen, Hong-Yu; Hu, Juei-Hwa; Sun, Zhao; Ji, Jianghui
2013-03-01
Five newly observed transit light curves of the TrES-3 planetary system are presented. Together with other light-curve data from the literature, 23 transit light curves in total, covering an overall span of 911 epochs, have been analyzed through a standard procedure. From these observational data, the system's orbital parameters are determined and possible transit timing variations (TTVs) are investigated. Given that a null TTV produces a fit with reduced χ² = 1.52, our results agree with previous work that TTVs might not exist in these data. However, a one-frequency oscillating TTV model, giving a fit with reduced χ² = 0.93, does possess a statistically higher probability. It is thus concluded that future observations and dynamical simulations for this planetary system will be very important.
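The model comparison reported here, a linear ephemeris against a one-frequency oscillating TTV, reduces to fitting both models to the mid-transit times and comparing reduced χ². A minimal sketch on synthetic timings (period, epoch zero-point, and TTV amplitude are all illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic mid-transit times: linear ephemeris plus a small sinusoid.
epochs = np.arange(0, 900, 40)
P0, T0 = 1.30619, 2454185.9            # days; illustrative values
tt = T0 + P0 * epochs + 0.0008 * np.sin(2 * np.pi * epochs / 230)
err = np.full(epochs.size, 0.0006)
tt += np.random.default_rng(3).normal(0, err)

def linear(e, T0, P):
    return T0 + P * e

def ttv(e, T0, P, A, f, phi):
    return T0 + P * e + A * np.sin(2 * np.pi * f * e + phi)

for model, p0, npar in ((linear, [T0, P0], 2),
                        (ttv, [T0, P0, 1e-3, 1 / 230, 0.0], 5)):
    popt, _ = curve_fit(model, epochs, tt, p0=p0, sigma=err)
    chi2 = np.sum(((tt - model(epochs, *popt)) / err) ** 2)
    print(model.__name__, "reduced chi2 =",
          round(chi2 / (epochs.size - npar), 2))
```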
Fitting the post-keratoplasty cornea with hydrogel lenses.
Katsoulos, Costas; Nick, Vasileiou; Lefteris, Karageorgiadis; Theodore, Mousafeiropoulos
2009-02-01
We report two patients (three eyes in total) who had undergone penetrating keratoplasty and who were fitted with hydrogel lenses. In the first case, a 28-year-old male presented with an interest in contact lens fitting. He had undergone corneal transplantation in both eyes about 5 years earlier. After topography and trial fitting, reverse-geometry hydrogel lenses were chosen because of the globular geometry of the cornea, the resulting instability of RGP lenses, and personal preference. In the second case, a 26-year-old female who had also undergone penetrating keratoplasty was fitted with a hydrogel toric lens of high cylinder in the right eye. The final hydrogel lenses for the first patient incorporated a custom tricurve design, in which the second curve was steeper than the base curve and the third curve flatter than the second but still steeper than the first. Visual acuity was 6/7.5 RE and a mediocre 6/15 LE (OU 6/7.5). The second patient achieved 6/4.5 acuity RE with the high-cylinder hydrogel toric lens. In corneas exhibiting extreme protrusion, such as keratoglobus and some cases after penetrating keratoplasty, curvatures are so extreme and the cornea so globular that the fitting options narrow to sclerals, small-diameter RGPs, and reverse-geometry hydrogel lenses, chosen to improve lens and optical stability. In selected cases such as these, a large-diameter inverse-geometry RGP may be fitted only if the eyelid shape and tension permit it. The first case demonstrates that hydrogel lenses are a viable option when the patient has no interest in RGPs and can, in certain cases, improve vision to satisfactory levels. In other cases, graft toricity may be so high that the practitioner must employ hydrogel torics with large amounts of cylinder to correct vision. In such cases, the patient should be monitored closely to avoid complications from hypoxia.
Applications of data compression techniques in modal analysis for on-orbit system identification
NASA Technical Reports Server (NTRS)
Carlin, Robert A.; Saggio, Frank; Garcia, Ephrahim
1992-01-01
Data compression techniques have been investigated for use with modal analysis applications. A redundancy-reduction algorithm was used to compress frequency response functions (FRFs) in order to reduce the amount of disk space necessary to store the data and/or save time in processing it. Tests were performed for both single- and multiple-degree-of-freedom (SDOF and MDOF, respectively) systems, with varying amounts of noise. Analysis was done on both the compressed and uncompressed FRFs using an SDOF Nyquist curve fit as well as the Eigensystem Realization Algorithm. Significant savings were realized with minimal errors incurred by the compression process.
Yaxx: Yet another X-ray extractor
NASA Astrophysics Data System (ADS)
Aldcroft, Tom
2013-06-01
Yaxx is a Perl script that facilitates batch data processing using Perl open source software and commonly available software such as CIAO/Sherpa, S-lang, SAS, and FTOOLS. For Chandra and XMM analysis it includes automated spectral extraction, fitting, and report generation. Yaxx can be run without climbing an extensive learning curve; even so, yaxx is highly configurable and can be customized to support complex analysis. yaxx uses template files and takes full advantage of the unique Sherpa / S-lang environment to make much of the processing user configurable. Although originally developed with an emphasis on X-ray data analysis, yaxx evolved to be a general-purpose pipeline scripting package.
Detailed Uncertainty Analysis for Ares I Ascent Aerodynamics Wind Tunnel Database
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Hanke, Jeremy L.; Walker, Eric L.; Houlden, Heather P.
2008-01-01
A detailed uncertainty analysis for the Ares I ascent aero 6-DOF wind tunnel database is described. While the database itself is determined using only the test results for the latest configuration, the data used for the uncertainty analysis comes from four tests on two different configurations at the Boeing Polysonic Wind Tunnel in St. Louis and the Unitary Plan Wind Tunnel at NASA Langley Research Center. Four major error sources are considered: (1) systematic errors from the balance calibration curve fits and model + balance installation, (2) run-to-run repeatability, (3) boundary-layer transition fixing, and (4) tunnel-to-tunnel reproducibility.
NASA Astrophysics Data System (ADS)
Su, Ray Kai Leung; Lee, Chien-Liang
2013-06-01
This study presents a seismic fragility analysis and ultimate spectral displacement assessment of regular low-rise masonry infilled (MI) reinforced concrete (RC) buildings using a coefficient-based method. The coefficient-based method does not require a complicated finite element analysis; instead, it is a simplified procedure for assessing the spectral acceleration and displacement of buildings subjected to earthquakes. A regression analysis was first performed to obtain the best-fitting equations for the inter-story drift ratio (IDR) and period shift factor of low-rise MI RC buildings in response to the peak ground acceleration of earthquakes, using published results obtained from shaking table tests. Both spectral acceleration- and spectral displacement-based fragility curves under various damage states (in terms of IDR) were then constructed using the coefficient-based method. Finally, the spectral displacements of low-rise MI RC buildings at the ultimate (or near-collapse) state obtained from this paper and the literature were compared. The simulation results indicate that the fragility curves obtained from this study and other previous work correspond well. Furthermore, most of the spectral displacements of low-rise MI RC buildings at the ultimate state from the literature fall within the bounded spectral displacements predicted by the coefficient-based method.
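Fragility curves of this kind are commonly parameterized as lognormal CDFs of the intensity measure. A minimal sketch with illustrative exceedance data; the lognormal form is a standard convention in fragility analysis, not necessarily the exact functional form used in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fragility(im, theta, beta):
    """Lognormal fragility: P(damage state exceeded | intensity measure),
    with median capacity theta and logarithmic dispersion beta."""
    return norm.cdf(np.log(im / theta) / beta)

# Illustrative data: PGA levels (g) and observed exceedance fractions
# for one damage state (e.g., an inter-story drift threshold).
pga = np.array([0.05, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8])
frac = np.array([0.02, 0.08, 0.30, 0.55, 0.72, 0.90, 0.97])

(theta, beta), _ = curve_fit(fragility, pga, frac, p0=[0.3, 0.5])
print(f"median capacity = {theta:.3f} g, dispersion = {beta:.3f}")
```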
Tan, Lavinia; Hackenberg, Timothy D
2015-11-01
Pigeons' demand and preference for specific and generalized tokens were examined in a token economy. Pigeons could produce and exchange different colored tokens for food, for water, or for food or water. Token production was measured across three phases, which examined: (1) across-session price increases (typical demand curve method); (2) within-session price increases (progressive-ratio, PR, schedule); and (3) concurrent pairwise choices between the token types. Exponential demand curves were fitted to the response data and accounted for over 90% of the total variance. Demand curve parameter values Pmax, Omax, and α showed that demand was ordered in the following way: food tokens, generalized tokens, water tokens, both in Phase 1 and in Phase 3. This suggests that the preferences were predictable on the basis of elasticity and response output from the demand analysis. Pmax and Omax values failed to consistently predict breakpoints and peak response rates in the PR schedules in Phase 2, however, suggesting limits on a unitary conception of reinforcer efficacy. The patterns of generalized token production and exchange in Phase 3 suggest that the generalized tokens served as substitutes for the specific food and water tokens. Taken together, the present findings demonstrate the utility of behavioral economic concepts in the analysis of generalized reinforcement.
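Fitting the exponential demand equation of Hursh and Silberberg, log10 Q = log10 Q0 + k(e^(-α·Q0·C) - 1), is a small nonlinear least-squares problem. A minimal sketch with illustrative consumption data and the span parameter k held fixed; note that Pmax is proportional to 1/(α·Q0), with an exact constant that depends on k:

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential_demand(price, Q0, alpha, k=3.0):
    """Hursh & Silberberg exponential demand, k fixed for simplicity:
    log10 Q = log10 Q0 + k * (exp(-alpha * Q0 * price) - 1)."""
    return np.log10(Q0) + k * (np.exp(-alpha * Q0 * price) - 1)

# Illustrative consumption data: tokens earned vs. fixed-ratio price.
price = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)
consumption = np.array([98, 95, 88, 70, 45, 20, 6], dtype=float)

(Q0, alpha), _ = curve_fit(exponential_demand, price,
                           np.log10(consumption), p0=[100, 1e-4])
print(f"Q0 = {Q0:.1f}, alpha = {alpha:.2e}")
print(f"Pmax ~ 1/(alpha*Q0) = {1 / (alpha * Q0):.1f} (up to a k-dependent factor)")
```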
Comprehensive Study of Plasma-Wall Sheath Transport Phenomena
2016-10-26
function of the applied thermo-mechanical stress. An experiment was designed to test whether and how the process of plasma erosion might depend on ...of exposed surface, a, b) pretest height and laser image, c, d) post-test height and laser image. For the following analysis, a curve fit of the...normal to the ion beam. However, even with a one-dimensional simulation, features of a similar depth and profile to the post-test surface develop
XAFS Study of Molten ZrCl4 in LiCl-KCl Eutectic
NASA Astrophysics Data System (ADS)
Okamoto, Yoshihiro; Motohashi, Haruhiko
2002-05-01
The local structure of molten ZrCl4 in LiCl-KCl eutectic was investigated using X-ray absorption fine structure (XAFS) measurements at the Zr K-absorption edge. The nearest Zr4+-Cl- distance and coordination number from the curve-fitting analysis were (2.51 ± 0.02) Å and 5.9 ± 0.6, respectively. These suggest that a 6-fold coordination (ZrCl6)2- is predominant in the molten mixture.
The complex lightcurve of 1992 NA
NASA Technical Reports Server (NTRS)
Wisniewski, Wieslaw Z.; Harris, A. W.
1994-01-01
Amor asteroid 1992 NA was monitored during three nights at a large phase angle of -65 deg. The lightcurves obtained did not reveal a repeatable curve with two maxima and two minima. However, some features suggested a periodicity with three maxima and three minima. A satisfactory composite lightcurve of this form was obtained by means of an 'eyeball' fit and by Fourier analysis. Individual and composite lightcurves are presented. The observed colors are consistent with the C class.
An independent software system for the analysis of dynamic MR images.
Torheim, G; Lombardi, M; Rinck, P A
1997-01-01
A computer system for the manual, semi-automatic, and automatic analysis of dynamic MR images was to be developed on UNIX and personal computer platforms. The system was to offer an integrated and standardized way of performing both image processing and analysis that was independent of the MR unit used. The system consists of modules that are easily adaptable to special needs. Data from MR units or other diagnostic imaging equipment in techniques such as CT, ultrasonography, or nuclear medicine can be processed through the ACR-NEMA/DICOM standard file formats. A full set of functions is available, among them cine-loop visual analysis and the generation of time-intensity curves. Parameters such as cross-correlation coefficients, area under the curve, peak/maximum intensity, wash-in and wash-out slopes, time to peak, and relative signal intensity/contrast enhancement can be calculated. Other parameters can be extracted by fitting functions like the gamma-variate function. Region-of-interest data and parametric values can easily be exported. The system has been successfully tested in animal and patient examinations.
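The gamma-variate fit mentioned above is a standard first-pass bolus model. A minimal sketch on synthetic time-intensity data (all parameter values illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, K, t0, alpha, beta):
    """First-pass bolus model: zero before arrival time t0, then
    K * (t - t0)**alpha * exp(-(t - t0) / beta)."""
    dt = np.clip(t - t0, 0, None)
    return K * dt**alpha * np.exp(-dt / beta)

t = np.linspace(0, 60, 120)                       # s
signal = gamma_variate(t, 5.0, 8.0, 2.2, 4.5)
signal += np.random.default_rng(7).normal(0, 0.5, t.size)

popt, _ = curve_fit(gamma_variate, t, signal, p0=[4, 7, 2, 4])
K, t0, alpha, beta = popt
peak_time = t0 + alpha * beta        # analytic time to peak of the model
print(f"time to peak = {peak_time:.1f} s")
```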
NASA Astrophysics Data System (ADS)
Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan
2016-12-01
This study investigated the multiple-choice Test of Understanding of Vectors (TUV) by applying item response theory (IRT). The difficulty, discrimination, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the parscale program. The TUV ability is an ability parameter, here estimated assuming unidimensionality and local independence. Moreover, all distractors of the TUV were analyzed from item response curves (IRC), which represent a simplified form of IRT. Data were gathered on 2392 science and engineering freshmen from three universities in Thailand. The results revealed IRT analysis to be useful in assessing the test, since its item parameters are independent of the ability parameters. The IRT framework reveals item-level information and indicates appropriate ability ranges for the test. Moreover, the IRC analysis can be used to assess the effectiveness of the test's distractors. Both IRT and IRC approaches reveal test characteristics beyond those revealed by classical test-analysis methods. Test developers can apply these methods to diagnose and evaluate the features of items at various ability levels of test takers.
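The three-parameter logistic (3PL) model gives the probability of a correct response as P(θ) = c + (1 - c)/(1 + e^(-a(θ - b))). A minimal sketch fitting one item characteristic curve to binned proportions (synthetic data; parscale estimates parameters by marginal maximum likelihood, so this least-squares shortcut is only an illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def three_pl(theta, a, b, c):
    """3PL item characteristic curve: discrimination a, difficulty b,
    guessing (lower asymptote) c."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

# Illustrative: proportion correct in ability bins for one TUV-like item.
theta = np.linspace(-3, 3, 13)
p_obs = three_pl(theta, 1.4, 0.3, 0.22)
p_obs += np.random.default_rng(2).normal(0, 0.02, theta.size)

(a, b, c), _ = curve_fit(three_pl, theta, p_obs, p0=[1, 0, 0.2],
                         bounds=([0.1, -4, 0], [4, 4, 0.5]))
print(f"a = {a:.2f}, b = {b:.2f}, c = {c:.2f}")
```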
Radial artery pulse waveform analysis based on curve fitting using discrete Fourier series.
Jiang, Zhixing; Zhang, David; Lu, Guangming
2018-04-19
Radial artery pulse diagnosis has long played an important role in traditional Chinese medicine (TCM). Because it is noninvasive and convenient, pulse diagnosis also holds great significance for disease analysis in modern medicine. Practitioners sense the pulse waveforms at the patient's wrist and make diagnoses based on subjective personal experience. With research on pulse acquisition platforms and computerized analysis methods, objective studies of pulse diagnosis can help TCM keep pace with the development of modern medicine. In this paper, we propose a new method to extract features from pulse waveforms based on discrete Fourier series (DFS). It regards the waveform as a signal consisting of a series of sub-components represented by sine and cosine signals with different frequencies and amplitudes. After the pulse signals are collected and preprocessed, we fit the average waveform for each sample with a discrete Fourier series by least squares. The feature vector comprises the coefficients of the discrete Fourier series function. Compared with a fitting method using a Gaussian mixture function, the fitting errors of the proposed method are smaller, indicating that our method represents the original signal better. The classification performance of the proposed feature is superior to that of other features extracted from the waveform, such as auto-regression and Gaussian mixture model features. The coefficients of the optimized DFS function used to fit the arterial pressure waveforms achieve better performance in modeling the waveforms and hold more potential information for distinguishing different psychological states.
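Because the sine and cosine terms enter linearly, a truncated discrete Fourier series can be fitted by ordinary linear least squares. A minimal sketch on a synthetic pulse-like waveform; the harmonic count and waveform shape are illustrative:

```python
import numpy as np

def dfs_features(t, y, period, n_harmonics=10):
    """Fit a truncated discrete Fourier series to one averaged pulse
    period by linear least squares; return the coefficient vector
    (the feature vector) and the fitted waveform."""
    w = 2 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * w * t), np.sin(k * w * t)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs, A @ coeffs

# Synthetic radial-pulse-like waveform (one period, illustrative).
t = np.linspace(0, 1, 200)
y = (np.exp(-((t - 0.15) / 0.05) ** 2)
     + 0.4 * np.exp(-((t - 0.45) / 0.08) ** 2))
coeffs, y_fit = dfs_features(t, y, period=1.0)
print("fitting RMSE:", np.sqrt(np.mean((y - y_fit) ** 2)))
```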
Dung, Van Than; Tjahjowidodo, Tegoeh
2017-01-01
B-spline functions are widely used in many industrial applications such as computer graphic representation, computer-aided design, computer-aided manufacturing, computer numerical control, etc. Recently, demands have arisen, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps, or turning points in the sampled data. The most challenging task in these cases is identifying the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve with B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data are split using a bisecting method with a predetermined allowable error to obtain coarse knots. Second, the knots are optimized, for both location and continuity level, by employing a non-linear least-squares technique. The B-spline function is therefore obtained by solving an ordinary least-squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and deterministic parametric functions. This paper also discusses benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, ranging from smooth to discontinuous. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
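The paper's two-step knot algorithm is not available in standard libraries, but SciPy's splrep illustrates the same trade-off: a smoothing tolerance governs how many knots are placed automatically. A minimal sketch on data with a turning point (this is SciPy's knot strategy, not the authors' bisection/optimization scheme):

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Data with a sharp turning point, sampled with light noise.
x = np.linspace(0, 2, 200)
y = np.abs(x - 1.0) + np.random.default_rng(5).normal(0, 0.005, x.size)

# splrep chooses knots automatically; the smoothing factor s plays the
# role of an allowable-error tolerance.
k = 3                                    # cubic B-spline
tck = splrep(x, y, k=k, s=x.size * 0.005 ** 2)
knots = tck[0]
print("interior knots placed:", len(knots) - 2 * (k + 1))
print("max fit error:", np.max(np.abs(splev(x, tck) - y)))
```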
NASA Astrophysics Data System (ADS)
Katz, Harley; Lelli, Federico; McGaugh, Stacy S.; Di Cintio, Arianna; Brook, Chris B.; Schombert, James M.
2017-04-01
Cosmological N-body simulations predict dark matter (DM) haloes with steep central cusps (e.g. NFW). This contradicts observations of gas kinematics in low-mass galaxies that imply the existence of shallow DM cores. Baryonic processes such as adiabatic contraction and gas outflows can, in principle, alter the initial DM density profile, yet their relative contributions to the halo transformation remain uncertain. Recent high-resolution, cosmological hydrodynamic simulations by Di Cintio et al. (DC14) predict that inner density profiles depend systematically on the ratio of stellar-to-DM mass (M*/Mhalo). Using a Markov Chain Monte Carlo approach, we test the NFW and the M*/Mhalo-dependent DC14 halo models against a sample of 147 galaxy rotation curves from the new Spitzer Photometry and Accurate Rotation Curves data set. These galaxies all have extended H I rotation curves from radio interferometry as well as accurate stellar-mass-density profiles from near-infrared photometry. The DC14 halo profile provides markedly better fits to the data compared to the NFW profile. Unlike NFW, the DC14 halo parameters found in our rotation-curve fits naturally fall within two standard deviations of the mass-concentration relation predicted by Λ cold dark matter (ΛCDM) and the stellar mass-halo mass relation inferred from abundance matching, with few outliers. Halo profiles modified by baryonic processes are therefore more consistent with expectations from ΛCDM cosmology and provide better fits to galaxy rotation curves across a wide range of galaxy properties than do halo models that neglect baryonic physics. Our results offer a solution to the decade-long cusp-core discrepancy.
NASA Astrophysics Data System (ADS)
Mandel, Kaisey S.; Scolnic, Daniel M.; Shariff, Hikmatali; Foley, Ryan J.; Kirshner, Robert P.
2017-06-01
Conventional Type Ia supernova (SN Ia) cosmology analyses currently use a simplistic linear regression of magnitude versus color and light curve shape, which does not model intrinsic SN Ia variations and host galaxy dust as physically distinct effects, resulting in low color-magnitude slopes. We construct a probabilistic generative model for the dusty distribution of extinguished absolute magnitudes and apparent colors as the convolution of an intrinsic SN Ia color-magnitude distribution and a host galaxy dust reddening-extinction distribution. If the intrinsic color-magnitude (M_B versus B - V) slope β_int differs from the host galaxy dust law R_B, this convolution results in a specific curve of mean extinguished absolute magnitude versus apparent color. The derivative of this curve smoothly transitions from β_int in the blue tail to R_B in the red tail of the apparent color distribution. The conventional linear fit approximates this effective curve near the average apparent color, resulting in an apparent slope β_app between β_int and R_B. We incorporate these effects into a hierarchical Bayesian statistical model for SN Ia light curve measurements, and analyze a data set of SALT2 optical light curve fits of 248 nearby SNe Ia at z < 0.10. The conventional linear fit gives β_app ≈ 3. Our model finds β_int = 2.3 ± 0.3 and a distinct dust law of R_B = 3.8 ± 0.3, consistent with the average for Milky Way dust, while correcting a systematic distance bias of ~0.10 mag in the tails of the apparent color distribution. Finally, we extend our model to examine the SN Ia luminosity-host mass dependence in terms of intrinsic and dust components.
Long-term creep characterization of Gr. 91 steel by modified creep constitutive equations
NASA Astrophysics Data System (ADS)
Kim, Woo-Gon; Kim, Sung-Ho; Lee, Chan-Bock
2011-06-01
This paper focuses on the long-term creep characterization of Gr. 91 steel using creep constitutive equations. Three such models are proposed: a combination of power-law form and omega model (CPO), a combination of exponential form and omega model (CEO), and a combination of logarithmic form and omega model (CLO), each described as the sum of decaying primary creep and accelerating tertiary creep. A series of creep rupture data was obtained through creep tests with various applied loads at 600 °C. On the basis of the creep data, a nonlinear least-squares fitting (NLSF) analysis was carried out to provide the best fit to the experimental data while optimizing the parameter constants of each individual equation. The NLSF analysis showed that at stresses below 160 MPa (σ/σ_ys < 0.65), the CEO model matched the experimental creep data comparably to the CPO and CLO models; however, at stresses above 160 MPa (σ/σ_ys > 0.65), the CPO model showed better agreement than the other two models. The CEO model was found to be superior to the CPO and CLO models in modeling long-term creep curves. Using the CEO model, the long-term creep curves of Gr. 91 steel were numerically characterized, and its creep life was predicted accurately.
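A CEO-type curve can be written, for illustration, as a saturating-exponential primary term plus an omega-model tertiary term that accelerates toward rupture; the exact parameterization below is an assumption for demonstration, not the paper's equation. The NLSF step then reduces to a nonlinear least-squares fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def ceo_like(t, eps_p, r, eps_dot, omega):
    """Illustrative CEO-type creep curve: exponential primary creep that
    saturates at eps_p, plus an omega-model tertiary term (minimum creep
    rate eps_dot, acceleration parameter omega)."""
    tail = np.clip(1.0 - omega * eps_dot * t, 1e-9, None)
    return eps_p * (1.0 - np.exp(-r * t)) - np.log(tail) / omega

t = np.linspace(0, 900, 90)                     # h, illustrative
strain = ceo_like(t, 0.004, 0.02, 2e-6, 300)
strain += np.random.default_rng(4).normal(0, 5e-5, t.size)

popt, _ = curve_fit(ceo_like, t, strain, p0=[0.003, 0.01, 1e-6, 200],
                    maxfev=20000)
print(dict(zip(["eps_p", "r", "eps_dot_min", "omega"], popt)))
```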
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaczmarski, Krzysztof; Guiochon, Georges A
The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N = 500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
Validation of CRIB II for prediction of mortality in premature babies.
Rastogi, Pallav Kumar; Sreenivas, V; Kumar, Nirmal
2010-02-01
Validation of the Clinical Risk Index for Babies (CRIB II) score in predicting neonatal mortality in preterm neonates ≤ 32 weeks gestational age. Prospective cohort study. Tertiary care neonatal unit. 86 consecutively born preterm neonates with gestational age ≤ 32 weeks. The five variables related to CRIB II were recorded within the first hour of admission for data analysis. The receiver operating characteristic (ROC) curve was used to check the accuracy of the mortality prediction. The Hosmer-Lemeshow goodness-of-fit test was used to assess the discrepancy between observed and expected outcomes. A total of 86 neonates (males 59.6%; mean birthweight 1228 ± 398 g; mean gestational age 28.3 ± 2.4 weeks) were enrolled in the study, of which 17 (19.8%) left hospital against medical advice (LAMA) before reaching the study end point. Among the 69 neonates completing the study, 24 (34.8%) had an adverse outcome during the hospital stay and 45 (65.2%) had a favorable outcome. CRIB II correctly predicted adverse outcome in 90.3% (Hosmer-Lemeshow goodness-of-fit test P = 0.6). The area under the curve (AUC) for CRIB II was 0.9032. In an intention-to-treat analysis with LAMA cases included as survivors, the mortality prediction was 87%; if they were instead counted as having died, the mortality prediction was 83.1%. The CRIB II score was found to be a good predictive instrument for mortality in preterm infants ≤ 32 weeks gestation.
NASA Astrophysics Data System (ADS)
Thomas, Christian L.
2006-06-01
Analysis and results (Chapters 2-5) of the full 7 year Macho Project dataset toward the Galactic bulge are presented. A total of 450 high quality, relatively large signal-to-noise ratio, events are found, including several events exhibiting exotic effects, and lensing events on possible Sagittarius dwarf galaxy stars. We examine the problem of blending in our sample and conclude that the subset of red clump giants is minimally blended. Using 42 red clump giant events near the Galactic center we calculate the optical depth toward the Galactic bulge to be τ = [Special characters omitted.] × 10^-6 at (l, b) = ([Special characters omitted.]) with a gradient of (1.06 ± 0.71) × 10^-6 deg^-1 in latitude, and (0.29 ± 0.43) × 10^-6 deg^-1 in longitude, bringing measurements into consistency with the models for the first time. In Chapter 6 we reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g. Wozniak & Paczynski) that this is a difficult proposition because of the degeneracy of the blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (peak region and wings) of high magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction, and study the importance of non-Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate microlensing optical depth. In Chapter 7 we present work-in-progress on the possibility of correcting standard candle luminosities for the magnification due to weak lensing. We consider the importance of lenses in different mass ranges and look at the contribution from lenses that could not be observed. We conclude that it may be possible to perform this correction with relatively high precision (1-2%) and discuss possible sources of error and methods of improving our model.
Peripheral absolute threshold spectral sensitivity in retinitis pigmentosa.
Massof, R W; Johnson, M A; Finkelstein, D
1981-01-01
Dark-adapted spectral sensitivities were measured in the peripheral retinas of 38 patients diagnosed as having typical retinitis pigmentosa (RP) and in 3 normal volunteers. The patients included those having autosomal dominant and autosomal recessive inheritance patterns. Results were analysed by comparisons with the CIE standard scotopic spectral visibility function and with Judd's modification of the photopic spectral visibility function, with consideration of contributions from changes in spectral transmission of preretinal media. The data show 3 general patterns. One group of patients had absolute threshold spectral sensitivities that were fit by Judd's photopic visibility curve. Absolute threshold spectral sensitivities for a second group of patients were fit by a normal scotopic spectral visibility curve. The third group of patients had absolute threshold spectral sensitivities that were fit by a combination of scotopic and photopic spectral visibility curves. The autosomal dominant and autosomal recessive modes of inheritance were represented in each group of patients. These data indicate that RP patients have normal rod and/or cone spectral sensitivities, and support the subclassification of patients described previously by Massof and Finkelstein. PMID:7459312
On Correlated-noise Analyses Applied to Exoplanet Light Curves
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Harrington, Joseph; Loredo, Thomas J.; Lust, Nate B.; Blecic, Jasmina; Stemm, Madison
2017-01-01
Time-correlated noise is a significant source of uncertainty when modeling exoplanet light-curve data. A correct assessment of correlated noise is fundamental to determine the true statistical significance of our findings. Here, we review three of the most widely used correlated-noise estimators in the exoplanet field: the time-averaging, residual-permutation, and wavelet-likelihood methods. We argue that the residual-permutation method is unsound in estimating the uncertainty of parameter estimates and thus recommend refraining from this method altogether. We characterize the behavior of the time averaging's rms-versus-bin-size curves at bin sizes similar to the total observation duration, which may lead to underestimated uncertainties. For the wavelet-likelihood method, we note errors in the published equations and provide a list of corrections. We further assess the performance of these techniques by injecting and retrieving eclipse signals into synthetic and real Spitzer light curves, analyzing the results in terms of the relative-accuracy and coverage-fraction statistics. Both the time-averaging and wavelet-likelihood methods significantly improve the estimate of the eclipse depth over a white-noise analysis (a Markov-chain Monte Carlo exploration assuming uncorrelated noise). However, the corrections are not perfect: when retrieving the eclipse depth from Spitzer data sets, these methods covered the true (injected) depth within the 68% credible region in only ~45%-65% of the trials. Lastly, we present our open-source model-fitting tool, Multi-Core Markov-Chain Monte Carlo (MC3). This package uses Bayesian statistics to estimate the best-fitting values and the credible regions for the parameters of a (user-provided) model. MC3 is a Python/C code, available at https://github.com/pcubillos/MCcubed.
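The time-averaging diagnostic is easy to implement: bin the residuals at increasing bin sizes and compare the rms decline against the 1/√N expectation for white noise. A minimal sketch on synthetic residuals:

```python
import numpy as np

def rms_vs_binsize(residuals, bin_sizes):
    """Time-averaging test: rms of binned residuals vs. bin size.
    For pure white noise the curve follows rms(1) / sqrt(N)."""
    out = []
    for nbin in bin_sizes:
        m = (len(residuals) // nbin) * nbin
        binned = residuals[:m].reshape(-1, nbin).mean(axis=1)
        out.append(binned.std())
    return np.array(out)

rng = np.random.default_rng(6)
white = rng.normal(0, 1.0, 4096)
red = white + np.repeat(rng.normal(0, 0.5, 64), 64)  # add correlated part

sizes = np.array([1, 2, 4, 8, 16, 32, 64, 128])
for label, res in (("white", white), ("red", red)):
    ratio = rms_vs_binsize(res, sizes) / (res.std() / np.sqrt(sizes))
    print(label, np.round(ratio, 2))  # ratios >1 at large bins flag correlation
```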
Revisiting the Energy Budget of WASP-43b: Enhanced Day–Night Heat Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keating, Dylan; Cowan, Nicolas B.
The large day–night temperature contrast of WASP-43b has so far eluded explanation. We revisit the energy budget of this planet by considering the impact of reflected light on dayside measurements and the physicality of implied nightside temperatures. Previous analyses of the infrared eclipses of WASP-43b have assumed reflected light from the planet is negligible and can be ignored. We develop a phenomenological eclipse model including reflected light, thermal emission, and water absorption, and we use it to fit published Hubble and Spitzer eclipse data. We infer a near-infrared geometric albedo of 24% ± 1% and a cooler dayside temperature of 1483 ± 10 K. Additionally, we perform light curve inversion on the three published orbital phase curves of WASP-43b and find that each suggests unphysical, negative flux on the nightside. By requiring non-negative brightnesses at all longitudes, we correct the unphysical parts of the maps and obtain a much hotter nightside effective temperature of 1076 ± 11 K. The cooler dayside and hotter nightside suggest a heat recirculation efficiency of 51% for WASP-43b, essentially the same as for HD 209458b, another hot Jupiter with nearly the same temperature. Our analysis therefore reaffirms the trend that planets with lower irradiation temperatures have more efficient day–night heat transport. Moreover, we note that (1) reflected light may be significant for many near-IR eclipse measurements of hot Jupiters, and (2) phase curves should be fit with physically possible longitudinal brightness profiles; it is insufficient to only require that the disk-integrated light curve be non-negative.
1985-05-01
distribution, was evaluation of phase shift through best fit of assumed to be the beam response to the microwave theoretical curves and experimental...vibration sidebands o Acceleration as shown in the lower calculated curve. o High-Temperature Exposure o Thermal Vacuum Two of the curves show actual phase ...conclude that the method to measure the phase noise with spectrum estimation is workable, and it has no principle limitation. From the curve it has been
ERIC Educational Resources Information Center
Hester, Yvette
Least squares methods are sophisticated mathematical curve fitting procedures used in all classical parametric methods. The linear least squares approximation is most often associated with finding the "line of best fit" or the regression line. Since all statistical analyses are correlational and all classical parametric methods are least…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartman, J.S.; Gordon, R.L.; Lessor, D.L.
1981-08-01
Alternate measurement and data analysis procedures are discussed and compared for the application of reflective Nomarski differential interference contrast microscopy for the determination of surface slopes. The discussion includes the interpretation of a previously reported iterative procedure using the results of a detailed optical model and the presentation of a new procedure based on measured image intensity extrema. Surface slope determinations from these procedures are presented and compared with results from a previously reported curve fit analysis of image intensity data. The accuracy and advantages of the different procedures are discussed.
A study of fault prediction and reliability assessment in the SEL environment
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Patnaik, Debabrata
1986-01-01
An empirical study on estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory environment (SEL) is presented. Fault estimation using empirical relationships and fault prediction using curve fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided, in order to make an early estimate of future debugging effort. This study concludes with the fault analysis, application of a reliability model, and analysis of a normalized metric for reliability assessment and reliability monitoring during development of software.
Limb-darkening and the structure of the Jovian atmosphere
NASA Technical Reports Server (NTRS)
Newman, W. I.; Sagan, C.
1978-01-01
By observing the transit of various cloud features across the Jovian disk, limb-darkening curves were constructed for three regions in the 4.6 to 5.1 μm band. Several models currently employed in describing the radiative or dynamical properties of planetary atmospheres are here examined to understand their implications for limb-darkening. The statistical problem of fitting these models to the observed data is reviewed and methods for applying multiple regression analysis are discussed. Analysis of variance techniques are introduced to test the viability of a given physical process as a cause of the observed limb-darkening.
NASA Astrophysics Data System (ADS)
Nicholl, Matt; Guillochon, James; Berger, Edo
2017-11-01
We use the new Modular Open Source Fitter for Transients to model 38 hydrogen-poor superluminous supernovae (SLSNe). We fit their multicolor light curves with a magnetar spin-down model and present posterior distributions of magnetar and ejecta parameters. The color evolution can be fit with a simple absorbed blackbody. The medians (1σ ranges) for key parameters are spin period 2.4 ms (1.2-4 ms), magnetic field 0.8 × 10^14 G (0.2-1.8 × 10^14 G), ejecta mass 4.8 M⊙ (2.2-12.9 M⊙), and kinetic energy 3.9 × 10^51 erg (1.9-9.8 × 10^51 erg). This significantly narrows the parameter space compared to our uninformed priors, showing that although the magnetar model is flexible, the parameter space relevant to SLSNe is well constrained by existing data. The requirement that the instantaneous engine power is ~10^44 erg s^-1 at the light-curve peak necessitates either large rotational energy (P < 2 ms), or more commonly that the spin-down and diffusion timescales be well matched. We find no evidence for separate populations of fast- and slow-declining SLSNe, which instead form a continuum in light-curve widths and inferred parameters. Variations in the spectra are explained through differences in spin-down power and photospheric radii at maximum light. We find no significant correlations between model parameters and host galaxy properties. Comparing our posteriors to stellar evolution models, we show that SLSNe require rapidly rotating (fastest 10%) massive stars (≳ 20 M⊙), which is consistent with their observed rate. High mass, low metallicity, and likely binary interaction all serve to maintain the rapid rotation essential for magnetar formation. By reproducing the full set of light curves, our posteriors can inform photometric searches for SLSNe in future surveys.
NASA Astrophysics Data System (ADS)
Losekamm, M. J.; Milde, M.; Pöschl, T.; Greenwald, D.; Paul, S.
2017-02-01
Traditional radiation detectors can either measure the total radiation dose omnidirectionally (dosimeters) or determine an incoming particle's characteristics within a narrow field of view (spectrometers). Instantaneous measurements of anisotropic fluxes thus require several detectors, resulting in bulky setups. The Multi-purpose Active-target Particle Telescope (MAPT), employing a new detection principle, is designed to measure particle fluxes omnidirectionally and to be simultaneously a dosimeter and spectrometer. It consists of an active core of scintillating fibers whose light output is measured by silicon photomultipliers, and fits into a cube with an edge length of 10 cm. It identifies particles using extended Bragg curve spectroscopy, with sensitivity to charged particles with kinetic energies above 25 MeV. MAPT's unique layout results in a geometrical acceptance of approximately 800 cm2 sr and an angular resolution of less than 6°, which can be improved by track-fitting procedures. In a beam test of a simplified prototype, the energy resolution was found to be less than 1 MeV for protons with energies between 30 and 70 MeV. Possible applications of MAPT include the monitoring of radiation environments in spacecraft and beam monitoring in medical facilities.
NASA Astrophysics Data System (ADS)
Vieira, Daniel; Krems, Roman
2017-04-01
Fine-structure transitions in collisions of O(3Pj) with atomic hydrogen are an important cooling mechanism in the interstellar medium; knowledge of the rate coefficients for these transitions has a wide range of astrophysical applications. The accuracy of the theoretical calculation is limited by inaccuracy in the ab initio interaction potentials used in the coupled-channel quantum scattering calculations from which the rate coefficients can be obtained. In this work we use the latest ab initio results for the O(3Pj) + H interaction potentials to improve on previous calculations of the rate coefficients. We further present a machine-learning technique based on Gaussian Process regression to determine the sensitivity of the rate coefficients to variations of the underlying adiabatic interaction potentials. To account for the inaccuracy inherent in the ab initio calculations we compute error bars for the rate coefficients corresponding to 20% variation in each of the interaction potentials. We obtain these error bars by fitting a Gaussian Process model to a data set of potential curves and rate constants. We use the fitted model to do sensitivity analysis, determining the relative importance of individual adiabatic potential curves to a given fine-structure transition.
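A sketch of the sensitivity analysis described here, using scikit-learn's Gaussian process regressor on a hypothetical training set: scaling factors applied to three potential curves serve as inputs, and the response surface is a synthetic stand-in for the scattering calculation, not the authors' data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
# Each row scales the three adiabatic potentials by factors in [0.8, 1.2];
# y is the resulting rate coefficient (synthetic stand-in).
X = rng.uniform(0.8, 1.2, size=(40, 3))
y = 1.0 + 0.8 * (X[:, 0] - 1) + 0.1 * (X[:, 1] - 1) ** 2 - 0.3 * (X[:, 2] - 1)

gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=[0.2, 0.2, 0.2]),
    normalize_y=True).fit(X, y)

# Sensitivity: vary one potential at a time and record the GP prediction.
for i in range(3):
    grid = np.tile([1.0, 1.0, 1.0], (21, 1))
    grid[:, i] = np.linspace(0.8, 1.2, 21)
    pred = gp.predict(grid)
    print(f"potential {i}: predicted rate varies over {np.ptp(pred):.3f}")
```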
Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI
NASA Astrophysics Data System (ADS)
Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.
2017-12-01
Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least-squares (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments 'plasma' and 'interstitial volume' and their exchange in terms of plasma flow and capillary permeability. The model function can be defined by either a system of two coupled differential equations or a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness, and computation speed, depending on the parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured one and a theoretical population-based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated by either numeric integration with the Runge-Kutta method or by convolution. Furthermore, two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed to investigate the validity of our findings in real patient data. The convolution approach yields improved precision and robustness of the determined parameters. Precision and stability are limited in curves with low blood flow. The model parameter ve shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential-equation approach in several curve types. The convolution excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated by use of the differential equations. Fitting with the convolution approach is superior in computational time, with better stability and accuracy at the same time.
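The convolution evaluation can be illustrated with the simpler Tofts model, whose impulse response is a single exponential; the 2CXM is evaluated the same way with a sum of two exponentials. A minimal sketch with all values illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

dt = 2.0                                   # s, frame spacing (illustrative)
t = np.arange(0, 300, dt)
aif = 6.0 * (t / 30) * np.exp(-t / 30)     # simple population-like AIF

def tofts_conv(t, ktrans, kep):
    """Tissue curve as the convolution of the AIF with an exponential
    residue function; the same evaluation strategy generalizes to the
    2CXM, whose impulse response is a sum of two exponentials."""
    irf = ktrans * np.exp(-kep * t)
    return np.convolve(aif, irf)[: t.size] * dt

ct = tofts_conv(t, 0.12 / 60, 0.4 / 60)    # per-second rate constants
ct += np.random.default_rng(8).normal(0, 0.002, t.size)

popt, _ = curve_fit(tofts_conv, t, ct, p0=[0.002, 0.005])
print("Ktrans, kep (1/s):", popt)
```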
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokolov, M.A.; Nanstad, R.K.
1999-10-01
The current provisions for determination of the upward temperature shift of the lower-bound static fracture toughness curve due to irradiation of reactor pressure vessel steels are based on the assumption that the shifts are the same as the Charpy 41-J shifts resulting from irradiation. The objective of this paper is to evaluate this assumption relative to data reported in open publications. Depending on the specific source, different sizes of fracture toughness specimens, procedures for the K_Jc determination, and fitting functions were used. It was anticipated that the scatter might be reduced by using a consistent approach to analyze the published data. A method employing Weibull statistics is applied to analyze original fracture toughness data of unirradiated and irradiated pressure vessel steels. Application of the master curve concept is used to determine shifts of fracture toughness transition curves. A hyperbolic tangent function is used to fit Charpy absorbed energy data. The fracture toughness shifts are compared to Charpy impact shifts evaluated with various criteria. Linear regression analysis showed that for weld metals, on average, the fracture toughness shift is the same as the Charpy 41-J temperature shift, while for base metals, on average, the fracture toughness shift at 41 J is 16% greater than the shift of the Charpy 41-J transition temperature, with both correlations having relatively large 95% confidence intervals.
NASA Astrophysics Data System (ADS)
Mavroidis, Panayiotis; Lind, Bengt K.; Theodorou, Kyriaki; Laurell, Göran; Fernberg, Jan-Olof; Lefkopoulos, Dimitrios; Kappas, Constantin; Brahme, Anders
2004-08-01
The purpose of this work is to provide some statistical methods for evaluating the predictive strength of radiobiological models and the validity of dose-response parameters for tumour control and normal tissue complications. This is accomplished by associating the expected complication rates, which are calculated using different models, with the clinical follow-up records. These methods are applied to 77 patients who received radiation treatment for head and neck cancer and 85 patients who were treated for arteriovenous malformation (AVM). The three-dimensional dose distribution delivered to the esophagus and AVM nidus and the clinical follow-up results were available for each patient. Dose-response parameters derived by a maximum likelihood fitting were used as a reference to evaluate their compatibility with the examined treatment methodologies. The impact of the parameter uncertainties on the dose-response curves is demonstrated. The clinical utilization of the radiobiological parameters is illustrated. The radiobiological models (relative seriality and linear Poisson) and the reference parameters are validated to prove their suitability in reproducing the treatment outcome pattern of the patient material studied (through the probability of finding a worse fit, the area under the ROC curve, and the χ² test). The analysis was carried out for the upper 5 cm of the esophagus (proximal esophagus), where all the strictures are formed, and the total volume of the AVM. The estimated confidence intervals of the dose-response curves appear to have a significant supporting role in their clinical implementation and use.
Maximum safe speed estimation using planar quintic Bezier curve with C2 continuity
NASA Astrophysics Data System (ADS)
Ibrahim, Mohamad Fakharuddin; Misro, Md Yushalify; Ramli, Ahmad; Ali, Jamaludin Md
2017-08-01
This paper describes an alternative way of estimating the design speed, i.e. the maximum speed at which a vehicle can safely drive on a road, using curvature information from Bezier curve fitting on a map. We tested the method on routes along Tun Sardon Road, Balik Pulau, Penang, Malaysia. We propose using piecewise planar quintic Bezier curves that satisfy curvature continuity between joined curves when mapping the road. By taking derivatives of the quintic Bezier curve, the curvature was calculated and the design speed derived. In this paper, a higher order of Bezier curve is used: a higher-degree curve gives users more freedom to control its shape than a lower-degree one.
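A minimal sketch of the curvature-to-speed step, assuming illustrative control points and a comfortable lateral acceleration of 2 m/s^2 (neither is from the paper): evaluate the quintic Bezier and its first two derivatives, compute the planar curvature kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2), and bound the speed by v = sqrt(a_lat/kappa).

```python
import numpy as np
from math import comb

def bezier(P, t):
    """Evaluate a Bezier curve with control points P (shape (n+1, 2)) at parameters t."""
    n = len(P) - 1
    t = np.atleast_1d(t)[:, None]
    basis = [comb(n, i) * (1 - t)**(n - i) * t**i for i in range(n + 1)]
    return sum(b * p for b, p in zip(basis, P))

def derivative_points(P):
    """Control points of the hodograph (derivative curve)."""
    n = len(P) - 1
    return n * (P[1:] - P[:-1])

# Hypothetical quintic control points for one road segment (metres).
P = np.array([[0, 0], [20, 5], [40, 18], [60, 22], [80, 15], [100, 10]], float)
t = np.linspace(0, 1, 200)

d1 = bezier(derivative_points(P), t)                     # first derivative
d2 = bezier(derivative_points(derivative_points(P)), t)  # second derivative
kappa = np.abs(d1[:, 0]*d2[:, 1] - d1[:, 1]*d2[:, 0]) / (d1[:, 0]**2 + d1[:, 1]**2)**1.5

a_lat = 2.0                            # assumed comfortable lateral acceleration, m/s^2
v_max = np.sqrt(a_lat / kappa.max())   # speed limited by the tightest curvature
print(f"max curvature = {kappa.max():.4f} 1/m, design speed = {3.6*v_max:.1f} km/h")
```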
Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO
Li, Yang; Zhu, Zhichuan; Hou, Alin; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan
2018-01-01
Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results in this task. When based on grid search, however, the MKL-SVM algorithm needs a long optimization time for parameter tuning, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced: Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm into an MKL-SVM-PSO algorithm that rapidly finds globally optimized parameters. To obtain the global optimum, different inertia weights, namely constant, linear, and nonlinear inertia weights, are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of that of the MKL-SVM grid search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of results averaged over 20 runs with different inertia weights shows that dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic schemes, the nonlinear inertia weight has a shorter parameter optimization time, and its average fitness value after convergence is much closer to the optimal fitness value, making it better than the linear inertia weight. In addition, an improved nonlinear inertia weight is verified. PMID:29853983
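The sketch below illustrates the general pattern with assumptions throughout: a plain PSO with a linearly decreasing inertia weight (one of the schemes compared) tunes a single-kernel RBF SVM over (log10 C, log10 gamma) by cross-validated accuracy, standing in for the paper's multiple-kernel formulation; the swarm size, iteration count, bounds, and synthetic data are all illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def fitness(pos):
    """Cross-validated accuracy of an RBF-SVM at (log10 C, log10 gamma)."""
    C, gamma = 10.0**pos[0], 10.0**pos[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

n_particles, n_iter = 12, 20
lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])   # search box in log10 space
pos = rng.uniform(lo, hi, (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for it in range(n_iter):
    w = 0.9 - 0.5 * it / n_iter          # linear inertia weight, 0.9 -> 0.4
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w*vel + 2.0*r1*(pbest - pos) + 2.0*r2*(gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("best (log10 C, log10 gamma):", gbest, "CV accuracy:", pbest_f.max())
```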
NASA Astrophysics Data System (ADS)
Asal, Eren Karsu; Polymeris, George S.; Gultekin, Serdar; Kitis, George
2018-06-01
Thermoluminescence (TL) techniques are very useful in research on persistent luminescence (PL) phosphors. They give information about the existence of energy levels within the forbidden band, their activation energies, kinetic order, lifetimes, etc. The TL glow curve of the Sr4Al14O25:Eu2+,Dy3+ persistent phosphor consists of two well separated glow peaks. The TL techniques used to evaluate the activation energy were the initial rise method, prompt isothermal decay (PID) of TL of each peak at elevated temperatures, and glow-curve fitting. The behavior of the PID curves of the two peaks is very different. According to the results of the PID procedure and the subsequent data analysis, it is suggested that the mechanism behind the low temperature peak is a delocalized transition, whereas the mechanism behind the high temperature peak is a localized transition involving tunneling recombination between the electron trap and the luminescence center.
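As a hedged sketch of the initial rise method named above (synthetic numbers, not the paper's data): on the low-temperature rising edge of a glow peak, the TL intensity follows I proportional to exp(-E/(k_B*T)) regardless of kinetic order, so the slope of ln(I) against 1/T gives -E/k_B.

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

# Hypothetical points from the initial-rise region of one glow peak,
# where the intensity is still only a few percent of the peak maximum.
T = np.array([330.0, 335.0, 340.0, 345.0, 350.0])   # temperature, K
I = np.array([12.0, 21.0, 36.0, 60.0, 97.0])        # TL intensity, arb. units

# Initial-rise method: ln(I) is linear in 1/T with slope -E/k_B.
slope, intercept = np.polyfit(1.0 / T, np.log(I), 1)
E = -slope * k_B
print(f"activation energy E = {E:.2f} eV")
```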
NASA Astrophysics Data System (ADS)
Chitraningrum, Nidya; Chu, Ting-Yi; Huang, Ping-Tsung; Wen, Ten-Chin; Guo, Tzung-Fang
2018-02-01
We fabricate phenyl-substituted poly(p-phenylene vinylene) copolymer (super yellow, SY-PPV)-based polymer light-emitting diodes (PLEDs) with different device architectures to modulate the injection of opposite charge carriers and investigate the corresponding magnetoconductance (MC) responses. At first glance, all PLEDs exhibit positive MC responses. By fitting the curves with two empirical equations, a non-Lorentzian and a Lorentzian function, we are able to extract the hidden negative MC component from the positive MC curve. By analyzing the MC responses of PLEDs with charge-unbalanced and hole-blocking device configurations, we attribute the growth of the negative MC component to the reduced interaction of triplet excitons with charges to generate free charge carriers, as modulated by the applied magnetic field, known as the triplet exciton-charge reaction. The negative MC component causes the broadening of the line shape of the MC curves.
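A hedged sketch of this two-component decomposition on synthetic data; the specific line shapes, a Lorentzian B^2/(B^2 + B1^2) and a non-Lorentzian B^2/(|B| + B2)^2, are our assumption for the two empirical functions the abstract names, and a negative fitted amplitude plays the role of the hidden negative MC component.

```python
import numpy as np
from scipy.optimize import curve_fit

def mc_model(B, a, B1, b, B2):
    """Sum of a Lorentzian and a non-Lorentzian empirical MC term.
    A negative amplitude exposes a hidden negative MC component."""
    return a * B**2 / (B**2 + B1**2) + b * B**2 / (np.abs(B) + B2)**2

# Synthetic MC (%) versus magnetic field (mT); illustrative only.
B = np.linspace(-300, 300, 121)
true = mc_model(B, 3.0, 5.0, -1.2, 60.0)
mc = true + np.random.default_rng(1).normal(0, 0.02, B.size)

popt, _ = curve_fit(mc_model, B, mc, p0=[2.0, 10.0, -0.5, 50.0])
print("amplitudes (positive, negative):", popt[0], popt[2])
```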
Craniofacial Reconstruction Using Rational Cubic Ball Curves
Majeed, Abdul; Mt Piah, Abd Rahni; Gobithaasan, R. U.; Yahya, Zainor Ridzuan
2015-01-01
This paper proposes the reconstruction of craniofacial fractures using rational cubic Ball curves. Ball curves are chosen because they can be computed more efficiently than Bezier curves. The main steps are conversion of Digital Imaging and Communications in Medicine (DICOM) images to binary images, boundary extraction and corner point detection, Ball curve fitting with a genetic algorithm, and conversion of the final solution back to DICOM format. The last section illustrates a real case of craniofacial reconstruction using the proposed method, which clearly indicates its applicability. A Graphical User Interface (GUI) has also been developed for practical application. PMID:25880632
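For concreteness, a sketch of evaluating a rational cubic Ball curve; the cubic Ball basis (1-t)^2, 2t(1-t)^2, 2t^2(1-t), t^2 sums to one on [0, 1], and the control points and weights below are illustrative stand-ins for the ones the paper optimizes with a genetic algorithm against extracted boundary points.

```python
import numpy as np

def rational_cubic_ball(P, w, t):
    """Evaluate a rational cubic Ball curve.
    P: 4 control points, shape (4, 2); w: 4 positive weights; t in [0, 1]."""
    t = np.atleast_1d(t)[:, None]
    # Cubic Ball basis functions (partition of unity on [0, 1]).
    basis = np.hstack([(1 - t)**2,
                       2 * t * (1 - t)**2,
                       2 * t**2 * (1 - t),
                       t**2])
    num = (basis * w) @ P                            # weighted control-point sum
    den = (basis * w).sum(axis=1, keepdims=True)     # rational denominator
    return num / den

# Hypothetical control points (pixels) and weights for one boundary segment.
P = np.array([[0, 0], [30, 40], [70, 42], [100, 10]], float)
w = np.array([1.0, 2.0, 2.0, 1.0])
pts = rational_cubic_ball(P, w, np.linspace(0, 1, 50))  # sampled curve points
```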
NASA Astrophysics Data System (ADS)
Nelson, Daniel A.; Jacobs, Gustaaf B.; Kopriva, David A.
2016-08-01
The effect of curved-boundary representation on the physics of the separated flow over a NACA 65(1)-412 airfoil is thoroughly investigated. A method is presented to approximate curved boundaries with a high-order discontinuous-Galerkin spectral element method for the solution of the Navier-Stokes equations. Multiblock quadrilateral element meshes are constructed with the grid generation software GridPro. The boundary of a NACA 65(1)-412 airfoil, defined by a cubic natural spline, is piecewise-approximated by isoparametric polynomial interpolants that represent the edges of boundary-fitted elements. Direct numerical simulation of the airfoil is performed on a coarse mesh and a fine mesh with polynomial orders ranging from four to twelve. The accuracy of the curve fitting is investigated by comparing the flows computed on curved-sided meshes with those given by straight-sided meshes. Straight-sided meshes yield irregular wakes, whereas curved-sided meshes produce a regular Kármán vortex street wake. Straight-sided meshes also produce lower lift and higher viscous drag as compared with curved-sided meshes. When the mesh is refined by reducing the sizes of the elements, the lift decrease and viscous drag increase are less pronounced. The differences in aerodynamic performance between the straight-sided and curved-sided meshes are concluded to be the result of artificial surface roughness introduced by the piecewise-linear boundary approximation of the straight-sided meshes.
NASA Astrophysics Data System (ADS)
Bhattacharjee, Sudip; Swamy, Aravind Krishna; Daniel, Jo S.
2012-08-01
This paper presents a simple and practical approach to obtain the continuous relaxation and retardation spectra of asphalt concrete directly from complex (dynamic) modulus test data. The spectra thus obtained are continuous functions of relaxation and retardation time. The major advantage of this method is that the continuous form is obtained directly from the master curves, which are readily available from standard characterization tests of the linearly viscoelastic behavior of asphalt concrete. The continuous spectrum method offers an efficient alternative to the numerical computation of discrete spectra and can easily be used for modeling viscoelastic behavior. In this research, asphalt concrete specimens were tested for linearly viscoelastic characterization. The linearly viscoelastic test data were used to develop storage modulus and storage compliance master curves. The continuous spectra are obtained from the fitted sigmoid function of the master curves via the inverse integral transform. The continuous spectra are shown to be the limiting case of the discrete distributions. The continuous spectra and the time-domain viscoelastic functions (relaxation modulus and creep compliance) computed from the spectra matched very well with the approximate solutions. It is observed that the shape of the spectra depends on the master curve parameters. The continuous spectra thus obtained can easily be implemented in the material mix design process. Prony-series coefficients can readily be obtained from the continuous spectra and used in numerical analyses such as finite element analysis.
Early-Time Observations of the GRB 050319 Optical Transient
NASA Astrophysics Data System (ADS)
Quimby, R. M.; Rykoff, E. S.; Yost, S. A.; Aharonian, F.; Akerlof, C. W.; Alatalo, K.; Ashley, M. C. B.; Göǧüş, E.; Güver, T.; Horns, D.; Kehoe, R. L.; Kιzιloǧlu, Ü.; Mckay, T. A.; Özel, M.; Phillips, A.; Schaefer, B. E.; Smith, D. A.; Swan, H. F.; Vestrand, W. T.; Wheeler, J. C.; Wren, J.
2006-03-01
We present the unfiltered ROTSE-III light curve of the optical transient associated with GRB 050319 beginning 4 s after the cessation of γ-ray activity. We fit a power-law function to the data using the revised trigger time given by Chincarini and coworkers, and a smoothly broken power-law to the data using the original trigger disseminated through the GCN notices. Including the RAPTOR data from Woźniak and coworkers, the best-fit power-law indices are α = -0.854 ± 0.014 for the single power-law and α1 = -0.364 (+0.020/-0.019), α2 = -0.881 (+0.030/-0.031), with a break at tb = 418 (+31/-30) s for the smoothly broken fit. We discuss the fit results, with emphasis placed on the importance of knowing the true start time of the optical transient for this multipeaked burst. As Swift continues to provide prompt GRB locations, it becomes more important to answer the question, "when does the afterglow begin?" in order to correctly interpret the light curves.
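A minimal sketch of fitting a smoothly broken power law of this kind to synthetic fluxes; the functional form below, with a sharpness parameter s, is one common parameterization and an assumption rather than the authors' exact form.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_pl(t, A, a1, a2, tb, s=2.0):
    """Smoothly broken power law: ~ t**a1 for t << tb, ~ t**a2 for t >> tb;
    s controls the sharpness of the break (held fixed here)."""
    x = t / tb
    return A * (x**(-s * a1) + x**(-s * a2))**(-1.0 / s)

# Synthetic early-afterglow fluxes (arbitrary units) vs time since trigger (s).
t = np.logspace(1.5, 4, 40)
f_true = broken_pl(t, 50.0, -0.36, -0.88, 420.0)
f = f_true * (1 + np.random.default_rng(2).normal(0, 0.05, t.size))

# With a 4-element p0, curve_fit leaves the defaulted s untouched.
popt, _ = curve_fit(broken_pl, t, f, p0=[30.0, -0.3, -1.0, 300.0])
print("alpha1=%.3f alpha2=%.3f t_break=%.0f s" % (popt[1], popt[2], popt[3]))
```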
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeler, C; Bronk, L; UT Graduate School of Biomedical Sciences at Houston, Houston, TX
2015-06-15
Purpose: High throughput in vitro experiments assessing cell survival following proton radiation indicate that both the alpha and the beta parameters of the linear quadratic model increase with increasing proton linear energy transfer (LET). We investigated the relative biological effectiveness (RBE) of double-strand break (DSB) induction as a means of explaining the experimental results. Methods: Experiments were performed with two lung cancer cell lines and a range of proton LET values (0.94 – 19.4 keV/µm) using an experimental apparatus designed to irradiate cells in a 96-well plate such that each column encounters protons of different dose-averaged LET (LETd). Traditional linear quadratic survival curve fitting was performed, and alpha, beta, and RBE values obtained. Survival curves were also fit with a model incorporating RBE of DSB induction as the sole fit parameter. Fitted values of the RBE of DSB induction were then compared to values obtained using Monte Carlo Damage Simulation (MCDS) software and energy spectra calculated with Geant4. Other parameters including alpha, beta, and number of DSBs were compared to those obtained from traditional fitting. Results: Survival curve fitting with RBE of DSB induction yielded alpha and beta parameters that increase with proton LETd, which follows from the standard method of fitting; however, relying on a single fit parameter provided more consistent trends. The fitted values of RBE of DSB induction increased beyond what is predicted from MCDS data above proton LETd of approximately 10 keV/µm. Conclusion: In order to accurately model in vitro proton irradiation experiments performed with high throughput methods, the RBE of DSB induction must increase more rapidly than predicted by MCDS above LETd of 10 keV/µm. This can be explained by considering the increased complexity of DSBs or the nature of intra-track pairwise DSB interactions in this range of LETd values. NIH Grant 2U19CA021239-35.
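As a hedged illustration of the traditional linear-quadratic survival fit mentioned above (synthetic survival fractions, not the study's data), fitting ln S = -(alpha*D + beta*D^2):

```python
import numpy as np
from scipy.optimize import curve_fit

def lq_log_survival(D, alpha, beta):
    """Linear-quadratic model: ln S = -(alpha*D + beta*D**2)."""
    return -(alpha * D + beta * D**2)

# Hypothetical clonogenic-survival data for one LET column of the plate.
D = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])          # dose, Gy
S = np.array([1.0, 0.70, 0.45, 0.15, 0.04, 0.008])    # surviving fraction

# Fit in log space so the low-survival points are not swamped.
popt, _ = curve_fit(lq_log_survival, D, np.log(S), p0=[0.2, 0.02],
                    bounds=([0, 0], [np.inf, np.inf]))
alpha, beta = popt
print(f"alpha = {alpha:.3f} 1/Gy, beta = {beta:.4f} 1/Gy^2")
```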
Rotation curve for the Milky Way galaxy in conformal gravity
NASA Astrophysics Data System (ADS)
O'Brien, James G.; Moss, Robert J.
2015-05-01
Galactic rotation curves have proven to be the testing ground for dark matter bounds in galaxies, and our own Milky Way is one of many large spiral galaxies that must follow the same models. Over the last decade, the rotation of the Milky Way galaxy has been studied and extended by many authors. Since conformal gravity has now successfully fit the rotation curves of almost 140 galaxies, we present here the fit to our own Milky Way. However, the Milky Way is not just an ordinary galaxy to append to our list; instead it provides a robust test of a fundamental difference between conformal gravity rotation curves and standard cold dark matter models. It was shown by Mannheim and O'Brien that in conformal gravity, the presence of a quadratic potential causes the rotation curve to eventually fall off after its flat portion. This effect can currently be seen in only a select few galaxies whose rotation curve is studied well beyond a few multiples of the optical galactic scale length. Due to the recent work of Sofue et al. and Kundu et al., the rotation curve of the Milky Way has now been studied to a degree where we can test the predicted fall-off in the conformal gravity rotation curve. We find that, like the other galaxies already studied in conformal gravity, the fit shows excellent agreement with the rotational data, and the prediction includes the eventual fall-off at large distances from the galactic center.
Cardiorespiratory fitness and nutritional status of schoolchildren: 30-year evolution.
Moraes Ferrari, Gerson Luis de; Bracco, Mario Maia; Matsudo, Victor K Rodrigues; Fisberg, Mauro
2013-01-01
To compare the changes in cardiorespiratory fitness in evaluations performed every ten years since 1978/1980, according to the nutritional status and gender of students in the city of Ilhabela, Brazil. The study is part of the Mixed Longitudinal Project on Growth, Development and Physical Fitness of Ilhabela. The study included 1,291 students of both genders, aged 10 to 11 years old. The study periods were 1978/1980, 1988/1990, 1998/2000, and 2008/2010. The variables analyzed were body weight, height, and cardiorespiratory fitness (VO2max - L.min-1 and mL.kg-1.min-1), measured using a submaximal progressive protocol on a cycle ergometer. Individuals were classified as normal weight or overweight according to the World Health Organization body mass index curves for age and gender. Analysis of variance (ANOVA) with three factors, followed by the Bonferroni method, was used to compare the periods. The number of normal weight individuals (61%) was higher than that of overweight individuals. There was a significant decrease in cardiorespiratory fitness in both genders. Among the schoolchildren with normal weight, there was a decrease of 22% in males and 26% in females; in overweight schoolchildren, males showed a decrease of 12.7% and females of 18%. Over the 30-year analysis, with evaluations every ten years from 1978/1980, there was a significant decrease in the cardiorespiratory fitness of schoolchildren of both genders that cannot be explained by nutritional status. The decline in cardiorespiratory fitness was greater in individuals with normal weight than in overweight individuals.
A cloud physics investigation utilizing Skylab data
NASA Technical Reports Server (NTRS)
Alishouse, J.; Jacobowitz, H.; Wark, D. (Principal Investigator)
1975-01-01
The author has identified the following significant results. The Lowtran 2 program, the S191 spectral response, and the solar spectrum were used to compute the expected absorption by the 2.0 micron band for a variety of cloud pressure levels and solar zenith angles. Analysis of the three long-wavelength data channels continued; it was found necessary to impose a minimum radiance criterion and to modify the computer program to permit the computation of mean values and standard deviations for selected subsets of data on a given tape. A technique for computing the integrated absorption in the A band was devised. The technique normalizes the relative maximum at approximately 0.78 micron to the solar irradiance curve and then adjusts the relative maximum at approximately 0.74 micron to fit the solar curve.
An R-Shiny Based Phenology Analysis System and Case Study Using a Digital Camera Dataset
NASA Astrophysics Data System (ADS)
Zhou, Y. K.
2018-05-01
Accurate extraction of vegetation phenology information plays an important role in exploring the effects of climate change on vegetation. Repeated photos from digital cameras are a useful and huge data source for phenological analysis, but processing and mining phenological data is still a big challenge: there is no single tool or universal solution for big-data processing and visualization in the field of phenology extraction. In this paper, we propose an R-Shiny-based web application for extracting and analyzing vegetation phenological parameters. Its main functions include phenological site distribution visualization, ROI (region of interest) selection, vegetation index calculation and visualization, data filtering, growth trajectory fitting, and phenology parameter extraction. The long-term observational photography data from the Freemanwood site in 2013 are processed by this system as an example. The results show that: (1) the system is capable of analyzing large data volumes using a distributed framework; (2) the combination of multiple parameter-extraction and growth-curve-fitting methods effectively extracts the key phenology parameters, although different combinations of methods give discrepant results in particular study areas. Vegetation with a single growth peak is best fitted with the double logistic model, while vegetation with multiple growth peaks is better fitted with the spline method.
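A hedged sketch of the double logistic fit suggested for single-peak vegetation, on a synthetic green-chromatic-coordinate series; the parameterization and all numbers below are assumptions, not the system's actual implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, base, amp, k1, t1, k2, t2):
    """Green-up and senescence modeled by two opposing logistic transitions."""
    return base + amp * (1/(1 + np.exp(-k1*(t - t1))) - 1/(1 + np.exp(-k2*(t - t2))))

# Synthetic green chromatic coordinate (GCC) series over one year.
doy = np.arange(1, 366, 5, dtype=float)
rng = np.random.default_rng(3)
gcc = double_logistic(doy, 0.33, 0.10, 0.10, 120.0, 0.08, 280.0)
gcc += rng.normal(0, 0.004, doy.size)

p0 = [gcc.min(), np.ptp(gcc), 0.05, 100.0, 0.05, 250.0]
popt, _ = curve_fit(double_logistic, doy, gcc, p0=p0, maxfev=20000)
sos, eos = popt[3], popt[5]   # inflection points ~ start/end of season
print(f"start of season ~ DOY {sos:.0f}, end of season ~ DOY {eos:.0f}")
```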
NASA Astrophysics Data System (ADS)
Martí-Vidal, I.; Marcaide, J. M.; Alberdi, A.; Guirado, J. C.; Pérez-Torres, M. A.; Ros, E.
2011-02-01
We report on a simultaneous modelling of the expansion and radio light curves of the supernova SN 1993J. We developed a simulation code capable of generating synthetic expansion and radio light curves of supernovae by taking into consideration the evolution of the expanding shock, magnetic fields, and relativistic electrons, as well as the finite sensitivity of the interferometric arrays used in the observations. Our software successfully fits all the available radio data of SN 1993J with a standard emission model for supernovae, which is extended with some physical considerations, such as an evolution in the opacity of the ejecta material, a radial decline in the magnetic fields within the radiating region, and a changing radial density profile for the circumstellar medium starting from day 3100 after the explosion.
POSSIBLE TRANSIT TIMING VARIATIONS OF THE TrES-3 PLANETARY SYSTEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Ing-Guey; Wu, Yu-Ting; Chien, Ping
2013-03-15
Five newly observed transit light curves of the TrES-3 planetary system are presented. Together with other light-curve data from the literature, 23 transit light curves in total, which cover an overall timescale of 911 epochs, have been analyzed through a standard procedure. From these observational data, the system's orbital parameters are determined and possible transit timing variations (TTVs) are investigated. Given that a null TTV produces a fit with reduced χ² = 1.52, our results agree with previous work that TTVs might not exist in these data. However, a one-frequency oscillating TTV model, giving a fit with a reduced χ² = 0.93, does possess a statistically higher probability. It is thus concluded that future observations and dynamical simulations for this planetary system will be very important.
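The TTV test reduces to fitting a linear ephemeris and inspecting the residuals (the O-C diagram), in which an oscillating TTV would appear as a coherent sinusoidal pattern. A minimal sketch with fabricated mid-transit times built near the published ~1.306 d TrES-3 period; all numbers are illustrative, not the paper's measurements.

```python
import numpy as np

# Fabricated mid-transit times (BJD) at their epochs; real analyses use the
# 23 literature light curves.
epoch = np.array([0, 150, 300, 455, 610, 760, 911], float)
t_mid = np.array([2454185.9101, 2454381.8391, 2454577.7668, 2454780.2271,
                  2454982.6858, 2455178.6140, 2455375.8499])

# Linear ephemeris T(n) = T0 + n*P; the residuals form the O-C diagram.
P, T0 = np.polyfit(epoch, t_mid, 1)
oc = t_mid - (T0 + P * epoch)
print("P = %.6f d, max |O-C| = %.2f min" % (P, np.abs(oc).max() * 24 * 60))
```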
Event-scale power law recession analysis: quantifying methodological uncertainty
NASA Astrophysics Data System (ADS)
Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.
2017-01-01
The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. Considering study results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves, and to minimize bias in the related populations of fitted recession parameters.
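To make the methodological sensitivity concrete, here is one minimal event-scale fit of the power-law recession model -dQ/dt = a*Q^b, using one particular pair of choices (central differencing, log-log least squares) on synthetic flows; the abstract's point is precisely that alternative recession definitions, differencing schemes, and regression methods shift the fitted a and b.

```python
import numpy as np

# Hypothetical daily streamflow for a single recession event (m^3/s).
Q = np.array([9.8, 7.1, 5.4, 4.2, 3.4, 2.8, 2.35, 2.0, 1.73, 1.51])

# One common fitting choice: central differences for dQ/dt paired with the
# mid-point discharge, then a straight line in log-log space.
dQdt = (Q[2:] - Q[:-2]) / 2.0   # per day; negative during recession
Qmid = Q[1:-1]
b, log_a = np.polyfit(np.log(Qmid), np.log(-dQdt), 1)
a = np.exp(log_a)
print(f"a = {a:.3f}, b = {b:.2f}")
```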
PHOTOMETRIC SUPERNOVA CLASSIFICATION WITH MACHINE LEARNING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lochner, Michelle; Peiris, Hiranya V.; Lahav, Ofer
Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDT algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
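A hedged, minimal stand-in for the second pipeline stage: training a boosted-decision-tree classifier on a feature matrix and scoring it by AUC. The random features below take the place of real SALT2 or wavelet coefficients.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in feature matrix: rows = light curves, columns = extracted features;
# labels: 1 = type Ia, 0 = other.
rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, bdt.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.3f}")
```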
Radial dependence of the dark matter distribution in M33
NASA Astrophysics Data System (ADS)
López Fune, E.; Salucci, P.; Corbelli, E.
2017-06-01
The stellar and gaseous mass distributions, as well as the extended rotation curve, in the nearby galaxy M33 are used to derive the radial distribution of dark matter density in the halo and to test cosmological models of galaxy formation and evolution. Two methods are examined to constrain the dark mass density profiles. The first method deals directly with fitting the rotation curve data in the range of galactocentric distances 0.24 ≤ r ≤ 22.72 kpc. Using the results of collisionless Λ cold dark matter numerical simulations, we confirm that the Navarro-Frenk-White (NFW) dark matter profile provides a better fit to the rotation curve data than the cored Burkert (BRK) profile. The second method relies on the local equation of centrifugal equilibrium and on the rotation curve slope. In the aforementioned range of distances, we fit the observed velocity profile, using a function that has a rational dependence on the radius, and we derive the slope of the rotation curve. Then, we infer the effective matter densities. In the radial range 9.53 ≤ r ≤ 22.72 kpc, the uncertainties induced by the luminous matter (stars and gas) become negligible, because the dark matter density dominates, and we can determine locally the radial distribution of dark matter. With this second method, we tested the NFW and BRK dark matter profiles and we can confirm that both profiles are compatible with the data, even though in this case the cored BRK density profile provides a more reasonable value for the baryonic-to-dark matter ratio.
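A hedged, halo-only toy fit of the NFW circular velocity to synthetic outer-disc points, using v_c^2(r) = G M(<r)/r with M(<r) = 4*pi*rho0*rs^3 [ln(1 + r/rs) - (r/rs)/(1 + r/rs)]; real fits must also model the stellar and gas contributions, which the paper notes become negligible only beyond about 9.5 kpc.

```python
import numpy as np
from scipy.optimize import curve_fit

G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def v_nfw(r, rho0, rs):
    """Circular velocity (km/s) of an NFW halo; rho0 in Msun/kpc^3, rs in kpc."""
    x = r / rs
    m_enc = 4 * np.pi * rho0 * rs**3 * (np.log(1 + x) - x / (1 + x))
    return np.sqrt(G * m_enc / r)

# Synthetic halo-dominated rotation-curve points in the outer-disc regime.
r = np.linspace(9.5, 22.7, 12)   # kpc
v_obs = v_nfw(r, 8.0e6, 12.0) + np.random.default_rng(5).normal(0, 3, r.size)

popt, _ = curve_fit(v_nfw, r, v_obs, p0=[1e7, 10.0])
print("rho0 = %.2e Msun/kpc^3, rs = %.1f kpc" % tuple(popt))
```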
New Risk Curves for NHTSA's Brain Injury Criterion (BrIC): Derivations and Assessments.
Laituri, Tony R; Henry, Scott; Pline, Kevin; Li, Guosong; Frankstein, Michael; Weerappuli, Para
2016-11-01
The National Highway Traffic Safety Administration (NHTSA) recently published a Request for Comments regarding a potential upgrade to the US New Car Assessment Program (US NCAP) - a star-rating program pertaining to vehicle crashworthiness. Therein, NHTSA (a) cited two metrics for assessing head risk: Head Injury Criterion (HIC15) and Brain Injury Criterion (BrIC), and (b) proposed to conduct risk assessment via its risk curves for those metrics, but did not prescribe a specific method for applying them. Recent studies, however, have indicated that the NHTSA risk curves for BrIC significantly overstate field-based head injury rates. Therefore, in the present three-part study, a new set of BrIC-based risk curves was derived, an overarching head risk equation involving risk curves for both BrIC and HIC15 was assessed, and some additional candidate-predictor-variable assessments were conducted. Part 1 pertained to the derivation. Specifically, data were pooled from various sources: Navy volunteers, amateur boxers, professional football players, simple-fall subjects, and racecar drivers. In total, there were 4,501 cases, with brain injury reported in 63. Injury outcomes were approximated on the Abbreviated Injury Scale (AIS). The statistical analysis was conducted using ordinal logistic regression (OLR), such that the various levels of brain injury were cast as a function of BrIC. The resulting risk curves, with Goodman-Kruskal Gamma = 0.83, were significantly different from those from NHTSA. Part 2 pertained to the assessment relative to field data. Two perspectives were considered: "aggregate" (ΔV=0-56 km/h) and "point" (high-speed, regulatory focus). For the aggregate perspective, the new risk curves for BrIC were applied in field models pertaining to belted, mid-size, adult drivers in 11-1 o'clock, full-engagement frontal crashes in the National Automotive Sampling System (NASS, 1993-2014 calendar years). For the point perspective, BrIC data from tests were used. The assessments were conducted for minor, moderate, and serious injury levels for both Newer Vehicles (airbag-fitted) and Older Vehicles (not airbag-fitted). Curve-based injury rates and NASS-based injury rates were compared via average percent difference (AvgPctDiff). The new risk curves demonstrated significantly better fidelity than those from NHTSA. For example, for the aggregate perspective (n=12 assessments), the results were as follows: AvgPctDiff (present risk curves) = +67 versus AvgPctDiff (NHTSA risk curves) = +9378. Part 2 also contained a more comprehensive assessment. Specifically, BrIC-based risk curves were used to estimate brain-related injury probabilities, HIC15-based risk curves from NHTSA were used to estimate bone/other injury probabilities, and the maximum of the two resulting probabilities was used to represent the attendant head-injury probabilities. (Those HIC15-based risk curves yielded AvgPctDiff=+85 for that application.) Subject to the resulting 21 assessments, similar results were observed: AvgPctDiff (present risk curves) = +42 versus AvgPctDiff (NHTSA risk curves) = +5783. Therefore, based on the results from Part 2, if the existing BrIC metric is to be applied by NHTSA in vehicle assessment, we recommend that the corresponding risk curves derived in the present study be considered. Part 3 pertained to the assessment of various other candidate brain-injury metrics.
Specifically, Parts 1 and 2 were revisited for HIC15, translation acceleration (TA), rotational acceleration (RA), rotational velocity (RV), and a different rotational brain injury criterion from NHTSA (BRIC). The rank-ordered results for the 21 assessments for each metric were as follows: RA, HIC15, BRIC, TA, BrIC, and RV. Therefore, of the six studied sets of OLR-based risk curves, the set for rotational acceleration demonstrated the best performance relative to NASS.
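As a sketch of how a risk curve of this kind is produced and read, the example below fits a plain binary logistic risk function P(injury | BrIC) by maximum likelihood on fabricated data; the actual study used ordinal logistic regression across AIS levels, which this simplifies to a single injury level.

```python
import numpy as np
from scipy.optimize import minimize

# Fabricated (BrIC, injured?) pairs; the real study pooled 4,501 exposures.
rng = np.random.default_rng(9)
bric = rng.uniform(0.2, 1.6, 400)
p_true = 1.0 / (1.0 + np.exp(-(-6.0 + 5.0 * bric)))
injured = rng.random(400) < p_true

def nll(theta):
    """Negative log-likelihood of a two-parameter logistic risk curve."""
    b0, b1 = theta
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * bric)))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(np.where(injured, np.log(p), np.log(1 - p)))

fit = minimize(nll, x0=[0.0, 1.0])
b0, b1 = fit.x
# Risk read off at a candidate assessment value, e.g. BrIC = 1.0:
print("P(injury | BrIC=1.0) = %.2f" % (1/(1+np.exp(-(b0 + b1*1.0)))))
```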
Data Validation in the Kepler Science Operations Center Pipeline
NASA Technical Reports Server (NTRS)
Wu, Hayley; Twicken, Joseph D.; Tenenbaum, Peter; Clarke, Bruce D.; Li, Jie; Quintana, Elisa V.; Allen, Christopher; Chandrasekaran, Hema; Jenkins, Jon M.; Caldwell, Douglas A.;
2010-01-01
We present an overview of the Data Validation (DV) software component and its context within the Kepler Science Operations Center (SOC) pipeline and overall Kepler Science mission. The SOC pipeline performs a transiting planet search on the corrected light curves for over 150,000 targets across the focal plane array. We discuss the DV strategy for automated validation of Threshold Crossing Events (TCEs) generated in the transiting planet search. For each TCE, a transiting planet model is fitted to the target light curve. A multiple planet search is conducted by repeating the transiting planet search on the residual light curve after the model flux has been removed; if an additional detection occurs, a planet model is fitted to the new TCE. A suite of automated tests is performed after all planet candidates have been identified. We describe a centroid motion test to determine the significance of the motion of the target photocenter during transit and to estimate the coordinates of the transit source within the photometric aperture; a series of eclipsing binary discrimination tests on the parameters of the planet model fits to all transits and the sequences of odd and even transits; and a statistical bootstrap to assess the likelihood that the TCE would have been generated purely by chance given the target light curve with all transits removed. Keywords: photometry, data validation, Kepler, Earth-size planets
Kumar, Keshav
2017-11-01
Multivariate curve resolution alternating least squares (MCR-ALS) analysis is the most commonly used curve resolution technique. The MCR-ALS model is fitted using the alternating least squares (ALS) algorithm, which needs initialisation of either the contribution profiles or the spectral profiles of each factor. The contribution profiles can be initialised using evolving factor analysis; however, in principle, this approach requires that the data belong to a sequential process. The initialisation of the spectral profiles is usually carried out using a pure-variable approach such as the SIMPLISMA algorithm, which demands that each factor have pure variables in the data set. Despite these limitations, the existing approaches have been quite successful for initiating MCR-ALS analysis. The present work proposes an alternative approach: initialising the spectral variables by generating random values within the limits spanned by the maxima and minima of each spectral variable of the data set. The proposed approach requires neither pure variables for each component of the multicomponent system nor a concentration direction that follows a sequential process. The approach is successfully validated using excitation-emission matrix fluorescence data sets acquired for fluorophores with significant spectral overlap. The calculated contribution and spectral profiles of these fluorophores correlate well with the experimental results. In summary, the present work proposes an alternative way to initiate MCR-ALS analysis.
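The proposed initialisation is easy to sketch, assuming a data matrix with samples in rows and spectral channels in columns; the variable names and synthetic data are ours, not the paper's.

```python
import numpy as np

def random_spectral_init(D, n_factors, seed=0):
    """Initial spectral profiles drawn uniformly between the column-wise
    minimum and maximum of the data matrix D (rows: samples, cols: channels)."""
    rng = np.random.default_rng(seed)
    lo, hi = D.min(axis=0), D.max(axis=0)
    return rng.uniform(lo, hi, size=(n_factors, D.shape[1]))

# Example: initialise 3 spectral profiles for an EEM-like data matrix.
D = np.abs(np.random.default_rng(1).normal(1.0, 0.3, size=(50, 200)))
S0 = random_spectral_init(D, n_factors=3)
# S0 then seeds the ALS loop (alternating constrained least-squares updates
# of the contribution and spectral matrices).
```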
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ordoñez, Antonio J.; Sarajedini, Ata; Yang, Soung-Chul, E-mail: a.ordonez@ufl.edu, E-mail: ata@astro.ufl.edu, E-mail: sczoo@kasi.re.kr
We present the first detailed study of the RR Lyrae variable population in the Local Group dSph/dIrr transition galaxy, Phoenix, using previously obtained HST/WFPC2 observations of the galaxy. We utilize template light curve fitting routines to obtain best-fit light curves for RR Lyrae variables in Phoenix. Our technique has identified 78 highly probable RR Lyrae stars (54 ab-type; 24 c-type) with about 40 additional candidates. We find mean periods for the two populations of ⟨P_ab⟩ = 0.60 ± 0.03 days and ⟨P_c⟩ = 0.353 ± 0.002 days. We use the properties of these light curves to extract, among other things, a metallicity distribution function for the ab-type RR Lyrae. Our analysis yields a mean metallicity of ⟨[Fe/H]⟩ = -1.68 ± 0.06 dex for the RRab stars. From the mean period and metallicity calculated from the ab-type RR Lyrae, we conclude that Phoenix is more likely of intermediate Oosterhoff type; however, the morphology of the Bailey diagram for Phoenix RR Lyraes appears similar to that of an Oosterhoff type I system. Using the RRab stars, we also study the chemical enrichment law for Phoenix. We find that our metallicity distribution is reasonably well fitted by a closed-box model. The parameters of this model are compatible with the findings of Hidalgo et al., further supporting the idea that Phoenix was chemically enriched as a closed-box-like system during the early stage of its formation and evolution.
Patching C2n Time Series Data Holes using Principal Component Analysis
2007-01-01
characteristic local scale exponent, regardless of dilation of the length examined. THE HURST PARAMETER: There are a slew of methods available to ... fractal dimension D0, which characterises the roughness of the data, and the Hurst parameter, H, which is a measure of the long range dependence (LRD) ... estimate H. For simplicity, we have opted to use the well known Hurst-Mandelbrot R/S technique, which is also the most elementary. The fitting curve ...
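A minimal sketch of the Hurst-Mandelbrot rescaled-range (R/S) estimate named in the excerpt; the window sizes and the white-noise test signal are our choices, not the report's.

```python
import numpy as np

def hurst_rs(x, window_sizes):
    """Classic R/S estimate of H: slope of log(R/S) vs log(window size)."""
    rs = []
    for n in window_sizes:
        vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())   # cumulative deviations from the mean
            R = z.max() - z.min()         # range of cumulative deviations
            S = w.std()                   # standard deviation of the window
            if S > 0:
                vals.append(R / S)
        rs.append(np.mean(vals))
    H, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return H

x = np.random.default_rng(6).normal(size=4096)   # white noise: expect H ~ 0.5
print(f"H = {hurst_rs(x, [16, 32, 64, 128, 256, 512]):.2f}")
```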
Universal approach to analysis of cavitation and liquid-impingement erosion data
NASA Technical Reports Server (NTRS)
Rao, P. V.; Young, S. G.
1982-01-01
Cavitation erosion experimental data were analyzed by using normalization and curve-fitting techniques. Data were taken from experiments on several materials tested in both a rotating disk device and a magnetostriction apparatus. Cumulative average volume loss rate and time data were normalized relative to the peak erosion rate and the time to peak erosion rate, respectively. From this process a universal approach was derived that can include data on specific materials from different test devices for liquid impingement and cavitation erosion studies.
The Determination of Birefringence Dispersion in Nematic Liquid Crystals by Using the S-Transform
NASA Astrophysics Data System (ADS)
Coşkun, E.; Özder, S.; Kocahan, Ö.; Köysal, O.
2007-04-01
Transmittance spectra of 5CB and ZLI-6000 coded nematic liquid crystals were acquired in the 12600-22200 cm-1 region at room temperature. The S-transform was applied to analyze the transmittance signal. Dispersion curves of the birefringence were obtained for 5CB and ZLI-6000 by this analysis and data were fitted to the Cauchy formula whereby the dispersion parameters were extracted. Results are found to be in favorable accordance with the published values.
NASA Astrophysics Data System (ADS)
Mendonça, João M.; Malik, Matej; Demory, Brice-Olivier; Heng, Kevin
2018-04-01
Recently acquired Hubble and Spitzer phase curves of the short-period hot Jupiter WASP-43b make it an ideal target for confronting theory with data. On the observational front, we re-analyze the 3.6 and 4.5 μm Spitzer phase curves and demonstrate that our improved analysis better removes residual red noise due to intra-pixel sensitivity, which leads to greater fluxes emanating from the nightside of WASP-43b, thus reducing the tension between theory and data. On the theoretical front, we construct cloud-free and cloudy atmospheres of WASP-43b using our Global Circulation Model (GCM), THOR, which solves the non-hydrostatic Euler equations (compared to GCMs that typically solve the hydrostatic primitive equations). The cloud-free atmosphere produces a reasonable fit to the dayside emission spectrum. The multi-phase emission spectra constrain the cloud deck to be confined to the nightside and have a finite cloud-top pressure. The multi-wavelength phase curves are naturally consistent with our cloudy atmospheres, except for the 4.5 μm phase curve, which requires the presence of enhanced carbon dioxide in the atmosphere of WASP-43b. Multi-phase emission spectra at higher spectral resolution, as may be obtained using the James Webb Space Telescope, and a reflected-light phase curve at visible wavelengths would further constrain the properties of clouds in WASP-43b.
Elliott, Dawn; Patience, Troy; Boyd, Emily; Hume, Roderick F; Calhoun, Byron C; Napolitano, Peter G; Apodaca, Christina C
2006-06-01
To determine which fetal growth curve provided the best estimates of fetal weight for a cohort of ethnically diverse patients at sea level. The study consisted of a population of 1,729 fetuses examined at sea level between January 1, 1997, and June 30, 2000, at 18 weeks, 28 weeks, and term. Gestational age (GA) based on menstrual dates was confirmed or adjusted by crown-rump length or early second-trimester biometry. Fetal weight was estimated by using biparietal diameter, head circumference, abdominal circumference, and femur length. Our fetal growth curves were analyzed with fourth-order polynomial regression analysis, applying four previously defined formulae for fetal growth. Fetal growth curves for estimated fetal weight demonstrated the expected parabolic shape, which varied according to the formulae used. Our curve was best fit by the following equation: estimated fetal weight = 4.522 - 0.22 × GA + 0.25 × GA² - 0.001 × GA³ + 5.248 × 10⁻⁶ × GA⁴ (R² = 0.976). SD increased in concordance with GA. Madigan Army Medical Center serves a racially mixed, culturally diverse military community with unrestricted access to prenatal care. Determination of the optimal population-appropriate growth curve at the correct GA assists clinicians in identifying fetuses at risk for growth restriction or macrosomia and therefore at risk for increased perinatal morbidity and death.
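Evaluating the reported quartic is a one-liner; note that the abstract does not state the weight units of the fitted coefficients, so treat the output below as illustrating the polynomial's shape only.

```python
import numpy as np

def efw(ga_weeks):
    """Quartic growth curve with the coefficients from the abstract; GA in weeks.
    Units of the result are not stated in the abstract."""
    coeffs = [5.248e-6, -0.001, 0.25, -0.22, 4.522]  # highest power first
    return np.polyval(coeffs, ga_weeks)

print(np.round(efw(np.array([18.0, 28.0, 40.0])), 1))
```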
Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A; Banerjee, Anjishnu
2015-01-01
Derive lower leg injury risk functions using survival analysis and determine injury reference values (IRV) applicable to human mid-size male and small-size female anthropometries by conducting a meta-analysis of experimental data from different studies under axial impact loading to the foot-ankle-leg complex. Specimen-specific dynamic peak force, age, total body mass, and injury data were obtained from tests conducted by applying the external load to the dorsal surface of the foot of postmortem human subject (PMHS) foot-ankle-leg preparations. Calcaneus and/or tibia injuries, alone or in combination and with/without involvement of adjacent articular complexes, were included in the injury group. Injury and noninjury tests were included. Maximum axial loads recorded by a load cell attached to the proximal end of the preparation were used. Data were analyzed by treating force as the primary variable. Age was considered as the covariate. Data were censored based on the number of tests conducted on each specimen and whether it remained intact or sustained injury; that is, right, left, and interval censoring. The best fits from different distributions were based on the Akaike information criterion; mean and plus and minus 95% confidence intervals were obtained; and normalized confidence interval sizes (quality indices) were determined at 5, 10, 25, and 50% risk levels. The normalization was based on the mean curve. Using human-equivalent age as 45 years, data were normalized and risk curves were developed for the 50th and 5th percentile human size of the dummies. Out of the available 114 tests (76 fracture and 38 no injury) from 5 groups of experiments, survival analysis was carried out using 3 groups consisting of 62 tests (35 fracture and 27 no injury). Peak forces associated with 4 specific risk levels at 25, 45, and 65 years of age are given along with probability curves (mean and plus and minus 95% confidence intervals) for PMHS and normalized data applicable to male and female dummies. Quality indices increased (less tightness-of-fit) with decreasing age and risk level for all age groups and these data are given for all chosen risk levels. These PMHS-based probability distributions at different ages using information from different groups of researchers constituting the largest body of data can be used as human tolerances to lower leg injury from axial loading. Decreasing quality indices (increasing index value) at lower probabilities suggest the need for additional tests. The anthropometry-specific mid-size male and small-size female mean human risk curves along with plus and minus 95% confidence intervals from survival analysis and associated IRV data can be used as a first step in studies aimed at advancing occupant safety in automotive and other environments.
Middendorf, Thomas R.
2017-01-01
A critical but often overlooked question in the study of ligands binding to proteins is whether the parameters obtained from analyzing binding data are practically identifiable (PI), i.e., whether the estimates obtained from fitting models to noisy data are accurate and unique. Here we report a general approach to assess and understand binding parameter identifiability, which provides a toolkit to assist experimentalists in the design of binding studies and in the analysis of binding data. The partial fraction (PF) expansion technique is used to decompose binding curves for proteins with n ligand-binding sites exactly and uniquely into n components, each of which has the form of a one-site binding curve. The association constants of the PF component curves, being the roots of an n-th order polynomial, may be real or complex. We demonstrate a fundamental connection between binding parameter identifiability and the nature of these one-site association constants: all binding parameters are identifiable if the constants are all real and distinct; otherwise, at least some of the parameters are not identifiable. The theory is used to construct identifiability maps from which the practical identifiability of binding parameters for any two-, three-, or four-site binding curve can be assessed. Instructions for extending the method to generate identifiability maps for proteins with more than four binding sites are also given. Further analysis of the identifiability maps leads to the simple rule that the maximum number of structurally identifiable binding parameters (shown in the previous paper to be equal to n) will also be PI only if the binding curve line shape contains n resolved components. PMID:27993951
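The paper's criterion is easy to check numerically for a given parameter set, as in this hedged sketch: the PF component constants are K_i = -1/r_i for the roots r_i of the binding polynomial, and identifiability requires the K_i to be real and distinct. The three-site overall association constants below are hypothetical, built from distinct one-site constants so the test passes.

```python
import numpy as np

# Hypothetical overall association constants beta_i for a three-site protein,
# constructed from the distinct one-site constants 1e4, 1e5, 1e6 (in M^-1).
beta = [1.11e6, 1.11e11, 1.0e15]

# Binding polynomial Q(x) = 1 + b1*x + b2*x^2 + b3*x^3, x = free-ligand
# concentration. Factoring Q as prod_i (1 + K_i*x) gives K_i = -1/r_i
# for the roots r_i of Q.
roots = np.roots([beta[2], beta[1], beta[0], 1.0])   # highest power first
K = -1.0 / roots

print("component association constants:", np.sort(K.real))
real_and_distinct = np.all(np.abs(K.imag) < 1e-9 * np.abs(K.real)) and \
                    np.all(np.diff(np.sort(K.real)) > 0)
print("all parameters identifiable:", real_and_distinct)
```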
Li, Xiaogai; von Holst, Hans; Kleiven, Svein
2013-01-01
A 3D finite element (FE) model has been developed to study the mean intracranial pressure (ICP) response during constant-rate infusion using linear poroelasticity. Due to the uncertainties in the poroelastic constants for brain tissue, the influence of each of the main parameters on the transient ICP infusion curve was studied. As a prerequisite for transient analysis, steady-state simulations were performed first. The simulated steady-state pressure distribution in the brain tissue for a normal cerebrospinal fluid (CSF) circulation system showed good correlation with experiments from the literature. Furthermore, steady-state ICP closely followed the infusion experiments at different infusion rates. The verified steady-state models then served as a baseline for the subsequent transient models. For transient analysis, the simulated ICP shows a similar tendency to that found in the experiments; however, different values of the poroelastic constants have a significant effect on the infusion curve. The influence of the main poroelastic parameters, including the Biot coefficient α, Skempton coefficient B, drained Young's modulus E, Poisson's ratio ν, permeability κ, CSF absorption conductance C(b) and external venous pressure p(b), was studied to investigate the influence on the pressure response. It was found that the value of the specific storage term S(ε) is the dominant factor that influences the infusion curve, and the drained Young's modulus E was identified as the most influential parameter after S(ε). Based on the simulated infusion curves from the FE model, an artificial neural network (ANN) was used to find an optimised parameter set that best fit the experimental curve. The infusion curves from both the FE simulation and the ANN confirmed the limitation of linear poroelasticity in modelling transient constant-rate infusion.
NASA Astrophysics Data System (ADS)
Pedretti, Daniele; Bianchi, Marco
2018-03-01
Breakthrough curves (BTCs) observed during tracer tests in highly heterogeneous aquifers display strong tailing. Power laws are popular models for both the empirical fitting of these curves, and the prediction of transport using upscaling models based on best-fitted estimated parameters (e.g. the power law slope or exponent). The predictive capacity of power law based upscaling models can be however questioned due to the difficulties to link model parameters with the aquifers' physical properties. This work analyzes two aspects that can limit the use of power laws as effective predictive tools: (a) the implication of statistical subsampling, which often renders power laws undistinguishable from other heavily tailed distributions, such as the logarithmic (LOG); (b) the difficulties to reconcile fitting parameters obtained from models with different formulations, such as the presence of a late-time cutoff in the power law model. Two rigorous and systematic stochastic analyses, one based on benchmark distributions and the other on BTCs obtained from transport simulations, are considered. It is found that a power law model without cutoff (PL) results in best-fitted exponents (αPL) falling in the range of typical experimental values reported in the literature (1.5 < αPL < 4). The PL exponent tends to lower values as the tailing becomes heavier. Strong fluctuations occur when the number of samples is limited, due to the effects of subsampling. On the other hand, when the power law model embeds a cutoff (PLCO), the best-fitted exponent (αCO) is insensitive to the degree of tailing and to the effects of subsampling and tends to a constant αCO ≈ 1. In the PLCO model, the cutoff rate (λ) is the parameter that fully reproduces the persistence of the tailing and is shown to be inversely correlated to the LOG scale parameter (i.e. with the skewness of the distribution). The theoretical results are consistent with the fitting analysis of a tracer test performed during the MADE-5 experiment. It is shown that a simple mechanistic upscaling model based on the PLCO formulation is able to predict the ensemble of BTCs from the stochastic transport simulations without the need of any fitted parameters. The model embeds the constant αCO = 1 and relies on a stratified description of the transport mechanisms to estimate λ. The PL fails to reproduce the ensemble of BTCs at late time, while the LOG model provides consistent results as the PLCO model, however without a clear mechanistic link between physical properties and model parameters. It is concluded that, while all parametric models may work equally well (or equally wrong) for the empirical fitting of the experimental BTCs tails due to the effects of subsampling, for predictive purposes this is not true. A careful selection of the proper heavily tailed models and corresponding parameters is required to ensure physically-based transport predictions.
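A hedged numerical sketch of the PL-versus-PLCO contrast on a synthetic BTC tail: fitted in log space, the pure power law absorbs the cutoff into a steeper slope, while the cutoff model keeps alpha near 1 and carries the tail persistence in lambda, mirroring the behavior described above. All data and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def pl(t, c, alpha):
    """Pure power law for the BTC tail."""
    return c * t**(-alpha)

def plco(t, c, alpha, lam):
    """Power law with an exponential late-time cutoff at rate lam."""
    return c * t**(-alpha) * np.exp(-lam * t)

# Synthetic late-time tail generated with a cutoff.
t = np.logspace(0, 2.5, 60)
rng = np.random.default_rng(7)
c_obs = plco(t, 1.0, 1.0, 0.02) * np.exp(rng.normal(0, 0.05, t.size))

def log_fit(model, p0):
    """Fit in log space so the faint tail points carry weight."""
    popt, _ = curve_fit(lambda t, *p: np.log(model(t, *p)), t, np.log(c_obs), p0=p0)
    return popt

p_pl = log_fit(pl, [1.0, 1.5])
p_plco = log_fit(plco, [1.0, 1.0, 0.01])
print("PL:   alpha = %.2f" % p_pl[1])
print("PLCO: alpha = %.2f, lambda = %.3f" % (p_plco[1], p_plco[2]))
```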
The mass of the black hole in 1A 0620-00, revisiting the ellipsoidal light curve modelling
NASA Astrophysics Data System (ADS)
van Grunsven, Theo F. J.; Jonker, Peter G.; Verbunt, Frank W. M.; Robinson, Edward L.
2017-12-01
The mass distribution of stellar-mass black holes can provide important clues to supernova modelling, but observationally it is still ill constrained. Therefore, it is of importance to make black hole mass measurements as accurate as possible. The X-ray transient 1A 0620-00 is well studied, with a published black hole mass of 6.61 ± 0.25 M⊙, based on an orbital inclination i of 51.0° ± 0.9°. This was obtained by Cantrell et al. (2010) as an average of independent fits to V-, I- and H-band light curves. In this work, we perform an independent check on the value of i by re-analysing existing YALO/SMARTS V-, I- and H-band photometry, using different modelling software and fitting strategy. Performing a fit to the three light curves simultaneously, we obtain a value for i of 54.1° ± 1.1°, resulting in a black hole mass of 5.86 ± 0.24 M⊙. Applying the same model to the light curves individually, we obtain 58.2° ± 1.9°, 53.6° ± 1.6° and 50.5° ± 2.2° for V-, I- and H-band, respectively, where the differences in best-fitting i are caused by the contribution of the residual accretion disc light in the three different bands. We conclude that the mass determination of this black hole may still be subject to systematic effects exceeding the statistical uncertainty. Obtaining more accurate masses would be greatly helped by continuous phase-resolved spectroscopic observations simultaneous with photometry.
NASA Astrophysics Data System (ADS)
Swensson, Richard G.; King, Jill L.; Good, Walter F.; Gur, David
2000-04-01
A constrained ROC formulation from probability summation is proposed for measuring observer performance in detecting abnormal findings on medical images. This assumes the observer's detection or rating decision on each image is determined by a latent variable that characterizes the specific finding (type and location) considered most likely to be a target abnormality. For positive cases, this 'maximum-suspicion' variable is assumed to be either the value for the actual target or for the most suspicious non-target finding, whichever is the greater (more suspicious). Unlike the usual ROC formulation, this constrained formulation guarantees a 'well-behaved' ROC curve that always equals or exceeds chance-level decisions and cannot exhibit an upward 'hook.' Its estimated parameters specify the accuracy for separating positive from negative cases, and they also predict accuracy in locating or identifying the actual abnormal findings. The present maximum-likelihood procedure (runs on a PC with Windows 95 or NT) fits this constrained formulation to rating-ROC data using normal distributions with two free parameters. Fits of the conventional and constrained ROC formulations are compared for continuous and discrete-scale ratings of chest films in a variety of detection problems, both for localized lesions (nodules, rib fractures) and for diffuse abnormalities (interstitial disease, infiltrates, or pneumothorax). The two fitted ROC curves are nearly identical unless the conventional ROC has an ill-behaved 'hook,' below the constrained ROC.
NASA Astrophysics Data System (ADS)
Milani, G.; Milani, F.
A GUI software (GURU) for experimental data fitting of rheometer curves of Natural Rubber (NR) vulcanized with sulphur at different curing temperatures is presented. Experimental data are automatically loaded into GURU from an Excel spreadsheet produced by the experimental machine (moving die rheometer). To fit the experimental data, the general reaction scheme proposed by Han and co-workers for NR vulcanized with sulphur is considered. From the simplified kinetic scheme adopted, a closed-form solution can be found for the crosslink density, with the only limitation that the induction period is excluded from the computations. Three kinetic constants must be determined so as to minimize the absolute error between the normalized experimental data and the numerical prediction. Usually, this result is achieved by means of standard least-squares data fitting. GURU, by contrast, works through a Graphical User Interface (GUI) that allows interactive calibration of the kinetic constants by means of sliders: a simple mouse click on a slider assigns a value to each kinetic constant and gives a visual comparison between the numerical and experimental curves, so users find optimal values of the constants by a classic trial-and-error strategy. An experimental case of technical relevance is shown as a benchmark.
Applying a Hypoxia-Incorporating TCP Model to Experimental Data on Rat Sarcoma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruggieri, Ruggero, E-mail: ruggieri.ruggero@gmail.com; Stavreva, Nadejda; Naccarato, Stefania
2012-08-01
Purpose: To verify whether a tumor control probability (TCP) model which mechanistically incorporates acute and chronic hypoxia is able to describe animal in vivo dose-response data, exhibiting tumor reoxygenation. Methods and Materials: The investigated TCP model accounts for tumor repopulation, reoxygenation of chronic hypoxia, and fluctuating oxygenation of acute hypoxia. Using the maximum likelihood method, the model is fitted to Fischer-Moulder data on Wag/Rij rats, inoculated with rat rhabdomyosarcoma BA1112, and irradiated in vivo using different fractionation schemes. This data set is chosen because two of the experimental dose-response curves exhibit an inverse dose behavior, which is interpreted as due to reoxygenation. The tested TCP model is complex, and therefore, in vivo cell survival data on the same BA1112 cell line from Reinhold were added to the Fischer-Moulder data and fitted simultaneously with a corresponding cell survival function. Results: The obtained fit to the combined Fischer-Moulder-Reinhold data was statistically acceptable. The best-fit values of the model parameters for which information exists were in the range of published values. The cell survival curves of well-oxygenated and hypoxic cells, computed using the best-fit values of the radiosensitivities and the initial number of clonogens, were in good agreement with the corresponding in vitro and in situ experiments of Reinhold. The best-fit values of most of the hypoxia-related parameters were used to recompute the TCP for non-small cell lung cancer patients as a function of the number of fractions, TCP(n). Conclusions: The investigated TCP model adequately describes animal in vivo data exhibiting tumor reoxygenation. The TCP(n) curve computed for non-small cell lung cancer patients with the best-fit values of most of the hypoxia-related parameters confirms previously obtained abrupt reduction in TCP for n < 10, thus warning against the adoption of severely hypofractionated schedules.
Annual variation in the atmospheric radon concentration in Japan.
Kobayashi, Yuka; Yasuoka, Yumi; Omori, Yasutaka; Nagahama, Hiroyuki; Sanada, Tetsuya; Muto, Jun; Suzuki, Toshiyuki; Homma, Yoshimi; Ihara, Hayato; Kubota, Kazuhito; Mukai, Takahiro
2015-08-01
Anomalous atmospheric variations in radon related to earthquakes have been observed in hourly exhaust-monitoring data from radioisotope institutes in Japan. The extraction of seismic anomalous radon variations would be greatly aided by understanding the normal pattern of variation in radon concentrations. Using atmospheric daily minimum radon concentration data from five sampling sites, we show that a sinusoidal regression curve can be fitted to the data. In addition, we identify areas where the atmospheric radon variation is significantly affected by the variation in atmospheric turbulence and the onshore-offshore pattern of Asian monsoons. Furthermore, by comparing the sinusoidal regression curve for the normal annual (seasonal) variations at the five sites to the sinusoidal regression curve for a previously published dataset of radon values at the five Japanese prefectures, we can estimate the normal annual variation pattern. By fitting sinusoidal regression curves to the previously published dataset containing sites in all Japanese prefectures, we find that 72% of the Japanese prefectures satisfy the requirements of the sinusoidal regression curve pattern. Using the normal annual variation pattern of atmospheric daily minimum radon concentration data, these prefectures are suitable areas for obtaining anomalous radon variations related to earthquakes. Copyright © 2015 Elsevier Ltd. All rights reserved.
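A minimal sketch of the kind of fixed-period sinusoidal regression described here, assuming a simple three-parameter annual sinusoid and synthetic stand-in data:

```python
# Fit an annual sinusoid (fixed 365.25-day period) to daily radon minima.
import numpy as np
from scipy.optimize import curve_fit

def seasonal(t, amp, phase, base):
    return amp * np.sin(2.0 * np.pi * t / 365.25 + phase) + base

# Synthetic stand-in for one site's daily minimum radon series (Bq/m^3).
t = np.arange(0, 3 * 365)
rng = np.random.default_rng(0)
obs = seasonal(t, 4.0, 1.2, 10.0) + rng.normal(0, 0.8, t.size)

(amp, phase, base), cov = curve_fit(seasonal, t, obs, p0=[1.0, 0.0, obs.mean()])
print(f"amplitude={amp:.2f}, phase={phase:.2f} rad, baseline={base:.2f}")
```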
A mathematical function for the description of nutrient-response curve
Ahmadi, Hamed
2017-01-01
Several mathematical equations have been proposed for modeling nutrient-response curves in animals and humans, justified by goodness of fit and/or by the underlying biological mechanism. In this paper, a functional form of a generalized quantitative model based on the Rayleigh distribution is derived for describing nutrient-response phenomena. The three parameters governing the curve a) have biological interpretations, b) may be used to calculate reliable estimates of nutrient-response relationships, and c) provide the basis for deriving relationships between nutrient and physiological responses. The new function was successfully applied to fit nutritional data obtained from 6 experiments covering a wide range of nutrients and responses. An evaluation and comparison were also carried out on simulated data sets to check the suitability of the new model and the four-parameter logistic model for describing nutrient responses. This study indicates the usefulness and wide applicability of the newly introduced, simple and flexible model when applied as a quantitative approach to characterizing nutrient-response curves. This new mathematical description of nutrient-response data, with some useful biological interpretations, has the potential to be used as an alternative approach in modeling nutritional response curves to estimate nutrient efficiency and requirements. PMID:29161271
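The paper's exact functional form is not reproduced here; as a hedged illustration, one plausible three-parameter curve built from the Rayleigh cumulative distribution, y = y0 + a(1 − exp(−x²/(2b²))), can be fitted in the same way.

```python
# Hedged sketch: an assumed Rayleigh-CDF-shaped response, not necessarily
# the exact equation derived in the paper.
import numpy as np
from scipy.optimize import curve_fit

def rayleigh_response(x, y0, a, b):
    return y0 + a * (1.0 - np.exp(-x**2 / (2.0 * b**2)))

# Synthetic dose-response data (e.g., weight gain vs. a dietary nutrient).
dose = np.linspace(0, 12, 13)
rng = np.random.default_rng(3)
resp = rayleigh_response(dose, 50.0, 30.0, 4.0) + rng.normal(0, 0.8, dose.size)

popt, _ = curve_fit(rayleigh_response, dose, resp, p0=[40.0, 20.0, 3.0])
print("y0=%.1f  a=%.1f  b=%.1f" % tuple(popt))
```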
NASA Astrophysics Data System (ADS)
Zamora-Reyes, D.; Hirschboeck, K. K.; Paretti, N. V.
2012-12-01
Bulletin 17B (B17B) has prevailed for 30 years as the standard manual for determining flood frequency in the United States. Recently proposed updates to B17B include revising the issue of flood heterogeneity, and improving flood estimates by using the Expected Moments Algorithm (EMA), which can better address low outliers and accommodate information on historical peaks. Incorporating information on mixed populations, such as flood-causing mechanisms, into flood estimates for regions that have noticeable flood heterogeneity can be statistically challenging when systematic flood records are short. The problem is magnified when the population sample size is reduced by decomposing the record, especially if multiple flood mechanisms are involved. In B17B, the guidelines for dealing with mixed populations focus primarily on how to rule out any need to perform a mixed-population analysis. However, in some regions mixed flood populations are critically important determinants of regional flood frequency variations and should be explored from this perspective. Arizona is an area with a heterogeneous mixture of flood processes due to: warm season convective thunderstorms, cool season synoptic-scale storms, and tropical cyclone-enhanced convective activity occurring in the late summer or early fall. USGS station data throughout Arizona were compiled into a database and each flood peak (annual and partial duration series) was classified according to its meteorological cause. Using these data, we have explored the role of flood heterogeneity in Arizona flood estimates through composite flood frequency analysis based on mixed flood populations using EMA. First, for selected stations, the three flood-causing populations were separated out from the systematic annual flood series record and analyzed individually. Second, to create composite probability curves, the individual curves for each of the three populations were generated and combined using Crippen's (1978) composite probability equations for sites that have two or more independent flood populations. Finally, the individual probability curves generated for each of the three flood-causing populations were compared with both the site's composite probability curve and the standard B17B curve to explore the influence of heterogeneity, using the 100-year and 200-year flood estimates as a basis of comparison. Results showed that sites located in southern Arizona and along the abrupt elevation transition zone of the Mogollon Rim exhibit a better fit to the systematic data using their composite probability curves than the curves derived from standard B17B analysis. Synoptic storm floods and tropical cyclone-enhanced floods had the greatest influence on 100-year and 200-year flood estimates. This was especially true in southern Arizona, even though summer convective floods are much more frequent and therefore dominate the composite curve. Using the EMA approach also influenced our results because all possible low outliers were censored by the built-in Multiple Grubbs-Beck Test, providing a better fit to the systematic data in the upper probabilities. In conclusion, flood heterogeneity can play an important role in regional flood frequency variations in Arizona, and understanding its influence is important when making projections about future flood variations.
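The core of the composite-curve step is that, for independent flood-causing populations, annual non-exceedance probabilities multiply, so the composite exceedance probability is P = 1 − (1 − P₁)(1 − P₂)(1 − P₃). A toy sketch with synthetic per-population curves (not Crippen's actual worked example):

```python
# Combine three independent flood-population frequency curves.
import numpy as np

q = np.linspace(50, 2000, 200)          # discharge grid (m^3/s)

def exceedance(q, scale, shape):
    # Toy annual exceedance curve for one flood-causing population.
    return np.exp(-(q / scale) ** shape)

p_summer = exceedance(q, 300.0, 1.2)    # convective thunderstorms
p_winter = exceedance(q, 500.0, 1.5)    # synoptic-scale storms
p_tropical = exceedance(q, 700.0, 2.0)  # tropical cyclone-enhanced

p_composite = 1.0 - (1 - p_summer) * (1 - p_winter) * (1 - p_tropical)

# 100-year flood: discharge where composite exceedance crosses 0.01.
q100 = q[np.argmin(np.abs(p_composite - 0.01))]
print(f"approximate 100-year discharge: {q100:.0f} m^3/s")
```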
Kim, Eon; Ehrmann, Klaus; Uhlhorn, Stephen; Borja, David; Arrieta-Quintero, Esdras; Parel, Jean-Marie
2011-01-01
Presbyopia is an age-related, gradual loss of accommodation, mainly due to changes in the crystalline lens. As part of research efforts to understand and cure this condition, ex vivo cross-sectional optical coherence tomography images of crystalline lenses were obtained using the Ex-Vivo Accommodation Simulator (EVAS II) instrument and analyzed to extract their physical and optical properties. Various filters and edge detection methods were applied to isolate the edge contour. An ellipse is fitted to the lens outline to obtain a central reference point for transforming the pixel data into the analysis coordinate system. This allows the fitting of a high-order equation to obtain a mathematical description of the edge contour, which obeys constraints of continuity as well as zero to infinite surface slopes from apex to equator. Geometrical parameters of the lens were determined for lens images captured at different accommodative states. Various curve-fitting functions were developed to mathematically describe the anterior and posterior surfaces of the lens. Their differences were evaluated and their suitability for extracting the optical performance of the lens was assessed. The robustness of these algorithms was tested by analyzing the same images repeatedly. PMID:21639571
Modal vector estimation for closely spaced frequency modes
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.; Chung, Y. T.; Blair, M.
1982-01-01
Techniques for obtaining improved modal vector estimates for systems with closely spaced frequency modes are discussed. In describing the dynamical behavior of a complex structure, several modal parameters are often analyzed: undamped natural frequency, mode shape, modal mass, modal stiffness and modal damping. From both an analytical and an experimental standpoint, identification of modal parameters is more difficult if the system has repeated or even closely spaced frequencies. The more complex the structure, the more likely it is to have closely spaced frequencies, which makes it difficult to determine valid mode shapes using single-shaker test methods. By employing band-selectable analysis (zoom) techniques and Kennedy-Pancu circle fitting or some multiple degree of freedom (MDOF) curve-fit procedure, the usefulness of the single-shaker approach can be extended.
Point and path performance of light aircraft: A review and analysis
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Summey, D. C.; Johnson, W. D.
1973-01-01
The literature on methods for predicting the performance of light aircraft is reviewed. The methods discussed in the review extend from the classical instantaneous maximum or minimum technique to techniques for generating mathematically optimum flight paths. Classical point performance techniques are shown to be adequate in many cases but their accuracies are compromised by the need to use simple lift, drag, and thrust relations in order to get closed form solutions. Also the investigation of the effect of changes in weight, altitude, configuration, etc. involves many essentially repetitive calculations. Accordingly, computer programs are provided which can fit arbitrary drag polars and power curves with very high precision and which can then use the resulting fits to compute the performance under the assumption that the aircraft is not accelerating.
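As a minimal illustration of fitting a drag polar, assuming the classic parabolic form CD = CD0 + k·CL² rather than the more general arbitrary-curve fits of the referenced programs:

```python
# Parabolic drag polar fit: CD = CD0 + k * CL^2 (illustrative data).
import numpy as np

CL = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
CD = np.array([0.026, 0.032, 0.043, 0.057, 0.076, 0.099])

# Linear least squares on CD vs CL^2 recovers k (slope) and CD0 (intercept).
k, CD0 = np.polyfit(CL**2, CD, 1)
print(f"CD0 = {CD0:.4f}, k = {k:.4f}")

# The fitted polar implies (L/D)max = CL_opt / (2*CD0) at CL_opt = sqrt(CD0/k).
CL_opt = np.sqrt(CD0 / k)
print(f"(L/D)max ~ {CL_opt / (2 * CD0):.1f} at CL = {CL_opt:.2f}")
```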
Malik, V.; Goodwill, J.; Mallapragada, S.; ...
2014-11-13
The rate of heating of a water-based colloid of uniformly sized 15 nm magnetic nanoparticles by a high-amplitude, high-frequency ac magnetic field induced by a resonating LC circuit (nanoTherics Magnetherm) was measured. The results are analyzed in terms of the specific energy absorption rate (SAR). By fitting the field-amplitude and frequency dependences of the SAR to linear response theory, the magnetic moment per particle was extracted. The value of the magnetic moment was independently evaluated from dc magnetization measurements (Quantum Design MPMS) of a frozen colloid by fitting the field-dependent magnetization to the Langevin function. The two methods produced similar results, which are compared to the theoretical expectation for this particle size. Additionally, analysis of the SAR curves yielded an effective relaxation time.
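A hedged sketch of the dc-magnetization side of this analysis, fitting synthetic frozen-colloid data to the Langevin function to recover the particle moment; field range, temperature, and the moment itself are invented for illustration.

```python
# Langevin fit of field-dependent magnetization, M(B) = Ms*(coth(x) - 1/x),
# with x = mu*B/(kB*T).
import numpy as np
from scipy.optimize import curve_fit

KB = 1.380649e-23   # Boltzmann constant, J/K
T = 100.0           # assumed measurement temperature of the frozen colloid, K

def langevin(B, m_sat, mu):
    x = mu * B / (KB * T)
    # coth(x) - 1/x, with the small-x limit x/3 guarding against 0/0.
    return m_sat * np.where(np.abs(x) < 1e-8, x / 3.0,
                            1.0 / np.tanh(x) - 1.0 / x)

B = np.linspace(0.001, 1.0, 50)                        # applied field, T
rng = np.random.default_rng(7)
M = langevin(B, 1.0, 2.0e-19) * (1 + rng.normal(0, 0.01, B.size))

(m_sat, mu), _ = curve_fit(langevin, B, M, p0=[0.8, 1.0e-19])
print(f"saturation = {m_sat:.3f} (arb.), moment = {mu:.2e} A*m^2")
```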
Atmospheric particulate analysis using angular light scattering
NASA Technical Reports Server (NTRS)
Hansen, M. Z.
1980-01-01
Using the light scattering matrix elements measured by a polar nephelometer, a procedure for estimating the characteristics of atmospheric particulates was developed. A theoretical library data set of scattering matrices derived from Mie theory was tabulated for a range of values of the size parameter and refractive index typical of atmospheric particles. Integration over the size parameter yielded the scattering matrix elements for a variety of hypothesized particulate size distributions. A least squares curve fitting technique was used to find a best fit from the library data for the experimental measurements. This was used as a first guess for a nonlinear iterative inversion of the size distributions. A real index of 1.50 and an imaginary index of -0.005 are representative of the smoothed inversion results for the near ground level atmospheric aerosol in Tucson.
MEM spectral analysis for predicting influenza epidemics in Japan.
Sumi, Ayako; Kamo, Ken-ichi
2012-03-01
The prediction of influenza epidemics has long been the focus of attention in epidemiology and mathematical biology. In this study, we tested whether time series analysis was useful for predicting the incidence of influenza in Japan. The method of time series analysis we used consists of spectral analysis based on the maximum entropy method (MEM) in the frequency domain and the nonlinear least squares method in the time domain. Using this time series analysis, we analyzed the incidence data of influenza in Japan from January 1948 to December 1998; these data are unique in that they covered the periods of pandemics in Japan in 1957, 1968, and 1977. On the basis of the MEM spectral analysis, we identified the periodic modes explaining the underlying variations of the incidence data. The optimum least squares fitting (LSF) curve calculated with the periodic modes reproduced the underlying variation of the incidence data. An extension of the LSF curve could be used to predict the incidence of influenza quantitatively. Our study suggested that MEM spectral analysis would allow us to model temporal variations of influenza epidemics with multiple periodic modes much more effectively than by using the method of conventional time series analysis, which has been used previously to investigate the behavior of temporal variations in influenza data.
High pressure melting curve of platinum up to 35 GPa
NASA Astrophysics Data System (ADS)
Patel, Nishant N.; Sunder, Meenakshi
2018-04-01
The melting curve of platinum (Pt) has been measured up to 35 GPa using our laboratory-based laser-heated diamond anvil cell (LHDAC) facility. The laser speckle method was employed to detect the onset of melting. The high-pressure melting curve of Pt obtained in the present study is compared with previously reported experimental and theoretical results. The measured melting curve agrees, within experimental error, with the results of Kavner et al. Fitting the experimental data with the Simon equation gives (∂Tm/∂P) ≈ 25 K/GPa at P ≈ 1 MPa.
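A minimal sketch of the Simon (Simon-Glatzel) fit, Tm(P) = T0(1 + P/a)^(1/c), with synthetic stand-in melting points; the initial slope follows from the fitted parameters as dTm/dP(0) = T0/(a·c).

```python
# Simon-Glatzel melting-curve fit with illustrative, synthetic Pt-like data.
import numpy as np
from scipy.optimize import curve_fit

T0 = 2041.0  # K, ambient-pressure melting point of Pt (held fixed)

def simon(P, a, c):
    return T0 * (1.0 + P / a) ** (1.0 / c)

P = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0])     # GPa
Tm = np.array([2041, 2160, 2270, 2370, 2460, 2540, 2620, 2690])  # K

(a, c), _ = curve_fit(simon, P, Tm, p0=[15.0, 5.0])
# Initial slope of the melting curve, dTm/dP as P -> 0, equals T0/(a*c).
print(f"a = {a:.1f} GPa, c = {c:.2f}, dTm/dP(0) = {T0 / (a * c):.1f} K/GPa")
```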
[Keratoconus special soft contact lens fitting].
Yamazaki, Ester Sakae; da Silva, Vanessa Cristina Batista; Morimitsu, Vagner; Sobrinho, Marcelo; Fukushima, Nelson; Lipener, César
2006-01-01
To evaluate the fitting and use of a soft contact lens in keratoconic patients. Retrospective study on 80 eyes of 66 patients fitted with a special soft contact lens for keratoconus at the Contact Lens Section of UNIFESP and private clinics. Keratoconus was classified according to degree of disease severity by keratometric pattern. Age, gender, diagnosis, keratometry, visual acuity, spherical equivalent (SE), base curve and clinical indication were recorded. Of the 66 patients (80 eyes) with keratoconus, the mean age was 29 years; 51.5% were men and 48.5% women. By severity group, 15.0% were incipient, 53.7% moderate, 26.3% advanced and 5.0% severe. The majority of the fitted eyes (91.25%) achieved visual acuity better than 20/40. Of the fitted eyes, 58% received lenses with spherical power (mean -5.45 diopters) and 41% with spherocylindrical power (from -0.5 to -5.00 cylindrical diopters). The most frequent base curve was 7.6, in 61% of the eyes. The main reasons for fitting this special lens were reduced tolerance of, and poor fitting patterns achieved with, other lenses. The special soft contact lens is useful for fitting difficult keratoconic patients, offering comfort and improving visual rehabilitation, and may allow more patients to postpone the need for corneal transplant.
Why "suboptimal" is optimal: Jensen's inequality and ectotherm thermal preferences.
Martin, Tara Laine; Huey, Raymond B
2008-03-01
Body temperature (T(b)) profoundly affects the fitness of ectotherms. Many ectotherms use behavior to control T(b) within narrow levels. These temperatures are assumed to be optimal and therefore to match body temperatures (Trmax) that maximize fitness (r). We develop an optimality model and find that optimal body temperature (T(o)) should not be centered at Trmax but shifted to a lower temperature. This finding seems paradoxical but results from two considerations relating to Jensen's inequality, which deals with how variance and skew influence integrals of nonlinear functions. First, ectotherms are not perfect thermoregulators and so experience a range of T(b). Second, temperature-fitness curves are asymmetric, such that a T(b) higher than Trmax depresses fitness more than will a T(b) displaced an equivalent amount below Trmax. Our model makes several predictions. The magnitude of the optimal shift (Trmax - To) should increase with the degree of asymmetry of temperature-fitness curves and with T(b) variance. Deviations should be relatively large for thermal specialists but insensitive to whether fitness increases with Trmax ("hotter is better"). Asymmetric (left-skewed) T(b) distributions reduce the magnitude of the optimal shift but do not eliminate it. Comparative data (insects, lizards) support key predictions. Thus, "suboptimal" is optimal.
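The argument can be made concrete numerically: average an asymmetric fitness curve over a spread of body temperatures, and the maximizing preferred temperature falls below Trmax. A sketch with a generic left-skewed fitness function (not the authors' exact model):

```python
# Jensen's inequality demo: optimal preferred Tb sits below Trmax when the
# temperature-fitness curve falls faster above Trmax than below it.
import numpy as np

def fitness(t, trmax=35.0):
    # Rises gently below Trmax, crashes steeply above it (asymmetric curve).
    return np.where(t <= trmax,
                    np.exp(-((t - trmax) / 8.0) ** 2),
                    np.exp(-((t - trmax) / 2.0) ** 2))

tb_sd = 3.0                      # imperfect thermoregulation: spread of Tb
centers = np.linspace(25.0, 40.0, 301)
rng = np.random.default_rng(5)
draws = rng.normal(0.0, tb_sd, 20000)

# Expected fitness when behavior centers Tb on each candidate temperature.
expected = [fitness(c + draws).mean() for c in centers]
t_opt = centers[int(np.argmax(expected))]
print(f"Trmax = 35.0, optimal preferred Tb = {t_opt:.1f} (below Trmax)")
```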
NSVS 7051868: A system in a key evolutionary stage. First multi-color photometric study
NASA Astrophysics Data System (ADS)
Barani, C.; Martignoni, M.; Acerbi, F.
2017-01-01
The first complete CCD photometric light curves of the eclipsing binary NSVS 7051868 were obtained during six nights in January 2016 in the B, V and Ic bands, using the 0.25 m telescope of the Stazione Astronomica Betelgeuse in Magnago, Italy. These observations confirm the short-period (P = 0.517 days) variation found by Shaw and collaborators in their online list (http://www.physast.uga.edu/jss/nsvs/) of periodic variable stars found in the Northern Sky Variability Survey. The light curves were modelled using the Wilson-Devinney code, and the elements obtained from this analysis are used to compute the physical parameters of the system in order to study its evolutionary status. A grid of solutions for several fixed values of the mass ratio was calculated. A reasonable fit of the synthetic light curves to the data indicates that NSVS 7051868 is an A-subtype W Ursae Majoris contact binary system, with a low mass ratio of q = 0.22, a degree of contact f = 35.5% and inclination i = 85°. Our light curves show a time of constant light in the secondary eclipse of approximately 0.1 in phase. The light curve solution reveals a component temperature difference of about 700 K. Both the value of the fill-out factor and the temperature difference suggest that NSVS 7051868 is a system in a key evolutionary stage of the Thermal Relaxation Oscillation theory. The distance to NSVS 7051868 was calculated as 180 pc from this analysis, taking into account interstellar extinction.
The integrated Michaelis-Menten rate equation: déjà vu or vu jàdé?
Goličnik, Marko
2013-08-01
A recent article by Johnson and Goody (Biochemistry, 2011;50:8264-8269) described the almost-100-year-old paper of Michaelis and Menten. Johnson and Goody translated this classic article and presented a historical perspective on the beginnings of enzyme-reaction data analysis, including a pioneering global fit of the integrated rate equation in its implicit form to experimental time-course data. They reanalyzed these data, although only numerical techniques were used to solve the model equations. However, there is also a still little-known closed-form algebraic solution of the integrated rate equation that enables direct fitting of the data. Therefore, in this commentary, I briefly present the integral solution of the Michaelis-Menten rate equation, which has been largely overlooked for three decades. This solution is expressed in terms of the Lambert W function, and I demonstrate here its use for global nonlinear regression curve fitting, as carried out with the original time-course dataset of Michaelis and Menten.
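The closed-form solution referred to here is S(t) = Km·W((S0/Km)·exp((S0 − Vmax·t)/Km)), with W the Lambert W function. A minimal sketch of using it directly as a fitting function on a synthetic progress curve:

```python
# Direct fit of the integrated Michaelis-Menten equation via Lambert W.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import lambertw

S0 = 100.0  # initial substrate concentration (assumed known)

def s_of_t(t, Vmax, Km):
    arg = (S0 / Km) * np.exp((S0 - Vmax * t) / Km)
    return Km * lambertw(arg).real  # principal branch, real part

t = np.linspace(0, 60, 30)
rng = np.random.default_rng(2)
obs = s_of_t(t, 3.0, 25.0) + rng.normal(0, 0.5, t.size)

(Vmax, Km), _ = curve_fit(s_of_t, t, obs, p0=[2.0, 10.0])
print(f"Vmax = {Vmax:.2f}, Km = {Km:.2f}")
```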
Jia, Tao; Gao, Di
2018-04-03
Molecular dynamics simulation is employed to investigate the microscopic heat current inside an argon-copper nanofluid. Wavelet analysis of the microscopic heat current inside the nanofluid system is conducted. The signal of the microscopic heat current is decomposed into two parts: an approximation part associated with the low-frequency content of the signal, and a detail part associated with the high-frequency content. The probability distributions of both the high-frequency and low-frequency parts of the signal demonstrate Gaussian-like characteristics. Curves are fitted to the probability distributions of the microscopic heat current, and the parameters in the mathematical formulas of these curves, including the mean value and the standard deviation, change dramatically after copper nanoparticles are added to the argon base fluid.
Ademi, Abdulakim; Grozdanov, Anita; Paunović, Perica; Dimitrov, Aleksandar T
2015-01-01
A model consisting of an equation that includes the graphene thickness distribution is used to calculate theoretical 002 X-ray diffraction (XRD) peak intensities. An analysis was performed on graphene samples produced by two different electrochemical procedures: electrolysis in an aqueous electrolyte and electrolysis in molten salts, both using a nonstationary current regime. Herein, the model is enhanced by partitioning the corresponding 2θ interval, resulting in significantly improved accuracy. The model curves obtained fit the XRD intensity curves of the studied graphene samples excellently. The equation parameters employed make it possible to calculate the j-layer graphene region coverage of the graphene samples, and hence the number of graphene layers. The results of this thorough analysis agree with the number of graphene layers calculated from the C-peak positions in the Raman spectra and indicate that the graphene samples studied are few-layered. PMID:26665083
TG study of the Li0.4Fe2.4Zn0.2O4 ferrite synthesis
NASA Astrophysics Data System (ADS)
Lysenko, E. N.; Nikolaev, E. V.; Surzhikov, A. P.
2016-02-01
In this paper, the kinetics of Li-Zn ferrite synthesis was studied using the thermogravimetric (TG) method, through the simultaneous application of non-linear regression to several measurements run at different heating rates (multivariate non-linear regression). Using TG curves obtained at four heating rates and the Netzsch Thermokinetics software package, kinetic models with a minimal number of adjustable parameters were selected to quantitatively describe the reaction of Li-Zn ferrite synthesis. The experimental TG curves clearly suggest a two-step process for the ferrite synthesis, and therefore a model-fitting kinetic analysis based on multivariate non-linear regression was conducted. The complex reaction was described by a scheme consisting of two sequential reaction steps. The best results were obtained using the Jander three-dimensional diffusion model for the first step and the Ginstling-Brounshtein model for the second step. The kinetic parameters for the lithium-zinc ferrite synthesis reaction were determined and discussed.
Discrete Gust Model for Launch Vehicle Assessments
NASA Technical Reports Server (NTRS)
Leahy, Frank B.
2008-01-01
Analysis of spacecraft vehicle responses to atmospheric wind gusts during flight is important in the establishment of vehicle design structural requirements and operational capability. Typically, wind gust models can be either a spectral type determined by a random process having a wide range of wavelengths, or a discrete type having a single gust of predetermined magnitude and shape. Classical discrete models used by NASA during the Apollo and Space Shuttle Programs included a 9 m/sec quasi-square-wave gust with variable wavelength from 60 to 300 m. A later study derived a discrete gust model from a military specification (MIL-SPEC) document that used a "1-cosine" shape. The MIL-SPEC document contains a curve of non-dimensional gust magnitude as a function of non-dimensional gust half-wavelength based on the Dryden spectral model, but fails to list the equation necessary to reproduce the curve. Therefore, previous studies could only estimate a value of gust magnitude from the curve, or attempt to fit a function to it. This paper presents the development of the MIL-SPEC curve, and provides the necessary information to calculate discrete gust magnitudes as a function of both gust half-wavelength and the desired probability level of exceeding a specified gust magnitude.
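For reference, the "1-cosine" gust shape itself has the standard form u(s) = (Um/2)(1 − cos(πs/H)) for 0 ≤ s ≤ 2H, where Um is the gust magnitude and H the half-wavelength; a small sketch follows (the magnitude-versus-half-wavelength relation developed in the paper is not reproduced here).

```python
# "1-cosine" discrete gust shape: zero at s = 0 and s = 2H, peak Um at s = H.
import numpy as np

def one_minus_cosine_gust(s, u_m, half_wavelength):
    """Gust velocity at penetration distance s (same units as H)."""
    s = np.asarray(s, dtype=float)
    u = 0.5 * u_m * (1.0 - np.cos(np.pi * s / half_wavelength))
    return np.where((s >= 0) & (s <= 2 * half_wavelength), u, 0.0)

s = np.linspace(-20, 320, 18)
print(one_minus_cosine_gust(s, u_m=9.0, half_wavelength=150.0).round(2))
```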
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Shi, Xiaodong; Udpa, Lalita; Deng, Yiming
2018-05-01
Magnetic Barkhausen noise (MBN) was measured in low-carbon steels and the relationship between carbon content and a parameter extracted from the MBN signal was investigated. The parameter is extracted experimentally by fitting the original profiles with two Gaussian curves. The gap between the two peaks (ΔG) of the fitted Gaussian curves shows a good linear relationship with the carbon content of the samples. The result was validated by Monte Carlo simulation. To ensure the sensitivity of the measurement, the advanced multi-objective optimization algorithm Non-dominated Sorting Genetic Algorithm III (NSGA-III) was used to optimize the magnetic core of the sensor.
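A minimal sketch of the two-Gaussian decomposition and the ΔG feature, on a synthetic MBN envelope:

```python
# Fit a sum of two Gaussians and report the peak gap dG = |mu2 - mu1|.
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(t, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-((t - mu1) / s1) ** 2 / 2.0) +
            a2 * np.exp(-((t - mu2) / s2) ** 2 / 2.0))

t = np.linspace(0, 10, 400)
rng = np.random.default_rng(4)
profile = (two_gauss(t, 1.0, 4.0, 0.7, 0.6, 6.2, 0.9)
           + rng.normal(0, 0.02, t.size))

p0 = [0.8, 3.5, 1.0, 0.5, 6.5, 1.0]   # rough initial guesses for both peaks
popt, _ = curve_fit(two_gauss, t, profile, p0=p0)
dG = abs(popt[4] - popt[1])
print(f"peak gap dG = {dG:.2f}")
```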
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency-hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese yen swaps market and the US dollar yield market.
The light curve of SN 1987A revisited: constraining production masses of radioactive nuclides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seitenzahl, Ivo R.; Timmes, F. X.; Magkotsios, Georgios, E-mail: ivo.seitenzahl@anu.edu.au
2014-09-01
We revisit the evidence for the contribution of the long-lived radioactive nuclides ⁴⁴Ti, ⁵⁵Fe, ⁵⁶Co, ⁵⁷Co, and ⁶⁰Co to the UVOIR light curve of SN 1987A. We show that the V-band luminosity constitutes a roughly constant fraction of the bolometric luminosity between 900 and 1900 days, and we obtain an approximate bolometric light curve out to 4334 days by scaling the late time V-band data by a constant factor where no bolometric light curve data is available. Considering the five most relevant decay chains starting at ⁴⁴Ti, ⁵⁵Co, ⁵⁶Ni, ⁵⁷Ni, and ⁶⁰Co, we perform a least squares fit to the constructed composite bolometric light curve. For the nickel isotopes, we obtain best fit values of M(⁵⁶Ni) = (7.1 ± 0.3) × 10⁻² M⊙ and M(⁵⁷Ni) = (4.1 ± 1.8) × 10⁻³ M⊙. Our best fit ⁴⁴Ti mass is M(⁴⁴Ti) = (0.55 ± 0.17) × 10⁻⁴ M⊙, which is in disagreement with the much higher (3.1 ± 0.8) × 10⁻⁴ M⊙ recently derived from INTEGRAL observations. The associated uncertainties far exceed the best fit values for ⁵⁵Co and ⁶⁰Co and, as a result, we only give upper limits on the production masses of M(⁵⁵Co) < 7.2 × 10⁻³ M⊙ and M(⁶⁰Co) < 1.7 × 10⁻⁴ M⊙. Furthermore, we find that the leptonic channels in the decay of ⁵⁷Co (internal conversion and Auger electrons) are a significant contribution and constitute up to 15.5% of the total luminosity. Consideration of the kinetic energy of these electrons is essential in lowering our best fit nickel isotope production ratio to [⁵⁷Ni/⁵⁶Ni] = 2.5 ± 1.1, which is still somewhat high but is in agreement with gamma-ray observations and model predictions.
NASA Astrophysics Data System (ADS)
Conley, A.; Goldhaber, G.; Wang, L.; Aldering, G.; Amanullah, R.; Commins, E. D.; Fadeyev, V.; Folatelli, G.; Garavini, G.; Gibbons, R.; Goobar, A.; Groom, D. E.; Hook, I.; Howell, D. A.; Kim, A. G.; Knop, R. A.; Kowalski, M.; Kuznetsova, N.; Lidman, C.; Nobili, S.; Nugent, P. E.; Pain, R.; Perlmutter, S.; Smith, E.; Spadafora, A. L.; Stanishev, V.; Strovink, M.; Thomas, R. C.; Wood-Vasey, W. M.; Supernova Cosmology Project
2006-06-01
We present measurements of Ωm and ΩΛ from a blind analysis of 21 high-redshift supernovae using a new technique (CMAGIC) for fitting the multicolor light curves of Type Ia supernovae, first introduced by Wang and coworkers. CMAGIC takes advantage of the remarkably simple behavior of Type Ia supernovae on color-magnitude diagrams and has several advantages over current techniques based on maximum magnitudes. Among these are a reduced sensitivity to host galaxy dust extinction, a shallower luminosity-width relation, and the relative simplicity of the fitting procedure. This allows us to provide a cross-check of previous supernova cosmology results, despite the fact that current data sets were not observed in a manner optimized for CMAGIC. We describe the details of our novel blindness procedure, which is designed to prevent experimenter bias. The data are broadly consistent with the picture of an accelerating universe and agree with a flat universe within 1.7 σ, including systematics. We also compare the CMAGIC results directly with those of a maximum magnitude fit to the same supernovae, finding that CMAGIC favors more acceleration at the 1.6 σ level, including systematics and the correlation between the two measurements. A fit for w assuming a flat universe yields a value that is consistent with a cosmological constant within 1.2 σ.
NASA Technical Reports Server (NTRS)
Eggleston, John M; Mathews, Charles W
1954-01-01
In the process of analyzing the longitudinal frequency-response characteristics of aircraft, information on some of the methods of analysis has been obtained by the Langley Aeronautical Laboratory of the National Advisory Committee for Aeronautics. In the investigation of these methods, the practical applications and limitations were stressed. In general, the methods considered may be classed as: (1) analysis of sinusoidal response, (2) analysis of transient response as to harmonic content through determination of the Fourier integral by manual or machine methods, and (3) analysis of the transient through the use of least-squares solutions of the coefficients of an assumed equation for either the transient time response or frequency response (sometimes referred to as curve-fitting methods). (author)
Garrido, M; Larrechi, M S; Rius, F X
2006-02-01
This study describes the combination of multivariate curve resolution-alternating least squares with a kinetic modeling strategy for obtaining the kinetic rate constants of a curing reaction of epoxy resins. The reaction between phenyl glycidyl ether and aniline is monitored by near-infrared spectroscopy under isothermal conditions for several initial molar ratios of the reagents. The data for all experiments, arranged in a column-wise augmented data matrix, are analyzed using multivariate curve resolution-alternating least squares. The concentration profiles recovered are fitted to a chemical model proposed for the reaction. The selection of the kinetic model is assisted by the information contained in the recovered concentration profiles. The nonlinear fitting provides the kinetic rate constants. The optimized rate constants are in agreement with values reported in the literature.
NASA Astrophysics Data System (ADS)
Szalai, Robert; Ehrhardt, David; Haller, George
2017-06-01
In a nonlinear oscillatory system, spectral submanifolds (SSMs) are the smoothest invariant manifolds tangent to linear modal subspaces of an equilibrium. Amplitude-frequency plots of the dynamics on SSMs provide the classic backbone curves sought in experimental nonlinear model identification. We develop here a methodology to compute analytically both the shape of SSMs and their corresponding backbone curves from a data-assimilating model fitted to experimental vibration signals. This model identification utilizes Takens' delay-embedding theorem, as well as a least-squares fit to the Taylor expansion of the sampling map associated with that embedding. The SSMs are then constructed for the sampling map using the parametrization method for invariant manifolds, which assumes that the manifold is an embedding of, rather than a graph over, a spectral subspace. Using examples of both synthetic and real experimental data, we demonstrate that this approach reproduces backbone curves with high accuracy.
Estimation of Uncertainties in Stage-Discharge Curve for an Experimental Himalayan Watershed
NASA Astrophysics Data System (ADS)
Kumar, V.; Sen, S.
2016-12-01
Various water resource projects developed on rivers originating in the Himalayan region, the "Water Tower of Asia", play an important role in downstream development. Flow measurements at the desired river site are critical for river engineers and hydrologists in water resources planning and management, flood forecasting, reservoir operation and flood inundation studies. However, accurate discharge assessment of these mountainous rivers is costly, tedious and frequently dangerous for operators during flood events. Currently, in India, discharge estimation relies on the stage-discharge relationship known as the rating curve. This relationship is affected by a high degree of uncertainty, which remains challenging to estimate because it is not easy to parameterize. The main sources of rating curve uncertainty are errors in discharge and depth measurement and variation in hydraulic conditions. In this study, our objective is to obtain the rating curve parameters that best fit the limited record of observations and to estimate the uncertainty at different depths obtained from the rating curve. The rating curve parameters of the standard power law are estimated for three streams of the Aglar watershed, located in the Lesser Himalayas, by a maximum-likelihood estimator. Quantification of the uncertainty in the developed rating curves is obtained from the estimated variances and covariances of the rating curve parameters. Results showed that the uncertainty varied with catchment behavior, with errors between 0.006 and 1.831 m³/s. Discharge uncertainty in the Aglar watershed streams depends significantly on the extent of extrapolation outside the range of observed water levels. The extrapolation analysis indicated that extrapolating more than about 15% beyond the maximum observed discharge or 5% below the minimum is not recommended for these mountainous gauging sites.
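A hedged sketch of the core fit, Q = a(h − h0)^b, with synthetic gaugings; parameter variances and covariances are read from the estimated covariance matrix (a simple least-squares stand-in for the paper's maximum-likelihood treatment).

```python
# Power-law rating curve fit with approximate parameter uncertainties.
import numpy as np
from scipy.optimize import curve_fit

def rating(h, a, h0, b):
    # Clip keeps the base positive while the optimizer explores h0.
    return a * np.clip(h - h0, 1e-9, None) ** b

# Synthetic gaugings: stage h (m) and measured discharge Q (m^3/s).
h = np.array([0.42, 0.55, 0.70, 0.88, 1.05, 1.30, 1.62, 1.95])
Q = np.array([0.14, 0.49, 1.18, 2.21, 3.61, 5.94, 9.95, 14.66])

popt, pcov = curve_fit(rating, h, Q, p0=[5.0, 0.2, 2.0], maxfev=10000)
perr = np.sqrt(np.diag(pcov))
print("a=%.2f±%.2f  h0=%.2f±%.2f  b=%.2f±%.2f"
      % (popt[0], perr[0], popt[1], perr[1], popt[2], perr[2]))
```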
NASA Astrophysics Data System (ADS)
Suhaila, Jamaludin; Jemain, Abdul Aziz; Hamdan, Muhammad Fauzee; Wan Zin, Wan Zawiah
2011-12-01
Normally, rainfall data are collected on a daily, monthly or annual basis in the form of discrete observations. The aim of this study is to convert these rainfall values into a smooth curve or function that can represent the continuous rainfall process in each region, via a technique known as functional data analysis. Since rainfall data show a periodic pattern in each region, a Fourier basis is introduced to capture these variations. Eleven basis functions with five harmonics are used to describe the unimodal rainfall pattern for stations in the East, while five basis functions representing two harmonics are needed to describe the rainfall pattern in the West. Based on the fitted smooth curve, the wet and dry periods as well as the maximum and minimum rainfall values can be determined. Different rainfall patterns are observed among the studied regions based on the smooth curves. Using functional analysis of variance, the test results indicate that significant differences exist in the functional means between regions. The largest differences in the functional means are found between the East and Northwest regions; these differences are probably due to the effects of topography and geographical location, and are mostly influenced by the monsoons. Therefore, the same inputs or approaches might not be useful in modeling the hydrological process for different regions.
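A minimal sketch of the Fourier-basis representation for a western-style station (five basis functions, i.e., two harmonics), fitted by ordinary least squares to synthetic monthly totals:

```python
# Truncated Fourier basis smoothing of a monthly rainfall record.
import numpy as np

months = np.arange(12)
rain = np.array([170, 150, 120, 110, 95, 70, 65, 80, 105, 190, 240, 220.0])

omega = 2.0 * np.pi / 12.0
# Design matrix: constant + sin/cos of the first two harmonics (5 functions).
X = np.column_stack([np.ones(12),
                     np.sin(omega * months), np.cos(omega * months),
                     np.sin(2 * omega * months), np.cos(2 * omega * months)])
coef, *_ = np.linalg.lstsq(X, rain, rcond=None)
smooth = X @ coef

print("fitted coefficients:", np.round(coef, 1))
print("wettest fitted month:", int(np.argmax(smooth)) + 1)
```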
Interpolation and Polynomial Curve Fitting
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2014-01-01
Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…
Item Response Theory with Estimation of the Latent Density Using Davidian Curves
ERIC Educational Resources Information Center
Woods, Carol M.; Lin, Nan
2009-01-01
Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…
Catmull-Rom Curve Fitting and Interpolation Equations
ERIC Educational Resources Information Center
Jerome, Lawrence
2010-01-01
Computer graphics and animation experts have been using the Catmull-Rom smooth curve interpolation equations since 1974, but the vector and matrix equations can be derived and simplified using basic algebra, resulting in a simple set of linear equations with constant coefficients. A variety of uses of Catmull-Rom interpolation are demonstrated,…
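For reference, the standard constant-coefficient Catmull-Rom segment between P1 and P2 can be written directly; a small sketch:

```python
# Catmull-Rom cubic through four control points; the segment interpolates
# p1 (at t=0) and p2 (at t=1), with p0 and p3 shaping the tangents.
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate the Catmull-Rom segment between p1 and p2 at t in [0, 1]."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return 0.5 * ((2.0 * p1) +
                  (-p0 + p2) * t +
                  (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t**2 +
                  (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t**3)

pts = [np.array([0.0, 0.0]), np.array([1.0, 2.0]),
       np.array([3.0, 3.0]), np.array([4.0, 1.0])]
for t in (0.0, 0.5, 1.0):
    print(t, catmull_rom(*pts, t))
```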
Educating about Sustainability while Enhancing Calculus
ERIC Educational Resources Information Center
Pfaff, Thomas J.
2011-01-01
We give an overview of why it is important to include sustainability in mathematics classes and provide specific examples of how to do this for a calculus class. We illustrate that when students use "Excel" to fit curves to real data, fundamentally important questions about sustainability become calculus questions about those curves. (Contains 5…
On the mass of the compact object in the black hole binary A0620-00
NASA Technical Reports Server (NTRS)
Haswell, Carole A.; Robinson, Edward L.; Horne, Keith; Stiening, Rae F.; Abbott, Timothy M. C.
1993-01-01
Multicolor orbital light curves of the black hole candidate binary A0620-00 are presented. The light curves exhibit ellipsoidal variations and a grazing eclipse of the mass donor companion star by the accretion disk. Synthetic light curves were generated using realistic mass donor star fluxes and an isothermal blackbody disk. For mass ratios of q = M1/M2 = 5.0, 10.6, and 15.0, systematic searches were executed in parameter space for synthetic light curves that fit the observations. For each mass ratio, acceptable fits were found only for a small range of orbital inclinations. It is argued that the mass ratio is unlikely to exceed q = 10.6, and an upper limit of 0.8 solar masses is placed on the mass of the companion star. These constraints imply a primary mass M1 between 4.16 +/- 0.1 and 5.55 +/- 0.15 solar masses. The lower limit on M1 is more than 4 sigma above the mass of a maximally rotating neutron star, and constitutes further strong evidence in favor of a black hole primary in this system.
Comparison of three methods for wind turbine capacity factor estimation.
Ditkovich, Y; Kuperman, A
2014-01-01
Three approaches to calculating the capacity factor of fixed-speed wind turbines are reviewed and compared using a case study. The first, "quasiexact" approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. The second, "analytic" approach employs a continuous probability distribution function fitted to the wind data, together with a continuous turbine power curve obtained by double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while an approximation, can be solved analytically, providing valuable insight into the factors affecting the capacity factor. Moreover, several other figures of merit of wind turbine performance may be derived from the analytic approach. The third, "approximate" approach, valid for Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained by the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation.
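A sketch of the "quasiexact" idea in continuous form: integrate an illustrative power curve against a Rayleigh wind-speed distribution and normalize by rated power (all turbine numbers are invented).

```python
# Capacity factor = expected power output / rated power.
import numpy as np

v = np.linspace(0.0, 30.0, 601)            # wind speed grid (m/s)
V_IN, V_RATED, V_OUT, P_RATED = 3.0, 12.0, 25.0, 2000.0  # speeds m/s, kW

# Illustrative piecewise power curve: cubic rise between cut-in and rated,
# constant at rated power up to cut-out, zero elsewhere.
power = np.where((v >= V_IN) & (v < V_RATED),
                 P_RATED * (v**3 - V_IN**3) / (V_RATED**3 - V_IN**3),
                 np.where((v >= V_RATED) & (v <= V_OUT), P_RATED, 0.0))

v_mean = 7.5                               # site mean wind speed (m/s)
sigma = v_mean * np.sqrt(2.0 / np.pi)      # Rayleigh scale from the mean
pdf = (v / sigma**2) * np.exp(-v**2 / (2.0 * sigma**2))

dv = v[1] - v[0]
cf = np.sum(power * pdf) * dv / P_RATED    # trapezoid-like numerical integral
print(f"capacity factor ~ {cf:.2f}")
```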
Rice, Simon M; Ogrodniczuk, John S; Kealy, David; Seidler, Zac E; Dhillon, Haryana M; Oliffe, John L
2017-12-22
Clinical practice and the literature have supported the existence of a phenotypic subtype of depression in men. While a number of self-report rating scales have been developed to empirically test the male depression construct, psychometric validation of these scales is limited. The aims were to confirm the psychometric properties of the multidimensional Male Depression Risk Scale (MDRS-22) and to develop clinical cut-off scores for it. Data were obtained from an online sample of 1000 Canadian men (mean age = 49.63 years, standard deviation (SD) = 14.60). Confirmatory factor analysis (CFA) was used to replicate the established six-factor model of the MDRS-22. Psychometric values of the MDRS subscales were comparable to those of the widely used Patient Health Questionnaire-9 (PHQ-9). CFA fit indices indicated adequate model fit for the six-factor MDRS-22 model. ROC curve analysis indicated that the MDRS-22 was effective for identifying those with a recent (previous four weeks) suicide attempt (area under the curve (AUC) = 0.837). The MDRS-22 cut-off identified proportionally more (84.62%) cases of recent suicide attempt than the PHQ-9 moderate range (53.85%). The MDRS-22 is the first male-sensitive depression scale to be psychometrically validated using CFA techniques in independent, cross-national samples. Additional studies should examine differential item functioning and evaluate cross-cultural effects.
An analysis of the massless planet approximation in transit light curve models
NASA Astrophysics Data System (ADS)
Millholland, Sarah; Ruch, Gerry
2015-08-01
Many extrasolar planet transit light curve models use the approximation of a massless planet. They approximate the planet as orbiting elliptically with the host star at the orbit’s focus instead of depicting the planet and star as both orbiting around a common center of mass. This approximation should generally be very good because the transit is a small fraction of the full-phase curve and the planet to stellar mass ratio is typically very small. However, to fully examine the legitimacy of this approximation, it is useful to perform a robust, all-parameter space-encompassing statistical comparison between the massless planet model and the more accurate model.Towards this goal, we establish two questions: (1) In what parameter domain is the approximation invalid? (2) If characterizing an exoplanetary system in this domain, what is the error of the parameter estimates when using the simplified model? We first address question (1). Given each parameter vector in a finite space, we can generate the simplified and more complete model curves. Associated with these model curves is a measure of the deviation between them, such as the root mean square (RMS). We use Gibbs sampling to generate a sample that is distributed according to the RMS surface. The high-density regions in the sample correspond to a large deviation between the models. To determine the domains of these high-density areas, we first employ the Ordering Points to Identify the Clustering Structure (OPTICS) algorithm. We then characterize the subclusters by performing the Patient Rule Induction Method (PRIM) on the transformed Principal Component spaces of each cluster. This process yields descriptors of the parameter domains with large discrepancies between the models.To consider question (2), we start by generating synthetic transit curve observations in the domains specified by the above analysis. We then derive the best-fit parameters of these synthetic light curves according to each model and examine the quality of agreement between the estimated parameters. Taken as a whole, these steps allow for a thorough analysis of the validity of the massless planet approximation.
Efficient Workflows for Curation of Heterogeneous Data Supporting Modeling of U-Nb Alloy Aging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Logan Timothy; Hackenberg, Robert Errol
These are slides from a presentation summarizing a graduate research associate's summer project. The following topics are covered in these slides: Data Challenges in Materials, Aging in U-Nb Alloys, Building an Aging Model, Different Phase Trans. in U-Nb, the Challenge, Storing Materials Data, Example Data Source, Organizing Data: What is a Schema?, What does a "XML Schema" look like?, Our Data Schema: Nice and Simple, Storing Data: Materials Data Curation System (MDCS), Problem with MDCS: Slow Data Entry, Getting Literature into MDCS, Staging Data in Excel Document, Final Result: MDCS Records, Analyzing Image Data, Process for Making TTT Diagram, Bottleneck Number 1: Image Analysis, Fitting a TTP Boundary, Fitting a TTP Curve: Comparable Results, How Does it Compare to Our Data?, Image Analysis Workflow, Curating Hardness Records, Hardness Data: Two Key Decisions, Before Peak Age? - Automation, Interactive Viz, Which Transformation?, Microstructure-Informed Model, Tracking the Entire Process, General Problem with Property Models, Pinyon: Toolkit for Managing Model Creation, Tracking Individual Decisions, Jupyter: Docs and Code in One File, Hardness Analysis Workflow, Workflow for Aging Models, and conclusions.
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, the normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of the water quality parameters were estimated. It is observed that the predictive model is useful at the 95% confidence limits, and the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. It is also observed that the predicted series is close to the original series, which provides a perfect fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
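A minimal sketch of the time-series side of such an analysis, assuming statsmodels and a synthetic monthly series in place of the Yamuna data:

```python
# Fit an ARIMA model to a monthly series and forecast with 95% limits.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(11)
n = 120
# Synthetic monthly water-quality series: slow trend plus AR(1) noise.
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.5)
series = pd.Series(5.0 + 0.01 * np.arange(n) + noise,
                   index=pd.date_range("2004-01-01", periods=n, freq="MS"))

model = ARIMA(series, order=(1, 1, 1)).fit()
forecast = model.get_forecast(steps=12)
print(forecast.predicted_mean.round(2).head())
print(forecast.conf_int(alpha=0.05).round(2).head())
```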
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torello, David; Kim, Jin-Yeon; Qu, Jianmin
2015-03-31
This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided, based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β₁₁ is proposed, based on a nonlinear least-squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens, and a β₁₁(7075)/β₁₁(2024) ratio of 1.363 agrees well with previous literature and earlier work.
Scaling laws for light-weight optics
NASA Technical Reports Server (NTRS)
Valente, Tina M.
1990-01-01
Scaling laws for light-weight optical systems are examined. A cubic relationship between mirror diameter and weight has been suggested and used by many designers of optical systems as the best description for all light-weight mirrors. A survey of existing light-weight systems in the open literature has been made to clarify this issue. Fifty existing optical systems were surveyed, covering all varieties of light-weight mirrors, including glass and beryllium structured mirrors, contoured mirrors, and very thin solid mirrors. These mirrors were categorized, and the weight-to-diameter ratio was plotted to find a best-fit curve for each case. A curve-fitting program tests nineteen different equations and ranks the 'goodness of fit' for each. The resulting relationship for each light-weight mirror category helps to quantify light-weight optical systems and fabrication methods and provides comparisons between mirror types.
NASA Astrophysics Data System (ADS)
Reolon, David; Jacquot, Maxime; Verrier, Isabelle; Brun, Gérald; Veillas, Colette
2006-12-01
In this paper we propose group refractive index measurement with a spectral interferometric set-up using a broadband supercontinuum generated in an air-silica microstructured optical fibre (MOF) pumped with a picosecond pulsed microchip laser. This source provides high fringe visibility for dispersion measurements by Spectroscopic Analysis of White Light Interferograms (SAWLI). Phase calculation is performed by a wavelet transform procedure combined with a curve fit of the recorded channelled spectrum intensity. This approach provides high-resolution, absolute group refractive index measurements along one line of the sample by recording a single 2D spectral interferogram, without mechanical scanning.
A Method for the Interpretation of Flow Cytometry Data Using Genetic Algorithms.
Angeletti, Cesar
2018-01-01
Flow cytometry analysis is the method of choice for the differential diagnosis of hematologic disorders. It is typically performed by a trained hematopathologist through visual examination of bidimensional plots, making the analysis time-consuming and sometimes too subjective. Here, a pilot study applying genetic algorithms to flow cytometry data from normal and acute myeloid leukemia subjects is described. Initially, Flow Cytometry Standard files from 316 normal and 43 acute myeloid leukemia subjects were transformed into multidimensional FITS image metafiles. Training was performed by introducing FITS metafiles from 4 normal and 4 acute myeloid leukemia subjects into the artificial intelligence system. Two mathematical algorithms, termed 018330 and 025886, were generated. When tested against a cohort of 312 normal and 39 acute myeloid leukemia subjects, the two algorithms combined showed high discriminatory power, with an area under the receiver operating characteristic (ROC) curve of 0.912. The present results suggest that machine learning systems hold great promise for the interpretation of hematological flow cytometry data.
Barnard, M.; Venter, C.; Harding, A. K.
2018-01-01
We performed geometric pulsar light curve modeling using static, retarded vacuum, and offset polar cap (PC) dipole B-fields (the latter is characterized by a parameter ε), in conjunction with standard two-pole caustic (TPC) and outer gap (OG) emission geometries. The offset-PC dipole B-field mimics deviations from the static dipole (which corresponds to ε = 0). In addition to constant-emissivity geometric models, we also considered a slot gap (SG) E-field associated with the offset-PC dipole B-field and found that its inclusion leads to qualitatively different light curves. Solving the particle transport equation shows that the particle energy only becomes large enough to yield significant curvature radiation at large altitudes above the stellar surface, given this relatively low E-field. Therefore, particles do not always attain the radiation-reaction limit. Our overall optimal light curve fit is for the retarded vacuum dipole field and OG model, at an inclination angle α = 78° (+1°/−1°) and observer angle ζ = 69° (+2°/−1°). For this B-field, the TPC model is statistically disfavored compared to the OG model. For the static dipole field, neither model is significantly preferred. We found that smaller values of ε are favored for the offset-PC dipole field when assuming constant emissivity, and larger ε values are favored for variable emissivity, but not significantly so. When multiplying the SG E-field by a factor of 100, we found improved light curve fits, with α and ζ being closer to the best fits from independent studies, as well as curvature radiation reaction at lower altitudes. PMID:29681648
Fatigue loading and R-curve behavior of a dental glass-ceramic with multiple flaw distributions.
Joshi, Gaurav V; Duan, Yuanyuan; Della Bona, Alvaro; Hill, Thomas J; St John, Kenneth; Griggs, Jason A
2013-11-01
To determine the effects of surface finish and mechanical loading on the rising toughness curve (R-curve) behavior of a fluorapatite glass-ceramic (IPS e.max ZirPress) and to determine a statistical model for fitting fatigue lifetime data with multiple flaw distributions. Rectangular beam specimens were fabricated by pressing. Two groups of specimens (n = 30) with polished (15 μm) or air-abraded surfaces were tested under rapid monotonic loading in oil. Additional polished specimens were subjected to cyclic loading at 2 Hz (n = 44) and 10 Hz (n = 36). All fatigue tests were performed using a fully articulated four-point flexure fixture in 37 °C water. Fractography was used to determine the critical flaw size and estimate fracture toughness. To establish the presence of R-curve behavior, non-linear regression was used. Forward stepwise regression was performed to determine the effects on fracture toughness of variables such as initial flaw type, critical flaw size, critical flaw eccentricity, cycling frequency, peak load, and number of cycles. Fatigue lifetime data were fit to an exclusive flaw model. Fracture toughness increased with increasing critical flaw size for both loading methods (rapid monotonic loading and fatigue). Fracture toughness values ranged from 0.75 to 1.1 MPa·m^(1/2), reaching a plateau at different critical flaw sizes depending on the loading method. Cyclic loading had a significant effect on the R-curve behavior. The fatigue lifetime distribution depended on the flaw distribution and fit well to an exclusive flaw model. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.