Curve fitting methods for solar radiation data modeling
NASA Astrophysics Data System (ADS)
Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder
2014-10-01
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. Error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
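A minimal sketch of this kind of fit, using SciPy's curve_fit for a two-term Gaussian model and reporting RMSE and R2; the radiation samples below are synthetic placeholders, not the UTP data, and the initial guess is an assumption.

```python
# Sketch: two-term Gaussian fit to daily solar-radiation samples, with RMSE and R^2.
# The data arrays are illustrative placeholders, not the UTP measurements.
import numpy as np
from scipy.optimize import curve_fit

def gauss2(t, a1, b1, c1, a2, b2, c2):
    """Two-term Gaussian: a1*exp(-((t-b1)/c1)^2) + a2*exp(-((t-b2)/c2)^2)."""
    return (a1 * np.exp(-((t - b1) / c1) ** 2)
            + a2 * np.exp(-((t - b2) / c2) ** 2))

rng = np.random.default_rng(0)
t = np.linspace(7, 19, 25)                            # hours of daylight
y = 900 * np.exp(-((t - 13) / 3) ** 2) + 30 * rng.standard_normal(t.size)

p0 = [800, 12, 3, 100, 15, 2]                         # rough initial guess
popt, _ = curve_fit(gauss2, t, y, p0=p0, maxfev=10000)

resid = y - gauss2(t, *popt)
rmse = np.sqrt(np.mean(resid ** 2))
r2 = 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
print(f"RMSE = {rmse:.1f} W/m^2, R^2 = {r2:.3f}")
```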
NASA Astrophysics Data System (ADS)
McCraig, Michael A.; Osinski, Gordon R.; Cloutis, Edward A.; Flemming, Roberta L.; Izawa, Matthew R. M.; Reddy, Vishnu; Fieber-Beyer, Sherry K.; Pompilio, Loredana; van der Meer, Freek; Berger, Jeffrey A.; Bramble, Michael S.; Applin, Daniel M.
2017-03-01
Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical makeup of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to working with spectra may find inadequate help or documentation in the scientific literature or in the software packages available for curve fitting. This problem also extends to the parameterization of spectra and the dissemination of derived metrics. Often, when derived metrics are reported, such as band centres, the discussion of exactly how the metrics were derived, or whether any systematic curve fitting was performed, is not included. Herein we provide both recommendations and methods for curve fitting and explanations of the terms and methods used. Techniques to curve fit spectral data of various types are demonstrated using simple-to-understand mathematics and equations written to be used in Microsoft Excel® software, free of macros, in a cut-and-paste fashion that allows one to curve fit spectra in a reasonably user-friendly manner. The procedures use empirical curve fitting, include visualizations, and ameliorate many of the unknowns one may encounter when using black-box commercial software. The provided framework is a comprehensive record of the curve fitting parameters used and the derived metrics, and is intended to be an example of a format for dissemination when curve fitting data.
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-09-01
The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks through multiple curve fitting passes to obtain a lower-residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm, obtained from the LIBS spectra of five different concentrations of CuSO4·5H2O solution, were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. A calibration curve between the intensity and concentration of Cu was then established. The error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and can be applied to the decomposition and correction of overlapping peaks in LIBS spectra.
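A rough sketch of the residual-feedback idea under stated assumptions: fit the overlapping doublet, then fit the residual with the same model and fold the correction back in. The peak positions, widths and noise level are invented, and the authors' exact compensation scheme may differ.

```python
# Sketch of curve fitting with a residual-compensation pass for two
# overlapping peaks. Positions/widths are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(x, a1, c1, w1, a2, c2, w2):
    return (a1 * np.exp(-((x - c1) / w1) ** 2)
            + a2 * np.exp(-((x - c2) / w2) ** 2))

x = np.linspace(321.0, 327.0, 400)
rng = np.random.default_rng(0)
y = two_gauss(x, 1.0, 324.7, 0.4, 0.6, 325.4, 0.5) + 0.01 * rng.standard_normal(x.size)

# First pass: ordinary curve fit of the overlapping doublet.
p, _ = curve_fit(two_gauss, x, y, p0=[0.8, 324.5, 0.5, 0.5, 325.6, 0.5])
resid = y - two_gauss(x, *p)

# Compensation pass: fit the residual with the same model (centers/widths
# seeded from the first pass) and fold the correction back into the peaks.
q, _ = curve_fit(two_gauss, x, resid,
                 p0=[0.0, p[1], p[2], 0.0, p[4], p[5]])
corrected = two_gauss(x, *p) + two_gauss(x, *q)

print("residual RSS before:", float(np.sum(resid ** 2)))
print("residual RSS after: ", float(np.sum((y - corrected) ** 2)))
```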
[Comparison among various software for LMS growth curve fitting methods].
Han, Lin; Wu, Wenhong; Wei, Qiuxia
2015-03-01
To explore methods of realizing skewness-median-coefficient of variation (LMS) growth curve fitting in different software packages, and to optimize the growth curve statistical method for grass-roots child and adolescent health staff. Regular physical examination data of head circumference for normal infants aged 3, 6, 9 and 12 months in Baotou City were analyzed. The statistical software packages SAS, R, STATA and SPSS were used to fit the LMS growth curve, and the results were evaluated with respect to user convenience, learning curve, user interface, forms of results display, software updates and maintenance, and so on. The growth curve fitting results showed the same calculated outcome in each package, and each package had its own advantages and disadvantages. Taking all the evaluation aspects into consideration, R excelled the others in LMS growth curve fitting and has the advantage for grass-roots child and adolescent health staff.
Multimodal determination of Rayleigh dispersion and attenuation curves using the circle fit method
NASA Astrophysics Data System (ADS)
Verachtert, R.; Lombaert, G.; Degrande, G.
2018-03-01
This paper introduces the circle fit method for the determination of multi-modal Rayleigh dispersion and attenuation curves as part of a Multichannel Analysis of Surface Waves (MASW) experiment. The wave field is transformed to the frequency-wavenumber (fk) domain using a discretized Hankel transform. In a Nyquist plot of the fk-spectrum, displaying the imaginary part against the real part, the Rayleigh wave modes correspond to circles. The experimental Rayleigh dispersion and attenuation curves are derived from the angular sweep of the central angle of these circles. The method can also be applied to the analytical fk-spectrum of the Green's function of a layered half-space in order to compute dispersion and attenuation curves, as an alternative to solving an eigenvalue problem. A MASW experiment is subsequently simulated for a site with a regular velocity profile and a site with a soft layer trapped between two stiffer layers. The performance of the circle fit method to determine the dispersion and attenuation curves is compared with the peak picking method and the half-power bandwidth method. The circle fit method is found to be the most accurate and robust method for the determination of the dispersion curves. When determining attenuation curves, the circle fit method and half-power bandwidth method are accurate if the mode exhibits a sharp peak in the fk-spectrum. Furthermore, simulated and theoretical attenuation curves determined with the circle fit method agree very well. A similar correspondence is not obtained when using the half-power bandwidth method. Finally, the circle fit method is applied to measurement data obtained for a MASW experiment at a site in Heverlee, Belgium. In order to validate the soil profile obtained from the inversion procedure, force-velocity transfer functions were computed and found in good correspondence with the experimental transfer functions, especially in the frequency range between 5 and 80 Hz.
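A small sketch of the core geometric step, assuming a Kasa-style algebraic circle fit: samples of a single-mode resonance trace a circle in the Nyquist plane, and the mode parameters follow from the sweep of the central angle. The synthetic single-degree-of-freedom spectrum below stands in for the fk-spectrum.

```python
# Sketch of an algebraic (Kasa-type) circle fit to complex spectrum samples
# in the Nyquist plane, as used to trace a Rayleigh mode. The synthetic
# "spectrum" is a single-mode resonance, an illustrative stand-in.
import numpy as np

def fit_circle(u, v):
    """Least-squares circle through points (u, v): returns center (a, b) and radius r."""
    A = np.column_stack([2 * u, 2 * v, np.ones_like(u)])
    c, *_ = np.linalg.lstsq(A, u ** 2 + v ** 2, rcond=None)
    a, b = c[0], c[1]
    r = np.sqrt(c[2] + a ** 2 + b ** 2)
    return a, b, r

k = np.linspace(0.8, 1.2, 41)                     # wavenumber sweep
spec = 1.0 / (1.0 - (k / 1.0) ** 2 + 1j * 0.05)   # resonance: traces a circle
a, b, r = fit_circle(spec.real, spec.imag)
print(f"center=({a:.3f},{b:.3f}), radius={r:.3f}")

# The mode's wavenumber and attenuation follow from the sweep of the central
# angle theta(k) around the fitted circle.
theta = np.unwrap(np.arctan2(spec.imag - b, spec.real - a))
```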
Sánchez-Jiménez, Pedro E; Pérez-Maqueda, Luis A; Perejón, Antonio; Criado, José M
2013-02-05
This paper provides some clarifications regarding the use of model-fitting methods of kinetic analysis for estimating the activation energy of a process, in response to some results recently published in Chemistry Central Journal. The model-fitting methods of Arrhenius and Savata are used to determine the activation energy of a single simulated curve. It is shown that most kinetic models correctly fit the data, each providing a different value for the activation energy; therefore it is not really possible to determine the correct activation energy from a single non-isothermal curve. On the other hand, when a set of curves recorded under different heating schedules is used, the correct kinetic parameters can be clearly discerned. Here, it is shown that the activation energy and the kinetic model cannot be unambiguously determined from a single experimental curve recorded under non-isothermal conditions. Thus, the use of a set of curves recorded under different heating schedules is mandatory if model-fitting methods are employed.
Non-linear Multidimensional Optimization for use in Wire Scanner Fitting
NASA Astrophysics Data System (ADS)
Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; Center Advanced Studies of Accelerators Collaboration
2014-03-01
To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle swarm. In this Python-implemented approach, augmenting the locally convergent NCG with one of the globally convergent methods ensures the quality, robustness, and automation of curve fitting. After comparing the methods, we establish that, given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement of the fit to the data. NCG is the fastest method, so it is the first to attempt the data fitting. The curve-fitting procedure escalates to one of the globally convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method allows for an optimal signal fit and can be easily applied to similar problems.
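A sketch of the escalation strategy described above, in Python (the stated implementation language), using SciPy's Newton-CG as the fast local method and differential evolution as one globally convergent fallback; the wire-scan profile, initial guess heuristic, and bounds are invented for illustration.

```python
# Sketch of the escalation strategy: try a fast local Newton-CG fit of a
# Gaussian to wire-scanner data first, and fall back to globally convergent
# differential evolution only if the local fit fails. Data are synthetic.
import numpy as np
from scipy.optimize import minimize, differential_evolution, approx_fprime

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 81)
y = 3.0 * np.exp(-0.5 * ((x - 0.7) / 1.2) ** 2) + 0.05 * rng.standard_normal(x.size)

def model(p, x):
    amp, mu, sigma, off = p
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + off

def chi2(p):
    return float(np.sum((y - model(p, x)) ** 2))

p0 = [y.max() - y.min(), x[np.argmax(y)], 1.0, y.min()]   # data-derived guess
res = minimize(chi2, p0, method='Newton-CG',
               jac=lambda p: approx_fprime(p, chi2, 1e-8))

if not res.success:                                       # escalate to a global method
    bounds = [(0, 10), (-5, 5), (0.1, 5), (-1, 1)]
    res = differential_evolution(chi2, bounds, seed=0)

print("fit parameters:", res.x, "chi-square:", res.fun)
```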
Curve fitting air sample filter decay curves to estimate transuranic content.
Hayes, Robert B; Chiou, Hung Cheng
2004-01-01
By testing industry-standard techniques for radon progeny evaluation on air sample filters, a new technique is developed to evaluate transuranic activity on air filters by curve fitting the decay curves. The industry method modified here is simply the use of filter activity measurements at different times to estimate the air concentrations of radon progeny. The primary modification was to look not for specific radon progeny values but rather for transuranic activity. By using a method that provides reasonably conservative estimates of the transuranic activity present on a filter, some credit for the decay curve shape can be taken. By carrying out rigorous statistical analysis of the curve fits to over 65 samples with no transuranic activity, taken over a 10-month period, an optimization of the fitting function and quality tests for this purpose was attained.
Edge detection and mathematic fitting for corneal surface with Matlab software.
Di, Yue; Li, Mei-Yan; Qiao, Tong; Lu, Na
2017-01-01
To select the optimal edge detection methods to identify the corneal surface, and to compare three fitted curve equations with Matlab software. Fifteen subjects were recruited. The corneal images from optical coherence tomography (OCT) were imported into Matlab software. Five edge detection methods (Canny, Log, Prewitt, Roberts, Sobel) were used to identify the corneal surface. Then two manual identification methods (ginput and getpts) were applied to identify the edge coordinates respectively, and the differences among these methods were compared. A binomial curve (y = Ax^2 + Bx + C), a polynomial curve [p(x) = p1x^n + p2x^(n-1) + ... + pnx + p(n+1)] and a conic section (Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0) were used to curve fit the corneal surface respectively, and the relative merits of the three fitted curves were analyzed. Finally, the eccentricity (e) obtained by corneal topography and by the conic section were compared with a paired t-test. All five edge detection algorithms gave continuous coordinates which indicated the edge of the corneal surface. The ordinates of the manual identification were close to the inside of the actual edges. The binomial curve was greatly affected by tilt angle. The polynomial curve lacked geometrical properties and was unstable. The conic section could calculate the tilted symmetry axis, eccentricity, circle center, etc. There were no significant differences between the 'e' values from corneal topography and from the conic section (t = 0.9143, P = 0.3760 > 0.05). It is feasible to simulate the corneal surface with a mathematical curve in Matlab software. Edge detection has better repeatability and higher efficiency. The manual identification approach is an indispensable complement for detection. Polynomial and conic section are both viable methods for corneal curve fitting, with the conic curve the optimal choice based on its specific geometrical properties.
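A brief sketch of the conic-section fit, assuming the standard least-squares formulation: the coefficients of Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 are taken as the smallest singular vector of the design matrix, and the eccentricity follows from the quadratic-form eigenvalues. The "edge points" here are synthetic ellipse samples, not OCT data.

```python
# Sketch of a general conic fit Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 to edge
# coordinates, solved as the smallest singular vector of the design matrix.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.2, np.pi - 0.2, 100)
x = 7.8 * np.cos(t)
y = 6.5 * np.sin(t) + 0.01 * rng.standard_normal(t.size)

D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
_, _, Vt = np.linalg.svd(D)
A, B, C, Dc, E, F = Vt[-1]          # conic coefficients, up to scale

# Eccentricity from the quadratic-form eigenvalues (central conic assumed):
# for an ellipse the eigenvalues are proportional to 1/a^2 and 1/b^2.
lam = np.abs(np.linalg.eigvalsh(np.array([[A, B / 2], [B / 2, C]])))
e = np.sqrt(1 - lam.min() / lam.max())
print(f"fitted eccentricity e = {e:.3f}")
```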
On the reduction of occultation light curves. [stellar occultations by planets
NASA Technical Reports Server (NTRS)
Wasserman, L.; Veverka, J.
1973-01-01
The two basic methods of reducing occultation light curves - curve fitting and inversion - are reviewed and compared. It is shown that the curve fitting methods have severe problems of nonuniqueness. In addition, in the case of occultation curves dominated by spikes, it is not clear that such solutions are meaningful. The inversion method does not suffer from these drawbacks. Methods of deriving temperature profiles from refractivity profiles are then examined. It is shown that, although the temperature profiles are sensitive to small errors in the refractivity profile, accurate temperatures can be obtained, particularly at the deeper levels of the atmosphere. The ambiguities that arise when the occultation curve straddles the turbopause are briefly discussed.
Miao, Zewei; Xu, Ming; Lathrop, Richard G; Wang, Yufei
2009-02-01
A review of the literature revealed that a variety of methods are currently used for fitting net assimilation of CO2-chloroplastic CO2 concentration (A-Cc) curves, resulting in considerable differences in estimates of the A-Cc parameters [including maximum ribulose 1,5-bisphosphate carboxylase/oxygenase (Rubisco) carboxylation rate (Vcmax), potential light-saturated electron transport rate (Jmax), leaf dark respiration in the light (Rd), mesophyll conductance (gm) and triose-phosphate utilization (TPU)]. In this paper, we examined the impacts of fitting methods on the estimation of Vcmax, Jmax, TPU, Rd and gm using grid search and non-linear fitting techniques. Our results suggested that the fitting methods significantly affected the predictions of Rubisco-limited (Ac), ribulose 1,5-bisphosphate-limited (Aj) and TPU-limited (Ap) curves and leaf photosynthesis velocities because of inconsistent estimates of Vcmax, Jmax, TPU, Rd and gm, but they barely influenced the Jmax : Vcmax, Vcmax : Rd and Jmax : TPU ratios. In terms of fitting accuracy, simplicity of fitting procedures and sample size requirements, we recommend combining grid search and non-linear techniques to directly and simultaneously fit Vcmax, Jmax, TPU, Rd and gm to the whole A-Cc curve, in contrast to the conventional method, which fits Rd or gm first and then solves for Vcmax, Jmax and/or TPU with Vcmax, Rd and/or gm held as constants.
Quantifying and Reducing Curve-Fitting Uncertainty in Isc
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-06-14
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
An Empirical Fitting Method for Type Ia Supernova Light Curves: A Case Study of SN 2011fe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, WeiKang; Filippenko, Alexei V.
We present a new empirical fitting method for the optical light curves of Type Ia supernovae (SNe Ia). We find that a variant broken-power-law function provides a good fit, with the simple assumption that the optical emission is approximately the blackbody emission of the expanding fireball. This function is mathematically analytic and is derived directly from the photospheric velocity evolution. When deriving the function, we assume that both the blackbody temperature and photospheric velocity are constant, but the final function is able to accommodate these changes during the fitting procedure. Applying it to the case study of SN 2011fe gives a surprisingly good fit that can describe the light curves from the first-light time to a few weeks after peak brightness, as well as over a large range of fluxes (∼5 mag, and even ∼7 mag in the g band). Since SNe Ia share similar light-curve shapes, this fitting method has the potential to fit most other SNe Ia and characterize their properties in large statistical samples such as those already gathered and in the near future as new facilities become available.
Bentzley, Brandon S.; Fender, Kimberly M.; Aston-Jones, Gary
2012-01-01
Rationale: Behavioral-economic demand curve analysis offers several useful measures of drug self-administration. Although generation of demand curves previously required multiple days, recent within-session procedures allow curve construction from a single 110-min cocaine self-administration session, making behavioral-economic analyses available to a broad range of self-administration experiments. However, a mathematical approach to curve fitting has not been reported for the within-session threshold procedure. Objectives: We review demand curve analysis in drug self-administration experiments and provide a quantitative method for fitting curves to single-session data that incorporates relative stability of brain drug concentration. Methods: Sprague-Dawley rats were trained to self-administer cocaine, and then tested with the threshold procedure in which the cocaine dose was sequentially decreased on a fixed-ratio 1 schedule. Price points (responses/mg cocaine) outside of relatively stable brain cocaine concentrations were removed before curves were fit. Curve-fit accuracy was determined by the degree of correlation between graphical and calculated parameters for cocaine consumption at low price (Q0) and the price at which maximal responding occurred (Pmax). Results: Removing price points that occurred at relatively unstable brain cocaine concentrations generated precise estimates of Q0 and resulted in Pmax values in significantly closer agreement with graphical Pmax than conventional methods. Conclusion: The exponential demand equation can be fit to single-session data using the threshold procedure for cocaine self-administration. Removing data points that occur during relatively unstable brain cocaine concentrations resulted in more accurate estimates of demand curve slope than graphical methods, permitting a more comprehensive analysis of drug self-administration via a behavioral-economic framework. PMID:23086021
Study on peak shape fitting method in radon progeny measurement.
Yang, Jinmin; Zhang, Lei; Abdumomin, Kadir; Tang, Yushi; Guo, Qiuju
2015-11-01
Alpha spectrum measurement is one of the most important methods of measuring radon progeny concentrations in the environment. However, the accuracy of this method is affected by peak tailing due to the energy losses of alpha particles. This article presents a peak shape fitting method that can overcome the peak tailing problem in most situations. On a typical measured alpha spectrum curve, consecutive peaks overlap even when their energies are not close to each other, and it is difficult to calculate the exact count of each peak. The peak shape fitting method uses a combination of Gaussian and exponential functions, which can depict the features of those peaks, to fit the measured curve. It provides net counts of each peak explicitly, which were used in the Kerr calculation procedure for radon progeny concentration measurement. The results show that the fitted curve agrees well with the measured curve, and the influence of peak tailing is reduced. The method was further validated by the agreement between radon equilibrium equivalent concentrations based on this method and the measured values of some commercial radon monitors, such as EQF3220 and WLx. In addition, this method improves the accuracy of individual radon progeny concentration measurement. In particular, for the 218Po peak, eliminating the peak tailing influence reduced the calculated 218Po concentration by 21%.
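A sketch of one plausible Gaussian-plus-exponential peak shape, assuming the left-tailed exponentially modified Gaussian commonly used for alpha peaks; the energies, resolution and tail constant are illustrative values, not the paper's calibration.

```python
# Sketch of a peak-shape model combining a Gaussian with a low-energy
# exponential tail, as is common for alpha spectra. All values are assumed.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def alpha_peak(E, area, mu, sigma, tau):
    """Gaussian of width sigma convolved with an exponential tail toward low E."""
    z = (E - mu) / sigma
    return (area / (2 * tau) * np.exp((E - mu) / tau + sigma**2 / (2 * tau**2))
            * erfc(z / np.sqrt(2) + sigma / (np.sqrt(2) * tau)))

rng = np.random.default_rng(0)
E = np.linspace(5500, 6200, 350)                  # keV
y = alpha_peak(E, 1e4, 6002.4, 15.0, 40.0)        # a 218Po-like peak (illustrative)
y_noisy = rng.poisson(np.maximum(y, 0)).astype(float)

p0 = [y_noisy.sum() * (E[1] - E[0]), E[y_noisy.argmax()], 20.0, 30.0]
popt, _ = curve_fit(alpha_peak, E, y_noisy, p0=p0)
print("net peak area estimate:", popt[0])
```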
An Apparatus for Sizing Particulate Matter in Solid Rocket Motors.
1984-06-01
...accurately measured. A curve for sizing polydispersions was presented which was used by Cramer and Hansen [Refs. 2, 12]. Two-phase flow losses are often... [The remainder of this record is figure-list residue: curve-fit and two-angle-method results for 5, 10, and 20 micron polystyrene.]
NASA Astrophysics Data System (ADS)
Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde
2006-03-01
European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However, this is time-consuming and has large inter-observer error. To overcome these problems a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting its output is not defined. This study evaluates methods of determining threshold contrast from the program, and compares these to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding, (B) psychometric curve fitting, (C) smoothing and interpolation, and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility, with a standard error in threshold contrast of 18.1 ± 0.7%. This was reduced to 8.4% by using a contrast-detail curve fitting procedure. Method (D) had the best reproducibility, with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of 3 human observers had an error of 4.4%, reduced to 2.9% by curve fitting. All automatic methods led to threshold contrasts that were lower than for humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 ± 0.04 (SEM) at 0.1 mm and 1.82 ± 0.06 at 0.25 mm for method (D). There were good correlations between the threshold contrasts determined by humans and by the automated methods.
Methods for the Precise Locating and Forming of Arrays of Curved Features into a Workpiece
Gill, David Dennis; Keeler, Gordon A.; Serkland, Darwin K.; Mukherjee, Sayan D.
2008-10-14
Methods for manufacturing high-precision arrays of curved features (e.g. lenses) in the surface of a workpiece are described, utilizing orthogonal sets of inter-fitting locating grooves to mate a workpiece to a workpiece holder mounted on the spindle face of a rotating machine tool. The matching inter-fitting groove sets in the workpiece and the chuck allow precise, non-kinematic indexing of the workpiece to locations defined in two orthogonal directions perpendicular to the turning axis of the machine tool. At each location on the workpiece a curved feature can then be machined on-center, creating arrays of curved features on the workpiece. The averaging effect of the corresponding sets of inter-fitting grooves provides precise repeatability in determining the relative locations of the centers of each of the curved features in an array.
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas: speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
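A minimal sketch in the spirit of the discrete-calculus approach (not NASA's code): for equally spaced samples, first differences eliminate C and give B in closed form, after which A and C follow from an ordinary linear least-squares fit.

```python
# Sketch of a non-iterative exponential fit y = A*exp(B*t) + C for equally
# spaced samples: successive differences remove C and yield B directly.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 41)                 # equally spaced samples
dt = t[1] - t[0]
y = 2.5 * np.exp(-0.8 * t) + 1.0 + 0.002 * rng.standard_normal(t.size)

d = np.diff(y)                                # first differences remove C
B = np.log(np.abs(d[1:] / d[:-1])).mean() / dt   # ratio of differences = exp(B*dt)

# With B fixed, y is linear in [exp(B*t), 1]; solve for A and C.
M = np.column_stack([np.exp(B * t), np.ones_like(t)])
(A, C), *_ = np.linalg.lstsq(M, y, rcond=None)
print(f"A={A:.3f}, B={B:.3f}, C={C:.3f}")
```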
Batsoulis, A N; Nacos, M K; Pappas, C S; Tarantilis, P A; Mavromoustakos, T; Polissiou, M G
2004-02-01
Hemicellulose samples were isolated from kenaf (Hibiscus cannabinus L.). Hemicellulosic fractions usually contain a variable percentage of uronic acids. The uronic acid content (expressed as polygalacturonic acid) of the isolated hemicelluloses was determined by diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method. A linear relationship between uronic acid content and the sum of the peak areas at 1745, 1715, and 1600 cm⁻¹ was established with a high correlation coefficient (0.98). The deconvolution analysis using the curve-fitting method allowed the elimination of spectral interferences from other cell wall components. The above method was compared with an established spectrophotometric method and was found equivalent in accuracy and repeatability (t-test, F-test). This method is applicable to the analysis of natural or synthetic mixtures and/or crude substances. The proposed method is simple, rapid, and nondestructive for the samples.
NASA Technical Reports Server (NTRS)
Yim, John T.
2017-01-01
A survey of low energy xenon ion impact sputter yields was conducted to provide a more coherent baseline set of sputter yield data and accompanying fits for electric propulsion integration. Data uncertainties are discussed and different available curve fit formulas are assessed for their general suitability. A Bayesian parameter fitting approach is used with a Markov chain Monte Carlo method to provide estimates for the fitting parameters while characterizing the uncertainties for the resulting yield curves.
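A toy sketch of such a Bayesian fit with a hand-rolled Metropolis sampler; the threshold-power-law yield form Y(E) = q(E - Eth)^mu, the data values and the flat bounded priors are all assumptions for illustration, not the survey's choices.

```python
# Sketch of a Bayesian fit of a threshold sputter-yield curve with a simple
# Metropolis sampler. Yield form, data and priors are assumed, for illustration.
import numpy as np

E = np.array([30., 50., 80., 120., 200., 300.])          # eV, illustrative
Y = np.array([0.01, 0.05, 0.12, 0.22, 0.45, 0.70])       # yields, illustrative
sigma = 0.2 * Y + 0.01                                    # assumed uncertainties

def log_post(p):
    q, Eth, mu = p
    if q <= 0 or mu <= 0 or not (0 < Eth < E.min()):      # flat priors with bounds
        return -np.inf
    model = q * (E - Eth) ** mu
    return -0.5 * np.sum(((Y - model) / sigma) ** 2)      # Gaussian likelihood

rng = np.random.default_rng(1)
p = np.array([0.001, 15.0, 1.2])                          # start point
lp = log_post(p)
samples = []
for _ in range(20000):                                    # Metropolis random walk
    prop = p + rng.normal(0, [2e-4, 1.0, 0.05])
    lq = log_post(prop)
    if np.log(rng.uniform()) < lq - lp:
        p, lp = prop, lq
    samples.append(p.copy())
post = np.array(samples[5000:])                           # discard burn-in
print("posterior medians (q, Eth, mu):", np.median(post, axis=0))
```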
Motulsky, Harvey J; Brown, Ronald E
2006-01-01
Background: Nonlinear regression, like linear regression, assumes that the scatter of data around the ideal curve follows a Gaussian or normal distribution. This assumption leads to the familiar goal of regression: to minimize the sum of the squares of the vertical or Y-value distances between the points and the curve. Outliers can dominate the sum-of-the-squares calculation and lead to misleading results. However, we know of no practical method for routinely identifying outliers when fitting curves with nonlinear regression. Results: We describe a new method for identifying outliers when fitting data with nonlinear regression. We first fit the data using a robust form of nonlinear regression, based on the assumption that scatter follows a Lorentzian distribution. We devised a new adaptive method that gradually becomes more robust as the method proceeds. To define outliers, we adapted the false discovery rate approach to handling multiple comparisons. We then remove the outliers, and analyze the data using ordinary least-squares regression. Because the method combines robust regression and outlier removal, we call it the ROUT method. When analyzing simulated data, where all scatter is Gaussian, our method (falsely) detects one or more outliers in only about 1–3% of experiments. When analyzing data contaminated with one or several outliers, the ROUT method performs well at outlier identification, with an average false discovery rate of less than 1%. Conclusion: Our method, which combines a new method of robust nonlinear regression with a new method of outlier identification, identifies outliers from nonlinear curve fits with reasonable power and few false positives. PMID:16526949
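A condensed sketch of the ROUT flow: a robust fit with a heavy-tailed (Cauchy, i.e. Lorentzian-like) loss, outlier flagging from the robust residuals, then ordinary least squares on the cleaned data. The paper's FDR-based outlier test is replaced here by a simple 3-MAD cutoff, so this only illustrates the structure, and the saturation model is invented.

```python
# Sketch of a ROUT-style pipeline: robust fit -> flag outliers -> refit by OLS.
import numpy as np
from scipy.optimize import least_squares, curve_fit

def model(p, x):
    top, ec50 = p
    return top * x / (ec50 + x)                     # simple saturation curve

rng = np.random.default_rng(0)
x = np.linspace(0.1, 10, 40)
y = model([5.0, 2.0], x) + 0.1 * rng.standard_normal(x.size)
y[7] += 3.0                                         # plant one outlier

# Step 1: robust fit; 'cauchy' loss corresponds to Lorentzian-like scatter.
rob = least_squares(lambda p: y - model(p, x), x0=[4.0, 1.0],
                    loss='cauchy', f_scale=0.1)

# Step 2: flag outliers from the robust residuals (simple 3*MAD rule here,
# standing in for the paper's false-discovery-rate test).
r = y - model(rob.x, x)
mad = 1.4826 * np.median(np.abs(r - np.median(r)))
keep = np.abs(r) < 3 * mad

# Step 3: ordinary least squares on the cleaned data.
popt, _ = curve_fit(lambda x, top, ec50: model([top, ec50], x),
                    x[keep], y[keep], p0=rob.x)
print("kept", keep.sum(), "of", x.size, "points; OLS fit:", popt)
```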
On the convexity of ROC curves estimated from radiological test results
Pesce, Lorenzo L.; Metz, Charles E.; Berbaum, Kevin S.
2010-01-01
Rationale and Objectives: Although an ideal observer's receiver operating characteristic (ROC) curve must be convex — i.e., its slope must decrease monotonically — published fits to empirical data often display "hooks." Such fits sometimes are accepted on the basis of an argument that experiments are done with real, rather than ideal, observers. However, the fact that ideal observers must produce convex curves does not imply that convex curves describe only ideal observers. This paper aims to identify the practical implications of non-convex ROC curves and the conditions that can lead to empirical and/or fitted ROC curves that are not convex. Materials and Methods: This paper views non-convex ROC curves from historical, theoretical and statistical perspectives, which we describe briefly. We then consider population ROC curves with various shapes and analyze the types of medical decisions that they imply. Finally, we describe how sampling variability and curve-fitting algorithms can produce ROC curve estimates that include hooks. Results: We show that hooks in population ROC curves imply the use of an irrational decision strategy, even when the curve does not cross the chance line, and therefore usually are untenable in medical settings. Moreover, we sketch a simple approach to improve any non-convex ROC curve by adding statistical variation to the decision process. Finally, we sketch how to test whether hooks present in ROC data are likely to have been caused by chance alone and how some hooked ROCs found in the literature can be easily explained as fitting artifacts or modeling issues. Conclusion: In general, ROC curve fits that show hooks should be looked upon with suspicion unless other arguments justify their presence. PMID:20599155
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of the distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived, which is solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
ERIC Educational Resources Information Center
St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane
2009-01-01
To date, there have been no studies comparing parametric and nonparametric Item Characteristic Curve (ICC) estimation methods on the effectiveness of Person-Fit Statistics (PFS). The primary aim of this study was to determine if the use of ICCs estimated by nonparametric methods would increase the accuracy of item response theory-based PFS for…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-01-01
A sophisticated non-linear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the chi-squared matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s.
Zhai, Xuetong; Chakraborty, Dev P
2017-06-01
The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets, possibly degenerate ones, with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in the CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit-variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit-variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variances bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition (contributing two peaks), and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates, the covariance matrix of the parameters, and other statistics. A limited simulation validation of the method was performed. CORCBM and CORROC2 were applied to two datasets containing nine readers each contributing paired interpretations. CORCBM successfully fitted the data for all readers, whereas CORROC2 failed to fit a degenerate dataset. All fits were visually reasonable. All CORCBM fits were proper, whereas all CORROC2 fits were improper. CORCBM and CORROC2 were in agreement (a) in declaring only one of the nine readers as having significantly different performances in the two modalities; (b) in estimating higher correlations for diseased cases than for nondiseased ones; and (c) in finding that the intermodality correlation estimates for nondiseased cases were consistent between the two methods. All CORCBM fits yielded higher area under the curve (AUC) than the CORROC2 fits, consistent with the fact that a proper ROC model like CORCBM is based on a likelihood-ratio-equivalent decision variable and consequently yields higher performance than the binormal-model-based CORROC2. The method gave satisfactory fits to four simulated datasets. CORCBM is a robust method for fitting paired ROC datasets, always yielding proper ROC curves, and able to fit degenerate datasets.
NASA Astrophysics Data System (ADS)
Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke
2008-08-01
A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into major-interference region and minor-interference region. Different approximating functions are then constructed for two kinds of regions respectively. For the major interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by curve-fitting method. For the minor interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.
A curve fitting method for solving the flutter equation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Cooper, J. L.
1972-01-01
A curve fitting approach was developed to solve the flutter equation for the critical flutter velocity. The psi versus nu curves are approximated by cubic and quadratic equations. The curve fitting technique utilized the first and second derivatives of psi with respect to nu. The method was tested for two structures, one structure being six times the total mass of the other structure. The algorithm never showed any tendency to diverge from the solution. The average time for the computation of a flutter velocity was 3.91 seconds on an IBM Model 50 computer for an accuracy of five per cent. For values of nu close to the critical root of the flutter equation the algorithm converged on the first attempt. The maximum number of iterations for convergence to the critical flutter velocity was five with an assumed value of nu relatively distant from the actual crossover.
Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian
2016-01-01
Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model e.g. a Gaussian or other bell-shaped curves to the measured responses to a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to determine reliably the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well-fitted, that the best model generally varies between neurons and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without the need of fitting any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features such as preferred stimulus direction or attentional gain modulations which are in agreement with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus-specific. Based on these proofs-of-concept, we conclude that our data-driven methods can reliably extract relevant tuning information from neuronal recordings, including cells whose seemingly haphazard response curves defy conventional fitting approaches.
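A tiny sketch of a model-free readout in the spirit described above, assuming a vector-average estimate of preferred direction and a max-minus-min amplitude measure; the response values are synthetic and the paper's actual feature extractors are richer.

```python
# Sketch of a model-free readout of tuning-curve features: preferred
# direction from the circular (vector-average) mean and a nonparametric
# amplitude measure, with no Gaussian fit. Responses are synthetic.
import numpy as np

theta = np.deg2rad(np.arange(0, 360, 30))           # tested motion directions
resp = np.array([12, 18, 35, 52, 40, 22, 14, 9, 7, 6, 8, 10], float)

# Preferred direction: angle of the response-weighted vector sum.
pref = np.rad2deg(np.angle(np.sum(resp * np.exp(1j * theta)))) % 360

# Amplitude and baseline straight from the data (no model assumed).
amplitude = resp.max() - resp.min()
baseline = resp.min()
print(f"preferred direction ~ {pref:.0f} deg, amplitude {amplitude:.0f} spikes/s")
```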
NASA Astrophysics Data System (ADS)
Lu, Jun; Xiao, Jun; Gao, Dong Jun; Zong, Shu Yu; Li, Zhu
2018-03-01
In the production of Association of American Railroads (AAR) locomotive wheel-sets, the press-fit curve is the most important basis for assessing the reliability of wheel-set assembly. In the past, most production enterprises relied mainly on manual inspection methods to judge assembly quality, and cases of misjudgment occurred. For this reason, research on the standard was carried out, and automatic judgment of the press-fit curve was analysed and designed, so as to provide guidance for locomotive wheel-set production based on the AAR standard.
Data reduction using cubic rational B-splines
NASA Technical Reports Server (NTRS)
Chou, Jin J.; Piegl, Les A.
1992-01-01
A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves, including intersection or silhouette lines. The algorithm is based on the convex hull and the variation-diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set, and if that is impossible it subdivides the data set and reconsiders the subset. After accepting a subset, the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a cubic Bezier segment. The algorithm applies this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy, even in cases with large tolerances.
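A sketch of the innermost step under stated assumptions: a least-squares cubic Bezier fit to a run of points, with the end control points pinned to the data endpoints and chord-length parameterization (the parameterization choice is an assumption). The subdivision loop of the algorithm would call this on ever-shorter runs until the tolerance test passes.

```python
# Sketch of fitting one cubic Bezier segment to a run of points.
import numpy as np

def fit_bezier(P):
    """Least-squares cubic Bezier through P[0]..P[-1]; returns 4 control points."""
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(P, axis=0), axis=1))]
    t = d / d[-1]                                   # chord-length parameters
    B = np.column_stack([(1-t)**3, 3*t*(1-t)**2, 3*t**2*(1-t), t**3])
    # End control points fixed at the data endpoints; solve for the middle two.
    rhs = P - np.outer(B[:, 0], P[0]) - np.outer(B[:, 3], P[-1])
    mid, *_ = np.linalg.lstsq(B[:, 1:3], rhs, rcond=None)
    return np.vstack([P[0], mid, P[-1]]), B

pts = np.column_stack([np.linspace(0, 1, 20),
                       np.sin(np.linspace(0, np.pi, 20))])
ctrl, B = fit_bezier(pts)
err = np.linalg.norm(B @ ctrl - pts, axis=1).max()  # max deviation, for the tolerance test
print("control points:\n", ctrl, "\nmax error:", err)
```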
Mattucci, Stephen F E; Cronin, Duane S
2015-01-01
Experimental testing on cervical spine ligaments provides important data for advanced numerical modeling and injury prediction; however, accurate characterization of individual ligament response and determination of average mechanical properties for specific ligaments has not been adequately addressed in the literature. Existing methods are limited by a number of arbitrary choices made during the curve fits that often misrepresent the characteristic shape response of the ligaments, which is important for incorporation into numerical models to produce a biofidelic response. A method was developed to represent the mechanical properties of individual ligaments using a piece-wise curve fit with first-derivative continuity between adjacent regions. The method was applied to published data for cervical spine ligaments and preserved the shape response (toe, linear, and traumatic regions) up to failure, for strain rates of 0.5 s⁻¹, 20 s⁻¹, and 150-250 s⁻¹, to determine the average force-displacement curves. Individual ligament coefficients of determination were 0.989 to 1.000, demonstrating excellent fit. This study produced a novel method in which a set of experimental ligament material property data exhibiting scatter was fit using a characteristic curve approach with a toe, linear, and traumatic region, as often observed in ligaments and tendons, and could be applied to other biological material data with a similar characteristic shape. The resultant average cervical spine ligament curves provide an accurate representation of the raw test data and the expected material property effects corresponding to varying deformation rates.
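A small sketch of a piece-wise fit with first-derivative continuity, using an assumed quadratic toe blending into a linear region (the traumatic region is omitted); data and parameter values are synthetic, so this only illustrates the continuity construction.

```python
# Sketch of a piece-wise toe-plus-linear force-displacement fit with
# first-derivative continuity at the toe-to-linear transition d0:
# a quadratic toe f = k*d^2/(2*d0) blending into the line f = k*(d - d0/2).
import numpy as np
from scipy.optimize import curve_fit

def toe_linear(d, k, d0):
    toe = k * d**2 / (2 * d0)                  # slope k*d/d0: 0 at origin, k at d0
    lin = k * (d - d0 / 2)                     # same value and slope at d = d0
    return np.where(d <= d0, toe, lin)

rng = np.random.default_rng(0)
d = np.linspace(0, 6.0, 60)                    # displacement, mm
f = toe_linear(d, 30.0, 2.0) + 2.0 * rng.standard_normal(d.size)   # force, N

(k, d0), _ = curve_fit(toe_linear, d, f, p0=[20.0, 1.5])
print(f"stiffness k = {k:.1f} N/mm, toe length d0 = {d0:.2f} mm")
```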
Videodensitometric Methods for Cardiac Output Measurements
NASA Astrophysics Data System (ADS)
Mischi, Massimo; Kalker, Ton; Korsten, Erik
2003-12-01
Cardiac output is often measured by indicator dilution techniques, usually based on dye or cold saline injections. The development of more stable ultrasound contrast agents (UCA) is leading to new noninvasive indicator dilution methods. However, several problems concerning the interpretation of dilution curves as detected by ultrasound transducers have arisen. This paper presents a method for blood flow measurements based on UCA dilution. Dilution curves are determined by real-time densitometric analysis of the video output of an ultrasound scanner and are automatically fitted with the Local Density Random Walk model. A new fitting algorithm based on multiple linear regression is developed. Calibration, that is, the relation between video density and UCA concentration, is modelled by in vitro experimentation. The flow measurement system is validated by in vitro perfusion of SonoVue contrast agent. The results show an accurate dilution curve fit and flow estimation, with coefficients of determination larger than 0.95 and 0.99, respectively.
Dung, Van Than; Tjahjowidodo, Tegoeh
2017-01-01
B-spline functions are widely used in many industrial applications such as computer graphic representations, computer-aided design, computer-aided manufacturing, computer numerical control, etc. Recently, demands have arisen, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points in the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data are split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving an ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and by deterministic parametric functions. This paper also discusses benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, ranging from smooth curves to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
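A compact sketch of the two-step flavor of this idea: recursive bisection wherever a local cubic fit exceeds tolerance yields coarse knots, which are then handed to a B-spline least-squares solver. The paper's joint optimization of knot locations and continuity levels is not reproduced here.

```python
# Sketch: (1) recursively bisect the data wherever a local cubic
# least-squares fit exceeds a tolerance, taking split points as coarse
# knots; (2) hand those interior knots to a B-spline least-squares solver.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def coarse_knots(x, y, lo, hi, tol, knots):
    seg = slice(lo, hi + 1)
    resid = y[seg] - np.polyval(np.polyfit(x[seg], y[seg], 3), x[seg])
    if np.max(np.abs(resid)) > tol and hi - lo > 8:
        mid = (lo + hi) // 2
        coarse_knots(x, y, lo, mid, tol, knots)
        knots.append(x[mid])                       # split point becomes a knot
        coarse_knots(x, y, mid, hi, tol, knots)

x = np.linspace(0, 10, 400)
y = np.where(x < 4, np.sin(2 * x), 0.3 * x - 1.5)  # curve with a kink
knots = []
coarse_knots(x, y, 0, x.size - 1, 0.02, knots)

spl = LSQUnivariateSpline(x, y, t=sorted(knots), k=3)
print(len(knots), "knots; max fit error:", np.max(np.abs(spl(x) - y)))
```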
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Tao; Li, Cheng; Huang, Can
Here, in order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve the global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.
Ding, Tao; Li, Cheng; Huang, Can; ...
2017-01-09
Here, in order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second,more » the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve the global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.« less
Phytoplankton productivity in relation to light intensity: A simple equation
Peterson, D.H.; Perry, M.J.; Bencala, K.E.; Talbot, M.C.
1987-01-01
A simple exponential equation is used to describe photosynthetic rate as a function of light intensity for a variety of unicellular algae and higher plants, where photosynthesis is proportional to (1 - e^(-αI)). The parameter α (= Ik^(-1)) is derived by a simultaneous curve-fitting method, where I is incident quantum-flux density. The exponential equation is tested against a wide range of data and is found to adequately describe P vs. I curves. The errors associated with photosynthetic parameters are calculated. A simplified statistical model (Poisson) of photon capture provides a biophysical basis for the equation and for its ability to fit a range of light intensities. The exponential equation provides a non-subjective, simultaneous curve-fitting estimate for photosynthetic efficiency (a) which is less ambiguous than subjective methods, since subjective methods assume that a linear region of the P vs. I curve is readily identifiable. Photosynthetic parameters α and a are used widely in aquatic studies to define photosynthesis at low quantum flux. These parameters are particularly important in estuarine environments where high suspended-material concentrations and high diffuse-light extinction coefficients are commonly encountered.
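A minimal sketch of this fit with SciPy; only the functional form P = Pmax(1 - e^(-I/Ik)) is taken from the abstract, and the data points are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def p_vs_i(I, Pmax, Ik):
    # P = Pmax * (1 - exp(-I/Ik)); the initial slope Pmax/Ik is the efficiency
    return Pmax * (1.0 - np.exp(-I / Ik))

I = np.array([0, 25, 50, 100, 200, 400, 800], float)   # quantum flux (illustrative)
P = np.array([0.0, 1.9, 3.4, 5.6, 7.4, 8.6, 9.0])      # photosynthetic rate

(Pmax, Ik), _ = curve_fit(p_vs_i, I, P, p0=(P.max(), I.mean()))
alpha = 1.0 / Ik          # the abstract's alpha = Ik**-1
efficiency = Pmax / Ik    # non-subjective estimate of the initial slope
```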
NASA Astrophysics Data System (ADS)
Morlot, Thomas; Perret, Christian; Favre, Anne-Catherine; Jalbert, Jonathan
2014-09-01
A rating curve is used to indirectly estimate the discharge in rivers based on water level measurements. The discharge values obtained from a rating curve include uncertainties related to the direct stage-discharge measurements (gaugings) used to build the curves, the quality of fit of the curve to these measurements and the constant changes in the river bed morphology. Moreover, the uncertainty of discharges estimated from a rating curve increases with the “age” of the rating curve. The level of uncertainty at a given point in time is therefore particularly difficult to assess. A “dynamic” method has been developed to compute rating curves while calculating associated uncertainties, thus making it possible to regenerate streamflow data with uncertainty estimates. The method is based on historical gaugings at hydrometric stations. A rating curve is computed for each gauging and a model of the uncertainty is fitted for each of them. The model of uncertainty takes into account the uncertainties in the measurement of the water level, the quality of fit of the curve, the uncertainty of gaugings and the increase of the uncertainty of discharge estimates with the age of the rating curve computed with a variographic analysis (Jalbert et al., 2011). The presented dynamic method can answer important questions in the field of hydrometry such as “How many gaugings a year are required to produce streamflow data with an average uncertainty of X%?” and “When and in what range of water flow rates should these gaugings be carried out?”. The Rocherousse hydrometric station (France, Haute-Durance watershed, 946 km²) is used as an example throughout the paper. Other stations are used to illustrate certain points.
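For illustration only, a sketch of fitting a single power-law rating curve Q = a(h - h0)^b to a handful of invented gaugings; the paper's variographic ageing of the uncertainty goes well beyond this and is not attempted here:

```python
import numpy as np
from scipy.optimize import curve_fit

def rating_curve(h, a, h0, b):
    """Power-law stage-discharge relation Q = a * (h - h0)**b (illustrative form)."""
    return a * np.clip(h - h0, 1e-9, None) ** b

stage = np.array([0.42, 0.55, 0.71, 0.90, 1.15, 1.48])   # gauged water levels (m)
flow = np.array([1.1, 2.0, 3.6, 6.1, 10.2, 17.5])        # gauged discharges (m3/s)

(a, h0, b), cov = curve_fit(rating_curve, stage, flow, p0=(5.0, 0.2, 1.7))
sigma = np.sqrt(np.diag(cov))   # crude per-parameter uncertainty; the ageing
                                # term from the variographic analysis is omitted
```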
ERIC Educational Resources Information Center
Hester, Yvette
Least squares methods are sophisticated mathematical curve fitting procedures used in all classical parametric methods. The linear least squares approximation is most often associated with finding the "line of best fit" or the regression line. Since all statistical analyses are correlational and all classical parametric methods are least…
Probability Density Functions of Observed Rainfall in Montana
NASA Technical Reports Server (NTRS)
Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.
1995-01-01
The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis on large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis easily allow development of radar reflectivity factor (and, by extension, rain rate) distributions. Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, one PDF would exist for all cases, or many PDFs would exist that have the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases will validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89 percent of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.
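Pearson Type 1 is a (generalized) beta distribution, so a method-of-moments fit can be sketched by rescaling the data to [0, 1] and matching the first two moments; the rain rates below are synthetic stand-ins, not radar data:

```python
import numpy as np
from scipy import stats

rates = np.random.gamma(2.0, 3.0, 5000)      # stand-in for radar-derived rain rates
lo, hi = rates.min(), rates.max()
z = (rates - lo) / (hi - lo)                 # Pearson Type 1 = beta on [lo, hi]

m, v = z.mean(), z.var()
common = m * (1 - m) / v - 1                 # method-of-moments beta parameters
a, b = m * common, (1 - m) * common

ks = stats.kstest(z, "beta", args=(a, b))    # shape may fit while the quantitative
print(a, b, ks.statistic)                    # goodness of fit stays poor, as found
```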
NASA Technical Reports Server (NTRS)
Rodriguez, Pedro I.
1986-01-01
A computer implementation of Prony's method of curve fitting by exponential functions is presented. The method, although more than one hundred years old, has not been utilized to its fullest capabilities because of the restriction that the time range must be given in equal increments in order to obtain the best curve fit for a given set of data. The procedure used in this paper utilizes the 3-dimensional capabilities of the Interactive Graphics Design System (I.G.D.S.) in order to obtain the equal time increments. The resultant information is then input into a computer program that solves directly for the exponential constants yielding the best curve fit. Once the exponential constants are known, a simple least-squares solution can be applied to obtain the final form of the equation.
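A compact sketch of the classical Prony algorithm for equally spaced samples (the kind of solve such a program performs): linear prediction gives the characteristic polynomial, its roots give the exponents, and a final linear least-squares step gives the amplitudes. All names and data are illustrative.

```python
import numpy as np

def prony(y, dt, p):
    """Classical Prony fit of y[n] = sum_i A_i * exp(s_i * n * dt), n = 0..N-1.
    Requires equally spaced samples, the restriction noted in the abstract."""
    N = len(y)
    # Linear prediction y[n] = -sum_{k=1..p} c_k y[n-k]: least squares for c
    H = np.column_stack([y[p - k:N - k] for k in range(1, p + 1)])
    c, *_ = np.linalg.lstsq(H, -y[p:N], rcond=None)
    z = np.roots(np.r_[1.0, c])                 # roots of the prediction polynomial
    s = np.log(z.astype(complex)) / dt          # continuous-time exponents
    V = np.vander(z, N, increasing=True).T      # design matrix of z_i**n
    A, *_ = np.linalg.lstsq(V, y.astype(complex), rcond=None)
    return A, s

t = np.arange(0, 2, 0.01)
y = 3.0 * np.exp(-1.5 * t) + 1.0 * np.exp(-4.0 * t)
A, s = prony(y, 0.01, 2)                        # recovers s close to [-1.5, -4.0]
```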
Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis
2014-01-01
The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way.
Analyser-based phase contrast image reconstruction using geometrical optics.
Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A
2007-07-21
Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 μm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser.
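A sketch of fitting a symmetric Pearson VII profile to a rocking curve with SciPy; the parameterization below (with w as the half width at half maximum) is one common convention, and all data are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def pearson_vii(x, A, x0, w, m):
    """Symmetric Pearson VII profile; w is the half width at half maximum."""
    return A / (1.0 + ((x - x0) / w) ** 2 * (2.0 ** (1.0 / m) - 1.0)) ** m

theta = np.linspace(-30, 30, 301)           # rocking angle (arcsec, illustrative)
rc = pearson_vii(theta, 1.0, 0.5, 6.0, 1.8) + 0.01 * np.random.randn(theta.size)

p, _ = curve_fit(pearson_vii, theta, rc, p0=(1.0, 0.0, 5.0, 2.0))
# m -> 1 gives a Lorentzian, m -> infinity a Gaussian; the fitted curve's slope
# at the working point is what a geometrical-optics phase retrieval would use
```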
NASA Technical Reports Server (NTRS)
Chamberlain, D. M.; Elliot, J. L.
1997-01-01
We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10^-4 of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10^-4, with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.
Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S
2018-01-01
The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be used with any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation.
Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting
NASA Technical Reports Server (NTRS)
Badavi, F. F.; Everhart, Joel L.
1987-01-01
This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine that is based on P. R. Bevington's description of the quadratic expansion of the Chi-Square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of Chi-Square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived, then solved by matrix algebra. A brief description of the code is presented along with a limited number of changes that are required to customize the program for a particular task. To evaluate the performance of the method and the goodness of nonlinear curve fitting, two typical engineering problems are examined and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.
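The quadratic expansion of Chi-Square, truncated after the first term, reduces each iteration to a set of simultaneous linear equations in the parameter updates; a hedged numerical sketch of that step (not the report's FORTRAN implementation) is:

```python
import numpy as np

def chisq_step(f, p, x, y, sigma, eps=1e-6):
    """One parameter update from the quadratic expansion of Chi-Square:
    solve (J^T J) dp = J^T r for weighted residuals r and forward-difference J."""
    r = (y - f(x, *p)) / sigma
    J = np.empty((len(x), len(p)))
    for j in range(len(p)):
        q = np.array(p, float)
        q[j] += eps
        J[:, j] = (f(x, *q) - f(x, *p)) / (eps * sigma)
    dp = np.linalg.solve(J.T @ J, J.T @ r)        # the n simultaneous equations
    return np.asarray(p) + dp

def model(x, a, b):
    return a * np.exp(-b * x)

x = np.linspace(0, 5, 60)
y = model(x, 2.0, 0.7) + 0.02 * np.random.randn(x.size)
p = np.array([1.0, 1.0])
for _ in range(20):                               # iterate until converged
    p = chisq_step(model, p, x, y, np.full(x.size, 0.02))
```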
Statistically generated weighted curve fit of residual functions for modal analysis of structures
NASA Technical Reports Server (NTRS)
Bookout, P. S.
1995-01-01
A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to the present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second order polynomial curve fit is required to obtain the residual flexibility term. A weighting function of the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
Waveform fitting and geometry analysis for full-waveform lidar feature extraction
NASA Astrophysics Data System (ADS)
Tsai, Fuan; Lai, Jhe-Syuan; Cheng, Yi-Hsiu
2016-10-01
This paper presents a systematic approach that integrates spline curve fitting and geometry analysis to extract full-waveform LiDAR features for land-cover classification. The cubic smoothing spline algorithm is used to fit the waveform curve of the received LiDAR signals. After that, the local peak locations of the waveform curve are detected using a second derivative method. According to the detected local peak locations, commonly used full-waveform features such as full width at half maximum (FWHM) and amplitude can then be obtained. In addition, the number of peaks, time difference between the first and last peaks, and the average amplitude are also considered as features of LiDAR waveforms with multiple returns. Based on the waveform geometry, dynamic time-warping (DTW) is applied to measure the waveform similarity. The sum of the absolute amplitude differences that remain after time-warping can be used as a similarity feature in a classification procedure. An airborne full-waveform LiDAR data set was used to test the performance of the developed feature extraction method for land-cover classification. Experimental results indicate that the developed spline curve-fitting algorithm and geometry analysis can extract helpful full-waveform LiDAR features to produce better land-cover classification than conventional LiDAR data and feature extraction methods. In particular, the multiple-return features and the dynamic time-warping index can improve the classification results significantly.
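A small sketch of the waveform-feature part of such a pipeline, assuming SciPy's smoothing spline and peak utilities as stand-ins for the paper's cubic smoothing spline and second-derivative peak detection; waveform and thresholds are invented:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.signal import find_peaks, peak_widths

t = np.linspace(0, 60, 600)                           # ns, synthetic two-return waveform
w = (np.exp(-((t - 20) / 3) ** 2) + 0.6 * np.exp(-((t - 35) / 4) ** 2)
     + 0.02 * np.random.randn(t.size))

spl = UnivariateSpline(t, w, s=0.5)                   # smoothing-spline curve fit
ws = spl(t)
peaks, _ = find_peaks(ws, height=0.1)
fwhm = peak_widths(ws, peaks, rel_height=0.5)[0] * (t[1] - t[0])   # FWHM per echo

n_returns = len(peaks)                                # multiple-return features
dt_first_last = t[peaks[-1]] - t[peaks[0]]
amplitude_mean = ws[peaks].mean()
```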
Durtschi, Jacob D; Stevenson, Jeffery; Hymas, Weston; Voelkerding, Karl V
2007-02-01
Real-time PCR data analysis for quantification has been the subject of many studies aimed at the identification of new and improved quantification methods. Several analysis methods have been proposed as superior alternatives to the common variations of the threshold crossing method. Notably, sigmoidal and exponential curve fit methods have been proposed. However, these studies have primarily analyzed real-time PCR with intercalating dyes such as SYBR Green. Clinical real-time PCR assays, in contrast, often employ fluorescent probes whose real-time amplification fluorescence curves differ from those of intercalating dyes. In the current study, we compared four analysis methods related to recent literature: two versions of the threshold crossing method, a second derivative maximum method, and a sigmoidal curve fit method. These methods were applied to a clinically relevant real-time human herpes virus type 6 (HHV6) PCR assay that used a minor groove binding (MGB) Eclipse hybridization probe as well as an Epstein-Barr virus (EBV) PCR assay that used an MGB Pleiades hybridization probe. We found that the crossing threshold method yielded more precise results when analyzing the HHV6 assay, which was characterized by lower signal/noise and less developed amplification curve plateaus. In contrast, the EBV assay, characterized by greater signal/noise and amplification curves with plateau regions similar to those observed with intercalating dyes, gave results with statistically similar precision by all four analysis methods.
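To make the compared quantities concrete, here is a hedged sketch of a four-parameter sigmoidal curve fit alongside a simple interpolated threshold crossing; the fluorescence data are synthetic, not from either assay:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(c, F0, Fmax, c_half, k):
    """Four-parameter sigmoid for an amplification fluorescence curve."""
    return F0 + Fmax / (1.0 + np.exp(-(c - c_half) / k))

cycles = np.arange(1, 41, dtype=float)
fluor = sigmoid(cycles, 0.05, 1.0, 24.0, 1.6) + 0.01 * np.random.randn(40)

p, _ = curve_fit(sigmoid, cycles, fluor, p0=(0.0, 1.0, 20.0, 2.0))

# Threshold-crossing alternative: first cycle where fluorescence exceeds a
# fixed threshold above baseline, interpolated linearly between cycles.
thr = 0.05 + 0.1
i = int(np.argmax(fluor > thr))
ct = cycles[i - 1] + (thr - fluor[i - 1]) / (fluor[i] - fluor[i - 1])
```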
Calibration and accuracy analysis of a focused plenoptic camera
NASA Astrophysics Data System (ADS)
Zeller, N.; Quint, F.; Stilla, U.
2014-08-01
In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve-fitting approach based on a Taylor-series approximation. Both model-based methods show significant advantages compared to the curve-fitting method: they need fewer reference points for calibration and, moreover, supply a function that is valid beyond the range of calibration. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and is compared to the analytical evaluation.
Method and apparatus for air-coupled transducer
NASA Technical Reports Server (NTRS)
Song, Junho (Inventor); Chimenti, Dale E. (Inventor)
2010-01-01
An air-coupled transducer includes an ultrasonic transducer body having a radiation end with a backing fixture at the radiation end. There is a flexible backplate conformingly fit to the backing fixture and a thin membrane (preferably a metallized polymer) conformingly fit to the flexible backplate. In one embodiment, the backing fixture is spherically curved and the flexible backplate is spherically curved.
Fitting integrated enzyme rate equations to progress curves with the use of a weighting matrix.
Franco, R; Aran, J M; Canela, E I
1991-01-01
A method is presented for fitting the pairs of values product formed-time taken from progress curves to the integrated rate equation. The procedure is applied to the estimation of the kinetic parameters of the adenosine deaminase system. Simulation studies demonstrate the capabilities of this strategy. A copy of the FORTRAN77 program used can be obtained from the authors by request. PMID:2006914
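For a single Michaelis-Menten reaction the integrated rate equation has a closed form through the Lambert W function, which turns the progress-curve fit into routine nonlinear least squares; a sketch (unweighted, so without the paper's weighting matrix, and with invented values):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import lambertw

def progress_curve(t, Vmax, Km, S0):
    """Closed-form product-vs-time solution of the integrated Michaelis-Menten
    equation: S(t) = Km * W((S0/Km) * exp((S0 - Vmax*t)/Km)), P = S0 - S."""
    S = Km * lambertw((S0 / Km) * np.exp((S0 - Vmax * t) / Km)).real
    return S0 - S

t = np.linspace(0, 300, 60)                          # s
P = progress_curve(t, 0.8, 45.0, 100.0) + 0.5 * np.random.randn(t.size)

# S0 known from the assay; fit Vmax and Km only
fit = lambda t, Vmax, Km: progress_curve(t, Vmax, Km, 100.0)
(Vmax, Km), _ = curve_fit(fit, t, P, p0=(1.0, 30.0))
```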
Tromberg, Bruce J.; Tsay, Tsong T.; Berns, Michael W.; Svaasand, Lara O.; Haskell, Richard C.
1995-01-01
Optical measurements of turbid media, that is, media characterized by multiple light scattering, are provided through an apparatus and method for exposing a sample to a modulated laser beam. The light beam is modulated at a fundamental frequency and at a plurality of integer harmonics thereof. Modulated light is returned from the sample and preferentially detected at cross frequencies slightly higher than the fundamental frequency and at integer harmonics of the same. The received radiance at the beat or cross frequencies is compared against a reference signal to provide a measure of the phase lag of the radiance and modulation ratio relative to a reference beam. The phase and modulation amplitude are then provided as a frequency spectrum by an array processor to which a computer applies a complete curve fit in the case of highly scattering samples or a linear curve fit below a predetermined frequency in the case of highly absorptive samples. The curve fit in any case is determined by the absorption and scattering coefficients together with a concentration of the active substance in the sample. Therefore, the curve fitting to the frequency spectrum can be used both for qualitative and quantitative analysis of substances in the sample even though the sample is highly turbid.
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1987-01-01
New, improved curve fits for the thermodynamic properties of equilibrium air have been developed. The curve fits are for pressure, speed of sound, temperature, entropy, enthalpy, density, and internal energy. These curve fits can be readily incorporated into new or existing computational fluid dynamics codes if real gas effects are desired. The curve fits are constructed from Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits. These improvements are due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10^-7 to 10^3 amagats.
Kholeif, S A
2001-06-01
A new method that belongs to the differential category for determining the end points from potentiometric titration curves is presented. It uses a preprocess to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis is covered. The new method is generally applied to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves compatible with the equivalence point category of methods, such as Gran or Fortuin, are also compared with the new method.
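The inverse parabolic interpolation step has a simple analytic vertex. The sketch below uses a plain three-point parabola around the first-derivative maximum rather than the paper's four-point non-linear preprocess; the titration data are synthetic:

```python
import numpy as np

# v: titrant volumes (equally spaced), e: measured potential (mV)
v = np.arange(0.0, 12.0, 0.1)
e = 150.0 + 120.0 * np.tanh((v - 7.35) / 0.25)     # synthetic sigmoid titration curve

d1 = np.gradient(e, v)                             # first-derivative curve
i = int(np.argmax(d1))                             # sample nearest the inflection

# Parabola through the three points around the maximum; its vertex places the
# end point between samples analytically (no iteration needed).
h = v[1] - v[0]
y0, y1, y2 = d1[i - 1], d1[i], d1[i + 1]
end_point = v[i] + 0.5 * h * (y0 - y2) / (y0 - 2.0 * y1 + y2)
```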
Improvements in Spectrum's fit to program data tool.
Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John
2017-04-01
The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived by either fitting to seroprevalence surveillance and survey data or generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections or people living with HIV and AIDS related deaths. This article describes development and application of the fit to program data (FPD) tool in Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV or AIDS-related deaths. Inputs can be adjusted for proportions undiagnosed or misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best fitting curve. Asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.
Method for Making Measurements of the Post-Combustion Residence Time in a Gas Turbine Engine
NASA Technical Reports Server (NTRS)
Miles, Jeffrey H (Inventor)
2015-01-01
A system and method of measuring a residence time in a gas-turbine engine is provided, whereby the method includes placing pressure sensors at a combustor entrance and at a turbine exit of the gas-turbine engine and measuring a combustor pressure at the combustor entrance and a turbine exit pressure at the turbine exit. The method further includes computing cross-spectrum functions between a combustor pressure sensor signal from the measured combustor pressure and a turbine exit pressure sensor signal from the measured turbine exit pressure, applying a linear curve fit to the cross-spectrum functions, and computing a post-combustion residence time from the linear curve fit.
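A hedged sketch of the signal-processing core: a pure time delay appears as a linear phase in the cross-spectrum, so the delay can be read off the slope of a linear curve fit. The sampling rate, frequency band and delay below are invented, and the sign depends on the cross-spectrum convention:

```python
import numpy as np
from scipy.signal import csd

fs = 4096.0                                  # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
delay = 0.004                                # s, residence time to recover
x = np.random.randn(t.size)                  # combustor-entrance pressure (synthetic)
y = np.roll(x, int(delay * fs)) + 0.1 * np.random.randn(t.size)   # turbine exit

f, Pxy = csd(x, y, fs=fs, nperseg=2048)
band = f < 200.0                             # low-frequency band with coherent phase
phase = np.unwrap(np.angle(Pxy[band]))

slope = np.polyfit(f[band], phase, 1)[0]     # linear curve fit to the phase
residence_time = -slope / (2.0 * np.pi)      # phase = -2*pi*f*tau for y delayed
```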
Using quasars as standard clocks for measuring cosmological redshift.
Dai, De-Chang; Starkman, Glenn D; Stojkovic, Branislav; Stojkovic, Dejan; Weltman, Amanda
2012-06-08
We report hitherto unnoticed patterns in quasar light curves. We characterize segments of the quasar's light curves with the slopes of the straight lines fit through them. These slopes appear to be directly related to the quasars' redshifts. Alternatively, using only global shifts in time and flux, we are able to find significant overlaps between the light curves of different pairs of quasars by fitting the ratio of their redshifts. We are then able to reliably determine the redshift of one quasar from another. This implies that one can use quasars as standard clocks, as we explicitly demonstrate by constructing two independent methods of finding the redshift of a quasar from its light curve.
NASA Technical Reports Server (NTRS)
Suttles, J. T.; Sullivan, E. M.; Margolis, S. B.
1974-01-01
Curve-fit formulas are presented for the stagnation-point radiative heating rate, cooling factor, and shock standoff distance for inviscid flow over blunt bodies at conditions corresponding to high-speed earth entry. The data which were curve fitted were calculated by using a technique which utilizes a one-strip integral method and a detailed nongray radiation model to generate a radiatively coupled flow-field solution for air in chemical and local thermodynamic equilibrium. The ranges of free-stream parameters considered were altitudes from about 55 to 70 km and velocities from about 11 to 16 km/s. Spherical bodies with nose radii from 30 to 450 cm and elliptical bodies with major-to-minor axis ratios of 2, 4, and 6 were treated. Power-law formulas are proposed and a least-squares logarithmic fit is used to evaluate the constants. It is shown that the data can be described in this manner with an average deviation of about 3 percent (or less) and a maximum deviation of about 10 percent (or less). The curve-fit formulas provide an effective and economic means for making preliminary design studies for situations involving high-speed earth entry.
Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L
2010-08-05
Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variation of model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1 σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes. It increases with increasing depths above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
Kukke, Sahana N; Paine, Rainer W; Chao, Chi-Chao; de Campos, Ana C; Hallett, Mark
2014-06-01
The purpose of this study is to develop a method to reliably characterize multiple features of the corticospinal system in a more efficient manner than typically done in transcranial magnetic stimulation studies. Forty transcranial magnetic stimulation pulses of varying intensity were given over the first dorsal interosseous motor hot spot in 10 healthy adults. The first dorsal interosseous motor-evoked potential size was recorded during rest and activation to create recruitment curves. The Boltzmann sigmoidal function was fit to the data, and parameters relating to maximal motor-evoked potential size, curve slope, and stimulus intensity leading to half-maximal motor-evoked potential size were computed from the curve fit. Good to excellent test-retest reliability was found for all corticospinal parameters at rest and during activation with 40 transcranial magnetic stimulation pulses. Through the use of curve fitting, important features of the corticospinal system can be determined with fewer stimuli than typically used for the same information. Determining the recruitment curve provides a basis to understand the state of the corticospinal system and select subject-specific parameters for transcranial magnetic stimulation testing quickly and without unnecessary exposure to magnetic stimulation. This method can be useful in individuals who have difficulty in maintaining stillness, including children and patients with motor disorders.
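A minimal sketch of fitting the Boltzmann sigmoidal function to a recruitment curve; the motor-evoked potential data are synthetic and the multiplicative noise model is an assumption:

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(I, mep_max, i50, k):
    """Boltzmann sigmoid: MEP size vs stimulus intensity (% stimulator output)."""
    return mep_max / (1.0 + np.exp((i50 - I) / k))

intensity = np.random.uniform(30, 90, 40)             # 40 pulses, varying intensity
mep = boltzmann(intensity, 2.4, 55.0, 4.5) * np.exp(0.2 * np.random.randn(40))

(mep_max, i50, k), _ = curve_fit(boltzmann, intensity, mep,
                                 p0=(mep.max(), 50.0, 5.0))
slope_at_i50 = mep_max / (4.0 * k)                    # peak slope of the curve
```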
Bayesian Analysis of Longitudinal Data Using Growth Curve Models
ERIC Educational Resources Information Center
Zhang, Zhiyong; Hamagami, Fumiaki; Wang, Lijuan Lijuan; Nesselroade, John R.; Grimm, Kevin J.
2007-01-01
Bayesian methods for analyzing longitudinal data in social and behavioral research are recommended for their ability to incorporate prior information in estimating simple and complex models. We first summarize the basics of Bayesian methods before presenting an empirical example in which we fit a latent basis growth curve model to achievement data…
NASA Technical Reports Server (NTRS)
Elliott, R. D.; Werner, N. M.; Baker, W. M.
1975-01-01
The Aerodynamic Data Analysis and Integration System (ADAIS), developed as a highly interactive computer graphics program capable of manipulating large quantities of data such that addressable elements of a data base can be called up for graphic display, compared, curve fit, stored, retrieved, differenced, etc., was described. The general nature of the system is evidenced by the fact that limited usage has already occurred with data bases consisting of thermodynamic, basic loads, and flight dynamics data. Productivity using ADAIS of five times that for conventional manual methods of wind tunnel data analysis is routinely achieved. In wind tunnel data analysis, data from one or more runs of a particular test may be called up and displayed along with data from one or more runs of a different test. Curves may be faired through the data points by any of four methods, including cubic spline and least squares polynomial fit up to seventh order.
Zhou, Wu
2014-01-01
The accurate contour delineation of the target and/or organs at risk (OAR) is essential in treatment planning for image‐guided radiation therapy (IGRT). Although many automatic contour delineation approaches have been proposed, few of them can fulfill the necessities of applications in terms of accuracy and efficiency. Moreover, clinicians would like to analyze the characteristics of regions of interests (ROI) and adjust contours manually during IGRT. Interactive tool for contour delineation is necessary in such cases. In this work, a novel approach of curve fitting for interactive contour delineation is proposed. It allows users to quickly improve contours by a simple mouse click. Initially, a region which contains interesting object is selected in the image, then the program can automatically select important control points from the region boundary, and the method of Hermite cubic curves is used to fit the control points. Hence, the optimized curve can be revised by moving its control points interactively. Meanwhile, several curve fitting methods are presented for the comparison. Finally, in order to improve the accuracy of contour delineation, the process of the curve refinement based on the maximum gradient magnitude is proposed. All the points on the curve are revised automatically towards the positions with maximum gradient magnitude. Experimental results show that Hermite cubic curves and the curve refinement based on the maximum gradient magnitude possess superior performance on the proposed platform in terms of accuracy, robustness, and time calculation. Experimental results of real medical images demonstrate the efficiency, accuracy, and robustness of the proposed process in clinical applications. PACS number: 87.53.Tf PMID:24423846
Craniofacial Reconstruction Using Rational Cubic Ball Curves
Majeed, Abdul; Mt Piah, Abd Rahni; Gobithaasan, R. U.; Yahya, Zainor Ridzuan
2015-01-01
This paper proposes the reconstruction of craniofacial fracture using rational cubic Ball curve. The idea of choosing Ball curve is based on its robustness of computing efficiency over Bezier curve. The main steps are conversion of Digital Imaging and Communications in Medicine (Dicom) images to binary images, boundary extraction and corner point detection, Ball curve fitting with genetic algorithm and final solution conversion to Dicom format. The last section illustrates a real case of craniofacial reconstruction using the proposed method which clearly indicates the applicability of this method. A Graphical User Interface (GUI) has also been developed for practical application. PMID:25880632
Radial dependence of the dark matter distribution in M33
NASA Astrophysics Data System (ADS)
López Fune, E.; Salucci, P.; Corbelli, E.
2017-06-01
The stellar and gaseous mass distributions, as well as the extended rotation curve, in the nearby galaxy M33 are used to derive the radial distribution of dark matter density in the halo and to test cosmological models of galaxy formation and evolution. Two methods are examined to constrain the dark mass density profiles. The first method deals directly with fitting the rotation curve data in the range of galactocentric distances 0.24 ≤ r ≤ 22.72 kpc. Using the results of collisionless Λ cold dark matter numerical simulations, we confirm that the Navarro-Frenk-White (NFW) dark matter profile provides a better fit to the rotation curve data than the cored Burkert (BRK) profile. The second method relies on the local equation of centrifugal equilibrium and on the rotation curve slope. In the aforementioned range of distances, we fit the observed velocity profile, using a function that has a rational dependence on the radius, and we derive the slope of the rotation curve. Then, we infer the effective matter densities. In the radial range 9.53 ≤ r ≤ 22.72 kpc, the uncertainties induced by the luminous matter (stars and gas) become negligible, because the dark matter density dominates, and we can determine locally the radial distribution of dark matter. With this second method, we tested the NFW and BRK dark matter profiles and we can confirm that both profiles are compatible with the data, even though in this case the cored BRK density profile provides a more reasonable value for the baryonic-to-dark matter ratio.
Nonlinear Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Everhart, Joel L.; Badavi, Forooz F.
1989-01-01
Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, interactive curve-fitting routine based on description of quadratic expansion of X^2 statistic. Utilizes nonlinear optimization algorithm calculating best statistically weighted values of parameters of fitting function and X^2 minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1986-01-01
New improved curve fits for the thermodynamic properties of equilibrium air were developed. The curve fits are for p = p(e,rho), a = a(e,rho), T = T(e,rho), s = s(e,rho), T = T(p,rho), h = h(p,rho), rho = rho(p,s), e = e(p,s) and a = a(p,s). These curve fits can be readily incorporated into new or existing Computational Fluid Dynamics (CFD) codes if real-gas effects are desired. The curve fits were constructed using Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits appearing in NASA CR-2470. These improvements were due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10^-7 to 100 amagats (rho/rho_0).
NASA Technical Reports Server (NTRS)
Cooper, D. B.; Yalabik, N.
1975-01-01
Approximation of noisy data in the plane by straight lines or elliptic or single-branch hyperbolic curve segments arises in pattern recognition, data compaction, and other problems. The efficient search for and approximation of data by such curves were examined. Recursive least-squares linear curve-fitting was used, and ellipses and hyperbolas are parameterized as quadratic functions in x and y. The error minimized by the algorithm is interpreted, and central processing unit (CPU) times for estimating parameters for fitting straight lines and quadratic curves were determined and compared. CPU time for data search was also determined for the case of straight line fitting. Quadratic curve fitting is shown to require about six times as much CPU time as does straight line fitting, and curves relating CPU time and fitting error were determined for straight line fitting. Results are derived on early sequential determination of whether or not the underlying curve is a straight line.
[An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].
Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu
2016-04-01
The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. First, the first-order derivative of the original ECG signal is calculated, and the maximum and minimum points of each beat are obtained; these are treated as the positions of the fiducial points. The original ECG is then fed into a high-pass filter with a 1.5 Hz cutoff frequency. The difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. Cubic spline interpolation is then fitted to the fiducial points, and the fitted curve is the baseline drift curve. For the two simulated test cases, the correlation coefficients between the fitted curve from the presented algorithm and the simulated curve were increased by 0.242 and 0.13 compared with those from the traditional cubic spline interpolation algorithm. For the case of clinical baseline drift data, the average correlation coefficient from the presented algorithm reached 0.972.
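A condensed sketch of the algorithm's steps with SciPy, using one fiducial point per second instead of the per-beat derivative extrema, and a synthetic signal in place of real ECG:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import butter, filtfilt

fs = 360.0                                    # Hz, assumed ECG sampling rate
t = np.arange(0, 10, 1 / fs)
beats = np.sin(2 * np.pi * 1.2 * t) ** 63     # crude beat-like spikes (illustrative)
drift = 0.3 * np.sin(2 * np.pi * 0.15 * t)    # simulated baseline wander
sig = beats + drift

# Fiducial points: one per second here; the paper derives them from extrema
# of the first derivative of each beat instead.
fid_t = np.arange(0.5, 10, 1.0)
fid_i = (fid_t * fs).astype(int)

# Amplitude at fiducial points = original minus 1.5 Hz high-pass filtered signal
b, a = butter(2, 1.5 / (fs / 2), btype="highpass")
baseline_amp = sig[fid_i] - filtfilt(b, a, sig)[fid_i]

baseline = CubicSpline(fid_t, baseline_amp)(t)   # fitted baseline drift curve
corrected = sig - baseline
```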
Modeling two strains of disease via aggregate-level infectivity curves.
Romanescu, Razvan; Deardon, Rob
2016-04-01
Well formulated models of disease spread, and efficient methods to fit them to observed data, are powerful tools for aiding the surveillance and control of infectious diseases. Our project considers the problem of the simultaneous spread of two related strains of disease in a context where spatial location is the key driver of disease spread. We start our modeling work with the individual level models (ILMs) of disease transmission, and extend these models to accommodate the competing spread of the pathogens in a two-tier hierarchical population (whose levels we refer to as 'farm' and 'animal'). The postulated interference mechanism between the two strains is a period of cross-immunity following infection. We also present a framework for speeding up the computationally intensive process of fitting the ILM to data, typically done using Markov chain Monte Carlo (MCMC) in a Bayesian framework, by turning the inference into a two-stage process. First, we approximate the number of animals infected on a farm over time by infectivity curves. These curves are fit to data sampled from farms, using maximum likelihood estimation, then, conditional on the fitted curves, Bayesian MCMC inference proceeds for the remaining parameters. Finally, we use posterior predictive distributions of salient epidemic summary statistics, in order to assess the model fitted.
NASA Astrophysics Data System (ADS)
Ji, Zhong-Ye; Zhang, Xiao-Fang
2018-01-01
The mathematical relation between the beam quality β factor of a high-energy laser and the wavefront aberration of the laser beam is important in beam quality control theory for high-energy laser weapon systems. In order to obtain this mathematical relation, numerical simulation is used in the research. First, the Zernike representations of typically distorted atmospheric wavefront aberrations caused by Kolmogorov turbulence are generated. Then, the corresponding beam quality β factors of the different distorted wavefronts are calculated numerically through the fast Fourier transform. Thus, the statistical distribution rule between the beam quality β factors of the high-energy laser and the wavefront aberrations of the beam can be established from the calculated results. Finally, curve fitting is chosen to establish the mathematical fitting relationship between these two parameters. The result of the curve fitting shows that there is a quadratic curve relation between the beam quality β factor of a high-energy laser and the wavefront aberration of the laser beam. In this paper, 3 fitting curves, in which the wavefront aberrations are composed of Zernike polynomials of orders 20, 36 and 60, respectively, are established to express the relationship between the beam quality β factor and atmospheric wavefront aberrations with different spatial frequencies.
Surface fitting three-dimensional bodies
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.
1974-01-01
The geometry of general three-dimensional bodies is generated from coordinates of points in several cross sections. Since these points may not be smooth, they are divided into segments and general conic sections are curve fit in a least-squares sense to each segment of a cross section. The conic sections are then blended in the longitudinal direction by fitting parametric cubic-spline curves through coordinate points which define the conic sections in the cross-sectional planes. Both the cross-sectional and longitudinal curves may be modified by specifying particular segments as straight lines and slopes at selected points. Slopes may be continuous or discontinuous and finite or infinite. After a satisfactory surface fit has been obtained, cards may be punched with the data necessary to form a geometry subroutine package for use in other computer programs. At any position on the body, coordinates, slopes and second partial derivatives are calculated. The method is applied to a blunted 70 deg delta wing, and it was found to generate the geometry very well.
Fitting Prony Series To Data On Viscoelastic Materials
NASA Technical Reports Server (NTRS)
Hill, S. A.
1995-01-01
An improved method of fitting Prony series to data on viscoelastic materials involves the use of least-squares optimization techniques. The method, being based on optimization techniques, yields closer correlation with data than the traditional method. It involves no assumptions regarding the γ'_i terms and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in the data. The curve-fitting problem is treated as a design-optimization problem and solved by use of partially-constrained-optimization techniques.
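With the relaxation times fixed on a logarithmic grid, a Prony-series fit becomes a linear least-squares problem; the sketch below adds a non-negativity constraint as one plausible reading of the partially constrained optimization (all names and values invented):

```python
import numpy as np
from scipy.optimize import nnls

# Relaxation data G(t) (synthetic); model: G(t) = G_inf + sum_i g_i exp(-t/tau_i)
t = np.logspace(-2, 3, 80)
G = 1.0 + 2.5 * np.exp(-t / 0.5) + 1.2 * np.exp(-t / 50.0)

tau = np.logspace(-3, 4, 12)                 # fixed, log-spaced time constants
X = np.hstack([np.ones((t.size, 1)), np.exp(-t[:, None] / tau[None, :])])

coef, residual = nnls(X, G)                  # non-negative g_i keep the series
G_inf, g = coef[0], coef[1:]                 # physically admissible
```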
ERIC Educational Resources Information Center
Sun, Yan; Strobel, Johannes; Newby, Timothy J.
2017-01-01
Adopting a two-phase explanatory sequential mixed methods research design, the current study examined the impact of student teaching experiences on pre-service teachers' readiness for technology integration. In phase-1 of quantitative investigation, 2-level growth curve models were fitted using online repeated measures survey data collected from…
NASA Astrophysics Data System (ADS)
Ye, L.; Xu, X.; Luan, D.; Jiang, W.; Kang, Z.
2017-07-01
Crater-detection approaches can be divided into four categories: manual recognition, shape-profile fitting algorithms, machine-learning methods and geological information-based analysis using terrain and spectral data. The mainstream approach is shape-profile fitting. Many scholars throughout the world use the illumination gradient information to fit standard circles by the least-squares method. Although this approach has achieved good results, it is difficult to identify craters with poor "visibility" or with complex structure and composition. Moreover, the accuracy of recognition is difficult to improve because of multiple solutions and noise interference. Aiming at this problem, we propose a method for the automatic extraction of impact craters based on the spectral characteristics of moon rocks and minerals: 1) Under sunlight conditions, the impact craters are extracted from MI by condition matching, and the positions as well as the diameters of the craters are obtained. 2) Regolith is ejected when the lunar surface is impacted, and one of the elements of lunar regolith is iron; therefore, incorrectly extracted impact craters can be removed by judging whether a crater contains the "non-iron" element. 3) Correctly extracted craters are divided into two types, simple and complex, according to their diameters. 4) The titanium information is obtained and the titanium distribution of the complex craters is matched with a normal distribution curve; the goodness of fit is then calculated and a threshold is set. The complex craters can thus be divided into two types: those whose titanium distribution follows a normal curve and those whose distribution does not. We validated our proposed method with MI acquired by SELENE. Experimental results demonstrate that the proposed method performs well in the test area.
Białek, Marianna
2015-05-01
Physiotherapy for stabilization of the idiopathic scoliosis angle in growing children remains controversial. Notably, little data on the effectiveness of physiotherapy in children with Early Onset Idiopathic Scoliosis (EOIS) has been published. The aim of this study was to check the results of FITS physiotherapy in a group of children with EOIS. The charts of the patients archived in a prospectively collected database were retrospectively reviewed. The inclusion criteria were: diagnosis of EOIS based on spine radiography, age below 10 years, both girls and boys, Cobb angle between 11° and 30°, Risser zero, FITS therapy, no other treatment (bracing), and a follow-up of at least 2 years from the initiation of the treatment. The criteria were as follows: for curve progression, a Cobb angle increase of 6° or more; for curve stabilization, a Cobb angle within 5° of the initial radiograph; for curve correction, a Cobb angle decrease of 6° or more at the final follow-up radiograph. There were 41 children with EOIS, 36 girls and 5 boys, mean age 7.7 ± 1.3 years (range 4 to 9 years), who started FITS therapy. The curve pattern was single thoracic (5 children), single thoracolumbar (22 children) or double thoracic/thoracolumbar (14 children), for a total of 55 structural curvatures. The minimum follow-up was 2 years after initiation of the FITS treatment and the maximum was 16 years (mean 4.8 years). At follow-up the mean age was 12.5 ± 3.4 years. Out of 41 children, 10 had passed the pubertal growth spurt at the final follow-up and 31 were still immature and continued FITS therapy. Out of 41 children, 27 improved, 13 were stable, and one progressed. Out of 55 structural curves, 32 improved, 22 were stable and one progressed. For the 55 structural curves, the Cobb angle significantly decreased from 18.0° ± 5.4° at first assessment to 12.5° ± 6.3° at last evaluation (p < 0.0001, paired t-test). The angle of trunk rotation decreased significantly from 4.7° ± 2.9° to 3.2° ± 2.5° at last evaluation (p < 0.0001, paired t-test). FITS physiotherapy was effective in preventing curve progression in children with EOIS. Final postpubertal follow-up data are needed.
Kawasaki, Yohei; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka
2016-01-01
Background Previously, we proposed a model for ordinal scale scoring in which the individual thresholds for each item constitute a distribution. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. Discussion The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern. PMID:27761346
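The model comparison step described here, fitting boundary-curve points with linear, quadratic, and exponential forms and ranking them by the coefficient of determination, can be sketched as follows; the data are synthetic, not the CES-D values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate models and rough initial guesses (illustrative values)
models = {
    'linear':      (lambda x, a, b: a * x + b, [1.0, 0.0]),
    'quadratic':   (lambda x, a, b, c: a * x**2 + b * x + c, [0.1, 0.1, 0.0]),
    'exponential': (lambda x, a, b, c: a * np.exp(b * x) + c, [0.5, 0.05, 0.0]),
}

def r_squared(y, y_fit):
    """Coefficient of determination used to rank the candidate fits."""
    return 1.0 - np.sum((y - y_fit)**2) / np.sum((y - np.mean(y))**2)

# Synthetic boundary-curve points: total score vs boundary value (arb. units)
x = np.arange(5, 45, dtype=float)
y = 0.5 * np.exp(0.08 * x) + np.random.default_rng(5).normal(0, 0.1, x.size)

for name, (f, p0) in models.items():
    popt, _ = curve_fit(f, x, y, p0=p0, maxfev=20000)
    print(f'{name:12s} R^2 = {r_squared(y, f(x, *popt)):.4f}')
```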
NASA Astrophysics Data System (ADS)
Madi, Raneem; Huibert de Rooij, Gerrit; Mielenz, Henrike; Mai, Juliane
2018-02-01
Few parametric expressions for the soil water retention curve are suitable for dry conditions. Furthermore, expressions for the soil hydraulic conductivity curves associated with parametric retention functions can behave unrealistically near saturation. We developed a general criterion for water retention parameterizations that ensures physically plausible conductivity curves. Only 3 of the 18 tested parameterizations met this criterion without restrictions on the parameters of a popular conductivity curve parameterization; a fourth required one parameter to be fixed. We estimated parameters by shuffled complex evolution (SCE) with the objective function tailored to the various observation methods used to obtain retention curve data. We fitted the four parameterizations with physically plausible conductivities as well as the most widely used parameterization. The performance of the resulting 12 combinations of retention and conductivity curves was assessed in a numerical study with 751 days of semiarid atmospheric forcing applied to unvegetated, uniform, 1 m freely draining columns for four textures. Choosing different parameterizations had a minor effect on evaporation, but cumulative bottom fluxes varied by up to an order of magnitude between them. This highlights the need for a careful selection of the soil hydraulic parameterization, which ideally relies not only on goodness of fit to static soil water retention data but also on hydraulic conductivity measurements. Parameter fits for 21 soils showed that extrapolations into the dry range of the retention curve often became physically more realistic when the parameterization had a logarithmic dry branch, particularly in fine-textured soils where high residual water contents would otherwise be fitted.
Study on creep behavior of Grade 91 heat-resistant steel using theta projection method
NASA Astrophysics Data System (ADS)
Ren, Facai; Tang, Xiaoying
2017-10-01
Creep behavior of Grade 91 heat-resistant steel used for steam coolers was characterized using the theta projection method. Creep tests were conducted at a temperature of 923 K under stresses ranging from 100 to 150 MPa. Based on the creep curve results, the four theta parameters were established using a nonlinear least-squares fitting method. The four theta parameters showed good linearity as a function of stress. The predicted curves coincided well with the experimental data, and creep curves were also modeled down to the low stress level of 60 MPa.
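The theta projection method represents creep strain as a decaying primary term plus an accelerating tertiary term. A hedged sketch of the nonlinear least-squares fit described above, with hypothetical creep data and parameter bounds chosen only for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def theta_creep(t, t1, t2, t3, t4):
    """Theta projection creep strain: primary (decaying) + tertiary (accelerating) term."""
    return t1 * (1.0 - np.exp(-t2 * t)) + t3 * (np.exp(t4 * t) - 1.0)

# Hypothetical creep data: time in hours, engineering strain (dimensionless)
t = np.linspace(0.0, 1000.0, 50)
strain = theta_creep(t, 0.01, 0.02, 0.001, 0.004)
strain += np.random.default_rng(1).normal(0, 2e-4, t.size)

p0 = [0.01, 0.01, 0.001, 0.001]        # rough initial guesses
theta, _ = curve_fit(theta_creep, t, strain, p0=p0,
                     bounds=(0.0, [1.0, 1.0, 1.0, 0.01]), maxfev=20000)
print('theta parameters:', theta)
# With theta(stress) relations fitted at several stresses, the same function
# can be re-evaluated to extrapolate curves to a lower stress level.
```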
NASA Astrophysics Data System (ADS)
Mattei, G.; Ahluwalia, A.
2018-04-01
We introduce a new function, the apparent elastic modulus strain-rate spectrum $E_{app}(\dot{\varepsilon})$, for the derivation of lumped-parameter constants for Generalized Maxwell (GM) linear viscoelastic models from stress-strain data obtained at various compressive strain rates $\dot{\varepsilon}$. The $E_{app}(\dot{\varepsilon})$ function was derived using the tangent modulus function obtained from the GM model stress-strain response to a constant $\dot{\varepsilon}$ input. Material viscoelastic parameters can be rapidly derived by fitting experimental $E_{app}$ data obtained at different strain rates to the $E_{app}(\dot{\varepsilon})$ function. This single-curve fitting returns viscoelastic constants similar to those of the original epsilon-dot method, which is based on a multi-curve global fitting procedure with shared parameters. Its low computational cost permits quick and robust identification of viscoelastic constants even when a large number of strain rates or replicates per strain rate are considered. This method is particularly suited for the analysis of bulk compression and nano-indentation data of soft (bio)materials.
Fitting Richards' curve to data of diverse origins
Johnson, D.H.; Sargeant, A.B.; Allen, S.H.
1975-01-01
Published techniques for fitting data to nonlinear growth curves are briefly reviewed; most techniques require knowledge of the shape of the curve. A flexible growth curve developed by Richards (1959) is discussed as an alternative when the shape is unknown. The shape of this curve is governed by a specific parameter which can be estimated from the data. We describe in detail the fitting of a diverse set of longitudinal and cross-sectional data to Richards' growth curve for the purpose of determining the age of red fox (Vulpes vulpes) pups on the basis of right hind foot length. The fitted curve is found suitable for pups less than approximately 80 days old. The curve is extrapolated to prenatal growth and shown to be appropriate only for about 10 days prior to birth.
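Richards' curve adds a shape parameter to the usual logistic form, so the shape itself can be estimated from the data. A sketch of one common parameterization fitted to hypothetical foot-length data, including the inversion used for age estimation; all numbers are illustrative, not the study's:

```python
import numpy as np
from scipy.optimize import curve_fit

def richards(t, A, k, t0, nu):
    """One common parameterization of Richards' curve; nu controls the shape."""
    return A / (1.0 + np.exp(-k * (t - t0)))**(1.0 / nu)

# Hypothetical data: age (days) vs right hind foot length (mm)
age = np.array([5, 10, 20, 30, 45, 60, 75, 90], dtype=float)
foot = np.array([40, 55, 80, 100, 118, 128, 133, 135], dtype=float)

popt, _ = curve_fit(richards, age, foot, p0=[140.0, 0.05, 25.0, 1.0], maxfev=20000)
A, k, t0, nu = popt
print(f'asymptote A = {A:.1f} mm, shape nu = {nu:.2f}')

# Age estimation: invert the fitted curve for a measured foot length y < A
y = 110.0
est_age = t0 - np.log((A / y)**nu - 1.0) / k
print(f'estimated age for {y} mm foot: {est_age:.1f} days')
```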
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jennings, Elise; Wolf, Rachel; Sako, Masao
2016-11-09
Cosmological parameter estimation techniques that robustly account for systematic measurement uncertainties will be crucial for the next generation of cosmological surveys. We present a new analysis method, superABC, for obtaining cosmological constraints from Type Ia supernova (SN Ia) light curves using Approximate Bayesian Computation (ABC) without any likelihood assumptions. The ABC method works by using a forward model simulation of the data where systematic uncertainties can be simulated and marginalized over. A key feature of the method presented here is the use of two distinct metrics, the `Tripp' and `Light Curve' metrics, which allow us to compare the simulated data to the observed data set. The Tripp metric takes as input the parameters of models fit to each light curve with the SALT-II method, whereas the Light Curve metric uses the measured fluxes directly without model fitting. We apply the superABC sampler to a simulated data set of $\sim$1000 SNe corresponding to the first season of the Dark Energy Survey Supernova Program. Varying $\Omega_m$, $w_0$, $\alpha$, $\beta$ and a magnitude offset parameter, with no systematics we obtain $\Delta(w_0) = w_0^{\rm true} - w_0^{\rm best\,fit} = -0.036\pm0.109$ (a $\sim$11% 1$\sigma$ uncertainty) using the Tripp metric and $\Delta(w_0) = -0.055\pm0.068$ (a $\sim$7% 1$\sigma$ uncertainty) using the Light Curve metric. Including 1% calibration uncertainties in four passbands, adding 4 more parameters, we obtain $\Delta(w_0) = -0.062\pm0.132$ (a $\sim$14% 1$\sigma$ uncertainty) using the Tripp metric. Overall we find a 17% increase in the uncertainty on $w_0$ with systematics compared to without. We contrast this with an MCMC approach where systematic effects are approximately included. We find that the MCMC method slightly underestimates the impact of calibration uncertainties for this simulated data set.
Winterstein, Thomas A.
2002-01-01
Type curves for the Hantush and Theis methods were fitted to the measured drawdown and recovery curves in the observation well. The results of matching the type curves to the measured data indicate that leakage from the overlying Eau Claire confining unit into the Mt. Simon aquifer is negligible. The transmissivity and storage coefficients for the Mt. Simon aquifer, determined by both methods, are 3,000 ft²/d and 3×10⁻⁴, respectively. The average hydraulic conductivity, assuming an aquifer thickness of 233 ft, is 10 ft/d.
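Type-curve matching of this kind can also be automated as a nonlinear fit of the Theis solution, whose well function is the exponential integral. A sketch with hypothetical pumping-test numbers (the actual pumping rate and well spacing are not given in the abstract):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import exp1  # Theis well function W(u) = E1(u)

Q = 19000.0   # pumping rate, ft^3/d (hypothetical)
r = 100.0     # distance to observation well, ft (hypothetical)

def theis_drawdown(t, T, S):
    """Theis solution: s = Q/(4*pi*T) * W(r^2*S/(4*T*t))."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Hypothetical drawdown record: time (d) and drawdown (ft)
t = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])
s = theis_drawdown(t, 3000.0, 3e-4) + np.random.default_rng(2).normal(0, 0.01, t.size)

(T, S), _ = curve_fit(theis_drawdown, t, s, p0=[1000.0, 1e-3],
                      bounds=([1.0, 1e-8], [1e6, 1.0]))
print(f'T = {T:.0f} ft^2/d, S = {S:.1e}')
```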
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeler, C; Bronk, L; UT Graduate School of Biomedical Sciences at Houston, Houston, TX
2015-06-15
Purpose: High throughput in vitro experiments assessing cell survival following proton radiation indicate that both the alpha and the beta parameters of the linear quadratic model increase with increasing proton linear energy transfer (LET). We investigated the relative biological effectiveness (RBE) of double-strand break (DSB) induction as a means of explaining the experimental results. Methods: Experiments were performed with two lung cancer cell lines and a range of proton LET values (0.94–19.4 keV/µm) using an experimental apparatus designed to irradiate cells in a 96 well plate such that each column encounters protons of different dose-averaged LET (LETd). Traditional linear quadratic survival curve fitting was performed, and alpha, beta, and RBE values obtained. Survival curves were also fit with a model incorporating RBE of DSB induction as the sole fit parameter. Fitted values of the RBE of DSB induction were then compared to values obtained using Monte Carlo Damage Simulation (MCDS) software and energy spectra calculated with Geant4. Other parameters including alpha, beta, and number of DSBs were compared to those obtained from traditional fitting. Results: Survival curve fitting with RBE of DSB induction yielded alpha and beta parameters that increase with proton LETd, which follows from the standard method of fitting; however, relying on a single fit parameter provided more consistent trends. The fitted values of RBE of DSB induction increased beyond what is predicted from MCDS data above proton LETd of approximately 10 keV/µm. Conclusion: In order to accurately model in vitro proton irradiation experiments performed with high throughput methods, the RBE of DSB induction must increase more rapidly than predicted by MCDS above LETd of 10 keV/µm. This can be explained by considering the increased complexity of DSBs or the nature of intra-track pairwise DSB interactions in this range of LETd values. NIH Grant 2U19CA021239-35.
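The traditional linear quadratic fit mentioned above is a two-parameter fit of log survival versus dose. A minimal sketch for one LETd column; the survival fractions are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def lq_log_survival(dose, alpha, beta):
    """Linear-quadratic model: ln S = -(alpha*D + beta*D^2)."""
    return -(alpha * dose + beta * dose**2)

# Hypothetical clonogenic survival data at one proton LETd value
dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])   # Gy
surv = np.array([1.0, 0.75, 0.52, 0.21, 0.065, 0.016])

(alpha, beta), _ = curve_fit(lq_log_survival, dose, np.log(surv),
                             p0=[0.2, 0.02], bounds=(0.0, np.inf))
print(f'alpha = {alpha:.3f} 1/Gy, beta = {beta:.4f} 1/Gy^2')
# Repeating this per LETd column of the 96-well plate shows how
# alpha and beta rise with LET, as the abstract describes.
```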
1985-05-01
Phase shift was evaluated through best fit of theoretical curves to the experimental data, the response being assumed to be that of the beam to the microwave signal. Test conditions included vibration sidebands, acceleration, high-temperature exposure, and thermal vacuum; two of the curves show actual phase measurements alongside the lower calculated curve. It is concluded that the method of measuring phase noise by spectrum estimation is workable and has no limitation in principle.
Runoff potentiality of a watershed through SCS and functional data analysis technique.
Adham, M I; Shirazi, S M; Othman, F; Rahman, S; Yusop, Z; Ismail, Z
2014-01-01
Runoff potentiality of a watershed was assessed based on curve number (CN) identification, the Soil Conservation Service (SCS) method, and functional data analysis (FDA) techniques. Daily discrete rainfall data were collected from weather stations in the study area and analyzed with the lowess method for curve smoothing. As runoff data exhibit a periodic pattern in each watershed, a Fourier series was introduced to fit the smoothed curve of the eight watersheds. Seven Fourier terms were used for watersheds 5 and 8, while eight terms were used for the remaining watersheds to obtain the best fit. Bootstrapped smooth curve analysis reveals that watersheds 1, 2, 3, 6, 7, and 8, with monthly mean runoffs of 29, 24, 22, 23, 26, and 27 mm, respectively, would likely contribute to surface runoff in the study area. The purpose of this study was to transform runoff data into a smooth curve representing the surface runoff pattern and mean runoff of each watershed through statistical methods. This study provides information on the runoff potentiality of each watershed and also provides input data for hydrological modeling.
Fatigue loading and R-curve behavior of a dental glass-ceramic with multiple flaw distributions
Joshi, Gaurav V.; Duan, Yuanyuan; Bona, Alvaro Della; Hill, Thomas J.; John, Kenneth St.; Griggs, Jason A.
2013-01-01
Objectives To determine the effects of surface finish and mechanical loading on the rising toughness curve (R-curve) behavior of a fluorapatite glass-ceramic (IPS e.max ZirPress) and to determine a statistical model for fitting fatigue lifetime data with multiple flaw distributions. Materials and Methods Rectangular beam specimens were fabricated by pressing. Two groups of specimens (n=30) with polished (15 μm) or air abraded surface were tested under rapid monotonic loading in oil. Additional polished specimens were subjected to cyclic loading at 2 Hz (n=44) and 10 Hz (n=36). All fatigue tests were performed using a fully articulated four-point flexure fixture in 37°C water. Fractography was used to determine the critical flaw size and estimate fracture toughness. To prove the presence of R-curve behavior, non-linear regression was used. Forward stepwise regression was performed to determine the effects on fracture toughness of different variables, such as initial flaw type, critical flaw size, critical flaw eccentricity, cycling frequency, peak load, and number of cycles. Fatigue lifetime data were fit to an exclusive flaw model. Results There was an increase in fracture toughness values with increasing critical flaw size for both loading methods (rapid monotonic loading and fatigue). The values for the fracture toughness ranged from 0.75 to 1.1 MPa·m^1/2, reaching a plateau at different critical flaw sizes based on loading method. Significance Cyclic loading had a significant effect on the R-curve behavior. The fatigue lifetime distribution was dependent on the flaw distribution, and it fit well to an exclusive flaw model. PMID:24034441
NASA Astrophysics Data System (ADS)
Pang, Liping; Goltz, Mark; Close, Murray
2003-01-01
In this note, we applied the temporal moment solutions of [Das and Kluitenberg, 1996. Soil Sci. Am. J. 60, 1724] for one-dimensional advective-dispersive solute transport with linear equilibrium sorption and first-order degradation for time pulse sources to analyse soil column experimental data. Unlike most other moment solutions, these solutions consider the interplay of degradation and sorption. This permits estimation of a first-order degradation rate constant using the zeroth moment of column breakthrough data, as well as estimation of the retardation factor or sorption distribution coefficient of a degrading solute using the first moment. The method of temporal moments (MOM) was applied to analyse breakthrough data from a laboratory column study of atrazine, hexazinone and rhodamine WT transport in volcanic pumice sand, as well as experimental data from the literature. Transport and degradation parameters obtained using the MOM were compared to parameters obtained by fitting breakthrough data from an advective-dispersive transport model with equilibrium sorption and first-order degradation, using the nonlinear least-squares curve-fitting program CXTFIT. The results derived from the literature data were also compared with estimates reported in the literature using different equilibrium models. The good agreement suggests that the MOM could provide an additional useful means of parameter estimation for transport involving equilibrium sorption and first-order degradation. We found that the MOM fitted breakthrough curves with tailing better than curve fitting did. However, the MOM analysis requires complete breakthrough curves and relatively frequent data collection to ensure the accuracy of the moments obtained from the breakthrough data.
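The zeroth and first temporal moments underlying the MOM analysis are simple integrals of the breakthrough curve. A sketch using numerical quadrature on a synthetic pulse; the specific degradation-rate and retardation formulas of Das and Kluitenberg are not reproduced here:

```python
import numpy as np
from scipy.integrate import trapezoid

def temporal_moments(t, c):
    """Zeroth and first (normalized) temporal moments of a breakthrough curve."""
    m0 = trapezoid(c, t)               # zeroth moment: area under the curve
    mu1 = trapezoid(t * c, t) / m0     # mean arrival time
    return m0, mu1

# Hypothetical column breakthrough data: time (h), concentration (mg/L)
t = np.linspace(0.0, 50.0, 200)
c = np.exp(-((t - 20.0) / 6.0)**2)     # synthetic, roughly Gaussian pulse

m0, mu1 = temporal_moments(t, c)
print(f'm0 = {m0:.2f} mg h/L, mean arrival time = {mu1:.2f} h')
# Mass loss in m0 (relative to a conservative tracer) constrains the
# first-order degradation rate; mu1 constrains the retardation factor.
```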
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waszczak, Adam; Chang, Chan-Kao; Cheng, Yu-Chi
We fit 54,296 sparsely sampled asteroid light curves in the Palomar Transient Factory survey to a combined rotation plus phase-function model. Each light curve consists of 20 or more observations acquired in a single opposition. Using 805 asteroids in our sample that have reference periods in the literature, we find that the reliability of our fitted periods is a complicated function of the period, amplitude, apparent magnitude, and other light-curve attributes. Using the 805-asteroid ground-truth sample, we train an automated classifier to estimate (along with manual inspection) the validity of the remaining ∼53,000 fitted periods. By this method we find that 9033 of our light curves (of ∼8300 unique asteroids) have “reliable” periods. Subsequent consideration of asteroids with multiple light-curve fits indicates a 4% contamination in these “reliable” periods. For 3902 light curves with sufficient phase-angle coverage and either a reliable fit period or low amplitude, we examine the distribution of several phase-function parameters, none of which are bimodal though all correlate with the bond albedo and with visible-band colors. Comparing the theoretical maximal spin rate of a fluid body with our amplitude versus spin-rate distribution suggests that, if held together only by self-gravity, most asteroids are in general less dense than ∼2 g cm⁻³, while C types have a lower limit of between 1 and 2 g cm⁻³. These results are in agreement with previous density estimates. For 5–20 km diameters, S types rotate faster and have lower amplitudes than C types. If both populations share the same angular momentum, this may indicate the two types’ differing ability to deform under rotational stress. Lastly, we compare our absolute magnitudes (and apparent-magnitude residuals) to those of the Minor Planet Center’s nominal (G = 0.15, rotation-neglecting) model; our phase-function plus Fourier-series fitting reduces asteroid photometric rms scatter by a factor of ∼3.
Parametric analysis of ATM solar array.
NASA Technical Reports Server (NTRS)
Singh, B. K.; Adkisson, W. B.
1973-01-01
The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.
NASA Astrophysics Data System (ADS)
Nelson, Daniel A.; Jacobs, Gustaaf B.; Kopriva, David A.
2016-08-01
The effect of curved-boundary representation on the physics of the separated flow over a NACA 65(1)-412 airfoil is thoroughly investigated. A method is presented to approximate curved boundaries with a high-order discontinuous-Galerkin spectral element method for the solution of the Navier-Stokes equations. Multiblock quadrilateral element meshes are constructed with the grid generation software GridPro. The boundary of a NACA 65(1)-412 airfoil, defined by a cubic natural spline, is piecewise-approximated by isoparametric polynomial interpolants that represent the edges of boundary-fitted elements. Direct numerical simulation of the airfoil is performed on a coarse mesh and fine mesh with polynomial orders ranging from four to twelve. The accuracy of the curve fitting is investigated by comparing the flows computed on curved-sided meshes with those given by straight-sided meshes. Straight-sided meshes yield irregular wakes, whereas curved-sided meshes produce a regular Karman street wake. Straight-sided meshes also produce lower lift and higher viscous drag as compared with curved-sided meshes. When the mesh is refined by reducing the sizes of the elements, the lift decrease and viscous drag increase are less pronounced. The differences in the aerodynamic performance between the straight-sided meshes and the curved-sided meshes are concluded to be the result of artificial surface roughness introduced by the piecewise-linear boundary approximation provided by the straight-sided meshes.
Parameter setting for peak fitting method in XPS analysis of nitrogen in sewage sludge
NASA Astrophysics Data System (ADS)
Tang, Z. J.; Fang, P.; Huang, J. H.; Zhong, P. Y.
2017-12-01
The thermal decomposition method is regarded as an important route for treating ever-increasing amounts of sewage sludge, but the sludge's high nitrogen content causes serious nitrogen-related problems, so determining the form and content of nitrogen in sewage sludge becomes essential. In this study, XPSpeak 4.1 was used to investigate the functional forms of nitrogen in sewage sludge; a peak fitting method was adopted and the best-optimized parameters were determined. According to the results, the N 1s spectral curve can be resolved into 5 peaks: pyridine-N (398.7±0.4 eV), pyrrole-N (400.5±0.3 eV), protein-N (400.4 eV), ammonium-N (401.1±0.3 eV) and nitrogen oxide-N (403.5±0.5 eV). Based on the experimental data obtained from elemental analysis and spectrophotometry, the optimum parameters of the curve fitting method were determined: Tougaard background, FWHM 1.2, 50% Lorentzian-Gaussian. XPS can thus be used as a practical tool to analyze the nitrogen functional groups of sewage sludge, reflecting the real content of nitrogen in its different forms.
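A sketch of the peak model implied by these parameters: five fixed-position components of fixed FWHM, each a 50% Lorentzian-Gaussian (pseudo-Voigt) mix, fitted for amplitude after background subtraction. The spectrum below is synthetic, and in practice the strongly overlapping pyrrole-N and protein-N components would need additional constraints:

```python
import numpy as np
from scipy.optimize import curve_fit

FWHM, ETA = 1.2, 0.5   # from the abstract: 1.2 eV width, 50% Lorentzian-Gaussian mix

def pseudo_voigt(x, x0, amp):
    """Fixed-width 50% Lorentzian / 50% Gaussian peak."""
    sigma = FWHM / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-(x - x0)**2 / (2.0 * sigma**2))
    lorentz = 1.0 / (1.0 + ((x - x0) / (FWHM / 2.0))**2)
    return amp * (ETA * lorentz + (1.0 - ETA) * gauss)

CENTERS = [398.7, 400.5, 400.4, 401.1, 403.5]  # pyridine, pyrrole, protein, ammonium, N-oxide

def n1s_model(x, *amps):
    """Sum of the five fixed-position N 1s components; only amplitudes are fitted."""
    return sum(pseudo_voigt(x, c, a) for c, a in zip(CENTERS, amps))

# Hypothetical Tougaard-background-subtracted N 1s spectrum
be = np.linspace(396.0, 406.0, 300)            # binding energy, eV
spec = n1s_model(be, 120, 300, 250, 180, 40)
spec += np.random.default_rng(3).normal(0, 5, be.size)

amps, _ = curve_fit(n1s_model, be, spec, p0=[100.0] * 5, bounds=(0, np.inf))
print('relative N contents:', amps / amps.sum())
```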
Item Response Theory with Estimation of the Latent Density Using Davidian Curves
ERIC Educational Resources Information Center
Woods, Carol M.; Lin, Nan
2009-01-01
Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…
Turbine blade profile design method based on Bezier curves
NASA Astrophysics Data System (ADS)
Alexeev, R. A.; Tishchenko, V. A.; Gribin, V. G.; Gavrilov, I. Yu.
2017-11-01
In this paper, a technique for two-dimensional parametric blade profile design is presented. Bezier curves are used to create the profile geometry. The main feature of the proposed method is an adaptive approach to fitting the curves to given geometric conditions. The profile shape is calculated by a multi-dimensional minimization method with a number of restrictions imposed on the blade geometry. The proposed method has been used to describe the parametric geometry of a known blade profile. The baseline geometry was then modified by varying some parameters of the blade. Numerical calculations of the obtained designs were carried out. The results of the calculations demonstrate the efficiency of the chosen approach.
NASA Technical Reports Server (NTRS)
Wilson, R. M.; Reichmann, E. J.; Teuber, D. L.
1984-01-01
An empirical method is developed to predict certain parameters of future solar activity cycles. Sunspot cycle statistics are examined, and curve fitting and linear regression analysis techniques are utilized.
A new method for the automatic interpretation of Schlumberger and Wenner sounding curves
Zohdy, A.A.R.
1989-01-01
A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples.
Analysis of Wien filter spectra from Hall thruster plumes.
Huang, Wensheng; Shastry, Rohit
2015-07-01
A method for analyzing the Wien filter spectra obtained from the plumes of Hall thrusters is derived and presented. The new method extends upon prior work by deriving the integration equations for the current and species fractions. Wien filter spectra from the plume of the NASA-300M Hall thruster are analyzed with the presented method and the results are used to examine key trends. The new integration method is found to produce results slightly different from the traditional area-under-the-curve method. The use of different velocity distribution forms when performing curve-fits to the peaks in the spectra is compared. Additional comparison is made with the scenario where the current fractions are assumed to be proportional to the heights of peaks. The comparison suggests that the calculated current fractions are not sensitive to the choice of form as long as both the height and width of the peaks are accounted for. Conversely, forms that only account for the height of the peaks produce inaccurate results. Also presented are the equations for estimating the uncertainty associated with applying curve fits and charge-exchange corrections. These uncertainty equations can be used to plan the geometry of the experimental setup.
On the convexity of ROC curves estimated from radiological test results.
Pesce, Lorenzo L; Metz, Charles E; Berbaum, Kevin S
2010-08-01
Although an ideal observer's receiver operating characteristic (ROC) curve must be convex (i.e., its slope must decrease monotonically), published fits to empirical data often display "hooks." Such fits sometimes are accepted on the basis of an argument that experiments are done with real, rather than ideal, observers. However, the fact that ideal observers must produce convex curves does not imply that convex curves describe only ideal observers. This article aims to identify the practical implications of nonconvex ROC curves and the conditions that can lead to empirical or fitted ROC curves that are not convex. This article views nonconvex ROC curves from historical, theoretical, and statistical perspectives, which we describe briefly. We then consider population ROC curves with various shapes and analyze the types of medical decisions that they imply. Finally, we describe how sampling variability and curve-fitting algorithms can produce ROC curve estimates that include hooks. We show that hooks in population ROC curves imply the use of an irrational decision strategy, even when the curve does not cross the chance line, and therefore usually are untenable in medical settings. Moreover, we sketch a simple approach to improve any nonconvex ROC curve by adding statistical variation to the decision process. Finally, we sketch how to test whether hooks present in ROC data are likely to have been caused by chance alone and how some hooked ROCs found in the literature can be easily explained as fitting artifacts or modeling issues. In general, ROC curve fits that show hooks should be looked on with suspicion unless other arguments justify their presence. Copyright © 2010 AUR. Published by Elsevier Inc. All rights reserved.
Analysis of mixed model in gear transmission based on ADAMS
NASA Astrophysics Data System (ADS)
Li, Xiufeng; Wang, Yabin
2012-09-01
The traditional methods of mechanical gear drive simulation include the gear-pair method and the solid-to-solid contact method. The former has higher solution efficiency but lower result accuracy; the latter usually yields higher precision, but the calculation process is complex and convergence is difficult. Currently, most research focuses on the description of geometric models and the definition of boundary conditions, but none of it solves these problems fundamentally. To improve simulation efficiency while ensuring high accuracy, a mixed-model method is presented, which uses gear tooth profiles in place of the solid gear to simulate gear movement. In the modeling process, the solid models of the mechanism are first built in SolidWorks; the point coordinates of the gear outline curves are then collected using the SolidWorks API, and fit curves are created in Adams from those coordinates; next, the positions of the fitted curves are adjusted according to the position of the contact area; finally, the loading conditions, boundary conditions and simulation parameters are defined. The method provides gear shape information via tooth profile curves, simulates the meshing process through tooth-profile curve-to-curve contact, and supplies mass and inertia data via the solid gear models. The simulation process combines the two models to complete the gear drive analysis. To verify the validity of the method, both theoretical derivation and numerical simulation of a runaway escapement were conducted. The results show that the computational efficiency of the mixed-model method is 1.4 times that of the traditional solid-to-solid contact method, while the simulation results agree more closely with theoretical calculations. Consequently, the mixed-model method has high application value for studying the dynamics of gear mechanisms.
Estimating non-isothermal bacterial growth in foods from isothermal experimental data.
Corradini, M G; Peleg, M
2005-01-01
To develop a mathematical method to estimate non-isothermal microbial growth curves in foods from experiments performed under isothermal conditions, and to demonstrate the method's applicability with published growth data. Published isothermal growth curves of Pseudomonas spp. in refrigerated fish at 0-8 °C and Escherichia coli 1952 in a nutritional broth at 27.6-36 °C were fitted with two different three-parameter 'primary models', and the temperature dependence of their parameters was fitted by ad hoc empirical 'secondary models'. These were used to generate non-isothermal growth curves by numerically solving a differential equation derived on the premise that the momentary non-isothermal growth rate is the isothermal rate at the momentary temperature, at the time that corresponds to the momentary growth level of the population. The predicted non-isothermal growth curves were in agreement with the reported experimental ones and, as expected, the quality of the predictions did not depend on the 'primary model' chosen for the calculation. A common type of sigmoid growth curve can be adequately described by three-parameter 'primary models'. At least in the two systems examined, these could be used to predict growth patterns under a variety of continuous and discontinuous non-isothermal temperature profiles. The described mathematical method, once validated experimentally, will enable simulation of the microbial quality of stored and transported foods under a large variety of existing or contemplated commercial temperature histories.
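The premise described above translates into an ODE whose right-hand side evaluates the isothermal rate at the time when the isothermal curve reaches the current growth level. A sketch with a logistic 'primary model' (chosen because it inverts analytically) and invented 'secondary models'; all coefficients are hypothetical:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical 'primary model': logistic isothermal growth
# Y(t) = a / (1 + exp(k*(tc - t))), with invented 'secondary models' for the
# temperature dependence of its parameters (all coefficients are made up).
a  = lambda T: 6.0                     # asymptotic growth level
k  = lambda T: 0.02 + 0.01 * T         # rate constant, rises with temperature
tc = lambda T: 60.0 - 4.0 * T          # inflection time, falls with temperature

def iso_rate(t, T):
    """Slope of the isothermal growth curve at temperature T and time t."""
    e = np.exp(k(T) * (tc(T) - t))
    return a(T) * k(T) * e / (1.0 + e)**2

def iso_time_at_level(Y, T):
    """Time at which the isothermal curve at T passes through level Y."""
    return tc(T) - np.log(a(T) / Y - 1.0) / k(T)

def dYdt(t, Y, T_profile):
    """Momentary rate = isothermal rate at the momentary temperature,
    taken at the time corresponding to the momentary growth level."""
    T = T_profile(t)
    return iso_rate(iso_time_at_level(Y[0], T), T)

T_profile = lambda t: 4.0 + 4.0 * np.sin(2.0 * np.pi * t / 48.0)  # storage temp, deg C
sol = solve_ivp(dYdt, (0.0, 120.0), [0.01], args=(T_profile,), max_step=0.5)
print(f'predicted growth level at 120 h: {sol.y[0, -1]:.2f}')
```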
NASA Astrophysics Data System (ADS)
Cai, Gaoshen; Wu, Chuanyu; Gao, Zepu; Lang, Lihui; Alexandrov, Sergei
2018-05-01
An elliptical warm/hot sheet bulging test under different temperatures and pressure rates was carried out to predict the Al-alloy sheet forming limit during warm/hot sheet hydroforming. Using the relevant ultimate-strain formulas to process the experimental data, forming limit curves (FLCs) in the tension-tension state of strain (TTSS) region were obtained. Combined with basic experimental data obtained from uniaxial tensile tests under conditions equivalent to the bulging test, complete forming limit diagrams (FLDs) of the Al-alloy were established. Using a quadratic polynomial curve fitting method, the material constants of the fitting function were calculated and a prediction model equation for the sheet metal forming limit was established, from which the corresponding forming limit curves in the TTSS region can be obtained. The bulging test and fitting results indicated that the sheet metal FLCs obtained were very accurate, and the model equation can be used to guide warm/hot sheet bulging tests.
NASA Astrophysics Data System (ADS)
Parente, Mario; Makarewicz, Heather D.; Bishop, Janice L.
2011-04-01
This study advances curve-fitting modeling of absorption bands of reflectance spectra and applies this new model to spectra of Martian meteorites ALH 84001 and EETA 79001 and data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). This study also details a recently introduced automated parameter initialization technique. We assess the performance of this automated procedure by comparing it to the currently available initialization method and perform a sensitivity analysis of the fit results to variation in initial guesses. We explore the issues related to the removal of the continuum, offer guidelines for continuum removal when modeling the absorptions and explore different continuum-removal techniques. We further evaluate the suitability of curve fitting techniques using Gaussians/Modified Gaussians to decompose spectra into individual end-member bands. We show that nonlinear least squares techniques such as the Levenberg-Marquardt algorithm achieve comparable results to the MGM model ( Sunshine and Pieters, 1993; Sunshine et al., 1990) for meteorite spectra. Finally we use Gaussian modeling to fit CRISM spectra of pyroxene and olivine-rich terrains on Mars. Analysis of CRISM spectra of two regions show that the pyroxene-dominated rock spectra measured at Juventae Chasma were modeled well with low Ca pyroxene, while the pyroxene-rich spectra acquired at Libya Montes required both low-Ca and high-Ca pyroxene for a good fit.
Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee
2013-07-01
Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
High pressure melting curve of platinum up to 35 GPa
NASA Astrophysics Data System (ADS)
Patel, Nishant N.; Sunder, Meenakshi
2018-04-01
The melting curve of platinum (Pt) has been measured up to 35 GPa using our laboratory-based laser-heated diamond anvil cell (LHDAC) facility. The laser speckle method was employed to detect the onset of melting. The high-pressure melting curve of Pt obtained in the present study has been compared with previously reported experimental and theoretical results, and it agrees well, within experimental error, with the results of Kavner et al. Fitting the experimental data with the Simon equation gives (∂Tm/∂P) ≈ 25 K/GPa at P ≈ 1 MPa.
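Fitting the Simon(-Glatzel) equation becomes a two-parameter nonlinear fit once the ambient-pressure melting point is fixed (about 2041 K for Pt); the initial slope then follows by differentiating the fitted curve at zero pressure. A sketch with synthetic melting points, not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

T0 = 2041.0  # ambient-pressure melting point of Pt, K

def simon(P, a, c):
    """Simon(-Glatzel) melting equation: Tm = T0*(1 + P/a)**(1/c)."""
    return T0 * (1.0 + P / a)**(1.0 / c)

# Hypothetical LHDAC melting points: pressure (GPa), temperature (K)
P = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
Tm = np.array([2160.0, 2270.0, 2370.0, 2460.0, 2550.0, 2630.0, 2700.0])

(a, c), _ = curve_fit(simon, P, Tm, p0=[20.0, 3.0], bounds=([1.0, 0.5], [1000.0, 20.0]))
print(f'a = {a:.1f} GPa, c = {c:.2f}')
print(f'initial slope dTm/dP = {T0 / (a * c):.1f} K/GPa')  # compare with ~25 K/GPa
```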
NASA Astrophysics Data System (ADS)
Salim, Samir; Boquien, Médéric; Lee, Janice C.
2018-05-01
We study the dust attenuation curves of 230,000 individual galaxies in the local universe, ranging from quiescent to intensely star-forming systems, using GALEX, SDSS, and WISE photometry calibrated on the Herschel ATLAS. We use a new method of constraining SED fits with infrared luminosity (SED+LIR fitting), and parameterized attenuation curves determined with the CIGALE SED-fitting code. Attenuation curve slopes and UV bump strengths are reasonably well constrained independently from one another. We find that $A_\lambda/A_V$ attenuation curves exhibit a very wide range of slopes that are on average as steep as the curve slope of the Small Magellanic Cloud (SMC). The slope is a strong function of optical opacity. Opaque galaxies have shallower curves, in agreement with recent radiative transfer models. The dependence of slopes on the opacity produces an apparent dependence on stellar mass: more massive galaxies have shallower slopes. Attenuation curves exhibit a wide range of UV bump amplitudes, from none to Milky Way (MW)-like, with an average strength one-third that of the MW bump. Notably, local analogs of high-redshift galaxies have an average curve that is somewhat steeper than the SMC curve, with a modest UV bump that can be, to first order, ignored, as its effect on the near-UV magnitude is 0.1 mag. Neither the slopes nor the strengths of the UV bump depend on gas-phase metallicity. Functional forms for attenuation laws are presented for normal star-forming galaxies, high-z analogs, and quiescent galaxies. We release the catalog of associated star formation rates and stellar masses (GALEX–SDSS–WISE Legacy Catalog 2).
An accurate surface topography restoration algorithm for white light interferometry
NASA Astrophysics Data System (ADS)
Yuan, He; Zhang, Xiangchao; Xu, Min
2017-10-01
As an important measurement technique, white light interferometry enables fast, non-contact measurement and is therefore widely used in ultra-precision engineering. However, the traditional algorithms for recovering surface topography have flaws and limits. In this paper, we propose a new algorithm, a combination of the Fourier transform and an improved polynomial fitting method, to solve these problems. Because the white light interference signal is usually expressed as a cosine signal whose amplitude is modulated by a Gaussian function, its fringe visibility is not constant and varies with scanning position. The interference signal is first processed by Fourier transform; the positive-frequency part is then selected and moved back to the center of the amplitude-frequency curve. To restore the surface morphology, a polynomial fitting method is used to fit the amplitude curve after the inverse Fourier transform and obtain the corresponding topography information. The new method is compared with the traditional algorithms, and it is shown that the aforementioned drawbacks can be effectively overcome. The relative error is less than 0.8%.
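A sketch of the envelope-recovery pipeline described in this abstract: FFT, selection of the positive-frequency lobe, shift to baseband, inverse FFT, then a local polynomial fit of the amplitude curve. The parabolic (second-order) peak refinement is a simplification of the paper's improved polynomial fitting, and the correlogram is synthetic:

```python
import numpy as np

def envelope_peak(z, signal, fit_halfwidth=10):
    """Locate the coherence-envelope peak of a white-light correlogram."""
    n = z.size
    spec = np.fft.fft(signal - signal.mean())
    spec[n // 2:] = 0.0                    # keep only the positive-frequency part
    kc = np.argmax(np.abs(spec[:n // 2]))  # carrier frequency bin
    spec = np.roll(spec, -kc)              # move the lobe back to zero frequency
    env = np.abs(np.fft.ifft(spec))        # demodulated amplitude envelope
    i = np.argmax(env)
    s = slice(max(i - fit_halfwidth, 0), min(i + fit_halfwidth + 1, n))
    p = np.polyfit(z[s], env[s], 2)        # local polynomial fit near the peak
    return -p[1] / (2.0 * p[0])            # vertex of the fitted parabola

# Synthetic correlogram: Gaussian-modulated cosine centred at z0 = 1.25 um
z = np.linspace(0.0, 2.5, 1024)            # scan position, um
z0, wavelength = 1.25, 0.6
sig = np.exp(-((z - z0) / 0.4)**2) * np.cos(4.0 * np.pi * (z - z0) / wavelength)
print(f'recovered surface height: {envelope_peak(z, sig):.4f} um')
```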
Modelling Schumann resonances from ELF measurements using non-linear optimization methods
NASA Astrophysics Data System (ADS)
Castro, Francisco; Toledo-Redondo, Sergio; Fornieles, Jesús; Salinas, Alfonso; Portí, Jorge; Navarro, Enrique; Sierra, Pablo
2017-04-01
Schumann resonances (SR) can be found in planetary atmospheres, inside the cavity formed by the conducting surface of the planet and the lower ionosphere. They are a powerful tool for investigating both the electric processes that occur in the atmosphere and the characteristics of the surface and the lower ionosphere. In this study, the measurements were obtained at the ELF (Extremely Low Frequency) Juan Antonio Morente station located in the Sierra Nevada national park. The first three modes, contained in the frequency band from 6 to 25 Hz, are considered. For each time series recorded by the station, the amplitude spectrum was estimated using Bartlett averaging. Then, the central frequencies and amplitudes of the SRs were obtained by fitting the spectrum with non-linear functions. In the poster, a study of nonlinear unconstrained optimization methods applied to the estimation of the Schumann resonances is presented. Non-linear fitting, also known as optimization, is the procedure followed in obtaining Schumann resonances from the natural electromagnetic noise. The optimization methods analysed are: Levenberg-Marquardt, Conjugate Gradient, Gradient, Newton and Quasi-Newton. The function fitted to the data by the different methods is three Lorentzian curves plus a straight line; Gaussian curves have also been considered. The conclusions of this study are as follows: i) natural electromagnetic noise is better fitted using Lorentzian functions; ii) the measurement bandwidth can accelerate the convergence of the optimization method; iii) the Gradient method converges most slowly and has the highest mean squared error (MSE) between the measurement and the fitted function, whereas the Levenberg-Marquardt, Conjugate Gradient and Quasi-Newton methods give similar results (the Newton method presents a higher MSE); iv) there are differences in the MSE between the parameters that define the fit function, and an interval from 1% to 5% has been found.
TH-EF-207A-04: A Dynamic Contrast Enhanced Cone Beam CT Technique for Evaluation of Renal Functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Z; Shi, J; Yang, Y
Purpose: To develop a simple but robust method for the early detection and evaluation of renal function using a dynamic contrast enhanced cone beam CT technique. Methods: Experiments were performed on an integrated imaging and radiation research platform developed by our lab. Animals (n=3) were anesthetized with a 20 µL Ketamine/Xylazine cocktail, and then received a 200 µL injection of the iodinated contrast agent Iopamidol via the tail vein. Cone beam CT was acquired following contrast injection once per minute for up to 25 minutes. The cone beam CT was reconstructed with a dimension of 300×300×800 voxels at 130×130×130 µm voxel resolution. The middle kidney slices in the transverse and coronal planes were selected for image analysis. A double exponential function was used to fit the contrast-enhanced signal intensity versus the time after contrast injection. Both pixel-based and region of interest (ROI)-based curve fitting were performed. Four parameters obtained from the curve fitting, namely the amplitude and flow constant for both the contrast wash-in and wash-out phases, were investigated for further analysis. Results: Robust curve fitting was demonstrated for both pixel-based (with R² > 0.8 for >85% of pixels within the kidney contour) and ROI-based (R² > 0.9 for all regions) analysis. Three different functional regions (renal pelvis, medulla and cortex) were clearly differentiated in the functional parameter map in the pixel-based analysis. ROI-based analysis showed that the half-lives T1/2 for the contrast wash-in and wash-out phases were 0.98±0.15 and 17.04±7.16, 0.63±0.07 and 17.88±4.51, and 1.48±0.40 and 10.79±3.88 minutes for the renal pelvis, medulla and cortex, respectively. Conclusion: A robust method based on dynamic contrast enhanced cone beam CT and double exponential curve fitting has been developed to analyze renal function in different functional regions. A future study will investigate the sensitivity of this technique in detecting radiation-induced kidney dysfunction.
Nonlinear Constitutive Modeling of Piezoelectric Ceramics
NASA Astrophysics Data System (ADS)
Xu, Jia; Li, Chao; Wang, Haibo; Zhu, Zhiwen
2017-12-01
Nonlinear constitutive modeling of piezoelectric ceramics is discussed in this paper. A Van der Pol term is introduced to explain the simple hysteretic curve, and improved nonlinear difference terms are used to describe the hysteresis phenomena of piezoelectric ceramics. The fit of the model to experimental data is verified by the partial least-squares regression method. The results show that this method describes the real curve well. The results of this paper are helpful for piezoelectric ceramic constitutive modeling.
Enrollment Projection within a Decision-Making Framework.
ERIC Educational Resources Information Center
Armstrong, David F.; Nunley, Charlene Wenckowski
1981-01-01
Two methods used to predict enrollment at Montgomery College in Maryland are compared and evaluated, and the administrative context in which they are used is considered. The two methods involve time series analysis (curve fitting) and indicator techniques (yield from components). (MSE)
Cai, Jing; Li, Shan; Zhang, Haixin; Zhang, Shuoxin; Tyree, Melvin T
2014-01-01
Vulnerability curves (VCs) generally can be fitted to the Weibull equation; however, a growing number of VCs appear to be recalcitrant, that is, deviate from a Weibull but seem to fit dual Weibull curves. We hypothesize that dual Weibull curves in Hippophae rhamnoides L. are due to different vessel diameter classes, inter-vessel hydraulic connections or vessels versus fibre tracheids. We used dye staining techniques, hydraulic measurements and quantitative anatomy measurements to test these hypotheses. The fibres contribute 1.3% of the total stem conductivity, which eliminates the hypothesis that fibre tracheids account for the second Weibull curve. Nevertheless, the staining pattern of vessels and fibre tracheids suggested that fibres might function as a hydraulic bridge between adjacent vessels. We also argue that fibre bridges are safer than vessel-to-vessel pits and put forward the concept as a new paradigm. Hence, we tentatively propose that the first Weibull curve may be accounted by vessels connected to each other directly by pit fields, while the second Weibull curve is associated with vessels that are connected almost exclusively by fibre bridges. Further research is needed to test the concept of fibre bridge safety in species that have recalcitrant or normal Weibull curves. © 2013 John Wiley & Sons Ltd.
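A dual-Weibull vulnerability curve of the kind discussed here can be fitted as a weighted mixture of two single-Weibull curves, one per hypothesized vessel population. A sketch with invented percent-loss-of-conductivity data; the parameterization, starting values and bounds are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_plc(P, b, c):
    """Single-Weibull vulnerability curve: percent loss of conductivity."""
    return 100.0 * (1.0 - np.exp(-(P / b)**c))

def dual_weibull_plc(P, w, b1, c1, b2, c2):
    """Mixture of two Weibulls, e.g. pit-field-connected vs fibre-bridged vessels."""
    return w * weibull_plc(P, b1, c1) + (1.0 - w) * weibull_plc(P, b2, c2)

# Hypothetical vulnerability data: xylem tension (MPa) vs PLC (%)
P = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
plc = np.array([5, 22, 38, 46, 50, 55, 65, 80, 92, 98], dtype=float)

popt, _ = curve_fit(dual_weibull_plc, P, plc,
                    p0=[0.5, 1.5, 3.0, 4.0, 6.0],
                    bounds=([0, 0.1, 0.5, 0.1, 0.5], [1, 10, 20, 10, 20]))
print(f'weight of first vessel population: {popt[0]:.2f}')
```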
Applying a Hypoxia-Incorporating TCP Model to Experimental Data on Rat Sarcoma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruggieri, Ruggero, E-mail: ruggieri.ruggero@gmail.com; Stavreva, Nadejda; Naccarato, Stefania
2012-08-01
Purpose: To verify whether a tumor control probability (TCP) model which mechanistically incorporates acute and chronic hypoxia is able to describe animal in vivo dose-response data exhibiting tumor reoxygenation. Methods and Materials: The investigated TCP model accounts for tumor repopulation, reoxygenation of chronic hypoxia, and fluctuating oxygenation of acute hypoxia. Using the maximum likelihood method, the model is fitted to Fischer-Moulder data on Wag/Rij rats, inoculated with rat rhabdomyosarcoma BA1112, and irradiated in vivo using different fractionation schemes. This data set is chosen because two of the experimental dose-response curves exhibit an inverse dose behavior, which is interpreted as due to reoxygenation. The tested TCP model is complex, and therefore in vivo cell survival data on the same BA1112 cell line from Reinhold were added to the Fischer-Moulder data and fitted simultaneously with a corresponding cell survival function. Results: The obtained fit to the combined Fischer-Moulder-Reinhold data was statistically acceptable. The best-fit values of the model parameters for which information exists were in the range of published values. The cell survival curves of well-oxygenated and hypoxic cells, computed using the best-fit values of the radiosensitivities and the initial number of clonogens, were in good agreement with the corresponding in vitro and in situ experiments of Reinhold. The best-fit values of most of the hypoxia-related parameters were used to recompute the TCP for non-small cell lung cancer patients as a function of the number of fractions, TCP(n). Conclusions: The investigated TCP model adequately describes animal in vivo data exhibiting tumor reoxygenation. The TCP(n) curve computed for non-small cell lung cancer patients with the best-fit values of most of the hypoxia-related parameters confirms the previously obtained abrupt reduction in TCP for n < 10, thus warning against the adoption of severely hypofractionated schedules.
Efficient High-Pressure State Equations
NASA Technical Reports Server (NTRS)
Harstad, Kenneth G.; Miller, Richard S.; Bellan, Josette
1997-01-01
A method is presented for a relatively accurate, noniterative, computationally efficient calculation of high-pressure fluid-mixture equations of state, especially targeted at gas turbines and rocket engines. Pressures above 1 bar and temperatures above 100 K are addressed. The method is based on curve fitting an effective reference state relative to departure functions formed using the Peng-Robinson cubic state equation. Fit parameters for H2, O2, N2, propane, methane, n-heptane, and methanol are given.
NASA Astrophysics Data System (ADS)
Ghnimi, Thouraya; Hassini, Lamine; Bagane, Mohamed
2016-12-01
The aim of this work is to determine the desorption isotherms and the drying kinetics of bay laurel leaves (Laurus nobilis L.). The desorption isotherms were measured at three temperatures (50, 60 and 70 °C) and at water activities ranging from 0.057 to 0.88 using the static gravimetric method. Five sorption models were used to fit the experimental desorption isotherm data. The Kuhn model was found to offer the best fit of the experimental moisture isotherms over the investigated ranges of temperature and water activity. The net isosteric heat of water desorption was evaluated using the Clausius-Clapeyron equation and was best correlated to equilibrium moisture content by the empirical Tsami equation. Thin-layer convective drying curves of bay laurel leaves were obtained for temperatures of 45, 50, 60 and 70 °C, relative humidities of 5, 15, 30 and 45% and air velocities of 1, 1.5 and 2 m/s. A nonlinear Levenberg-Marquardt regression procedure was used to fit the drying curves with five semi-empirical mathematical models available in the literature; R² and χ² were used to evaluate the goodness of fit of the models to the data. Based on the experimental drying curves, the drying characteristic curve (DCC) was established and fitted with a third-degree polynomial function. The Midilli-Kucuk model was found to be the best semi-empirical model describing the thin-layer drying kinetics of bay laurel leaves. The effective moisture diffusivity and activation energy of bay laurel leaves were also identified.
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Shi, Xiaodong; Udpa, Lalita; Deng, Yiming
2018-05-01
Magnetic Barkhausen noise (MBN) was measured in low carbon steels, and the relationship between carbon content and a parameter extracted from the MBN signal has been investigated. The parameter is extracted experimentally by fitting the original profiles with two Gaussian curves. The gap between the two peaks (ΔG) of the fitted Gaussian curves shows a good linear relationship with the carbon content of the samples in the experiment. The result has been validated by Monte Carlo simulation. To ensure measurement sensitivity, the advanced multi-objective optimization algorithm non-dominated sorting genetic algorithm III (NSGA-III) has been used to optimize the magnetic core of the sensor.
The curvature of sensitometric curves for Kodak XV-2 film irradiated with photon and electron beams.
van Battum, L J; Huizenga, H
2006-07-01
Sensitometric curves of Kodak XV-2 film, obtained over a period of ten years with various types of equipment, have been analyzed for both photon and electron beams. The sensitometric slope in the dataset varies by more than a factor of 2, which is attributed mainly to variations in developer conditions. In the literature, the single-hit equation has been proposed as a model for the sensitometric curve, with the sensitivity and the maximum optical density as parameters. In this work, the single-hit equation has been translated into a polynomial-like function with the sensitometric slope and curvature as parameters. The model has been applied to fit the sensitometric data. If the dataset is fitted for each sensitometric curve separately, a large variation is observed for both fit parameters. When the sensitometric curves are fitted simultaneously, it appears that all curves can be fitted adequately with a sensitometric curvature that is related to the sensitometric slope; when each curve is fitted separately, measurement uncertainty apparently hides this relation. The relation appears to depend only on the type of densitometer used; no significant differences between beam energies or beam modalities are observed. Using the intrinsic relation between slope and curvature when fitting sensitometric data, e.g., for pretreatment verification of intensity-modulated radiotherapy, will increase the accuracy of the sensitometric curve. A calibration at a single dose point, together with a predetermined densitometer-dependent parameter ODmax, will be adequate to find the actual relation between optical density and dose.
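A sketch of the single-hit fit with the sensitivity and maximum optical density as parameters; the slope and curvature at zero dose then follow from the derivatives, illustrating the relation exploited above. The calibration numbers are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def single_hit(D, od_max, alpha):
    """Single-hit model for net optical density versus dose."""
    return od_max * (1.0 - np.exp(-alpha * D))

# Hypothetical XV-2 calibration points: dose (cGy), net optical density
dose = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 80.0, 100.0])
od = np.array([0.0, 0.22, 0.41, 0.73, 0.97, 1.16, 1.30])

(od_max, alpha), _ = curve_fit(single_hit, dose, od, p0=[3.0, 0.01])
slope = od_max * alpha                    # sensitometric slope at zero dose
curvature = -od_max * alpha**2            # second derivative at zero dose
print(f'OD_max = {od_max:.2f}, slope = {slope:.4f} /cGy, curvature = {curvature:.2e}')
# With OD_max fixed per densitometer, a single calibration dose point
# suffices to recover alpha and hence the whole OD(D) relation.
```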
NASA Astrophysics Data System (ADS)
Ramasahayam, Veda Krishna Vyas; Diwakar, Anant; Bodi, Kowsik
2017-11-01
To study the flow of high-temperature air in vibrational and chemical equilibrium, accurate models for the thermodynamic state and transport phenomena are required. In the present work, the performance of a state equation model and two mixing rules for determining equilibrium air thermodynamic and transport properties is compared with that of curve fits. The thermodynamic state model considers 11 species and computes the flow chemistry by an iterative process; the mixing rules considered for viscosity are those of Wilke and Armaly-Sutton. The curve fits of Srinivasan, which are based on Grabau-type transition functions, are chosen for comparison. A two-dimensional Navier-Stokes solver is developed to simulate high-enthalpy flows, with numerical fluxes computed by AUSM+-up. The accuracy of the state equation model and the curve fits for thermodynamic properties is assessed using hypersonic inviscid flow over a circular cylinder. The performance of the mixing rules and curve fits for viscosity is compared using hypersonic laminar boundary layer prediction on a flat plate. It is observed that steady-state solutions from the state equation model and the curve fits match each other. Though the curve fits are significantly faster, the state equation model is more general and can be adapted to any flow composition.
Testing Modified Newtonian Dynamics with Low Surface Brightness Galaxies: Rotation Curve Fits
NASA Astrophysics Data System (ADS)
de Blok, W. J. G.; McGaugh, S. S.
1998-11-01
We present modified Newtonian dynamics (MOND) fits to 15 rotation curves of low surface brightness (LSB) galaxies. Good fits are readily found, although for a few galaxies minor adjustments to the inclination are needed. Reasonable values for the stellar mass-to-light ratios are found, as well as an approximately constant value for the total (gas and stars) mass-to-light ratio. We show that the LSB galaxies investigated here lie on the one, unique Tully-Fisher relation, as predicted by MOND. The scatter in the Tully-Fisher relation can be completely explained by the observed scatter in the total mass-to-light ratio. We address the question of whether MOND can fit any arbitrary rotation curve by constructing a plausible fake model galaxy. While MOND is unable to fit this hypothetical galaxy, a normal dark-halo fit is readily found, showing that dark matter halo models are much less selective in producing fits. The good fits to rotation curves of LSB galaxies support MOND, especially because these are galaxies with large mass discrepancies deep in the MOND regime.
Quantifying intervertebral disc mechanics: a new definition of the neutral zone
2011-01-01
Background The neutral zone (NZ) is the range over which a spinal motion segment (SMS) moves with minimal resistance. Clear as this may seem, the various methods to quantify the NZ described in the literature depend on rather arbitrary criteria. Here we present a stricter, more objective definition. Methods To mathematically represent the load-deflection behavior of an SMS, the asymmetric curve was fitted by a summed sigmoid function. The first derivative of this curve represents the SMS compliance, and the region with the highest compliance (minimal stiffness) is the NZ. To determine the boundaries of this region, the inflection points of the compliance can be used as unique points; these are defined by the maximum and the minimum in the second derivative of the fitted curve, respectively. The merits of the model were investigated experimentally: eight porcine lumbar SMSs were bent in flexion-extension, before and after seven hours of axial compression. Results The summed sigmoid function provided an excellent fit to the measured data (r2 > 0.976). The NZ by the new definition was on average 2.4 (range 0.82-7.4) times the NZ as determined by the more commonly used angulation difference at zero loading. Interestingly, the NZ consistently and significantly decreased after seven hours of axial compression when determined by the new definition. On the other hand, the NZ increased when defined as the angulation difference, probably reflecting the increase of hysteresis. The methods thus address different aspects of the load-deflection curve. Conclusions A strict mathematical definition of the NZ is proposed, based on the compliance of the SMS. This operational definition is objective, conceptually correct, and does not depend on arbitrarily chosen criteria. PMID:21299900
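A minimal sketch of the described workflow, under assumed conventions: a summed-sigmoid fit to synthetic moment-angle data, compliance as the first derivative, and the NZ bounded by the extrema of the second derivative. The particular sigmoid parameterization and all numerical values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def summed_sigmoid(m, a1, b1, c1, a2, b2, c2, d):
    return (a1 / (1 + np.exp(-b1 * (m - c1)))
            + a2 / (1 + np.exp(-b2 * (m - c2))) + d)

# Moment (Nm) vs. flexion-extension angle (deg): synthetic example.
m = np.linspace(-4, 4, 81)
theta_true = summed_sigmoid(m, 6, 2.5, -1.5, 6, 2.5, 1.5, -6)
theta = theta_true + np.random.default_rng(0).normal(0, 0.05, m.size)

p, _ = curve_fit(summed_sigmoid, m, theta,
                 p0=[5, 2, -1, 5, 2, 1, -5], maxfev=20000)

# Compliance = d(theta)/dM; NZ bounded by extrema of d2(theta)/dM2.
mm = np.linspace(m.min(), m.max(), 2001)
compliance = np.gradient(summed_sigmoid(mm, *p), mm)
d2 = np.gradient(compliance, mm)
nz_lo, nz_hi = mm[np.argmax(d2)], mm[np.argmin(d2)]
print(f"neutral zone: {nz_lo:.2f} to {nz_hi:.2f} Nm")
```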
Three-dimensional simulation of human teeth and its application in dental education and research
Koopaie, Maryam; Kolahdouz, Sajad
2016-01-01
Background: A comprehensive database comprising the geometry and properties of human teeth is needed for dentistry education and dental research. The aim of this study was to create a three-dimensional model of human teeth to improve dental e-learning and dental research. Methods: In this study, cross-sectional images were used to build the three-dimensional model of the teeth. CT scan images were used, with a spacing between cross-sectional images of about 200 to 500 micrometers. The hard tissue margin was detected in each image using Matlab (R2009b) as image-processing software. The images were then transferred to SolidWorks 2015 software. The tooth border curve was fitted with B-spline curves using a least-squares curve-fitting algorithm. After transferring all curves for each tooth to SolidWorks, the surface was created based on a surface-fitting technique. This surface was meshed in MeshLab (v1.3.2) software, and the surface was optimized using a remeshing technique. The mechanical properties of the teeth were applied to the dental model. Results: This study presents a methodology for communication between CT scan images and finite element and training software, through which modeling and simulation of the teeth were performed. Because cross-sectional images were used for modeling, the cost and time were reduced compared to other studies. Conclusion: The three-dimensional modeling method presented in this study facilitates learning for dental students and dentists. Based on the three-dimensional model proposed in this study, designing and manufacturing implants and dental prostheses are possible. PMID:28491836
NASA Astrophysics Data System (ADS)
Lee, Soojin; Cho, Woon Jo; Kim, Yang Do; Kim, Eun Kyu; Park, Jae Gwan
2005-07-01
White-light-emitting Si nanoparticles were prepared from the sodium silicide (NaSi) precursor. The photoluminescence of colloidal Si nanoparticles has been fitted by effective mass approximation (EMA). We analyzed the correlation between experimental photoluminescence and simulated fitting curves. Both the mean diameter and the size dispersion of the white-light-emitting Si nanoparticles were estimated.
Amato, Ernesto; Campennì, Alfredo; Leotta, Salvatore; Ruggeri, Rosaria M; Baldari, Sergio
2016-06-01
Radioiodine therapy is an effective and safe treatment for hyperthyroidism due to Graves' disease, toxic adenoma, and toxic multinodular goiter. We compared the outcomes of a traditional calculation method, based on an analytical fit of the uptake curve and subsequent dose calculation with the MIRD approach, with an alternative computation approach based on a formulation implemented in a public-access website, searching for the best timing of radioiodine uptake measurements in pre-therapeutic dosimetry. We report on sixty-nine hyperthyroid patients who were treated after pre-therapeutic dosimetry calculated by fitting a six-point uptake curve (3-168 h). To evaluate the results of the radioiodine treatment, patients were followed up to sixty-four months after treatment (mean 47.4±16.9). Patient dosimetry was then retrospectively recalculated with the two above-mentioned methods. Several time schedules for uptake measurements were considered, with different timings and total numbers of points. Early time schedules, sampling uptake up to 48 h, do not allow an accurate treatment plan to be set up, while schedules including the measurement at one week give significantly better results. The analytical fit procedure applied to the three-point time schedule 3(6)-24-168 h gave significantly more accurate results than the website approach exploiting either the same schedule or the single measurement at 168 h. Consequently, the best strategy among those considered is to sample the uptake at 3(6)-24-168 h and carry out an analytical fit of the curve, while extra measurements at 48 and 72 h lead to only marginal improvements in the accuracy of the therapeutic activity determination.
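The following is a hedged sketch of the analytical-fit step: a generic two-exponential uptake model (not necessarily the authors' functional form) is fitted to uptake fractions sampled on a schedule like the one quoted, and the fitted curve is integrated to give the time-integrated uptake used in MIRD-style dose estimates. All uptake values and parameters are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def uptake(t, a, lam_eff, k):
    # Rising-then-decaying uptake fraction (t in hours).
    return a * (np.exp(-lam_eff * t) - np.exp(-k * t))

t_h = np.array([3, 6, 24, 48, 96, 168.0])            # sampling times (h)
u = np.array([0.18, 0.29, 0.47, 0.44, 0.36, 0.25])   # uptake fraction

(a, lam_eff, k), _ = curve_fit(uptake, t_h, u, p0=[0.6, 0.004, 0.2])

# Time-integrated uptake (h): integral of the fitted curve, 0 to inf.
tia = a * (1.0 / lam_eff - 1.0 / k)
print(f"effective half-life = {np.log(2) / lam_eff:.0f} h, "
      f"integrated uptake = {tia:.0f} h")
```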
2011-01-01
Background Conservative scoliosis therapy according to the FITS Concept is applied as a standalone treatment or in combination with corrective bracing. The aim of the study was to present the authors' method of diagnosis and therapy for idiopathic scoliosis, FITS (Functional Individual Therapy of Scoliosis), and to analyze the early results of FITS therapy in a series of consecutive patients. Methods The analysis comprised separately: (1) single structural thoracic, thoracolumbar, or lumbar curves and (2) double structural scoliosis - thoracic and thoracolumbar or lumbar curves. The Cobb angle and Risser sign were analyzed at the initial stage and at the 2.8-year follow-up. The percentages of patients improved (defined as a decrease of the Cobb angle of more than 5 degrees), stable (+/- 5 degrees), and progressed (an increase of the Cobb angle of more than 5 degrees) were calculated. The clinical assessment comprised: the initial and follow-up Angle of Trunk Rotation (ATR), the plumb line imbalance, the scapulae level, and the distance from the apical spinous process of the primary curve to the plumb line. Results In Group A: (1) in single structural scoliosis 50.0% of patients improved, 46.2% were stable, and 3.8% progressed, while (2) in double scoliosis 50.0% of patients improved, 30.8% were stable, and 19.2% progressed. In Group B: (1) in single scoliosis 20.0% of patients improved, 80.0% were stable, and no patient progressed, while (2) in double scoliosis 28.1% of patients improved, 46.9% were stable, and 25.0% progressed. Conclusion The best results were obtained in 10-25 degree scoliosis, which argues for starting therapy before more structural changes become established within the spine. PMID:22122964
Mizuno, Ju; Mohri, Satoshi; Yokoyama, Takeshi; Otsuji, Mikiya; Arita, Hideko; Hanaoka, Kazuo
2017-02-01
Varying temperature affects cardiac systolic and diastolic function and the left ventricular (LV) pressure-time curve (PTC) waveform, which includes information about LV inotropism and lusitropism. Our proposed half-logistic (h-L) time constants, obtained by fitting h-L functions to four segmental phases (Phases I-IV) of the isovolumic LV PTC, are more useful indices for estimating LV inotropism and lusitropism during the contraction and relaxation periods than the mono-exponential (m-E) time constants at normal temperature. In this study, we investigated whether the superior goodness of fit of the h-L functions remained at hypothermia and hyperthermia. Phases I-IV in the isovolumic LV PTCs of eight excised, cross-circulated canine hearts at 33, 36, and 38 °C were analyzed using h-L and m-E functions and the least-squares method. The h-L and m-E time constants for Phases I-IV shortened significantly with increasing temperature. Curve fitting using h-L functions was significantly better than that using m-E functions for Phases I-IV at all temperatures. Therefore, the superior goodness of fit of the h-L functions over the m-E functions remained at all temperatures. As LV inotropic and lusitropic indices, the temperature-dependent h-L time constants could be more useful than the m-E time constants for Phases I-IV.
Surface Fitting for Quasi Scattered Data from Coordinate Measuring Systems.
Mao, Qing; Liu, Shugui; Wang, Sen; Ma, Xinhui
2018-01-13
Non-uniform rational B-spline (NURBS) surface fitting from data points is widely used in the fields of computer-aided design (CAD), medical imaging, cultural relic representation, and object-shape detection. Usually, the measured data acquired from coordinate measuring systems are neither gridded nor completely scattered. The distribution of this kind of data is scattered in physical space, but the data points are stored in a way consistent with the order of measurement, so they are named quasi scattered data in this paper. They can therefore be organized into rows easily, but the number of points in each row is random. To overcome the difficulty of surface fitting from this kind of data, a new method based on resampling is proposed. It consists of three major steps: (1) NURBS curve fitting for each row, (2) resampling on the fitted curve, and (3) surface fitting from the resampled data. An iterative projection optimization scheme is applied in the first and third steps to yield an advisable parameterization and reduce the time cost of projection. A resampling approach based on parameters, local peaks, and contour curvature is proposed to overcome the problems of node redundancy and high time consumption in the fitting of this kind of scattered data. Numerical experiments are conducted with both simulated and practical data, and the results show that the proposed method is fast, effective, and robust. Moreover, analysis of the fitting results obtained from data with different degrees of scatter demonstrates that the error introduced by resampling is negligible, and the method is therefore feasible.
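A rough sketch of steps (1) and (2) for a single measured row, using SciPy's parametric spline routines; note that splprep fits non-rational B-splines rather than full NURBS, and the test row below is synthetic.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def resample_row(points, n_out=50, smooth=0.0):
    # points: (m, 3) array for one measured row; returns (n_out, 3).
    tck, _ = splprep(points.T, s=smooth)     # parametric cubic B-spline
    u_new = np.linspace(0.0, 1.0, n_out)     # uniform parameter grid
    return np.array(splev(u_new, tck)).T

# One noisy row along a quarter circle, with an arbitrary point count.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, np.pi / 2, 37))
row = np.c_[np.cos(t), np.sin(t), 0.1 * t] + rng.normal(0, 1e-3, (37, 3))

grid_row = resample_row(row)
print(grid_row.shape)   # (50, 3): ready for tensor-product surface fitting
```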
Evaluating Model Fit for Growth Curve Models: Integration of Fit Indices from SEM and MLM Frameworks
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.; Taylor, Aaron B.
2009-01-01
Evaluating overall model fit for growth curve models involves 3 challenging issues. (a) Three types of longitudinal data with different implications for model fit may be distinguished: balanced on time with complete data, balanced on time with data missing at random, and unbalanced on time. (b) Traditional work on fit from the structural equation…
Estimating the Area Under ROC Curve When the Fitted Binormal Curves Demonstrate Improper Shape.
Bandos, Andriy I; Guo, Ben; Gur, David
2017-02-01
The "binormal" model is the most frequently used tool for parametric receiver operating characteristic (ROC) analysis. The binormal ROC curves can have "improper" (non-concave) shapes that are unrealistic in many practical applications, and several tools (eg, PROPROC) have been developed to address this problem. However, due to the general robustness of binormal ROCs, the improperness of the fitted curves might carry little consequence for inferences about global summary indices, such as the area under the ROC curve (AUC). In this work, we investigate the effect of severe improperness of fitted binormal ROC curves on the reliability of AUC estimates when the data arise from an actually proper curve. We designed theoretically proper ROC scenarios that induce severely improper shape of fitted binormal curves in the presence of well-distributed empirical ROC points. The binormal curves were fitted using maximum likelihood approach. Using simulations, we estimated the frequency of severely improper fitted curves, bias of the estimated AUC, and coverage of 95% confidence intervals (CIs). In Appendix S1, we provide additional information on percentiles of the distribution of AUC estimates and bias when estimating partial AUCs. We also compared the results to a reference standard provided by empirical estimates obtained from continuous data. We observed up to 96% of severely improper curves depending on the scenario in question. The bias in the binormal AUC estimates was very small and the coverage of the CIs was close to nominal, whereas the estimates of partial AUC were biased upward in the high specificity range and downward in the low specificity range. Compared to a non-parametric approach, the binormal model led to slightly more variable AUC estimates, but at the same time to CIs with more appropriate coverage. The improper shape of the fitted binormal curve, by itself, ie, in the presence of a sufficient number of well-distributed points, does not imply unreliable AUC-based inferences. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud
NASA Astrophysics Data System (ADS)
Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.
2018-04-01
To address the lack of applicable analysis methods when applying three-dimensional laser scanning technology to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vectors of a point cloud is proposed. First, a kd-tree is used to establish the topological relations. Datum points are detected by tracking the point-cloud normal vectors determined from the normal vectors of local planes. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of the radial point are calculated from the fitted curve, and the deformation information is then analyzed. The proposed approach was verified on a real large-scale tank data set captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain the entire information of the monitored object quickly and comprehensively, and accurately reflect the deformation of the datum features.
Curve fitting and modeling with splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
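A toy backward-elimination knot selection in the spirit of the programs described; this simplified Python illustration (with an ad hoc stopping rule) is not the original FORTRAN code.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)

def sse(kn):
    # Sum of squared residuals of a cubic LSQ spline with knots kn.
    return float(LSQUnivariateSpline(x, y, kn, k=3).get_residual())

knots = list(np.linspace(1, 9, 9))   # generous initial interior knots
current = sse(knots)
while len(knots) > 1:
    trials = [sse(knots[:i] + knots[i + 1:]) for i in range(len(knots))]
    best = int(np.argmin(trials))
    if trials[best] > 1.05 * current:   # elimination hurts too much: stop
        break
    current = trials[best]
    del knots[best]

print(f"{len(knots)} knots kept:", np.round(knots, 2))
```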
Long-term predictive capability of erosion models
NASA Technical Reports Server (NTRS)
Veerabhadra, P.; Buckley, D. H.
1983-01-01
A brief overview of long-term cavitation and liquid impingement erosion, and of the modeling methods proposed by different investigators, including the curve-fit approach, is presented. A table was prepared to highlight the number of variables necessary for each model in order to compute the erosion-versus-time curves. A power-law relation based on the average erosion rate is suggested, which may solve several modeling problems.
NASA Astrophysics Data System (ADS)
Afrand, Masoud; Hemmat Esfe, Mohammad; Abedini, Ehsan; Teimouri, Hamid
2017-03-01
The current paper first presents an empirical correlation, based on experimental results, for estimating the thermal conductivity enhancement of MgO-water nanofluid using a curve fitting method. Then, artificial neural networks (ANNs) with various numbers of neurons were assessed, with temperature and MgO volume fraction as the input variables and thermal conductivity enhancement as the output variable, to select the most appropriate and optimized network. Results indicated that the network with 7 neurons had the minimum error. Eventually, the output of the artificial neural network was compared with the results of the proposed empirical correlation and those of the experiments. Comparisons revealed that ANN modeling was more accurate than the curve-fitting method in predicting the thermal conductivity enhancement of the nanofluid.
Rankl, James G.
1982-01-01
This report describes a method to estimate infiltration rates of soils for use in estimating runoff from small basins. Average rainfall intensity is plotted against storm duration on log-log paper. All rainfall events are designated as having either runoff or nonrunoff. A power-decay-type curve is visually fitted to separate the two types of rainfall events. This separation curve is an incipient-ponding curve and its equation describes infiltration parameters for a soil. For basins with more than one soil complex, only the incipient-ponding curve for the soil complex with the lowest infiltration rate can be defined using the separation technique. Incipient-ponding curves for soils with infiltration rates greater than the lowest curve are defined by ranking the soils according to their relative permeabilities and optimizing the curve position. A comparison of results for six basins produced computed total runoff for all events used ranging from 16.6 percent less to 2.3 percent more than measured total runoff. (USGS)
A Global Optimization Method to Calculate Water Retention Curves
NASA Astrophysics Data System (ADS)
Maggi, S.; Caputo, M. C.; Turturro, A. C.
2013-12-01
Water retention curves (WRC) have a key role in the hydraulic characterization of soils and rocks. The behaviour of the medium is defined by relating the unsaturated water content to the matric potential. The experimental determination of WRCs requires an accurate and detailed measurement of the dependence of matric potential on water content, a time-consuming and error-prone process, in particular for rocky media. A complete experimental WRC needs at least a few tens of data points, distributed more or less uniformly from full saturation to oven dryness. Since each measurement requires waiting for steady-state conditions (from a few tens of minutes for soils up to several hours or days for rocks or clays), the whole process can even take a few months. The experimental data are fitted to the most appropriate parametric model, such as the widely used models of Van Genuchten, Brooks and Corey, and Rossi-Nimmo, to obtain the analytic WRC. We present here a new method for determining the parameters that best fit the models to the available experimental data. The method is based on differential evolution, an evolutionary computation algorithm particularly useful for multidimensional real-valued global optimization problems. With this method it is possible to strongly reduce the number of measurements necessary to optimize the model parameters that accurately describe the WRC of the samples, decreasing the time needed to adequately characterize the medium. In the present work, we have applied our method to calculate the WRCs of sedimentary carbonatic rocks of marine origin, belonging to the 'Calcarenite di Gravina' Formation (Middle Pliocene - Early Pleistocene) and coming from two different quarry districts in Southern Italy. [Figure: WRC curves calculated with the Van Genuchten model by simulated annealing (dashed curve) and differential evolution (solid curve), using 10 experimental data points randomly extracted from the full experimental dataset; simulated annealing is not able to find the optimal solution with this reduced data set.]
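A minimal sketch of the core idea: fitting the Van Genuchten retention model with SciPy's differential evolution optimizer. The data points and parameter bounds are invented for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

def van_genuchten(h, theta_r, theta_s, alpha, n):
    # theta(h) = theta_r + (theta_s - theta_r) / (1 + (alpha*h)^n)^(1-1/n)
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

h = np.array([1, 3, 10, 30, 100, 300, 1000, 3000.0])   # matric head (cm)
theta = np.array([0.42, 0.40, 0.37, 0.30, 0.21, 0.14, 0.09, 0.06])

def cost(p):
    return np.sum((van_genuchten(h, *p) - theta) ** 2)

bounds = [(0.0, 0.2), (0.3, 0.5), (1e-3, 1.0), (1.1, 5.0)]
res = differential_evolution(cost, bounds, seed=0, tol=1e-8)
theta_r, theta_s, alpha, n = res.x
print(f"theta_r={theta_r:.3f} theta_s={theta_s:.3f} "
      f"alpha={alpha:.4f} n={n:.2f}")
```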
Investigating Convergence Patterns for Numerical Methods Using Data Analysis
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2013-01-01
The article investigates the patterns that arise in the convergence of numerical methods, particularly those in the errors involved in successive iterations, using data analysis and curve fitting methods. In particular, the results obtained are used to convey a deeper level of understanding of the concepts of linear, quadratic, and cubic…
NASA Astrophysics Data System (ADS)
Yao, X. F.; Xiong, T. C.; Xu, H. M.; Wan, J. P.; Long, G. R.
2008-11-01
The residual stresses of PMMA (polymethyl methacrylate) specimens after drilling, reaming, and polishing are investigated using the digital speckle correlation method. From the displacement fields in the correlated calculation region, a polynomial curve-fitting method is used to obtain continuous displacement fields, and the strain fields are obtained from the derivatives of the displacement fields. Using the constitutive equation of the material, the residual stress can then be expressed. During data processing, the calculation region of the correlated speckles and the degree of the fitting polynomial are chosen according to the quality of the fit. The results show that the maximum stress occurs at the hole wall of the drilled specimen; as the diameter of the drilled hole increases, the residual stress resulting from drilling increases, whereas reaming and polishing the hole can reduce the residual stress. The relatively large scatter in the residual stress is attributed to the chip-removal ability of the drill bit, the cutting feed of the drill, and various other factors.
ERIC Educational Resources Information Center
Alexander, John W., Jr.; Rosenberg, Nancy S.
This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…
Incorporating Nonstationarity into IDF Curves across CONUS from Station Records and Implications
NASA Astrophysics Data System (ADS)
Wang, K.; Lettenmaier, D. P.
2017-12-01
Intensity-duration-frequency (IDF) curves are widely used for the engineering design of storm-affected structures. Current practice is that IDF curves are based on observed precipitation extremes fit to a stationary probability distribution (e.g., the extreme value family). However, there is increasing evidence of nonstationarity in station records. We apply the Mann-Kendall trend test to over 1000 stations across the CONUS at a 0.05 significance level and find that about 30% of the stations tested have significant nonstationarity for at least one duration (1-, 2-, 3-, 6-, 12-, 24-, and 48-hour). We fit the station records to a GEV distribution with time-varying location and scale parameters using a Bayesian methodology and compare the fit of stationary versus nonstationary GEV distributions to observed precipitation extremes. Within our fitted nonstationary GEV distributions, we compare distributions with a time-varying location parameter versus distributions with both time-varying location and scale parameters. For distributions with two time-varying parameters, we pay particular attention to instances where the location and scale trends have opposing directions. Finally, we use the mathematical framework based on the work of Koutsoyiannis to generate IDF curves based on the fitted GEV distributions and discuss the implications that using time-varying parameters may have on simple scaling relationships. We apply the above methods to evaluate how frequency statistics based on a stationarity assumption compare to those that incorporate nonstationarity for both short- and long-term projects. Overall, we find that neglecting nonstationarity can lead to under- or over-estimates (depending on the trend for the given duration and region) of important statistics such as the design storm.
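A sketch of the model structure: a GEV with a linear time trend in the location parameter, fitted here by maximum likelihood for simplicity (the study itself used a Bayesian methodology); all data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(3)
years = np.arange(60)
# Synthetic annual-maximum intensities with an upward location trend.
x = genextreme.rvs(c=-0.1, loc=20 + 0.08 * years, scale=5,
                   random_state=rng)

def nll(p):
    # Negative log-likelihood with mu(t) = mu0 + mu1*t.
    mu0, mu1, log_sig, c = p
    return -genextreme.logpdf(x, c=c, loc=mu0 + mu1 * years,
                              scale=np.exp(log_sig)).sum()

res = minimize(nll, x0=[np.mean(x), 0.0, np.log(np.std(x)), -0.1],
               method="Nelder-Mead")
print(f"fitted location trend = {res.x[1]:.3f} per year (true 0.08)")
```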
A Century of Enzyme Kinetic Analysis, 1913 to 2013
Johnson, Kenneth A.
2013-01-01
This review traces the history and logical progression of methods for quantitative analysis of enzyme kinetics from the 1913 Michaelis and Menten paper to the application of modern computational methods today. Following a brief review of methods for fitting steady state kinetic data, modern methods are highlighted for fitting full progress curve kinetics based upon numerical integration of rate equations, including a re-analysis of the original Michaelis-Menten full time course kinetic data. Finally, several illustrations of modern transient state kinetic methods of analysis are shown which enable the elucidation of reactions occurring at the active sites of enzymes in order to relate structure and function. PMID:23850893
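A minimal sketch of the progress-curve approach the review highlights: the Michaelis-Menten rate equation is integrated numerically and the resulting substrate trajectory is fitted directly to (synthetic) progress-curve data. Parameter values are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def progress(t, vmax, km, s0=1.0):
    # Substrate concentration S(t) from dS/dt = -vmax*S/(km+S).
    sol = solve_ivp(lambda _, s: -vmax * s / (km + s), (0, t.max()),
                    [s0], t_eval=t, rtol=1e-8)
    return sol.y[0]

t = np.linspace(0, 30, 40)
s_obs = progress(t, 0.12, 0.35) \
    + np.random.default_rng(4).normal(0, 0.005, t.size)

(vmax, km), _ = curve_fit(progress, t, s_obs, p0=[0.1, 0.5],
                          bounds=(1e-6, np.inf))
print(f"Vmax = {vmax:.3f}, Km = {km:.3f}")
```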
NASA Astrophysics Data System (ADS)
Brewick, Patrick T.; Smyth, Andrew W.
2016-12-01
The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectra of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.
NASA Astrophysics Data System (ADS)
Szalai, Robert; Ehrhardt, David; Haller, George
2017-06-01
In a nonlinear oscillatory system, spectral submanifolds (SSMs) are the smoothest invariant manifolds tangent to linear modal subspaces of an equilibrium. Amplitude-frequency plots of the dynamics on SSMs provide the classic backbone curves sought in experimental nonlinear model identification. We develop here a methodology to compute analytically both the shape of SSMs and their corresponding backbone curves from a data-assimilating model fitted to experimental vibration signals. This model identification utilizes Takens' delay-embedding theorem, as well as a least-squares fit to the Taylor expansion of the sampling map associated with that embedding. The SSMs are then constructed for the sampling map using the parametrization method for invariant manifolds, which assumes that the manifold is an embedding of, rather than a graph over, a spectral subspace. Using examples of both synthetic and real experimental data, we demonstrate that this approach reproduces backbone curves with high accuracy.
NASA Astrophysics Data System (ADS)
Metzger, Robert; Riper, Kenneth Van; Lasche, George
2017-09-01
A new method for analysis of uranium and radium in soils by gamma spectroscopy has been developed using VRF ("Visual RobFit") which, unlike traditional peak-search techniques, fits full-spectrum nuclide shapes with non-linear least-squares minimization of the chi-squared statistic. Gamma efficiency curves were developed for a 500 mL Marinelli beaker geometry as a function of soil density using MCNP. Collected spectra were then analyzed using the MCNP-generated efficiency curves and VRF to deconvolute the 90 keV peak complex of uranium and obtain 238U and 235U activities. 226Ra activity was determined either from the radon daughters if the equilibrium status is known, or directly from the deconvoluted 186 keV line. 228Ra values were determined from the 228Ac daughter activity. The method was validated by analysis of radium, thorium and uranium soil standards and by inter-comparison with other methods for radium in soils. The method allows for a rapid determination of whether a sample has been impacted by a man-made activity by comparison of the uranium and radium concentrations to those that would be expected from a natural equilibrium state.
NASA Astrophysics Data System (ADS)
Fu, W.; Gu, L.; Hoffman, F. M.
2013-12-01
The photosynthesis model of Farquhar, von Caemmerer & Berry (1980) is an important tool for predicting the response of plants to climate change. So far, the critical parameters required by the model have been obtained from leaf-level measurements of gas exchange, namely the net assimilation of CO2 against intercellular CO2 concentration (A-Ci) curves, made at saturating light conditions. With such measurements, most points are likely in the Rubisco-limited state, for which the model is structurally overparameterized (the model is also overparameterized in the TPU-limited state). To estimate photosynthetic parameters reliably, there must be a sufficient number of points in the RuBP regeneration-limited state, which has no structural overparameterization. To improve the accuracy of A-Ci data analysis, we investigate the potential of using multiple A-Ci curves at subsaturating light intensities to generate some important parameter estimates more accurately, since subsaturating light intensities allow more RuBP regeneration-limited points to be obtained. In this study, simulated examples are used to demonstrate how this method can eliminate the errors of conventional A-Ci curve fitting methods. Some fitted parameters, like the photocompensation point and day respiration, impose a significant limitation on modeling leaf CO2 exchange. Fitting multiple A-Ci curves can also improve on the so-called Laisk (1977) method, which recent publications have shown to produce incorrect estimates of the photocompensation point and day respiration. We also test the approach with actual measurements, along with suggested measurement conditions that constrain measured A-Ci points so as to maximize the occurrence of RuBP regeneration-limited photosynthesis. Finally, we use our measured gas exchange datasets to quantify the magnitude of the resistance of the chloroplast and cell wall-plasmalemma and to explore the effect of variable mesophyll conductance, which takes into account the influence of CO2 from mitochondria, in contrast to the commonly used constant mesophyll conductance. We show that after considering this effect the other parameters of the photosynthesis model can be re-estimated. Our results indicate that variable mesophyll conductance has the greatest effect on the estimate of the maximum electron transport rate (Jmax), but has a negligible impact (<2%) on the estimated day respiration (Rd) and photocompensation point.
[Experimental study and correction of the absorption and enhancement effect between Ti, V and Fe].
Tuo, Xian-Guo; Mu, Ke-Liang; Li, Zhe; Wang, Hong-Hui; Luo, Hui; Yang, Jian-Bo
2009-11-01
The absorption and enhancement effects in X-ray fluorescence analysis for the elements Ti, V, and Fe were studied in the present paper. Three binary sample systems, Ti-V, Ti-Fe, and V-Fe, were prepared and measured by X-ray fluorescence analysis using an HPGe semiconductor detector, and the relation curves between the normalized count-rate coefficient R(K) and the element content W(K) were obtained. Analysis of the degree of absorption and enhancement between each pair of elements showed that the effect between Ti and V is relatively pronounced, while it is weaker for Ti-Fe and V-Fe. An exponential-fitting correction method was then used to fit the R(K)-W(K) curves and obtain a functional relation between the X-ray fluorescence count rate and the content. Three groups of Ti-V binary samples were used to test the fitting method; the relative errors for Ti and V were less than 0.2% compared with the actual values.
NASA Astrophysics Data System (ADS)
Kazakis, Nikolaos A.
2018-01-01
The present comment concerns the correct presentation of an algorithm proposed in the above paper for glow-curve deconvolution in the case of a continuous distribution of trapping states. Since most researchers would use the proposed algorithm directly as published, they should be notified of its correct formulation for fitting the TL glow curves of materials with a continuous trap distribution using this equation.
Theoretical studies of solar-pumped lasers
NASA Astrophysics Data System (ADS)
Harries, Wynford L.
1988-06-01
The investigation of stimulated emission causing transitions from the B¹Πu state of sodium to the overlapping 2¹Σg+ electronic state has been continued. A new method of estimating the Franck-Condon factors has been developed which, instead of fitting the molecular potential curves with Morse functions, estimates the V(r) dependence by interpolation from given potential curves. The results for the sum of the rates from one vibrational level in the upper state to all the levels in the lower state show good agreement with the previous method, implying that curve crossing by stimulated emission due to photons from the oven is an important mechanism in sodium.
Molecular dynamics study of the melting curve of NiTi alloy under pressure
NASA Astrophysics Data System (ADS)
Zeng, Zhao-Yi; Hu, Cui-E.; Cai, Ling-Cang; Chen, Xiang-Rong; Jing, Fu-Qian
2011-02-01
The melting curve of NiTi alloy was predicted using molecular dynamics simulations combined with an embedded-atom model potential. The calculated thermal equation of state is consistent with our previous results obtained from the quasiharmonic Debye approximation. Fitting the well-known Simon form to our Tm data yields the melting curves for NiTi: Tm = 1850(1 + P/21.938)^0.328 K (one-phase method) and Tm = 1575(1 + P/7.476)^0.305 K (two-phase method). The two-phase simulations effectively eliminate the superheating seen in one-phase simulations. At 1 bar, the melting temperature of NiTi is 1575 ± 25 K and the corresponding melting slope is 64 K/GPa.
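Reproducing this kind of fit is straightforward; the sketch below fits the Simon form Tm(P) = T0(1 + P/a)^c to illustrative melting points generated from the two-phase parameters quoted in the abstract, with added noise.

```python
import numpy as np
from scipy.optimize import curve_fit

def simon(p_gpa, t0, a, c):
    # Simon melting equation: Tm(P) = T0 * (1 + P/a)^c.
    return t0 * (1.0 + p_gpa / a) ** c

p_gpa = np.array([0.0, 5, 10, 20, 40, 60])
tm = 1575.0 * (1 + p_gpa / 7.476) ** 0.305    # abstract's two-phase fit
tm_noisy = tm + np.random.default_rng(5).normal(0, 10, tm.size)

popt, _ = curve_fit(simon, p_gpa, tm_noisy, p0=[1600, 10, 0.3])
print("T0 = %.0f K, a = %.2f GPa, c = %.3f" % tuple(popt))
```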
Least-Squares Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Kantak, Anil V.
1990-01-01
Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes the polynomial providing the least-squares best fit to uniformly spaced data. It enables the user to specify either the tolerable least-squares error in the fit or the degree of the polynomial. AKLSQF returns the polynomial and the actual least-squares-fit error incurred in the operation. Data are supplied to the routine either by direct keyboard entry or via a file. Written for an IBM PC XT/AT or compatible using Microsoft's QuickBASIC compiler.
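A minimal modern equivalent of the routine's tolerance mode, sketched in Python (this is not the AKLSQF code): the polynomial degree is increased until the least-squares error falls below the user's tolerance.

```python
import numpy as np

def fit_to_tolerance(x, y, tol, max_degree=10):
    # Raise the degree until the rms least-squares error meets tol.
    for deg in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, deg)
        err = float(np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2)))
        if err <= tol:
            break
    return deg, coeffs, err

x = np.linspace(0, 1, 50)        # uniformly spaced data
y = np.exp(x)                    # smooth test signal
deg, coeffs, err = fit_to_tolerance(x, y, tol=1e-4)
print(f"degree {deg} polynomial, rms error {err:.2e}")
```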
Revisiting the Estimation of Dinosaur Growth Rates
Myhrvold, Nathan P.
2013-01-01
Previous growth-rate studies covering 14 dinosaur taxa, as represented by 31 data sets, are critically examined and reanalyzed by using improved statistical techniques. The examination reveals that some previously reported results cannot be replicated by using the methods originally reported; results from new methods are in many cases different, in both the quantitative rates and the qualitative nature of the growth, from results in the prior literature. Asymptotic growth curves, which have been hypothesized to be ubiquitous, are shown to provide best fits for only four of the 14 taxa. Possible reasons for non-asymptotic growth patterns are discussed; they include systematic errors in the age-estimation process and, more likely, a bias toward younger ages among the specimens analyzed. Analysis of the data sets finds that only three taxa include specimens that could be considered skeletally mature (i.e., having attained 90% of maximum body size predicted by asymptotic curve fits), and eleven taxa are quite immature, with the largest specimen having attained less than 62% of predicted asymptotic size. The three taxa that include skeletally mature specimens are included in the four taxa that are best fit by asymptotic curves. The totality of results presented here suggests that previous estimates of both maximum dinosaur growth rates and maximum dinosaur sizes have little statistical support. Suggestions for future research are presented. PMID:24358133
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torello, David; Kim, Jin-Yeon; Qu, Jianmin
2015-03-31
This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β11 is proposed based on a nonlinear least-squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens, and a β11(7075)/β11(2024) ratio of 1.363 agrees well with previous literature and earlier work.
Scaling laws for light-weight optics
NASA Technical Reports Server (NTRS)
Valente, Tina M.
1990-01-01
Scaling laws for light-weight optical systems are examined. A cubic relationship between mirror diameter and weight has been suggested and used by many designers of optical systems as the best description for all light-weight mirrors. A survey of existing light-weight systems in the open literature has been made to clarify this issue. Fifty existing optical systems were surveyed, covering all varieties of light-weight mirrors, including glass and beryllium structured mirrors, contoured mirrors, and very thin solid mirrors. These mirrors were then categorized, and the weight-to-diameter ratio was plotted to find a best-fit curve for each case. A curve-fitting program tests nineteen different equations and ranks the goodness of fit for each of these equations. The resulting relationship found for each light-weight mirror category helps to quantify light-weight optical systems and methods of fabrication and provides comparisons between mirror types.
Evaluation of Delamination Onset and Growth Characterization Methods under Mode I Fatigue Loading
NASA Technical Reports Server (NTRS)
Murri, Gretchen B.
2013-01-01
Double-cantilever beam specimens of IM7/8552 graphite/epoxy from two different manufacturers were tested under static and fatigue loading to compare the material characterization data and to evaluate a proposed ASTM standard for generating Paris Law equations for delamination growth. Static results were used to generate compliance calibration constants for reducing the fatigue data and a delamination resistance curve, GIR, for each material. Specimens were tested in fatigue at different initial cyclic GImax levels to determine a delamination onset curve and the delamination growth rate. The delamination onset curve equations were similar for the two sources. The delamination growth rate was calculated by plotting da/dN versus GImax on a log-log scale and fitting a Paris Law. Two different data reduction methods were used to calculate da/dN. To determine the effects of fiber bridging, growth results were normalized by the delamination resistance curves. Paris Law exponents decreased by 31% to 37% after normalizing the data. Visual data records from the fatigue tests were used to calculate individual compliance constants from the fatigue data. The resulting da/dN versus GImax plots showed improved repeatability for each source compared to using averaged static data. The Paris Law expressions for the two sources showed the closest agreement using the individually fit compliance data.
Photometric Supernova Classification with Machine Learning
NASA Astrophysics Data System (ADS)
Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.
2016-08-01
Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkin, Thomas J; Larson, Andrew; Ruth, Mark F
In light of the changing electricity resource mixes across the United States, an important question in electricity modeling is how additions and retirements of generation, including additions in variable renewable energy (VRE) generation, could impact markets by changing hourly wholesale energy prices. Instead of using resource-intensive production cost models (PCMs) or building and using simple generator supply curves, this analysis uses a 'top-down' approach based on regression analysis of hourly historical energy and load data to estimate the impact of supply changes on wholesale electricity prices, provided the changes are not so substantial that they fundamentally alter the market and dispatch-order driven behavior of non-retiring units. The rolling supply curve (RSC) method used in this report estimates the shape of the supply curve that fits historical hourly price and load data for given time intervals, such as two weeks, and then repeats this on a rolling basis through the year. These supply curves can then be modified on an hourly basis to reflect the impact of generation retirements or additions, including VRE, and then reapplied to the same load data to estimate the change in hourly electricity price. The choice of duration over which these RSCs are estimated has a significant impact on goodness of fit. For example, in PJM in 2015, moving from fitting one curve per year to 26 rolling two-week supply curves improves the standard error of the regression from $16/MWh to $6/MWh and the R-squared of the estimate from 0.48 to 0.76. We illustrate the potential use and value of the RSC method by estimating wholesale price effects under various generator retirement and addition scenarios, and we discuss potential limits of the technique, some of which are inherent. The ability to do this type of analysis is important to a wide range of market participants and other stakeholders, and it may have a role in complementing use of or providing calibrating insights to PCMs.
Reconstruction of quadratic curves in 3D using two or more perspective views: simulation studies
NASA Astrophysics Data System (ADS)
Kumar, Sanjeev; Sukavanam, N.; Balasubramanian, R.
2006-01-01
Many natural and man-made objects have planar and curvilinear surfaces. The images of such curves usually do not have sufficient distinctive features to apply conventional feature-based reconstruction algorithms. In this paper, we describe a method for reconstructing a quadratic curve in 3-D space as the intersection of two cones containing the respective projected curve images. The correspondence between this pair of projections of the curve is assumed to be established in this work. Using least-squares curve fitting, the parameters of the curve in each 2-D image are found, from which the 3-D quadratic curve is reconstructed. The relevant mathematical formulations and analytical solutions for obtaining the equation of the reconstructed curve are given. The results of the described reconstruction methodology are evaluated through simulation studies. This reconstruction methodology is applicable to LBW decisions in cricket, missile path estimation, robotic vision, path planning, etc.
A direct potential fitting RKR method: Semiclassical vs. quantal comparisons
NASA Astrophysics Data System (ADS)
Tellinghuisen, Joel
2016-12-01
Quantal and semiclassical (SC) eigenvalues are compared for three diatomic molecular potential curves: the X state of CO, the X state of Rb2, and the A state of I2. The comparisons show higher levels of agreement than generally recognized when the SC calculations incorporate a quantum defect correction to the vibrational quantum number, in keeping with the Kaiser modification. One particular aspect of this is better agreement between quantal and SC estimates of the zero-point vibrational energy, supporting the need for the Y00 correction in this context. The pursuit of a direct-potential-fitting (DPF) RKR method is motivated by the notion that some of the limitations of RKR potentials may be innate, arising from their generation by an exact inversion of approximate quantities: the vibrational energy Gυ and rotational constant Bυ from least-squares analysis of spectroscopic data. In contrast, the DPF RKR method resembles the quantal DPF methods now increasingly used to analyze diatomic spectral data, but with the eigenvalues obtained from SC phase integrals. Application of this method to the analysis of 9500 assigned lines in the I2 A ← X spectrum fails to alter the quantal-SC disparities found for the A-state RKR curve from a previous analysis. On the other hand, the SC method can be much faster than the quantal method in exploratory work with different potential functions, where it is convenient to use finite-difference methods to evaluate the partial derivatives required in nonlinear fitting.
A GLOBAL MODEL OF THE LIGHT CURVES AND EXPANSION VELOCITIES OF TYPE II-PLATEAU SUPERNOVAE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pejcha, Ondřej; Prieto, Jose L., E-mail: pejcha@astro.princeton.edu
2015-02-01
We present a new self-consistent and versatile method that derives photospheric radius and temperature variations of Type II-Plateau supernovae based on their expansion velocities and photometric measurements. We apply the method to a sample of 26 well-observed, nearby supernovae with published light curves and velocities. We simultaneously fit ∼230 velocity and ∼6800 mag measurements distributed over 21 photometric passbands spanning wavelengths from 0.19 to 2.2 μm. The light-curve differences among the Type II-Plateau supernovae are well modeled by assuming different rates of photospheric radius expansion, which we explain as different density profiles of the ejecta, and we argue that steeper density profiles result in flatter plateaus, if everything else remains unchanged. The steep luminosity decline of Type II-Linear supernovae is due to fast evolution of the photospheric temperature, which we verify with a successful fit of SN 1980K. Eliminating the need for theoretical supernova atmosphere models, we obtain self-consistent relative distances, reddenings, and nickel masses fully accounting for all internal model uncertainties and covariances. We use our global fit to estimate the time evolution of any missing band tailored specifically for each supernova, and we construct spectral energy distributions and bolometric light curves. We produce bolometric corrections for all filter combinations in our sample. We compare our model to the theoretical dilution factors and find good agreement for the B and V filters. Our results differ from the theory when the I, J, H, or K bands are included. We investigate the reddening law toward our supernovae and find reasonable agreement with the standard R_V ~ 3.1 reddening law in UBVRI bands. Results for other bands are inconclusive. We make our fitting code publicly available.
A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object
NASA Astrophysics Data System (ADS)
Winkler, A. W.; Zagar, B. G.
2013-08-01
An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.
Evaluation of Hamaker coefficients using Diffusion Monte Carlo method
NASA Astrophysics Data System (ADS)
Maezono, Ryo; Hongo, Kenta
We evaluated the Hamaker constant for Cyclohexasilane to investigate its wettability; the compound is used as a 'liquid silicon' ink in printed electronics. Taking three representative geometries of the dimer coalescence (parallel, lined, and T-shaped), we evaluated their binding curves using the diffusion Monte Carlo method. The parallel geometry gave the most long-ranged exponent, ~1/r^6, in its asymptotic behavior. The evaluated binding lengths are fairly consistent with the experimental density of the molecule. Fitting the asymptotic curve gave an estimate of the Hamaker constant of around 100 zJ. We also performed a CCSD(T) evaluation and obtained a similar result. As a check, we applied the same scheme to benzene and compared the estimate with those from other established methods, Lifshitz theory and SAPT (symmetry-adapted perturbation theory). The result from the fitting scheme turned out to be about twice as large as those from Lifshitz theory and SAPT, which coincide with each other. This implies that the present evaluation for Cyclohexasilane may be an overestimate.
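A hedged sketch of the described fitting scheme: the asymptotic tail of a dimer binding curve is fitted to -C6/r^6, and C6 is converted to a Hamaker constant via the pairwise-additive relation A = π²ρ²C6. All numbers (energies, density, noise level) are invented placeholders, tuned only to land near the ~100 zJ scale quoted above.

```python
import numpy as np
from scipy.optimize import curve_fit

def tail(r_nm, c6):
    return -c6 / r_nm ** 6       # C6 in zJ nm^6, r in nm

r = np.linspace(0.8, 2.0, 15)    # dimer separations (nm)
e = -0.93 / r ** 6 + np.random.default_rng(6).normal(0, 0.02, r.size)

(c6,), _ = curve_fit(tail, r, e, p0=[1.0])
rho = 3.3                        # molecular number density (nm^-3), assumed
hamaker_zj = np.pi ** 2 * rho ** 2 * c6
print(f"C6 = {c6:.2f} zJ nm^6, Hamaker A ~ {hamaker_zj:.0f} zJ")
```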
On Correlated-noise Analyses Applied to Exoplanet Light Curves
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Harrington, Joseph; Loredo, Thomas J.; Lust, Nate B.; Blecic, Jasmina; Stemm, Madison
2017-01-01
Time-correlated noise is a significant source of uncertainty when modeling exoplanet light-curve data. A correct assessment of correlated noise is fundamental to determine the true statistical significance of our findings. Here, we review three of the most widely used correlated-noise estimators in the exoplanet field: the time-averaging, residual-permutation, and wavelet-likelihood methods. We argue that the residual-permutation method is unsound for estimating the uncertainty of parameter estimates, and we thus recommend refraining from this method altogether. We characterize the behavior of the time averaging's rms-versus-bin-size curves at bin sizes similar to the total observation duration, which may lead to underestimated uncertainties. For the wavelet-likelihood method, we note errors in the published equations and provide a list of corrections. We further assess the performance of these techniques by injecting and retrieving eclipse signals into synthetic and real Spitzer light curves, analyzing the results in terms of the relative-accuracy and coverage-fraction statistics. Both the time-averaging and wavelet-likelihood methods significantly improve the estimate of the eclipse depth over a white-noise analysis (a Markov-chain Monte Carlo exploration assuming uncorrelated noise). However, the corrections are not perfect: when retrieving the eclipse depth from Spitzer data sets, these methods covered the true (injected) depth within the 68% credible region in only ˜45%-65% of the trials. Lastly, we present our open-source model-fitting tool, Multi-Core Markov-Chain Monte Carlo (MC3). This package uses Bayesian statistics to estimate the best-fitting values and the credible regions for the parameters of a (user-provided) model. MC3 is a Python/C code, available at https://github.com/pcubillos/MCcubed.
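A minimal sketch of the time-averaging diagnostic reviewed above (not the MC3 implementation; function and variable names are illustrative): bin the light-curve residuals at increasing bin sizes and compare the binned rms against the white-noise expectation, which falls as one over the square root of the bin size.

    import numpy as np

    def rms_vs_binsize(residuals, max_binsize=None):
        """Time-averaging test: rms of binned residuals vs. bin size.
        White noise follows rms(1)/sqrt(size); correlated noise decays
        more slowly, revealing itself as an excess above that curve."""
        n = len(residuals)
        sizes = np.arange(1, (max_binsize or n // 10) + 1)
        rms, expected = [], []
        for size in sizes:
            nbins = n // size
            binned = residuals[:nbins * size].reshape(nbins, size).mean(axis=1)
            rms.append(np.sqrt(np.mean(binned ** 2)))
            # white-noise expectation with a small-sample correction
            expected.append(rms[0] / np.sqrt(size) * np.sqrt(nbins / (nbins - 1.0)))
        return sizes, np.array(rms), np.array(expected)

As the abstract cautions, the behavior of this curve at bin sizes approaching the total observation length must be interpreted carefully, or uncertainties may be underestimated.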
Derbidge, Renatus; Feiten, Linus; Conradt, Oliver; Heusser, Peter; Baumgartner, Stephan
2013-01-01
Photographs of mistletoe (Viscum album L.) berries taken by a permanently fixed camera during their development in autumn were subjected to an outline shape analysis by fitting path curves using a mathematical algorithm from projective geometry. During growth and maturation processes the shape of mistletoe berries can be described by a set of such path curves, making it possible to extract changes of shape using one parameter called Lambda. Lambda describes the outline shape of a path curve. Here we present methods and software to capture and measure these changes of form over time. The present paper describes the software used to automate a number of tasks including contour recognition, optimization of fitting the contour via hill-climbing, derivation of the path curves, computation of Lambda and blinding the pictures for the operator. The validity of the program is demonstrated by results from three independent measurements showing circadian rhythm in mistletoe berries. The program is available as open source and will be applied in a project to analyze the chronobiology of shape in mistletoe berries and the buds of their host trees. PMID:23565255
Toward a Micro-Scale Acoustic Direction-Finding Sensor with Integrated Electronic Readout
2013-06-01
[Front-matter residue: lists of figures and tables, including "measurements with curve fits", "Failure testing" (Figure 2.10), "Sensor parameters" (Table 2.1), and "Curve fit parameters" (Table 2.2). A body fragment notes that in a typical nanoindentation test the loading curve is nonlinear due to combined plastic and elastic deformation, the quantity of interest being the elastic stiffness.]
Transonic Compressor: Program System TXCO for Data Acquisition and On-Line Reduction.
1980-10-01
[OCR residue of a FORTRAN source listing and flow chart for subroutine CALIB; the legible fragment defines SECON as the real intercept of the linear curve fit (as obtained from subroutine CURVE).]
Oleson, Jacob J; Cavanaugh, Joseph E; McMurray, Bob; Brown, Grant
2015-01-01
In multiple fields of study, time series measured at high frequencies are used to estimate population curves that describe the temporal evolution of some characteristic of interest. These curves are typically nonlinear, and the deviations of each series from the corresponding curve are highly autocorrelated. In this scenario, we propose a procedure to compare the response curves for different groups at specific points in time. The method involves fitting the curves, performing potentially hundreds of serially correlated tests, and appropriately adjusting the overall alpha level of the tests. Our motivating application comes from psycholinguistics and the visual world paradigm. We describe how the proposed technique can be adapted to compare fixation curves within subjects as well as between groups. Our results lead to conclusions beyond the scope of previous analyses. PMID:26400088
Sensitivity of Fit Indices to Misspecification in Growth Curve Models
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.
2010-01-01
This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…
Comparison of three methods for wind turbine capacity factor estimation.
Ditkovich, Y; Kuperman, A
2014-01-01
Three approaches to calculating the capacity factor of fixed speed wind turbines are reviewed and compared using a case study. The first, "quasiexact," approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. The second, "analytic," approach employs a continuous probability distribution function fitted to the wind data, together with a continuous turbine power curve obtained by double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while an approximation, can be solved analytically, providing valuable insight into the aspects affecting the capacity factor; several other figures of merit for wind turbine performance may also be derived from it. The third, "approximate," approach, valid for Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained by the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation.
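A minimal sketch of the "analytic" approach described above, assuming a Weibull distribution already fitted to the wind data and a manufacturer power curve given as discrete points (all names and values are illustrative, not from the paper):

    import numpy as np
    from scipy import integrate
    from scipy.stats import weibull_min

    def capacity_factor(curve_v, curve_p, rated_power, k, c):
        """Capacity factor = mean power / rated power, averaging the
        interpolated power curve over a Weibull(k, scale=c) wind pdf."""
        pdf = weibull_min(k, scale=c).pdf
        def power(v):  # zero below cut-in and above cut-out by construction
            return np.interp(v, curve_v, curve_p, left=0.0, right=0.0)
        mean_power, _ = integrate.quad(lambda v: power(v) * pdf(v), 0.0, 40.0,
                                       limit=200)
        return mean_power / rated_power

Setting k = 2 recovers the Rayleigh special case that underlies the paper's third, "approximate" approach.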
2010-04-01
[Report fragments: a method of radiolabeling fusion proteins without the denaturing effects coincident with oxidative radio-iodination associated with the chloramine T method; organ PS product = [(%ID/g)/AUC]*1000; Reportable Outcomes: (1) the plasma concentration decay curve for AGT-185 (Figure 1), in which the % injected dose (ID)/mL decreases rapidly in plasma following IV injection; this plasma decay curve was fit to the bi-exponential equation described above.]
Soil Water Characteristics of Cores from Low- and High-Centered Polygons, Barrow, Alaska, 2012
Graham, David; Moon, Ji-Won
2016-08-22
This dataset includes soil water characteristic curves for soil and permafrost in two representative frozen cores collected from a high-center polygon (HCP) and a low-center polygon (LCP) at the Barrow Environmental Observatory. Data include soil water content and soil water potential measured using the simple evaporation method, intended for hydrological and biogeochemical simulations and experimental data analysis. The data can be used to generate a soil moisture characteristic curve, which can be fit to a variety of hydrological functions to infer critical soil-physics parameters. Considering the measured soil water properties, the van Genuchten model predicted the HCP well; in contrast, the Kosugi model gave the better fit for the LCP, which had more saturated conditions.
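As a sketch of the kind of curve fitting this dataset supports, the van Genuchten retention model can be fitted to water content versus suction by nonlinear least squares (the data values below are illustrative placeholders, not from these cores):

    import numpy as np
    from scipy.optimize import curve_fit

    def van_genuchten(psi, theta_r, theta_s, alpha, n):
        """van Genuchten (1980) retention curve with m = 1 - 1/n;
        psi is matric suction (positive), theta is water content."""
        m = 1.0 - 1.0 / n
        return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

    psi_obs = np.array([10., 30., 100., 300., 1000., 3000.])    # cm, illustrative
    theta_obs = np.array([0.48, 0.45, 0.37, 0.28, 0.18, 0.12])  # illustrative
    p0 = (0.05, 0.50, 0.02, 1.5)                # theta_r, theta_s, alpha, n
    params, cov = curve_fit(van_genuchten, psi_obs, theta_obs, p0=p0,
                            bounds=([0.0, 0.2, 1e-4, 1.01], [0.2, 0.7, 1.0, 8.0]))

A Kosugi fit would proceed the same way, with a lognormal retention function in place of van_genuchten.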
Modal analysis using a Fourier analyzer, curve-fitting, and modal tuning
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.; Chung, Y. T.
1981-01-01
The proposed modal test program differs from single-input methods in that preliminary data may be acquired using multiple inputs, and modal tuning procedures may be employed to define closely spaced frequency modes more accurately or to make use of frequency response functions (FRF's) which are based on several input locations. In some respects the proposed modal test program resembles earlier sine-sweep and sine-dwell testing in that broadband FRF's are acquired using several input locations, and tuning is employed to refine the modal parameter estimates. The major tasks performed in the proposed modal test program are outlined. Data acquisition and FFT processing, curve fitting, and modal tuning phases are described and examples are given to illustrate and evaluate them.
Tensile stress-strain behavior of graphite/epoxy laminates
NASA Technical Reports Server (NTRS)
Garber, D. P.
1982-01-01
The tensile stress-strain behavior of a variety of graphite/epoxy laminates was examined. Longitudinal and transverse specimens from eleven different layups were monotonically loaded in tension to failure. Ultimate strength, ultimate strain, and stress-strain curves were obtained from four replicate tests in each case. Polynomial equations were fitted by the method of least squares to the stress-strain data to determine average curves. Values of Young's modulus and Poisson's ratio, derived from polynomial coefficients, were compared with laminate analysis results. While the polynomials appeared to accurately fit the stress-strain data in most cases, the use of polynomial coefficients to calculate elastic moduli appeared to be of questionable value in cases involving sharp changes in the slope of the stress-strain data or extensive scatter.
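A minimal sketch of the polynomial averaging step described above, with illustrative stress-strain pairs standing in for the replicate test data; the initial tangent modulus is read directly from the linear coefficient of the fitted polynomial:

    import numpy as np

    # strain (dimensionless) and stress (MPa) from one test -- illustrative values
    strain = np.array([0.000, 0.001, 0.002, 0.004, 0.006, 0.008, 0.010])
    stress = np.array([0.0, 140.0, 278.0, 540.0, 780.0, 1000.0, 1190.0])

    # least-squares cubic fit; np.polyfit returns highest-degree terms first
    coeffs = np.polyfit(strain, stress, deg=3)

    # Young's modulus ~ slope at zero strain = the linear coefficient
    E = coeffs[-2]          # MPa
    print(f"Initial tangent modulus ~ {E / 1e3:.1f} GPa")

The abstract's caveat applies here too: where the curve bends sharply or the data scatter widely, the low-order coefficients stop being trustworthy estimates of elastic moduli.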
Materials and Modulators for 3D Displays
2002-08-01
[Figure-caption residue: a cos²(θ) fit to polarization data near 1243 nm, with 0, 180 and 360 deg corresponding to parallel polarization; response curves for two dwell times (10 µs among them) and the static case; schematics of free-space optics (Figure 20); and a two-photon spectrum of rhodamine B whose two peaks are fit by two Lorentzian curves.]
NASA Technical Reports Server (NTRS)
Jarosch, H. S.
1982-01-01
A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
Li, Hui
2009-03-01
To construct standardized growth data and curves based on weight, length/height, and head circumference for Chinese children under 7 years of age. Random cluster sampling was used. The fourth national growth survey of children under 7 years in the nine cities (Beijing, Harbin, Xi'an, Shanghai, Nanjing, Wuhan, Fuzhou, Guangzhou and Kunming) of China was performed in 2005, and from this survey, data of 69 760 urban healthy boys and girls were used to set up the database for weight-for-age, height-for-age (length was measured for children under 3 years) and head circumference-for-age. Anthropometric data quality was ensured by rigorous methods of data collection and standardized procedures across study sites. The LMS method, based on the Box-Cox normal transformation and cubic spline smoothing, was chosen for fitting the raw data according to the study design and data features, and standardized values at any percentile or standard deviation were obtained from the fitted L, M and S parameters. Length-for-age and height-for-age standards were constructed by fitting the same model, but the final curves reflect the 0.7 cm average difference between these two measurements. A set of systematic diagnostic tools was used to detect possible biases in estimated percentile or standard deviation curves, including the chi-square test, which was used to evaluate the goodness of fit. The 3rd, 10th, 25th, 50th, 75th, 90th and 97th smoothed percentiles and -3, -2, -1, 0, +1, +2, +3 SD values and curves of weight-for-age, length/height-for-age and head circumference-for-age for boys and girls aged 0-7 years were constructed. The Chinese child growth charts were slightly higher than the WHO child growth standards. The newly established growth charts represent the growth level of healthy and well-nourished Chinese children. The sample was very large and national, the data were of high quality, and the smoothing method is internationally accepted. The new Chinese growth charts are recommended as the Chinese child growth standard for use in China in the 21st century.
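For reference, a sketch of how percentile values follow from the fitted L, M and S parameters (Cole's LMS formula; the parameter values below are illustrative, not from the Chinese reference data):

    import numpy as np
    from scipy.stats import norm

    def lms_percentile(L, M, S, p):
        """Centile X_p = M * (1 + L*S*z_p)**(1/L); M * exp(S*z_p) as L -> 0."""
        z = norm.ppf(p / 100.0)
        if abs(L) < 1e-8:
            return M * np.exp(S * z)
        return M * (1.0 + L * S * z) ** (1.0 / L)

    # e.g. the 97th centile at one age, with illustrative L, M, S values
    print(lms_percentile(L=-0.35, M=9.5, S=0.11, p=97))

Evaluating this at each age along the smoothed L, M and S curves yields the percentile and SD curves reported above.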
Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.
Cobbs, Gary
2012-08-16
Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions can thus be analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature. They also give better estimates of initial target concentration. Model 1 was found to be slightly more robust than model 2, giving better estimates of initial target concentration when parameters were estimated from qPCR curves with very different initial target concentrations. Both models may be used to estimate the initial absolute concentration of target sequence when a standard curve is not available. It is argued that the kinetic approach to modeling and interpreting quantitative PCR data has the potential to give more precise estimates of the true initial target concentrations than other methods currently used for analysis of qPCR data. The two models presented here give a unified model of the qPCR process in that they explain the shape of the qPCR curve for a wide variety of initial target concentrations.
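The simultaneous-fitting idea, shared rate constants across curves and one initial target concentration per curve, can be sketched generically as follows; a toy saturating-amplification model stands in for the paper's equilibrium-based models, and all names and values are illustrative:

    import numpy as np
    from scipy.optimize import least_squares

    def model(cycles, conc0, shared):
        """Toy stand-in: exponential amplification saturating at a plateau."""
        eff, plateau = shared
        f = conc0 * eff ** cycles
        return plateau * f / (plateau + f)

    def global_residuals(params, cycles, curves):
        # shared kinetic parameters for all curves; one conc0 per curve
        shared, conc0s = params[:2], params[2:]
        return np.concatenate([model(cycles, c0, shared) - y
                               for c0, y in zip(conc0s, curves)])

    cycles = np.arange(40)
    # two illustrative dilutions of the same sample
    curves = [model(cycles, 1e-6, (1.9, 1.0)), model(cycles, 1e-8, (1.9, 1.0))]
    fit = least_squares(global_residuals, x0=[1.8, 0.9, 1e-5, 1e-7],
                        args=(cycles, curves), method="lm")

The fitted per-curve initial concentrations are what allow absolute quantification without a standard curve.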
A New Model Based on Adaptation of the External Loop to Compensate the Hysteresis of Tactile Sensors
Sánchez-Durán, José A.; Vidal-Verdú, Fernando; Oballe-Peinado, Óscar; Castellanos-Ramos, Julián; Hidalgo-López, José A.
2015-01-01
This paper presents a novel method to compensate for hysteresis nonlinearities observed in the response of a tactile sensor. The External Loop Adaptation Method (ELAM) performs a piecewise linear mapping of the experimentally measured external curves of the hysteresis loop to obtain all possible internal cycles. The optimal division of the input interval where the curve is approximated is provided by the error minimization algorithm. This process is carried out offline and provides parameters to compute the split point in real time. A different linear transformation is then performed at the left and right of this point and a more precise fitting is achieved. The models obtained with the ELAM method are compared with those obtained from three other approaches. The results show that the ELAM method achieves a more accurate fitting. Moreover, the involved mathematical operations are simpler and therefore easier to implement in devices such as field-programmable gate arrays (FPGAs) for real-time applications. Furthermore, the method needs to identify fewer parameters and requires no previous selection process of operators or functions. Finally, the method can be applied to other sensors or actuators with complex hysteresis loop shapes. PMID:26501279
Baseline Estimation and Outlier Identification for Halocarbons
NASA Astrophysics Data System (ADS)
Wang, D.; Schuck, T.; Engel, A.; Gallman, F.
2017-12-01
The aim of this paper is to build a baseline model for halocarbons and to statistically identify outliers under specific conditions. Time series of regional CFC-11 and chloromethane measurements taken over the last 4 years at two locations, a monitoring station northwest of Frankfurt am Main (Germany) and the Mace Head station (Ireland), are discussed. In addition to analyzing the time series of CFC-11 and chloromethane, a statistical approach to outlier identification is introduced in order to obtain a better estimate of the baseline. A second-order polynomial plus harmonics is fitted to the CFC-11 and chloromethane mixing-ratio data. Measurements far from the fitted curve are regarded as outliers and flagged. The routine is applied iteratively, excluding the flagged measurements, until no additional outliers are found. Both the model fitting and the proposed outlier identification method are implemented in the Python programming language. Over the period studied, CFC-11 shows a gradual downward trend, while the mixing ratios of chloromethane show a slight upward trend. The concentration of chloromethane also has a strong seasonal variation, mostly due to the seasonal cycle of OH. The use of this statistical method has a considerable effect on the results: it efficiently identifies outliers according to the standard deviation requirements, and after the outliers are removed, the fitted curves and trend estimates are more reliable.
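A minimal sketch of the fit-and-flag loop described above, assuming time in fractional years and a fixed sigma threshold (both assumptions; the paper's exact criteria may differ):

    import numpy as np

    def fit_baseline(t, y, nsigma=2.5, max_iter=10):
        """Second-order polynomial plus annual and semiannual harmonics,
        iteratively refitted after flagging points beyond nsigma * std."""
        X = np.column_stack([np.ones_like(t), t, t ** 2,
                             np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                             np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
        keep = np.ones(len(t), dtype=bool)
        for _ in range(max_iter):
            beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
            resid = y - X @ beta
            new_keep = np.abs(resid) < nsigma * resid[keep].std()
            if np.array_equal(new_keep, keep):
                break                      # no additional outliers found
            keep = new_keep
        return beta, keep                  # coefficients and baseline flags

Points with keep == False are the flagged outliers; the baseline trend and seasonal cycle are read from the fitted coefficients.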
2017-11-01
[Appendix residue: the report concerns light sent from light-emitting diodes (LEDs) of 5 colors (green, red, white, amber, and blue); Experiment 1 involved controlled laboratory measurements. Appendix figures and tables list red and green LED calibration curves and measurements with quadratic curve fits and R² values.]
Comparison of Primary Models to Predict Microbial Growth by the Plate Count and Absorbance Methods.
Pla, María-Leonor; Oltra, Sandra; Esteban, María-Dolores; Andreu, Santiago; Palop, Alfredo
2015-01-01
The selection of a primary model to describe microbial growth in predictive food microbiology often appears to be subjective. The objective of this research was to check the performance of different mathematical models in predicting growth parameters, by both absorbance and plate count methods. For this purpose, growth curves of three different microorganisms (Bacillus cereus, Listeria monocytogenes, and Escherichia coli) grown under the same conditions, but each with different initial concentrations, were analysed. When measuring the microbial growth of each microorganism by optical density, almost all models provided quite high goodness of fit (r² > 0.93) for all growth curves. The growth rate remained approximately constant across all growth curves of each microorganism when considering one growth model, but differences were found among models. The three-phase linear model provided the lowest variation in growth rate values for all three microorganisms. The Baranyi model gave marginally higher variation, despite a much better overall fit. When measuring the microbial growth by plate count, similar results were obtained. These results provide insight into predictive microbiology and will help food microbiologists and researchers to choose the proper primary growth predictive model.
Aigner, Z; Berkesi, O; Farkas, G; Szabó-Révész, P
2012-01-05
The steps of formation of an inclusion complex produced by the co-grinding of gemfibrozil and dimethyl-β-cyclodextrin were investigated by differential scanning calorimetry (DSC), X-ray powder diffractometry (XRPD) and Fourier transform infrared (FTIR) spectroscopy with curve-fitting analysis. The endothermic peak at 59.25°C reflecting the melting of gemfibrozil progressively disappeared from the DSC curves of the products as the duration of co-grinding increased. The crystallinity of the samples also decreased gradually, and after 35 min of co-grinding the product was totally amorphous. Up to this co-grinding time, XRPD and FTIR investigations indicated a linear correlation between cyclodextrin complexation and co-grinding time. After co-grinding for 30 min, the ratio of complex formation did not increase further. These studies demonstrated that co-grinding is a suitable method for the complexation of gemfibrozil with dimethyl-β-cyclodextrin. XRPD analysis revealed the amorphous state of the gemfibrozil-dimethyl-β-cyclodextrin product. FTIR spectroscopy with curve-fitting analysis may be useful as a semiquantitative analytical method for discriminating between the molecular and amorphous states of gemfibrozil. Copyright © 2011 Elsevier B.V. All rights reserved.
PHOTOMETRIC SUPERNOVA CLASSIFICATION WITH MACHINE LEARNING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lochner, Michelle; Peiris, Hiranya V.; Lahav, Ofer
Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
Methods to assess an exercise intervention trial based on 3-level functional data.
Li, Haocheng; Kozey Keadle, Sarah; Staudenmayer, John; Assaad, Houssein; Huang, Jianhua Z; Carroll, Raymond J
2015-10-01
Motivated by data recording the effects of an exercise intervention on subjects' physical activity over time, we develop a model to assess the effects of a treatment when the data are functional with 3 levels (subjects, weeks and days in our application) and possibly incomplete. We develop a model with 3-level mean structure effects, all stratified by treatment and subject random effects, including a general subject effect and nested effects for the 3 levels. The mean and random structures are specified as smooth curves measured at various time points. The association structure of the 3-level data is induced through the random curves, which are summarized using a few important principal components. We use penalized splines to model the mean curves and the principal component curves, and cast the proposed model into a mixed effects model framework for model fitting, prediction and inference. We develop an algorithm to fit the model iteratively with the Expectation/Conditional Maximization Either (ECME) version of the EM algorithm and eigenvalue decompositions. Selection of the number of principal components and handling incomplete data issues are incorporated into the algorithm. The performance of the Wald-type hypothesis test is also discussed. The method is applied to the physical activity data and evaluated empirically by a simulation study. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Han, Lu; Gao, Kun; Gong, Chen; Zhu, Zhenyu; Guo, Yue
2017-08-01
On-orbit Modulation Transfer Function (MTF) is an important indicator for evaluating the performance of the optical remote sensors on a satellite. There are many methods to estimate MTF, such as the pinhole method, the slit method and so on. Among them, the knife-edge method is quite efficient and easy to use, and is recommended in the ISO 12233 standard for whole-frequency MTF curve acquisition. However, the accuracy of the algorithm is significantly affected by the Edge Spread Function (ESF) fitting accuracy, which limits its range of application. In this paper, an optimized knife-edge method using the Powell algorithm is therefore proposed to improve the ESF fitting precision. The Fermi function is the most popular ESF fitting model, yet it is vulnerable to the initial values of the parameters. Owing to its simplicity and fast convergence, the Powell algorithm is applied to fit the parameters adaptively, with insensitivity to the initial parameters. Numerical simulation results reveal the accuracy and robustness of the optimized algorithm under different SNR, edge direction and leaning angle conditions. Experimental results using images of the camera on the ZY-3 satellite show that this method is more accurate than the standard knife-edge method of ISO 12233 in MTF estimation.
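A sketch of the central fitting step, a Fermi-function ESF minimized with Powell's derivative-free method; the exact parameterization and starting values here are assumptions, not taken from the paper:

    import numpy as np
    from scipy.optimize import minimize

    def fermi_esf(x, a, b, c, d):
        """Fermi-function edge spread: amplitude a, edge location b,
        transition width c, offset d."""
        return a / (1.0 + np.exp(-(x - b) / c)) + d

    def fit_esf_powell(x, esf_obs, p0=(1.0, 0.0, 1.0, 0.0)):
        cost = lambda p: np.sum((fermi_esf(x, *p) - esf_obs) ** 2)
        return minimize(cost, p0, method="Powell").x

    # the MTF then follows from |FFT| of the line spread function d(ESF)/dx

Powell's method needs no gradients, which is what makes it tolerant of rough initial parameter guesses here.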
Toward Continuous GPS Carrier-Phase Time Transfer: Eliminating the Time Discontinuity at an Anomaly
Yao, Jian; Levine, Judah; Weiss, Marc
2015-01-01
The wide application of Global Positioning System (GPS) carrier-phase (CP) time transfer is limited by the problem of boundary discontinuity (BD). The discontinuity has two categories. One is "day boundary discontinuity," which has been studied extensively and can be solved by multiple methods [1–8]. The other category of discontinuity, called "anomaly boundary discontinuity (anomaly-BD)," comes from a GPS data anomaly. The anomaly can be a data gap (i.e., missing data), a GPS measurement error (i.e., bad data), or a cycle slip. Initial study of the anomaly-BD shows that we can fix the discontinuity if the anomaly lasts no more than 20 min, using the polynomial curve-fitting strategy to repair the anomaly [9]. However, sometimes the data anomaly lasts longer than 20 min. Thus, a better curve-fitting strategy is needed. Besides, a cycle slip, as another type of data anomaly, can occur and lead to an anomaly-BD. To solve these problems, this paper proposes a new strategy, i.e., the satellite-clock-aided curve-fitting strategy with the function of cycle slip detection. Basically, this new strategy applies the satellite clock correction to the GPS data. After that, we do the polynomial curve fitting for the code and phase data, as before. Our study shows that the phase-data residual is only ~3 mm for all GPS satellites. The new strategy also detects and finds the number of cycle slips by searching the minimum curve-fitting residual. Extensive examples show that this new strategy enables us to repair up to a 40-min GPS data anomaly, regardless of whether the anomaly is due to a data gap, a cycle slip, or a combination of the two. We also find that interference of the GPS signal, known as "jamming", can possibly lead to a time-transfer error, and that this new strategy can compensate for jamming outages. Thus, the new strategy can eliminate the impact of jamming on time transfer. As a whole, we greatly improve the robustness of the GPS CP time transfer. PMID:26958451
NASA Astrophysics Data System (ADS)
Horvath, Sarah; Myers, Sam; Ahlers, Johnathon; Barnes, Jason W.
2017-10-01
Stellar seismic activity produces variations in brightness that introduce oscillations into transit light curves, which can create challenges for traditional fitting models. These oscillations disrupt baseline stellar flux values and potentially mask transits. We develop a model that removes these oscillations from transit light curves by minimizing the significance of each oscillation in frequency space. By removing stellar variability, we prepare each light curve for traditional fitting techniques. We apply our model to the δ Scuti star KOI-976 and demonstrate that our variability-subtraction routine successfully allows for measuring bulk system characteristics using traditional light curve fitting. These results open a new window for characterizing bulk system parameters of planets orbiting seismically active stars.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; Anderson, Kevin K.; White, Amanda M.
Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions.
López, S; France, J; Odongo, N E; McBride, R A; Kebreab, E; AlZahal, O; McBride, B W; Dijkstra, J
2015-04-01
Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records corresponding to 122 first, 99 second, and 92 third parity individual lactation curves. The functions were fitted using nonlinear regression procedures, and their performance was assessed using goodness-of-fit statistics (coefficient of determination, residual mean squares, Akaike information criterion, and the correlation and concordance coefficients between observed and adjusted milk yields at several days in milk). Overall, all the growth functions evaluated showed an acceptable fit to the cumulative milk production curves, with the Richards equation ranking first (smallest Akaike information criterion) followed by the Morgan equation. Differences among the functions in their goodness-of-fit were enlarged when fitted to average curves by parity, where the sigmoidal functions with a variable point of inflection (Richards and Morgan) outperformed the other 4 equations. All the functions provided satisfactory predictions of milk yield (calculated from the first derivative of the functions) at different lactation stages, from early to late lactation. The Richards and Morgan equations provided the most accurate estimates of peak yield and total milk production per 305-d lactation, whereas the least accurate estimates were obtained with the logistic equation. In conclusion, classical growth functions (especially sigmoidal functions with a variable point of inflection) proved to be feasible alternatives to fit cumulative milk production curves of dairy cows, resulting in suitable statistical performance and accurate estimates of lactation traits. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
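As an illustration of one of the six fits, a Gompertz function fitted to cumulative yield, with daily milk yield recovered from the first derivative (the data points are illustrative placeholders, not from the Canadian records):

    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, a, b, c):
        """Cumulative yield: a = asymptotic total, b, c = shape/rate."""
        return a * np.exp(-b * np.exp(-c * t))

    t = np.array([10., 50., 100., 150., 200., 250., 305.])          # days in milk
    y = np.array([250., 1400., 2900., 4300., 5500., 6500., 7300.])  # kg, illustrative

    (a, b, c), _ = curve_fit(gompertz, t, y, p0=(9000.0, 3.0, 0.01))
    daily = a * b * c * np.exp(-c * t) * np.exp(-b * np.exp(-c * t))  # kg/day

The Richards and Morgan equations would be fitted the same way; their extra flexibility (a variable inflection point) is what the abstract credits for their better ranking.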
Neutron Multiplicity: LANL W Covariance Matrix for Curve Fitting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
2016-12-08
In neutron multiplicity counting one may fit a curve by minimizing an objective function, χ²_n. The objective function includes the inverse of an n by n matrix of covariances, W. The inverse of the W matrix has a closed-form solution, and W⁻¹ is a tri-diagonal matrix. The closed form and tridiagonal nature allow for a simpler expression of the objective function χ²_n. Minimization of this simpler expression provides the optimal parameters for the fitted curve.
ON THE ROTATION SPEED OF THE MILKY WAY DETERMINED FROM H i EMISSION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reid, M. J.; Dame, T. M.
2016-12-01
The circular rotation speed of the Milky Way at the solar radius, Θ0, has been estimated to be 220 km s⁻¹ by fitting the maximum velocity of H i emission as a function of Galactic longitude. This result is in tension with a recent estimate of Θ0 = 240 km s⁻¹, based on Very Long Baseline Interferometry (VLBI) parallaxes and proper motions from the BeSSeL and VERA surveys for large numbers of high-mass star-forming regions across the Milky Way. We find that the rotation curve best fitted to the VLBI data is slightly curved, and that this curvature results in a biased estimate of Θ0 from the H i data when a flat rotation curve is assumed. This relieves the tension between the methods and favors Θ0 = 240 km s⁻¹.
A new methodology for free wake analysis using curved vortex elements
NASA Technical Reports Server (NTRS)
Bliss, Donald B.; Teske, Milton E.; Quackenbush, Todd R.
1987-01-01
A method using curved vortex elements was developed for helicopter rotor free wake calculations. The Basic Curve Vortex Element (BCVE) is derived from the approximate Biot-Savart integration for a parabolic arc filament. When used in conjunction with a scheme to fit the elements along a vortex filament contour, this method has a significant advantage in overall accuracy and efficiency when compared to the traditional straight-line element approach. A theoretical and numerical analysis shows that free wake flows involving close interactions between filaments should utilize curved vortex elements in order to guarantee a consistent level of accuracy. The curved element method was implemented into a forward flight free wake analysis, featuring an adaptive far wake model that utilizes free wake information to extend the vortex filaments beyond the free wake regions. The curved vortex element free wake, coupled with this far wake model, exhibited rapid convergence, even in regions where the free wake and far wake turns are interlaced. Sample calculations are presented for tip vortex motion at various advance ratios for single and multiple blade rotors. Cross-flow plots reveal that the overall downstream wake flow resembles a trailing vortex pair. A preliminary assessment shows that the rotor downwash field is insensitive to element size, even for relatively large curved elements.
Comparative testing of dark matter models with 15 HSB and 15 LSB galaxies
NASA Astrophysics Data System (ADS)
Kun, E.; Keresztes, Z.; Simkó, A.; Szűcs, G.; Gergely, L. Á.
2017-12-01
Context. We assemble a database of 15 high surface brightness (HSB) and 15 low surface brightness (LSB) galaxies, for which surface brightness density and spectroscopic rotation curve data are both available and representative for various morphologies. We use this dataset to test the Navarro-Frenk-White, the Einasto, and the pseudo-isothermal sphere dark matter models. Aims: We investigate the compatibility of the pure baryonic model and baryonic plus one of the three dark matter models with observations on the assembled galaxy database. When a dark matter component improves the fit with the spectroscopic rotational curve, we rank the models according to the goodness of fit to the datasets. Methods: We constructed the spatial luminosity density of the baryonic component based on the surface brightness profile of the galaxies. We estimated the mass-to-light (M/L) ratio of the stellar component through a previously proposed color-mass-to-light ratio relation (CMLR), which yields stellar masses independent of the photometric band. We assumed an axisymmetric baryonic mass model with variable axis ratios together with one of the three dark matter models to provide the theoretical rotational velocity curves, and we compared them with the dataset. In a second attempt, we addressed the question whether the dark component could be replaced by a pure baryonic model with fitted M/L ratios, varied over ranges consistent with CMLR relations derived from the available stellar population models. We employed the Akaike information criterion to establish the performance of the best-fit models. Results: For 7 galaxies (2 HSB and 5 LSB), neither model fits the dataset within the 1σ confidence level. For the other 23 cases, one of the models with dark matter explains the rotation curve data best. According to the Akaike information criterion, the pseudo-isothermal sphere emerges as most favored in 14 cases, followed by the Navarro-Frenk-White (6 cases) and the Einasto (3 cases) dark matter models. We find that the pure baryonic model with fitted M/L ratios falls within the 1σ confidence level for 10 HSB and 2 LSB galaxies, at the price of increasing the M/L ratios on average by a factor of two, but the fits are inferior compared to the best-fitting dark matter model.
ERIC Educational Resources Information Center
Campo, Antonio; Rodriguez, Franklin
1998-01-01
Presents two alternative computational procedures for solving the modified Bessel equation of zero order: the Frobenius method, and the power series method coupled with a curve fit. Students in heat transfer courses can benefit from these alternative procedures; a course on ordinary differential equations is the only mathematical background that…
NASA Astrophysics Data System (ADS)
Magri, Alphonso William
This study was undertaken to develop a nonsurgical breast biopsy method based on Gd-DTPA Contrast Enhanced Magnetic Resonance (CE-MR) images and F-18-FDG PET/CT dynamic image series. A five-step process was developed to accomplish this. (1) Dynamic PET series were nonrigidly registered to the initial frame using a finite element method (FEM) based registration that requires fiducial skin markers to sample the displacement field between image frames. A commercial FEM package (ANSYS) was used for meshing and FEM calculations. Dynamic PET image series registrations were evaluated using the similarity measurements SAVD and NCC. (2) Dynamic CE-MR series were nonrigidly registered to the initial frame using two registration methods: a multi-resolution free-form deformation (FFD) registration driven by normalized mutual information, and a FEM-based registration method. Dynamic CE-MR image series registrations were evaluated using similarity measurements, localization measurements, and qualitative comparison of motion artifacts. FFD registration was found to be superior to FEM-based registration. (3) Nonlinear curve fitting was performed for each voxel of the PET/CT volume of activity versus time, based on a realistic two-compartmental Patlak model. Three parameters for this model were fitted; two of them describe the activity levels in the blood and in the cellular compartment, while the third characterizes the washout rate of F-18-FDG from the cellular compartment. (4) Nonlinear curve fitting was performed for each voxel of the MR volume of signal intensity versus time, based on a realistic two-compartment Brix model. Three parameters for this model were fitted: rate of Gd exiting the compartment, representing the extracellular space of a lesion; rate of Gd exiting a blood compartment; and a parameter that characterizes the strength of signal intensities. Curve fitting used for PET/CT and MR series was accomplished by application of the Levenberg-Marquardt nonlinear regression algorithm. The best-fit parameters were used to create 3D parametric images. Compartmental modeling evaluation was based on the ability of parameter values to differentiate between tissue types. This evaluation was used on registered and unregistered image series and found that registration improved results. (5) PET and MR parametric images were registered through FEM- and FFD-based registration. Parametric image registration was evaluated using similarity measurements, target registration error, and qualitative comparison. Comparing FFD and FEM-based registration results showed that the FEM method is superior. This five-step process constitutes a novel multifaceted approach to a nonsurgical breast biopsy that successfully executes each step. Comparison of this method to biopsy still needs to be done with a larger set of subject data.
Protofit: A program for determining surface protonation constants from titration data
NASA Astrophysics Data System (ADS)
Turner, Benjamin F.; Fein, Jeremy B.
2006-11-01
Determining the surface protonation behavior of natural adsorbents is essential to understand how they interact with their environments. ProtoFit is a tool for analysis of acid-base titration data and optimization of surface protonation models. The program offers a number of useful features including: (1) enables visualization of adsorbent buffering behavior; (2) uses an optimization approach independent of starting titration conditions or initial surface charge; (3) does not require an initial surface charge to be defined or to be treated as an optimizable parameter; (4) includes an error analysis intrinsically as part of the computational methods; and (5) generates simulated titration curves for comparison with observation. ProtoFit will typically be run through ProtoFit-GUI, a graphical user interface providing user-friendly control of model optimization, simulation, and data visualization. ProtoFit calculates an adsorbent proton buffering value as a function of pH from raw titration data (including pH and volume of acid or base added). The data is reduced to a form where the protons required to change the pH of the solution are subtracted out, leaving protons exchanged between solution and surface per unit mass of adsorbent as a function of pH. The buffering intensity function Qads* is calculated as the instantaneous slope of this reduced titration curve. Parameters for a surface complexation model are obtained by minimizing the sum of squares between the modeled (i.e. simulated) buffering intensity curve and the experimental data. The variance in the slope estimate, intrinsically produced as part of the Qads* calculation, can be used to weight the sum of squares calculation between the measured buffering intensity and a simulated curve. Effects of analytical error on data visualization and model optimization are discussed. Examples are provided of using ProtoFit for data visualization, model optimization, and model evaluation.
Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI
NASA Astrophysics Data System (ADS)
Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.
2017-12-01
Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least square (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments ‘plasma’ and ‘interstitial volume’ and their exchange in terms of plasma flow and capillary permeability. The model function can be defined by either a system of two coupled differential equations or a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness and computation speed, depending on parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured and a theoretical population based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated by either numeric integration with the Runge-Kutta method or convolution. Furthermore two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed in order to investigate the validity of our findings in real patient data. The convolution approach yields improved results in precision and robustness of determined parameters. Precision and stability are limited in curves with low blood flow. The model parameter ve shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach in several curve types. The convolution excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated by usage of the differential equations. Fitting with the convolution approach is superior in computational time, with better stability and accuracy at the same time.
NASA Astrophysics Data System (ADS)
Yang, Fanlin; Zhao, Chunxia; Zhang, Kai; Feng, Chengkai; Ma, Yue
2017-07-01
Acoustic seafloor classification with multibeam backscatter measurements is an attractive approach for mapping seafloor properties over a large area. However, artifacts in the multibeam backscatter measurements prevent accurate characterization of the seafloor. In particular, the backscatter level is extremely strong and highly variable in the near-nadir region due to the specular echo phenomenon. Consequently, striped artifacts emerge in the backscatter image, which can degrade the classification accuracy. This study focuses on the striped artifacts in multibeam backscatter images. To this end, a calibration algorithm based on equal mean-variance fitting is developed. By fitting the local shape of the angular response curve, the striped artifacts are compressed and moved according to the relations between the mean and variance in the near-nadir and off-nadir regions. The algorithm utilizes the measured data of the near-nadir region and retains the basic shape of the response curve. The experimental results verify the high performance of the proposed method.
Biological growth functions describe published site index curves for Lake States timber species.
Allen L. Lundgren; William A. Dolid
1970-01-01
Two biological growth functions, an exponential-monomolecular function and a simple monomolecular function, have been fit to published site index curves for 11 Lake States tree species: red, jack, and white pine, balsam fir, white and black spruce, tamarack, white-cedar, aspen, red oak, and paper birch. Both functions closely fit all published curves except those for...
A century of enzyme kinetic analysis, 1913 to 2013.
Johnson, Kenneth A
2013-09-02
This review traces the history and logical progression of methods for quantitative analysis of enzyme kinetics from the 1913 Michaelis and Menten paper to the application of modern computational methods today. Following a brief review of methods for fitting steady state kinetic data, modern methods are highlighted for fitting full progress curve kinetics based upon numerical integration of rate equations, including a re-analysis of the original Michaelis-Menten full time course kinetic data. Finally, several illustrations of modern transient state kinetic methods of analysis are shown which enable the elucidation of reactions occurring at the active sites of enzymes in order to relate structure and function. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
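A minimal sketch of the progress-curve approach highlighted in the review: the Michaelis-Menten rate law is integrated numerically and the integrated curve is fitted to product-versus-time data (synthetic data here; not the review's own code):

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import curve_fit

    def progress_curve(t, Vmax, Km, S0):
        """Product formed vs. time from dS/dt = -Vmax*S/(Km+S)."""
        sol = solve_ivp(lambda _, S: [-Vmax * S[0] / (Km + S[0])],
                        (0.0, t[-1]), [S0], t_eval=t, rtol=1e-8)
        return S0 - sol.y[0]

    t_obs = np.linspace(0.0, 60.0, 30)                 # s
    p_obs = progress_curve(t_obs, 1.2, 45.0, 100.0)    # synthetic "data"
    (Vmax, Km, S0), _ = curve_fit(progress_curve, t_obs, p_obs,
                                  p0=(1.0, 30.0, 90.0))

Fitting the full time course this way uses every data point, rather than only initial rates, which is the advantage the review attributes to numerical-integration methods.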
Howard, Robert W
2014-09-01
The power law of practice holds that a power function best interrelates skill performance and amount of practice. However, the law's validity and generality are moot. Some researchers argue that it is an artifact of averaging individual exponential curves, while others question whether the law generalizes to complex skills and to performance measures other than response time. The present study tested the power law's generality for development, over many years, of a very complex cognitive skill, chess playing, with 387 skilled participants, most of whom were grandmasters. A power or logarithmic function best fit grouped data, but individuals showed much variability. An exponential function usually was the worst fit to individual data. Groups differing in chess talent were compared: a power function best fit the group curve for the more talented players, while a quadratic function best fit that for the less talented. After extreme amounts of practice, a logarithmic function best fit grouped data, but a quadratic function best fit most individual curves. Individual variability is great, and neither the power law nor an exponential law is the best description of individual chess skill development. Copyright © 2014 Elsevier B.V. All rights reserved.
Prediction Analysis for Measles Epidemics
NASA Astrophysics Data System (ADS)
Sumi, Ayako; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi; Olsen, Lars Folke; Kobayashi, Nobumichi
2003-12-01
A newly devised procedure for prediction analysis, a linearized version of the nonlinear least squares method combined with the maximum entropy spectral analysis method, is proposed. This method was applied to time series data of measles case notifications in several communities in the UK, USA and Denmark. The dominant spectral lines observed in each power spectral density (PSD) can be safely assigned as fundamental periods. The optimum least squares fitting (LSF) curve calculated using these fundamental periods can essentially reproduce the underlying variation of the measles data. An extension of the LSF curve can be used to predict measles case notifications quantitatively. Some discussion, including the predictability of chaotic time series, is presented.
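The optimum least-squares fitting (LSF) step reduces to a linear fit once the fundamental periods have been read off the MEM power spectrum; a sketch, with illustrative period values in the comment:

    import numpy as np

    def lsf_harmonics(t, y, periods):
        """Least-squares fit of a constant plus one sine/cosine pair
        per fundamental period taken from the MEM spectrum."""
        cols = [np.ones_like(t)]
        for P in periods:
            cols += [np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)]
        X = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return X @ beta, beta     # fitted LSF curve and coefficients

    # e.g. annual and biennial cycles often dominate measles notifications:
    # fitted, coef = lsf_harmonics(t_weeks / 52.0, cases, periods=[1.0, 2.0])

Extrapolating the fitted sum of sinusoids beyond the data span gives the quantitative prediction of future case notifications described above.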
Triangulation of multistation camera data to locate a curved line in space
NASA Technical Reports Server (NTRS)
Fricke, C. L.
1974-01-01
A method is described for finding the location of a curved line in space from local azimuth as a function of elevation data obtained at several observation sites. A least-squares criterion is used to insure the best fit to the data. The method is applicable to the triangulation of an object having no identifiable structural features, provided its width is very small compared with its length so as to approximate a line in space. The method was implemented with a digital computer program and was successfully applied to data obtained from photographs of a barium ion cloud which traced out the earth's magnetic field line at very high altitudes.
Predicting long-term graft survival in adult kidney transplant recipients.
Pinsky, Brett W; Lentine, Krista L; Ercole, Patrick R; Salvalaggio, Paolo R; Burroughs, Thomas E; Schnitzler, Mark A
2012-07-01
The ability to accurately predict a population's long-term survival has important implications for quantifying the benefits of transplantation. To identify a model that can accurately predict a kidney transplant population's long-term graft survival, we retrospectively studied the United Network of Organ Sharing data from 13,111 kidney-only transplants completed in 1988-1989. Nineteen-year death-censored graft survival (DCGS) projections were calculated and compared with the population's actual graft survival. The projection curves were created using a two-part estimation model that (1) fits a Kaplan-Meier survival curve immediately after transplant (Part A) and (2) uses truncated observational data to model a survival function for long-term projection (Part B). Projection curves were examined using varying amounts of time to fit both parts of the model. The accuracy of the projection curve was determined by examining whether predicted survival fell within the 95% confidence interval for the 19-year Kaplan-Meier survival, and the sample size needed to detect the difference in projected versus observed survival in a clinical trial. The 19-year DCGS was 40.7% (39.8-41.6%). Excellent predictability (41.3%) can be achieved when Part A is fit for three years and Part B is projected using two additional years of data. Using less than five total years of data tended to overestimate the population's long-term survival. Accurate prediction of long-term DCGS is possible, but requires attention to the quantity of data used in the projection method.
2011-09-01
[Report residue: fragments describing a bilinear plasticity relation that allows a full range of hardening from isotropic to kinematic; "Table 12. Verification of the Weight Function Method for Single Corner Crack at a Hole in an Infinite ..."; and a note that the "Young's Modulus", the slope of the linear region of the curve, is determined by curve fitting the experimental data.]
Three-Dimensional Surface Parameters and Multi-Fractal Spectrum of Corroded Steel
Shanhua, Xu; Songbo, Ren; Youde, Wang
2015-01-01
To study the multi-fractal behavior of corroded steel surfaces, a range of fractal representations of corroded Q235 steel surfaces was constructed using the Weierstrass-Mandelbrot (W-M) method with high overall accuracy. The multi-fractal spectrum of the fractal surface of corroded steel was calculated to study the multi-fractal characteristics of the W-M corroded surface. Based on the shape of the multi-fractal spectrum of the corroded steel surface, the least squares method was applied to quadratic fitting of the multi-fractal spectrum of the corroded surface. The fitting function was quantitatively analyzed to simplify the calculation of the multi-fractal characteristics of the corroded surface. The results showed that the multi-fractal spectrum of the corroded surface was fitted well by the quadratic curve fitting method, and the evolution rules and trends were forecast accurately. The findings can be applied to research on the mechanisms of corroded surface formation in steel and provide a new approach for establishing corrosion damage constitutive models of steel. PMID:26121468
NASA Technical Reports Server (NTRS)
Rodrigues, C. V.; Magalhaes, A. M.; Coyne, G. V.
1995-01-01
We study the dust in the Small Magellanic Cloud using our polarization and extinction data (Paper 1) and existing dust models. The data suggest that the monotonic SMC extinction curve is related to values of lambda(sub max), the wavelength of maximum polarization, which are on average smaller than the mean for the Galaxy. On the other hand, AZV 456, a star with an extinction curve similar to that for the Galaxy, shows a value of lambda(sub max) similar to the mean for the Galaxy. We discuss simultaneous dust model fits to extinction and polarization. Fits to the wavelength-dependent polarization data are possible for stars with small lambda(sub max). In general, they imply dust size distributions which are narrower and have smaller mean sizes compared to typical size distributions for the Galaxy. However, stars with lambda(sub max) close to the Galactic norm, which also have a narrower polarization curve, cannot be fit adequately. This holds true for all of the dust models considered. The best fits to the extinction curves are obtained with a power law size distribution by assuming that the cylindrical and spherical silicate grains have a volume distribution which is continuous from the smaller spheres to the larger cylinders. The size distribution for the cylinders is taken from the fit to the polarization. The 'typical', monotonic SMC extinction curve can be fit well with graphite and silicate grains if a small fraction of the SMC carbon is locked up in the grains. However, amorphous carbon and silicate grains also fit the data well. AZV 456, which has an extinction curve similar to that for the Galaxy, has a UV bump which is too blue to be fit by spherical graphite grains.
Comparison of Three Methods for Wind Turbine Capacity Factor Estimation
Ditkovich, Y.; Kuperman, A.
2014-01-01
Three approaches to calculating the capacity factor of fixed-speed wind turbines are reviewed and compared using a case study. The first, "quasi-exact," approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. The second, "analytic," approach employs a continuous probability distribution function fitted to the wind data, as well as a continuous turbine power curve resulting from double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while an approximation, can be solved analytically, thus providing valuable insight into the aspects affecting the capacity factor; moreover, several other figures of merit of wind turbine performance may be derived from it. The third, "approximate," approach, valid in the case of Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained by the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation. PMID:24587755
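A minimal sketch of the "quasi-exact" capacity factor calculation described above: a Rayleigh wind-speed density is integrated numerically against a piecewise turbine power curve. The turbine figures (cut-in, rated, and cut-out speeds, rated power) are hypothetical, not from the case study.

```python
import numpy as np

# Hypothetical fixed-speed turbine: cut-in 3.5, rated 14, cut-out 25 m/s, 2 MW rated.
v_ci, v_r, v_co, P_r = 3.5, 14.0, 25.0, 2.0e6

def power_curve(v):
    """Piecewise turbine power curve; cubic rise between cut-in and rated speed."""
    v = np.asarray(v, dtype=float)
    p = np.where((v >= v_ci) & (v < v_r),
                 P_r * (v**3 - v_ci**3) / (v_r**3 - v_ci**3), 0.0)
    return np.where((v >= v_r) & (v <= v_co), P_r, p)

def rayleigh_pdf(v, v_mean):
    """Rayleigh wind-speed density parameterized by the mean wind speed."""
    s = v_mean * np.sqrt(2/np.pi)
    return (v / s**2) * np.exp(-v**2 / (2*s**2))

# "Quasi-exact" capacity factor: integrate (PDF x power curve) numerically.
v = np.linspace(0, 30, 3001)
for v_mean in (6.0, 7.5, 9.0):
    cf = np.trapz(power_curve(v) * rayleigh_pdf(v, v_mean), v) / P_r
    print(f"mean wind {v_mean} m/s -> capacity factor {cf:.3f}")
```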
Wei, Qiuning; Wei, Yuan; Liu, Fangfang; Ding, Yalei
2015-10-01
To investigate a method for evaluating the uncertainty in the determination of tin and its compounds in the air of the workplace by flame atomic absorption spectrometry. The national occupational health standards GBZ/T160.28-2004 and JJF1059-1999 were used to build a mathematical model for the determination of tin and its compounds in the air of the workplace and to calculate the components of uncertainty. In the determination of tin and its compounds by flame atomic absorption spectrometry, the uncertainty for the concentration of the standard solution, the atomic absorption spectrophotometer, sample digestion, parallel determination, least squares fitting of the calibration curve, and sample collection was 0.436%, 0.13%, 1.07%, 1.65%, 3.05%, and 2.89%, respectively. The combined uncertainty was 9.3%. The concentration of tin in the test sample was 0.132 mg/m³, and the expanded uncertainty for the measurement was 0.012 mg/m³ (K=2). The dominant uncertainty in the determination of tin and its compounds in the air of the workplace comes from least squares fitting of the calibration curve and sample collection. Quality control should be improved in the process of calibration curve fitting and sample collection.
NASA Astrophysics Data System (ADS)
Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.
2016-05-01
The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discuss the fitting of potentiometric titration curve data using a nonlinear regression method based on the Levenberg-Marquardt algorithm. The results obtained by statistical treatment of the titration curve data showed that the best fit was obtained by considering the presence of five Brønsted-Lowry acids on the surface of the carbon dots, with ionization constants characteristic of carboxylic acid, cyclic ester, phenolic and pyrone-like groups. The total number of oxygenated acid groups obtained was 5 mmol g-1, with approximately 65% (∼2.9 mmol g-1) originating from groups with pKa < 6. The methodology showed good reproducibility and stability, with standard deviations below 5%. The nature of the groups was independent of small variations in experimental conditions, i.e. the mass of carbon dots titrated and the initial concentration of the HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool to characterize the complex acid-base properties of these intriguing nanoparticles.
ERIC Educational Resources Information Center
Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey
2009-01-01
The purpose of this study was to assess the model fit of a 2PL through comparison with nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that the three nonparametric procedures implemented produced ICCs similar to those of the 2PL for items simulated to fit the 2PL. However, for misfitting items,…
Global search in photoelectron diffraction structure determination using genetic algorithms
NASA Astrophysics Data System (ADS)
Viana, M. L.; Díez Muiño, R.; Soares, E. A.; Van Hove, M. A.; de Carvalho, V. E.
2007-11-01
Photoelectron diffraction (PED) is an experimental technique widely used to perform structural determinations of solid surfaces. Similarly to low-energy electron diffraction (LEED), structural determination by PED requires a fitting procedure between the experimental intensities and theoretical results obtained through simulations. Multiple scattering has been shown to be an effective approach for making such simulations. The quality of the fit can be quantified through the so-called R-factor. The fitting procedure is, therefore, an R-factor minimization problem. However, the topography of the R-factor as a function of the structural and non-structural surface parameters to be determined is complex, and the task of finding the global minimum becomes difficult, particularly for complex structures in which many parameters have to be adjusted. In this work we investigate the applicability of the genetic algorithm (GA) global optimization method to this problem. The GA is based on the evolution of species, and makes use of concepts such as crossover, elitism and mutation to perform the search. We show results of its application in the structural determination of three different systems: the Cu(111) surface, through the use of energy-scanned experimental curves; the Ag(110)-c(2 × 2)-Sb system, in which a theory-theory fit was performed; and the Ag(111) surface, for which angle-scanned experimental curves were used. We conclude that the GA is a highly efficient method for searching for global minima in the optimization of the parameters that best fit the experimental photoelectron diffraction intensities to the theoretical ones.
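The following toy sketch illustrates the kind of GA-based R-factor minimization described above, with a cheap synthetic intensity curve standing in for multiple-scattering simulations; the population size, operators, and parameters are illustrative choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for an R-factor: mismatch between a simulated intensity curve
# I(E; params) and a "measured" target generated with hidden true parameters.
E = np.linspace(0, 10, 200)
def intensity(p):
    a, b, c = p
    return a * np.sin(b * E) * np.exp(-c * E)

target = intensity(np.array([1.3, 2.1, 0.15]))
def r_factor(p):
    sim = intensity(p)
    return np.sum((sim - target)**2) / np.sum(target**2)

# Minimal genetic algorithm: tournament selection, blend crossover,
# Gaussian mutation, and elitism, as in the search strategy described above.
lo, hi = np.array([0.1, 0.1, 0.01]), np.array([3.0, 5.0, 1.0])
pop = rng.uniform(lo, hi, size=(60, 3))
for gen in range(150):
    fit = np.array([r_factor(ind) for ind in pop])
    elite = pop[np.argmin(fit)].copy()
    # tournament selection: the fitter of each random pair becomes a parent
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
    # blend crossover plus Gaussian mutation
    alpha = rng.uniform(size=(len(pop), 1))
    children = alpha * parents + (1 - alpha) * parents[::-1]
    children += rng.normal(0, 0.05, children.shape)
    pop = np.clip(children, lo, hi)
    pop[0] = elite                                  # elitism
best = pop[np.argmin([r_factor(ind) for ind in pop])]
print("best parameters:", np.round(best, 3), " R-factor:", round(float(r_factor(best)), 6))
```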
A Robust Bayesian Random Effects Model for Nonlinear Calibration Problems
Fong, Y.; Wakefield, J.; De Rosa, S.; Frahm, N.
2013-01-01
In the context of a bioassay or an immunoassay, calibration means fitting a curve, usually nonlinear, through the observations collected on a set of samples containing known concentrations of a target substance, and then using the fitted curve and observations collected on samples of interest to predict the concentrations of the target substance in these samples. Recent technological advances have greatly improved our ability to quantify minute amounts of substance from a tiny volume of biological sample. This has in turn led to a need to improve statistical methods for calibration. In this paper, we focus on developing calibration methods robust to dependent outliers. We introduce a novel normal mixture model with dependent error terms to model the experimental noise. In addition, we propose a re-parameterization of the five-parameter logistic nonlinear regression model that allows us to better incorporate prior information. We examine the performance of our methods with simulation studies and show that they lead to a substantial increase in performance, measured in terms of mean squared error of estimation and a measure of average prediction accuracy. A real data example from the HIV Vaccine Trials Network Laboratory is used to illustrate the methods. PMID:22551415
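Below is a hedged sketch of calibration with a five-parameter logistic (5PL) curve, the model family re-parameterized in the paper: fit the standard curve, then invert it to predict concentration from a new reading. The data and starting values are synthetic, and the paper's Bayesian random effects machinery is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic5(x, a, b, c, d, g):
    """Five-parameter logistic (5PL): a and d are the asymptotes, c the
    inflection concentration, b the slope, g the asymmetry factor."""
    return d + (a - d) / (1 + (x / c)**b)**g

# Synthetic standard curve: known concentrations with multiplicative noise.
rng = np.random.default_rng(2)
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300], dtype=float)
truth = logistic5(conc, 30000, 1.2, 20.0, 50, 0.8)
obs = truth * (1 + rng.normal(0, 0.05, conc.size))

p0 = [obs.max(), 1.0, np.median(conc), obs.min(), 1.0]
popt, _ = curve_fit(logistic5, conc, obs, p0=p0, maxfev=20000)

# Calibration: invert the fitted curve to predict concentration from a reading.
def invert(y, a, b, c, d, g):
    return c * (((a - d) / (y - d))**(1/g) - 1)**(1/b)

y_new = 12000.0
print("predicted concentration:", round(float(invert(y_new, *popt)), 2))
```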
[Quantitative study of diesel/CNG buses exhaust particulate size distribution in a road tunnel].
Zhu, Chun; Zhang, Xu
2010-10-01
Vehicle emission is one of the main sources of fine/ultrafine particles in many cities. This study presents daily mean particle size distributions for a mixed diesel/CNG bus traffic flow, based on four consecutive days of real-world measurement in an Australian road tunnel. Emission factors (EFs) for the particle size distributions of diesel buses and CNG buses were obtained by multiple linear regression (MLR) methods; the particle distributions of diesel buses and CNG buses appear as a single accumulation mode and a single nuclei mode, respectively. The particle size distributions of the mixed traffic flow were decomposed into two log-normal fitted curves for each 30-min mean scan; the goodness-of-fit between the combined fitted curves and the corresponding in-situ scans, over a total of 90 fitted scans, ranged from 0.972 to 0.998. Finally, the particle size distributions of diesel buses and CNG buses were quantified using box-whisker charts. For the log-normal particle size distribution of diesel buses, accumulation mode diameters are 74.5-86.5 nm and geometric standard deviations are 1.88-2.05. For the log-normal particle size distribution of CNG buses, nuclei-mode diameters are 19.9-22.9 nm and geometric standard deviations are 1.27-1.30.
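A small sketch of the decomposition step described above, on synthetic data: a measured size-distribution scan is fitted as the sum of two log-normal modes (nuclei and accumulation), recovering the mode diameters and geometric standard deviations. The mode parameters below are illustrative values in the ranges reported by the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_mode(d, n, d_g, sigma_g):
    """One log-normal mode of a particle number-size distribution dN/dlogD."""
    return n * np.exp(-((np.log(d) - np.log(d_g))**2) / (2 * np.log(sigma_g)**2))

def two_modes(d, n1, dg1, sg1, n2, dg2, sg2):
    return lognormal_mode(d, n1, dg1, sg1) + lognormal_mode(d, n2, dg2, sg2)

# Synthetic mixed-traffic scan: nuclei mode (~21 nm, CNG-like) plus
# accumulation mode (~80 nm, diesel-like), with 3% multiplicative noise.
rng = np.random.default_rng(3)
d = np.logspace(np.log10(10), np.log10(400), 60)          # diameters in nm
y = two_modes(d, 8e3, 21.0, 1.28, 5e3, 80.0, 1.95)
y *= 1 + rng.normal(0, 0.03, d.size)

p0 = [y.max(), 20, 1.3, y.max()/2, 90, 1.8]
popt, _ = curve_fit(two_modes, d, y, p0=p0, maxfev=20000)
n1, dg1, sg1, n2, dg2, sg2 = popt
print(f"nuclei mode: {dg1:.1f} nm, GSD {sg1:.2f}")
print(f"accumulation mode: {dg2:.1f} nm, GSD {sg2:.2f}")
```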
A synthetic method of solar spectrum based on LED
NASA Astrophysics Data System (ADS)
Wang, Ji-qiang; Su, Shi; Zhang, Guo-yu; Zhang, Jian
2017-10-01
A method for synthesizing the solar spectrum, based on the spectral characteristics of the solar spectrum and of LEDs and on the principle of arbitrary spectral synthesis, was studied using 14 kinds of LEDs with different central wavelengths. The LED and solar spectrum data were first selected in Origin software; the total number of LEDs for each central band was then calculated from the transformation relation between brightness and illuminance, using least squares curve fitting in MATLAB. Finally, a spectral curve matching the AM1.5 standard solar spectrum was obtained. The results met the technical targets of solar spectrum matching within ±20% and a solar constant above 0.5.
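The least squares synthesis step might look like the sketch below, which uses non-negative least squares to weight a bank of Gaussian "LED" spectra so that their sum approximates a target spectrum. Both the target shape and the LED models are illustrative stand-ins, not real AM1.5 or LED data.

```python
import numpy as np
from scipy.optimize import nnls

# Wavelength grid and a smooth stand-in for the AM1.5 target spectrum
# (illustrative shape only, not real AM1.5 data).
wl = np.linspace(400, 1000, 301)
target = np.exp(-((wl - 560) / 180.0)**2) + 0.3 * np.exp(-((wl - 850) / 120.0)**2)

# 14 LED channels modeled as Gaussians at different central wavelengths.
centers = np.linspace(420, 980, 14)
leds = np.stack([np.exp(-((wl - c) / 25.0)**2) for c in centers], axis=1)

# Non-negative least squares gives the drive weight (relative LED count or
# intensity) for each channel; negative weights would be unphysical.
weights, _ = nnls(leds, target)
synth = leds @ weights

err = np.max(np.abs(synth - target)) / target.max()
print("per-channel weights:", np.round(weights, 3))
print(f"worst-case mismatch: {100*err:.1f}% of peak")
```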
Component Analysis of Remanent Magnetization Curves: A Revisit with a New Model Distribution
NASA Astrophysics Data System (ADS)
Zhao, X.; Suganuma, Y.; Fujii, M.
2017-12-01
Geological samples often consist of several magnetic components that have distinct origins. As the magnetic components are often indicative of their underlying geological and environmental processes, it is desirable to identify individual components to extract the associated information. This component analysis can be achieved using the so-called unmixing method, which fits a mixture model of a certain end-member model distribution to the measured remanent magnetization curve. In earlier studies, the lognormal, skew generalized Gaussian and skewed Gaussian distributions have been used as end-member model distributions, with fitting performed on the gradient of the remanent magnetization curve. However, gradient curves are sensitive to measurement noise, as differentiation of the measured curve amplifies noise, which can deteriorate the component analysis. Though either smoothing or filtering can be applied to reduce the noise before differentiation, their potential to bias the component analysis has been only vaguely addressed. In this study, we investigated a new model function that can be applied directly to the remanent magnetization curve and therefore avoids differentiation. The new model function provides a more flexible shape than the lognormal distribution, an advantage when modeling the coercivity distribution of complex magnetic components. We applied the unmixing method to both model and measured data, and compared the results with those obtained using other model distributions to better understand their interchangeability, applicability and limitations. The analyses on model data suggest that unmixing methods are inherently sensitive to noise, especially when the number of components exceeds two. It is therefore recommended to verify the reliability of the component analysis by running multiple analyses with synthetic noise. Marine sediments and seafloor rocks were analyzed with the new model distribution. Given the same number of components, the new model distribution provides closer fits than the lognormal distribution, as evidenced by reduced residuals. Moreover, the new unmixing protocol is automated, so that users are freed from the labor of providing initial guesses for the parameters, which also helps reduce the subjectivity of the component analysis.
Statistical aspects of modeling the labor curve.
Zhang, Jun; Troendle, James; Grantz, Katherine L; Reddy, Uma M
2015-06-01
In a recent review by Cohen and Friedman, several statistical questions on modeling labor curves were raised. This article illustrates that asking data to fit a preconceived model or letting a sufficiently flexible model fit observed data is the main difference in principles of statistical modeling between the original Friedman curve and our average labor curve. An evidence-based approach to construct a labor curve and establish normal values should allow the statistical model to fit observed data. In addition, the presence of the deceleration phase in the active phase of an average labor curve was questioned. Forcing a deceleration phase to be part of the labor curve may have artificially raised the speed of progression in the active phase with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor because of variations in individual labor pattern and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and contemporary obstetric population into account.
Finding Planets in K2: A New Method of Cleaning the Data
NASA Astrophysics Data System (ADS)
Currie, Miles; Mullally, Fergal; Thompson, Susan E.
2017-01-01
We present a new method for removing systematic flux variations from K2 light curves by employing a pixel-level principal component analysis (PCA). This method decomposes the light curves into their principal components (eigenvectors), each with an associated eigenvalue whose magnitude reflects how much influence the basis vector has on the shape of the light curve. The method assumes that the most influential basis vectors correspond to the unwanted systematic variations in the light curve produced by K2's constant motion. We correct the raw light curve by automatically fitting and removing the strongest principal components, which generally correspond to the flux variations that result from the motion of the star in the field of view. Our primary method for choosing the strongest principal components estimates the noise by measuring the scatter in the light curve after Savitzky-Golay detrending, which yields the combined photometric precision (SG-CDPP) value used in classic Kepler. We calculate this value after correcting the raw light curve for each element in a list of cumulative sums of principal components, so that we have as many noise estimates as there are principal components. We then take the derivative of the list of SG-CDPP values and select the number of principal components corresponding to the point at which the derivative effectively goes to zero; this is the optimal number of principal components to exclude from the refitting of the light curve. We find that a pixel-level PCA is sufficient for cleaning unwanted systematic and natural noise from K2's light curves. We present preliminary results and a basic comparison with other methods of reducing the noise from the flux variations.
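A minimal sketch of pixel-level PCA detrending on synthetic data: the strongest temporal eigenvector (dominated by a shared systematic) is fitted to and subtracted from the summed light curve. The component-selection rule via SG-CDPP derivatives is simplified here to a fixed count of one component.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_time = 20, 500

# Simulated pixel time series: a shared systematic drift (spacecraft motion)
# plus a small transit signal and white noise.
t = np.linspace(0, 1, n_time)
systematic = 0.02 * np.sin(2 * np.pi * 12 * t) + 0.01 * t
transit = np.where((t % 0.25) < 0.01, -0.005, 0.0)
pixels = (1.0 + systematic + rng.normal(0, 0.002, (n_pix, n_time))
          + transit * rng.uniform(0.5, 1.5, (n_pix, 1)))

# Pixel-level PCA via SVD of the mean-subtracted pixel matrix.
X = pixels - pixels.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Remove the strongest principal components (here: the top one) by projecting
# the raw summed light curve onto them and subtracting the fit.
raw = pixels.sum(axis=0)
n_remove = 1                      # in practice chosen from a noise-estimate curve
basis = Vt[:n_remove]             # strongest temporal eigenvectors (orthonormal)
coeffs = basis @ (raw - raw.mean())
clean = raw - coeffs @ basis

print("raw scatter:  ", round(float(np.std(np.diff(raw))), 5))
print("clean scatter:", round(float(np.std(np.diff(clean))), 5))
```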
Space-Based Observation Technology
2000-10-01
Fragmentary snippet citing Conan, V. Michau, and S. Salem, "Regularized multiframe myopic deconvolution from wavefront sensing," in Propagation through the Atmosphere III; ...for a specified false alarm rate PFA, proceeding with curve fitting, one obtains a best-fit curve (quoted in the source as "10.1y14.2 - 0.2") as the detector for the target.
Laurson, Kelly R; Saint-Maurice, Pedro F; Welk, Gregory J; Eisenmann, Joey C
2017-08-01
Laurson, KR, Saint-Maurice, PF, Welk, GJ, and Eisenmann, JC. Reference curves for field tests of musculoskeletal fitness in U.S. children and adolescents: The 2012 NHANES National Youth Fitness Survey. J Strength Cond Res 31(8): 2075-2082, 2017-The purpose of the study was to describe current levels of musculoskeletal fitness (MSF) in U.S. youth by creating nationally representative age-specific and sex-specific growth curves for handgrip strength (including relative and allometrically scaled handgrip), modified pull-ups, and the plank test. Participants in the National Youth Fitness Survey (n = 1,453) were tested on MSF, aerobic capacity (via submaximal treadmill test), and body composition (body mass index [BMI], waist circumference, and skinfolds). Using LMS regression, age-specific and sex-specific smoothed percentile curves of MSF were created and existing percentiles were used to assign age-specific and sex-specific z-scores for aerobic capacity and body composition. Correlation matrices were created to assess the relationships between z-scores on MSF, aerobic capacity, and body composition. At younger ages (3-10 years), boys scored higher than girls for handgrip strength and modified pull-ups, but not for the plank. By ages 13-15, differences between the boys and girls curves were more pronounced, with boys scoring higher on all tests. Correlations between tests of MSF and aerobic capacity were positive and low-to-moderate in strength. Correlations between tests of MSF and body composition were negative, excluding absolute handgrip strength, which was inversely related to other MSF tests and aerobic capacity but positively associated with body composition. The growth curves herein can be used as normative reference values or a starting point for creating health-related criterion reference standards for these tests. Comparisons with prior national surveys of physical fitness indicate that some components of MSF have likely decreased in the United States over time.
NASA Technical Reports Server (NTRS)
Alston, D. W.
1981-01-01
The objective of this research was to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to remove the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
SU-G-206-17: RadShield: Semi-Automated Shielding Design for CT Using NCRP 147 and Isodose Curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeLorenzo, M; Rutel, I; Yang, K
2016-06-15
Purpose: Using RadShield, a semi-automated diagnostic shielding software package, computed tomography (CT) exam rooms can be shielded more quickly and accurately than with manual calculations. Last year, we presented RadShield’s approach to shielding radiographic and fluoroscopic rooms, calculating air kerma rate and barrier thickness at many points on the floor plan and reporting the maximum values for each barrier. RadShield has now been expanded to include CT shielding design, using not only NCRP 147 methodology but also vendor-provided isodose curves overlaid onto the floor plan. Methods: The floor plan image is imported onto the RadShield workspace to serve as a template for drawing barriers, occupied regions and CT locations. SubGUIs are used to set design goals, occupancy factors, and workload, and to overlay isodose curve files. CTDI and DLP methods are solved following NCRP 147. RadShield’s isodose curve method employs radial scanning to extract data point sets and fit kerma to a generalized power law equation of the form K(r) = ar^b. RadShield’s semi-automated shielding recommendations were compared against a board-certified medical physicist’s design using dose length product (DLP) and isodose curves. Results: The percentage error between the physicist’s manual calculation and RadShield’s semi-automated calculation of lead barrier thickness was 3.42% and 21.17% for the DLP and isodose curve methods, respectively. The medical physicist’s selection of calculation points for recommending lead thickness was roughly the same as RadShield’s for the DLP method but differed greatly using the isodose method. Conclusion: RadShield improves accuracy in calculating air-kerma rate and barrier thickness over manual calculations using isodose curves. Isodose curves were less intuitive and more prone to error for the physicist than inverse-square methods. RadShield can now perform shielding design calculations for general scattering bodies for which isodose curves are provided.
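The radial-scanning fit K(r) = a·r^b described above reduces to a linear fit in log-log space. A sketch with hypothetical isodose readings follows; the distances and kerma values are made up for illustration.

```python
import numpy as np

# Hypothetical (distance, air-kerma) pairs read off a vendor isodose curve
# by radial scanning: kerma in uGy per exam at distances in meters.
r = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
K = np.array([180.0, 47.0, 22.0, 12.5, 5.8, 3.4])

# K(r) = a * r**b is linear in log space: log K = log a + b * log r.
b, log_a = np.polyfit(np.log(r), np.log(K), 1)
a = np.exp(log_a)
print(f"fit: K(r) = {a:.1f} * r^{b:.2f}")   # b near -2 ~ inverse square

# Predicted kerma at an occupied point 2.5 m away:
print(f"K(2.5 m) = {a * 2.5**b:.1f} uGy per exam")
```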
NASA Astrophysics Data System (ADS)
Chen, Xi; Zhong, Jiaqi; Song, Hongwei; Zhu, Lei; Wang, Jin; Zhan, Mingsheng
2014-08-01
Vibrational noise is one of the most important noise sources limiting the performance of weak-equivalence-principle (WEP) test experiments based on non-isotope (dual-species) atom interferometers (AIs). By analyzing the vibration-induced phases, we find that, although the induced phases are not completely common, their ratio is always a constant at every experimental data point, a fact not fully utilized in the traditional ellipse-fitting method. From this point, we propose a strategy that can greatly suppress the vibration-induced phase noise by stabilizing the Raman laser frequencies at high precision and controlling the scanning-phase ratio. The noise rejection ratio can be as high as 10^15 with arbitrary dual-species AIs. Our method produces a Lissajous curve, and the shape of the curve indicates the breakdown of the weak-equivalence-principle signal. We then derive an estimator for the differential phase of the Lissajous curve. This strategy could be helpful in extending the candidate atomic species for high-precision AI-based WEP-test experiments.
CONFIRMATION OF HOT JUPITER KEPLER-41b VIA PHASE CURVE ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quintana, Elisa V.; Rowe, Jason F.; Caldwell, Douglas A.
We present high precision photometry of Kepler-41, a giant planet in a 1.86 day orbit around a G6V star that was recently confirmed through radial velocity measurements. We have developed a new method to confirm giant planets solely from the photometric light curve, and we apply this method herein to Kepler-41 to establish the validity of this technique. We generate a full phase photometric model by including the primary and secondary transits, ellipsoidal variations, Doppler beaming, and reflected/emitted light from the planet. Third light contamination scenarios that can mimic a planetary transit signal are simulated by injecting a full range of dilution values into the model, and we re-fit each diluted light curve model to the light curve. The resulting constraints on the maximum occultation depth and stellar density combined with stellar evolution models rules out stellar blends and provides a measurement of the planet's mass, size, and temperature. We expect about two dozen Kepler giant planets can be confirmed via this method.
Demonstrations in Solute Transport Using Dyes: Part II. Modeling.
ERIC Educational Resources Information Center
Butters, Greg; Bandaranayake, Wije
1993-01-01
A solution of the convection-dispersion equation is used to describe the solute breakthrough curves generated in the demonstrations in the companion paper. Estimation of the best fit model parameters (solute velocity, dispersion, and retardation) is illustrated using the method of moments for an example data set. (Author/MDH)
Planar Cubics Through a Point in a Direction
NASA Technical Reports Server (NTRS)
Chou, J. J.; Blake, M. W.
1993-01-01
It is shown that the planar cubics through three points with associated tangent directions can be found by solving a cubic equation and a 2 x 2 system of linear equations. The result is combined with a previously published scheme to produce a better curve-fitting method.
NASA Astrophysics Data System (ADS)
Rahim, K. J.; Cumming, B. F.; Hallett, D. J.; Thomson, D. J.
2007-12-01
An accurate assessment of historical local Holocene data is important in making future climate predictions. Holocene climate is often obtained through proxy measures such as diatoms or pollen using radiocarbon dating. Wiggle Match Dating (WMD) uses an iterative least squares approach to tune a core with a large number of 14C dates to the 14C calibration curve. This poster will present a new method of tuning a time series when only a modest number of 14C dates is available. The method presented uses multitaper spectral estimation, specifically a multitaper spectral coherence tuning technique. Holocene climate reconstructions are often based on a simple depth-time fit such as a linear interpolation, splines, or low-order polynomials. Many of these models make use of only a small number of 14C dates, each of which is a point estimate with a significant variance. The technique presented attempts to tune the 14C dates to a reference series, such as tree rings, varves, or the radiocarbon calibration curve. The amount of 14C in the atmosphere is not constant, and a significant source of variance is solar activity. A decrease in solar activity coincides with an increase in cosmogenic isotope production, and an increase in cosmogenic isotope production coincides with a decrease in temperature. The method uses multitaper coherence estimates and adjusts the phase of the time series to line up significant line components with those of the reference series, in an attempt to obtain a better depth-time fit than the original model. Given recent concerns and demonstrations of the variation in estimated dates from radiocarbon labs, methods to confirm and tune the depth-time fit can aid climate reconstructions by improving, and serving to confirm, the accuracy of the underlying depth-time fit. Climate reconstructions can then be made on the improved depth-time fit. This poster presents a run-through of this process using Chauvin Lake in the Canadian prairies and Mt. Barr Cirque Lake in British Columbia as examples.
2014-01-01
Background: Various computer-based methods exist for the detection and quantification of protein spots in two-dimensional gel electrophoresis images. Area-based methods are commonly used for spot quantification: an area is assigned to each spot, and the sum of the pixel intensities in that area, the so-called volume, is used as a measure of spot signal. Other methods use the optical density, i.e. the intensity of the most intense pixel of a spot, or calculate the volume from the parameters of a fitted function. Results: In this study we compare the performance of different spot quantification methods using synthetic and real data. We propose a ready-to-use algorithm for spot detection and quantification that uses fitting of two-dimensional Gaussian function curves for the extraction of data from two-dimensional gel electrophoresis (2-DE) images. The algorithm implements fitting using logical compounds and is computationally efficient. The applicability of the compound fitting algorithm was evaluated for various simulated data and compared with other quantification approaches. We provide evidence that even if an incorrect bell-shaped function is used, the fitting method is superior to other approaches, especially when spots overlap. Finally, we validated the method with experimental data of urea-based 2-DE of Aβ peptides and re-analyzed published data sets. Our methods showed higher precision and accuracy than other approaches when applied to exposure time series and standard gels. Conclusion: Compound fitting as a quantification method for 2-DE spots shows several advantages over other approaches and can be combined with various spot detection methods. The algorithm was scripted in MATLAB (Mathworks) and is available as a supplemental file. PMID:24915860
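A compact sketch of spot quantification by 2D Gaussian fitting on a synthetic gel patch. The paper's compound (logical) fitting is not reproduced here; only the core function fit is shown, with the spot volume computed from the fitted parameters rather than from summed pixel intensities.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    """Elliptical 2D Gaussian evaluated on a coordinate grid, flattened."""
    x, y = coords
    return (offset + amp * np.exp(-((x - x0)**2 / (2*sx**2)
                                    + (y - y0)**2 / (2*sy**2)))).ravel()

# Synthetic gel spot on a 40x40 pixel patch with background and noise.
rng = np.random.default_rng(5)
x, y = np.meshgrid(np.arange(40), np.arange(40))
img = gauss2d((x, y), 250.0, 18.0, 22.0, 3.5, 5.0, 40.0).reshape(40, 40)
img += rng.normal(0, 5, img.shape)

p0 = [img.max() - img.min(), 20, 20, 4, 4, img.min()]
popt, _ = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
amp, x0, y0, sx, sy, offset = popt

# Spot "volume" as the integral of the fitted Gaussian above background.
volume = 2 * np.pi * amp * abs(sx) * abs(sy)
print(f"center ({x0:.1f}, {y0:.1f}), volume {volume:.0f} counts")
```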
Isotonic Regression Based-Method in Quantitative High-Throughput Screenings for Genotoxicity
Fujii, Yosuke; Narita, Takeo; Tice, Raymond Richard; Takeda, Shunich
2015-01-01
Quantitative high-throughput screenings (qHTSs) for genotoxicity are conducted as part of comprehensive toxicology screening projects. The most widely used method is to compare the dose-response data of a wild-type and DNA repair gene knockout mutants, using model fitting to the Hill equation (HE). However, this method performs poorly when the observed viability does not fit the equation well, as frequently happens in qHTS. More capable methods must be developed for qHTS, where large data variations are unavoidable. In this study, we applied an isotonic regression (IR) method and compared its performance with HE under multiple data conditions. When dose-response data were suitable for drawing HE curves with upper and lower asymptotes and experimental random errors were small, HE was better than IR, but when random errors were large, there was no difference between HE and IR. However, when the drawn curves did not have two asymptotes, IR showed better performance (p < 0.05, exact paired Wilcoxon test) with higher specificity (65% in HE vs. 96% in IR). In summary, IR performed similarly to HE when dose-response data were optimal, whereas IR clearly performed better in suboptimal conditions. These findings indicate that IR would be useful in qHTS for comparing dose-response data. PMID:26673567
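The comparison described above can be sketched as follows: a Hill-equation fit and an isotonic (monotone) regression applied to the same synthetic qHTS viability data. sklearn's IsotonicRegression is used as an assumed stand-in for the authors' implementation, and the data deliberately lack a clean lower asymptote, the situation where HE fitting struggles.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.isotonic import IsotonicRegression

def hill(x, bottom, top, ec50, n):
    """Hill equation for a dose-response (viability) curve."""
    return bottom + (top - bottom) / (1 + (x / ec50)**n)

# Synthetic qHTS viability data: monotone decline without a reached
# lower asymptote, plus sizable random error.
rng = np.random.default_rng(6)
dose = np.logspace(-2, 2, 15)
y = 100 / (1 + (dose / 50.0)**0.8) + rng.normal(0, 8, dose.size)

# Hill-equation (HE) fit; may be unstable when asymptotes are absent.
popt, _ = curve_fit(hill, dose, y, p0=[0, 100, 10, 1], maxfev=20000)

# Isotonic regression (IR): only assumes viability is non-increasing with dose.
iso = IsotonicRegression(increasing=False)
y_iso = iso.fit_transform(np.log10(dose), y)

print("HE EC50 estimate:", round(float(popt[2]), 1))
print("IR fit at top dose:", round(float(y_iso[-1]), 1))
```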
Zou, Kelly H; Resnic, Frederic S; Talos, Ion-Florin; Goldberg-Zimring, Daniel; Bhagwat, Jui G; Haker, Steven J; Kikinis, Ron; Jolesz, Ferenc A; Ohno-Machado, Lucila
2005-10-01
Medical classification accuracy studies often yield continuous data based on predictive models for treatment outcomes. A popular method for evaluating the performance of diagnostic tests is receiver operating characteristic (ROC) curve analysis. The main objective was to develop a global statistical hypothesis test for assessing the goodness-of-fit (GOF) of parametric ROC curves via the bootstrap. A simple log (or logit) transformation and a more flexible Box-Cox normality transformation were applied to untransformed or transformed data from two clinical studies, to predict complications following percutaneous coronary interventions (PCIs) and image-guided neurosurgical resection results predicted by tumor volume, respectively. We compared a non-parametric with a parametric binormal estimate of the underlying ROC curve. To construct such a GOF test, we used the non-parametric and parametric areas under the curve (AUCs) as the metrics, with a resulting p value reported. In the interventional cardiology example, logit and Box-Cox transformations of the predictive probabilities led to satisfactory AUCs (AUC=0.888; p=0.78, and AUC=0.888; p=0.73, respectively), while in the brain tumor resection example, log and Box-Cox transformations of the tumor size also led to satisfactory AUCs (AUC=0.898; p=0.61, and AUC=0.899; p=0.42, respectively). In contrast, significant departures from GOF were observed without applying any transformation prior to assuming a binormal model (AUC=0.766; p=0.004, and AUC=0.831; p=0.03, respectively). In both studies the p values suggested that transformations are important to consider before applying any binormal model to estimate the AUC. Our analyses also demonstrated and confirmed the predictive values of different classifiers for determining interventional complications following PCIs and resection outcomes in image-guided neurosurgery.
Adsorption of trichloroethylene and benzene vapors onto hypercrosslinked polymeric resin.
Liu, Peng; Long, Chao; Li, Qifen; Qian, Hongming; Li, Aimin; Zhang, Quanxing
2009-07-15
In this research, the adsorption equilibria of trichloroethylene (TCE) and benzene vapors onto a hypercrosslinked polymeric resin (NDA-201) were investigated by the column adsorption method in the temperature range from 303 to 333 K, at pressures up to 8 kPa for TCE and 12 kPa for benzene. The Toth and Dubinin-Astakhov (D-A) equations were tested to correlate the experimental isotherms, and both were found to fit the experimental data well. The good fits and the characteristic curves of the D-A equation provided evidence that a pore-filling phenomenon was involved during the adsorption of TCE and benzene onto NDA-201. Moreover, thermodynamic properties such as the Henry's constant and the isosteric enthalpy of adsorption were calculated. The isosteric enthalpy curves varied with surface loading for each adsorbate, indicating that the hypercrosslinked polymeric resin has an energetically heterogeneous surface. In addition, a simple mathematical model developed by Yoon and Nelson was applied to investigate the breakthrough behavior on a hypercrosslinked polymeric resin column at 303 K, and the calculated breakthrough curves were in close agreement with the corresponding experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angelini, G.; Lanza, E.; Rozza Dionigi, A.
1983-05-01
The measurement of cerebral blood flow (CBF) by extracranial detection of the radioactivity of ¹³³Xe injected into an internal carotid artery has proved to be of considerable value for the investigation of cerebral circulation in conscious rabbits. Methods are described for calculating CBF from the ¹³³Xe clearance curves, including exponential analysis (two-component model), initial slope, and the stochastic method. The different methods of curve analysis were compared to evaluate their agreement with the theoretical model. The initial slope and stochastic methods, compared with the biexponential model, underestimate the CBF by 35% and 46%, respectively. Furthermore, the validity of recording the clearance curve for 10 min was tested by comparing these CBF values with those obtained from the whole curve. CBF values calculated with the shortened procedure are overestimated by 17%. A correlation exists between the "10 min" CBF values and the CBF calculated from the whole curve; in spite of that, the values are not accurate for limited animal populations or for single animals. The extent of the two main compartments into which the CBF is divided was also measured. There is no correlation between CBF values and the extent of the relative compartment. This fact suggests that these two parameters correspond to different biological entities.
SU-F-T-477: Investigation of DEFGEL Dosimetry Using MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matrosic, C; McMillan, A; Bednarz, B
Purpose: The DEFGEL dosimeter/phantom allows for the measurement of 3D dose distributions while maintaining tissue equivalence and deformability. Although DEFGEL is traditionally read out with optical CT, the use of MRI would permit the measurement of 3D dose distributions in optically interfering configurations, such as while embedded in a phantom. To the knowledge of the authors, this work is the first investigation that uses MRI to measure dose distributions in DEFGEL dosimeters. Methods: The DEFGEL (6%T) formula was used to create 1 cm thick, 4.5 cm diameter cylindrical dosimeters. The dosimeters were irradiated using a Varian Clinac 21EX linac. The MRI-based transverse relaxation rate (R2) of the gel was measured in a central slice of the dosimeter with a Spin-Echo (SE) pulse sequence on a 3T GE SIGNA PET/MR scanner. The R2 values were fit to a monoexponential dose response equation using in-house software (MATLAB). Results: The data were well fit by a monoexponential model for R2 as a function of absorbed dose (R² = 0.9997). The fitting parameters of the monoexponential fit resulted in a 0.1229 Gy⁻¹s⁻¹ slope. The data also resulted in an average standard deviation of 1.8% for the R2 values within the evaluated ROI. Conclusion: The close fit for the dose response curve shows that a DEFGEL-based dosimeter can be paired with a SE MRI acquisition. The Type A uncertainty of the MRI method shows adequate precision, while the slope of the fit curve is large enough that R2 differences between different gel doses are distinguishable. These results suggest that the gel could potentially be used in configurations where an optical readout is not viable, such as measurements with the gel dosimeter positioned inside larger or optically opaque phantoms. This work is partially funded by NIH grant R01CA190298.
Dust in the Small Magellanic Cloud
NASA Technical Reports Server (NTRS)
Rodrigues, C. V.; Coyne, G. V.; Magalhaes, A. M.
1995-01-01
We discuss simultaneous dust model fits to our extinction and polarization data for the Small Magellanic Cloud (SMC) using existing dust models. Dust model fits to the wavelength dependent polarization are possible for stars with small lambda(sub max). They generally imply size distributions which are narrower and have smaller average sizes compared to those in the Galaxy. The best fits for the extinction curves are obtained with a power law size distribution. The typical, monotonic SMC extinction curve can be well fit with graphite and silicate grains if a small fraction of the SMC carbon is locked up in the grains. Amorphous carbon and silicate grains also fit the data well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saunders, C.; Aldering, G.; Aragon, C.
2015-02-10
We estimate systematic errors due to K-corrections in standard photometric analyses of high-redshift Type Ia supernovae. Errors due to K-correction occur when the spectral template model underlying the light curve fitter poorly represents the actual supernova spectral energy distribution, meaning that the distance modulus cannot be recovered accurately. In order to quantify this effect, synthetic photometry is performed on artificially redshifted spectrophotometric data from 119 low-redshift supernovae from the Nearby Supernova Factory, and the resulting light curves are fit with a conventional light curve fitter. We measure the variation in the standardized magnitude that would be fit for a given supernova if located at a range of redshifts and observed with various filter sets corresponding to current and future supernova surveys. We find significant variation in the measurements of the same supernovae placed at different redshifts regardless of filters used, which causes dispersion greater than ∼0.05 mag for measurements of photometry using the Sloan-like filters and a bias that corresponds to a 0.03 shift in w when applied to an outside data set. To test the result of a shift in supernova population or environment at higher redshifts, we repeat our calculations with the addition of a reweighting of the supernovae as a function of redshift and find that this strongly affects the results and would have repercussions for cosmology. We discuss possible methods to reduce the contribution of the K-correction bias and uncertainty.
NASA Astrophysics Data System (ADS)
Yuste, S. B.; Abad, E.; Baumgaertner, A.
2016-07-01
We address the problem of diffusion on a comb whose teeth display varying lengths. Specifically, the length ℓ of each tooth is drawn from a probability distribution displaying power law behavior at large ℓ, P(ℓ) ~ ℓ^(-(1+α)) with α > 0. To start with, we focus on the computation of the anomalous diffusion coefficient for the subdiffusive motion along the backbone. This quantity is subsequently used as an input to compute concentration recovery curves mimicking fluorescence recovery after photobleaching experiments in comblike geometries such as spiny dendrites. Our method is based on the mean-field description provided by the well-tested continuous time random-walk approach for the random-comb model, and the obtained analytical result for the diffusion coefficient is confirmed by numerical simulations of a random walk with finite steps in time and space along the backbone and the teeth. We subsequently incorporate retardation effects arising from binding-unbinding kinetics into our model and obtain a scaling law characterizing the corresponding change in the diffusion coefficient. Finally, we show that recovery curves obtained with the help of the analytical expression for the anomalous diffusion coefficient cannot be fitted perfectly by a model based on scaled Brownian motion, i.e., a standard diffusion equation with a time-dependent diffusion coefficient. However, differences between the exact curves and such fits are small, thereby providing justification for the practical use of models relying on scaled Brownian motion as a fitting procedure for recovery curves arising from particle diffusion in comblike systems.
Milky Way Kinematics. II. A Uniform Inner Galaxy H I Terminal Velocity Curve
NASA Astrophysics Data System (ADS)
McClure-Griffiths, N. M.; Dickey, John M.
2016-11-01
Using atomic hydrogen (H I) data from the VLA Galactic Plane Survey, we measure the H I terminal velocity as a function of longitude for the first quadrant of the Milky Way. We use these data, together with our previous work on the fourth Galactic quadrant, to produce a densely sampled, uniformly measured rotation curve of the northern and southern Milky Way between 3 kpc < R < 8 kpc. We determine a new joint rotation curve fit for the first and fourth quadrants, which is consistent with the fit we published in McClure-Griffiths & Dickey and can be used for estimating kinematic distances interior to the solar circle. Structure in the rotation curves is now exquisitely well defined, showing significant velocity structure on lengths of ~200 pc, which is much greater than the spatial resolution of the rotation curve. Furthermore, the shape of the rotation curves for the first and fourth quadrants, even after subtraction of a circular rotation fit, shows a surprising degree of correlation with a roughly sinusoidal pattern between 4.2 kpc < R < 7 kpc.
ERIC Educational Resources Information Center
Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.
1994-01-01
Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)
Evaluation of Statistical Methods for Modeling Historical Resource Production and Forecasting
NASA Astrophysics Data System (ADS)
Nanzad, Bolorchimeg
This master's thesis project consists of two parts. Part I of the project compares modeling of historical resource production and forecasting of future production trends using the logit/probit transform advocated by Rutledge (2011) with conventional Hubbert curve fitting, using global coal production as a case study. The conventional Hubbert/Gaussian method fits a curve to historical production data, whereas a logit/probit transform uses a linear fit to a subset of transformed production data. Within the errors and limitations inherent in this type of statistical modeling, these methods provide comparable results. That is, despite the apparent goodness-of-fit achievable using the logit/probit methodology, neither approach provides a significant advantage over the other in either explaining the observed data or in making future projections. For mature production regions, those that have already substantially passed peak production, results obtained by either method are closely comparable and reasonable, and estimates of ultimately recoverable resources obtained by either method are consistent with geologically estimated reserves. In contrast, for immature regions, estimates of ultimately recoverable resources generated by either of these alternative methods are unstable and thus need to be used with caution. Although the logit/probit transform generates a high quality of fit to historical production data, this approach provides no new information compared to conventional Gaussian or Hubbert-type models and may have the effect of masking the noise and/or instability in the data and the derived fits. In particular, production forecasts for immature or marginally mature production systems based on either method need to be regarded with considerable caution. Part II of the project investigates the utility of a novel alternative method for multicyclic Hubbert modeling, tentatively termed "cycle-jumping," wherein the overlap of multiple cycles is limited. The model is designed so that each cycle is described by the same three parameters as in the conventional multicyclic Hubbert model, and adjacent cycles are connected by a transition, described as a weighted coaddition of the two neighboring cycles and determined by three parameters: transition year, transition width, and a gamma weighting parameter. The cycle-jumping method provides a superior model compared to the conventional multicyclic Hubbert model and reflects historical production behavior more reasonably and practically, by better modeling the effects of technological transitions and socioeconomic factors that affect historical resource production behavior and by explicitly considering the form of the transitions between production cycles.
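For reference, conventional single-cycle Hubbert fitting reduces to nonlinear least squares on a logistic-derivative curve. The sketch below uses synthetic production data and illustrative parameters; the thesis's logit/probit transform and cycle-jumping extensions are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def hubbert(t, urr, k, tm):
    """Hubbert (logistic-derivative) production curve: urr = ultimately
    recoverable resource, k = steepness, tm = year of peak production."""
    e = np.exp(-k * (t - tm))
    return urr * k * e / (1 + e)**2

# Synthetic "historical production" series for a mature region.
rng = np.random.default_rng(7)
years = np.arange(1900, 2015)
prod = hubbert(years, 250.0, 0.08, 1985.0) * (1 + rng.normal(0, 0.05, years.size))

popt, _ = curve_fit(hubbert, years, prod,
                    p0=[prod.sum() * 2, 0.05, years[np.argmax(prod)]])
urr, k, tm = popt
print(f"URR = {urr:.0f} (resource units), peak year = {tm:.0f}")
```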
Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka
2016-01-01
Previously, we proposed a model for ordinal scale scoring in which the individual thresholds for each item constitute an item-specific distribution. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and the boundary curve increased continuously with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow the predicted mathematical model, which was verified to approximate an exponential pattern.
Fitting milk production curves through nonlinear mixed models.
Piccardi, Monica; Macchiavelli, Raúl; Funes, Ariel Capitaine; Bó, Gabriel A; Balzarini, Mónica
2017-05-01
The aim of this work was to fit and compare three nonlinear models (Wood, MilkBot and diphasic) for lactation curves using two approaches: with and without a cow random effect. Knowing the behaviour of lactation curves is critical for decision-making on a dairy farm. Knowledge of how milk production progresses along each lactation is needed not only at the mean population level (dairy farm), but also at the individual level (cow-lactation). The fits were made for a group of dairy farms with high production and reproduction performance, using first and third lactations from cool seasons. A total of 2167 complete lactations were involved, of which 984 were first lactations and the remainder third lactations (19,382 milk yield tests). PROC NLMIXED in SAS was used to perform the fits and estimate the model parameters. The diphasic model proved computationally complex and of limited practical use. Regarding the classical Wood and MilkBot models, although the information criteria suggest selecting MilkBot, the differences in the estimated production indicators did not show a significant improvement. The Wood model was found to be a good option for fitting the expected value of lactation curves. Furthermore, all three models fitted better when the subject (cow) random effect, which is related to the magnitude of production, was considered. The random effect improved the predictive potential of the models, but it did not have a significant effect on the production indicators derived from the lactation curves, such as milk yield and days in milk to peak.
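A minimal sketch of fitting the Wood model to a single synthetic lactation with nonlinear least squares (no cow random effect), including the standard production indicators derived from the fitted parameters. The data and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood lactation curve: daily milk yield a * t^b * exp(-c t) at day t in milk."""
    return a * t**b * np.exp(-c * t)

# Synthetic monthly test-day records for one lactation (days in milk, kg/day).
rng = np.random.default_rng(8)
dim = np.arange(5, 305, 30)
y = wood(dim, 18.0, 0.25, 0.004) * (1 + rng.normal(0, 0.05, dim.size))

popt, _ = curve_fit(wood, dim, y, p0=[15, 0.2, 0.003], maxfev=10000)
a, b, c = popt

# Production indicators derived from the fitted Wood parameters:
# the peak occurs where the derivative vanishes, i.e. at t = b/c.
t_peak = b / c
peak_yield = wood(t_peak, a, b, c)
total_305 = np.trapz(wood(np.arange(1, 306), a, b, c), dx=1.0)
print(f"peak at day {t_peak:.0f}, {peak_yield:.1f} kg/day; 305-d yield {total_305:.0f} kg")
```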
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, C; Jiang, R; Chow, J
2015-06-15
Purpose: We developed a method to predict the change of DVH for the PTV due to interfraction organ motion in prostate VMAT without repeating the CT scan and treatment planning. The method is based on a pre-calculated patient database with DVH curves of the PTV modelled by the Gaussian error function (GEF). Methods: For a group of 30 patients with different prostate sizes, their VMAT plans were recalculated by shifting their PTVs 1 cm, in 10 increments, along the anterior-posterior, left-right and superior-inferior directions. The DVH curve of the PTV in each replan was then fitted by the GEF to determine the parameters describing the shape of the curve. Information on how these parameters vary with the DVH change due to prostate motion, for different prostate sizes, was analyzed and stored in a database by a program written in MATLAB. Results: To predict a new DVH for the PTV due to prostate interfraction motion, the prostate size and the shift distance and direction are input to the program. Parameters modelling the DVH for the PTV are determined from the pre-calculated patient dataset. From the new parameters, DVH curves of PTVs with and without the prostate motion are plotted for comparison. The program was verified with different prostate cases involving interfraction prostate shifts and replans. Conclusion: The variation of the DVH for the PTV in prostate VMAT can be predicted using a pre-calculated patient database with DVH curve fitting. The computing time is fast because CT rescan and replan are not required. This quick DVH estimation can help radiation staff determine whether the changed PTV coverage due to prostate shift is tolerable in the treatment. However, it should be noted that the program can only consider prostate interfraction motions along three axes, and is restricted to prostate VMAT plans using the same plan script in the treatment planning system.
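The GEF parameterization of a cumulative DVH might be fitted as in the sketch below, where a PTV DVH is modeled with the Gaussian error function and the shape parameters (a D50-like dose and a falloff width) are recovered. The functional form and all values are assumptions for illustration, not the study's stored database.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def gef_dvh(dose, d50, sigma):
    """Cumulative DVH for a PTV modeled with the Gaussian error function (GEF):
    percentage of the PTV volume receiving at least `dose`."""
    return 50.0 * (1.0 - erf((dose - d50) / (np.sqrt(2.0) * sigma)))

# Hypothetical DVH samples for a shifted PTV (dose in Gy, volume in %), noisy.
rng = np.random.default_rng(9)
dose = np.linspace(60, 80, 41)
vol = gef_dvh(dose, 71.5, 1.2) + rng.normal(0, 0.5, dose.size)

popt, _ = curve_fit(gef_dvh, dose, np.clip(vol, 0.0, 100.0), p0=[70.0, 1.0])
d50, sigma = popt
print(f"fitted D50 = {d50:.2f} Gy, falloff sigma = {sigma:.2f} Gy")
```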
Growth curves for ostriches (Struthio camelus) in a Brazilian population.
Ramos, S B; Caetano, S L; Savegnago, R P; Nunes, B N; Ramos, A A; Munari, D P
2013-01-01
The objective of this study was to fit growth curves using nonlinear and linear functions to describe the growth of ostriches in a Brazilian population. The data set consisted of 112 animals with BW measurements from hatching to 383 d of age. Two nonlinear growth functions (Gompertz and logistic) and a third-order polynomial function were applied. The parameters for the models were estimated using the least-squares method and Gauss-Newton algorithm. The goodness-of-fit of the models was assessed using R2 and the Akaike information criterion. The R2 calculated for the logistic growth model was 0.945 for hens and 0.928 for cockerels and, for the Gompertz growth model, 0.938 for hens and 0.924 for cockerels. The third-order polynomial fit gave R2 of 0.938 for hens and 0.924 for cockerels. Among the Akaike information criterion calculations, the logistic growth model presented the lowest values in this study, both for hens and for cockerels. Nonlinear models are more appropriate for describing the sigmoid nature of ostrich growth.
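To make the model comparison concrete, a minimal sketch follows that fits logistic and Gompertz curves by least squares and compares their AIC values; the body-weight data are synthetic, not the Brazilian ostrich records.

```python
# Least-squares fits of two sigmoid growth functions plus an AIC comparison.
# Data are synthetic; parameter values are arbitrary assumptions.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, A, k, t0):
    return A / (1.0 + np.exp(-k * (t - t0)))

def gompertz(t, A, k, t0):
    return A * np.exp(-np.exp(-k * (t - t0)))

t = np.linspace(0, 383, 50)                          # age, days
rng = np.random.default_rng(2)
bw = gompertz(t, 100.0, 0.015, 120.0) + rng.normal(0, 2.0, t.size)

def aic(model, p0):
    popt, _ = curve_fit(model, t, bw, p0=p0, maxfev=10000)
    rss = np.sum((bw - model(t, *popt)) ** 2)
    n, k = t.size, len(popt)
    return n * np.log(rss / n) + 2 * k               # least-squares form of AIC

print("logistic AIC:", round(aic(logistic, [100, 0.02, 120]), 1))
print("gompertz AIC:", round(aic(gompertz, [100, 0.02, 120]), 1))
```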
Analysis on the vegetation phenology of tropical seasonal rain forest in South America
NASA Astrophysics Data System (ADS)
Liang, B.; Chen, X.
2016-12-01
Using Global Land Surface Satellite (GLASS) LAI data from 1982 to 2003, we analyzed spatial and temporal variations of vegetation phenology in the tropical seasonal rain forest of South America. Several methods were used to fit seasonal LAI curves and extract the start (SOS) and end (EOS) of the growing season. The results show that the Fourier function fits the LAI curves most effectively, with yearly RMSEs between observed and fitted LAI values of less than 0.01. The SOS ranged from day 250 to 350 of the year and occurred earlier in the west than in the east. Conversely, the EOS fell between days 120 and 180 of the year and appeared earlier in the east than in the west. Thus, the growing season was longer in the west than in the east. With regard to linear trends, SOS shows a significant advance at 7% of pixels and a significant delay at 13% of pixels, whereas EOS advanced significantly at 16% of pixels and was delayed significantly at 18% of pixels. Preseason precipitation is the main factor influencing SOS and EOS in the tropical seasonal rain forest of South America.
Gambarota, Giulio; Hitti, Eric; Leporq, Benjamin; Saint-Jalmes, Hervé; Beuf, Olivier
2017-01-01
Tissue perfusion measurements using intravoxel incoherent motion (IVIM) diffusion-MRI are of interest for investigations of liver pathologies. A confounding factor in perfusion quantification is the partial volume between liver tissue and large blood vessels. The aim of this study was to assess and correct for this partial volume effect in the estimation of the perfusion fraction. MRI experiments were performed at 3 Tesla with a diffusion-MRI sequence at 12 b-values. Diffusion signal decays in the liver were analyzed using the non-negative least squares (NNLS) method and the biexponential fitting approach. In some voxels, the NNLS analysis yielded a very fast-decaying component that was assigned to partial volume with the blood flowing in large vessels. Partial volume correction was performed by biexponential curve fitting, where the first data point (b = 0 s/mm2) was eliminated in voxels with a very fast-decaying component. Biexponential fitting with partial volume correction yielded parametric maps with smaller perfusion fraction values than biexponential fitting without partial volume correction. The results of the current study indicate that the NNLS analysis in combination with biexponential curve fitting makes it possible to correct for partial volume effects originating from blood flow in IVIM perfusion fraction measurements. Magn Reson Med 77:310-317, 2017. © 2016 Wiley Periodicals, Inc.
Yun, Sungdae; Kyriakos, Walid E; Chung, Jun-Young; Han, Yeji; Yoo, Seung-Schik; Park, Hyunwook
2007-03-01
To develop a novel approach for calculating accurate sensitivity profiles of phased-array coils, resulting in correction of nonuniform intensity in parallel MRI. The proposed intensity-correction method estimates the sensitivity profile of each channel of the phased-array coil. The sensitivity profile is estimated by fitting a nonlinear curve to every projection view through the imaged object. The nonlinear curve fitting efficiently recovers the low-frequency sensitivity profile by eliminating the high-frequency image content. Filtered back-projection (FBP) is then used to compute the estimate of the sensitivity profile of each channel. The method was applied to both phantom and brain images acquired with the phased-array coil. Intensity-corrected images from the proposed method had more uniform intensity than those obtained by the commonly used sum-of-squares (SOS) approach. With the proposed correction method, the intensity variation was reduced to 6.1% from the 13.1% of the SOS. When the proposed approach was applied to the computation of the sensitivity maps during sensitivity encoding (SENSE) reconstruction, it outperformed the SOS approach in terms of reconstructed image uniformity. The proposed method is more effective at correcting the intensity nonuniformity of phased-array surface-coil images than the conventional SOS method. In addition, the method was shown to be resilient to noise and was successfully applied to image reconstruction in parallel imaging.
Díaz Alonso, Fernando; González Ferradás, Enrique; Sánchez Pérez, Juan Francisco; Miñana Aznar, Agustín; Ruiz Gimeno, José; Martínez Alonso, Jesús
2006-09-21
A number of models have been proposed to calculate overpressure and impulse from accidental industrial explosions. When the blast is produced by ignition of a vapour cloud, the TNO Multi-Energy model is widely used. From the curves given by this model, data are fitted to obtain equations relating overpressure, impulse and distance. These equations, referred to herein as characteristic curves, can be fitted by means of power equations, which depend on the explosion energy and charge strength. Characteristic curves allow the determination of overpressure and impulse at each distance.
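Since the abstract states that the characteristic curves can be fitted by power equations, a minimal sketch of such a fit follows; the sample overpressure-distance points and starting values are invented for illustration, not values read off the TNO Multi-Energy charts.

```python
# Fitting a characteristic curve as a power law, overpressure = a * r**b.
# The (r, dp) samples below are mock values, not TNO chart data.
import numpy as np
from scipy.optimize import curve_fit

def power_curve(r, a, b):
    return a * r**b                                  # overpressure vs distance

r = np.array([1.0, 2.0, 5.0, 10.0, 20.0])            # scaled distance (-)
dp = np.array([100.0, 38.0, 9.5, 3.4, 1.2])          # peak overpressure (kPa)

popt, _ = curve_fit(power_curve, r, dp, p0=[100.0, -1.5])
print("a = %.1f, b = %.2f" % tuple(popt))
```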
Merlos Rodrigo, Miguel Angel; Molina-López, Jorge; Jimenez Jimenez, Ana Maria; Planells Del Pozo, Elena; Adam, Pavlina; Eckschlager, Tomas; Zitka, Ondrej; Richtera, Lukas; Adam, Vojtech
2017-01-01
The translation of metallothioneins (MTs) is one of the defense strategies by which organisms protect themselves from metal-induced toxicity. MTs belong to a family of proteins comprising the MT-1, MT-2, MT-3, and MT-4 classes, with multiple isoforms within each class. The main aim of this study was to determine the behavior of MTs in dependence on various externally modelled environments, using electrochemistry. In our study, the mass distribution of MTs was characterized using MALDI-TOF. After that, the adsorptive transfer stripping technique with differential pulse voltammetry was selected to optimize the electrochemical detection of MTs with regard to accumulation time and pH effects. Our results show that using 0.5 M NaCl, pH 6.4, as the supporting electrolyte provides a highly complicated fingerprint, showing a number of unresolved voltammograms. Hence, we further resolved the voltammograms exhibiting broad and overlapping signals using curve fitting. The separated signals were assigned to the electrochemical responses of several MT complexes with zinc(II), cadmium(II), and copper(II), respectively. Our results show that electrochemistry could serve as a great tool for metalloproteomic applications to determine the ratio of metal ion bonds within the target protein structure; however, it provides highly complicated signals, which require further resolution by a proper statistical method, such as curve fitting. PMID:28287470
Sasaki, Miho; Sumi, Misa; Eida, Sato; Katayama, Ikuo; Hotokezaka, Yuka; Nakamura, Takashi
2014-01-01
Intravoxel incoherent motion (IVIM) imaging can characterize diffusion and perfusion of normal and diseased tissues, and IVIM parameters are conventionally determined using a cumbersome least-squares method. We evaluated a simple technique for the determination of IVIM parameters using geometric analysis of the multiexponential signal decay curve as an alternative to the least-squares method for the diagnosis of head and neck tumors. Pure diffusion coefficients (D), microvascular volume fraction (f), perfusion-related incoherent microcirculation (D*), and a perfusion parameter that is heavily weighted towards extravascular space (P) were determined geometrically (Geo D, Geo f, and Geo P) or by the least-squares method (Fit D, Fit f, and Fit D*) in normal structures and 105 head and neck tumors. The IVIM parameters were compared between the 2 techniques for their levels and diagnostic abilities. The IVIM parameters could not be determined in 14 tumors with the least-squares method alone and in 4 tumors with both the geometric and least-squares methods. The geometric IVIM values were significantly different (p<0.001) from the Fit values (+2±4% and −7±24% for D and f values, respectively). Geo D and Fit D differentiated between lymphomas and SCCs with similar efficacy (78% and 80% accuracy, respectively). Stepwise approaches using combinations of Geo D and Geo P, Geo D and Geo f, or Fit D and Fit D* differentiated between pleomorphic adenomas, Warthin tumors, and malignant salivary gland tumors with the same efficacy (91% accuracy = 21/23). However, a stepwise differentiation using Fit D and Fit f was less effective (83% accuracy = 19/23). Considering the cumbersome procedures required by the least-squares method compared with the geometric method, we concluded that geometric determination of IVIM parameters can be an alternative to the least-squares method in the diagnosis of head and neck tumors. PMID:25402436
Revision of the Phenomenological Characteristics of the Algol-Type Stars Using the Nav Algorithm
NASA Astrophysics Data System (ADS)
Tkachenko, M. G.; Andronov, I. L.; Chinarova, L. L.
Phenomenological characteristics of a sample of Algol-type stars are revised using the recently developed NAV ("New Algol Variable") algorithm (2012Ap.....55..536A, 2012arXiv 1212.6707A) and compared to those obtained using the common methods of Trigonometric Polynomial (TP) fit or local Algebraic Polynomial (A) fit of a fixed or (alternately) statistically optimal degree (1994OAP.....7...49A, 2003ASPC..292..391A). The computer program NAV is introduced, which allows one to determine the best fit with 7 "linear" and 5 "nonlinear" parameters and their error estimates. The number of parameters is much smaller than for the TP fit (typically 20-40, depending on the width of the eclipse, and 5-20 for the W UMa and β Lyrae-type stars). This yields a smoother approximation that takes into account the reflection and ellipsoidal effects (TP2) and the generally different shapes of the primary and secondary eclipses. An application of the method to two-color CCD photometry of the recently discovered eclipsing variable 2MASS J18024395+4003309 = VSX J180243.9+400331 (2015JASS...32..101A) allowed estimates of the physical parameters of the binary system based on the phenomenological parameters of the light curve. The phenomenological parameters of the light curves were also determined for a sample of newly discovered EA and EW-type stars (VSX J223429.3+552903, VSX J223421.4+553013, VSX J223416.2+553424, USNO-B1.0 1347-0483658, UCAC3-191-085589, VSX J180755.6+074711 = UCAC3 196-166827). Although we used the original observations published by the discoverers, the period estimates obtained using the NAV method are typically more accurate than the original ones.
NASA Astrophysics Data System (ADS)
Mavroidis, Panayiotis; Lind, Bengt K.; Theodorou, Kyriaki; Laurell, Göran; Fernberg, Jan-Olof; Lefkopoulos, Dimitrios; Kappas, Constantin; Brahme, Anders
2004-08-01
The purpose of this work is to provide some statistical methods for evaluating the predictive strength of radiobiological models and the validity of dose-response parameters for tumour control and normal tissue complications. This is accomplished by associating the expected complication rates, which are calculated using different models, with the clinical follow-up records. These methods are applied to 77 patients who received radiation treatment for head and neck cancer and 85 patients who were treated for arteriovenous malformation (AVM). The three-dimensional dose distribution delivered to the esophagus and the AVM nidus and the clinical follow-up results were available for each patient. Dose-response parameters derived by a maximum likelihood fitting were used as a reference to evaluate their compatibility with the examined treatment methodologies. The impact of the parameter uncertainties on the dose-response curves is demonstrated. The clinical utilization of the radiobiological parameters is illustrated. The radiobiological models (relative seriality and linear Poisson) and the reference parameters are validated to prove their suitability in reproducing the treatment outcome pattern of the patient material studied (through the probability of finding a worse fit, the area under the ROC curve and the χ2 test). The analysis was carried out for the upper 5 cm of the esophagus (proximal esophagus), where all the strictures are formed, and the total volume of the AVM. The estimated confidence intervals of the dose-response curves appear to have a significant supporting role in their clinical implementation and use.
Promoting convergence: The Phi spiral in abduction of mouse corneal behaviors
Rhee, Jerry; Nejad, Talisa Mohammad; Comets, Olivier; Flannery, Sean; Gulsoy, Eine Begum; Iannaccone, Philip; Foster, Craig
2015-01-01
Why do mouse corneal epithelial cells display spiraling patterns? We want to provide an explanation for this curious phenomenon by applying an idealized problem solving process. Specifically, we applied complementary line-fitting methods to measure transgenic epithelial reporter expression arrangements displayed on three mature, live enucleated globes to clarify the problem. Two prominent logarithmic curves were discovered, one of which displayed the ϕ ratio, an indicator of an optimal configuration in phyllotactic systems. We then utilized two different computational approaches to expose our current understanding of the behavior. In one procedure, which involved an isotropic mechanics-based finite element method, we successfully produced logarithmic spiral curves of maximum shear strain based pathlines but computed dimensions displayed pitch angles of 35° (ϕ spiral is ∼17°), which was altered when we fitted the model with published measurements of coarse collagen orientations. We then used model-based reasoning in context of Peircean abduction to select a working hypothesis. Our work serves as a concise example of applying a scientific habit of mind and illustrates nuances of executing a common method to doing integrative science. © 2014 Wiley Periodicals, Inc. Complexity 20: 22–38, 2015 PMID:25755620
Sub-band denoising and spline curve fitting method for hemodynamic measurement in perfusion MRI
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Huang, Hsiao-Ling; Hsu, Yuan-Yu; Chen, Chi-Chen; Chen, Ing-Yi; Wu, Liang-Chi; Liu, Ren-Shyan; Lin, Kang-Ping
2003-05-01
In clinical research, non-invasive MR perfusion imaging is capable of investigating brain perfusion phenomena via various hemodynamic measurements, such as cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT). These hemodynamic parameters are useful in diagnosing brain disorders such as stroke, infarction and peri-infarct ischemia by further semi-quantitative analysis. However, the accuracy of quantitative analysis is usually degraded by poor signal-to-noise image quality. In this paper, we propose a hemodynamic measurement method based upon sub-band denoising and spline curve fitting processes to improve image quality and thereby obtain better hemodynamic quantitative analysis results. Ten sets of perfusion MRI data and corresponding PET images were used to validate the performance. For quantitative comparison, we evaluated the gray/white matter CBF ratio. The resulting mean gray-to-white matter CBF ratio from the semi-quantitative analysis was 2.10 +/- 0.34, and the ratio evaluated from perfusion MRI is comparable to that from the PET technique, with less than 1% difference on average. Furthermore, the method features excellent noise reduction and boundary preservation in image processing, and a short hemodynamic measurement time.
Non-linear Growth Models in Mplus and SAS
Grimm, Kevin J.; Ram, Nilam
2013-01-01
Non-linear growth curves or growth curves that follow a specified non-linear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
NASA Astrophysics Data System (ADS)
Khondok, Piyoros; Sakulkalavek, Aparporn; Suwansukho, Kajpanya
2018-03-01
A simplified and powerful image processing procedure to separate the paddy of KHAW DOK MALI 105, or Thai jasmine rice, from the paddy of the sticky rice variety RD6 is proposed. The procedure consists of image thresholding, image chain coding and curve fitting using a polynomial function. From the fitting, three parameters of each variety (perimeter, area, and eccentricity) were calculated. Finally, the overall parameters were determined using principal component analysis. The results show that these procedures can effectively separate the two varieties.
Four New Binary Stars in the Field of CL Aurigae. II
NASA Astrophysics Data System (ADS)
Kim, Chun-Hwey; Lee, Jae Woo; Duck, Hyun Kim; Andronov, Ivan L.
2010-12-01
We report the discovery of four new variable stars (USNO-B1.0 1234-0103195, 1235-0097170, 1236-0100293 and 1236-0100092) in the field of CL Aur. The first three stars are classified as eclipsing binaries with orbital periods of 0.5137413(23) days (EW type), 0.8698365(26) days (EA) and 4.0055842(40) days (EA with a significant orbital eccentricity), respectively. The fourth star (USNO-B1.0 1236-0100092) showed only one partial ascending branch of the light curve, although 22 nights were covered with the 61-cm telescope at the Sobaeksan Optical Astronomy Observatory (SOAO) in Korea. Fourteen minima timings for these stars are published separately. In addition to the original discovery paper (Kim et al. 2010), we discuss methodological problems and present results of mathematical modeling of the light curves using other methods, i.e. trigonometric polynomial fits and the newly developed "NAV" ("New Algol Variable") fit.
Emissivity independent optical pyrometer
Earl, Dennis Duncan; Kisner, Roger A.
2017-04-04
Disclosed herein are representative embodiments of methods, apparatus, and systems for determining the temperature of an object using an optical pyrometer. Certain embodiments of the disclosed technology allow for making optical temperature measurements that are independent of the surface emissivity of the object being sensed. In one of the exemplary embodiments disclosed herein, a plurality of spectral radiance measurements at a plurality of wavelengths is received from a surface of an object being measured. The plurality of the spectral radiance measurements is fit to a scaled version of a black body curve, the fitting comprising determining a temperature of the scaled version of the black body curve. The temperature is then output. The present disclosure is not to be construed as limiting and is instead directed toward all novel and nonobvious features and aspects of the various disclosed embodiments, alone or in various combinations and subcombinations with one another.
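As a rough illustration of the fitting step described in the patent, the sketch below fits the scale factor and temperature of a scaled Planck (black body) curve to synthetic multi-wavelength radiance samples; the wavelength band and the emissivity-like scale value are arbitrary assumptions, not details from the disclosure.

```python
# Grey-body fit: scale * Planck spectral radiance, two free parameters.
# Constants are SI; the "measured" radiances are synthetic.
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def scaled_planck(lam, scale, T):
    """Spectral radiance of a grey body: scale * Planck curve at temperature T."""
    return scale * (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

lam = np.linspace(0.8e-6, 2.5e-6, 12)                # wavelengths (m)
meas = scaled_planck(lam, 0.6, 1200.0)               # mock data, emissivity ~0.6

popt, _ = curve_fit(scaled_planck, lam, meas, p0=[1.0, 1000.0])
print("fitted scale = %.2f, T = %.0f K" % tuple(popt))
```

Because the scale factor absorbs the unknown emissivity, the recovered temperature is independent of it, which is the point of the disclosed technique.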
NASA Astrophysics Data System (ADS)
Medjoubi, K.; Dawiec, A.
2017-12-01
A simple method is proposed in this work for quantitative evaluation of the quality of the threshold adjustment and the flat-field correction of Hybrid Photon Counting (HPC) pixel detectors. The approach is based on the Photon Transfer Curve (PTC), i.e. the measurement of the standard deviation of the signal in flat-field images. Fixed pattern noise (FPN), easily identifiable in the curve, is linked to the residual threshold dispersion, sensor inhomogeneity and the remnant errors of the flat-fielding technique. The analytical expression of the signal-to-noise ratio curve is developed for HPC detectors and successfully used as a fit function applied to experimental data obtained with the XPAD detector. The quantitative evaluation of the FPN, described by the photon response non-uniformity (PRNU), is measured for different configurations (threshold adjustment method and flat-fielding technique) and is shown to be useful for selecting the settings that yield the best image quality from a commercial or R&D detector.
76 FR 9696 - Equipment Price Forecasting in Energy Conservation Standards Analysis
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-22
... for particular efficiency design options, an empirical experience curve fit to the available data may be used to forecast future costs of such design option technologies. If a statistical evaluation indicates a low level of confidence in estimates of the design option cost trend, this method should not be...
Self-referencing Taper Curves for Loblolly Pine
Mike Strub; Chris Cieszewski; David Hyink
2005-01-01
We compare the traditional fitting of relative diameter over relative height with methods based on self-referencing functions and stochastic parameter estimation, using data collected by the Virginia Polytechnic Institute and State University Growth and Yield Cooperative. Two sets of self-referencing equations assume known diameter at 4.5 feet inside bark (dib) and outside (...
ERIC Educational Resources Information Center
Krausert, Christopher R.; Ying, Di; Zhang, Yu; Jiang, Jack J.
2011-01-01
Purpose: Digital kymography and vocal fold curve fitting are blended with detailed symmetry analysis of kymograms to provide a comprehensive characterization of the vibratory properties of injured vocal folds. Method: Vocal fold vibration of 12 excised canine larynges was recorded under uninjured, unilaterally injured, and bilaterally injured…
Measuring Systematic Error with Curve Fits
ERIC Educational Resources Information Center
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
JMOSFET: A MOSFET parameter extractor with geometry-dependent terms
NASA Technical Reports Server (NTRS)
Buehler, M. G.; Moore, B. T.
1985-01-01
The parameters of the metal-oxide-silicon field-effect transistors (MOSFETs) included on the Combined Release and Radiation Effects Satellite (CRRES) test chips need to be extracted with a method that is simple but comprehensive enough to be used in wafer acceptance, and sufficiently accurate to be used in integrated circuits. A set of MOSFET parameter extraction procedures is developed that is directly linked to the MOSFET model equations and facilitates the use of simple, direct curve-fitting techniques. In addition, the major physical effects that affect MOSFET operation in the linear and saturation regions of operation for devices fabricated in 1.2 to 3.0 micrometer CMOS technology are included. The fitting procedures were designed to establish single values for such parameters as threshold voltage and transconductance and to provide for slope matching between the linear and saturation regions of the MOSFET output current-voltage curves. Four different sizes of transistors that cover a rectangular region of the channel length-width plane are analyzed.
A smoothing algorithm using cubic spline functions
NASA Technical Reports Server (NTRS)
Smith, R. E., Jr.; Price, J. M.; Howser, L. M.
1974-01-01
Two algorithms are presented for smoothing arbitrary sets of data. They are the explicit variable algorithm and the parametric variable algorithm. The former would be used where large gradients are not encountered because of the smaller amount of calculation required. The latter would be used if the data being smoothed were double valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
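For readers who want to experiment with the idea, here is a hedged sketch of least-squares cubic-spline smoothing using SciPy's LSQUnivariateSpline in place of the original Fortran routines; the data and interior knot ("junction") placement are arbitrary choices, not those of the cited algorithms.

```python
# Least-squares cubic spline with user-chosen interior knots; first and
# second derivatives are continuous by construction of the cubic spline.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

x = np.linspace(0, 10, 200)
y = np.sin(x) + np.random.default_rng(3).normal(0, 0.1, x.size)

knots = np.linspace(1, 9, 7)                         # interior "junction" points
spline = LSQUnivariateSpline(x, y, knots, k=3)       # cubic, least-squares fit

print(spline(5.0), spline.derivative(1)(5.0), spline.derivative(2)(5.0))
```

Moving the knots and refitting mimics the interactive manipulation of junction values described above.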
NASA Astrophysics Data System (ADS)
Hajdu, Gergely; Dékány, István; Catelan, Márcio; Grebel, Eva K.; Jurcsik, Johanna
2018-04-01
RR Lyrae variables are widely used tracers of Galactic halo structure and kinematics, but they can also serve to constrain the distribution of the old stellar population in the Galactic bulge. With the aim of improving their near-infrared photometric characterization, we investigate their near-infrared light curves, as well as the empirical relationships between their light curves and metallicities, using machine learning methods. We introduce a new, robust method for the estimation of the light-curve shapes, and hence the average magnitudes, of RR Lyrae variables in the K_S band, by utilizing the first few principal components (PCs) as basis vectors, obtained from the PC analysis of a training set of light curves. Furthermore, we use the amplitudes of these PCs to predict the light-curve shape of each star in the J band, allowing us to precisely determine their average magnitudes (hence colors), even in cases where only one J measurement is available. Finally, we demonstrate that the K_S-band light-curve parameters of RR Lyrae variables, together with the period, allow the estimation of the metallicity of individual stars with an accuracy of ∼0.2–0.25 dex, providing valuable chemical information about old stellar populations bearing RR Lyrae variables. The methods presented here can be straightforwardly adopted for other classes of variable stars, bands, or for the estimation of other physical quantities.
Piecewise compensation for the nonlinear error of fiber-optic gyroscope scale factor
NASA Astrophysics Data System (ADS)
Zhang, Yonggang; Wu, Xunfeng; Yuan, Shun; Wu, Lei
2013-08-01
Fiber-optic gyroscope (FOG) scale factor nonlinearity results in errors in a strapdown inertial navigation system (SINS). In order to reduce the nonlinear error of the FOG scale factor in SINS, a compensation method is proposed in this paper based on piecewise curve fitting of the FOG output. Firstly, the causes of FOG scale factor error are introduced and the definition of the degree of nonlinearity is provided. We then introduce a method that divides the output range of the FOG into several small pieces, with curve fitting performed in each output range to obtain the scale factor parameters. Different scale factor parameters are used in different pieces to improve the FOG output precision. These parameters are identified using a three-axis turntable, and the nonlinear error of the FOG scale factor can thus be reduced. Finally, a three-axis swing experiment of the SINS verifies that the proposed method reduces the attitude output errors of the SINS by compensating the nonlinear error of the FOG scale factor, improving navigation precision. The experiments also demonstrate that the compensation scheme is easy to implement and effectively compensates the nonlinear error of the FOG scale factor with only slightly increased computational complexity. This method can be used in FOG-based inertial technology to improve precision.
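An illustrative sketch of the piecewise compensation follows: the output range is split into segments and a separate linear output-to-rate fit is stored per segment. The segment count and the synthetic nonlinearity are assumptions for the example, not values from the cited system.

```python
# Piecewise linear compensation of a mock nonlinear scale factor; the output
# is monotonic here, so segments can be selected directly by output value.
import numpy as np

true_rate = np.linspace(-100, 100, 201)              # turntable rate (deg/s)
fog_out = 1.002 * true_rate + 2e-5 * true_rate**2    # mock nonlinear output

edges = np.quantile(fog_out, [0, 0.25, 0.5, 0.75, 1.0])  # 4 output segments
params = []
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (fog_out >= lo) & (fog_out <= hi)
    params.append((lo, hi) + tuple(np.polyfit(fog_out[m], true_rate[m], 1)))

def compensate(out):
    """Map a raw FOG output to rate using the fit of its output segment."""
    for lo, hi, k, b in params:
        if lo <= out <= hi:
            return k * out + b
    lo, hi, k, b = params[0] if out < params[0][0] else params[-1]
    return k * out + b                               # clamp to nearest segment

print(round(compensate(1.002 * 50 + 2e-5 * 50**2), 3))   # ~50 deg/s corrected
```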
Au-Ge MEAM potential fitted to the binary phase diagram
NASA Astrophysics Data System (ADS)
Wang, Yanming; Santana, Adriano; Cai, Wei
2017-02-01
We have developed a modified embedded atom method potential for the gold-germanium (Au-Ge) binary system that is fitted to the experimental binary phase diagram. The phase diagram is obtained from the common tangent construction of the free energy curves calculated by the adiabatic switching method. While maintaining the accuracy of the melting points of pure Au and Ge, this potential reproduces the eutectic temperature, eutectic composition and the solubility of Ge in solid Au, all in good agreement with the experimental values. To demonstrate the self-consistency of the potential, we performed benchmark molecular dynamics simulations of Ge crystal growth and etching in contact with a Au-Ge liquid alloy.
Bancalari, Elena; Bernini, Valentina; Bottari, Benedetta; Neviani, Erasmo; Gatti, Monica
2016-01-01
Impedance microbiology is a method that enables tracing microbial growth by measuring the change in electrical conductivity. Different systems able to perform this measurement are commercially available and are commonly used for food control analysis by means of measuring a point of the impedance curve, termed the "time of detection." With this work we wanted to find an objective way to interpret the metabolic significance of impedance curves and propose it as a valid approach to evaluate the potential acidifying performance of starter lactic acid bacteria to be employed in milk transformation. To do this, we first investigated the possibility of using the Gompertz equation to describe the data from the impedance curve obtained with the BacTrac 4300®. Lag time (λ), maximum specific M% rate (μmax), and maximum value of M% (Yend) were calculated and, given the similarity of the fitted impedance curve to the bacterial growth curve, their meaning was interpreted. The potential acidifying performance of eighty strains belonging to the Lactobacillus helveticus, Lactobacillus delbrueckii subsp. bulgaricus, Lactococcus lactis, and Streptococcus thermophilus species was evaluated using the kinetic parameters obtained from the Excel add-in DMFit version 2.1. The novelty and importance of our findings, obtained by means of the BacTrac 4300®, is that they can also be applied to data obtained from other devices. Moreover, the meaning of λ, μmax, and Yend that we have extrapolated from the modified Gompertz equation and discussed for lactic acid bacteria in milk can also be extended to other food environments or other bacteria, provided that they yield a curve that can be properly fitted with the Gompertz equation. PMID:27799925
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delice, S., E-mail: sdelice@metu.edu.tr; Isik, M.; Gasanly, N.M.
2015-10-15
Highlights: • Optical and thermoluminescence properties of Ga{sub 4}S{sub 3}Se crystals were investigated. • Indirect and direct band gap energies were found to be 2.39 and 2.53 eV, respectively. • The activation energy of the trap center was determined as 495 meV. - Abstract: The optical and thermoluminescence properties of GaS{sub 0.75}Se{sub 0.25} crystals were investigated in the present work. Transmission and reflection measurements were performed at room temperature in the wavelength range of 400–1000 nm. Analysis revealed the presence of indirect and direct transitions with band gap energies of 2.39 and 2.53 eV, respectively. TL spectra obtained at low temperatures (10–300 K) exhibited one peak with a maximum at 168 K. The observed peak was analyzed using the curve fitting, initial rise and peak shape methods to calculate the activation energy of the associated trap center. All applied methods were consistent with a value of 495 meV. The attempt-to-escape frequency and capture cross section of the trap center were determined using the results of the curve fitting. Heating rate dependence studies of the glow curve in the range of 0.4–0.8 K/s showed a decrease of the TL intensity and a shift of the peak maximum temperature to higher values.
NASA Astrophysics Data System (ADS)
Kurian, Jessyamma; Mathew, M. Jacob
2018-04-01
In this paper we report structural, optical and magnetic studies of three spinel ferrites, namely CuFe2O4, MgFe2O4 and ZnFe2O4, prepared in an autoclave under the same physical conditions but with two different liquid media and surfactants. We use water as the medium and trisodium citrate as the surfactant in the first method (hydrothermal) and ethylene glycol as the medium and polyethylene glycol as the surfactant in the second method (solvothermal). Phase identification and structural characterization are done using XRD, and morphological studies are carried out by TEM. Cubical and porous spherical morphologies are obtained for the hydrothermal and solvothermal processes, respectively, without any impurity phase. Optical studies are carried out using FTIR and UV-Vis reflectance spectra. In order to elucidate the nonlinear optical behaviour of the prepared nanomaterials, the open-aperture z-scan technique is used. From the fitted z-scan curves, the nonlinear absorption coefficient and the saturation intensity are determined. The magnetic characterization of the samples is performed at room temperature using vibrating sample magnetometer measurements. The M-H curves obtained are fitted using a theoretical equation and the different components of magnetization are determined. Nanoparticles with high saturation magnetization are obtained for MgFe2O4 and ZnFe2O4 prepared under the solvothermal reaction. The magnetic hyperfine parameters and the cation distribution of the prepared materials are determined using room temperature Mössbauer spectroscopy. The fitted spectra reveal differences in the magnetic hyperfine parameters owing to the changes in size and morphology.
NASA Astrophysics Data System (ADS)
Li, Xin; Tang, Li; Lin, Hai-Nan
2017-05-01
We compare six models (including the baryonic model, two dark matter models, two modified Newtonian dynamics models and one modified gravity model) in accounting for galaxy rotation curves. For the dark matter models, we assume NFW profile and core-modified profile for the dark halo, respectively. For the modified Newtonian dynamics models, we discuss Milgrom’s MOND theory with two different interpolation functions, the standard and the simple interpolation functions. For the modified gravity, we focus on Moffat’s MSTG theory. We fit these models to the observed rotation curves of 9 high-surface brightness and 9 low-surface brightness galaxies. We apply the Bayesian Information Criterion and the Akaike Information Criterion to test the goodness-of-fit of each model. It is found that none of the six models can fit all the galaxy rotation curves well. Two galaxies can be best fitted by the baryonic model without involving nonluminous dark matter. MOND can fit the largest number of galaxies, and only one galaxy can be best fitted by the MSTG model. Core-modified model fits about half the LSB galaxies well, but no HSB galaxies, while the NFW model fits only a small fraction of HSB galaxies but no LSB galaxies. This may imply that the oversimplified NFW and core-modified profiles cannot model the postulated dark matter haloes well. Supported by Fundamental Research Funds for the Central Universities (106112016CDJCR301206), National Natural Science Fund of China (11305181, 11547305 and 11603005), and Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y5KF181CJ1)
NASA Astrophysics Data System (ADS)
Nohtomi, Akihiro; Wakabayashi, Genichiro
2015-11-01
We evaluated the accuracy of a self-activation method with iodine-containing scintillators for quantifying 128I generation in an activation detector; the self-activation method was recently proposed for photo-neutron on-line measurements around X-ray radiotherapy machines. Here, we consider the accuracy of determining the initial count rate R0 observed just after termination of neutron irradiation of the activation detector. The value R0 is directly related to the amount of activity generated by incident neutrons; the detection efficiency for the radiation emitted by this activity should be taken into account in such an evaluation. Decay curves of 128I activity were numerically simulated by a computer program for various conditions, including different initial count rates (R0) and background rates (RB), as well as counting statistical fluctuations. The data points, sampled at one-minute intervals and integrated over the same period, were fit by a non-linear least-squares routine to obtain the value R0 as a fitting parameter with an associated uncertainty. The corresponding background rate RB was calculated simultaneously in the same fitting routine. Identical data sets were also evaluated by the well-known integration algorithm used in conventional activation methods, and the results were compared with those of the proposed fitting method. When we fixed RB = 500 cpm, a relative uncertainty σ_R0/R0 ≤ 0.02 was achieved for R0/RB ≥ 20 using 20 data points from 1 min to 20 min after the termination of neutron irradiation; σ_R0/R0 ≤ 0.01 was achieved for R0/RB ≥ 50 with the same data points. The decay-fitting method thus reached reasonable relative uncertainties for the initial count rate with practically realistic numbers of samples, clarifying the theoretical limits of the fitting method. The integration method was found to be potentially vulnerable to short-term variations in background levels, especially instantaneous contamination by spike-like noise; the fitting method easily detects and removes such noise.
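A minimal sketch of the simulated decay fitting follows, assuming a 128I half-life of about 25 min and Poisson counting noise; the count levels are synthetic, chosen to mimic the R0/RB = 20 case quoted above rather than reproduce the paper's program.

```python
# Fit R(t) = R0*exp(-lambda*t) + RB with the decay constant held fixed.
# Half-life of 25 min for 128I is an assumption stated in the lead-in.
import numpy as np
from scipy.optimize import curve_fit

LAM = np.log(2.0) / 25.0                             # decay constant (1/min)

def decay(t, r0, rb):
    return r0 * np.exp(-LAM * t) + rb

t = np.arange(1, 21, dtype=float)                    # 20 one-minute samples
rng = np.random.default_rng(4)
rate = rng.poisson(decay(t, 10000.0, 500.0))         # R0/RB = 20, Poisson noise

popt, pcov = curve_fit(decay, t, rate, p0=[5000.0, 100.0])
r0, rb = popt
sr0 = np.sqrt(pcov[0, 0])                            # 1-sigma uncertainty on R0
print(f"R0 = {r0:.0f} ± {sr0:.0f} cpm (relative {sr0/r0:.3f}), RB = {rb:.0f} cpm")
```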
Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure
NASA Technical Reports Server (NTRS)
Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark
2009-01-01
High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
AKLSQF - LEAST SQUARES CURVE FITTING
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fitting up to a 100 degree polynomial. All computations in the program are carried out under Double Precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
Students' Models of Curve Fitting: A Models and Modeling Perspective
ERIC Educational Resources Information Center
Gupta, Shweta
2010-01-01
The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…
Vajuvalli, Nithin N; Nayak, Krupa N; Geethanath, Sairam
2014-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is widely used in the diagnosis of cancer and is also a promising tool for monitoring tumor response to treatment. The Tofts model has become a standard for the analysis of DCE-MRI. The process of curve fitting employed with the Tofts equation to obtain the pharmacokinetic (PK) parameters is time-consuming for high-resolution scans. The current work demonstrates a frequency-domain approach applied to the standard Tofts equation to speed up the curve fitting used to obtain the pharmacokinetic parameters. The results show that, using the frequency-domain approach, curve fitting is computationally more efficient than with the time-domain approach.
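A hedged sketch of the frequency-domain idea: in the standard Tofts model the tissue curve is the convolution of the arterial input function (AIF) with an exponential kernel, so it can be evaluated by FFT multiplication inside the fitting loop. The AIF shape and parameter values below are illustrative assumptions, not the protocol of the cited work.

```python
# Standard Tofts model Ct(t) = Ktrans * (Cp ⊗ exp(-kep*t)), with the
# convolution computed by zero-padded FFTs; AIF and parameters are mock.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 300, 2.0)                           # s
dt = t[1] - t[0]
cp = (t / 30.0) * np.exp(1 - t / 30.0)               # mock AIF, peak at 30 s
n = 2 * t.size                                       # zero-pad against wrap-around

def tofts_fft(t, ktrans, kep):
    kern = np.exp(-kep * t)
    conv = np.fft.irfft(np.fft.rfft(cp, n) * np.fft.rfft(kern, n), n)[: t.size]
    return ktrans * conv * dt

ct = tofts_fft(t, 0.25 / 60, 0.8 / 60)               # mock tissue curve (1/s units)
popt, _ = curve_fit(tofts_fft, t, ct, p0=[0.1 / 60, 0.5 / 60])
print("Ktrans = %.4f 1/s, kep = %.4f 1/s" % tuple(popt))
```

Each model evaluation costs O(n log n) instead of the O(n^2) of direct convolution, which is where the speed-up comes from.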
NASA Technical Reports Server (NTRS)
Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.
1991-01-01
The computer codes developed here provide self-consistent thermodynamic and transport properties for equilibrium air for temperatures from 500 to 30,000 K over a pressure range of 10^-4 to 10^-2 atm. These properties are computed through the use of temperature-dependent curve fits for discrete values of pressure, with interpolation employed for intermediate values of pressure. The curve fits are based on mixture values calculated from an 11-species air model. The individual species properties used in the mixture relations are obtained from a recent study by the present authors. A review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1260.
A Quadriparametric Model to Describe the Diversity of Waves Applied to Hormonal Data.
Abdullah, Saman; Bouchard, Thomas; Klich, Amna; Leiva, Rene; Pyper, Cecilia; Genolini, Christophe; Subtil, Fabien; Iwaz, Jean; Ecochard, René
2018-05-01
Even in normally cycling women, hormone level shapes may vary widely between cycles and between women. For decades, finding ways to characterize and compare cycle hormone waves has been difficult, and most solutions, in particular polynomials or splines, do not correspond to physiologically meaningful parameters. We present an original concept for characterizing most hormone waves with only two parameters. The modelling attempt considered pregnanediol-3-alpha-glucuronide (PDG) and luteinising hormone (LH) levels in 266 cycles (with ultrasound-identified ovulation day) in 99 normally fertile women aged 18 to 45. The study searched for a convenient wave description process and carried out an extended search for the best-fitting density distribution. The highly flexible beta-binomial distribution offered the best fit for most hormone waves and required only two readily available and understandable wave parameters: location and scale. In bell-shaped waves (e.g., PDG curves), early peaks may be fitted with low location and scale parameters; plateau shapes are obtained with higher scale parameters. I-shaped, J-shaped, and U-shaped waves (sometimes the shapes of LH curves) may be fitted with a high scale parameter and, respectively, a low, high, or medium location parameter. These location and scale parameters will later be correlated with feminine physiological events. Our results demonstrate that, with unimodal waves, complex methods (e.g., functional mixed-effects models using smoothing splines, second-order growth mixture models, or functional principal-component-based methods) may be avoided. The use, application, and, especially, result interpretation of four-parameter analyses might be advantageous within the context of feminine physiological events. Schattauer GmbH.
Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.
Chen, C W; Chen, D Z
2001-11-01
Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the predictive ability of the model. In this paper, for three-layer feedforward networks under a monotonicity constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulating the true boiling point curve of a crude oil under the condition of monotonic increase. The simulation results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
Hierarchical statistical modeling of xylem vulnerability to cavitation.
Ogle, Kiona; Barber, Jarrett J; Willson, Cynthia; Thompson, Brenda
2009-01-01
Cavitation of xylem elements diminishes the water transport capacity of plants, and quantifying xylem vulnerability to cavitation is important to understanding plant function. Current approaches to analyzing hydraulic conductivity (K) data to infer vulnerability to cavitation suffer from problems such as the use of potentially unrealistic vulnerability curves, difficulty interpreting parameters in these curves, a statistical framework that ignores sampling design, and an overly simplistic view of uncertainty. This study illustrates how two common curves (exponential-sigmoid and Weibull) can be reparameterized in terms of meaningful parameters: maximum conductivity (k(sat)), water potential (-P) at which percentage loss of conductivity (PLC) = X% (P(X)), and the slope of the PLC curve at P(X) (S(X)), a 'sensitivity' index. We provide a hierarchical Bayesian method for fitting the reparameterized curves to K(H) data. We illustrate the method using data for roots and stems of two populations of Juniperus scopulorum and test for differences in k(sat), P(X), and S(X) between different groups. Two important results emerge from this study. First, the Weibull model is preferred because it produces biologically realistic estimates of PLC near P = 0 MPa. Second, stochastic embolisms contribute an important source of uncertainty that should be included in such analyses.
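To make the reparameterization concrete, the sketch below fits a Weibull PLC curve written directly in terms of P50 (the water potential at 50% loss) and a shape parameter, then recovers the slope at P50 analytically; it is a plain least-squares stand-in for the study's hierarchical Bayesian fit, applied to synthetic data.

```python
# Reparameterized Weibull vulnerability curve:
# PLC(P) = 100*(1 - exp(-ln2 * (P/P50)**c)), so PLC(P50) = 50 exactly,
# and dPLC/dP at P50 is 100*ln2*c/(2*P50).
import numpy as np
from scipy.optimize import curve_fit

def plc_weibull(p, p50, c):
    return 100.0 * (1.0 - np.exp(-np.log(2.0) * (p / p50) ** c))

p = np.linspace(0.1, 8.0, 25)                        # -water potential (MPa)
plc = plc_weibull(p, 3.0, 2.5) + np.random.default_rng(5).normal(0, 2.0, p.size)

(p50, c), _ = curve_fit(plc_weibull, p, plc, p0=[2.0, 2.0])
s50 = 100.0 * np.log(2.0) * c / (2.0 * p50)          # slope at P50 (%/MPa)
print(f"P50 = {p50:.2f} MPa, shape c = {c:.2f}, S50 = {s50:.1f} %/MPa")
```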
NASA Astrophysics Data System (ADS)
Lai, Chia-Lin; Lee, Jhih-Shian; Chen, Jyh-Cheng
2015-02-01
Energy-mapping, the conversion of linear attenuation coefficients (μ) calculated at the effective computed tomography (CT) energy to those corresponding to 511 keV, is an important step in CT-based attenuation correction (CTAC) for positron emission tomography (PET) quantification. The aim of this study was to implement the energy-mapping step by using the curve-fitting ability of an artificial neural network (ANN). Eleven digital phantoms simulated by the Geant4 application for tomographic emission (GATE) and 12 physical phantoms composed of various volume concentrations of iodine contrast were used in this study to generate energy-mapping curves by acquiring the average CT values and linear attenuation coefficients at 511 keV of these phantoms. The curves were built with the ANN toolbox in MATLAB. To evaluate the effectiveness of the proposed method, another two digital phantoms (liver and spine-bone) and three physical phantoms (volume concentrations of 3%, 10% and 20%) were used to compare the energy-mapping curves built by the ANN and by the bilinear transformation, and a semi-quantitative analysis was performed by injecting 0.5 mCi FDG into an SD rat for micro-PET scanning. The results showed that the percentage relative difference (PRD) values of the digital liver and spine-bone phantoms are 5.46% and 1.28% based on the ANN, and 19.21% and 1.87% based on the bilinear transformation. For the 3%, 10% and 20% physical phantoms, the PRD values of the ANN curve are 0.91%, 0.70% and 3.70%, and the PRD values of the bilinear transformation are 3.80%, 1.44% and 4.30%, respectively. Both digital and physical phantoms indicated that the ANN curve can achieve better performance than the bilinear transformation. The semi-quantitative analysis of the rat PET images showed that the ANN curve can reduce the inaccuracy caused by the attenuation effect from 13.75% to 4.43% in brain tissue, and from 23.26% to 9.41% in heart tissue. On the other hand, the inaccuracy remained 6.47% and 11.51% in brain and heart tissue when the bilinear transformation was used. Overall, it can be concluded that the bilinear transformation method resulted in considerable bias and the newly proposed calibration curve built by the ANN could achieve better results with acceptable accuracy.
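For comparison with the ANN curve, here is a minimal sketch of the classic bilinear energy-mapping that converts CT numbers to 511 keV linear attenuation coefficients; the breakpoint and slopes are illustrative textbook-style values, not constants calibrated for any particular scanner or the values used in the cited study.

```python
# Bilinear HU -> mu(511 keV) mapping: one slope for soft tissue (air-water
# mixtures, HU <= 0) and a shallower one for bone-containing voxels (HU > 0).
# MU_WATER_511 and the bone slope are illustrative assumptions.
import numpy as np

MU_WATER_511 = 0.096                                 # cm^-1 at 511 keV (approx.)

def bilinear_mu511(hu):
    hu = np.asarray(hu, dtype=float)
    soft = MU_WATER_511 * (1.0 + hu / 1000.0)        # HU <= 0: air-water mix
    bone = MU_WATER_511 + 0.0000525 * hu             # HU > 0: water-bone mix
    return np.where(hu <= 0, soft, bone)

print(bilinear_mu511([-1000, 0, 1000]))              # air, water, dense bone
```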
Fenton, Tanis R; Anderson, Diane; Groh-Wargo, Sharon; Hoyos, Angela; Ehrenkranz, Richard A; Senterre, Thibault
2018-05-01
To examine how well growth velocity recommendations for preterm infants fit with current growth references: Fenton 2013, Olsen 2010, INTERGROWTH 2015, and the World Health Organization Growth Standard 2006. Weight gains by the Average (2-point), Exponential (2-point), and Early (1-point) methods were calculated for 1-, 4-, 8-, 12-, and 16-week time periods. The growth references' weekly velocities (g/kg/d, grams/day and cm/week) were illustrated graphically with the frequently quoted 15 g/kg/d, 10-30 grams/day and 1 cm/week rates superimposed. The 15 g/kg/d and 1 cm/week growth velocity rates were calculated from 24-50 weeks and superimposed on the Fenton and Olsen preterm growth charts. The Average and Exponential g/kg/d estimates showed close agreement for all ages (range 5.0-18.9 g/kg/d), while the Early method yielded values as high as 41 g/kg/d. All 3 preterm growth references were similar to the 15 g/kg/d rate at 34 weeks, but rates were higher before and lower after that age. For grams/day, the growth references changed from 10 to 30 grams/day over 24-33 weeks. Head growth rates generally fit the 1 cm/week velocity for 23-30 weeks, and length growth rates fit for 37-40 weeks. The calculated g/kg/d curves deviated from the growth charts, first downward, then steeply crossing the median curves near term. Human growth is not constant through gestation and early infancy. The frequently quoted 15 g/kg/d, 10-30 grams/day and 1 cm/week rates only fit current growth references for limited time periods. Rates of 15-20 g/kg/d (calculated using the average or exponential methods) are a reasonable goal for infants at 23-36 weeks, but not beyond. Copyright © 2017 Elsevier Inc. All rights reserved.
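The three velocity calculations compared above can be written compactly; the formulas below follow common definitions from the preterm-growth literature (Average uses the mean of the two weights, Exponential the log ratio, Early the first weight only) and are assumptions insofar as the paper's exact equations are not reproduced here.

```python
# Three common weight-gain velocity formulas (g/kg/d); weights in grams.
import math

def gv_average(w1, w2, days):                        # 2-point average method
    return 1000.0 * (w2 - w1) / (((w1 + w2) / 2.0) * days)

def gv_exponential(w1, w2, days):                    # 2-point exponential method
    return 1000.0 * math.log(w2 / w1) / days

def gv_early(w1, w2, days):                          # 1-point (early) method
    return 1000.0 * (w2 - w1) / (w1 * days)

w1, w2, days = 1000.0, 1500.0, 28.0
print(f"average {gv_average(w1, w2, days):.1f}, "
      f"exponential {gv_exponential(w1, w2, days):.1f}, "
      f"early {gv_early(w1, w2, days):.1f} g/kg/d")
```

On this example the Average and Exponential values nearly coincide (14.3 vs. 14.5) while the Early method is noticeably higher (17.9), mirroring the overestimation reported above.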
Kobayashi, Tohru; Fuse, Shigeto; Sakamoto, Naoko; Mikami, Masashi; Ogawa, Shunichi; Hamaoka, Kenji; Arakaki, Yoshio; Nakamura, Tsuneyuki; Nagasawa, Hiroyuki; Kato, Taichi; Jibiki, Toshiaki; Iwashima, Satoru; Yamakawa, Masaru; Ohkubo, Takashi; Shimoyama, Shinya; Aso, Kentaro; Sato, Seiichi; Saji, Tsutomu
2016-08-01
Several coronary artery Z score models have been developed. However, a Z score model derived by the lambda-mu-sigma (LMS) method has not been established. Echocardiographic measurements of the proximal right coronary artery, left main coronary artery, proximal left anterior descending coronary artery, and proximal left circumflex artery were prospectively collected in 3,851 healthy children ≤18 years of age and divided into developmental and validation data sets. In the developmental data set, smooth curves were fitted for each coronary artery using linear, logarithmic, square-root, and LMS methods for both sexes. The relative goodness of fit of these models was compared using the Bayesian information criterion. The best-fitting model was tested for reproducibility using the validation data set. The goodness of fit of the selected model was visually compared with that of the previously reported regression models using a Q-Q plot. Because the internal diameter of each coronary artery was not similar between sexes, sex-specific Z score models were developed. The LMS model with body surface area as the independent variable showed the best goodness of fit; therefore, the internal diameter of each coronary artery was transformed into a sex-specific Z score on the basis of body surface area using the LMS method. In the validation data set, a Q-Q plot of each model indicated that the distribution of Z scores in the LMS models was closer to the normal distribution compared with previously reported regression models. Finally, the final models for each coronary artery in both sexes were developed using the developmental and validation data sets. A Microsoft Excel-based Z score calculator was also created, which is freely available online (http://raise.umin.jp/zsp/calculator/). Novel LMS models with which to estimate the sex-specific Z score of each internal coronary artery diameter were generated and validated using a large pediatric population. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
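The LMS transform underlying such Z scores is compact enough to show directly (Cole's formulation); the L, M, and S values in the example below are made-up placeholders, not the published coronary reference values or those of the online calculator.

```python
# Cole's LMS Z score: L = skewness (Box-Cox power), M = median,
# S = coefficient of variation at the child's body surface area.
import math

def lms_z(y, L, M, S):
    """LMS Z score; the L == 0 limit is the log-normal case."""
    if abs(L) < 1e-12:
        return math.log(y / M) / S
    return ((y / M) ** L - 1.0) / (L * S)

diameter_mm = 3.4                                    # measured RCA diameter
z = lms_z(diameter_mm, L=0.3, M=2.8, S=0.12)         # placeholder L, M, S
print(f"Z = {z:.2f}")
```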
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaczmarski, Krzysztof; Guiochon, Georges A
The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N = 500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; and (2) the errors made in selecting an incorrect isotherm model and fitting the experimental data to it. Both errors decrease significantly with increasing column efficiency for FA and FACP, but not for PM.
System and process for ultrasonic characterization of deformed structures
Panetta, Paul D [Williamsburg, VA; Morra, Marino [Richland, WA; Johnson, Kenneth I [Richland, WA
2011-11-22
Generally speaking, the method of the present invention is performed by making various ultrasonic scans at preselected orientations along the length of a material being tested. Data from the scans are then plotted together with various calculated parameters that are calculated from this data. Lines or curves are then fitted to the respective plotted points. Review of these plotted curves allows the location and severity of defects within these sections to be determined and quantified. With this information various other decisions related to how, when or whether repair or replacement of a particular portion of a structure can be made.
1978-12-01
multinational corporation in the 1960s placed extreme emphasis on the need for effective and efficient noise suppression devices. Phase I of work...through model and engine testing applicable to an afterburning turbojet engine. Suppressor designs were based primarily on empirical methods. Phase II...using "ray" acoustics. This method is in contrast to the purely empirical method which consists of the curve-fitting of normalized data. In order to
A Bayesian Method for Evaluating Passing Scores: The PPoP Curve
ERIC Educational Resources Information Center
Wainer, Howard; Wang, X. A.; Skorupski, William P.; Bradlow, Eric T.
2005-01-01
In this note, we demonstrate an interesting use of the posterior distributions (and corresponding posterior samples of proficiency) that are yielded by fitting a fully Bayesian test scoring model to a complex assessment. Specifically, we examine the efficacy of the test in combination with the specific passing score that was chosen through expert…
QUANTITATION OF MENSTRUAL BLOOD LOSS: A RADIOACTIVE METHOD UTILIZING A COUNTING DOME
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tauxe, W.N.
A description has been given of a simple, accurate technique for the quantitation of menstrual blood loss, involving the determination of a three-dimensional isosensitivity curve and the fashioning of a lucite dome with cover to fit these specifications. Ten normal subjects lost no more than 50 ml each per menstrual period. (auth)
Numerical integration of discontinuous functions: moment fitting and smart octree
NASA Astrophysics Data System (ADS)
Hubrich, Simeon; Di Stolfo, Paolo; Kudela, László; Kollmannsberger, Stefan; Rank, Ernst; Schröder, Andreas; Düster, Alexander
2017-11-01
A fast and simple grid generation can be achieved by non-standard discretization methods where the mesh does not conform to the boundary or the internal interfaces of the problem. However, this simplification leads to discontinuous integrands for intersected elements and, therefore, standard quadrature rules do not perform well anymore. Consequently, special methods are required for the numerical integration. To this end, we present two approaches to obtain quadrature rules for arbitrary domains. The first approach is based on an extension of the moment fitting method combined with an optimization strategy for the position and weights of the quadrature points. In the second approach, we apply the smart octree, which generates curved sub-cells for the integration mesh. To demonstrate the performance of the proposed methods, we consider several numerical examples, showing that the methods lead to efficient quadrature rules, resulting in less integration points and in high accuracy.
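A one-dimensional toy version of moment fitting illustrates the idea: for fixed point positions inside a cut cell, the quadrature weights are obtained by matching the moments of a polynomial basis over the visible part of the domain. The point positions and cut location below are arbitrary choices, not values from the paper.

```python
import numpy as np

# Cell [0, 1] cut by an interface at x = 0.6; integrate only over [0, 0.6].
cut = 0.6
n = 4                                    # number of quadrature points / moments
x = np.linspace(0.1, 0.9, n)             # fixed point positions in the cell

# Moments of the monomial basis over the visible part of the cell
k = np.arange(n)
moments = cut ** (k + 1) / (k + 1)       # integral of x^k over [0, cut]

# Vandermonde system: sum_i w_i * x_i^k = moment_k
V = x[None, :] ** k[:, None]
w = np.linalg.solve(V, moments)

# Test on f(x) = x^2 + 1; the exact integral over [0, 0.6] is 0.6^3/3 + 0.6
f = lambda t: t**2 + 1
print(np.dot(w, f(x)), cut**3 / 3 + cut)
```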
New approach to the calculation of pistachio powder hysteresis
NASA Astrophysics Data System (ADS)
Tavakolipour, Hamid; Mokhtarian, Mohsen
2016-04-01
Moisture sorption isotherms for pistachio powder were determined by the gravimetric method at temperatures of 15, 25, 35 and 40°C. Selected mathematical models were tested to determine the most suitable model for predicting the isotherm curve. The results show that the Caurie model had the most satisfactory goodness of fit. Another purpose of this research was to introduce a new methodology for determining the amount of hysteresis at different temperatures, using the best predictive isotherm model and a definite integration method. The results demonstrated that maximum hysteresis is related to the multi-layer water (in the range of water activity 0.2-0.6), which corresponds to the capillary condensation region, and that this phenomenon decreases with increasing temperature.
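The definite-integration idea reduces to computing the area between the fitted desorption and adsorption branches over the water-activity range of interest. A sketch follows, with invented polynomial branches standing in for the fitted Caurie curves.

```python
import numpy as np

# Illustrative adsorption/desorption equilibrium moisture contents
# (synthetic values, not the pistachio data from the paper)
aw = np.linspace(0.2, 0.6, 50)            # water activity range of interest
ads = 5.0 + 12.0 * aw**2                  # fitted adsorption branch
des = 6.0 + 13.0 * aw**2                  # fitted desorption branch

# Hysteresis = area between the two fitted branches over [0.2, 0.6]
hysteresis = np.trapz(des - ads, aw)
print(f"hysteresis area = {hysteresis:.3f}")
```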
On high b diffusion imaging in the human brain: ruminations and experimental insights.
Mulkern, Robert V; Haker, Steven J; Maier, Stephan E
2009-10-01
Interest in the manner in which brain tissue signal decays with b factor in diffusion imaging schemes has grown in recent years following the observation that the decay curves depart from purely monoexponential decay behavior. Regardless of the model or fitting function proposed for characterizing sufficiently sampled decay curves (vide infra), the departure from monoexponentiality spells increased tissue characterization potential. The degree to which this potential can be harnessed to improve specificity, sensitivity and spatial localization of diseases in brain, and other tissues, largely remains to be explored. Furthermore, the degree to which currently popular diffusion tensor imaging methods, including visually impressive white matter fiber "tractography" results, have almost completely ignored the nonmonoexponential nature of the basic signal decay with b factor is worthy of communal introspection. Here we limit our attention to a review of the basic experimental features associated with brain water signal diffusion decay curves as measured over extended b-factor ranges, the simple few-parameter fitting functions that have been proposed to characterize these decays, and the more involved models, e.g., "ruminations," which have been proposed to account for the nonmonoexponentiality to date.
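Among the few-parameter fitting functions reviewed, the simplest departure from monoexponentiality is the biexponential. A sketch of fitting one synthetic decay curve over an extended b-factor range follows; all values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(b, f, d_fast, d_slow):
    """Two-compartment signal decay: S(b)/S(0)."""
    return f * np.exp(-b * d_fast) + (1.0 - f) * np.exp(-b * d_slow)

# Synthetic decay sampled over an extended b-factor range (s/mm^2)
b = np.linspace(0, 5000, 30)
s = biexp(b, 0.7, 1.2e-3, 0.25e-3) + np.random.normal(0, 0.005, b.size)

p, _ = curve_fit(biexp, b, s, p0=[0.5, 1e-3, 1e-4],
                 bounds=([0, 0, 0], [1, 1e-2, 1e-2]))
print(f"fast fraction {p[0]:.2f}, D_fast {p[1]:.2e}, D_slow {p[2]:.2e}")
```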
On high b diffusion imaging in the human brain: ruminations and experimental insights✩
Mulkern, Robert V.; Haker, Steven J.; Maier, Stephan E.
2010-01-01
Interest in the manner in which brain tissue signal decays with b factor in diffusion imaging schemes has grown in recent years following the observation that the decay curves depart from purely monoexponential decay behavior. Regardless of the model or fitting function proposed for characterizing sufficiently sampled decay curves (vide infra), the departure from monoexponentiality spells increased tissue characterization potential. The degree to which this potential can be harnessed to improve specificity, sensitivity and spatial localization of diseases in brain, and other tissues, largely remains to be explored. Furthermore, the degree to which currently popular diffusion tensor imaging methods, including visually impressive white matter fiber “tractography” results, have almost completely ignored the nonmonoexponential nature of the basic signal decay with b factor is worthy of communal introspection. Here we limit our attention to a review of the basic experimental features associated with brain water signal diffusion decay curves as measured over extended b-factor ranges, the simple few-parameter fitting functions that have been proposed to characterize these decays, and the more involved models, e.g., “ruminations,” which have been proposed to account for the nonmonoexponentiality to date. PMID:19520535
Hybrid Micro-Electro-Mechanical Tunable Filter
2007-09-01
Figure 2.10), one can see the developers have used surface micromachining techniques to build the micromirror structure over the CMOS addressing...DBRs, microcavity composition, initial air gap, contact layers, substrate Dispersion Data Curve-fit dispersion data or generate dispersion function...measurements • Curve-fit the dispersion data or generate a continuous, wavelength-dependent, representation of material dispersion • Manually design the
Consideration of Wear Rates at High Velocities
2010-03-01
[Report front-matter fragment: figure-list entries covering strain versus a three-dimensional model, a single-asperity wear rate integral, third-stage slipper accumulated frictional heating, slipper surface temperature, melt depth, and coefficients for the frictional-heat curve fits of the third-stage slipper.]
NASA Astrophysics Data System (ADS)
He, Shiyuan; Wang, Lifan; Huang, Jianhua Z.
2018-04-01
With growing data from ongoing and future supernova surveys, it is possible to empirically quantify the shapes of SNIa light curves in more detail, and to quantitatively relate the shape parameters with the intrinsic properties of SNIa. Building such relationships is critical in controlling systematic errors associated with supernova cosmology. Based on a collection of well-observed SNIa samples accumulated in the past years, we construct an empirical SNIa light curve model using a statistical method called the functional principal component analysis (FPCA) for sparse and irregularly sampled functional data. Using this method, the entire light curve of an SNIa is represented by a linear combination of principal component functions, and the SNIa is represented by a few numbers called “principal component scores.” These scores are used to establish relations between light curve shapes and physical quantities such as intrinsic color, interstellar dust reddening, spectral line strength, and spectral classes. These relations allow for descriptions of some critical physical quantities based purely on light curve shape parameters. Our study shows that some important spectral feature information is being encoded in the broad band light curves; for instance, we find that the light curve shapes are correlated with the velocity and velocity gradient of the Si II λ6355 line. This is important for supernova surveys (e.g., LSST and WFIRST). Moreover, the FPCA light curve model is used to construct the entire light curve shape, which in turn is used in a functional linear form to adjust intrinsic luminosity when fitting distance models.
AN ANALYSIS OF THE SHAPES OF INTERSTELLAR EXTINCTION CURVES. VI. THE NEAR-IR EXTINCTION LAW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fitzpatrick, E. L.; Massa, D.
We combine new observations from the Hubble Space Telescope's Advanced Camera for Surveys with existing data to investigate the wavelength dependence of near-IR (NIR) extinction. Previous studies suggest a power law form for NIR extinction, with a 'universal' value of the exponent, although some recent observations indicate that significant sight line-to-sight line variability may exist. We show that a power-law model for the NIR extinction provides an excellent fit to most extinction curves, but that the value of the power, β, varies significantly from sight line to sight line. Therefore, it seems that a 'universal NIR extinction law' is not possible. Instead, we find that as β decreases, R(V) ≡ A(V)/E(B-V) tends to increase, suggesting that NIR extinction curves which have been considered 'peculiar' may, in fact, be typical for different R(V) values. We show that the power-law parameters can depend on the wavelength interval used to derive them, with β increasing as longer wavelengths are included. This result implies that extrapolating power-law fits to determine R(V) is unreliable. To avoid this problem, we adopt a different functional form for NIR extinction. This new form mimics a power law whose exponent increases with wavelength, has only two free parameters, can fit all of our curves over a longer wavelength baseline and to higher precision, and produces R(V) values which are consistent with independent estimates and commonly used methods for estimating R(V). Furthermore, unlike the power-law model, it gives R(V)s that are independent of the wavelength interval used to derive them. It also suggests that the relation R(V) = -1.36 E(K-V)/E(B-V) - 0.79 can estimate R(V) to ±0.12. Finally, we use model extinction curves to show that our extinction curves are in accord with theoretical expectations, and demonstrate how large samples of observational quantities can provide useful constraints on the grain properties.
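Fitting the power-law form A(λ) ∝ λ^(−β) is a linear regression in log-log space. A sketch with hypothetical JHK-band extinction values follows; the numbers are invented for illustration.

```python
import numpy as np

# Illustrative NIR extinction values A(lambda), in magnitudes, at J, H, K
lam = np.array([1.25, 1.65, 2.16])       # band centers in microns
A = np.array([0.282, 0.175, 0.112])      # hypothetical extinctions

# Power law A = A1 * lam**(-beta) is linear in log-log space
slope, intercept = np.polyfit(np.log(lam), np.log(A), 1)
beta = -slope
print(f"beta = {beta:.2f}")
```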
Global geometric torsion estimation in adolescent idiopathic scoliosis.
Kadoury, Samuel; Shen, Jesse; Parent, Stefan
2014-04-01
Several attempts have been made to measure geometrical torsion in adolescent idiopathic scoliosis (AIS) and quantify the three-dimensional (3D) deformation of the spine. However, these approaches are sensitive to imprecisions in the 3D modeling of the anatomy and can only capture the effect locally at the vertebrae, ignoring the global effect at the regional level and thus have never been widely used to follow the progression of a deformity. The goal of this work was to evaluate the relevance of a novel geometric torsion descriptor based on a parametric modeling of the spinal curve as a 3D index of scoliosis. First, an image-based approach anchored on prior statistical distributions is used to reconstruct the spine in 3D from biplanar X-rays. Geometric torsion measuring the twisting effect of the spine is then estimated using a technique that approximates local arc-lengths with parametric curve fitting centered at the neutral vertebra in different spinal regions. We first evaluated the method with simulated experiments, demonstrating the method's robustness toward added noise and reconstruction inaccuracies. A pilot study involving 65 scoliotic patients exhibiting different types of deformities was also conducted. Results show the method is able to discriminate between different types of deformation based on this novel 3D index evaluated in the main thoracic and thoracolumbar/lumbar regions. This demonstrates that geometric torsion modeled by parametric spinal curve fitting is a robust tool that can be used to quantify the 3D deformation of AIS and possibly exploited as an index to classify the 3D shape.
Vriens, Dennis; de Geus-Oei, Lioe-Fee; Oyen, Wim J G; Visser, Eric P
2009-12-01
For the quantification of dynamic (18)F-FDG PET studies, the arterial plasma time-activity concentration curve (APTAC) needs to be available. This can be obtained using serial sampling of arterial blood or an image-derived input function (IDIF). Arterial sampling is invasive and often not feasible in practice; IDIFs are biased because of partial-volume effects and cannot be used when no large arterial blood pool is in the field of view. We propose a mathematic function, consisting of an initial linear rising activity concentration followed by a triexponential decay, to describe the APTAC. This function was fitted to 80 oncologic patients and verified for 40 different oncologic patients by area-under-the-curve (AUC) comparison, Patlak glucose metabolic rate (MR(glc)) estimation, and therapy response monitoring (Delta MR(glc)). The proposed function was compared with the gold standard (serial arterial sampling) and the IDIF. To determine the free parameters of the function, plasma time-activity curves based on arterial samples in 80 patients were fitted after normalization for administered activity (AA) and initial distribution volume (iDV) of (18)F-FDG. The medians of these free parameters were used for the model. In 40 other patients (20 baseline and 20 follow-up dynamic (18)F-FDG PET scans), this model was validated. The population-based curve, individually calibrated by AA and iDV (APTAC(AA/iDV)), by 1 late arterial sample (APTAC(1 sample)), and by the individual IDIF (APTAC(IDIF)), was compared with the gold standard of serial arterial sampling (APTAC(sampled)) using the AUC. Additionally, these 3 methods of APTAC determination were evaluated with Patlak MR(glc) estimation and with Delta MR(glc) for therapy effects using serial sampling as the gold standard. Excellent individual fits to the function were derived with significantly different decay constants (P < 0.001). Correlations between AUC from APTAC(AA/iDV), APTAC(1 sample), and APTAC(IDIF) with the gold standard (APTAC(sampled)) were 0.880, 0.994, and 0.856, respectively. For MR(glc), these correlations were 0.963, 0.994, and 0.966, respectively. In response monitoring, these correlations were 0.947, 0.982, and 0.949, respectively. Additional scaling by 1 late arterial sample showed a significant improvement (P < 0.001). The fitted input function calibrated for AA and iDV performed similarly to IDIF. Performance improved significantly using 1 late arterial sample. The proposed model can be used when an IDIF is not available or when serial arterial sampling is not feasible.
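The proposed input function, a linear rise joined continuously to a tri-exponential decay, can be written compactly as below. The parameter values are placeholders for illustration, not the paper's population medians.

```python
import numpy as np

def aptac(t, t_peak, l1, k1, l2, k2, l3, k3):
    """Linear rise to t_peak, then tri-exponential decay.
    Continuity at t_peak is enforced by setting the peak value
    to the sum of the three exponential amplitudes."""
    peak = l1 + l2 + l3
    rise = peak * t / t_peak
    dt = np.maximum(t - t_peak, 0.0)
    decay = (l1 * np.exp(-k1 * dt) + l2 * np.exp(-k2 * dt)
             + l3 * np.exp(-k3 * dt))
    return np.where(t < t_peak, rise, decay)

t = np.linspace(0.0, 60.0, 601)                    # minutes after injection
c = aptac(t, 0.8, 40.0, 4.0, 8.0, 0.4, 2.0, 0.01)  # placeholder values
```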
Wing Shape Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2015-01-01
A new two-step theory is investigated for predicting the deflection and slope of an entire structure using strain measurements at discrete locations. In the first step, measured strains are fitted using a piecewise least-squares curve fitting method together with the cubic spline technique. These fitted strains are integrated twice to obtain deflection data along the fibers. In the second step, the computed deflections along the fibers are combined with a finite element model of the structure in order to extrapolate the deflection and slope of the entire structure through the use of the System Equivalent Reduction and Expansion Process. The theory is first validated on a computational model, a cantilevered rectangular wing. It is then applied to test data from a cantilevered swept wing model.
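The first step — spline-fit the strain, then integrate twice — can be sketched as below. The strain values, the fiber offset c, and the cantilever boundary conditions are all illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Strain gauge readings along a fiber (synthetic, illustrative)
x = np.linspace(0.0, 1.0, 12)            # span position (m)
strain = 1e-3 * (1.0 - x) ** 2           # measured bending strain

c = 0.02  # hypothetical distance from neutral axis to the fiber (m)

# Curvature from bending strain: w''(x) = strain / c (sign convention aside)
curv = CubicSpline(x, strain / c)

# Integrate twice; antiderivatives vanish at x = 0, matching a
# cantilever root with zero slope and zero deflection there
slope = curv.antiderivative(1)
defl = slope.antiderivative(1)
print(defl(1.0))                         # tip deflection estimate
```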
Absolute Distances to Nearby Type Ia Supernovae via Light Curve Fitting Methods
NASA Astrophysics Data System (ADS)
Vinkó, J.; Ordasi, A.; Szalai, T.; Sárneczky, K.; Bányai, E.; Bíró, I. B.; Borkovits, T.; Hegedüs, T.; Hodosán, G.; Kelemen, J.; Klagyivik, P.; Kriskovics, L.; Kun, E.; Marion, G. H.; Marschalkó, G.; Molnár, L.; Nagy, A. P.; Pál, A.; Silverman, J. M.; Szakáts, R.; Szegedi-Elek, E.; Székely, P.; Szing, A.; Vida, K.; Wheeler, J. C.
2018-06-01
We present a comparative study of absolute distances to a sample of very nearby, bright Type Ia supernovae (SNe) derived from high cadence, high signal-to-noise, multi-band photometric data. Our sample consists of four SNe: 2012cg, 2012ht, 2013dy and 2014J. We present new homogeneous, high-cadence photometric data in Johnson–Cousins BVRI and Sloan g‧r‧i‧z‧ bands taken from two sites (Piszkesteto and Baja, Hungary), and the light curves are analyzed with publicly available light curve fitters (MLCS2k2, SNooPy2 and SALT2.4). When comparing the best-fit parameters provided by the different codes, it is found that the distance moduli of moderately reddened SNe Ia agree within ≲0.2 mag, and the agreement is even better (≲0.1 mag) for the highest signal-to-noise BVRI data. For the highly reddened SN 2014J the dispersion of the inferred distance moduli is slightly higher. These SN-based distances are in good agreement with the Cepheid distances to their host galaxies. We conclude that the current state-of-the-art light curve fitters for Type Ia SNe can provide consistent absolute distance moduli having less than ∼0.1–0.2 mag uncertainty for nearby SNe. Still, there is room for future improvements to reach the desired ∼0.05 mag accuracy in the absolute distance modulus.
Some applications of categorical data analysis to epidemiological studies.
Grizzle, J E; Koch, G G
1979-01-01
Several examples of categorized data from epidemiological studies are analyzed to illustrate that more informative analyses than tests of independence can be performed by fitting models. All of the analyses fit into a unified conceptual framework and can be performed by weighted least squares. The methods presented show how to calculate point estimates of parameters, asymptotic variances, and asymptotically valid χ2 tests. The examples presented are analysis of relative risks estimated from several 2 x 2 tables, analysis of selected features of life tables, construction of synthetic life tables from cross-sectional studies, and analysis of dose-response curves. PMID:540590
Chromaticity of gravitational microlensing events
NASA Astrophysics Data System (ADS)
Han, Cheongho; Park, Seong-Hong; Jeong, Jang-Hae
2000-07-01
In this paper, we investigate the colour changes of gravitational microlensing events caused by the two different mechanisms of differential amplification for a limb-darkened extended source and blending. From this investigation, we find that the colour changes of limb-darkened extended source events (colour curves) have dramatically different characteristics depending on whether the lens transits the source star or not. We show that for a source transit event, the lens proper motion can be determined by simply measuring the turning time of the colour curve instead of fitting the overall colour or light curves. We also find that even for a very small fraction of blended light, the colour changes induced by blending are equivalent to those induced by limb darkening, causing serious distortion in the observed colour curve. Therefore, to obtain useful information about the lens and source star from the colour curve of an event, it will be essential to correct for blending. We discuss various methods of blending correction.
A method for evaluating models that use galaxy rotation curves to derive the density profiles
NASA Astrophysics Data System (ADS)
de Almeida, Álefe O. F.; Piattella, Oliver F.; Rodrigues, Davi C.
2016-11-01
There are some approaches, either based on General Relativity (GR) or modified gravity, that use galaxy rotation curves to derive the matter density of the corresponding galaxy, a procedure that would indicate either a partial or a complete elimination of dark matter in galaxies. Here we review these approaches, clarify the difficulties of this inverted procedure, present a method for evaluating them, and use it to test two specific approaches that are based on GR: the Cooperstock-Tieu (CT) and the Balasin-Grumiller (BG) approaches. Using this new method, we find that neither of the tested approaches can satisfactorily fit the observational data without dark matter. The CT approach results can be significantly improved if some dark matter is considered, while for the BG approach no usual dark matter halo can improve its results.
Jain, S C; Miller, J R
1976-04-01
A method, using an optimization scheme, has been developed for the interpretation of spectral albedo (or spectral reflectance) curves obtained from remotely sensed water color data. This method uses a two-flow model of the radiation flow and solves for the albedo. Optimization fitting of predicted to observed reflectance data is performed by a quadratic interpolation method for the variables chlorophyll concentration and scattering coefficient. The technique is applied to airborne water color data obtained from Kawartha Lakes, the Sargasso Sea, and the Nova Scotia coast. The modeled spectral albedo curves are compared to those obtained experimentally, and the computed optimum water parameters are compared to ground truth values. It is shown that the backscattered spectral signal contains information that can be interpreted to give quantitative estimates of the chlorophyll concentration and turbidity in the waters studied.
Applications of Space-Filling-Curves to Cartesian Methods for CFD
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Berger, Marsha J.; Murman, Scott M.
2003-01-01
The proposed paper presents a variety of novel uses of Space-Filling-Curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, most are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single O(N log N) SFC-based reordering to produce single-pass (O(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 512 CPUs even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 10% of ideal even with only around 50,000 cells in each subdomain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with O(max(M,N)) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for finite-difference-based gradient design methods.
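SFC-based reordering amounts to sorting cells by their position along the curve. A Morton (Z-order) key, the simplest SFC variant, is sketched below for 2D cell indices; the paper itself may use other curves, so treat this as a generic illustration.

```python
def part1by1(n: int) -> int:
    """Spread the low 16 bits of n so one zero bit separates each bit."""
    n &= 0xFFFF
    n = (n | (n << 8)) & 0x00FF00FF
    n = (n | (n << 4)) & 0x0F0F0F0F
    n = (n | (n << 2)) & 0x33333333
    n = (n | (n << 1)) & 0x55555555
    return n

def morton2d(i: int, j: int) -> int:
    """Interleave the bits of (i, j) to get the Z-order (Morton) key."""
    return part1by1(i) | (part1by1(j) << 1)

# Reorder a list of 2D cell indices along the space-filling curve
cells = [(3, 1), (0, 0), (2, 2), (1, 3)]
cells.sort(key=lambda ij: morton2d(*ij))
print(cells)   # neighbors in the list are now spatially close
```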
The interpretation of polycrystalline coherent inelastic neutron scattering from aluminium
Roach, Daniel L.; Ross, D. Keith; Gale, Julian D.; Taylor, Jon W.
2013-01-01
A new approach to the interpretation and analysis of coherent inelastic neutron scattering from polycrystals (poly-CINS) is presented. This article describes a simulation of the one-phonon coherent inelastic scattering from a lattice model of an arbitrary crystal system. The one-phonon component is characterized by sharp features, determined, for example, by boundaries of the (Q, ω) regions where one-phonon scattering is allowed. These features may be identified with the same features apparent in the measured total coherent inelastic cross section, the other components of which (multiphonon or multiple scattering) show no sharp features. The parameters of the model can then be relaxed to improve the fit between model and experiment. This method is of particular interest where no single crystals are available. To test the approach, the poly-CINS has been measured for polycrystalline aluminium using the MARI spectrometer (ISIS), because both lattice dynamical models and measured dispersion curves are available for this material. The models used include a simple Lennard-Jones model fitted to the elastic constants of this material plus a number of embedded atom method force fields. The agreement obtained suggests that the method demonstrated should be effective in developing models for other materials where single-crystal dispersion curves are not available. PMID:24282332
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Z; Reyhan, M; Huang, Q
Purpose: The calibration of Hounsfield units (HU) to relative proton stopping powers (RSP) is a crucial component in assuring the accurate delivery of proton therapy dose distributions to patients. The purpose of this work is to assess the uncertainty of CT calibration considering the impact of CT slice thickness, position of the plug within the phantom, and phantom size. Methods: The stoichiometric calibration method was employed to develop the CT calibration curve. A Gammex 467 tissue characterization phantom was scanned in a Tomotherapy Cheese phantom and a Gammex 451 phantom using a GE CT scanner. Each plug was individually inserted into the same position of the inner and outer rings of the phantoms, respectively. Slice thicknesses of 1.25 mm and 2.5 mm were used; all other parameters were kept the same. Results: HU of selected human tissues were calculated based on the fitted coefficients (Kph, Kcoh and KKN), and RSP were calculated according to the Bethe-Bloch equation. The calibration curve was obtained by fitting the cheese phantom data at 1.25 mm slice thickness. There is no significant difference in soft tissue if the slice thickness, phantom size, or plug position is changed. For bony structures, RSP increases by up to 1% if the phantom size and plug position change while the slice thickness is kept the same. However, if the slice thickness differs from the one used for the calibration curve, a 0.5%-3% deviation would be expected depending on the plug position; the inner position shows the largest deviation (about 2.5% on average). Conclusion: RSP shows a clinically insignificant deviation in the soft tissue region. Special attention may be required when using a slice thickness different from that of the calibration curve for bony structures. It is clinically practical to address the 3% deviation due to different slice thicknesses in the definition of clinical margins.
Methods for Performing Survival Curve Quality-of-Life Assessments.
Sumner, Walton; Ding, Eric; Fischer, Irene D; Hagen, Michael D
2014-08-01
Many medical decisions involve an implied choice between alternative survival curves, typically with differing quality of life. Common preference assessment methods neglect this structure, creating some risk of distortions. Survival curve quality-of-life assessments (SQLA) were developed from Gompertz survival curves fitting the general population's survival. An algorithm was developed to generate relative discount rate-utility (DRU) functions from a standard survival curve and health state and an equally attractive alternative curve and state. A least means squared distance algorithm was developed to describe how nearly 3 or more DRU functions intersect. These techniques were implemented in a program called X-Trade and tested. SQLA scenarios can portray realistic treatment choices. A side effect scenario portrays one prototypical choice, to extend life while experiencing some loss, such as an amputation. A risky treatment scenario portrays procedures with an initial mortality risk. A time trade scenario mimics conventional time tradeoffs. Each SQLA scenario yields DRU functions with distinctive shapes, such as sigmoid curves or vertical lines. One SQLA can imply a discount rate or utility if the other value is known and both values are temporally stable. Two SQLA exercises imply a unique discount rate and utility if the inferred DRU functions intersect. Three or more SQLA results can quantify uncertainty or inconsistency in discount rate and utility estimates. Pilot studies suggested that many subjects could learn to interpret survival curves and do SQLA. SQLA confuse some people. Compared with SQLA, standard gambles quantify very low utilities more easily, and time tradeoffs are simpler for high utilities. When discount rates approach zero, time tradeoffs are as informative and easier to do than SQLA. SQLA may complement conventional utility assessment methods. © The Author(s) 2014.
Focusing of light through turbid media by curve fitting optimization
NASA Astrophysics Data System (ADS)
Gong, Changmei; Wu, Tengfei; Liu, Jietao; Li, Huijuan; Shao, Xiaopeng; Zhang, Jianqi
2016-12-01
The construction of wavefront phase plays a critical role in focusing light through turbid media. We introduce the curve fitting algorithm (CFA) into the feedback control procedure for wavefront optimization. Unlike the existing continuous sequential algorithm (CSA), the CFA locates the optimal phase by fitting a curve to the measured signals. Simulation results show that, similar to the genetic algorithm (GA), the proposed CFA technique is far less susceptible to the experimental noise than the CSA. Furthermore, only three measurements of feedback signals are enough for CFA to fit the optimal phase while obtaining a higher focal intensity than the CSA and the GA, dramatically shortening the optimization time by a factor of 3 compared with the CSA and the GA. The proposed CFA approach can be applied to enhance the focus intensity and boost the focusing speed in the fields of biological imaging, particle trapping, laser therapy, and so on, and might help to focus light through dynamic turbid media.
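The abstract does not spell out the fitted curve, but in wavefront-shaping feedback the response of one phase segment is conventionally modeled as I(θ) = A + B·cos(θ − φ), which exactly three phase samples determine. A sketch under that assumption, using the standard three-step phase-shifting formula:

```python
import math

def optimal_phase(i1, i2, i3):
    """Fit I(theta) = A + B*cos(theta - phi) from intensity samples at
    theta = 0, 2*pi/3, 4*pi/3 and return the peak phase phi."""
    return math.atan2(math.sqrt(3.0) * (i2 - i3), 2.0 * i1 - i2 - i3)

# Example: three feedback samples for one SLM segment (illustrative)
phi = optimal_phase(0.9, 0.35, 0.15)
print(f"set segment phase to {phi:.3f} rad")
```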
Li, Wen; Zhao, Li-Zhong; Ma, Dong-Wang; Wang, De-Zheng; Shi, Lei; Wang, Hong-Lei; Dong, Mo; Zhang, Shu-Yi; Cao, Lei; Zhang, Wei-Hua; Zhang, Xi-Peng; Zhang, Qing-Huai; Yu, Lin; Qin, Hai; Wang, Xi-Mo; Chen, Sam Li-Sheng
2018-05-01
We aimed to predict colorectal cancer (CRC) based on the demographic features and clinical correlates of personal symptoms and signs from Tianjin community-based CRC screening data. A total of 891,199 residents who were aged 60 to 74 and were screened in 2012 were enrolled. The Lasso logistic regression model was used to identify the predictors for CRC. Predictive validity was assessed by the receiver operating characteristic (ROC) curve. A bootstrapping method was also performed to validate this prediction model. CRC was best predicted by a model that included age, sex, education level, occupation, diarrhea, constipation, colon mucosa and bleeding, gallbladder disease, a stressful life event, family history of CRC, and a positive fecal immunochemical test (FIT). The area under the curve (AUC) for the questionnaire with a FIT was 84% (95% CI: 82%-86%), followed by 76% (95% CI: 74%-79%) for a FIT alone, and 73% (95% CI: 71%-76%) for the questionnaire alone. With 500 bootstrap replications, the estimated optimism (<0.005) shows good discrimination in validation of the prediction model. A risk prediction model for CRC based on a series of symptoms and signs related to enteric diseases, in combination with a FIT, was developed from the first round of screening. The results of the current study are useful for increasing the awareness of high-risk subjects and for individual-risk-guided invitations or strategies to achieve mass screening for CRC.
Andrzejewska, Anna; Kaczmarski, Krzysztof; Guiochon, Georges
2009-02-13
The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N=500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
ERIC Educational Resources Information Center
Ferrer, Emilio; Hamagami, Fumiaki; McArdle, John J.
2004-01-01
This article offers different examples of how to fit latent growth curve (LGC) models to longitudinal data using a variety of different software programs (i.e., LISREL, Mx, Mplus, AMOS, SAS). The article shows how the same model can be fitted using both structural equation modeling and multilevel software, with nearly identical results, even in…
NASA Astrophysics Data System (ADS)
Gentile, G.; Famaey, B.; de Blok, W. J. G.
2011-03-01
We present an analysis of 12 high-resolution galactic rotation curves from The HI Nearby Galaxy Survey (THINGS) in the context of modified Newtonian dynamics (MOND). These rotation curves were selected to be the most reliable for mass modelling, and they are the highest quality rotation curves currently available for a sample of galaxies spanning a wide range of luminosities. We fit the rotation curves with the "simple" and "standard" interpolating functions of MOND, and we find that the "simple" function yields better results. We also redetermine the value of a0, and find a median value very close to the one determined in previous studies, a0 = (1.22 ± 0.33) × 10^-8 cm s^-2. Leaving the distance as a free parameter within the uncertainty of its best independently determined value leads to excellent quality fits for 75% of the sample. Among the three exceptions, two are also known to give relatively poor fits in Newtonian dynamics plus dark matter. The remaining case (NGC 3198) presents some tension between the observations and the MOND fit, which might, however, be explained by the presence of non-circular motions, by a small distance, or by a value of a0 at the lower end of our best-fit interval, 0.9 × 10^-8 cm s^-2. The best-fit stellar M/L ratios are generally in remarkable agreement with the predictions of stellar population synthesis models. We also show that the narrow range of gravitational accelerations found to be generated by dark matter in galaxies is consistent with the narrow range of additional gravity predicted by MOND.
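With the "simple" interpolating function μ(x) = x/(1+x), the MOND relation g·μ(g/a0) = g_N inverts in closed form, which is the core of each rotation-curve fit. A sketch with an illustrative baryonic acceleration and radius (only a0 is taken from the abstract):

```python
import numpy as np

A0 = 1.22e-8                      # cm s^-2, median value from the paper

def mond_accel(g_newton, a0=A0):
    """Invert g * mu(g/a0) = g_N for the 'simple' function
    mu(x) = x / (1 + x); positive root of g^2 - g_N*g - g_N*a0 = 0."""
    return 0.5 * (g_newton + np.sqrt(g_newton**2 + 4.0 * g_newton * a0))

# Circular velocity from a baryonic (Newtonian) acceleration at radius r
r = 3.1e22                        # cm (about 10 kpc), illustrative
g_n = 2.0e-9                      # cm s^-2, illustrative baryonic value
v = np.sqrt(mond_accel(g_n) * r)  # cm/s
print(v / 1e5, "km/s")
```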
Characterization of viscoelastic response and damping of composite materials used in flywheel rotors
NASA Astrophysics Data System (ADS)
Chen, Jianmin
The long-term goal of spacecraft flywheel systems with higher energy density at the system level requires new and innovative composite material concepts. Multi-Direction Composite (MDC) offers significant advantages over traditional filament-wound and multi-ring press-fit filament-wound wheels in providing higher energy density (i.e., less mass), better crack resistance, and enhanced safety. However, there is a lack of systematic characterization of the dynamic properties of MDC composite materials. In order to improve flywheel material reliability, durability and lifetime, it is very important to evaluate the time-dependent aging effects and damping properties of the MDC material, which are significant dynamic parameters for vibration and sound control, fatigue endurance, and impact resistance. The physical aging effects are quantified based on a set of creep curves measured at different aging times or different aging temperatures. A one-parameter (tau) curve fit was proposed to represent the relationship of aging time and aging temperature between different master curves. The long-term mechanical behavior was predicted from the obtained master curves. The time and temperature shift factors of the matrix were obtained from creep curves and the aging time shift rates were calculated. The aging effects on the composite are obtained from experiments and compared with prediction. The quasi-static mechanical behavior of the MDC composite was analyzed. The correspondence principle was used to relate quasi-static elastic properties of composite materials to time-dependent properties of its constituent materials (i.e., fiber and matrix). A Prony series combined with a multi-data fitting method was applied to invert the Laplace transform and to calculate the time-dependent stiffness matrix effectively. Accelerated time-dependent deformation of two flywheel rim designs was studied for a period equivalent to 31 years and compared with a hoop-reinforcement-only composite. Damping of pure resin and T700/epoxy composite lamina and laminate in longitudinal and transverse directions was investigated experimentally and analytically. The effect of aging on damping was also studied by placing samples at 60°C in an oven for extended periods. Damping master curves versus frequency were constructed from individual curves at different temperatures based on the Arrhenius equation. The damping response of the composite lamina was used to predict the response of laminate composites. Analytical results give numerical values close to experimental results for damping of cantilever-beam laminated composite samples.
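With the relaxation times fixed on a logarithmic ladder, fitting a Prony series is a linear least-squares problem, which is the usual trick behind the multi-data fitting mentioned above. A sketch on synthetic relaxation data (units and values illustrative):

```python
import numpy as np

# Synthetic relaxation modulus data (illustrative: GPa versus seconds)
t = np.logspace(-1, 4, 60)
E_obs = 1.0 + 2.0 * np.exp(-t / 10.0) + 1.5 * np.exp(-t / 1000.0)

# Fix a ladder of relaxation times; the Prony coefficients are then
# linear unknowns: E(t) = E_inf + sum_i E_i * exp(-t / tau_i)
taus = np.logspace(0, 4, 5)
A = np.column_stack([np.ones_like(t)] + [np.exp(-t / tau) for tau in taus])
coef, *_ = np.linalg.lstsq(A, E_obs, rcond=None)
print("E_inf =", coef[0], "Prony terms =", coef[1:])
```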
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richmond, Neil, E-mail: neil.richmond@stees.nhs.uk; Brackenridge, Robert
2014-04-01
Tissue-phantom ratios (TPRs) are a common dosimetric quantity used to describe the change in dose with depth in tissue. These can be challenging and time consuming to measure. The conversion of percentage depth dose (PDD) data using standard formulae is widely employed as an alternative method of generating TPR. However, the applicability of these formulae for small fields has been questioned in the literature. Functional representation has also been proposed for small-field TPR production. This article compares measured TPR data for small 6 MV photon fields against that generated by conversion of PDD using standard formulae to assess the efficacy of the conversion data. By functionally fitting the measured TPR data for square fields greater than 4 cm in length, the TPR curves for smaller fields are generated and compared with measurements. TPRs and PDDs were measured in a water tank for a range of square field sizes. The PDDs were converted to TPRs using standard formulae. TPRs for fields of 4 × 4 cm² and larger were used to create functional fits. The parameterization coefficients were used to construct extrapolated TPR curves for 1 × 1 cm², 2 × 2 cm², and 3 × 3 cm² fields. The TPR data generated using standard formulae were in excellent agreement with direct TPR measurements. The TPR data for 1 × 1 cm², 2 × 2 cm², and 3 × 3 cm² fields created by extrapolation of the larger field functional fits gave inaccurate initial results. The corresponding mean differences for the 3 fields were 4.0%, 2.0%, and 0.9%. Generation of TPR data using a standard PDD-conversion methodology has been shown to give good agreement with our directly measured data for small fields. However, extrapolation of TPR data using the functional fit to fields of 4 × 4 cm² or larger resulted in generation of TPR curves that did not compare well with the measured data.
Li, Dong-Sheng; Xu, Hui-Mian; Han, Chun-Qi; Li, Ya-Ming
2010-01-01
AIM: To determine the effect of three digestive tract reconstruction procedures on pouch function, after radical surgery undertaken because of gastric cancer, as assessed by radionuclide dynamic imaging. METHODS: As a measure of reservoir function, the emptying time of the gastric substitute was evaluated using a solid test meal labeled with technetium-99m (99mTc). Immediately after the meal, the patient was placed in front of a γ camera in a supine position and the radioactivity was measured over the whole abdomen every minute. A frame image was obtained. The emptying sequences were recorded by the microprocessor and then stored on a computer disk. Using a computer processing system, the actual half-emptying curve and the fitting curve of food containing isotope in the detected region were depicted, and the actual half-emptying curves of the three reconstruction procedures were directly compared. RESULTS: Of the three reconstruction procedures, the half-emptying time of food containing isotope in the Dual Braun type esophagojejunal anastomosis procedure (51.86 ± 6.43 min) was far closer to normal, significantly better than that of the proximal gastrectomy orthotopic reconstruction (30.07 ± 15.77 min, P = 0.002) and P type esophagojejunal anastomosis (27.88 ± 6.07 min, P = 0.001) methods. The actual half-emptying curve and fitting curve for the Dual Braun type esophagojejunal anastomosis were fairly similar, while those of the proximal gastrectomy orthotopic reconstruction and P type esophagojejunal anastomosis were obviously separated, which indicated poor food conservation in the reconstructed pouches. CONCLUSION: Dual Braun type esophagojejunal anastomosis is the most useful of the three procedures for improving food accommodation in patients with a pouch and can retard evacuation of solid food from the reconstructed pouch. PMID:20238408
Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.
Kervrann, C; Legland, D; Pardini, L
2004-06-01
Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method. Namely, layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section and a new compensated section series is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
Simplified estimation of age-specific reference intervals for skewed data.
Wright, E M; Royston, P
1997-12-30
Age-specific reference intervals are commonly used in medical screening and clinical practice, where interest lies in the detection of extreme values. Many different statistical approaches have been published on this topic. The advantages of a parametric method are that they necessarily produce smooth centile curves, the entire density is estimated and an explicit formula is available for the centiles. The method proposed here is a simplified version of a recent approach proposed by Royston and Wright. Basic transformations of the data and multiple regression techniques are combined to model the mean, standard deviation and skewness. Using these simple tools, which are implemented in almost all statistical computer packages, age-specific reference intervals may be obtained. The scope of the method is illustrated by fitting models to several real data sets and assessing each model using goodness-of-fit techniques.
Quasar Accretion Disk Sizes With Continuum Reverberation Mapping From the Dark Energy Survey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mudd, D.; et al.
We present accretion disk size measurements for 15 luminous quasars at 0.7 ≤ z ≤ 1.9 derived from griz light curves from the Dark Energy Survey. We measure the disk sizes with continuum reverberation mapping using two methods, both of which are derived from the expectation that accretion disks have a radial temperature gradient and the continuum emission at a given radius is well-described by a single blackbody. In the first method we measure the relative lags between the multiband light curves, which provides the relative time lag between shorter and longer wavelength variations. The second method fits the model parameters for the canonical Shakura-Sunyaev thin disk directly rather than solving for the individual time lags between the light curves. Our measurements demonstrate good agreement with the sizes predicted by this model for accretion rates between 0.3-1 times the Eddington rate. These results are also in reasonable agreement with disk size measurements from gravitational microlensing studies of strongly lensed quasars, as well as other photometric reverberation mapping results.
ERIC Educational Resources Information Center
Chen, Chau-Kuang
2005-01-01
Logistic and Cox regression methods are practical tools used to model the relationships between certain student learning outcomes and their relevant explanatory variables. The logistic regression model fits an S-shaped curve into a binary outcome with data points of zero and one. The Cox regression model allows investigators to study the duration…
Li, Chuang; Cordovilla, Francisco; Jagdheesh, R.
2018-01-01
This paper presents a novel structural piezoresistive pressure sensor with a four-grooved membrane combined with a rood beam to measure low pressure. This investigation covers the design, optimization, fabrication, and measurement of the sensor. By analyzing the stress distribution and deflection of sensitive elements using the finite element method, a novel structure featuring a highly concentrated stress profile (HCSP) and a locally stiffened membrane (LSM) is built. Curve fittings of the mechanical stress and deflection based on FEM simulation results are performed to establish the relationship between mechanical performance and structure dimensions. A combination of FEM and the curve fitting method is carried out to determine the structural dimensions. The optimized sensor chip is fabricated on an SOI wafer by traditional MEMS bulk-micromachining and anodic bonding technology. When the applied pressure is 1 psi, the sensor achieves a sensitivity of 30.9 mV/V/psi, a pressure nonlinearity of 0.21% FSS and an accuracy of 0.30%, and thereby the contradiction between sensitivity and linearity is alleviated. In terms of size, accuracy and high temperature characteristics, the proposed sensor is a proper choice for measuring pressures of less than 1 psi. PMID:29393916
Mapping conduction velocity of early embryonic hearts with a robust fitting algorithm
Gu, Shi; Wang, Yves T; Ma, Pei; Werdich, Andreas A; Rollins, Andrew M; Jenkins, Michael W
2015-01-01
Cardiac conduction maturation is an important and integral component of heart development. Optical mapping with voltage-sensitive dyes allows sensitive measurements of electrophysiological signals over the entire heart. However, accurate measurements of conduction velocity during early cardiac development is typically hindered by low signal-to-noise ratio (SNR) measurements of action potentials. Here, we present a novel image processing approach based on least squares optimizations, which enables high-resolution, low-noise conduction velocity mapping of smaller tubular hearts. First, the action potential trace measured at each pixel is fit to a curve consisting of two cumulative normal distribution functions. Then, the activation time at each pixel is determined based on the fit, and the spatial gradient of activation time is determined with a two-dimensional (2D) linear fit over a square-shaped window. The size of the window is adaptively enlarged until the gradients can be determined within a preset precision. Finally, the conduction velocity is calculated based on the activation time gradient, and further corrected for three-dimensional (3D) geometry that can be obtained by optical coherence tomography (OCT). We validated the approach using published activation potential traces based on computer simulations. We further validated the method by adding artificially generated noise to the signal to simulate various SNR conditions using a curved simulated image (digital phantom) that resembles a tubular heart. This method proved to be robust, even at very low SNR conditions (SNR = 2-5). We also established an empirical equation to estimate the maximum conduction velocity that can be accurately measured under different conditions (e.g. sampling rate, SNR, and pixel size). Finally, we demonstrated high-resolution conduction velocity maps of the quail embryonic heart at a looping stage of development. PMID:26114034
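The fit of two cumulative normal distribution functions can be realized, for example, as a rising CDF for the upstroke minus a second CDF for repolarization; the exact parameterization in the paper may differ, so treat this as a sketch of the per-pixel fit on a synthetic low-SNR trace with invented values.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def ap_model(t, base, amp, mu1, s1, mu2, s2):
    """Action potential as upstroke CDF minus repolarization CDF."""
    return base + amp * (norm.cdf(t, mu1, s1) - norm.cdf(t, mu2, s2))

# Synthetic low-SNR pixel trace (times in ms; values illustrative)
t = np.linspace(0, 400, 400)
clean = ap_model(t, 0.0, 1.0, 100.0, 4.0, 280.0, 25.0)
noisy = clean + np.random.normal(0, 0.2, t.size)   # SNR around 5

p0 = [0.0, 1.0, 120.0, 10.0, 250.0, 30.0]
p, _ = curve_fit(ap_model, t, noisy, p0=p0)
print(f"activation time ~ {p[2]:.1f} ms")          # upstroke midpoint
```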
Method and Apparatus for Evaluating Multilayer Objects for Imperfections
NASA Technical Reports Server (NTRS)
Heyman, Joseph S. (Inventor); Abedin, Nurul (Inventor); Sun, Kuen J. (Inventor)
1999-01-01
A multilayer object having multiple layers arranged in a stacking direction is evaluated for imperfections such as voids, delaminations and microcracks. First, an acoustic wave is transmitted into the object in the stacking direction via an appropriate transducer/waveguide combination. The wave propagates through the multilayer object and is received by another transducer/waveguide combination preferably located on the same surface as the transmitting combination. The received acoustic wave is correlated with the presence or absence of imperfections by, e.g., generating pulse echo signals indicative of the received acoustic wave, wherein the successive signals form distinct groups over time. The respective peak amplitudes of each group are sampled and curve fit to an exponential curve, wherein a substantial fit of approximately 80-90% indicates an absence of imperfections and a significant deviation indicates the presence of imperfections. Alternatively, the time interval between distinct groups can be measured, wherein equal intervals indicate the absence of imperfections and unequal intervals indicate the presence of imperfections.
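A sketch of the described check: fit the successive echo-group peak amplitudes to an exponential and flag the section when the fit quality drops below the roughly 80% threshold quoted above. The amplitudes and decay constant are synthetic placeholders.

```python
import numpy as np

# Peak amplitude of each successive echo group (synthetic example)
n = np.arange(1, 7)                      # echo group index
peaks = 0.9 * np.exp(-0.45 * n)          # defect-free exponential decay
peaks += np.random.normal(0, 0.01, n.size)

# Fit A * exp(k * n) by linear regression on log(amplitude)
k, logA = np.polyfit(n, np.log(peaks), 1)
pred = np.exp(logA + k * n)

# Coefficient of determination against the exponential model
ss_res = np.sum((peaks - pred) ** 2)
ss_tot = np.sum((peaks - peaks.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print("flag imperfection" if r2 < 0.8 else "no imperfection detected")
```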
Method and apparatus for evaluating multilayer objects for imperfections
NASA Technical Reports Server (NTRS)
Heyman, Joseph S. (Inventor); Abedin, Nurul (Inventor); Sun, Kuen J. (Inventor)
1997-01-01
A multilayer object having multiple layers arranged in a stacking direction is evaluated for imperfections such as voids, delaminations and microcracks. First, an acoustic wave is transmitted into the object in the stacking direction via an appropriate transducer/waveguide combination. The wave propagates through the multilayer object and is received by another transducer/waveguide combination preferably located on the same surface as the transmitting combination. The received acoustic wave is correlated with the presence or absence of imperfections by, e.g., generating pulse echo signals indicative of the received acoustic wave, wherein the successive signals form distinct groups over time. The respective peak amplitudes of each group are sampled and curve fit to an exponential curve, wherein a substantial fit of approximately 80-90% indicates an absence of imperfections and a significant deviation indicates the presence of imperfections. Alternatively, the time interval between distinct groups can be measured, wherein equal intervals indicate the absence of imperfections and unequal intervals indicate the presence of imperfections.
Non-Boltzmann Modeling for Air Shock-Layer Radiation at Lunar-Return Conditions
NASA Technical Reports Server (NTRS)
Johnston, Christopher O.; Hollis, Brian R.; Sutton, Kenneth
2008-01-01
This paper investigates the non-Boltzmann modeling of the radiating atomic and molecular electronic states present in lunar-return shock-layers. The Master Equation is derived for a general atom or molecule while accounting for a variety of excitation and de-excitation mechanisms. A new set of electronic-impact excitation rates is compiled for N, O, and N2+, which are the main radiating species for most lunar-return shock-layers. Based on these new rates, a novel approach of curve-fitting the non-Boltzmann populations of the radiating atomic and molecular states is developed. This new approach provides a simple and accurate method for calculating the atomic and molecular non-Boltzmann populations while avoiding the matrix inversion procedure required for the detailed solution of the Master Equation. The radiative flux values predicted by the present detailed non-Boltzmann model and the approximate curve-fitting approach are shown to agree within 5% for the Fire 1634 s case.
NASA Technical Reports Server (NTRS)
Vukobratovich, Daniel; Richard, Ralph M.; Valente, Tina M.; Cho, Myung K.
1990-01-01
Scaling laws for light-weight optical systems are examined. A cubic relationship between mirror diameter and weight has been suggested and used by many designers of optical systems as the best description for all light-weight mirrors. A survey of existing light-weight systems in the open literature was made to clarify this issue. Fifty existing optical systems were surveyed, covering all varieties of light-weight mirrors including glass and beryllium structured mirrors, contoured mirrors, and very thin solid mirrors. These mirrors were then categorized and the weight-to-diameter ratio was plotted to find a best-fitting curve for each case. A curve fitting program tests nineteen different equations and ranks the goodness of fit for each of these equations. The resulting relationship found for each light-weight mirror category helps to quantify light-weight optical systems and methods of fabrication and provides comparisons between mirror types.
An operational modal analysis method in frequency and spatial domain
NASA Astrophysics Data System (ADS)
Wang, Tong; Zhang, Lingmi; Tamura, Yukio
2005-12-01
A frequency and spatial domain decomposition method (FSDD) for operational modal analysis (OMA) is presented in this paper, which is an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. Finally, an enhanced power spectrum density (PSD) is proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. Moreover, a simulation case and an application case are used to validate this method.
Liquid ingress recognition in honeycomb structure by pulsed thermography
NASA Astrophysics Data System (ADS)
Chen, Dapeng; Zeng, Zhi; Tao, Ning; Zhang, Cunlin; Zhang, Zheng
2013-05-01
Pulsed thermography has been proven to be a fast and effective method to detect fluid ingress in aircraft honeycomb structures; however, water and hydraulic oil may have a similar appearance in the thermal image sequence. Identifying which kind of liquid has ingressed is meaningful for aircraft maintenance. In this study, honeycomb specimens with glass fiber and aluminum skins are injected with different liquids: water and oil. Pulsed thermography is adopted, and a recognition method is proposed that first obtains a reference curve by linearly fitting the beginning of the logarithmic cooling curve; an algorithm based on the thermal contrast between the liquid and the reference then recognizes the kind of fluid by calculating its thermal properties. The method is verified against theoretical results and finite element simulation.
NASA Astrophysics Data System (ADS)
Miyake, Susumu; Kasashima, Takashi; Yamazaki, Masato; Okimura, Yasuyuki; Nagata, Hajime; Hosaka, Hiroshi; Morita, Takeshi
2018-07-01
The high power properties of piezoelectric transducers were evaluated considering a complex nonlinear elastic constant. The piezoelectric LCR equivalent circuit with nonlinear circuit parameters was utilized to measure them. The deformed admittance curve of piezoelectric transducers was measured under a high stress and the complex nonlinear elastic constant was calculated by curve fitting. Transducers with various piezoelectric materials, Pb(Zr,Ti)O3, (K,Na)NbO3, and Ba(Zr,Ti)O3–(Ba,Ca)TiO3, were investigated by the proposed method. The measured complex nonlinear elastic constant strongly depends on the linear elastic and piezoelectric constants. This relationship indicates that piezoelectric high power properties can be controlled by modifying the linear elastic and piezoelectric constants.
Development of a program to fit data to a new logistic model for microbial growth.
Fujikawa, Hiroshi; Kano, Yoshihiro
2009-06-01
Recently we developed a mathematical model for microbial growth in food. The model successfully predicted microbial growth under various patterns of temperature. In this study, we developed a program to fit data to the model with a spreadsheet program, Microsoft Excel. Users can instantly get curves fitted to the model by inputting growth data and choosing the slope portion of a curve. The program can also estimate growth parameters, including the rate constant of growth and the lag period. This program would be a useful tool for analyzing growth data and further predicting microbial growth.
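A rough stand-in for the spreadsheet procedure is sketched below, assuming the standard logistic model rather than the authors' extended model; the growth data and starting values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, n_max, n0, r):
    # Standard logistic growth in log-count units (a stand-in for the authors' model).
    return n_max / (1.0 + (n_max / n0 - 1.0) * np.exp(-r * t))

t = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)      # time, h (assumed)
log_n = np.array([3.0, 3.2, 4.1, 5.6, 6.9, 7.5, 7.6])   # log CFU/g (assumed)

(n_max, n0, r), _ = curve_fit(logistic, t, log_n, p0=(8.0, 3.0, 0.8))
print("rate constant r = %.2f 1/h, maximum level = %.1f log CFU/g" % (r, n_max))
```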
A HYBRID MODE MODEL OF THE BLAZHKO EFFECT, SHOWN TO ACCURATELY FIT KEPLER DATA FOR RR Lyr
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryant, Paul H., E-mail: pbryant@ucsd.edu
2014-03-01
The waveform for Blazhko stars can be substantially different during the ascending and descending parts of the Blazhko cycle. A hybrid model, consisting of two component oscillators of the same frequency, is proposed as a means to fit the data over the entire cycle. One component exhibits a sawtooth-like velocity waveform while the other is nearly sinusoidal. One method of generating such a hybrid is presented: a nonlinear model is developed for the first overtone mode, which, if excited to large amplitude, is found to drop strongly in frequency and become highly non-sinusoidal. If the frequency drops sufficiently to become equal to the fundamental frequency, the two can become phase locked and form the desired hybrid. A relationship is assumed between the hybrid mode velocity and the observed light curve, which is approximated as a power series. An accurate fit of the hybrid model is made to actual Kepler data for RR Lyr. The sinusoidal component may tend to stabilize the period of the hybrid, which is found in real Blazhko data to be extremely stable. It is proposed that the variations in amplitude and phase might result from a nonlinear interaction with a third mode, possibly a nonradial mode at 3/2 the fundamental frequency. The hybrid model also applies to non-Blazhko RRab stars and provides an explanation for the light curve bump. A method to estimate the surface gravity is also proposed.
Zhu, Mingping; Chen, Aiqing
2017-01-01
This study aimed to compare within-subject blood pressure (BP) variabilities from different measurement techniques. Cuff pressures from three repeated BP measurements were obtained from 30 normotensive and 30 hypertensive subjects. Automatic BPs were determined from the pulses with normalised peak amplitude larger than a threshold (0.5 for SBP, 0.7 for DBP, and 1.0 for MAP). They were also determined from the cuff pressures associated with the same thresholds on a polynomial curve fitted to the oscillometric pulse peaks. Finally, the standard deviation (SD) of the three repeats and its coefficient of variability (CV) were compared between the two automatic techniques. For the normotensive group, polynomial curve fitting significantly reduced the SD of repeats from 3.6 to 2.5 mmHg for SBP and from 3.7 to 2.1 mmHg for MAP, and reduced the CV from 3.0% to 2.2% for SBP and from 4.3% to 2.4% for MAP (all P < 0.01). For the hypertensive group, the SD of repeats decreased from 6.5 to 5.5 mmHg for SBP and from 6.7 to 4.2 mmHg for MAP, and the CV decreased from 4.2% to 3.6% for SBP and from 5.8% to 3.8% for MAP (all P < 0.05). In conclusion, polynomial curve fitting of oscillometric pulses had the ability to reduce automatic BP measurement variability. PMID:28785580
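A minimal sketch of the polynomial-envelope technique follows, assuming a synthetic oscillometric envelope and an arbitrary polynomial degree; only the 0.5/0.7/1.0 thresholds come from the abstract.

```python
import numpy as np

cuff = np.linspace(160, 40, 25)               # deflating cuff pressure, mmHg
amps = np.exp(-((cuff - 95.0) / 30.0) ** 2)   # synthetic pulse-peak envelope
amps /= amps.max()                            # normalise so the peak equals 1.0

coeffs = np.polyfit(cuff, amps, deg=6)        # polynomial envelope fit
fine = np.linspace(cuff.min(), cuff.max(), 2000)
env = np.polyval(coeffs, fine)

map_est = fine[np.argmax(env)]                # threshold 1.0 -> envelope maximum
sbp_est = fine[fine > map_est][np.argmin(np.abs(env[fine > map_est] - 0.5))]
dbp_est = fine[fine < map_est][np.argmin(np.abs(env[fine < map_est] - 0.7))]
print("SBP ~ %.0f, MAP ~ %.0f, DBP ~ %.0f mmHg" % (sbp_est, map_est, dbp_est))
```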
A Multi-year Multi-passband CCD Photometric Study of the W UMa Binary EQ Tauri
NASA Astrophysics Data System (ADS)
Alton, K. B.
2009-12-01
A revised ephemeris and updated orbital period for EQ Tau have been determined from newly acquired (2007-2009) CCD-derived photometric data. A Roche-type model based on the Wilson-Devinney code produced simultaneous theoretical fits of light curve data in three passbands by invoking cold spots on the primary component. These new model fits, along with similar light curve data for EQ Tau collected during the previous six seasons (2000-2006), provided a rare opportunity to follow the seasonal appearance of star spots on a W UMa binary system over nine consecutive years. Fixed values for q, Ω1,2, T1, T2, and i based upon the mean of eleven separately determined model fits produced for this system are hereafter proposed for future light curve modeling of EQ Tau. With the exception of the 2001 season, all other light curves produced since then required a spotted solution to address the flux asymmetry exhibited by this binary system at Max I and Max II. At least one cold spot on the primary appears in seven out of twelve light curves for EQ Tau produced over the last nine years, whereas in six instances two cold spots on the primary star were invoked to improve the model fit. Solutions using a hot spot were less common and involved positioning a single spot on the primary constituent during the 2001-2002, 2002-2003, and 2005-2006 seasons.
Fitting by Orthonormal Polynomials of Silver Nanoparticles Spectroscopic Data
NASA Astrophysics Data System (ADS)
Bogdanova, Nina; Koleva, Mihaela
2018-02-01
Our original Orthonormal Polynomial Expansion Method (OPEM), in its one-dimensional version, is applied for the first time to describe silver nanoparticle (NP) spectroscopic data. The weights for the approximation include the experimental errors in the variables. In this way we construct an orthonormal polynomial expansion approximating the curve on a non-equidistant point grid. Corridors of the given data and criteria define the optimal behavior of the searched curve. The most important subinterval of the spectral data, in which the minimum (surface plasmon resonance absorption) is sought, is investigated. The study describes Ag nanoparticles produced by a laser approach in a ZnO medium, forming an AgNPs/ZnO nanocomposite heterostructure.
NASA Astrophysics Data System (ADS)
Starkey, David; Agn Storm Team
2015-01-01
Reverberation mapping is a proven method for obtaining black hole mass estimates and constraining the size of the BLR. We analyze multi-wavelength continuum light curves from the 7-month AGN STORM monitoring of NGC 5548 and use reverberation mapping to model the accretion disk time delays. The model fits the light curves at UV to IR wavelengths assuming reprocessing on a flat, steady-state blackbody accretion disk. We calculate the inclination-dependent transfer function and investigate to what extent our model can determine the disk inclination, the black hole MṀ product (mass times accretion rate), and the power-law index of the disc temperature-radius relation.
Free-form surface measuring method based on optical theodolite measuring system
NASA Astrophysics Data System (ADS)
Yu, Caili
2012-10-01
The measurement of single-point coordinates, lengths and large-dimension curved surfaces in industrial metrology can be achieved through forward intersection with a theodolite measuring system composed of several optical theodolites and one computer. This paper introduces the measuring principle of the flexible large-dimension three-coordinate measuring system made up of multiple (more than two) optical theodolites, together with the composition and functions of the system. For the measurement of curved surfaces in particular, 3D measured data of a spatial free-form surface are acquired through the theodolite measuring system, and the CAD model is formed through surface fitting to directly generate CAM processing data.
Investigation of activation cross-sections of deuteron induced reactions on vanadium up to 40 MeV
NASA Astrophysics Data System (ADS)
Tárkányi, F.; Ditrói, F.; Takács, S.; Hermanne, A.; Baba, M.; Ignatyuk, A. V.
2011-08-01
Experimental excitation functions for deuteron induced reactions up to 40 MeV on natural vanadium were measured with the activation method using a stacked foil irradiation technique. From high resolution gamma spectrometry cross-section data for the production of 51Cr, 48V, 48,47,46Sc and 47Ca were determined. Comparisons with the earlier published data are presented and results for values predicted by different theoretical codes are included. Thick target yields were calculated from a fit to our experimental excitation curves and compared with the earlier experimental data. Depth distribution curves used for thin layer activation (TLA) are also presented.
Analysis of the multigroup model for muon tomography based threat detection
NASA Astrophysics Data System (ADS)
Perry, J. O.; Bacon, J. D.; Borozdin, K. N.; Fabritius, J. M.; Morris, C. L.
2014-02-01
We compare different algorithms for detecting a 5 cm tungsten cube using cosmic ray muon technology. In each case, a simple tomographic technique was used for position reconstruction, but the scattering angles were used differently to obtain a density signal. Receiver operating characteristic curves were used to compare images made using average angle squared, median angle squared, average of the squared angle, and a multi-energy group fit of the angular distributions for scenes with and without a 5 cm tungsten cube. The receiver operating characteristic curves show that the multi-energy group treatment of the scattering angle distributions is the superior method for image reconstruction.
Inversion method applied to the rotation curves of galaxies
NASA Astrophysics Data System (ADS)
Márquez-Caicedo, L. A.; Lora-Clavijo, F. D.; Sanabria-Gómez, J. D.
2017-07-01
We used simulated annealing, Monte Carlo and genetic algorithm methods to match numerical data of density and velocity profiles in some low surface brightness galaxies with theoretical models: the Boehmer-Harko, Navarro-Frenk-White and Pseudo Isothermal profiles for galaxies with dark matter halos. We found that the Navarro-Frenk-White model does not fit at all, in contrast with the other two models, which fit very well. Inversion methods have been widely used in various branches of science, including astrophysics (Charbonneau 1995, ApJS, 101, 309). In this work we used three different parametric inversion methods (Monte Carlo, genetic algorithm and simulated annealing) to determine the best fit of the observed density and velocity profiles of a set of low surface brightness galaxies (de Blok et al. 2001, AJ, 122, 2396) with three models of galaxies containing dark matter. The parameters adjusted by the inversion methods were the central density and a characteristic distance in the Boehmer-Harko (BH; Boehmer & Harko 2007, JCAP, 6, 25), Navarro-Frenk-White (NFW; Navarro et al. 1997, ApJ, 490, 493) and Pseudo Isothermal Profile (PI; Robles & Matos 2012, MNRAS, 422, 282) models. The results showed that the BH and PI dark matter profiles fit both the density and the velocity profiles very well; in contrast, the NFW model did not adjust well to the profiles in any analyzed galaxy.
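One of the three inversion methods named above (simulated annealing) is sketched below with SciPy's dual_annealing on the Pseudo Isothermal rotation curve, v(r)^2 = 4*pi*G*rho0*rc^2*[1 - (rc/r)*arctan(r/rc)]; the observed points, bounds, and units are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.optimize import dual_annealing

G = 4.30e-6  # gravitational constant, kpc (km/s)^2 / Msun

def v_iso(r, rho0, rc):
    # Circular velocity of the Pseudo Isothermal dark matter profile.
    return np.sqrt(4 * np.pi * G * rho0 * rc**2 * (1 - (rc / r) * np.arctan(r / rc)))

r_obs = np.array([1, 2, 4, 6, 8, 10], dtype=float)        # radius, kpc (assumed)
v_obs = np.array([40, 62, 80, 88, 91, 93], dtype=float)   # velocity, km/s (assumed)

def chi2(p):
    return np.sum((v_obs - v_iso(r_obs, *p)) ** 2)

result = dual_annealing(chi2, bounds=[(1e5, 1e9), (0.1, 10.0)])
print("rho0 = %.2e Msun/kpc^3, rc = %.2f kpc" % tuple(result.x))
```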
Effective Clipart Image Vectorization through Direct Optimization of Bezigons.
Yang, Ming; Chao, Hongyang; Zhang, Chi; Guo, Jun; Yuan, Lu; Sun, Jian
2016-02-01
Bezigons, i.e., closed paths composed of Bézier curves, have been widely employed to describe shapes in image vectorization results. However, most existing vectorization techniques infer the bezigons by simply approximating an intermediate vector representation (such as polygons). Consequently, the resultant bezigons are sometimes imperfect due to accumulated errors, fitting ambiguities, and a lack of curve priors, especially for low-resolution images. In this paper, we describe a novel method for vectorizing clipart images. In contrast to previous methods, we directly optimize the bezigons rather than using other intermediate representations; therefore, the resultant bezigons are not only of higher fidelity compared with the original raster image but also more reasonable, as if they had been traced by a proficient expert. To enable such optimization, we have overcome several challenges and have devised a differentiable data energy as well as several curve-based prior terms. To improve the efficiency of the optimization, we also take advantage of the local control property of bezigons and adopt an overlapped piecewise optimization strategy. The experimental results show that our method outperforms both the current state-of-the-art method and commonly used commercial software in terms of bezigon quality.
Fitting Photometry of Blended Microlensing Events
NASA Astrophysics Data System (ADS)
Thomas, Christian L.; Griest, Kim
2006-03-01
We reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g., Woźniak & Paczyński) that this is a difficult proposition because of the degeneracy of the blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (peak region and wings) of high-magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction, and we study the importance of non-Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate microlensing optical depth.
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
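For concreteness, the sketch below fits one of the seven models above, Wood's curve y = a*t^b*exp(-c*t), to test-day records and scores it with AIC; the records and starting values are invented, and plain least squares stands in for the paper's SAS mixed-model fit (PROC NLMIXED).

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    # Wood's incomplete gamma lactation curve.
    return a * t**b * np.exp(-c * t)

dim = np.array([15, 45, 75, 105, 135, 165, 195, 225], dtype=float)  # days in milk
fpr = np.array([1.35, 1.18, 1.10, 1.08, 1.09, 1.12, 1.16, 1.21])    # fat:protein ratio (assumed)

popt, _ = curve_fit(wood, dim, fpr, p0=(1.3, -0.05, -0.001))
rss = np.sum((fpr - wood(dim, *popt)) ** 2)
n, k = len(fpr), len(popt)
aic = n * np.log(rss / n) + 2 * k    # Gaussian-error AIC up to an additive constant
print("a=%.3f b=%.3f c=%.5f AIC=%.1f" % (*popt, aic))
```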
Characterizing the UV-to-NIR shape of the dust attenuation curve of IR luminous galaxies up to z ˜ 2
NASA Astrophysics Data System (ADS)
Lo Faro, B.; Buat, V.; Roehlly, Y.; Alvarez-Marquez, J.; Burgarella, D.; Silva, L.; Efstathiou, A.
2017-12-01
In this work, we investigate the far-ultraviolet (UV) to near-infrared (NIR) shape of the dust attenuation curve of a sample of IR-selected dust obscured (ultra)luminous IR galaxies at z ∼ 2. The spectral energy distributions (SEDs) are fitted with Code Investigating GALaxy Emission, a physically motivated spectral-synthesis model based on energy balance. Its flexibility allows us to test a wide range of different analytical prescriptions for the dust attenuation curve, including the well-known Calzetti and Charlot & Fall curves, and modified versions of them. The attenuation curves computed under the assumption of our reference double power-law model are in very good agreement with those derived, in previous works, with radiative transfer (RT) SED fitting. We investigate the position of our galaxies in the IRX-β diagram and find this to be consistent with greyer slopes, on average, in the UV. We also find evidence for a flattening of the attenuation curve in the NIR with respect to more classical Calzetti-like recipes. This larger NIR attenuation yields larger derived stellar masses from SED fitting, by a median factor of ∼1.4 and up to a factor ∼10 for the most extreme cases. The star formation rate appears instead to be more dependent on the total amount of attenuation in the galaxy. Our analysis highlights the need for a flexible attenuation curve when reproducing the physical properties of a large variety of objects.
NASA Astrophysics Data System (ADS)
Yu, Zhijing; Ma, Kai; Wang, Zhijun; Wu, Jun; Wang, Tao; Zhuge, Jingchang
2018-03-01
A blade is one of the most important components of an aircraft engine. Due to the high manufacturing costs, it is indispensable to develop methods for repairing damaged blades. In order to obtain a surface model of the blades, this paper proposes a modeling method using speckle patterns based on a virtual stereo vision system. Firstly, blades are sprayed evenly to create random speckle patterns, and point clouds from the blade surfaces are calculated from the speckle patterns with the virtual stereo vision system. Secondly, boundary points are obtained using step lengths varied according to curvature and are fitted with a cubic B-spline curve to obtain the blade surface envelope. Finally, the surface model of the blades is established from the envelope curves and the point clouds. Experimental results show that the surface model of aircraft engine blades is fair and accurate.
Photographic photometry with Iris diaphragm photometers
NASA Technical Reports Server (NTRS)
Schaefer, B. E.
1981-01-01
A general method is presented for solving problems encountered in the analysis of Iris diaphragm photometer (IDP) data. The method is used to derive the general shape of the calibration curve, allowing both a more accurate fit to the IDP data for comparison stars and extrapolation to magnitude ranges for which no comparison stars are measured. The profile of the incident starlight and the characteristic curve of the plate are both assumed and then used to derive the profile of the star image. An IDP reading is then determined for each star image. A procedure for correcting the effects of a nonconstant background fog level on the plate is also demonstrated. Additional applications of the method are made in the appendix to determine the relation between the radius of a photographic star image and the star's magnitude, and to predict the IDP reading of the 'point of optimum density'.
Tensor-guided fitting of subduction slab depths
Bazargani, Farhad; Hayes, Gavin P.
2013-01-01
Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data‐fitting approach to address the problem. Earthquakes and active‐source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth’s curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.
Wind Power Error Estimation in Resource Assessments
Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel
2015-01-01
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the power production leveled cost or the investment time return. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444
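The sketch below illustrates the two ingredients described above: a power curve represented by Lagrange interpolation and a first-order propagation of the wind-speed error into power. The five-point power curve is invented (the study used 28 manufacturer curves).

```python
import numpy as np
from scipy.interpolate import lagrange

v_pts = np.array([4.0, 7.0, 10.0, 13.0, 16.0])       # wind speed, m/s (assumed curve)
p_pts = np.array([50., 400., 1100., 1900., 2000.])   # power, kW

power = lagrange(v_pts, p_pts)                        # polynomial power curve

v, dv = 9.0, 0.9                                      # measured speed with a 10% error
p = power(v)
dp = abs(power(v + dv) - power(v - dv)) / 2.0         # first-order error propagation
print("P = %.0f kW +/- %.0f kW (%.1f%%)" % (p, dp, 100 * dp / p))
```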
Automatic focusing system of BSST in Antarctic
NASA Astrophysics Data System (ADS)
Tang, Peng-Yi; Liu, Jia-Jing; Zhang, Guang-yu; Wang, Jian
2015-10-01
Automatic focusing (AF) technology plays an important role in modern astronomical telescopes. Based on the focusing requirements of BSST (Bright Star Survey Telescope) in Antarctica, an AF system was set up. In this design, functions in OpenCV are used to find stars, and a selectable area, HFD, or FWHM algorithm is used to compute the focus metric. A curve-fitting method is used to find the focus position as the camera moves. This design is suitable for an unattended small telescope.
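A minimal sketch of the focus search follows, assuming a parabolic fit of FWHM against focuser position as the curve-fitting step; the positions and FWHM values are invented.

```python
import numpy as np

positions = np.array([100, 120, 140, 160, 180], dtype=float)  # focuser steps (assumed)
fwhm = np.array([6.1, 4.2, 3.0, 3.9, 5.8])                    # star FWHM, pixels (assumed)

a, b, c = np.polyfit(positions, fwhm, deg=2)   # parabola through the V-shaped metric
best_focus = -b / (2 * a)                      # vertex of the fitted parabola
print("move focuser to %.0f" % best_focus)
```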
Improved Cluster Method Applied to the InSAR data of the 2007 Piton de la Fournaise eruption
NASA Astrophysics Data System (ADS)
Cayol, V.; Augier, A.; Froger, J. L.; Menassian, S.
2016-12-01
Interpretation of surface displacement induced by reservoirs, whether magmatic, hydrothermal or gaseous, can be done at reduced numerical cost and with little a priori knowledge using cluster methods, where reservoirs are represented by point sources embedded in an elastic half-space. Most of the time, the solution representing the best trade-off between the data fit and the model smoothness (L-curve criterion) is chosen. This study relies on synthetic tests to improve cluster methods in several ways. Firstly, to solve problems involving steep topographies, we construct unit sources numerically. Secondly, we show that the L-curve criterion leads to several plausible solutions where the most realistic are not necessarily the best fitting. We determine that the cross-validation method, with data geographically grouped, is a more reliable way to determine the solution. Thirdly, we propose a new method, based on ranking sources according to their contribution and minimizing the Akaike information criterion, to retrieve reservoirs' geometry more accurately and to better reflect the information contained in the data. We show that the solution is robust in the presence of correlated noise and that the reservoir complexity that can be retrieved decreases with increasing noise. We also show that it is inappropriate to use cluster methods for pressurized fractures. Finally, the method is applied to the summit deflation recorded by InSAR after the caldera collapse which occurred at Piton de la Fournaise in April 2007. Comparison with other data indicates that the deflation is probably related to poro-elastic compaction and fluid flow subsequent to the crater collapse.
Point and path performance of light aircraft: A review and analysis
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Summey, D. C.; Johnson, W. D.
1973-01-01
The literature on methods for predicting the performance of light aircraft is reviewed. The methods discussed in the review extend from the classical instantaneous maximum or minimum technique to techniques for generating mathematically optimum flight paths. Classical point performance techniques are shown to be adequate in many cases but their accuracies are compromised by the need to use simple lift, drag, and thrust relations in order to get closed form solutions. Also the investigation of the effect of changes in weight, altitude, configuration, etc. involves many essentially repetitive calculations. Accordingly, computer programs are provided which can fit arbitrary drag polars and power curves with very high precision and which can then use the resulting fits to compute the performance under the assumption that the aircraft is not accelerating.
Uncertainty Analysis Principles and Methods
2007-09-01
...error source. The Data Processor converts binary coded numbers to values, performs D/A curve fitting and applies any correction factors that may be... describes the stages or modules involved in the measurement process. We now need to identify all relevant error sources and develop the mathematical...
NASA Technical Reports Server (NTRS)
Fountain, J. A.
1973-01-01
Thermal conductivity measurements of particulate materials in vacuum are presented in summary. Particulate basalt and soda lime glass beads of various size ranges were used as samples. The differentiated line heat source method was used for the measurements. A comprehensive table is shown giving all pertinent experimental conditions. Least-squares curve fits to the data are presented.
EMPIRICALLY ESTIMATED FAR-UV EXTINCTION CURVES FOR CLASSICAL T TAURI STARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
McJunkin, Matthew; France, Kevin; Schindhelm, Eric
Measurements of extinction curves toward young stars are essential for calculating the intrinsic stellar spectrophotometric radiation. This flux determines the chemical properties and evolution of the circumstellar region, including the environment in which planets form. We develop a new technique using H2 emission lines pumped by stellar Lyα photons to characterize the extinction curve by comparing the measured far-ultraviolet H2 line fluxes with model H2 line fluxes. The difference between model and observed fluxes can be attributed to the dust attenuation along the line of sight through both the interstellar and circumstellar material. The extinction curves are fit by a Cardelli et al. (1989) model, and the A_V(H2) values for the 10 targets studied with good extinction fits range from 0.5 to 1.5 mag, with R_V values ranging from 2.0 to 4.7. A_V and R_V are found to be highly degenerate, suggesting that one or the other needs to be calculated independently. Column densities and temperatures for the fluorescent H2 populations are also determined, with averages of log10(N(H2)) = 19.0 and T = 1500 K. This paper explores the strengths and limitations of the newly developed extinction curve technique in order to assess the reliability of the results and improve the method in the future.
Elliott, J.G.; DeFeyter, K.L.
1986-01-01
Sources of sediment data collected by several government agencies through water year 1984 are summarized for Colorado. The U.S. Geological Survey has collected suspended-sediment data at 243 sites; these data are stored in the U.S. Geological Survey's water data storage and retrieval system. The U.S. Forest Service has collected suspended-sediment and bedload data at an additional 225 sites, and most of these data are stored in the U.S. Environmental Protection Agency's water-quality-control information system. Additional unpublished sediment data are in the possession of the collecting entities. Annual suspended-sediment loads were computed for 133 U.S. Geological Survey sediment-data-collection sites using the daily mean water-discharge/sediment-transport-curve method. Sediment-transport curves were derived for each site by one of three techniques: (1) least-squares linear regression of all pairs of suspended-sediment and corresponding water-discharge data, (2) least-squares linear regression of data sets subdivided on the basis of hydrograph season, and (3) graphical fit to a logarithm-logarithm plot of data. The curve-fitting technique used for each site depended on site-specific characteristics. Sediment-data sources and estimates of annual loads of suspended, bed, and total sediment from several other reports also are summarized. (USGS)
Species area relationships in mediterranean-climate plant communities
Keeley, Jon E.; Fotheringham, C.J.
2003-01-01
Aim: To determine the best-fit model of species–area relationships for Mediterranean-type plant communities and evaluate how community structure affects these species–area models.
Location: Data were collected from California shrublands and woodlands and compared with literature reports for other Mediterranean-climate regions.
Methods: The number of species was recorded from 1, 100 and 1000 m2 nested plots. Best fit to the power model or exponential model was determined by comparing adjusted r2 values from the least squares regression, pattern of residuals, homoscedasticity across scales, and semi-log slopes at 1–100 m2 and 100–1000 m2. Dominance–diversity curves were tested for fit to the lognormal model, MacArthur's broken stick model, and the geometric and harmonic series.
Results: Early successional Western Australia and California shrublands represented the extremes and provide an interesting contrast, as the exponential model was the best fit for the former, and the power model for the latter, despite similar total species richness. We hypothesize that structural differences in these communities account for the different species–area curves and are tied to patterns of dominance, equitability and life form distribution. Dominance–diversity relationships for Western Australian heathlands exhibited a close fit to MacArthur's broken stick model, indicating more equitable distribution of species. In contrast, Californian shrublands, both postfire and mature stands, were best fit by the geometric model, indicating strong dominance and many minor subordinate species. These regions differ in life form distribution, with annuals being a major component of diversity in early successional Californian shrublands although they are largely lacking in mature stands. Both young and old Australian heathlands are dominated by perennials, and annuals are largely absent. Inherent in all of these ecosystems is cyclical disequilibrium caused by periodic fires. The potential for community reassembly is greater in Californian shrublands, where only a quarter of the flora resprout, whereas three quarters resprout in Australian heathlands. Other Californian vegetation types sampled include coniferous forests, oak savannas and desert scrub, and demonstrate that different community structures may lead to a similar species–area relationship. Dominance–diversity relationships for coniferous forests closely follow a geometric model whereas associated oak savannas show a close fit to the lognormal model. However, for both communities, species–area curves fit a power model. The primary driver appears to be the presence of annuals. Desert scrub communities illustrate dramatic changes in both species diversity and dominance–diversity relationships in high and low rainfall years, because of the disappearance of annuals in drought years.
Main conclusions: Species–area curves for immature shrublands in California and the majority of Mediterranean plant communities fit a power function model. Exceptions that fit the exponential model are not due to sampling error or scaling effects; rather, structural differences in these communities provide plausible explanations. The exponential species–area model may arise in more than one way. In the highly diverse Australian heathlands it results from a rapid increase in species richness at small scales. In mature California shrublands it results from very depauperate richness at the community scale. In both instances the exponential model is tied to a preponderance of perennials and paucity of annuals. For communities fit by a power model, coefficients z and log c exhibit a number of significant correlations with other diversity parameters, suggesting that they have some predictive value in ecological communities.
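The two competing forms can be compared directly on nested-plot counts: the power model S = c*A^z is linear in log-log space, and the exponential model S = c + z*log10(A) is linear in semi-log space. The counts below are invented for illustration.

```python
import numpy as np

area = np.array([1.0, 100.0, 1000.0])   # nested plot sizes, m^2
S = np.array([12.0, 38.0, 60.0])        # species counts (assumed)

z_pow, log_c = np.polyfit(np.log10(area), np.log10(S), 1)  # power model fit
z_exp, c_exp = np.polyfit(np.log10(area), S, 1)            # exponential model fit

models = {
    "power S = c*A^z": 10.0**log_c * area**z_pow,
    "exponential S = c + z*logA": c_exp + z_exp * np.log10(area),
}
for name, pred in models.items():
    r2 = 1.0 - np.sum((S - pred) ** 2) / np.sum((S - S.mean()) ** 2)
    print("%-28s r^2 = %.3f" % (name, r2))
```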
The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting
NASA Astrophysics Data System (ADS)
Tao, Zhang; Li, Zhang; Dingjun, Chen
On the basis of the idea of second-order (quadratic) curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A preventing-increase (restrained growth) model is introduced in this paper, and the model parameters are solved with the Matlab software. The validity of the preventing-increase model is confirmed through numerical experiment. The experimental results show that the precision of the preventing-increase model is ideal.
Second derivative in the model of classical binary system
NASA Astrophysics Data System (ADS)
Abubekerov, M. K.; Gostev, N. Yu.
2016-06-01
We have obtained analytical expressions for the second derivatives of the light curve with respect to the geometric parameters in the model of eclipsing classical binary systems. These expressions constitute an efficient algorithm for calculating the numerical values of the second derivatives for all physical values of the geometric parameters. Knowledge of the second derivatives of the light curve at some point provides additional information about the asymptotic behaviour of the function near this point and can significantly improve the search for the best-fitting light curve through the use of second-order optimization methods. We write the expressions for the second derivatives in a form that is compact and uniform for all values of the geometric parameters, making it easy to write a computer program to calculate the values of these derivatives.
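The payoff of analytic second derivatives can be sketched generically: with the gradient and Hessian of the sum of squared residuals available, a full Newton step updates all geometric parameters at once. The derivative callables below are placeholders for the paper's analytic expressions, not its actual formulas.

```python
import numpy as np

def newton_step(params, dL_dp, d2L_dp2, data, model):
    """One second-order step for least-squares light-curve fitting.

    dL_dp(params)   -> first derivatives of the model light curve, shape (n_obs, n_par)
    d2L_dp2(params) -> second derivatives, shape (n_obs, n_par, n_par)
    """
    r = data - model(params)                  # residuals at the current parameters
    J = dL_dp(params)
    H2 = d2L_dp2(params)
    grad = -2.0 * J.T @ r                     # gradient of sum of squared residuals
    hess = 2.0 * (J.T @ J - np.einsum('i,ijk->jk', r, H2))
    return params - np.linalg.solve(hess, grad)
```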
Peng, Rong-fei; He, Jia-yao; Zhang, Zhan-xia
2002-02-01
The performance of a self-constructed visible AOTF spectrophotometer is presented. The wavelength calibration of AOTF1 and AOTF2 is performed with a didymium glass using a fourth-order polynomial curve-fitting method. The absolute error of the peak position is usually less than 0.7 nm. Compared with the commercial UV1100 spectrophotometer, the scanning speed of the AOTF spectrophotometer is much faster, but the resolution depends on the quality of the AOTF. The absorption spectra and the calibration curves of copper sulfate and alizarin red obtained with AOTF1 (Institute for Silicate, Shanghai, China) and AOTF2 (Brimrose, U.S.A.), respectively, are presented. The corresponding correlation coefficients of the calibration curves are 0.9991 and 0.9990, respectively. Preliminary results show that the self-constructed AOTF spectrophotometer is feasible.
NASA Technical Reports Server (NTRS)
Johnson, T. J.; Harding, A. K.; Venter, C.
2012-01-01
Pulsed gamma rays have been detected with the Fermi Large Area Telescope (LAT) from more than 20 millisecond pulsars (MSPs), some of which were discovered in radio observations of bright, unassociated LAT sources. We have fit the radio and gamma-ray light curves of 19 LAT-detected MSPs in the context of geometric, outer-magnetospheric emission models assuming the retarded vacuum dipole magnetic field, using a Markov chain Monte Carlo maximum likelihood technique. We find that, in many cases, the models are able to reproduce the observed light curves well and provide constraints on the viewing geometries that are in agreement with those from radio polarization measurements. Additionally, for some MSPs we constrain the altitudes of both the gamma-ray and radio emission regions. The best-fit magnetic inclination angles are found to cover a broader range than those of non-recycled gamma-ray pulsars.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban
2018-05-01
We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) code to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of an FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
A Gaussian method to improve work-of-breathing calculations.
Petrini, M F; Evans, J N; Wall, M A; Norman, J R
1995-01-01
The work of breathing is a calculated index of pulmonary function in ventilated patients that may be useful in deciding when to wean and when to extubate. However, the accuracy of the calculated work of breathing of the patient (WOBp) can suffer from artifacts introduced by coughing, swallowing, and other non-breathing maneuvers. The WOBp in this case will include not only the usual work of inspiration, but also the work of performing these non-breathing maneuvers. The authors developed a method to objectively eliminate the calculated work of these movements from the work of breathing, based on fitting to a Gaussian curve the variable P, which is obtained from the difference between the esophageal pressure change and the airway pressure change during each breath. In spontaneously breathing adults, normal breaths fit the Gaussian curve, while breaths that contain non-breathing maneuvers do not. In this Gaussian breath-elimination method (GM), breaths that lie more than two standard deviations from the mean obtained by the fit are eliminated. For normally breathing control adult subjects, GM had little effect on WOBp, reducing it from 0.49 to 0.47 J/L (n = 8), while there was a 40% reduction in the coefficient of variation. Non-breathing maneuvers were simulated by coughing, which increased WOBp to 0.88 J/L (n = 6); with the GM correction, WOBp was 0.50 J/L, a value not significantly different from that of normal breathing. Occlusion also increased WOBp to 0.60 J/L, but the GM-corrected WOBp was 0.51 J/L, a normal value. As predicted, doubling the respiratory rate did not change the WOBp before or after the GM correction. (ABSTRACT TRUNCATED AT 250 WORDS)
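A sketch of the Gaussian breath-elimination rule follows, with simulated data (normal breaths plus cough-like outliers); fitting a Gaussian to the histogram of P with SciPy stands in for the authors' exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma):
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
# Simulated per-breath P values: normal breaths around 5 plus three cough-like outliers.
P = np.concatenate([rng.normal(5.0, 0.3, 60), [9.5, 10.2, 11.0]])

counts, edges = np.histogram(P, bins=12)
centers = 0.5 * (edges[:-1] + edges[1:])
(a, mu, sigma), _ = curve_fit(gauss, centers, counts, p0=(counts.max(), 5.0, 0.5))

keep = np.abs(P - mu) <= 2 * abs(sigma)   # Gaussian breath-elimination rule
print("kept %d of %d breaths" % (keep.sum(), len(P)))
```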
Observational evidence of dust evolution in galactic extinction curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cecchi-Pestellini, Cesare; Casu, Silvia; Mulas, Giacomo
Although structural and optical properties of hydrogenated amorphous carbons are known to respond to varying physical conditions, most conventional extinction models are basically curve fits with modest predictive power. We compare an evolutionary model of the physical properties of carbonaceous grain mantles with their determination by homogeneously fitting observationally derived Galactic extinction curves with the same physically well-defined dust model. We find that a large sample of observed Galactic extinction curves are compatible with the evolutionary scenario underlying such a model, requiring physical conditions fully consistent with standard density, temperature, radiation field intensity, and average age of diffuse interstellar clouds. Hence, through the study of interstellar extinction we may, in principle, understand the evolutionary history of the diffuse interstellar clouds.
UTM, a universal simulator for lightcurves of transiting systems
NASA Astrophysics Data System (ADS)
Deeg, Hans
2009-02-01
The Universal Transit Modeller (UTM) is a light-curve simulator for all kinds of transiting or eclipsing configurations between arbitrary numbers of several types of objects, which may be stars, planets, planetary moons, and planetary rings. Applications of UTM to date have been mainly in the generation of light-curves for the testing of detection algorithms. For the preparation of such tests for the Corot Mission, a special version has been used to generate multicolour light-curves in Corot's passbands. A separate fitting program, UFIT (Universal Fitter), is part of the UTM distribution and may be used to derive best fits to light-curves for any set of continuously variable parameters. UTM/UFIT is written in IDL code and its source is released in the public domain under the GNU General Public License.
The effect of semirigid dressings on below-knee amputations.
MacLean, N; Fick, G H
1994-07-01
The effect of using semirigid dressings (SRDs) on the residual limb of individuals who have had below-knee amputations as a consequence of peripheral vascular disease was investigated, with the primary question being: Does the time to readiness for prosthetic fitting for patients treated with the SRDs differ from that of patients treated with soft dressings? Forty patients entered the study and were alternately assigned to one of two groups. Nineteen patients were assigned to the SRD group, and 21 patients were assigned to the soft dressing group. The time from surgery to readiness for prosthetic fitting was recorded for each patient. Kaplan-Meier survival curves were generated for each group, and the results were analyzed with the log-rank test. There was a difference between the two curves, and an examination of the curves suggests that the expected time to readiness for prosthetic fitting for patients treated with the SRDs would be less than half that of patients treated with soft dressings. The results suggest that a patient may be ready for prosthetic fitting sooner if treated with SRDs instead of soft dressings.
Radial artery pulse waveform analysis based on curve fitting using discrete Fourier series.
Jiang, Zhixing; Zhang, David; Lu, Guangming
2018-04-19
Radial artery pulse diagnosis has been playing an important role in traditional Chinese medicine (TCM). Because it is non-invasive and convenient, pulse diagnosis also has great significance for disease analysis in modern medicine. Practitioners sense the pulse waveforms at patients' wrists to make diagnoses based on non-objective personal experience. With the development of pulse acquisition platforms and computerized analysis methods, the objective study of pulse diagnosis can help TCM keep up with the development of modern medicine. In this paper, we propose a new method to extract features from pulse waveforms based on the discrete Fourier series (DFS). It regards the waveform as a signal that consists of a series of sub-components represented by sine and cosine signals with different frequencies and amplitudes. After the pulse signals are collected and preprocessed, we fit the average waveform for each sample using a discrete Fourier series by least squares. The feature vector comprises the coefficients of the discrete Fourier series function. Compared with a fitting method using a Gaussian mixture function, the fitting errors of the proposed method are smaller, which indicates that our method can represent the original signal better. The classification performance of the proposed feature is superior to that of other features extracted from the waveform, such as the auto-regression model and the Gaussian mixture model. The coefficients of the optimized DFS function used to fit the arterial pressure waveforms obtain better performance in modeling the waveforms and hold more potential information for distinguishing different psychological states.
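A sketch of the DFS fit described above: build sine/cosine harmonics over one averaged beat and solve the coefficients by linear least squares; those coefficients form the feature vector. The synthetic pulse, period, and harmonic count are assumptions.

```python
import numpy as np

def dfs_design(t, period, n_harmonics):
    # Design matrix with a constant term plus cosine/sine pairs per harmonic.
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        w = 2 * np.pi * k / period
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.column_stack(cols)

t = np.linspace(0.0, 1.0, 200, endpoint=False)   # one beat of 1 s (assumed)
pulse = np.exp(-((t - 0.15) / 0.05) ** 2) + 0.4 * np.exp(-((t - 0.45) / 0.08) ** 2)

X = dfs_design(t, period=1.0, n_harmonics=8)
coeffs, *_ = np.linalg.lstsq(X, pulse, rcond=None)   # DFS coefficients = feature vector
print("fit RMSE = %.4f" % np.sqrt(np.mean((X @ coeffs - pulse) ** 2)))
```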
Linhart, S. Mike; Nania, Jon F.; Christiansen, Daniel E.; Hutchinson, Kasey J.; Sanders, Curtis L.; Archfield, Stacey A.
2013-01-01
A variety of individuals, from water resource managers to recreational users, need streamflow information for planning and decision-making at locations where there are no streamgages. To address this problem, two statistically based methods, the Flow Duration Curve Transfer method and the Flow Anywhere method, were developed for statewide application, and two physically based models, the Precipitation-Runoff Modeling System and the Soil and Water Assessment Tool, were developed only for application to the Cedar River Basin. Observed and estimated streamflows for the two methods and two models were compared for goodness of fit at 13 streamgages modeled in the Cedar River Basin by using Nash-Sutcliffe and percent-bias efficiency values. Based on the median and mean Nash-Sutcliffe values for the 13 streamgages, the Precipitation-Runoff Modeling System and Soil and Water Assessment Tool models appear to have performed similarly to each other and better than the Flow Duration Curve Transfer and Flow Anywhere methods. Based on the median and mean percent-bias values, the Soil and Water Assessment Tool model appears to have generally overestimated daily mean streamflows, whereas the Precipitation-Runoff Modeling System model and the statistical methods appear to have underestimated daily mean streamflows. The Flow Duration Curve Transfer method produced the lowest median and mean percent-bias values and appears to perform better than the other models.
NASA Astrophysics Data System (ADS)
Möginger, B.; Kehret, L.; Hausnerova, B.; Steinhaus, J.
2016-05-01
3D printing is an efficient method in the field of additive manufacturing. In order to optimize the properties of manufactured parts, it is essential to adapt the curing behavior of the resin systems to the requirements. Thus, the effects of resin composition, e.g. due to different additives such as thickeners and curing agents, on the curing behavior have to be known. As the resin transforms from a liquid to a solid glass, the time-dependent ion viscosity was measured using DEA with flat IDEX sensors. This allows for a sensitive measurement of resin changes, as the ion viscosity changes by two to four decades. The investigated resin systems are based on the monomers styrene and HEMA. To account for the effects of copolymerization in the calculation of the reaction kinetics, it was assumed that the reaction can be considered a homopolymerization with reaction order n ≠ 1. The measured ion viscosity curves are then fitted, for times exceeding the initiation phase, with the solution of the reaction kinetics - the time-dependent degree of conversion (DC function) - representing the primary curing. The measured ion viscosity curves can be fitted nicely with the DC function, and the determined fit parameters distinguish distinctly between the investigated resin compositions.
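A sketch of the kinetics fit: for reaction order n ≠ 1, the solution of dα/dt = k(1−α)^n is α(t) = 1 − [1 + (n−1)kt]^(−1/(n−1)), and k and n can be recovered from conversion data derived from the ion viscosity. All values below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def dc_function(t, k, n):
    # Degree of conversion for nth-order kinetics (valid for n != 1).
    return 1.0 - (1.0 + (n - 1.0) * k * t) ** (-1.0 / (n - 1.0))

t = np.linspace(0, 600, 13)               # time past the initiation phase, s (assumed)
alpha = dc_function(t, 0.01, 1.6)         # synthetic conversion data

(k, n), _ = curve_fit(dc_function, t, alpha, p0=(0.005, 1.5))
print("k = %.4f 1/s, reaction order n = %.2f" % (k, n))
```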
A nonintrusive temperature measuring system for estimating deep body temperature in bed.
Sim, S Y; Lee, W K; Baek, H J; Park, K S
2012-01-01
Deep body temperature is an important indicator that reflects a human being's overall physiological state. Existing deep body temperature monitoring systems are too invasive to apply to awake patients for a long time. Therefore, we proposed a nonintrusive deep body temperature measuring system. To estimate deep body temperature nonintrusively, a dual-heat-flux probe and double-sensor probes were embedded in a neck pillow. When a patient uses the neck pillow to rest, the deep body temperature can be assessed using one of the thermometer probes embedded in the neck pillow. We could estimate deep body temperature in three different sleep positions. Also, to reduce the initial response time of the dual-heat-flux thermometer, which measures body temperature in the supine position, we applied the curve-fitting method to one subject, and thereby could obtain the deep body temperature within a minute. This result shows the possibility that the system can be used as a practical temperature monitoring system with an appropriate curve-fitting model. In the next study, we will try to establish a general fitting model that can be applied to all subjects. In addition, we are planning to extract meaningful health information, such as sleep structure analysis, from the deep body temperature data acquired with this system.
Bonnet, Benjamin; Jourdan, Franck; du Cailar, Guilhem; Fesler, Pierre
2017-08-01
End-systolic left ventricular (LV) elastance (Ees) has been previously calculated and validated invasively using LV pressure-volume (P-V) loops. Noninvasive methods have been proposed, but clinical application remains complex. The aims of the present study were to 1) estimate Ees according to modeling of the LV P-V curve during ejection ("ejection P-V curve" method) and validate our method with existing published LV P-V loop data and 2) test the clinical applicability of noninvasively detecting a difference in Ees between normotensive and hypertensive subjects. On the basis of the ejection P-V curve and a linear relationship between elastance and time during ejection, we used a nonlinear least-squares method to fit the pressure waveform. We then computed the slope and intercept of time-varying elastance as well as the volume intercept (V0). As a validation, 22 P-V loops obtained from previous invasive studies were digitized and analyzed using the ejection P-V curve method. To test clinical applicability, ejection P-V curves were obtained from 33 hypertensive subjects and 32 normotensive subjects with carotid tonometry and real-time three-dimensional echocardiography during the same procedure. A good univariate relationship (r2 = 0.92, P < 0.005) and good limits of agreement were found between the invasive calculation of Ees and our new proposed ejection P-V curve method. In hypertensive patients, an increase in arterial elastance (Ea) was compensated by a parallel increase in Ees without change in Ea/Ees. In addition, the clinical reproducibility of our method was similar to that of another noninvasive method. In conclusion, Ees and V0 can be estimated noninvasively from modeling of the P-V curve during ejection. This approach was found to be reproducible and sensitive enough to detect an expected increase in LV contractility in hypertensive patients. Because of its noninvasive nature, this methodology may have clinical implications in various disease states. NEW & NOTEWORTHY The use of real-time three-dimensional echocardiography-derived left ventricular volumes in conjunction with carotid tonometry was found to be reproducible and sensitive enough to detect expected differences in left ventricular elastance in arterial hypertension. Because of its noninvasive nature, this methodology may have clinical implications in various disease states.
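A minimal sketch of the ejection P-V method under its stated assumptions: elastance rises linearly during ejection, E(t) = E0 + k*t, and P(t) = E(t)*(V(t) - V0), so nonlinear least squares on the ejection-phase pressure recovers E0, k, and V0. The linear volume decay and all numbers are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 0.25, 26)            # ejection time, s (assumed)
V = 120.0 - 200.0 * t                     # assumed linear LV volume decay, mL
P_obs = (1.0 + 8.0 * t) * (V - 15.0)      # synthetic ejection-phase pressure, mmHg

def p_model(t, E0, k, V0):
    # Same assumed linear V(t) as above; elastance is E0 + k*t.
    return (E0 + k * t) * (120.0 - 200.0 * t - V0)

(E0, k, V0), _ = curve_fit(p_model, t, P_obs, p0=(1.5, 5.0, 10.0))
Ees = E0 + k * t[-1]                      # elastance at end-ejection
print("Ees ~ %.2f mmHg/mL, V0 ~ %.1f mL" % (Ees, V0))
```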
NASA Astrophysics Data System (ADS)
Su, Ray Kai Leung; Lee, Chien-Liang
2013-06-01
This study presents a seismic fragility analysis and ultimate spectral displacement assessment of regular low-rise masonry infilled (MI) reinforced concrete (RC) buildings using a coefficient-based method. The coefficient-based method does not require a complicated finite element analysis; instead, it is a simplified procedure for assessing the spectral acceleration and displacement of buildings subjected to earthquakes. A regression analysis was first performed to obtain the best-fitting equations for the inter-story drift ratio (IDR) and period shift factor of low-rise MI RC buildings in response to the peak ground acceleration of earthquakes, using published results obtained from shaking table tests. Both spectral acceleration- and spectral displacement-based fragility curves under various damage states (in terms of IDR) were then constructed using the coefficient-based method. Finally, the spectral displacements of low-rise MI RC buildings at the ultimate (or near-collapse) state obtained from this paper and the literature were compared. The simulation results indicate that the fragility curves obtained from this study and other previous work correspond well. Furthermore, most of the spectral displacements of low-rise MI RC buildings at the ultimate state from the literature fall within the bounded spectral displacements predicted by the coefficient-based method.
Derivation of error sources for experimentally derived heliostat shapes
NASA Astrophysics Data System (ADS)
Cumpston, Jeff; Coventry, Joe
2017-06-01
Photogrammetry data representing the surface and structure of a heliostat mirror panel are investigated in detail. A curve-fitting approach that allows the retrieval of four distinct mirror error components, while prioritizing the best possible fit to the paraboloidal terms in the curve-fitting equation, is presented. The angular errors associated with each of the four surfaces are calculated, and the relative magnitude of each is given. In this case, the mirror was found to have a significant structural twist, and the improvement in mirror surface quality attainable in the absence of twist was estimated.
Testing the cosmic anisotropy with supernovae data: Hemisphere comparison and dipole fitting
NASA Astrophysics Data System (ADS)
Deng, Hua-Kai; Wei, Hao
2018-06-01
The cosmological principle is one of the cornerstones of modern cosmology. It assumes that the universe is homogeneous and isotropic on cosmic scales. Both the homogeneity and the isotropy of the universe should be tested carefully. In the present work, we are interested in probing for a possible preferred direction in the distribution of type Ia supernovae (SNIa). To the best of our knowledge, two main methods have been used in almost all of the relevant works in the literature, namely the hemisphere comparison (HC) method and the dipole fitting (DF) method. However, the results from these two methods do not always agree with each other. In this work, we test the cosmic anisotropy by using these two methods with the joint light-curve analysis (JLA) and simulated SNIa data sets. In many cases, both methods work well, and their results are consistent with each other. However, in cases with two (or even more) preferred directions, the DF method fails while the HC method still works well. This might shed new light on our understanding of these two methods.
NASA Astrophysics Data System (ADS)
Lyon, Richard F.
2011-11-01
A cascade of two-pole-two-zero filters with level-dependent pole and zero dampings, with few parameters, can provide a good match to human psychophysical and physiological data. The model has been fitted to data on detection threshold for tones in notched-noise masking, including bandwidth and filter shape changes over a wide range of levels, and has been shown to provide better fits with fewer parameters compared to other auditory filter models such as gammachirps. Originally motivated as an efficient machine implementation of auditory filtering related to the WKB analysis method of cochlear wave propagation, such filter cascades also provide good fits to mechanical basilar membrane data, and to auditory nerve data, including linear low-frequency tail response, level-dependent peak gain, sharp tuning curves, nonlinear compression curves, level-independent zero-crossing times in the impulse response, realistic instantaneous frequency glides, and appropriate level-dependent group delay even with minimum-phase response. As part of exploring different level-dependent parameterizations of such filter cascades, we have identified a simple sufficient condition for stable zero-crossing times, based on the shifting property of the Laplace transform: simply move all the s-domain poles and zeros by equal amounts in the real-s direction. Such pole-zero filter cascades are efficient front ends for machine hearing applications, such as music information retrieval, content identification, speech recognition, and sound indexing.
Applications of Space-Filling-Curves to Cartesian Methods for CFD
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Murman, S. M.; Berger, M. J.
2003-01-01
This paper presents a variety of novel uses of space-filling curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, many are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single Θ(N log N) SFC-based reordering to produce single-pass (Θ(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 640 CPUs even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 15% of ideal even with only around 50,000 cells in each sub-domain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with Θ(M + N) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for control surface deflection or finite-difference-based gradient design methods.
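The abstract does not specify which SFC is used; the Morton (Z-order) curve, built by bit-interleaving integer cell coordinates, is a common choice and makes the reorder-then-partition idea concrete. A minimal sketch under that assumption:

```python
def morton_key(i, j, k, bits=10):
    """Interleave the bits of integer cell coordinates (i, j, k) into a Z-order key."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (3 * b + 2)
        key |= ((j >> b) & 1) << (3 * b + 1)
        key |= ((k >> b) & 1) << (3 * b)
    return key

def partition(cells, n_parts):
    """Sort cells along the SFC, then cut the 1D ordering into contiguous chunks."""
    order = sorted(cells, key=lambda c: morton_key(*c))
    chunk = -(-len(order) // n_parts)          # ceiling division
    return [order[p * chunk:(p + 1) * chunk] for p in range(n_parts)]

cells = [(i, j, k) for i in range(8) for j in range(8) for k in range(8)]
print([len(p) for p in partition(cells, 4)])   # near-equal, spatially compact chunks
```

Because nearby cells receive nearby keys, a single sort yields partitions that are contiguous along the curve, which is what makes single-pass Θ(N) algorithms possible once the Θ(N log N) sort is done.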
Forgetting Curves: Implications for Connectionist Models
ERIC Educational Resources Information Center
Sikstrom, Sverker
2002-01-01
Forgetting in long-term memory, as measured in a recall or a recognition test, is faster for items encoded more recently than for items encoded earlier. Data on forgetting curves fit a power function well. In contrast, many connectionist models predict either exponential decay or completely flat forgetting curves. This paper suggests a…
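The contrast drawn here can be made concrete by fitting both functional forms to retention data and comparing residuals; a sketch with hypothetical recall proportions:

```python
import numpy as np
from scipy.optimize import curve_fit

power = lambda t, a, b: a * t ** (-b)        # power-law forgetting curve
expo = lambda t, a, b: a * np.exp(-b * t)    # exponential decay, as many models predict

t = np.array([1, 2, 4, 8, 16, 32, 64.0])     # hypothetical retention intervals
r = np.array([0.82, 0.71, 0.62, 0.55, 0.48, 0.43, 0.38])  # hypothetical recall rates

for name, f in [("power", power), ("exponential", expo)]:
    p, _ = curve_fit(f, t, r, p0=[0.8, 0.2])
    print(name, p, np.sum((r - f(t, *p)) ** 2))  # sum of squared residuals
```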
Nonlinear Growth Models in Mplus and SAS
ERIC Educational Resources Information Center
Grimm, Kevin J.; Ram, Nilam
2009-01-01
Nonlinear growth curves, or growth curves that follow a specified nonlinear function in time, enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this article we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the nonlinear…
On the Early-Time Excess Emission in Hydrogen-Poor Superluminous Supernovae
NASA Technical Reports Server (NTRS)
Vreeswijk, Paul M.; Leloudas, Giorgos; Gal-Yam, Avishay; De Cia, Annalisa; Perley, Daniel A.; Quimby, Robert M.; Waldman, Roni; Sullivan, Mark; Yan, Lin; Ofek, Eran O.;
2017-01-01
We present the light curves of the hydrogen-poor super-luminous supernovae (SLSNe I) PTF 12dam and iPTF 13dcc, discovered by the (intermediate) Palomar Transient Factory. Both show excess emission at early times and a slowly declining light curve at late times. The early bump in PTF 12dam is very similar in duration (approximately 10 days) and brightness relative to the main peak (2-3 mag fainter) compared to that observed in other SLSNe I. In contrast, the long-duration (greater than 30 days) early excess emission in iPTF 13dcc, whose brightness competes with that of the main peak, appears to be of a different nature. We construct bolometric light curves for both targets, and fit a variety of light-curve models to both the early bump and main peak in an attempt to understand the nature of these explosions. Even though the slope of the late-time decline in the light curves of both SLSNe is suggestively close to that expected from the radioactive decay of 56Ni and 56Co, the amount of nickel required to power the full light curves is too large considering the estimated ejecta mass. The magnetar model including an increasing escape fraction provides a reasonable description of the PTF 12dam observations. However, neither the basic nor the double-peaked magnetar model is capable of reproducing the light curve of iPTF 13dcc. A model combining a shock breakout in an extended envelope with late-time magnetar energy injection provides a reasonable fit to the iPTF 13dcc observations. Finally, we find that the light curves of both PTF 12dam and iPTF 13dcc can be adequately fit with the model involving interaction with the circumstellar medium.
Measurement of regional cerebral blood flow with copper-62-PTSM and a three-compartment model.
Okazawa, H; Yonekura, Y; Fujibayashi, Y; Mukai, T; Nishizawa, S; Magata, Y; Ishizu, K; Tamaki, N; Konishi, J
1996-07-01
We quantitatively evaluated 62Cu-labeled pyruvaldehyde bis(N4-methylthiosemicarbazone) copper II (62Cu-PTSM) as a brain perfusion tracer for positron emission tomography (PET). For quantitative measurement, the octanol extraction method is needed to correct for arterial radioactivity in estimating the lipophilic input function, but the procedure is not practical for clinical studies. To measure regional cerebral blood flow (rCBF) by 62Cu-PTSM with simple arterial blood sampling, a standard curve of the octanol extraction ratio and a three-compartment model were applied. We performed both 15O-labeled water PET and 62Cu-PTSM PET with dynamic data acquisition and arterial sampling in six subjects. Data obtained in 10 subjects studied previously were used for the standard octanol extraction curve. Arterial activity was measured and corrected to obtain the true input function using the standard curve. Graphical analysis (Gjedde-Patlak plot), with the data for each subject fitted by a straight regression line, suggested that 62Cu-PTSM can be analyzed by the three-compartment model with negligible K4. Using this model, K1-K3 were estimated from curve fitting of the cerebral time-activity curve and the corrected input function. The fractional uptake of 62Cu-PTSM was corrected to rCBF with the individual extraction at steady state calculated from K1-K3. The influx rates (Ki) obtained from the three-compartment model and graphical analyses were compared for validation of the model. A comparison of rCBF values obtained from 62Cu-PTSM and 15O-water studies demonstrated excellent correlation. The results suggest the potential feasibility of quantitation of cerebral perfusion with 62Cu-PTSM accompanied by dynamic PET and simple arterial sampling.
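The Gjedde-Patlak step referred to above reduces to a straight-line fit: the slope of Ct(t)/Cp(t) versus ∫Cp dτ / Cp(t) gives the influx rate Ki. A toy sketch with synthetic curves (the actual analysis uses measured arterial and PET time-activity data):

```python
import numpy as np

t = np.arange(1, 41) * 0.25                    # minutes, uniform sampling
Cp = 100.0 * np.exp(-0.3 * t) + 5.0            # synthetic arterial input function
int_Cp = np.cumsum(Cp) * 0.25                  # crude running integral of Cp
Ct = 0.15 * int_Cp + 0.6 * Cp                  # synthetic tissue curve with Ki = 0.15

x = int_Cp / Cp                                # "normalized time"
y = Ct / Cp
Ki, intercept = np.polyfit(x[20:], y[20:], 1)  # fit the late, linear portion
print(Ki)                                      # recovers ~0.15
```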
Spectral optical coherence tomography vs. fluorescein pattern for rigid gas-permeable lens fit.
Piotrowiak, Ilona; Kaluzny, Bartłomiej Jan; Danek, Beata; Chwiędacz, Adam; Sikorski, Bartosz Lukasz; Malukiewicz, Grażyna
2014-07-04
This study aimed to evaluate anterior segment spectral optical coherence tomography (AS SOCT) for assessing the lens-to-cornea fit of rigid gas-permeable (RGP) lenses. The results were verified with the fluorescein pattern method, considered the criterion standard for RGP lens alignment evaluations. Twenty-six eyes of 14 patients were enrolled in the study. Initial base curve radius (BCR) of each RGP lens was determined on the basis of keratometry readings. The fluorescein pattern and AS SOCT tomograms were evaluated, starting with an alignment fit, and subsequently, with BCR reductions in increments of 0.1 mm, up to 3 consecutive changes. AS SOCT examination was performed with the use of RTVue (Optovue, California, USA). The average BCR for alignment fits, defined according to the fluorescein pattern, was 7.8 mm (SD=0.26). Repeatability of the measurements was 18.2%. BCR reductions of 0.1, 0.2, and 0.3 mm resulted in average apical clearances detected with AS SOCT of 12.38 (SD=9.91, p<0.05), 28.79 (SD=15.39, p<0.05), and 33.25 (SD=10.60, p>0.05), respectively. BCR steepening of 0.1 mm or more led to measurable changes in lens-to-cornea fits. Although AS SOCT represents a new method of assessing lens-to-cornea fit, apical clearance detection with current commercial technology showed lower sensitivity than the fluorescein pattern assessment.
Pajic, J; Rakic, B; Jovicic, D; Milovanovic, A
2014-10-01
Biological dosimetry using chromosome damage biomarkers is a valuable dose assessment method in cases of radiation overexposure with or without physical dosimetry data. In order to estimate dose by biodosimetry, any biological dosimetry service has to have its own dose-response calibration curve. This paper presents the results obtained after irradiation of blood samples from fourteen healthy male and female volunteers, carried out in order to establish biodosimetry in Serbia and produce dose-response calibration curves for dicentrics and micronuclei. Taking into account pooled data from all the donors, the resultant fitted curve for dicentrics is Ydic = 0.0009 (±0.0003) + 0.0421 (±0.0042)×D + 0.0602 (±0.0022)×D², and for micronuclei, Ymn = 0.0104 (±0.0015) + 0.0824 (±0.0050)×D + 0.0189 (±0.0017)×D². Following establishment of the dose-response curves, a validation experiment was carried out with four blood samples. Applied and estimated doses were in good agreement. On this basis, the results reported here give us confidence to apply both calibration curves for future biological dosimetry requirements in Serbia. Copyright © 2014 Elsevier B.V. All rights reserved.
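A fit of this linear-quadratic form, Y = c + αD + βD², can be sketched with ordinary least squares; the yields below are hypothetical (the coefficients reported above come from the authors' pooled data, not from this code):

```python
import numpy as np
from scipy.optimize import curve_fit

def lq(D, c, alpha, beta):
    """Linear-quadratic dose-response model for dicentric or micronucleus yields."""
    return c + alpha * D + beta * D ** 2

D = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0])           # dose, Gy
Y = np.array([0.001, 0.015, 0.037, 0.10, 0.33, 0.67, 1.13])  # hypothetical yields/cell

(c, alpha, beta), cov = curve_fit(lq, D, Y)
print(c, alpha, beta, np.sqrt(np.diag(cov)))  # estimates with standard errors
```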
Master curve characterization of the fracture toughness behavior in SA508 Gr.4N low alloy steels
NASA Astrophysics Data System (ADS)
Lee, Ki-Hyoung; Kim, Min-Chul; Lee, Bong-Sang; Wee, Dang-Moon
2010-08-01
The fracture toughness properties of the tempered martensitic SA508 Gr.4N Ni-Mo-Cr low alloy steel for reactor pressure vessels were investigated using the master curve concept. The results were compared to those of the bainitic SA508 Gr.3 Mn-Mo-Ni low alloy steel, which is a commercial RPV material. The fracture toughness tests were conducted by 3-point bending with pre-cracked Charpy V-notch (PCVN) specimens according to the ASTM E1921-09c standard method. The temperature dependency of the fracture toughness of the Gr.4N steel was steeper than that predicted by the standard master curve, while the bainitic SA508 Gr.3 steel fitted the standard prediction well. In order to properly evaluate the fracture toughness of the Gr.4N steels, the exponential coefficient of the master curve equation was changed, and the modified curve was applied to the fracture toughness test results of model alloys with various chemical compositions. It was found that the modified curve provided a better description of the overall fracture toughness behavior and adequate T0 determination for the tempered martensitic SA508 Gr.4N steels.
Pulsar Emission Geometry and Accelerating Field Strength
NASA Technical Reports Server (NTRS)
DeCesar, Megan E.; Harding, Alice K.; Miller, M. Coleman; Kalapotharakos, Constantinos; Parent, Damien
2012-01-01
The high-quality Fermi LAT observations of gamma-ray pulsars have opened a new window to understanding the generation mechanisms of high-energy emission from these systems. The high statistics allow for careful modeling of the light curve features as well as for phase-resolved spectral modeling. We modeled the LAT light curves of the Vela and CTA 1 pulsars with simulated high-energy light curves generated from geometrical representations of the outer gap and slot gap emission models, within the vacuum retarded dipole and force-free fields. A Markov chain Monte Carlo maximum likelihood method was used to explore the phase space of the magnetic inclination angle, viewing angle, maximum emission radius, and gap width. We also used the measured spectral cutoff energies to estimate the dependence of the accelerating parallel electric field on radius, under the assumptions that the high-energy emission is dominated by curvature radiation and that the geometry (radius of emission and minimum radius of curvature of the magnetic field lines) is determined by the best-fitting light curves for each model. We find that light curves from the vacuum field more closely match the observed light curves and multiwavelength constraints, and that the calculated parallel electric field can place additional constraints on the emission geometry.
Further Evidence for Increasing Pressure and a Non-spherical Shape in Triton's Atmosphere
NASA Astrophysics Data System (ADS)
Person, M. J.; Elliot, J. L.; McDonald, S. W.; Buie, M. W.; Dunham, E. W.; Millis, R. L.; Nye, R. A.; Olkin, C. B.; Wasserman, L. H.; Young, L. A.; Hubbard, W. B.; Hill, R.; Reitsema, H. J.; Pasachoff, J. M.; Babcock, B. A.; McConnochie, T. M.; Stone, R. C.
2000-10-01
An occultation by Triton of a star denoted as Tr176 by McDonald & Elliot (AJ 109, 1352) was observed on 1997 July 18 from various locations in Australia and North America. After an extensive prediction effort, two complete chords of the occultation were recorded by our PCCD portable data systems. These chords were combined with three others recorded by another group (Sicardy et al., BAAS 30, 1107) to provide an overall geometric solution for Triton's atmosphere at the occultation pressure. A simple circular fit to these five chords yielded a half-light radius of 1439 ± 10 km; however, least-squares fitting revealed a significant deviation from the simple circular projection of a spherical atmosphere. The best-fitting ellipse (a first-order deviation from the circular solution) yielded a mean radius of 1440 ± 6 km and an ellipticity of 0.040 ± 0.003. To further characterize the non-spherical solutions to the geometric fits, methods were developed to analyze the data assuming both circular and elliptical profiles. Circularly and elliptically focused light curve models corresponding to the best-fitting circular and elliptical geometric solutions were fit to the data. Using these light curve fits, the mean pressure at the 1400 km radius (48 km altitude) derived from all the data was 2.23 ± 0.28 microbar for the circular model and 2.45 ± 0.32 microbar for the elliptical model. These pressures agree with those for the Tr180 occultation (which occurred a few months later), so these results are consistent with the conclusions of Elliot et al. (Icarus 143, 425) that Triton's surface pressure increased from 14.0 microbar at the time of the Voyager encounter to 19.0 microbar in 1997. The mean equivalent-isothermal temperature at 1400 km was 43.6 ± 3.7 K for the circular model and 42.0 ± 3.6 K for the elliptical model. Within their calculated errors, the equivalent-isothermal temperatures were the same for all Triton latitudes probed.
The relationship between offspring size and fitness: integrating theory and empiricism.
Rollinson, Njal; Hutchings, Jeffrey A
2013-02-01
How parents divide the energy available for reproduction between size and number of offspring has a profound effect on parental reproductive success. Theory indicates that the relationship between offspring size and offspring fitness is of fundamental importance to the evolution of parental reproductive strategies: this relationship predicts the optimal division of resources between size and number of offspring, it describes the fitness consequences for parents that deviate from optimality, and its shape can predict the most viable type of investment strategy in a given environment (e.g., conservative vs. diversified bet-hedging). Many previous attempts to estimate this relationship and the corresponding value of optimal offspring size have been frustrated by a lack of integration between theory and empiricism. In the present study, we draw from C. Smith and S. Fretwell's classic model to explain how a sound estimate of the offspring size-fitness relationship can be derived with empirical data. We evaluate what measures of fitness can be used to model the offspring size-fitness curve and optimal size, as well as which statistical models should and should not be used to estimate offspring size-fitness relationships. To construct the fitness curve, we recommend that offspring fitness be measured as survival up to the age at which the instantaneous rate of offspring mortality becomes random with respect to initial investment. Parental fitness is then expressed in ecologically meaningful, theoretically defensible, and broadly comparable units: the number of offspring surviving to independence. Although logistic and asymptotic regression have been widely used to estimate offspring size-fitness relationships, the former provides relatively unreliable estimates of optimal size when offspring survival and sample sizes are low, and the latter is unreliable under all conditions. We recommend that the Weibull-1 model be used to estimate this curve because it provides modest improvements in prediction accuracy under experimentally relevant conditions.
Beyramysoltan, Samira; Rajkó, Róbert; Abdollahi, Hamid
2013-08-12
Results obtained by soft-modeling multivariate curve resolution methods are often not unique and are questionable because of rotational ambiguity. This means that a range of feasible solutions fit the experimental data equally well and fulfill the constraints. In the chemometric literature, surveying useful constraints for the reduction of rotational ambiguity remains a major challenge for chemometricians. It is worthwhile to study the effects of applying constraints on the reduction of rotational ambiguity, since this can help in choosing the useful constraints to impose in multivariate curve resolution methods when analyzing data sets. In this work, we have investigated the effect of the equality constraint on decreasing the rotational ambiguity. For calculation of all feasible solutions corresponding to a known spectrum, a novel systematic grid search method based on Species-based Particle Swarm Optimization is proposed for a three-component system. Copyright © 2013 Elsevier B.V. All rights reserved.
Luminescence properties of Na2Sr2Al2PO4Cl9:Sm3+ phosphor
NASA Astrophysics Data System (ADS)
Tamboli, Sumedha; Shahare, D. I.; Dhoble, S. J.
2018-04-01
A series of Sm3+-doped Na2Sr2Al2PO4Cl9 phosphors was synthesized via the solid-state synthesis method. Photoluminescence (PL) emission spectra were obtained with the excitation wavelength kept at 406 nm. The emission spectra show three emission peaks, at 563 nm, 595 nm, and 644 nm. The CIE chromaticity diagram shows the emission colour of the phosphor in the orange-red region of the visible spectrum, indicating that the phosphor may be useful in preparing orange light-emitting diodes. Na2Sr2Al2PO4Cl9:Sm3+ phosphors were irradiated by γ-rays from a 60Co source and β-rays from a 90Sr source. Their thermoluminescence (TL) glow curves were obtained with a Nucleonix 1009I TL reader. TL trapping parameters, such as the activation energy of trapped electrons and the order of kinetics, were obtained using Chen's peak shape method, the glow curve fitting method, and the initial rise method.
On the Methodology of Studying Aging in Humans
1961-01-01
prediction of death rates. The relation of death rate to age has been extensively studied for over 100 years. As an illustration recent death rates for... log death rates appear to be linear, the simpler Gompertz curve fits closely. While on this subject of the Makeham-Gompertz function, it should be... Makeham-Gompertz curve to 5-year age-specific death rates. Each fitting provided estimates of the parameters a, β, and log c for each of the five-year
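For context, the Gompertz and Makeham-Gompertz mortality laws referred to in this snippet are usually written as below; mapping the snippet's parameters a, β, and log c onto this notation is an assumption on our part:

```latex
\mu(x) = \beta\, c^{x} \quad \text{(Gompertz)}, \qquad
\mu(x) = a + \beta\, c^{x} \quad \text{(Makeham-Gompertz)}
```

so that log(μ(x) - a) = log β + x log c is linear in age x, which is why log death rates that plot as straight lines are well fitted by the simpler Gompertz form.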
Wear, Keith A
2013-04-01
The presence of two longitudinal waves in poroelastic media is predicted by Biot's theory and has been confirmed experimentally in through-transmission measurements in cancellous bone. Estimation of attenuation coefficients and velocities of the two waves is challenging when the two waves overlap in time. The modified least squares Prony's (MLSP) method in conjunction with curve fitting (MLSP + CF) is tested using simulations based on published values for fast and slow wave attenuation coefficients and velocities in cancellous bone from several studies in bovine femur, human femur, and human calcaneus. The search algorithm is accelerated by exploiting correlations among search parameters. The performance of the algorithm is evaluated as a function of signal-to-noise ratio (SNR). For a typical experimental SNR (40 dB), the root-mean-square errors (RMSEs) for one example (human femur) with fast and slow waves separated by approximately half of a pulse duration were 1 m/s (slow wave velocity), 4 m/s (fast wave velocity), 0.4 dB/cm MHz (slow wave attenuation slope), and 1.7 dB/cm MHz (fast wave attenuation slope). The MLSP + CF method is fast (requiring less than 2 s at SNR = 40 dB on a consumer-grade notebook computer) and is flexible with respect to the functional form of the parametric model for the transmission coefficient. The MLSP + CF method provides sufficient accuracy and precision for many applications such that experimental error is a greater limiting factor than estimation error.
[Colorimetric characterization of LCD based on wavelength partition spectral model].
Liu, Hao-Xue; Cui, Gui-Hua; Huang, Min; Wu, Bing; Xu, Yan-Fang; Luo, Ming
2013-10-01
To establish a colorimetric characterization model for LCDs, an experiment with EIZO CG19, IBM 19, DELL 19, and HP 19 LCDs was designed and carried out to test the interaction between RGB channels and then the spectral additivity of the LCDs. The RGB digital values of single channels and pairs of channels were given and the corresponding tristimulus values measured; a chart was then plotted and calculations made to test the independence of the RGB channels. The results showed that the interaction between channels was reasonably weak and that the spectral additivity property held well. We also found that the relations between radiances and digital values varied at different wavelengths, that is, they were functions of wavelength. A new calculation method based on a piecewise spectral model, in which the relation between radiance and digital value is fitted by a cubic polynomial in each wavelength interval using measured spectral radiance curves, was proposed and tested. The spectral radiance curves of the RGB primaries at any digital values can be found with only a few measurements and the fitted cubic polynomials, and then any displayed color can be reproduced using the spectral additivity of the primaries at given digital values. The algorithm of this method is discussed in detail in this paper. The computations showed that the proposed method is simple and greatly reduces the number of measurements needed while keeping a very high computational precision. This method can be used as a colorimetric characterization model.
NASA Astrophysics Data System (ADS)
Julianto, E. A.; Suntoro, W. A.; Dewi, W. S.; Partoyo
2018-03-01
Climate change has been reported to exacerbate the degradation of land resources, including soil fertility decline. Using a soil fertility evaluation model of appropriate validity could reduce the risk that climate change poses to plant cultivation. This study aims to assess the validity of soil fertility evaluation models using a graphical approach. The models evaluated were the Indonesian Soil Research Center (PPT) version, the FAO Unesco version, and the Kyuma version. Each model was then correlated with rice production (dry grain weight/GKP). The goodness of fit of each model can be tested to evaluate its quality and validity, along with the coefficient of determination (R2). This research used the Eviews 9 programme with a graphical approach. The results yield three curves: actual, fitted, and residual. If the actual and fitted curves are widely separated or irregular, the quality of the model is not good, or many other factors are still not included in the model (large residuals), and conversely. Indeed, if the actual and fitted curves show exactly the same shape, all factors have already been included in the model. Modification of the standard soil fertility evaluation models can improve the quality and validity of a model.
Use of the Microbial Growth Curve in Postantibiotic Effect Studies of Legionella pneumophila
Smith, Raymond P.; Baltch, Aldona L.; Michelsen, Phyllis B.; Ritz, William J.; Alteri, Richard
2003-01-01
Using the standard Craig and Gudmundsson method (W. A. Craig and S. Gudmundsson, p. 296-329, in V. Lorian, ed., Antibiotics in Laboratory Medicine, 1996) as a guideline for determination of postantibiotic effects (PAE), we studied a large series of growth curves for two strains of Legionella pneumophila. We found that the intensity of the PAE was best determined by using a statistically fitted line over hours 3 to 9 following antibiotic removal. We further determined the PAE duration by using a series of observations of the assay interval from hours 3 to 24. We determined that inoculum reduction was not necessarily the only predictor of the PAE but that the PAE was subject to the type and dose of the drug used in the study. In addition, there was a variation between strains. Only levofloxacin at five and ten times the minimum inhibitory concentration (MIC) resulted in a PAE duration of 4 to 10 h for both strains of L. pneumophila tested. Ciprofloxacin at five and ten times the MIC and azithromycin at ten times the MIC caused a PAE for one strain only. No PAE could be demonstrated for either strain with erythromycin or doxycycline. Using the presently described method of measuring PAE for L. pneumophila, we were able to detect differences in PAE which were dependent upon the L. pneumophila strain, the antibiotic tested, and the antibiotic concentration. We suggest the use of mathematically fitted curves for comparison of bacterial growth in order to measure PAE for L. pneumophila. PMID:12604545
Physical fitness reference standards in fibromyalgia: The al-Ándalus project.
Álvarez-Gallardo, I C; Carbonell-Baeza, A; Segura-Jiménez, V; Soriano-Maldonado, A; Intemann, T; Aparicio, V A; Estévez-López, F; Camiletti-Moirón, D; Herrador-Colmenero, M; Ruiz, J R; Delgado-Fernández, M; Ortega, F B
2017-11-01
We aimed (1) to report age-specific physical fitness levels in people with fibromyalgia in a representative sample from Andalusia and (2) to compare the fitness levels of people with fibromyalgia with those of non-fibromyalgia controls. This cross-sectional study included 468 patients with fibromyalgia (21 men) and 360 controls (55 men). The fibromyalgia sample was geographically representative of southern Spain. Physical fitness was assessed with the Senior Fitness Test battery plus the handgrip test. We applied the Generalized Additive Model for Location, Scale and Shape to calculate percentile curves for women and fitted mean curves using linear regression for men. Our results show that people with fibromyalgia performed worse in all fitness tests than controls (P < 0.001) across all age ranges (P < 0.001). This study provides a comprehensive description of age-specific physical fitness levels among patients with fibromyalgia and controls in a large sample of patients with fibromyalgia from the south of Spain. Physical fitness levels of people with fibromyalgia from Andalusia are very low in comparison with age-matched healthy controls. This information could be useful for correctly interpreting physical fitness assessments and could help health care providers identify individuals at risk of losing physical independence. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Visual Tracking Using 3D Data and Region-Based Active Contours
2016-09-28
adaptive control strategies which explicitly take uncertainty into account. Filtering methods ranging from the classical Kalman filters valid for...linear systems to the much more general particle filters also fit into this framework in a very natural manner. In particular, the particle filtering ...the number of samples required for accurate filtering increases with the dimension of the system noise. In our approach, we approximate curve
VizieR Online Data Catalog: Catalog of Kepler flare stars (Van Doorsselaere+, 2017)
NASA Astrophysics Data System (ADS)
van Doorsselaere, T.; Shariati, H.; Debosscher, J.
2017-11-01
With an automated detection method, we have identified stellar flares in the long cadence observations of Kepler during quarter 15. We list each flare time for the respective Kepler objects. Furthermore, we list the flare amplitude and decay time after fitting the flare light curve with an exponential decay. Flare start times in long cadence data of Kepler during quarter 15. (1 data file).
Exploring the optimum step size for defocus curves.
Wolffsohn, James S; Jinabhai, Amit N; Kingsnorth, Alec; Sheppard, Amy L; Naroo, Shehzad A; Shah, Sunil; Buckhurst, Phillip; Hall, Lee A; Young, Graeme
2013-06-01
To evaluate the effect of reducing the number of visual acuity measurements made in a defocus curve on the quality of data quantified. Midland Eye, Solihull, United Kingdom. Evaluation of a technique. Defocus curves were constructed by measuring visual acuity on a distance logMAR letter chart, randomizing the test letters between lens presentations. The lens powers evaluated ranged between +1.50 diopters (D) and -5.00 D in 0.50 D steps, which were also presented in a randomized order. Defocus curves were measured binocularly with the Tecnis diffractive, Rezoom refractive, Lentis rotationally asymmetric segmented (+3.00 D addition [add]), and Finevision trifocal multifocal intraocular lenses (IOLs) implanted bilaterally, and also for the diffractive IOL and refractive or rotationally asymmetric segmented (+3.00 D and +1.50 D adds) multifocal IOLs implanted contralaterally. Relative and absolute range of clear-focus metrics and area metrics were calculated for curves fitted using 0.50 D, 1.00 D, and 1.50 D steps and a near add-specific profile (ie, distance, half the near add, and the full near-add powers). A significant difference in simulated results was found in at least 1 of the relative or absolute range of clear-focus or area metrics for each of the multifocal designs examined when the defocus-curve step size was increased (P<.05). Faster methods of capturing defocus curves from multifocal IOL designs appear to distort the metric results and are therefore not valid. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2013 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
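The area metrics referred to above amount to numerically integrating acuity over the defocus range, which also illustrates why coarser step sizes distort the result; a sketch with hypothetical logMAR values and an assumed 0.3 logMAR criterion:

```python
import numpy as np

defocus = np.arange(1.5, -5.01, -0.5)        # +1.50 D to -5.00 D in 0.50 D steps
logmar = np.array([0.45, 0.30, 0.18, 0.08, 0.02, 0.00, 0.06,
                   0.15, 0.22, 0.20, 0.16, 0.25, 0.40, 0.55])  # hypothetical acuities

gain = np.clip(0.3 - logmar, 0.0, None)      # acuity better than the criterion
area_05 = np.trapz(gain, dx=0.5)             # full 0.50 D sampling
area_10 = np.trapz(gain[::2], dx=1.0)        # simulated 1.00 D sampling
print(area_05, area_10)                      # the coarser grid shifts the metric
```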
A first-principles model for estimating the prevalence of annoyance with aircraft noise exposure.
Fidell, Sanford; Mestre, Vincent; Schomer, Paul; Berry, Bernard; Gjestland, Truls; Vallet, Michel; Reid, Timothy
2011-08-01
Numerous relationships between noise exposure and transportation noise-induced annoyance have been inferred by curve-fitting methods. The present paper develops a different approach. It derives a systematic relationship by applying an a priori, first-principles model to the findings of forty-three studies of the annoyance of aviation noise. The rate of change of annoyance with day-night average sound level (DNL) due to aircraft noise exposure was found to closely resemble the rate of change of loudness with sound level. The agreement of model predictions with the findings of recent curve-fitting exercises (cf. Miedema and Vos, 1998) is noteworthy, considering that other analyses have relied on different analytic methods and disparate data sets. Even though annoyance prevalence rates within individual communities consistently grow in proportion to duration-adjusted loudness, variability in annoyance prevalence rates across communities remains great. The present analyses demonstrate that (1) community-specific differences in annoyance prevalence rates can be plausibly attributed to the joint effect of acoustic and non-DNL-related factors and (2) a simple model can account for the aggregate influences of non-DNL-related factors on annoyance prevalence rates in different communities in terms of a single parameter expressed in DNL units, a "community tolerance level."
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2004-01-01
A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
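The Hilbert-transform step can be illustrated with scipy's analytic-signal routine: the envelope gives the local energy, and the derivative of the unwrapped phase gives the instantaneous frequency. This is a generic sketch on a toy chirp, not the patented implementation:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
imf = np.cos(2 * np.pi * (5 * t + 10 * t ** 2))  # toy IMF: chirp from 5 to 25 Hz

analytic = hilbert(imf)                          # analytic signal x + i*H[x]
envelope = np.abs(analytic)                      # local amplitude (energy)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)    # instantaneous frequency in Hz
print(inst_freq[100], inst_freq[800])            # ~7 Hz early, ~21 Hz late
```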
A new experimental correlation for non-Newtonian behavior of COOH-DWCNTs/antifreeze nanofluid
NASA Astrophysics Data System (ADS)
Izadi, Farhad; Ranjbarzadeh, Ramin; Kalbasi, Rasool; Afrand, Masoud
2018-04-01
In this paper, the rheological behavior of a nano-antifreeze consisting of 50 vol.% water, 50 vol.% ethylene glycol, and different quantities of functionalized double-walled carbon nanotubes was investigated experimentally. Initially, nano-antifreeze samples were prepared with solid volume fractions of 0.05, 0.1, 0.2, 0.4, 0.6, 0.8 and 1% using the two-step method. Then, the dynamic viscosity of the nano-antifreeze samples was measured at different shear rates and temperatures. At this stage, the results showed that the base fluid had Newtonian behavior, while the behavior of all nano-antifreeze samples was non-Newtonian. Since the behavior of the samples followed the power-law model, the constants of this model, namely the consistency index and the power-law index, were sought. Using the measured viscosities and shear rates, the consistency index and power-law index were obtained by the curve-fitting method. The obtained values showed that the consistency index increased with increasing volume fraction, while it decreased with increasing temperature. Moreover, the obtained values of the power-law index were less than 1 for all samples, which indicates shear-thinning behavior. Lastly, new correlations were suggested to estimate the consistency index and power-law index using curve fitting.
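With the power-law model τ = m·γ̇ⁿ, the consistency index m and power-law index n follow from a straight-line fit in log-log coordinates; a sketch under those standard definitions, with hypothetical measurements:

```python
import numpy as np

# Hypothetical shear rates (1/s) and shear stresses (Pa) for one sample
gamma_dot = np.array([10, 20, 50, 100, 200, 500.0])
tau = np.array([0.55, 0.95, 1.95, 3.35, 5.80, 11.9])

# tau = m * gamma_dot**n  =>  log(tau) = log(m) + n * log(gamma_dot)
n, log_m = np.polyfit(np.log(gamma_dot), np.log(tau), 1)
print(f"m = {np.exp(log_m):.3f}, n = {n:.3f}  (n < 1 indicates shear thinning)")
```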
Nonlinear dynamic analysis and optimal trajectory planning of a high-speed macro-micro manipulator
NASA Astrophysics Data System (ADS)
Yang, Yi-ling; Wei, Yan-ding; Lou, Jun-qiang; Fu, Lei; Zhao, Xiao-wei
2017-09-01
This paper reports the nonlinear dynamic modeling and optimal trajectory planning of a flexure-based macro-micro manipulator dedicated to large-scale, high-speed tasks. In particular, a macro-micro manipulator composed of a servo motor, a rigid arm, and a compliant microgripper is considered, and both flexure hinges and flexible beams are taken into account. By combining the pseudo-rigid-body model method, the assumed mode method, and the Lagrange equation, the overall dynamic model is derived. Then, the rigid-flexible coupling characteristics are analyzed by numerical simulations. After that, the microscopic-scale vibration excited by the large-scale motion is reduced through a trajectory planning approach. In particular, a fitness function based on the comprehensive excitation torque of the compliant microgripper is proposed. A reference curve and an interpolation curve using quintic polynomial trajectories are adopted. Afterwards, an improved genetic algorithm is used to identify the optimal trajectory by minimizing the fitness function. Finally, numerical simulations and experiments validate the feasibility and effectiveness of the established dynamic model and the trajectory planning approach. The amplitude of the residual vibration is reduced by approximately 54.9%, and the settling time decreases by 57.1%. Therefore, operation efficiency and manipulation stability are significantly improved.
Nongaussian distribution curve of heterophorias among children.
Letourneau, J E; Giroux, R
1991-02-01
The purpose of this study was to measure the distribution curves of horizontal and vertical phorias among children. Kolmogorov-Smirnov goodness-of-fit tests showed that these distribution curves were not Gaussian among 2048 children aged 6 to 13 years. The distribution curves of horizontal phoria at far and of vertical phorias at far and at near were leptokurtic; the distribution curve of horizontal phoria at near was platykurtic. No variation of the distribution curves of heterophorias with age was observed. Comparisons of any individual findings with the general distribution should take the non-Gaussian distribution curve of heterophorias into account.
Kohara, Aiko; Han, ChangWan; Kwon, HaeJin; Kohzuki, Masahiro
2015-11-01
The improvement of the quality of life (QOL) of children with disabilities has been considered important. Therefore, the Special Needs Education Assessment Tool (SNEAT) was developed, based on the concept of QOL, to objectively evaluate the educational outcomes of children with disabilities. SNEAT consists of 11 items in three domains: physical functioning, mental health, and social functioning. This study aimed to verify the reliability and construct validity of SNEAT using data from 93 children in classes on independent activities of daily living for children with disabilities in Okinawa Prefecture between October and November 2014. Survey data were collected in a longitudinal prospective cohort study. The reliability of SNEAT was verified via the internal consistency method and the test-retest method; both Cronbach's α coefficient and the intra-class correlation coefficient were over 0.7. The validity of SNEAT was also verified via one-way repeated-measures ANOVA and a latent growth curve model. The scores of all the items and domains and the total scores obtained from one-way repeated-measures ANOVA were the same as the predicted scores. SNEAT is valid based on its goodness-of-fit values obtained using the latent growth curve model, where the comparative fit index (0.983) and root mean square error of approximation (0.062) were within the goodness-of-fit range. These results indicate that SNEAT has high reliability and construct validity and may contribute to improving the QOL of children with disabilities in classes on independent activities of daily living.
NASA Astrophysics Data System (ADS)
Bhattacharjee, Sudip; Swamy, Aravind Krishna; Daniel, Jo S.
2012-08-01
This paper presents a simple and practical approach to obtaining the continuous relaxation and retardation spectra of asphalt concrete directly from complex (dynamic) modulus test data. The spectra thus obtained are continuous functions of relaxation and retardation time. The major advantage of this method is that the continuous form is obtained directly from the master curves, which are readily available from standard characterization tests of the linearly viscoelastic behavior of asphalt concrete. The continuous spectrum method offers an efficient alternative to the numerical computation of discrete spectra and can easily be used for modeling viscoelastic behavior. In this research, asphalt concrete specimens were tested for linearly viscoelastic characterization. The linearly viscoelastic test data were used to develop storage modulus and storage compliance master curves. The continuous spectra are obtained from the fitted sigmoid function of the master curves via the inverse integral transform, and are shown to be the limiting case of the discrete distributions. The continuous spectra and the time-domain viscoelastic functions (relaxation modulus and creep compliance) computed from the spectra matched the approximate solutions very well. It is observed that the shape of the spectra depends on the master curve parameters. The continuous spectra thus obtained can easily be implemented in the material mix design process. Prony-series coefficients can be obtained readily from the continuous spectra and used in numerical analyses such as finite element analysis.
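For reference, a sigmoid form widely used for asphalt concrete dynamic modulus master curves, and a plausible reading of the fitted function mentioned above (the paper's exact parameterization is not given in the abstract), is

```latex
\log \lvert E^{*} \rvert = \delta + \frac{\alpha}{1 + e^{\beta + \gamma \log t_{r}}}
```

where t_r is the reduced time, δ is the lower asymptote, and δ + α the upper asymptote; the continuous spectra then follow from this fitted function via the inverse integral transform.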
Horn, P L; Neil, H L; Paul, L J; Marriott, P
2010-11-01
Age validation of bluenose Hyperoglyphe antarctica was sought using the independent bomb chronometer procedure. Radiocarbon (14C) levels were measured in core micro-samples from 12 otoliths that had been aged using a zone count method. The core 14C measurement for each fish was compared with the value on a surface water reference curve for the calculated birth year of the fish. There was good agreement, indicating that the line-count ageing method described here is not substantially biased. A second micro-sample was also taken near the edge of nine of the otolith cross-sections to help define a bomb-carbon curve for waters deeper than 200-300 m. There appears to be a 10 to 15 year lag in the time it takes the 14C to reach the waters where adult H. antarctica are concentrated. The maximum estimated age of this species was 76 years, and females grow significantly larger than males. Von Bertalanffy growth curves were estimated, and although they fit the available data reasonably well, the lack of aged juvenile fish results in the K and t0 parameters being biologically meaningless. Consequently, curves that are likely to better represent population growth were estimated by forcing t0 to be -0.5. © 2010 NIWA. Journal of Fish Biology © 2010 The Fisheries Society of the British Isles.
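The von Bertalanffy growth curve referred to above has the standard form

```latex
L(t) = L_{\infty}\bigl(1 - e^{-K (t - t_{0})}\bigr)
```

where L∞ is the asymptotic length, K the growth coefficient, and t0 the nominal age at zero length; fixing t0 = -0.5, as the authors do, constrains the fit where aged juveniles are missing.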
Fourier plane modeling of the jet in the galaxy M81
NASA Astrophysics Data System (ADS)
Ramessur, Arvind; Bietenholz, Michael F.; Leeuw, Lerothodi L.; Bartel, Norbert
2015-03-01
The nearby spiral galaxy M81 has a low-luminosity Active Galactic Nucleus in its center with a core and a one-sided curved jet, dubbed M81*, that is barely resolved with VLBI. To derive basic parameters such as the length of the jet, its orientation and curvature, the usual method of model-fitting with point sources and elliptical Gaussians may not always be the most appropriate one. We are developing Fourier-plane models for such sources, in particular an asymmetric triangle model to fit the extensive set of VLBI data of M81* in the u-v plane. This method may have an advantage over conventional ones in extracting information close to the resolution limit to provide us with a more comprehensive picture of the structure and evolution of the jet. We report on preliminary results.
Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images
NASA Astrophysics Data System (ADS)
Kamble, V. M.; Bhurchandi, K.
2018-03-01
Evaluating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation present in an image using the median of wavelet transform coefficients and then obtain a near-exact estimate using curve fitting. The proposed noise estimation method provides the estimate of noise within an average error of ±4%. For quality assessment, this noise estimate is mapped to the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield both the noise estimate and an image quality score. Images from the Laboratory for Image and Video Processing (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
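The initial estimate described above is commonly computed with the robust median estimator applied to the finest diagonal wavelet subband, σ ≈ median(|d|)/0.6745; a sketch of that first stage only (the curve-fitting refinement is omitted), assuming the PyWavelets package:

```python
import numpy as np
import pywt

def estimate_sigma(image):
    """Initial noise estimate from the finest diagonal (HH) wavelet subband."""
    _, (_, _, cD) = pywt.dwt2(image.astype(float), 'db1')
    return np.median(np.abs(cD)) / 0.6745     # robust median absolute deviation

rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 10.0, (256, 256))     # pure Gaussian noise, sigma = 10
print(estimate_sigma(noisy))                  # close to 10
```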
STACCATO: a novel solution to supernova photometric classification with biased training sets
NASA Astrophysics Data System (ADS)
Revsbech, E. A.; Trotta, R.; van Dyk, D. A.
2018-01-01
We present a new solution to the problem of classifying Type Ia supernovae from their light curves alone, given a spectroscopically confirmed but biased training set, circumventing the need to obtain an observationally expensive unbiased training set. We use Gaussian processes (GPs) to model the supernova (SN) light curves, and demonstrate that the choice of covariance function has only a small influence on the GPs' ability to accurately classify SNe. We extend and improve the approach of Richards et al. (a diffusion map combined with a random forest classifier) to deal specifically with the case of biased training sets. We propose a novel method, called Synthetically Augmented Light Curve Classification (STACCATO), that synthetically augments a biased training set by generating additional training data from the fitted GPs. Key to the success of the method is the partitioning of the observations into subgroups based on their propensity score of being included in the training set. Using simulated light curve data, we show that STACCATO increases performance, as measured by the area under the Receiver Operating Characteristic curve (AUC), from 0.93 to 0.96, close to the AUC of 0.977 obtained using the 'gold standard' of an unbiased training set and significantly improving on the previous best result of 0.88. STACCATO also increases the true positive rate for SNIa classification by up to a factor of 50 for high-redshift/low-brightness SNe.
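The GP modeling and augmentation steps can be sketched with scikit-learn: fit a GP to one light curve, then draw posterior samples as synthetic training curves. A schematic on hypothetical data, not the STACCATO code itself:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical light curve: observation times (days) and fluxes
t = np.array([[0.0], [5.0], [12.0], [20.0], [26.0], [35.0], [50.0]])
flux = np.array([0.2, 0.9, 1.8, 2.4, 2.1, 1.2, 0.5])

gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(0.01))
gp.fit(t, flux)

grid = np.linspace(0.0, 50.0, 100)[:, None]
synthetic = gp.sample_y(grid, n_samples=5, random_state=1)  # augmented curves
print(synthetic.shape)                                      # (100, 5)
```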
NASA Astrophysics Data System (ADS)
Lu, Shan; Zhang, Hanmo
2016-01-01
To meet the requirements of autonomous orbit determination, this paper proposes a fast curve fitting method based on earth ultraviolet features to obtain an accurate earth vector direction, in order to achieve high-precision autonomous navigation. Firstly, exploiting the stable characteristics of earth ultraviolet radiance and using atmospheric radiation transmission model software, the paper simulates the earth ultraviolet radiation model at different times and chooses a proper observation band. Then a fast, improved edge extraction method combining the Sobel operator and local binary patterns (LBP) is utilized, which can both eliminate noise efficiently and extract earth ultraviolet limb features accurately. Earth centroid locations on the simulated images are then estimated via least-squares fitting using part of the limb edges. Taking advantage of the estimated earth vector direction and earth distance, an Extended Kalman Filter (EKF) is finally applied to realize autonomous navigation. Experimental results indicate that the proposed method can achieve sub-pixel earth centroid location estimation and greatly enhance autonomous celestial navigation precision.
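The least-squares limb fit can be illustrated with the algebraic (Kasa) circle fit, which linearizes x² + y² = 2ax + 2by + c so that the centroid (a, b) comes from one linear solve; the paper does not spell out its formulation, so this particular fit is an assumption:

```python
import numpy as np

def fit_circle(x, y):
    """Kasa fit: solve x^2 + y^2 = 2ax + 2by + c in the least-squares sense."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)   # center and radius

rng = np.random.default_rng(0)
theta = np.linspace(0.3, 2.2, 80)               # only part of the limb is visible
x = 320 + 150 * np.cos(theta) + rng.normal(0.0, 0.5, theta.size)
y = 240 + 150 * np.sin(theta) + rng.normal(0.0, 0.5, theta.size)
print(fit_circle(x, y))                         # approximately (320, 240, 150)
```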
NASA Astrophysics Data System (ADS)
Ong, Vincent K. S.
1998-04-01
The extraction of diffusion length and surface recombination velocity in a semiconductor with the use of an electron beam induced current line scan has traditionally been done by fitting the line scan to complicated theoretical equations. It was recently shown that a much simpler equation is sufficient for the extraction of diffusion length, with the linearization coefficient being the only variable that needs to be adjusted in the curve fitting process. However, complicated equations were still necessary for the extraction of surface recombination velocity. It is shown in this article that it is indeed possible to extract surface recombination velocity with a simple equation, using only one variable, the linearization coefficient. An intuitive feel for the reasoning behind the method is discussed. The accuracy of the method was verified with the use of three-dimensional computer simulation, and was found to be even slightly better than that of the best existing method.
Three-dimensional trend mapping from wire-line logs
Doveton, J.H.; Ke-an, Z.
1985-01-01
Mapping of lithofacies and porosities of stratigraphic units is complicated because these properties vary in three dimensions. The method of moments was proposed by Krumbein and Libby (1957) as a technique to aid in resolving this problem. Moments are easily computed from wireline logs and are simple statistics which summarize vertical variation in a log trace. Combinations of moment maps have proved useful in understanding vertical and lateral changes in lithology of sedimentary rock units. Although moments have meaning both as statistical descriptors and as mechanical properties, they also define polynomial curves which approximate lithologic changes as a function of depth. These polynomials can be fitted by least-squares methods, partitioning major trends in rock properties from fine-scale fluctuations. Analysis of variance yields the degree of fit of any polynomial and measures the proportion of vertical variability expressed by any moment or combination of moments. In addition, polynomial curves can be differentiated to determine depths at which pronounced expressions of facies occur and to determine the locations of boundaries between major lithologic subdivisions. Moments can be estimated at any location in an area by interpolating from log moments at control wells. A matrix algebra operation then converts moment estimates to coefficients of a polynomial function which describes a continuous curve of lithologic variation with depth. If this procedure is applied to a grid of geographic locations, the result is a model of variability in three dimensions. Resolution of the model is determined largely by the number of moments used in its generation. The method is illustrated with an analysis of lithofacies in the Simpson Group of south-central Kansas; the three-dimensional model is shown as cross sections and slice maps. In this study, the gamma-ray log is used as a measure of shaliness of the unit. However, the method is general and can be applied, for example, to suites of neutron, density, or sonic logs to produce three-dimensional models of porosity in reservoir rocks. © 1985 Plenum Publishing Corporation.
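The moment-to-coefficient matrix operation is only outlined above, but the trend-fitting and differentiation steps can be illustrated directly; a sketch with a hypothetical gamma-ray trace:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Hypothetical gamma-ray log: depth z (m) and a shaliness proxy g(z) (API units)
z = np.linspace(1500.0, 1560.0, 121)
g = 60.0 + 25.0 * np.sin(z / 8.0) + np.random.default_rng(1).normal(0, 3, z.size)

coeffs = P.polyfit(z, g, deg=4)          # least-squares polynomial trend
trend = P.polyval(z, coeffs)

# Degree of fit via analysis of variance: fraction of variance explained
r2 = 1.0 - np.sum((g - trend)**2) / np.sum((g - g.mean())**2)

# Differentiate the trend to locate depths of pronounced lithologic change
slope = P.polyval(z, P.polyder(coeffs))
```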
A simple and robust method for artifacts correction on X-ray microtomography images
NASA Astrophysics Data System (ADS)
Timofey, Sizonenko; Marina, Karsanina; Dina, Gilyazetdinova; Irina, Bayuk; Kirill, Gerke
2017-04-01
X-ray microtomography images of rock material often suffer from several kinds of distortion, due to reasons such as X-ray attenuation, beam hardening, and irregular distribution of liquid/solid phases. Further distortions can arise from image processing and from stitching together images from different measurements. Beam hardening is a well-known and well-studied distortion which is relatively easy to describe, fit, and correct using a number of equations. However, this is not the case for other grey-scale intensity distortions. Shading caused by irregular distribution of liquid phases, incorrect scanner operation or parameter choices, as well as numerous artefacts from mathematical reconstruction from projections, including stitching of separate scans, cannot be described by a single mathematical model. To correct grey-scale intensities on large 3D images we developed a software package. The traditional method for removing beam hardening [1] was modified in order to find the centre of distortion. The main contribution of this work is the development of a method for arbitrary image correction. This method is based on fitting the distortion with Bezier curves using the image histogram. The distortion along the image is represented by a number of Bezier curves and one base line that characterizes the natural distribution of grey values along the image. All of these curves are set manually by the operator. We have tested our approaches on different X-ray microtomography images of porous media. Arbitrary correction removes all principal distortions. After correction, the images were binarized and pore networks were subsequently extracted. An even distribution of pore-network elements along the image was the criterion used to verify the proposed grey-scale intensity correction technique. [1] Iassonov, P. and Tuller, M., 2010. Application of segmentation for correction of intensity bias in X-ray computed tomography images. Vadose Zone Journal, 9(1), pp.187-191.
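The correction model itself is operator-defined, but the Bezier-curve building block is standard; a minimal de Casteljau evaluator, with hypothetical control points in (normalized position, grey-level offset) space:

```python
import numpy as np

def bezier(control_points, t):
    """Evaluate a Bezier curve at parameters t via de Casteljau's algorithm."""
    pts = np.asarray(control_points, dtype=float)     # (n_ctrl, 2)
    t = np.atleast_1d(t)[:, None, None]               # (T, 1, 1)
    p = np.broadcast_to(pts, (t.shape[0],) + pts.shape).copy()
    while p.shape[1] > 1:                             # repeated linear interpolation
        p = (1.0 - t) * p[:, :-1] + t * p[:, 1:]
    return p[:, 0]                                    # (T, 2) points on the curve

# Hypothetical operator-set distortion profile along a normalized image axis
profile = bezier([(0.0, 0.0), (0.3, 12.0), (0.7, 18.0), (1.0, 5.0)],
                 np.linspace(0.0, 1.0, 256))
```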
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viraganathan, H; Jiang, R; Chow, J
Purpose: We propose a method to predict the change of dose-volume histogram (DVH) for the PTV due to patient weight loss in prostate volumetric modulated arc therapy (VMAT). This method is based on a pre-calculated patient dataset and DVH curve fitting using the Gaussian error function (GEF). Methods: Pre-calculated dose-volume data from patients having weight loss in prostate VMAT were employed to predict the change of PTV coverage due to reduced depth in the external contour. The effect of patient weight loss in treatment was described by a prostate dose-volume factor (PDVF), which was evaluated for the prostate PTV. Along with the PDVF, the GEF was used to fit the DVH curve for the PTV. To predict a new DVH due to weight loss, parameters of the GEF describing the shape of the DVH curve were determined. Since the parameters were related to the PDVF at the specific reduced depth, we could first predict the PDVF at a reduced depth based on the prostate size from the pre-calculated dataset. Parameters of the GEF could then be determined from the PDVF to plot the new DVH for the PTV corresponding to the reduced depth. Results: A MATLAB program was built based on the patient dataset with different prostate sizes. We input data of the prostate size and reduced depth of the patient into the program. The program then calculated the PDVF and DVH for the PTV considering the patient weight loss. The program was verified with different patient cases with various reduced depths. Conclusion: Our method can estimate the change of DVH for the PTV due to patient weight loss quickly, without CT rescan and replanning. This helps radiation staff predict the change of PTV coverage when the patient's external contour is reduced in prostate VMAT.
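The abstract names the GEF as the DVH model but not its parametrization; assuming a plausible two-parameter sigmoid form, fitting sampled DVH points might look like:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def dvh_gef(dose, d50, sigma):
    """Cumulative DVH model: fraction of the PTV receiving at least `dose`,
    expressed with the Gaussian error function (assumed two-parameter form)."""
    return 0.5 * (1.0 - erf((dose - d50) / (np.sqrt(2.0) * sigma)))

dose = np.array([60.0, 66.0, 70.0, 74.0, 76.0, 78.0, 80.0])   # Gy (hypothetical)
vol = np.array([1.00, 0.99, 0.97, 0.80, 0.50, 0.15, 0.02])    # fractional volume

(d50, sigma), _ = curve_fit(dvh_gef, dose, vol, p0=[76.0, 2.0])
```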
Round Robin Test of Residual Resistance Ratio of Nb3Sn Composite Superconductors
Matsushita, Teruo; Otabe, Edmund Soji; Kim, Dong Ho; ...
2017-12-07
A round robin test of the residual resistance ratio (RRR) was performed for Nb3Sn composite superconductors prepared by the internal tin method, carried out by six institutes with the international standard test method described in IEC 61788-4. It was found that the uncertainty mainly resulted from determination of the cryogenic resistance from the intersection of two straight lines drawn to fit the voltage vs. temperature curve around the resistive transition. As a result, the measurement clarified that RRR can be measured with an expanded uncertainty not larger than 5% with a coverage factor of 2 by using this test method.
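The step on which the uncertainty analysis centres - intersecting two fitted straight lines around the resistive transition - can be sketched as follows (temperatures, voltages, and the split point are hypothetical):

```python
import numpy as np

T = np.array([14.0, 15.0, 16.0, 18.5, 19.0, 19.5, 20.0])  # K
V = np.array([0.02, 0.03, 0.04, 1.10, 1.25, 1.40, 1.55])  # uV

sc = T < 17.0                          # points assigned to the superconducting side
m1, b1 = np.polyfit(T[sc], V[sc], 1)   # line below the transition
m2, b2 = np.polyfit(T[~sc], V[~sc], 1) # line above the transition

T_x = (b2 - b1) / (m1 - m2)            # intersection temperature
V_cryo = m1 * T_x + b1                 # cryogenic voltage; with the current and the
                                       # room-temperature resistance this yields RRR
```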
Round Robin Test of Residual Resistance Ratio of Nb3Sn Composite Superconductors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsushita, Teruo; Otabe, Edmund Soji; Kim, Dong Ho
A round robin test of the residual resistance ratio (RRR) was performed for Nb3Sn composite superconductors prepared by the internal tin method, carried out by six institutes with the international standard test method described in IEC 61788-4. It was found that the uncertainty mainly resulted from determination of the cryogenic resistance from the intersection of two straight lines drawn to fit the voltage vs. temperature curve around the resistive transition. As a result, the measurement clarified that RRR can be measured with an expanded uncertainty not larger than 5% with a coverage factor of 2 by using this test method.
NASA Astrophysics Data System (ADS)
Han, Ming
In this dissertation, a detailed and systematic theoretical and experimental study of low-finesse extrinsic Fabry-Perot interferometric (EFPI) fiber optic sensors, together with their signal processing methods for white-light systems, is presented. The work aims to provide a better understanding of the operational principle of EFPI fiber optic sensors, and is useful and important in the design, optimization, fabrication, and application of single-mode fiber (SMF) EFPI (SMF-EFPI) and multimode fiber (MMF) EFPI (MMF-EFPI) sensor systems. The cases of SMF-EFPI and MMF-EFPI sensors are considered separately. In the analysis of SMF-EFPI sensors, the light transmitted in the fiber is approximated by a Gaussian beam, and the obtained spectral transfer function of the sensors includes an extra phase shift due to the light coupling at the fiber end-face. This extra phase shift has not been addressed by previous researchers and is of great importance for high-accuracy and high-resolution signal processing of white-light SMF-EFPI systems. Fringe visibility degradation due to gap-length increase and sensor imperfections is studied. The results indicate that the fringe visibility of a SMF-EFPI sensor is relatively insensitive to gap-length changes and sensor imperfections. Based on the spectral fringe pattern predicted by the theory of SMF-EFPI sensors, a novel curve fitting signal processing method (Type 1 curve-fitting method) is presented for white-light SMF-EFPI sensor systems. Other spectral domain signal processing methods, including the wavelength-tracking, Type 2-3 curve fitting, Fourier transform, and two-point interrogation methods, are reviewed and systematically analyzed. Experiments were carried out to compare the performances of these signal processing methods. The results have shown that the Type 1 curve fitting method simultaneously achieves high accuracy, high resolution, large dynamic range, and the capability of absolute measurement, while the others either have lower resolution or are not capable of absolute measurement. Previous mathematical models for MMF-EFPI sensors are all based on geometric optics; therefore their applications have many limitations. In this dissertation, a modal theory is developed that can be used in any situation and is more accurate. The mathematical description of the spectral fringes of MMF-EFPI sensors is obtained by the modal theory. The effect of system parameters, including the sensor head structure, the fiber parameters, and the mode power distribution in the MMF, on the fringe visibility of MMF-EFPI sensors is analyzed. Experiments were carried out to validate the theory. The fundamental mechanisms that cause the degradation of the fringe visibility in MMF-EFPI sensors are revealed. It is shown that, in some situations in which high fringe visibility is important and difficult to achieve, a simple method of launching the light into the MMF-EFPI sensor system from the output of a SMF can be used to improve the fringe visibility and to ease the fabrication difficulties of MMF-EFPI sensors. Signal processing methods that are well understood in white-light SMF-EFPI sensor systems may exhibit new aspects when they are applied to white-light MMF-EFPI sensor systems. This dissertation reveals that variations of the mode power distribution (MPD) in the MMF can cause phase variations of the spectral fringes from a MMF-EFPI sensor and introduce measurement errors for any signal processing method in which the phase information is used.
This MPD effect on the wavelength-tracking method in white-light MMF-EFPI sensors is theoretically analyzed. The fringe phase changes caused by MPD variations were experimentally observed, thus validating the MPD effect.
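Not the dissertation's Type 1 method (its extra coupling phase term is not given here), but a generic low-finesse two-beam fringe model fitted to a white-light spectrum shows the kind of curve fitting involved; all values are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def efpi_fringes(wl_nm, A, B, gap_um, phi):
    """Two-beam EFPI spectrum I = A + B*cos(4*pi*L/lambda + phi),
    with the gap L in micrometres and wavelength in nanometres (air cavity)."""
    return A + B * np.cos(4.0e3 * np.pi * gap_um / wl_nm + phi)

wl = np.linspace(1520.0, 1570.0, 500)              # scanned band (nm)
meas = efpi_fringes(wl, 1.0, 0.4, 40.0, 0.3)       # synthetic "measured" spectrum
meas += np.random.default_rng(0).normal(0, 0.01, wl.size)

p0 = [1.0, 0.4, 40.2, 0.0]                         # coarse gap guess, e.g. from an FFT
popt, _ = curve_fit(efpi_fringes, wl, meas, p0=p0) # popt[2]: absolute gap length
```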
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, S.; Gezari, S.; Heinis, S.
2015-03-20
We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time series in four Pan-STARRS1 photometric bands: g_P1, r_P1, i_P1, and z_P1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model; and one stochastic light-curve model, the Ornstein-Uhlenbeck process, in order to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm to these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SVs and BL transients occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
Estimating regional centile curves from mixed data sources and countries.
van Buuren, Stef; Hayes, Daniel J; Stasinopoulos, D Mikis; Rigby, Robert A; ter Kuile, Feiko O; Terlouw, Dianne J
2009-10-15
Regional or national growth distributions can provide vital information on the health status of populations. In most resource-poor countries, however, the required anthropometric data from purpose-designed growth surveys are not readily available. We propose a practical method for estimating regional (multi-country) age-conditional weight distributions based on existing survey data from different countries. We developed a two-step method by which one is able to model data with widely different age ranges and sample sizes. The method produces references both at the country level and at the regional (multi-country) level. The first step models country-specific centile curves by Box-Cox t and Box-Cox power exponential distributions, implemented in the generalized additive model for location, scale and shape (GAMLSS) framework, through a common model. Individual countries may vary in location and spread. The second step defines the regional reference from a finite mixture of the country distributions, weighted by population size. To demonstrate the method we fitted the weight-for-age distribution of 12 countries in South East Asia and the Western Pacific, based on 273,270 observations. We modeled both the raw body weight and the corresponding Z score, and obtained a good fit between the final models and the original data for both solutions. We briefly discuss an application of the generated regional references to obtain appropriate, region-specific, age-based dosing regimens of drugs used in the tropics. The method is an affordable and efficient strategy to estimate regional growth distributions where the standard, costly alternatives are not an option. Copyright (c) 2009 John Wiley & Sons, Ltd.
Microtremors study applying the SPAC method in Colima state, Mexico.
NASA Astrophysics Data System (ADS)
Vázquez Rosas, R.; Aguirre González, J.; Mijares Arellano, H.
2007-05-01
One of the main parts of seismic risk studies is to determine the site effect, which can be estimated by means of microtremor measurements. From the H/V spectral ratio (Nakamura, 1989), the predominant period of the site can be estimated, although the predominant period by itself cannot represent the site effect over a wide range of frequencies and does not provide information on the stratigraphy. The SPAC method (Spatial Auto-Correlation Method, Aki 1957), on the other hand, is useful for estimating the stratigraphy of the site. It is based on the simultaneous recording of microtremors at several stations deployed in an instrumental array. Through computation of the spatial autocorrelation coefficient, the Rayleigh wave dispersion curve can be obtained. Finally, the stratigraphic model (thickness, S and P wave velocity, and density of each layer) is estimated by fitting the theoretical dispersion curve to the observed one. The theoretical dispersion curve is initially computed using a proposed model, which is modified several times until the theoretical curve fits the observations. This method requires a minimum of three stations at which the microtremors are observed simultaneously. We applied the SPAC method to six sites in Colima state, Mexico: Santa Bárbara, Cerro de Ortega, Tecomán, Manzanillo, and two in Colima city. In total, 16 arrays were deployed, using equilateral triangles with apertures from a minimum of 5 m to a maximum of 60 m. For recording microtremors we used short-period (5 seconds) velocity-type vertical sensors connected to a K2 (Kinemetrics) acquisition system. We were able to estimate the velocities of the most superficial layers, reaching different depths at each site. For the Santa Bárbara site the exploration depth was about 30 m, for Tecomán 12 m, for Manzanillo 35 m, for Cerro de Ortega 68 m, and the deepest exploration was obtained in Colima city, with a depth of around 73 m. The S wave velocities fluctuate between 230 m/s and 420 m/s for the most superficial layer, which means that, in general, the most superficial layers are quite competent. The superficial layer with the smallest S wave velocity was observed in Tecomán, while that with the largest S wave velocity was observed in Cerro de Ortega. Our estimations are consistent with down-hole velocity records obtained in Santa Bárbara by previous studies.
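The core SPAC relation can be sketched numerically: for a station separation r, the azimuthally averaged autocorrelation coefficient is rho(f) = J0(2*pi*f*r/c(f)), which is inverted for phase velocity frequency by frequency (values hypothetical; in practice the correct dispersion branch must be tracked):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import j0

def phase_velocity(f, r, rho_obs, c_min=100.0, c_max=2000.0):
    """Solve j0(2*pi*f*r/c) = rho_obs for c; the bracket must isolate the
    physically meaningful root, since J0 is oscillatory."""
    return brentq(lambda c: j0(2.0 * np.pi * f * r / c) - rho_obs, c_min, c_max)

c = phase_velocity(f=5.0, r=30.0, rho_obs=0.35)  # ~510 m/s for these numbers
```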
UTM: Universal Transit Modeller
NASA Astrophysics Data System (ADS)
Deeg, Hans J.
2014-12-01
The Universal Transit Modeller (UTM) is a light-curve simulator for all kinds of transiting or eclipsing configurations between arbitrary numbers of several types of objects, which may be stars, planets, planetary moons, and planetary rings. A separate fitting program, UFIT (Universal Fitter) is part of the UTM distribution and may be used to derive best fits to light-curves for any set of continuously variable parameters. UTM/UFIT is written in IDL code and its source is released in the public domain under the GNU General Public License.
Suzuki, Ryo; Ito, Kohta; Lee, Taeyong; Ogihara, Naomichi
2017-12-01
Identifying the viscous properties of the plantar soft tissue is crucial not only for understanding the dynamic interaction of the foot with the ground during locomotion, but also for development of improved footwear products and therapeutic footwear interventions. In the present study, the viscous and hyperelastic material properties of the plantar soft tissue were experimentally identified using a spherical indentation test and an analytical contact model of the spherical indentation test. Force-relaxation curves of the heel pads were obtained from the indentation experiment. The curves were fit to the contact model incorporating a five-element Maxwell model to identify the viscous material parameters. The finite element method with the experimentally identified viscoelastic parameters could successfully reproduce the measured force-relaxation curves, indicating the material parameters were correctly estimated using the proposed method. Although there are some methodological limitations, the proposed framework to identify the viscous material properties may facilitate the development of subject-specific finite element modeling of the foot and other biological materials. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
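The exact five-element formulation is not reproduced in the abstract; assuming the usual generalized-Maxwell (Prony-series) form, equilibrium force plus two decaying exponentials, a force-relaxation fit might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def maxwell5(t, f_inf, f1, tau1, f2, tau2):
    """Five-element model (spring in parallel with two Maxwell arms):
    F(t) = f_inf + f1*exp(-t/tau1) + f2*exp(-t/tau2)."""
    return f_inf + f1 * np.exp(-t / tau1) + f2 * np.exp(-t / tau2)

t = np.linspace(0.0, 60.0, 300)                   # hold time (s), hypothetical
F = maxwell5(t, 4.0, 2.0, 0.8, 1.0, 12.0)         # synthetic "measured" force (N)
F += np.random.default_rng(2).normal(0, 0.02, t.size)

popt, _ = curve_fit(maxwell5, t, F, p0=[3.5, 1.5, 1.0, 1.5, 10.0])
```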
NASA Astrophysics Data System (ADS)
Daniel, D. Joseph; Ramasamy, P.; Ramaseshan, R.; Kim, H. J.; Kim, Sunghwan; Bhagavannarayana, G.; Cheon, Jong-Kyu
2017-10-01
Polycrystalline compounds of LiBaF3 were synthesized using a conventional solid state reaction route and the phase purity was confirmed using the powder X-ray diffraction technique. A single crystal was grown from the melt using the vertical Bridgman technique. Rocking curve measurements were carried out to study the structural perfection of the grown crystal. The single peak of the diffraction curve clearly reveals that the grown crystal was free from structural grain boundaries. The low temperature thermoluminescence of the X-ray irradiated sample was analyzed, and four distinguishable peaks were found, with maximum temperatures at 18, 115, 133 and 216 K. The activation energy (E) and frequency factor (s) for the individual peaks were studied using the peak shape method and a computerized curve fitting method combined with the Tmax-Tstop procedure. The nanoindentation technique was employed to study the mechanical behaviour of the crystal. The indentation modulus and Vickers hardness of the grown crystal have values of 135.15 GPa and 680.81, respectively, under a maximum indentation load of 10 mN.
Know the Planet, Know the Star: Precise Stellar Parameters with Kepler
NASA Astrophysics Data System (ADS)
Sandford, Emily; Kipping, David M.
2017-01-01
The Kepler space telescope has revolutionized exoplanetary science with unprecedentedly precise photometric measurements of the light curves of transiting planets. In addition to information about the planet and its orbit, encoded in each Kepler transiting planet light curve are certain properties of the host star, including the stellar density and the limb darkening profile. For planets with strong prior constraints on orbital eccentricity (planets to which we refer as “stellar anchors”), we may measure these stellar properties directly from the light curve. This method promises to aid greatly in the characterization of transiting planet host stars targeted by the upcoming NASA TESS mission and any long-period, singly-transiting planets discovered in the same systems. Using Bayesian inference, we fit a transit model, including a nonlinear limb darkening law, to a large sample of transiting planet hosts to measure their stellar properties. We present the results of our analysis, including posterior stellar density distributions for each stellar host, and show how the method yields superior precision to literature stellar properties in the majority of cases studied.
Fong, Youyi; Yu, Xuesong
2016-01-01
Many modern serial dilution assays are based on fluorescence intensity (FI) readouts. We study optimal transformation model choice for fitting five-parameter logistic (5PL) curves to FI-based serial dilution assay data. We first develop a generalized least squares-pseudolikelihood type algorithm for fitting heteroscedastic logistic models. Next we show that the 5PL and log-5PL functions can approximate each other well. We then compare four 5PL models with different choices of log transformation and variance modeling through a Monte Carlo study and real data. Our findings are that the optimal choice depends on the intended use of the fitted curves. PMID:27642502
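A minimal sketch of the 5PL fit itself (standard parametrization assumed; the paper's transformation and variance-model comparisons are beyond this sketch, and a weighted or generalized-least-squares step would be added for heteroscedastic FI data):

```python
import numpy as np
from scipy.optimize import curve_fit

def fpl5(x, a, d, c, b, g):
    """Five-parameter logistic: a/d lower/upper asymptotes, c midpoint scale,
    b slope, g asymmetry."""
    return d + (a - d) / (1.0 + (x / c)**b)**g

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0, 300.0])       # dilutions
fi = np.array([12.0, 15.0, 30.0, 80.0, 240.0, 620.0, 980.0, 1150.0])  # FI readouts

popt, _ = curve_fit(fpl5, conc, fi, p0=[10.0, 1200.0, 20.0, 1.5, 1.0],
                    maxfev=20000)
```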
Scatter of X-rays on polished surfaces
NASA Technical Reports Server (NTRS)
Hasinger, G.
1981-01-01
In investigating the dispersion properties of telescope mirrors used in X-ray astronomy, the small-angle scattering characteristics of X-ray radiation from statistically rough surfaces were examined. The mathematics and geometry of scattering theory are described. The measurement test assembly is described and results of measurements on samples of plane mirrors are given. The measurement results are evaluated: the direct beam, the convolution of the direct beam with the scattering halo, curve fitting by the method of least squares, various autocorrelation functions, results of the fitting procedure for small scattering angles, and deviations in the kernel of the scattering distribution are presented. A procedure for quality testing of mirror systems through diagnosis of rough surfaces is described.
Hybrid active contour model for inhomogeneous image segmentation with background estimation
NASA Astrophysics Data System (ADS)
Sun, Kaiqiong; Li, Yaqin; Zeng, Shan; Wang, Jun
2018-03-01
This paper proposes a hybrid active contour model for inhomogeneous image segmentation. The data term of the energy function in the active contour consists of a global region fitting term in a difference image and a local region fitting term in the original image. The difference image is obtained by subtracting the background from the original image. The background image is dynamically estimated from a linear filtered result of the original image on the basis of the varying curve locations during the active contour evolution process. As in existing local models, fitting the image to local region information makes the proposed model robust against an inhomogeneous background and maintains the accuracy of the segmentation result. Furthermore, fitting the difference image to the global region information makes the proposed model robust against the initial contour location, unlike existing local models. Experimental results show that the proposed model can obtain improved segmentation results compared with related methods in terms of both segmentation accuracy and initial contour sensitivity.
Quantitative photoplethysmography: Lambert-Beer law or inverse function incorporating light scatter.
Cejnar, M; Kobler, H; Hunyor, S N
1993-03-01
Finger blood volume is commonly determined from measurement of infra-red (IR) light transmittance using the Lambert-Beer law of light absorption derived for use in non-scattering media, even when such transmission involves light scatter around the phalangeal bone. Simultaneous IR transmittance and finger volume were measured over the full dynamic range of vascular volumes in seven subjects and outcomes compared with data fitted according to the Lambert-Beer exponential function and an inverse function derived for light attenuation by scattering materials. Curves were fitted by the least-squares method and goodness of fit was compared using standard errors of estimate (SEE). The inverse function gave a better data fit in six of the subjects: mean SEE 1.9 (SD 0.7, range 0.7-2.8) and 4.6 (2.2, 2.0-8.0) respectively (p < 0.02, paired t-test). Thus, when relating IR transmittance to blood volume, as occurs in the finger during measurements of arterial compliance, an inverse function derived from a model of light attenuation by scattering media gives more accurate results than the traditional exponential fit.
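The inverse function is not printed in the abstract; assuming a simple saturating form T = a/(1 + bV) against the Lambert-Beer exponential T = a*exp(-bV), the SEE comparison can be sketched as:

```python
import numpy as np
from scipy.optimize import curve_fit

beer = lambda V, a, b: a * np.exp(-b * V)      # Lambert-Beer absorption
inverse = lambda V, a, b: a / (1.0 + b * V)    # assumed scattering form

V = np.linspace(0.0, 1.0, 20)                  # relative finger blood volume
T = 0.9 / (1.0 + 3.0 * V)                      # synthetic transmittance data
T += np.random.default_rng(3).normal(0, 0.005, V.size)

see = {}
for name, f in [("beer", beer), ("inverse", inverse)]:
    p, _ = curve_fit(f, V, T, p0=[1.0, 1.0])
    resid = T - f(V, *p)
    see[name] = np.sqrt(np.sum(resid**2) / (V.size - 2))  # standard error of estimate
```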
Kumar, Saurabh; Amrutur, Bharadwaj; Asokan, Sundarrajan
2018-02-01
Fiber Bragg Grating (FBG) sensors have become popular for applications related to structural health monitoring, biomedical engineering, and robotics. However, for successful large-scale adoption, FBG interrogation systems are as important as sensor characteristics. Apart from accuracy, the required number of FBG sensors per fiber and the distance between the device in which the sensors are used and the interrogation system also influence the selection of the interrogation technique. For several measurement devices developed for applications in biomedical engineering and robotics, only a few sensors per fiber are required and the device is close to the interrogation system. For these applications, interrogation systems based on InGaAs linear detector arrays are a good choice. However, their resolution depends on the algorithms used for curve fitting. In this work, a detailed analysis is provided of how the choice of algorithm (using the Gaussian approximation for the FBG spectrum) and the number of pixels used for curve fitting affect the errors. The points where the maximum errors occur have been identified. All comparisons for wavelength shift detection have been made against another interrogation system based on a tunable swept laser. It is shown that maximum errors occur when the wavelength shift is such that one new pixel is included for curve fitting. It is also shown that an algorithm with lower computational cost than the more popular methods using iterative nonlinear least squares estimation can be used without loss of accuracy. The algorithm has been implemented on embedded hardware, and a speed-up of approximately six times has been observed.
NASA Astrophysics Data System (ADS)
Oh, Jun-Seok; Szili, Endre J.; Ogawa, Kotaro; Short, Robert D.; Ito, Masafumi; Furuta, Hiroshi; Hatta, Akimitsu
2018-01-01
Plasma-activated water (PAW) is receiving much attention in biomedical applications because of its reported potent bactericidal properties. Reactive oxygen and nitrogen species (RONS) that are generated in water upon plasma exposure are thought to be the key components in PAW that destroy bacterial and cancer cells. In addition to developing applications for PAW, it is also necessary to better understand the RONS chemistry in PAW in order to tailor PAW to achieve a specific biological response. With this in mind, we previously developed a UV-vis spectroscopy method using an automated curve fitting routine to quantify the changes in H2O2, NO2-, NO3- (the major long-lived RONS in PAW), and O2 concentrations. A major advantage of UV-vis is that it can take multiple measurements during plasma activation. We used the UV-vis procedure to accurately quantify the changes in the concentrations of these RONS and O2 in PAW. However, we have not yet provided an in-depth commentary of how we perform the curve fitting procedure or its implications. Therefore, in this study, we provide greater detail of how we use the curve fitting routine to derive the RONS and O2 concentrations in PAW. PAW was generated by treatment with a helium plasma jet. In addition, we employ UV-vis to study how the plasma jet exposure time and treatment distance affect the RONS chemistry and amount of O2 dissolved in PAW. We show that the plasma jet exposure time principally affects the total RONS concentration, but not the relative ratios of RONS, whereas the treatment distance affects both the total RONS concentration and the relative RONS concentrations.
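The automated routine's internals are in the authors' earlier paper, not this abstract, but the general idea - fitting a measured UV-vis spectrum as a non-negative combination of component reference spectra - can be sketched as follows (all spectra here are synthetic Gaussians standing in for real references):

```python
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(200.0, 400.0, 50)
ref = lambda center, width: np.exp(-0.5 * ((wl - center) / width)**2)

# Columns: hypothetical reference spectra for H2O2, NO2-, NO3-
A = np.column_stack([ref(215.0, 20.0), ref(354.0, 25.0), ref(302.0, 15.0)])

b = A @ np.array([0.8, 0.3, 0.5])                     # synthetic PAW absorbance
b += np.random.default_rng(5).normal(0, 0.005, wl.size)

conc, resid = nnls(A, b)   # non-negative component weights best fitting the spectrum
```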
NASA Astrophysics Data System (ADS)
Kumar, Saurabh; Amrutur, Bharadwaj; Asokan, Sundarrajan
2018-02-01
Fiber Bragg Grating (FBG) sensors have become popular for applications related to structural health monitoring, biomedical engineering, and robotics. However, for successful large-scale adoption, FBG interrogation systems are as important as sensor characteristics. Apart from accuracy, the required number of FBG sensors per fiber and the distance between the device in which the sensors are used and the interrogation system also influence the selection of the interrogation technique. For several measurement devices developed for applications in biomedical engineering and robotics, only a few sensors per fiber are required and the device is close to the interrogation system. For these applications, interrogation systems based on InGaAs linear detector arrays are a good choice. However, their resolution depends on the algorithms used for curve fitting. In this work, a detailed analysis is provided of how the choice of algorithm (using the Gaussian approximation for the FBG spectrum) and the number of pixels used for curve fitting affect the errors. The points where the maximum errors occur have been identified. All comparisons for wavelength shift detection have been made against another interrogation system based on a tunable swept laser. It is shown that maximum errors occur when the wavelength shift is such that one new pixel is included for curve fitting. It is also shown that an algorithm with lower computational cost than the more popular methods using iterative nonlinear least squares estimation can be used without loss of accuracy. The algorithm has been implemented on embedded hardware, and a speed-up of approximately six times has been observed.
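A sketch of the Gaussian-approximation fit over a limited pixel window (window width, array length, and values are hypothetical; the paper's lower-cost non-iterative algorithm is not reproduced, though one common non-iterative alternative fits a parabola to the logarithm of the intensities):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(px, amp, mu, sigma, offset):
    """Gaussian approximation of the reflected FBG spectrum on the array."""
    return offset + amp * np.exp(-0.5 * ((px - mu) / sigma)**2)

pixels = np.arange(512.0)                              # InGaAs line-array indices
spectrum = gaussian(pixels, 900.0, 261.7, 4.2, 40.0)   # synthetic readout
spectrum += np.random.default_rng(4).normal(0, 5, pixels.size)

peak = int(np.argmax(spectrum))
win = slice(peak - 6, peak + 7)                        # 13-pixel window (assumed)
popt, _ = curve_fit(gaussian, pixels[win], spectrum[win],
                    p0=[spectrum[peak], float(peak), 3.0, 0.0])
bragg_px = popt[1]   # sub-pixel peak position; wavelength follows from calibration
```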
Parameterization of the Age-Dependent Whole Brain Apparent Diffusion Coefficient Histogram
Batra, Marion; Nägele, Thomas
2015-01-01
Purpose. The distribution of apparent diffusion coefficient (ADC) values in the brain can be used to characterize age effects and pathological changes of the brain tissue. The aim of this study was the parameterization of the whole brain ADC histogram by an advanced model with influence of age considered. Methods. Whole brain ADC histograms were calculated for all data and for seven age groups between 10 and 80 years. Modeling of the histograms was performed for two parts of the histogram separately: the brain tissue part was modeled by two Gaussian curves, while the remaining part was fitted by the sum of a Gaussian curve, a biexponential decay, and a straight line. Results. A consistent fitting of the histograms of all age groups was possible with the proposed model. Conclusions. This study confirms the strong dependence of the whole brain ADC histograms on the age of the examined subjects. The proposed model can be used to characterize changes of the whole brain ADC histogram in certain diseases under consideration of age effects. PMID:26609526
Zhang, Sa; Li, Zhou; Xin, Xue-Gang
2017-12-20
To achieve differential diagnosis of normal and malignant gastric tissues based on discrepancies in their dielectric properties using a support vector machine. The dielectric properties of normal and malignant gastric tissues at frequencies ranging from 42.58 to 500 MHz were measured by the coaxial probe method, and the Cole-Cole model was used to fit the measured data. Receiver operating characteristic (ROC) curve analysis was used to evaluate the discrimination capability with respect to permittivity, conductivity, and Cole-Cole fitting parameters. A support vector machine was used for discriminating normal and malignant gastric tissues, and the discrimination accuracy was calculated using k-fold cross-validation. The area under the ROC curve was above 0.8 for permittivity at the 5 frequencies at the lower end of the measured frequency range. The combination of the support vector machine with the permittivity at all these 5 frequencies achieved the highest discrimination accuracy of 84.38%, with a MATLAB runtime of 3.40 s. Support vector machine-assisted diagnosis based on dielectric properties is feasible for human malignant gastric tissues.
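A minimal sketch of the classification step (feature layout, sizes, and values are hypothetical; the paper uses permittivity at the five lowest measured frequencies as features):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
# Rows: tissue samples; columns: permittivity at the 5 lowest frequencies
X = np.vstack([rng.normal(55.0, 4.0, (40, 5)),    # normal tissue (label 0)
               rng.normal(62.0, 4.0, (40, 5))])   # malignant tissue (label 1)
y = np.r_[np.zeros(40), np.ones(40)]

clf = SVC(kernel='rbf', C=1.0, gamma='scale')
acc = cross_val_score(clf, X, y, cv=5).mean()     # k-fold cross-validated accuracy
```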
Correlation of basic TL, OSL and IRSL properties of ten K-feldspar samples of various origins
NASA Astrophysics Data System (ADS)
Sfampa, I. K.; Polymeris, G. S.; Pagonis, V.; Theodosoglou, E.; Tsirliganis, N. C.; Kitis, G.
2015-09-01
Feldspars stand among the most widely used minerals in dosimetric methods of dating using thermoluminescence (TL), optically stimulated luminescence (OSL) and infrared stimulated luminescence (IRSL). Having very good dosimetric properties, they can in principle contribute to the dating of every site of archaeological and geological interest. The present work studies basic properties of ten naturally occurring K-feldspar samples belonging to three feldspar species, namely sanidine, orthoclase and microcline. The basic properties studied are (a) the influence of blue light and infrared stimulation on the thermoluminescence glow-curves, (b) the growth of OSL, IRSL, residual TL and TL-loss as a function of OSL and IRSL bleaching time and (c) the correlation between the OSL and IRSL signals and the energy levels responsible for the TL glow-curve. All experimental data were fitted using analytical expressions derived from a recently developed tunneling recombination model. The results show that the analytical expressions provide excellent fits to all experimental results, thus verifying the tunneling recombination mechanism in these materials and providing valuable information about the concentrations of luminescence centers.
Temperature dependence of luminescence behavior in Er3+-doped BaY2F8 single crystal
NASA Astrophysics Data System (ADS)
Wang, Shuai; Ruan, Yongfeng; Tsuboi, Taiju; Tong, Hongshuang; Wang, Youfa; Zhang, Shouchao
2013-12-01
BaY2F8 single crystals doped with Er3+ ions have been grown by the temperature gradient method. The absorption, excitation and emission spectra for Er3+-doped BaY2F8 crystals were measured at room temperature (297 K) and 12 K. The effect of temperature on the luminescence intensity and effective bandwidth was investigated in the range of 12-297 K. The temperature dependence of the fluorescence intensity ratio (FIR) for the 522 nm emission (2H11/2→4I15/2 transition) and the 552 nm emission (4S3/2→4I15/2 transition) was also studied in the range of 12-297 K. Based on the fitting FIR curve, the value of the constant term B (2.25) was obtained. The fitting FIR curve and FIR equation may have a potential application in the temperature measurement. In addition, the up-conversion spectrum at room temperature was recorded under excitation of 980 nm and the up-conversion mechanism was analyzed in detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamio, Y; Bouchard, H
2014-06-15
Purpose: Discrepancies in the verification of the absorbed dose to water from an IMRT plan using a radiation dosimeter can be caused either by 1) detector-specific nonstandard field correction factors, as described by the formalism of Alfonso et al., or 2) inaccurate delivery of the DQA plan. The aim of this work is to develop a simple and fast method to determine an upper limit on the contribution of composite field correction factors to these discrepancies. Methods: Indices that characterize the non-flatness of the symmetrised collapsed delivery (VSC) of IMRT fields over detector-specific regions of interest were shown to be correlated with IMRT field correction factors. The indices introduced are the uniformity index (UI) and the mean fluctuation index (MF). Each of these correlation plots has 10 000 fields generated with a stochastic model. A total of eight radiation detectors were investigated in the radial orientation. An upper bound on the correction factors was evaluated by fitting the highest correction-factor values for a given index value. Results: These fitted curves can be used to compare the performance of radiation dosimeters in composite IMRT fields. Highly water-equivalent dosimeters like the scintillating detector (Exradin W1) and a generic alanine detector were found to have corrections under 1% over a broad range of field modulations (0-0.12 for MF and 0-0.5 for UI). Other detectors were shown to have corrections of a few percent over this range. Finally, full Monte Carlo simulations of 18 clinical and nonclinical IMRT fields showed good agreement with the fitted curve for the A12 ionization chamber. Conclusion: This work proposes a rapid method to evaluate an upper bound on the contribution of correction factors to discrepancies found in the verification of DQA plans.
NASA Astrophysics Data System (ADS)
Singh, Japneet; Gaskell, Martin; Gill, Jake
2017-01-01
We use mid-IR (WIRE), optical (SDSS), and ultraviolet (GALEX) photometry of over 80,000 AGNs to derive mean attenuation curves from the optical to the rest-frame extreme ultraviolet (EUV) for (i) “normal” AGN dust dominating the optical reddening of AGNs and (ii) “BAL dust” - the dust causing the additional extinction in AGNs observed to have broad absorption lines (BALs). Our method confirms that the attenuation curve of “normal” AGN dust is flat in the ultraviolet, as found by Gaskell et al. (2004). In striking contrast to this, the attenuation curve for BAL dust is well fit by a steeply rising, SMC-like curve. We confirm the shape of the theoretical Weingartner & Draine (2001) SMC curve out to 700 Angstroms, but the drop in attenuation at still shorter wavelengths (400 Angstroms) seems to be less than predicted. We find identical attenuation curves for high-ionization and low-ionization BALQSOs. We suggest that attenuation curves appearing to be steeper than the SMC are due to differences in underlying spectra and partial covering by BAL dust. This work was performed under the auspices of the Science Internship Program (SIP) of the University of California at Santa Cruz.
The complex variable boundary element method: Applications in determining approximative boundaries
Hromadka, T.V.
1984-01-01
The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occurs in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.
NASA Technical Reports Server (NTRS)
Welker, Jean Edward
1991-01-01
Since the invention of maximum and minimum thermometers in the 18th century, diurnal temperature extrema have been recorded for air worldwide. At some stations, these extreme temperatures were also collected at various soil depths, and the behavior of these temperatures at a 10-cm depth at the Tifton Experimental Station in Georgia is presented. After a precipitation cooling event, the diurnal temperature maxima drop to a minimum value and then start a recovery to higher values (similar to thermal inertia). This recovery represents a measure of the response to heating as a function of soil moisture and soil properties. Eight different curves were fitted to a wide variety of data sets for different stations and years, and both power and exponential curve fits were consistently found to be statistically accurate least-squares representations of the raw recovery data. The predictive procedures used here were multivariate regression analyses, which are applicable to soils at a variety of depths besides the 10-cm depth presented.
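A sketch of the two recovery-curve families the study found to fit well, as least-squares fits to hypothetical post-event recovery data:

```python
import numpy as np
from scipy.optimize import curve_fit

power = lambda d, a, b: a * d**b                 # d: days since the cooling event
expon = lambda d, a, b, c: c - a * np.exp(-b * d)

days = np.arange(1.0, 11.0)
tmax = np.array([24.1, 26.0, 27.2, 28.1, 28.9,   # recovering diurnal maxima (deg C)
                 29.3, 29.8, 30.0, 30.3, 30.4])

p_pow, _ = curve_fit(power, days, tmax, p0=[24.0, 0.1])
p_exp, _ = curve_fit(expon, days, tmax, p0=[7.0, 0.5, 30.5])
```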
Prediction of Fitness to Drive in Patients with Alzheimer's Dementia
Piersma, Dafne; Fuermaier, Anselm B. M.; de Waard, Dick; Davidse, Ragnhild J.; de Groot, Jolieke; Doumen, Michelle J. A.; Bredewoud, Ruud A.; Claesen, René; Lemstra, Afina W.; Vermeeren, Annemiek; Ponds, Rudolf; Verhey, Frans; Brouwer, Wiebo H.; Tucha, Oliver
2016-01-01
The number of patients with Alzheimer’s disease (AD) is increasing, and so is the number of patients driving a car. To enable patients to retain their mobility while at the same time not endangering public safety, each patient should be assessed for fitness to drive. The aim of this study is to develop a method to assess fitness to drive in a clinical setting, using three types of assessments, i.e. clinical interviews, neuropsychological assessment and driving simulator rides. The goals are (1) to determine for each type of assessment which combination of measures is most predictive for on-road driving performance, (2) to compare the predictive value of clinical interviews, neuropsychological assessment and driving simulator evaluation and (3) to determine which combination of these assessments provides the best prediction of fitness to drive. Eighty-one patients with AD and 45 healthy individuals participated. All took part in a clinical interview, and were administered a neuropsychological test battery and a driving simulator ride (predictors). The criterion fitness to drive was determined in an on-road driving assessment by experts of the CBR Dutch driving test organisation according to their official protocol. The validity of the predictors to determine fitness to drive was explored by means of logistic regression analyses, discriminant function analyses, as well as receiver operating characteristic (ROC) curve analyses. We found that all three types of assessments are predictive of on-road driving performance. Neuropsychological assessment had the highest classification accuracy, followed by driving simulator rides and clinical interviews. However, combining all three types of assessments yielded the best prediction of fitness to drive in patients with AD, with an overall accuracy of 92.7%, which makes this method highly valid for assessing fitness to drive in AD. This method may be used to advise patients with AD and their family members about fitness to drive. PMID:26910535
Graphic tracings of condylar paths and measurements of condylar angles.
el-Gheriani, A S; Winstanley, R B
1989-01-01
A study was carried out to determine the accuracy of different methods of measuring condylar inclination from graphical recordings of condylar paths. Thirty subjects made protrusive mandibular movements while condylar inclination was recorded on a graph paper card. A mandibular facebow and intraoral central bearing plate facilitated the procedure. The first method proved to be too variable to be of value in measuring condylar angles. The spline curve fitting technique was shown to be accurate, but its use clinically may prove complex. The mathematical method was more practical and overcame the variability of the tangent method. Other conclusions regarding condylar inclination are outlined.
Modal testing with Asher's method using a Fourier analyzer and curve fitting
NASA Technical Reports Server (NTRS)
Gold, R. R.; Hallauer, W. L., Jr.
1979-01-01
An unusual application of the method proposed by Asher (1958) for structural dynamic and modal testing is discussed. Asher's method has the capability, using the admittance matrix and multiple-shaker sinusoidal excitation, of separating structural modes having indefinitely close natural frequencies. The present application uses Asher's method in conjunction with a modern Fourier analyzer system but eliminates the necessity of exciting the test structure simultaneously with several shakers. Evaluation of this approach with numerically simulated data demonstrated its effectiveness; the parameters of two modes having almost identical natural frequencies were accurately identified. Laboratory evaluation of this approach was inconclusive because of poor experimental input data.
del Moral, F; Vázquez, J A; Ferrero, J J; Willisch, P; Ramírez, R D; Teijeiro, A; López Medina, A; Andrade, B; Vázquez, J; Salvador, F; Medal, D; Salgado, M; Muñoz, V
2009-09-01
Modern radiotherapy uses complex treatments that necessitate more complex quality assurance procedures. As a continuous medium, GafChromic EBT films offer suitable features for such verification. However, their sensitometric curve is not fully understood in terms of classical theoretical models. In fact, measured optical densities and those predicted by the classical models differ significantly, and this difference increases systematically with wider dose ranges. Thus, achieving the accuracy required for intensity-modulated radiotherapy (IMRT) by classical methods is not possible, precluding their use. As a result, experimental parametrizations, such as polynomial fits, are replacing phenomenological expressions in modern investigations. This article focuses on identifying new theoretical ways to describe sensitometric curves and on evaluating the quality of fit for experimental data based on four proposed models. A complete mathematical formalism, starting with a geometrical version of the classical theory, is used to develop new expressions for the sensitometric curves. General results from percolation theory are also used. A flat-bed-scanner-based method was chosen for the film analysis. Different tests were performed, such as consistency of the numeric results for the proposed model and double examination using data from independent researchers. Results show that the percolation-theory-based model provides the best theoretical explanation for the sensitometric behavior of GafChromic films. The different sizes of active centers or monomer crystals of the film are the basis of this model, allowing acquisition of information about the internal structure of the films. Values for the mean size of the active centers were obtained in accordance with technical specifications. In this model, the dynamics of the interaction between the active centers of GafChromic film and radiation is also characterized by means of its interaction cross-section value. The percolation model fulfills the accuracy requirements for quality-control procedures when large ranges of doses are used and offers a physical explanation for the film response.
SU-E-T-75: A Simple Technique for Proton Beam Range Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burgdorf, B; Kassaee, A; Garver, E
2015-06-15
Purpose: To develop a measurement-based technique to verify the range of proton beams for quality assurance (QA). Methods: We developed a simple technique to verify the proton beam range with in-house fabricated devices. Two separate devices were fabricated: a clear acrylic rectangular cuboid and a solid polyvinyl chloride (PVC) step wedge. For efficiency in our clinic, we used the rectangular cuboid for double scattering (DS) beams and the step wedge for pencil beam scanning (PBS) beams. These devices were added to our QA phantom to measure dose points along the distal fall-off region (between 80% and 20%), in addition to dose at mid-SOBP (spread-out Bragg peak), using a two-dimensional parallel plate chamber array (MatriXX™, IBA Dosimetry, Schwarzenbruck, Germany). This method relies on the fact that the slope of the distal fall-off is linear and does not vary with small changes in energy. Using a multi-layer ionization chamber (Zebra™, IBA Dosimetry), percent depth dose (PDD) curves were measured for our standard daily QA beams. The range (energy) of each beam was then varied (i.e., ±2 mm and ±5 mm) and additional PDD curves were measured. The distal fall-off of all PDD curves was fit to a linear equation, and the measured dose on the distal fall-off for a particular beam was used in this linear equation to determine the beam range. Results: The linear fits of the fall-off region for the PDD curves, when varying the range by a few millimeters for a specific QA beam, yielded identical slopes. The range calculated from measured point dose(s) in the fall-off region using the slope agreed within ±1 mm of the expected beam range. Conclusion: We developed a simple technique for accurately verifying the beam range for proton therapy QA programs.
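A sketch of the range calculation: because the distal fall-off slope is constant for small energy changes, a measured point dose on the fall-off maps to a depth, and hence a range shift, through the fitted line (all numbers hypothetical):

```python
import numpy as np

# Distal fall-off of a reference PDD (e.g. from a multi-layer ionization chamber)
depth = np.array([15.0, 15.2, 15.4, 15.6, 15.8])   # cm
dose = np.array([80.0, 65.0, 50.0, 35.0, 20.0])    # % of mid-SOBP dose

m, b = np.polyfit(depth, dose, 1)                  # dose = m*depth + b on the fall-off

measured = 42.0                                    # chamber-array dose at a fixed depth
depth_of_dose = (measured - b) / m                 # where that dose level now sits
# Comparing depth_of_dose with the reference depth for 42% gives the range shift.
```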
Yang, Xiaolan; Hu, Xiaolei; Xu, Bangtian; Wang, Xin; Qin, Jialin; He, Chenxiong; Xie, Yanling; Li, Yuanli; Liu, Lin; Liao, Fei
2014-06-17
A fluorometric titration approach was proposed for the calibration of the quantity of monoclonal antibody (mcAb) via the quench of fluorescence of tryptophan residues. It applied to purified mcAbs recognizing tryptophan-deficient epitopes, haptens nonfluorescent at 340 nm under the excitation at 280 nm, or fluorescent haptens bearing excitation valleys nearby 280 nm and excitation peaks nearby 340 nm to serve as Förster-resonance-energy-transfer (FRET) acceptors of tryptophan. Titration probes were epitopes/haptens themselves or conjugates of nonfluorescent haptens or tryptophan-deficient epitopes with FRET acceptors of tryptophan. Under the excitation at 280 nm, titration curves were recorded as fluorescence specific for the FRET acceptors or for mcAbs at 340 nm. To quantify the binding site of a mcAb, a universal model considering both static and dynamic quench by either type of probes was proposed for fitting to the titration curve. This was easy for fitting to fluorescence specific for the FRET acceptors but encountered nonconvergence for fitting to fluorescence of mcAbs at 340 nm. As a solution, (a) the maximum of the absolute values of first-order derivatives of a titration curve as fluorescence at 340 nm was estimated from the best-fit model for a probe level of zero, and (b) molar quantity of the binding site of the mcAb was estimated via consecutive fitting to the same titration curve by utilizing such a maximum as an approximate of the slope for linear response of fluorescence at 340 nm to quantities of the mcAb. This fluorometric titration approach was proved effective with one mcAb for six-histidine and another for penicillin G.
Text vectorization based on character recognition and character stroke modeling
NASA Astrophysics Data System (ADS)
Fan, Zhigang; Zhou, Bingfeng; Tse, Francis; Mu, Yadong; He, Tao
2014-03-01
In this paper, a text vectorization method is proposed using OCR (Optical Character Recognition) and character stroke modeling. This is based on the observation that for a particular character, its font glyphs may have different shapes, but often share the same stroke structures. Like many other methods, the proposed algorithm contains two procedures, dominant point determination and data fitting. The first partitions the outlines into segments and the second fits a curve to each segment. In the proposed method, the dominant points are classified as "major" (specifying stroke structures) and "minor" (specifying serif shapes). A set of rules (parameters) is determined offline, specifying for each character the number of major and minor dominant points and, for each dominant point, the detection and fitting parameters (projection directions, boundary conditions and smoothness). For minor points, multiple sets of parameters can be used for different fonts. During operation, OCR is performed and the parameters associated with the recognized character are selected. Both major and minor dominant points are detected by a maximization process as specified by the parameter set. For minor points, an additional step can be performed to test competing hypotheses and detect degenerate cases.
Molecular dynamics simulations of the melting curve of NiAl alloy under pressure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Wenjin; Peng, Yufeng; Liu, Zhongli, E-mail: zhongliliu@yeah.net
2014-05-15
The melting curve of B2-NiAl alloy under pressure has been investigated using the molecular dynamics technique and an embedded atom method (EAM) potential. The melting temperatures were determined with two approaches, the one-phase and the two-phase methods. The first simulates homogeneous melting, while the second involves heterogeneous melting of the material. Both approaches reduce superheating effectively and their results are close to each other at the applied pressures. By fitting the well-known Simon equation to our melting data, we obtained the melting curves for NiAl: T_m = 1783(1 + P/9.801)^0.298 (one-phase approach) and T_m = 1850(1 + P/12.806)^0.357 (two-phase approach). The good agreement of the resulting equations of state and the zero-pressure melting point (calc. 1850 ± 25 K, exp. 1911 K) with experiment supports the correctness of these results. These melting data fill the absence of experimental high-pressure melting data for NiAl. To check the transferability of this EAM potential, we have also predicted the melting curves of pure nickel and pure aluminum. The calculated melting point of nickel agrees well with experiment at zero pressure, while the melting point of aluminum is slightly higher than experiment.
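A sketch of the Simon-equation fit, using synthetic points generated around the paper's one-phase result (pressure units assumed to be GPa):

```python
import numpy as np
from scipy.optimize import curve_fit

def simon(P, T0, a, c):
    """Simon melting equation T_m(P) = T0 * (1 + P/a)**c."""
    return T0 * (1.0 + P / a)**c

P = np.linspace(0.0, 100.0, 11)
Tm = simon(P, 1783.0, 9.801, 0.298)                # synthetic melting points (K)
Tm += np.random.default_rng(7).normal(0, 10, P.size)

popt, _ = curve_fit(simon, P, Tm, p0=[1800.0, 10.0, 0.3])  # recovers T0, a, c
```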
Research on Rigid Body Motion Tracing in Space based on NX MCD
NASA Astrophysics Data System (ADS)
Wang, Junjie; Dai, Chunxiang; Shi, Karen; Qin, Rongkang
2018-03-01
MCD (Mechatronics Concept Designer) is a module of Siemens' industrial design software UG (Unigraphics NX) in which users can define rigid bodies and kinematic joints to make objects move according to an existing plan in simulation. At this stage, users may wish to see, intuitively, the path of selected points on the moving object. In response to this requirement, this paper computes the pose from the transformation matrix available from the solver engine and then fits the sampled points with a B-spline curve. Meanwhile, combined with the actual constraints on rigid bodies, the traditional equal-interval sampling strategy was optimized. The results showed that this method satisfies the requirement and makes up for the deficiencies of the traditional sampling method. Users can still edit and model on the resulting 3D curve. The expected result has been achieved.
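A minimal sketch of the B-spline fitting step, assuming hypothetical sampled trajectory points; SciPy's splprep/splev stand in for whatever fitting routine NX MCD would use internally.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical sampled positions of a tracked point on a moving rigid body.
t = np.linspace(0.0, 2.0 * np.pi, 40)
pts = np.vstack([np.cos(t), np.sin(t), 0.1 * t])   # shape (3, N): x, y, z rows

# Fit a cubic B-spline through the samples; s controls smoothing (0 = interpolate).
tck, u = splprep(pts, s=1e-4, k=3)

# Evaluate the fitted curve densely for display or downstream editing.
u_fine = np.linspace(0.0, 1.0, 400)
x, y, z = splev(u_fine, tck)
```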
The Biasing Effects of Unmodeled ARMA Time Series Processes on Latent Growth Curve Model Estimates
ERIC Educational Resources Information Center
Sivo, Stephen; Fan, Xitao; Witta, Lea
2005-01-01
The purpose of this study was to evaluate the robustness of estimated growth curve models when there is stationary autocorrelation among manifest variable errors. The results suggest that when, in practice, growth curve models are fitted to longitudinal data, alternative rival hypotheses to consider would include growth models that also specify…
van der Laan, Mark J.; Hubbard, Alan E.; Steel, Cathy; Kubofcik, Joseph; Hamlin, Katy L.; Moss, Delynn M.; Nutman, Thomas B.; Priest, Jeffrey W.; Lammie, Patrick J.
2017-01-01
Background Serological antibody levels are a sensitive marker of pathogen exposure, and advances in multiplex assays have created enormous potential for large-scale, integrated infectious disease surveillance. Most methods to analyze antibody measurements reduce quantitative antibody levels to seropositive and seronegative groups, but this can be difficult for many pathogens and may provide lower resolution information than quantitative levels. Analysis methods have predominantly maintained a single disease focus, yet integrated surveillance platforms would benefit from methodologies that work across the diverse pathogens included in multiplex assays. Methods/Principal findings We developed an approach to measure changes in transmission from quantitative antibody levels that can be applied to diverse pathogens of global importance. We compared age-dependent immunoglobulin G curves in repeated cross-sectional surveys between populations with differences in transmission for multiple pathogens, including: lymphatic filariasis (Wuchereria bancrofti) measured before and after mass drug administration on Mauke, Cook Islands; malaria (Plasmodium falciparum) before and after a combined insecticide and mass drug administration intervention in the Garki project, Nigeria; and enteric protozoans (Cryptosporidium parvum, Giardia intestinalis, Entamoeba histolytica), bacteria (enterotoxigenic Escherichia coli, Salmonella spp.), and viruses (norovirus groups I and II) in children living in Haiti and the USA. Age-dependent antibody curves fit with ensemble machine learning followed a characteristic shape across pathogens that aligned with predictions from basic mechanisms of humoral immunity. Differences in pathogen transmission led to shifts in fitted antibody curves that were remarkably consistent across pathogens, assays, and populations. Mean antibody levels correlated strongly with traditional measures of transmission intensity, such as the entomological inoculation rate for P. falciparum (Spearman's rho = 0.75). In both high- and low-transmission settings, mean antibody curves revealed changes in population mean antibody levels that were masked by seroprevalence measures because the changes took place above or below the seropositivity cutoff. Conclusions/Significance Age-dependent antibody curves and summary means provided a robust and sensitive measure of changes in transmission, with greatest sensitivity among young children. The method generalizes to pathogens that can be measured in high-throughput, multiplex serological assays, and scales to surveillance activities that require high spatiotemporal resolution. Our results suggest quantitative antibody levels will be particularly useful for measuring differences in exposure for pathogens that elicit a transient antibody response or for monitoring populations with very high or very low transmission, when seroprevalence is less informative. The approach represents a new opportunity to conduct integrated serological surveillance for neglected tropical diseases, malaria, and other infectious diseases with well-defined antigen targets. PMID:28542223
NASA Astrophysics Data System (ADS)
M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.
2014-06-01
The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under Malaysian meteorological conditions. Drying of seaweed samples in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m2 and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression having first and second derivatives; analytical differentiation of the spline regression then permits the determination of the instantaneous rate. The method of minimization of the functional of average risk was used successfully to solve the problem, permitting the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, compared using the coefficient of determination (R2) and root mean square error (RMSE). The results showed that the Two Term model best describes the drying behavior. Moreover, the drying rate smoothed using the CS proved an effective estimator for moisture-time curves, as well as for missing moisture content data of the seaweed Kappaphycus striatum variety Durian in the solar dryer under the conditions tested.
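A minimal sketch of the cubic-spline smoothing and analytical differentiation described above, using hypothetical moisture-content readings; SciPy's smoothing spline stands in for the paper's CS regression.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical moisture-content measurements over 4 days of solar drying.
t_h = np.array([0, 6, 12, 24, 36, 48, 60, 72, 84, 96], dtype=float)  # hours
mc = np.array([93.4, 88.0, 80.5, 65.0, 49.0, 35.5, 24.0, 16.0, 11.0, 8.2])  # %

# Cubic spline regression; s > 0 smooths measurement noise.
spline = UnivariateSpline(t_h, mc, k=3, s=2.0)

# Analytical differentiation of the spline gives the instantaneous drying rate.
rate = spline.derivative()
print("drying rate at t = 24 h:", rate(24.0), "% per hour")
```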
Farmer, William H.; Knight, Rodney R.; Eash, David A.; Hutchinson, Kasey J.; Linhart, S. Mike; Christiansen, Daniel E.; Archfield, Stacey A.; Over, Thomas M.; Kiang, Julie E.
2015-08-24
Daily records of streamflow are essential to understanding hydrologic systems and managing the interactions between human and natural systems. Many watersheds and locations lack streamgages to provide accurate and reliable records of daily streamflow. In such ungaged watersheds, statistical tools and rainfall-runoff models are used to estimate daily streamflow. Previous work compared 19 different techniques for predicting daily streamflow records in the southeastern United States. Here, five of the better-performing methods are compared in a different hydroclimatic region of the United States, in Iowa. The methods fall into three classes: (1) drainage-area ratio methods, (2) nonlinear spatial interpolations using flow duration curves, and (3) mechanistic rainfall-runoff models. The first two classes are each applied with nearest-neighbor and map-correlated index streamgages. Using a threefold validation and robust rank-based evaluation, the methods are assessed for overall goodness of fit of the hydrograph of daily streamflow, the ability to reproduce a daily, no-fail storage-yield curve, and the ability to reproduce key streamflow statistics. As in the Southeast study, nonlinear spatial interpolation of daily streamflow using flow duration curves is found to have the best predictive accuracy. Comparisons with previous work in Iowa show that the accuracy of mechanistic models with at-site calibration is substantially degraded in the ungaged framework.
Bayesian-MCMC-based parameter estimation of stealth aircraft RCS models
NASA Astrophysics Data System (ADS)
Xia, Wei; Dai, Xiao-Xia; Feng, Yuan
2015-12-01
When modeling a stealth aircraft with low RCS (Radar Cross Section), conventional parameter estimation methods may cause a deviation from the actual distribution, owing to the fact that the characteristic parameters are estimated by directly calculating the statistics of the RCS. The Bayesian-Markov Chain Monte Carlo (Bayesian-MCMC) method is introduced herein to estimate the parameters so as to improve the fitting accuracy of fluctuation models. The parameter estimations of the lognormal and the Legendre polynomial models are reformulated in the Bayesian framework, and the MCMC algorithm is then adopted to calculate the parameter estimates. Numerical results show that the distribution curves obtained by the proposed method exhibit improved consistency with the actual ones, compared with those fitted by the conventional method. The fitting accuracy can be improved by no less than 25% for both fluctuation models, which implies that the Bayesian-MCMC method is a good candidate among optimal parameter estimation methods for stealth aircraft RCS models. Project supported by the National Natural Science Foundation of China (Grant No. 61101173), the National Basic Research Program of China (Grant No. 613206), the National High Technology Research and Development Program of China (Grant No. 2012AA01A308), the State Scholarship Fund of the China Scholarship Council (CSC), and the Overseas Academic Training Funds of the University of Electronic Science and Technology of China (UESTC).
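A minimal sketch of the MCMC step under stated assumptions: a random-walk Metropolis sampler for the two parameters of a lognormal fluctuation model, with synthetic RCS samples and flat priors standing in for the paper's actual data and priors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical RCS samples (m^2) drawn from a lognormal fluctuation model.
rcs = rng.lognormal(mean=-2.0, sigma=0.7, size=500)

def log_posterior(mu, sigma, x):
    """Log posterior with flat priors (sigma > 0) and lognormal likelihood."""
    if sigma <= 0:
        return -np.inf
    z = np.log(x)
    return -len(x) * np.log(sigma) - np.sum((z - mu) ** 2) / (2.0 * sigma**2)

# Random-walk Metropolis sampling of (mu, sigma).
theta = np.array([0.0, 1.0])
lp = log_posterior(*theta, rcs)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(scale=[0.05, 0.05])
    lp_prop = log_posterior(*prop, rcs)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)

chain = np.array(chain)[5000:]   # discard burn-in
print("posterior means (mu, sigma):", chain.mean(axis=0))
```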
A correction method of the anode wire modulation for 2D MWPCs detector
NASA Astrophysics Data System (ADS)
Wen, Z. W.; Qi, H. R.; Zhang, Y. L.; Wang, H. Y.; Liu, L.; Li, Y. H.
2018-04-01
The linearity performance of 2D Multi-Wire Proportional Chamber (MWPC) detectors across the anode wires is modulated by the discrete anode wires. An MWPC detector with 2 mm anode wire spacing was developed to study this anode wire modulation effect. The 2D linearity performance was measured with a 55Fe source moved by an electric mobile platform. The experimental results show that the deviation of the measured position depends on the incident position along the axis across the anode wires, and that the curve of measured versus incident position is consistent with a sine function whose period equals the anode wire spacing. A correction method for the measured position across the anode wire direction was obtained by fitting this curve. After the data are corrected, the non-linearity of the measured position across the anode wire direction is reduced to about 0.085% and the imaging capability is obviously improved.
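A minimal sketch of the correction-curve fit, assuming hypothetical calibration data: the measured-versus-incident position is modeled as the identity plus a sine term whose period is fixed to the 2 mm wire spacing.

```python
import numpy as np
from scipy.optimize import curve_fit

PITCH = 2.0  # anode wire spacing (mm), which fixes the sine period

def measured(x_true, amp, phase, offset):
    """Measured position = true position + periodic wire-modulation term."""
    return x_true + amp * np.sin(2.0 * np.pi * x_true / PITCH + phase) + offset

# Hypothetical calibration scan: incident positions vs. detector readout.
x_true = np.linspace(0.0, 10.0, 101)
x_meas = measured(x_true, amp=0.12, phase=0.3, offset=0.02)
x_meas += np.random.default_rng(1).normal(scale=0.01, size=x_meas.size)

popt, _ = curve_fit(measured, x_true, x_meas, p0=(0.1, 0.0, 0.0))
print("fitted amplitude, phase, offset:", popt)
# Correction: subtract the fitted modulation term from subsequent measurements.
```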
GPU computing of compressible flow problems by a meshless method with space-filling curves
NASA Astrophysics Data System (ADS)
Ma, Z. H.; Wang, H.; Pu, S. H.
2014-04-01
A graphic processing unit (GPU) implementation of a meshless method for solving compressible flow problems is presented in this paper. A least-squares fit is used to discretize the spatial derivatives of the Euler equations and an upwind scheme is applied to estimate the flux terms. The compute unified device architecture (CUDA) C programming model is employed to efficiently and flexibly port the meshless solver from CPU to GPU. Considering the data locality of randomly distributed points, space-filling curves are adopted to renumber the points in order to improve the memory performance. Detailed evaluations are first carried out to assess the accuracy and conservation property of the underlying numerical method. Then the GPU-accelerated flow solver is used to solve external steady flows over aerodynamic configurations. Representative results are validated through extensive comparisons with the experimental, finite volume, or other available reference solutions. Performance analysis reveals that the running time of simulations is significantly reduced, with impressive speedups (more than an order of magnitude) achieved.
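A minimal sketch of the space-filling-curve renumbering, assuming a Morton (Z-order) curve; the paper does not specify which curve was used, and the point cloud here is synthetic.

```python
import numpy as np

def interleave_bits(v):
    """Spread the lower 16 bits of v so one zero bit separates each bit."""
    v &= 0xFFFF
    v = (v | (v << 8)) & 0x00FF00FF
    v = (v | (v << 4)) & 0x0F0F0F0F
    v = (v | (v << 2)) & 0x33333333
    v = (v | (v << 1)) & 0x55555555
    return v

def morton_key(ix, iy):
    """Z-order (Morton) key for integer grid coordinates."""
    return (interleave_bits(iy) << 1) | interleave_bits(ix)

# Hypothetical scattered point cloud; quantize to a 2^16 grid and sort.
pts = np.random.default_rng(2).uniform(size=(10000, 2))
grid = np.floor(pts * 65535).astype(np.int64)
keys = [morton_key(int(x), int(y)) for x, y in grid]
order = np.argsort(keys)
pts_renumbered = pts[order]   # neighbors in space are now close in memory
```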
Metabolic Consequences of Hepatic Steatosis in Overweight and Obese Adolescents
Wicklow, Brandy A.; Wittmeier, Kristy D.M.; MacIntosh, Andrea C.; Sellers, Elizabeth A.C.; Ryner, Lawrence; Serrai, Hacene; Dean, Heather J.; McGavock, Jonathan M.
2012-01-01
OBJECTIVE To test the hypothesis that hepatic steatosis is associated with risk factors for type 2 diabetes in overweight and obese youth, mediated by cardiorespiratory fitness. RESEARCH DESIGN AND METHODS This was a cross-sectional study comparing insulin sensitivity between 30 overweight and obese adolescents with hepatic steatosis, 68 overweight and obese adolescents without hepatic steatosis, and 11 healthy weight adolescents without hepatic steatosis. Cardiorespiratory fitness was determined by a graded maximal exercise test on a cycle ergometer. Secondary outcomes included presence of metabolic syndrome and glucose response to a 75-g oral glucose challenge. RESULTS The presence of hepatic steatosis was associated with 55% lower insulin sensitivity (P = 0.02) and a twofold greater prevalence of metabolic syndrome (P = 0.001). Differences in insulin sensitivity (3.5 vs. 4.5 mU·kg⁻¹·min⁻¹, P = 0.03), prevalence of metabolic syndrome (48 vs. 20%, P = 0.03), and glucose area under the curve (816 vs. 710, P = 0.04) remained between groups after matching for age, sex, and visceral fat. The association between hepatic steatosis and insulin sensitivity (β = −0.24, t = −2.29, P < 0.025), metabolic syndrome (β = −0.54, t = −5.8, P < 0.001), and glucose area under the curve (β = 0.33, t = 3.3, P < 0.001) was independent of visceral and whole-body adiposity. Cardiorespiratory fitness was not associated with hepatic steatosis, insulin sensitivity, or presence of metabolic syndrome. CONCLUSIONS Hepatic steatosis is associated with type 2 diabetes risk factors independent of cardiorespiratory fitness, whole-body adiposity, and visceral fat mass. PMID:22357180
Enhancements of Bayesian Blocks; Application to Large Light Curve Databases
NASA Technical Reports Server (NTRS)
Scargle, Jeff
2015-01-01
Bayesian Blocks are optimal piecewise linear representations (step function fits) of light curves. The simple algorithm implementing this idea, using dynamic programming, has been extended to include more data modes and fitness metrics, multivariate analysis, and data on the circle (Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations, Scargle, Norris, Jackson and Chiang 2013, ApJ, 764, 167), as well as new results on background subtraction and refinement of the procedure for precise timing of transient events in sparse data. Example demonstrations will include exploratory analysis of the Kepler light curve archive in a search for "star-tickling" signals from extraterrestrial civilizations. (The Cepheid Galactic Internet, Learned, Kudritzki, Pakvasa, and Zee, 2008, arXiv: 0809.0339; Walkowicz et al., in progress).
2D Bayesian automated tilted-ring fitting of disc galaxies in large H I galaxy surveys: 2DBAT
NASA Astrophysics Data System (ADS)
Oh, Se-Heon; Staveley-Smith, Lister; Spekkens, Kristine; Kamphuis, Peter; Koribalski, Bärbel S.
2018-01-01
We present a novel algorithm based on a Bayesian method for 2D tilted-ring analysis of disc galaxy velocity fields. Compared to the conventional algorithms based on a chi-squared minimization procedure, this new Bayesian-based algorithm suffers less from local minima of the model parameters even with highly multimodal posterior distributions. Moreover, the Bayesian analysis, implemented via Markov Chain Monte Carlo sampling, only requires broad ranges of posterior distributions of the parameters, which makes the fitting procedure fully automated. This feature will be essential when performing kinematic analysis on the large number of resolved galaxies expected to be detected in neutral hydrogen (H I) surveys with the Square Kilometre Array and its pathfinders. The so-called 2D Bayesian Automated Tilted-ring fitter (2DBAT) implements Bayesian fits of 2D tilted-ring models in order to derive rotation curves of galaxies. We explore 2DBAT performance on (a) artificial H I data cubes built based on representative rotation curves of intermediate-mass and massive spiral galaxies, and (b) Australia Telescope Compact Array H I data from the Local Volume H I Survey. We find that 2DBAT works best for well-resolved galaxies with intermediate inclinations (20° < i < 70°), complementing 3D techniques better suited to modelling inclined galaxies.
Function approximation and documentation of sampling data using artificial neural networks.
Zhang, Wenjun; Barrion, Albert
2006-11-01
Biodiversity studies in ecology often begin with the fitting and documentation of sampling data. This study was conducted to approximate functions for sampling data and to document the sampling information using artificial neural network algorithms, based on invertebrate data sampled in an irrigated rice field. Three types of sampling data, i.e., the curve of species richness vs. sample size, the rarefaction curve, and the curve of mean abundance of newly sampled species vs. sample size, were fitted and documented using a BP (backpropagation) network and an RBF (radial basis function) network. For comparison, the Arrhenius model, the rarefaction model, and the power function were tested for their ability to fit these data. The results show that the BP and RBF networks fit the data better than these models, with smaller errors. BP and RBF networks can fit non-linear functions (sampling data) to a specified accuracy and do not require mathematical assumptions. In addition to interpolation, the BP network can be used to extrapolate the functions, and the asymptote of the sampling data can be drawn. The BP network takes longer to train and its results are less stable compared with the RBF network; the RBF network requires more neurons to fit functions and generally may not be used to extrapolate the functions. The mathematical function for sampling data can be fitted exactly using artificial neural network algorithms by adjusting the desired accuracy and maximum iterations. The total number of functional species of invertebrates in the tropical irrigated rice field was extrapolated as 140 to 149 using the trained BP network, similar to the observed richness.
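A minimal sketch of fitting a species-accumulation curve with a small backpropagation network, using scikit-learn's MLPRegressor on synthetic data as a stand-in for the paper's BP implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical species-accumulation data: sample size vs. cumulative richness.
n = np.arange(1, 101).reshape(-1, 1).astype(float)
richness = 150.0 * (1.0 - np.exp(-n / 40.0)).ravel()
richness += np.random.default_rng(3).normal(scale=1.0, size=richness.size)

# Small backpropagation-trained network as a smooth function approximator.
net = MLPRegressor(hidden_layer_sizes=(20, 20), activation="tanh",
                   max_iter=20000, tol=1e-7, random_state=0)
net.fit(n / 100.0, richness)   # scale inputs for stable training

# Interpolate and (cautiously) extrapolate the accumulation curve.
print(net.predict(np.array([[0.5], [1.5]])))   # n = 50 and n = 150
```

As the abstract notes, extrapolation beyond the sampled range is the fragile part; the interpolated fit is typically far more reliable than the extrapolated asymptote.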
Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.
2016-01-01
Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
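A minimal sketch of the two ideas evaluated above, temporal binning and fixing the lifetimes, applied to a synthetic low-count biexponential decay (instrument response convolution omitted for brevity).

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
TAU1, TAU2 = 0.4, 2.5   # ns, lifetimes fixed to known values

t256 = np.linspace(0.0, 12.5, 256)   # 256 TCSPC time bins (ns)
shape = 0.7 * np.exp(-t256 / TAU1) + 0.3 * np.exp(-t256 / TAU2)
counts = rng.poisson(700.0 * shape / shape.sum())   # ~700 photons per decay

# Temporal binning: merge 256 bins down to 42 to raise counts per bin.
k = 6
t42 = t256[:252].reshape(-1, k).mean(axis=1)
c42 = counts[:252].reshape(-1, k).sum(axis=1).astype(float)

def model(t, amp, a1):
    """Biexponential decay with fixed lifetimes; only amp and a1 are free."""
    return amp * (a1 * np.exp(-t / TAU1) + (1.0 - a1) * np.exp(-t / TAU2))

popt, _ = curve_fit(model, t42, c42, p0=(c42.max(), 0.5))
print(f"fitted short-lifetime fraction a1 = {popt[1]:.3f} (true 0.7)")
```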
Coral-Ghanem, Cleusa; Alves, Milton Ruiz
2008-01-01
To evaluate the clinical performance of Monocurve and Bicurve (Soper-McGuire design) rigid gas-permeable contact lens fitting in patients with keratoconus, a prospective, randomized comparative clinical trial was conducted with a minimum follow-up of six months in two groups of 63 patients. One group was fitted with Monocurve contact lenses and the other with the Bicurve Soper-McGuire design. Study variables included the fluorescein pattern of the lens-to-cornea fitting relationship, location and morphology of the cone, presence and degree of punctate keratitis and other corneal surface alterations, topographic changes, distance visual acuity corrected with contact lenses, and survival analysis for remaining with the same contact lens design during the study. During the follow-up there was a decrease in the number of eyes with advanced and central cones fitted with Monocurve lenses, and an increase in those fitted with the Soper-McGuire design. In the Monocurve group, a flattening of both the steepest and the flattest keratometric curves was observed. In the Soper-McGuire group, a steepening of the flattest keratometric curve and a flattening of the steepest keratometric curve were observed. There was a decrease in best-corrected visual acuity with contact lens in the Monocurve group. Survival at a mean follow-up of six months was 60.32% for the Monocurve lens and 71.43% for the Soper-McGuire design. Due to the changes observed in corneal topography, the same contact lens design did not provide an ideal fit for all patients during the follow-up period. The Soper-McGuire lenses performed better than the Monocurve lenses in advanced and central keratoconus.
STRONG LENS TIME DELAY CHALLENGE. II. RESULTS OF TDC1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Kai; Treu, Tommaso; Marshall, Phil
2015-02-10
We present the results of the first strong lens time delay challenge. The motivation, experimental design, and entry level challenge are described in a companion paper. This paper presents the main challenge, TDC1, which consisted of analyzing thousands of simulated light curves blindly. The observational properties of the light curves cover the range in quality obtained for current targeted efforts (e.g., COSMOGRAIL) and expected from future synoptic surveys (e.g., LSST), and include simulated systematic errors. Seven teams participated in TDC1, submitting results from 78 different method variants. After describing each method, we compute and analyze basic statistics measuring accuracy (or bias) A, goodness of fit χ^2, precision P, and success rate f. For some methods we identify outliers as an important issue. Other methods show that outliers can be controlled via visual inspection or conservative quality control. Several methods are competitive, i.e., give |A| < 0.03, P < 0.03, and χ^2 < 1.5, with some of the methods already reaching sub-percent accuracy. The fraction of light curves yielding a time delay measurement is typically in the range f = 20%-40%. It depends strongly on the quality of the data: COSMOGRAIL-quality cadence and light curve lengths yield significantly higher f than does sparser sampling. Taking the results of TDC1 at face value, we estimate that LSST should provide around 400 robust time-delay measurements, each with P < 0.03 and |A| < 0.01, comparable to current lens modeling uncertainties. In terms of observing strategies, we find that A and f depend mostly on season length, while P depends mostly on cadence and campaign duration.
The Co-simulation of Humanoid Robot Based on Solidworks, ADAMS and Simulink
NASA Astrophysics Data System (ADS)
Song, Dalei; Zheng, Lidan; Wang, Li; Qi, Weiwei; Li, Yanli
A simulation method for an adaptive controller is proposed for the humanoid robot system based on co-simulation with Solidworks, ADAMS, and Simulink. A complex mathematical modeling process is avoided by this method, and the real-time dynamic simulation capability of Simulink is exploited fully. The method can be generalized to other complicated control systems. It is adopted here to build and analyse the model of a humanoid robot, and the trajectory tracking and adaptive controller design also proceed based on it. The trajectory tracking performance is evaluated by least-squares curve fitting. Comparative analysis shows that the anti-interference capability of the robot is substantially improved.
The GTC exoplanet transit spectroscopy survey . VII. An optical transmission spectrum of WASP-48b
NASA Astrophysics Data System (ADS)
Murgas, F.; Pallé, E.; Parviainen, H.; Chen, G.; Nortmann, L.; Nowak, G.; Cabrera-Lavers, A.; Iro, N.
2017-09-01
Context. Transiting planets offer an excellent opportunity for characterizing the atmospheres of extrasolar planets under very different conditions from those found in our solar system. Aims: We are currently carrying out a ground-based survey to obtain the transmission spectra of several extrasolar planets using the 10 m Gran Telescopio Canarias. In this paper we investigate the extrasolar planet WASP-48b, a hot Jupiter orbiting an F-type star with a period of 2.14 days. Methods: We obtained long-slit optical spectroscopy of one transit of WASP-48b with the Optical System for Imaging and low-Intermediate-Resolution Integrated Spectroscopy (OSIRIS) spectrograph. We integrated the spectrum of WASP-48 and one reference star in several channels with different wavelength ranges, creating numerous color light curves of the transit. We fit analytic transit curves to the data, taking into account the systematic effects present in the time series, in an effort to measure the change of the planet-to-star radius ratio (Rp/Rs) across wavelength. The change in transit depth can be compared with atmosphere models to infer the presence of particular atomic or molecular compounds in the atmosphere of WASP-48b. Results: After removing the transit model and systematic trends from the curves we reached precisions between 261 ppm and 455-755 ppm for the white and spectroscopic light curves, respectively. We obtained Rp/Rs uncertainties between 0.8 × 10^-3 and 1.5 × 10^-3 for all the curves analyzed in this work. The measured transit depth for the curves made by integrating the wavelength range between 530 nm and 905 nm is in agreement with previous studies. We report a relatively flat transmission spectrum for WASP-48b with no statistically significant detection of atmospheric species, although the theoretical models that fit the data more closely include TiO and VO. The transit light curves are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/605/A114
Barton, Zachary J; Rodríguez-López, Joaquín
2017-03-07
We report a method of precisely positioning a Hg-based ultramicroelectrode (UME) for scanning electrochemical microscopy (SECM) investigations of any substrate. Hg-based probes are capable of performing amalgamation reactions with metal cations, which avoid unwanted side reactions and positive feedback mechanisms that can prove problematic for traditional probe positioning methods. However, prolonged collection of ions eventually leads to saturation of the amalgam accompanied by irreversible loss of Hg. In order to obtain negative feedback positioning control without risking damage to the SECM probe, we implement cyclic voltammetry probe approach surfaces (CV-PASs), consisting of CVs performed between incremental motor movements. The amalgamation current, peak stripping current, and integrated stripping charge extracted from a shared CV-PAS give three distinct probe approach curves (CV-PACs), which can be used to determine the tip-substrate gap to within 1% of the probe radius. Using finite element simulations, we establish a new protocol for fitting any CV-PAC and demonstrate its validity with experimental results for sodium and potassium ions in propylene carbonate by obtaining over 3 orders of magnitude greater accuracy and more than 20-fold greater precision than existing methods. Considering the timescales of diffusion and amalgam saturation, we also present limiting conditions for obtaining and fitting CV-PAC data. The ion-specific signals isolated in CV-PACs allow precise and accurate positioning of Hg-based SECM probes over any sample and enable the deployment of CV-PAS SECM as an analytical tool for traditionally challenging conditions.
Wu, Yiping; Zhao, Xiaohua; Chen, Chen; He, Jiayuan; Rong, Jian; Ma, Jianming
2016-10-01
In China, the Chevron alignment sign on highways is a vertical rectangle with a white arrow and border on a blue background, which differs from its counterparts in other countries. Moreover, little research has been devoted to the effectiveness of China's Chevron signs; there is still no practical method to quantitatively describe their impact on driver performance in roadway curves. In this paper, a driving simulator experiment collected data on the driving performance of 30 young male drivers as they navigated 29 different horizontal curves under different conditions (presence of Chevron signs, curve radius, and curve direction). To address heterogeneity in the data, three models were estimated and tested: a pooled-data linear regression model, a fixed effects model, and a random effects model. According to the Hausman test and the Akaike Information Criterion (AIC), the random effects model offers the best fit. The study explores the relationship between driver performance (i.e., vehicle speed and lane position) and horizontal curves with respect to curvature, presence of Chevron signs, and curve direction, laying a foundation for procedures and guidelines that would allow more uniform and efficient deployment of Chevron signs on China's highways.
He, Jia-yao; Peng, Rong-fei; Zhang, Zhan-xia
2002-02-01
A self-constructed visible spectrophotometer using an acousto-optic tunable filter (AOTF) as the dispersing element is described. Two different AOTFs, one from the Institute for Silicate (Shanghai, China) and the other from Brimrose (USA), were tested. The software, written in Visual C++ and operated on a Windows 98 platform, is an application with a dual database and multiple windows: scanning, quantitative, calibration, and result. A Fourier self-deconvolution algorithm is incorporated to improve the spectral resolution. The wavelengths are calibrated using polynomial curve fitting. The spectra and calibration curves of soluble aniline blue and phenol red are presented to demonstrate the feasibility of the constructed spectrophotometer.
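A minimal sketch of polynomial wavelength calibration, assuming hypothetical (drive frequency, reference wavelength) pairs; the actual AOTF tuning relation and reference lines would differ.

```python
import numpy as np

# Hypothetical calibration pairs: AOTF drive frequency (MHz) vs. known
# wavelengths (nm) of reference spectral lines.
freq_mhz = np.array([60.0, 70.0, 80.0, 90.0, 100.0, 110.0])
wavelength_nm = np.array([700.2, 641.8, 590.5, 546.1, 508.6, 476.3])

# Fit a cubic polynomial mapping drive frequency to wavelength.
coeffs = np.polyfit(freq_mhz, wavelength_nm, deg=3)
calibrate = np.poly1d(coeffs)

print(calibrate(85.0))   # wavelength selected at an arbitrary drive frequency
residuals = wavelength_nm - calibrate(freq_mhz)
print("max calibration residual (nm):", np.abs(residuals).max())
```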
Verstraeten, B.; Sermeus, J.; Salenbien, R.; Fivez, J.; Shkerdin, G.; Glorieux, C.
2015-01-01
The underlying working principle of detecting impulsive stimulated scattering signals in a differential configuration of heterodyne diffraction detection is unraveled by involving optical scattering theory. The feasibility of the method for the thermoelastic characterization of coating-substrate systems is demonstrated on the basis of simulated data containing typical levels of noise. Besides the classical analysis of the photoacoustic part of the signals, which involves fitting surface acoustic wave dispersion curves, the photothermal part of the signals is analyzed by introducing thermal wave dispersion curves to represent and interpret their grating wavelength dependence. The intrinsic possibilities and limitations of both inverse problems are quantified by making use of least and most squares analysis. PMID:26236643
Activation cross-sections of proton induced reactions on vanadium in the 37-65 MeV energy range
NASA Astrophysics Data System (ADS)
Ditrói, F.; Tárkányi, F.; Takács, S.; Hermanne, A.
2016-08-01
Experimental excitation functions for proton induced reactions on natural vanadium in the 37-65 MeV energy range were measured with the activation method using a stacked foil irradiation technique. By using high resolution gamma spectrometry cross-section data for the production of 51,48Cr, 48V, 48,47,46,44m,44g,43Sc and 43,42K were determined. Comparisons with the earlier published data are presented and results predicted by different theoretical codes (EMPIRE and TALYS) are included. Thick target yields were calculated from a fit to our experimental excitation curves and compared with the earlier experimental yield data. Depth distribution curves to be used for thin layer activation (TLA) are also presented.
Design data for radars based on 13.9 GHz Skylab scattering coefficient measurements
NASA Technical Reports Server (NTRS)
Moore, R. K. (Principal Investigator)
1974-01-01
The author has identified the following significant results. Measurements made at 13.9 GHz with the radar scatterometer on Skylab have been combined to produce median curves of the variation of scattering coefficient with angle of incidence out to 45 deg. Because of the large number of observations, and the large area averaged for each measured data point, these curves may be used as a new design base for radars. A reasonably good fit at larger angles is obtained using the theoretical expression based on an exponential height correlation function and also using Lambert's law. For angles under 10 deg, a different fit based on the exponential correlation function, and a fit based on geometric optics expressions are both reasonably valid.
Foveal Curvature and Asymmetry Assessed Using Optical Coherence Tomography.
VanNasdale, Dean A; Eilerman, Amanda; Zimmerman, Aaron; Lai, Nicky; Ramsey, Keith; Sinnott, Loraine T
2017-06-01
The aims of this study were to use cross-sectional optical coherence tomography imaging and custom curve fitting software to evaluate and model the foveal curvature as a spherical surface, to compare the radius of curvature in the horizontal and vertical meridians, and to test the sensitivity of this technique to anticipated meridional differences. Six 30-degree foveal-centered radial optical coherence tomography cross-section scans were acquired in the right eye of 20 clinically normal subjects. Cross sections were manually segmented, and custom curve fitting software was used to determine the foveal pit radius of curvature using the central 500, 1000, and 1500 μm of the foveal contour. The radius of curvature was compared across the different fitting distances, and root mean square error was used to determine goodness of fit. The radius of curvature was compared between the horizontal and vertical meridians for each fitting distance. The radius of curvature was significantly different when comparing each of the three fitting distances (P < .01 for each comparison). The average radii of curvature were 970 μm (95% confidence interval [CI], 913 to 1028 μm), 1386 μm (95% CI, 1339 to 1439 μm), and 2121 μm (95% CI, 2066 to 2183 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. Root mean square error was also significantly different when comparing each fitting distance (P < .01 for each comparison). The average root mean square errors were 2.48 μm (95% CI, 2.41 to 2.53 μm), 6.22 μm (95% CI, 5.77 to 6.60 μm), and 13.82 μm (95% CI, 12.93 to 14.58 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. The radius of curvature between the horizontal and vertical meridians was statistically different only at the 1000- and 1500-μm fitting distances (P < .01 for each), with the horizontal meridian being flatter than the vertical. The foveal contour can be modeled as a sphere with low curve fitting error over a limited distance, and the technique is capable of detecting subtle foveal contour differences between meridians.
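A minimal sketch of fitting a spherical (circular in cross section) surface to segmented contour points, using the algebraic Kåsa least-squares circle fit on synthetic data; the study's custom software may use a different formulation.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit; returns center and radius.
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + c, with c = r^2 - cx^2 - cy^2."""
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

# Hypothetical segmented foveal contour points from one OCT cross section
# (central ~1000 um), in micrometers, with a little segmentation noise.
theta = np.linspace(-0.35, 0.35, 60)
R_true = 1400.0
x = R_true * np.sin(theta)
y = R_true * (1.0 - np.cos(theta))
y += np.random.default_rng(5).normal(scale=2.0, size=theta.size)

cx, cy, r = fit_circle(x, y)
rmse = np.sqrt(np.mean((np.hypot(x - cx, y - cy) - r) ** 2))
print(f"radius of curvature = {r:.0f} um, RMSE = {rmse:.2f} um")
```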
State estimation with incomplete nonlinear constraint
NASA Astrophysics Data System (ADS)
Huang, Yuan; Wang, Xueying; An, Wei
2017-10-01
A state estimation problem with a new type of constraint, termed an incomplete nonlinear constraint, is considered. Targets often move on curved roads; if the width of the road is neglected, the road can be treated as a constraint, and since the positions of sensors (e.g., radar) are known in advance, this information can be used to enhance the performance of the tracking filter. The problem of how to incorporate this prior knowledge is considered. In this paper, a second-order state constraint is considered. An ellipse fitting algorithm is adopted to incorporate the prior knowledge by estimating the radius of the trajectory; the fitting problem is transformed into a nonlinear estimation problem. The estimated ellipse function is used to approximate the nonlinear constraint. Then, typical nonlinear constraint methods proposed in recent works can be used to constrain the target state. Monte Carlo simulation results are presented to illustrate the effectiveness of the proposed method in state estimation with an incomplete constraint.
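A minimal sketch of the ellipse-fitting step, assuming a simple linear least-squares conic fit ax² + bxy + cy² + dx + ey = 1 to synthetic track positions; note this algebraic fit does not by itself guarantee an elliptical solution.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1."""
    A = np.column_stack([x**2, x * y, y**2, x, y])
    coeffs, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return coeffs   # (a, b, c, d, e)

# Hypothetical noisy track positions along a curved road segment.
rng = np.random.default_rng(6)
t = np.linspace(0.2, 1.2, 50)
x = 900.0 * np.cos(t) + rng.normal(scale=3.0, size=t.size)
y = 600.0 * np.sin(t) + rng.normal(scale=3.0, size=t.size)

a, b, c, d, e = fit_conic(x, y)
# The fitted conic can then serve as the approximate nonlinear state
# constraint, e.g., by projecting filter estimates onto the curve.
print("conic coefficients:", a, b, c, d, e)
```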
Ling, Ying; Zhang, Minqiang; Locke, Kenneth D; Li, Guangming; Li, Zonglong
2016-01-01
The Circumplex Scales of Interpersonal Values (CSIV) is a 64-item self-report measure of goals from each octant of the interpersonal circumplex. We used item response theory methods to compare whether dominance models or ideal point models best described how people respond to CSIV items. Specifically, we fit a polytomous dominance model called the generalized partial credit model and an ideal point model of similar complexity called the generalized graded unfolding model to the responses of 1,893 college students. The results of both graphical comparisons of item characteristic curves and statistical comparisons of model fit suggested that an ideal point model best describes the process of responding to CSIV items. The different models produced different rank orderings of high-scoring respondents, but overall the models did not differ in their prediction of criterion variables (agentic and communal interpersonal traits and implicit motives).
The distribution of mass for spiral galaxies in clusters and in the field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forbes, D.A.; Whitmore, B.C.
1989-04-01
A comparison is made between the mass distributions of spiral galaxies in clusters and in the field using Burstein's mass-type methodology. Both H-alpha emission-line rotation curves and more extended H I rotation curves are used. The fitting technique for determining mass types used by Burstein and coworkers has been replaced by an objective chi-squared method. Mass types are shown to be a function of both Hubble type and luminosity, contrary to earlier results. The present data show a difference in the distribution of mass types for spiral galaxies in the field and in clusters, in the sense that mass type I galaxies, where the inner and outer velocity gradients are similar, are generally found in the field rather than in clusters. This can be understood in terms of the results of Whitmore, Forbes, and Rubin (1988), who find that the rotation curves of galaxies in the central region of clusters are generally falling, while the outer galaxies in a cluster and field galaxies tend to have flat or rising rotation curves.
NASA Astrophysics Data System (ADS)
Narang, Benjamin; Phillips, Michael; Knapp, Karen; Appelboam, Andy; Reuben, Adam; Slabaugh, Greg
2015-03-01
Assessment of the cervical spine using x-ray radiography is an important task when providing emergency room care to trauma patients suspected of a cervical spine injury. In routine clinical practice, a physician will inspect the alignment of the cervical spine vertebrae by mentally tracing three alignment curves along the anterior and posterior sides of the cervical vertebral bodies, as well as one along the spinolaminar junction. In this paper, we propose an algorithm to semi-automatically delineate the spinolaminar junction curve, given a single reference point and the corners of each vertebral body. From the reference point, our method extracts a region of interest and performs template matching using normalized cross-correlation to find matching regions along the spinolaminar junction. The matching points are then fit with a third-order spline, producing an interpolating curve. Experimental results are promising, yielding an average modified Hausdorff distance of 1.8 mm, validated on a dataset of 29 patients including cases with degenerative change, retrolisthesis, and fracture.
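A minimal sketch of the matching-and-spline pipeline under stated assumptions: OpenCV for normalized cross-correlation, fixed horizontal search bands in place of the paper's guided local search, an assumed 0.6 correlation threshold, and a hypothetical image file.

```python
import numpy as np
import cv2
from scipy.interpolate import UnivariateSpline

# Hypothetical lateral radiograph and a patch around the reference point on
# the spinolaminar junction (both 8-bit grayscale).
image = cv2.imread("cspine_lateral.png", cv2.IMREAD_GRAYSCALE)
template = image[200:232, 300:332]   # 32x32 patch at the reference point

# Normalized cross-correlation response over the whole image.
ncc = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)

# Keep the best match in each horizontal band (roughly one per vertebral level).
points = []
for y0 in range(0, ncc.shape[0] - 40, 40):
    band = ncc[y0:y0 + 40, :]
    iy, ix = np.unravel_index(np.argmax(band), band.shape)
    if band[iy, ix] > 0.6:   # correlation threshold (assumed value)
        points.append((ix, y0 + iy))

points = np.array(sorted(points, key=lambda p: p[1]), dtype=float)

# Third-order smoothing spline x = f(y) through the matched points.
spline = UnivariateSpline(points[:, 1], points[:, 0], k=3, s=len(points))
y_dense = np.linspace(points[0, 1], points[-1, 1], 200)
curve = np.column_stack([spline(y_dense), y_dense])
```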
Charge transfer kinetics at the solid-solid interface in porous electrodes
NASA Astrophysics Data System (ADS)
Bai, Peng; Bazant, Martin Z.
2014-04-01
Interfacial charge transfer is widely assumed to obey the Butler-Volmer kinetics. For certain liquid-solid interfaces, the Marcus-Hush-Chidsey theory is more accurate and predictive, but it has not been applied to porous electrodes. Here we report a simple method to extract the charge transfer rates in carbon-coated LiFePO4 porous electrodes from chronoamperometry experiments, obtaining curved Tafel plots that contradict the Butler-Volmer equation but fit the Marcus-Hush-Chidsey prediction over a range of temperatures. The fitted reorganization energy matches the Born solvation energy for electron transfer from carbon to the iron redox site. The kinetics are thus limited by electron transfer at the solid-solid (carbon-LixFePO4) interface rather than by ion transfer at the liquid-solid interface, as previously assumed. The proposed experimental method generalizes Chidsey’s method for phase-transforming particles and porous electrodes, and the results show the need to incorporate Marcus kinetics in modelling batteries and other electrochemical systems.
Automated data processing and radioassays.
Samols, E; Barrows, G H
1978-04-01
Radioassays include (1) radioimmunoassays, (2) competitive protein-binding assays based on competition for limited antibody or specific binding protein, (3) immunoradiometric assays, based on competition for excess labeled antibody, and (4) radioreceptor assays. Most mathematical models describing the relationship between labeled ligand binding and unlabeled ligand concentration have been based on the law of mass action or the isotope dilution principle. These models provide useful data reduction programs, but are theoretically unsatisfactory because competitive radioassay usually is not based on classical dilution principles, labeled and unlabeled ligand do not have to be identical, antibodies (or receptors) are frequently heterogeneous, equilibrium usually is not reached, and there are probably steric and cooperative influences on binding. An alternative, more flexible mathematical model, based on the probability of binding collisions being restricted by the surface area of reactive divalent sites on antibody and on univalent antigen, has been derived. Application of these models to automated data reduction allows standard curves to be fitted by a mathematical expression, and unknown values are calculated from binding data. The virtues and pitfalls of point-to-point data reduction, linear transformations, and curvilinear fitting approaches are presented. A third-order polynomial using the square root of concentration closely approximates the mathematical model based on probability, and in our experience this method provides the most acceptable results with all varieties of radioassays. With this curvilinear system, linear point connection should be used between the zero standard and the beginning of significant dose response, and also towards saturation. The importance is stressed of limiting the range of reported automated assay results to that portion of the standard curve that delivers optimal sensitivity. Published methods for automated data reduction of Scatchard plots for radioreceptor assays are limited by calculation of a single mean K value. The quality of the input data is generally the limiting factor in achieving good precision with automated as with manual data reduction. The major advantages of computerized curve fitting include: (1) handling large amounts of data rapidly and without computational error; (2) providing useful quality-control data; (3) indicating within-batch variance of the test results; (4) providing ongoing quality-control charts and between-assay variance.
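A minimal sketch of the third-order polynomial in the square root of concentration, fitted to a hypothetical standard curve and inverted to read unknowns.

```python
import numpy as np

# Hypothetical RIA standard curve: concentration (ng/mL) vs. percent bound.
conc = np.array([0.0, 0.5, 1.0, 2.5, 5.0, 10.0, 25.0, 50.0])
bound = np.array([100.0, 92.0, 84.0, 68.0, 53.0, 38.0, 21.0, 12.0])

# Third-order polynomial in sqrt(concentration), as suggested above.
coeffs = np.polyfit(np.sqrt(conc), bound, deg=3)
standard = np.poly1d(coeffs)

def unknown_conc(b_measured):
    """Invert the standard curve: find sqrt(conc) roots, keep the real
    root lying within the calibrated range."""
    roots = (standard - b_measured).roots
    real = [r.real for r in roots
            if abs(r.imag) < 1e-9 and 0.0 <= r.real <= np.sqrt(conc[-1])]
    return real[0] ** 2 if real else None

print(unknown_conc(60.0))   # concentration estimate for 60% bound
```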
Nakagawa, Yoshihide; Amino, Mari; Inokuchi, Sadaki; Hayashi, Satoshi; Wakabayashi, Tsutomu; Noda, Tatsuya
2017-04-01
Amplitude spectral area (AMSA), an index for analysing ventricular fibrillation (VF) waveforms, is thought to predict the return of spontaneous circulation (ROSC) after electric shocks, but its validity is unconfirmed. We developed an equation to predict ROSC in which the change in AMSA (ΔAMSA) is added to the AMSA measured immediately before the first shock (AMSA1), and we examine its validity by comparing it with the conventional AMSA1-only equation. We retrospectively investigated 285 VF patients given prehospital electric shocks by emergency medical services. ΔAMSA was calculated by subtracting AMSA1 from the last AMSA measured immediately before the last prehospital electric shock. Multivariate logistic regression analysis was performed using post-shock ROSC as the dependent variable. Analysis data were subjected to receiver operating characteristic (ROC) curve analysis, goodness-of-fit testing using a likelihood ratio test, and the bootstrap method. AMSA1 (odds ratio (OR) 1.151, 95% confidence interval (CI) 1.086-1.220) and ΔAMSA (OR 1.289, 95% CI 1.156-1.438) were independent factors influencing ROSC induction by electric shock. The area under the curve (AUC) for predicting ROSC was 0.851 for the AMSA1-only equation and 0.891 for the AMSA1+ΔAMSA equation. Compared with the AMSA1-only equation, the AMSA1+ΔAMSA equation had significantly better goodness of fit (likelihood ratio test P<0.001) and showed good fit in the bootstrap method. Post-shock ROSC was accurately predicted by adding ΔAMSA to AMSA1. AMSA-based ROSC prediction enables application of electric shock to only those patients with a high probability of ROSC, instead of interrupting chest compressions and delivering unnecessary shocks to patients with a low probability of ROSC.
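A minimal sketch of the two competing logistic models and their AUC comparison, on synthetic stand-in data (the study's patient data are not reproduced here).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

# Hypothetical stand-in data: AMSA1, delta-AMSA, and post-shock ROSC labels.
n = 285
amsa1 = rng.gamma(shape=3.0, scale=4.0, size=n)
damsa = rng.normal(scale=3.0, size=n)
logit = -3.0 + 0.14 * amsa1 + 0.25 * damsa
rosc = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

X1 = amsa1.reshape(-1, 1)              # AMSA1-only model
X2 = np.column_stack([amsa1, damsa])   # AMSA1 + delta-AMSA model

for name, X in [("AMSA1 only", X1), ("AMSA1 + dAMSA", X2)]:
    model = LogisticRegression().fit(X, rosc)
    auc = roc_auc_score(rosc, model.predict_proba(X)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```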
Estimation and detection information trade-off for x-ray system optimization
NASA Astrophysics Data System (ADS)
Cushing, Johnathan B.; Clarkson, Eric W.; Mandava, Sagar; Bilgin, Ali
2016-05-01
X-ray Computed Tomography (CT) systems perform complex imaging tasks that involve both detecting and estimating system parameters; a baggage imaging system, for example, performs threat detection while generating reconstructions. This creates a desire to optimize both the detection and the estimation performance of a system, yet most metrics focus on only one of these aspects. When making design choices there is a need for a concise metric that considers both detection and estimation information and presents the user with the collection of possible optimal outcomes. In this paper a graphical analysis called the Estimation and Detection Information Trade-off (EDIT) is explored. EDIT produces curves that allow a system optimization decision to be made based on design constraints and the costs associated with estimation and detection. EDIT analyzes the system in the estimation-information/detection-information space, where the user is free to pick their own method of calculating these measures; any desired figures of merit for detection information and estimation information may be chosen, and the EDIT curves then provide the collection of optimal outcomes. The paper first examines two methods of creating EDIT curves: the curves can be calculated from a wide variety of systems, finding the optimal system by maximizing a figure of merit, or EDIT can be found as an upper bound of the information from a collection of systems. These two methods allow the user to choose a method of calculation that best fits the constraints of their actual system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Martin; /SLAC
2010-12-16
The study of the power density spectrum (PDS) of fluctuations in the X-ray flux from active galactic nuclei (AGN) complements spectral studies in giving us a view into the processes operating in accreting compact objects. An important line of investigation is the comparison of the PDS from AGN with those from galactic black hole binaries; a related area of focus is the scaling relation between time scales for the variability and the black hole mass. The PDS of AGN is traditionally modeled using segments of power laws joined together at so-called break frequencies; associations of the break time scales, i.e., the inverses of the break frequencies, with time scales of physical processes thought to operate in these sources are then sought. I analyze the Method of Light Curve Simulations that is commonly used to characterize the PDS in AGN with a view to making the method as sensitive as possible to the shape of the PDS. I identify several weaknesses in the current implementation of the method and propose alternatives that can substitute for some of the key steps in the method. I focus on the complications introduced by uneven sampling in the light curve, the development of a fit statistic that is better matched to the distributions of power in the PDS, and the statistical evaluation of the fit between the observed data and the model for the PDS. Using archival data on one AGN, NGC 3516, I validate my changes against previously reported results. I also report new results on the PDS in NGC 4945, a Seyfert 2 galaxy with a well-determined black hole mass. This source provides an opportunity to investigate whether the PDS of Seyfert 1 and Seyfert 2 galaxies differ. It is also an attractive object for placement on the black hole mass-break time scale relation. Unfortunately, with the available data on NGC 4945, significant uncertainties on the break frequency in its PDS remain.
AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves
NASA Astrophysics Data System (ADS)
Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.
2017-02-01
ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
Mamen, Asgeir; Fredriksen, Per Morten
2018-05-01
As children's fitness continues to decline, frequent and systematic monitoring of fitness is important, and easy-to-use, low-cost methods with acceptable accuracy are essential in screening situations. This study aimed to investigate how measurements of body mass index (BMI), waist circumference (WC), and waist-to-height ratio (WHtR) relate to selected measurements of fitness in children. A total of 1731 children from grades 1 to 6 were selected who had a complete set of height, body mass, running performance, handgrip strength, and muscle mass measurements. A composite fitness score was established from the sum of sex- and age-specific z-scores for running performance, handgrip strength, and muscle mass. This fitness z-score was compared to z-scores and quartiles of BMI, WC, and WHtR using analysis of variance, linear regression, and receiver operating characteristic analysis. The regression analysis showed that z-scores for BMI, WC, and WHtR were all linearly related to the composite fitness score, with WHtR having the highest R^2 at 0.80. The correct classification of fit and unfit was relatively high for all three measurements. WHtR gave the best prediction of fitness of the three, with an area under the curve of 0.92 (p < 0.001). BMI, WC, and WHtR were all found to be feasible measurements, but WHtR had a higher precision in its classification into fit and unfit in this population.
Study of Dynamic Characteristics of Aeroelastic Systems Utilizing Randomdec Signatures
NASA Technical Reports Server (NTRS)
Chang, C. S.
1975-01-01
The feasibility of utilizing the random decrement method in conjunction with a signature analysis procedure to determine the dynamic characteristics of an aeroelastic system, for the purpose of on-line prediction of the potential onset of flutter, was examined. Digital computer programs were developed to simulate sampled response signals of a two-mode aeroelastic system. Simulated response data were used to test the random decrement method. A special curve-fit approach was developed for analyzing the resulting signatures. A number of numerical 'experiments' were conducted on the combined process. The method is capable of determining frequency and damping values accurately from randomdec signatures of carefully selected lengths.
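A minimal sketch of computing a random decrement signature from a simulated response; a single AR(2) resonator driven by noise stands in for the two-mode aeroelastic simulation.

```python
import numpy as np
from scipy.signal import lfilter

def randomdec(x, trigger, seg_len):
    """Random decrement signature: ensemble average of segments that start
    at upward crossings of the trigger level."""
    starts = np.where((x[:-1] < trigger) & (x[1:] >= trigger))[0]
    starts = starts[starts + seg_len < len(x)]
    segments = np.stack([x[i:i + seg_len] for i in starts])
    return segments.mean(axis=0), len(starts)

# Hypothetical sampled response of a lightly damped mode driven by noise,
# simulated as an AR(2) resonator (f0 = 8 Hz, zeta = 2%, fs = 500 Hz).
rng = np.random.default_rng(8)
fs, f0, zeta = 500.0, 8.0, 0.02
w0 = 2.0 * np.pi * f0
r = np.exp(-zeta * w0 / fs)                    # pole radius
theta = (w0 * np.sqrt(1.0 - zeta**2)) / fs     # pole angle per sample
x = lfilter([1.0], [1.0, -2.0 * r * np.cos(theta), r * r],
            rng.normal(size=int(60 * fs)))

signature, n = randomdec(x, trigger=x.std(), seg_len=1024)
# 'signature' approximates the free-decay response; frequency and damping can
# then be extracted with a curve fit, as in the paper's signature analysis.
```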
Multi-Filter Photometric Analysis of Three β Lyrae-type Eclipsing Binary Stars
NASA Astrophysics Data System (ADS)
Gardner, T.; Hahs, G.; Gokhale, V.
2015-12-01
We present light curve analysis of three variable stars, ASAS J105855+1722.2, NSVS 5066754, and NSVS 9091101. These objects were selected from a list of β Lyrae candidates published by Hoffman et al. (2008). Light curves were generated using data collected at the 31-inch NURO telescope at the Lowell Observatory in Flagstaff, Arizona in three filters: Bessell B, V, and R. Additional observations were made using the 14-inch Meade telescope at the Truman State Observatory in Kirksville, Missouri using Baader R, G, and B filters. In this paper, we present the light curves for these three objects and generate a truncated eight-term Fourier fit to each light curve. We use the Fourier coefficients from this fit to confirm ASAS J105855+1722.2 and NSVS 5066754 as β Lyrae-type systems, and NSVS 9091101 as possibly an RR Lyrae-type system. We measure the O'Connell effect observed in two of these systems (ASAS J105855+1722.2 and NSVS 5066754) and quantify it by calculating the "Light Curve Asymmetry" (LCA) and the "O'Connell Effect Ratio" (OER).
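A minimal sketch of a truncated eight-term Fourier fit via linear least squares on a synthetic phased light curve; the LCA and OER metrics are not reproduced here.

```python
import numpy as np

def fourier_fit(phase, mag, n_terms=8):
    """Fit m(phi) = a0 + sum_k [a_k cos(2 pi k phi) + b_k sin(2 pi k phi)]
    by linear least squares on the phased light curve."""
    cols = [np.ones_like(phase)]
    for k in range(1, n_terms + 1):
        cols.append(np.cos(2.0 * np.pi * k * phase))
        cols.append(np.sin(2.0 * np.pi * k * phase))
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, mag, rcond=None)
    return coeffs, A @ coeffs

# Hypothetical phased photometry of an eclipsing binary (two minima per cycle).
rng = np.random.default_rng(9)
phase = rng.uniform(size=300)
mag = 12.0 + 0.4 * np.cos(4 * np.pi * phase) + 0.1 * np.cos(2 * np.pi * phase)
mag += rng.normal(scale=0.01, size=phase.size)

coeffs, model = fourier_fit(phase, mag, n_terms=8)
print("a0 and first cosine/sine pairs:", coeffs[:5])
```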
Maghrabi, Mufeed; Al-Abdullah, Tariq; Khattari, Ziad
2018-03-24
The two heating rates method (originally developed for first-order glow peaks) was used for the first time to evaluate the activation energy (E) of glow peaks obeying mixed-order (MO) kinetics. The derived expression for E has an insignificant additional term (on the scale of a few meV) compared with the first-order case; hence, the original two-heating-rates expression for E can be used with excellent accuracy for MO glow peaks. In addition, we derived a simple analytical expression for the MO parameter. The present procedure has the advantage that the MO parameter can now be evaluated from an analytical expression instead of the graphical relation between the geometrical factor and the MO parameter given by existing peak shape methods. The applicability of the derived expressions to real samples was demonstrated for the glow curve of a Li2B4O7:Mn single crystal. The obtained parameters compare very well with those obtained by glow curve fitting and with the available published data.
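As a closing illustration, a minimal sketch of the classical two-heating-rates estimate of E for first-order kinetics, the expression the paper shows remains accurate (to a few meV) for mixed-order peaks; the peak temperatures and heating rates below are hypothetical, and the paper's MO correction terms are not reproduced.

```python
import numpy as np

K_B = 8.617333e-5   # Boltzmann constant (eV/K)

def activation_energy(beta1, T1, beta2, T2):
    """Classical two-heating-rates estimate (first-order kinetics):
    E = k_B * T1*T2/(T1 - T2) * ln[(beta1/beta2) * (T2/T1)**2]."""
    return K_B * T1 * T2 / (T1 - T2) * np.log((beta1 / beta2) * (T2 / T1) ** 2)

# Hypothetical glow-peak maxima recorded at two heating rates.
beta1, T1 = 2.0, 485.0   # K/s, K (peak temperature at the faster rate)
beta2, T2 = 0.5, 466.0   # K/s, K (peak temperature at the slower rate)

print(f"E = {activation_energy(beta1, T1, beta2, T2):.2f} eV")
```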