Bentzley, Brandon S.; Fender, Kimberly M.; Aston-Jones, Gary
2012-01-01
Rationale Behavioral-economic demand curve analysis offers several useful measures of drug self-administration. Although generation of demand curves previously required multiple days, recent within-session procedures allow curve construction from a single 110-min cocaine self-administration session, making behavioral-economic analyses available to a broad range of self-administration experiments. However, a mathematical curve-fitting approach has not been reported for the within-session threshold procedure. Objectives We review demand curve analysis in drug self-administration experiments and provide a quantitative method for fitting curves to single-session data that incorporates relative stability of brain drug concentration. Methods Sprague-Dawley rats were trained to self-administer cocaine, and then tested with the threshold procedure in which the cocaine dose was sequentially decreased on a fixed ratio-1 schedule. Price points (responses/mg cocaine) outside of relatively stable brain cocaine concentrations were removed before curves were fit. Curve-fit accuracy was determined by the degree of correlation between graphical and calculated parameters for cocaine consumption at low price (Q0) and the price at which maximal responding occurred (Pmax). Results Removing price points that occurred at relatively unstable brain cocaine concentrations generated precise estimates of Q0 and resulted in Pmax values with significantly closer agreement to graphical Pmax than conventional methods. Conclusion The exponential demand equation can be fit to single-session data using the threshold procedure for cocaine self-administration. Removing data points that occur during relatively unstable brain cocaine concentrations resulted in more accurate estimates of demand curve slope than graphical methods, permitting a more comprehensive analysis of drug self-administration via a behavioral-economic framework. PMID:23086021
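As a rough illustration of the kind of fit this abstract describes, the sketch below fits the Hursh-Silberberg exponential demand equation, log10 Q = log10 Q0 + k(e^(-alpha*Q0*C) - 1), in log space and locates Pmax numerically as the price of peak expenditure; the price/consumption arrays, the fixed k, and the starting values are all hypothetical, not the authors' data or settings.

```python
import numpy as np
from scipy.optimize import curve_fit

K = 2.0  # range constant of the exponential demand equation, fixed before fitting

def log_demand(price, log_q0, alpha):
    # Hursh-Silberberg demand: log10(Q) = log10(Q0) + K * (exp(-alpha*Q0*price) - 1)
    q0 = 10.0 ** log_q0
    return log_q0 + K * (np.exp(-alpha * q0 * price) - 1.0)

# hypothetical price points (responses/mg) and consumption (mg), unstable points removed
price = np.array([0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
consumption = np.array([1.10, 1.00, 0.95, 0.80, 0.45, 0.08])

(log_q0, alpha), _ = curve_fit(log_demand, price, np.log10(consumption), p0=(0.0, 1e-3))

# Pmax: the price at which responding (expenditure = price * Q) is maximal
grid = np.logspace(-1, 3, 2000)
p_max = grid[np.argmax(grid * 10.0 ** log_demand(grid, log_q0, alpha))]
print(f"Q0 = {10.0 ** log_q0:.2f} mg, alpha = {alpha:.2e}, Pmax = {p_max:.1f}")
```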
ERIC Educational Resources Information Center
Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey
2009-01-01
The purpose of this study was to assess the model fit of a 2PL through comparison with nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that the three nonparametric procedures implemented produced ICCs that are similar to those of the 2PL for items simulated to fit the 2PL. However, for misfitting items,…
Non-linear Growth Models in Mplus and SAS
Grimm, Kevin J.; Ram, Nilam
2013-01-01
Non-linear growth curves, or growth curves that follow a specified non-linear function in time, enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain, we illustrate the procedures for fitting growth models based on logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
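For readers working outside SAS or Mplus, a minimal fixed-effects analogue of one of the sigmoid fits (the Gompertz function) can be written with SciPy; the time points, scores, and starting values below are hypothetical, and the mixed-effects (random-coefficient) structure of the paper's models is deliberately omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, asymptote, displacement, rate):
    # Gompertz growth: y(t) = asymptote * exp(-displacement * exp(-rate * t))
    return asymptote * np.exp(-displacement * np.exp(-rate * t))

t = np.arange(7, dtype=float)                              # measurement waves
y = np.array([12.0, 20.0, 34.0, 51.0, 63.0, 70.0, 73.0])   # achievement scores

params, _ = curve_fit(gompertz, t, y, p0=(80.0, 2.0, 0.5))
print(dict(zip(("asymptote", "displacement", "rate"), params)))
```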
Non-linear Multidimensional Optimization for use in Wire Scanner Fitting
NASA Astrophysics Data System (ADS)
Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; Center Advanced Studies of Accelerators Collaboration
2014-03-01
To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle swarm. In this Python-implemented approach, augmenting the locally convergent NCG with one of the globally convergent methods ensures the quality, robustness, and automation of curve fitting. After comparing the methods, we establish that, given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement of the fit to the data. NCG is the fastest method, so it is the first to attempt data fitting. The curve-fitting procedure escalates to one of the globally convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method allows for an optimal signal fit and can be easily applied to similar problems.
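The escalation logic, local fit first and global method only on failure, can be sketched as follows; here SciPy's Levenberg-Marquardt (curve_fit) stands in for the NCG stage and differential evolution for the NI stage, and the bound choices are illustrative assumptions rather than the collaboration's settings.

```python
import numpy as np
from scipy.optimize import curve_fit, differential_evolution

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

def fit_wire_scan(x, y):
    """Fast local fit first; escalate to a globally convergent method if it fails."""
    p0 = (y.max() - y.min(), x[np.argmax(y)], 0.1 * (x[-1] - x[0]), y.min())
    try:
        popt, _ = curve_fit(gaussian, x, y, p0=p0, maxfev=2000)
        return popt
    except RuntimeError:
        # global fallback: minimize the same chi-square over broad bounds
        span = np.ptp(y)
        bounds = [(0.0, 2.0 * span), (x[0], x[-1]),
                  (1e-6, x[-1] - x[0]), (y.min() - span, y.max())]
        result = differential_evolution(
            lambda p: np.sum((gaussian(x, *p) - y) ** 2), bounds, seed=0)
        return result.x
```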
NASA Astrophysics Data System (ADS)
McCraig, Michael A.; Osinski, Gordon R.; Cloutis, Edward A.; Flemming, Roberta L.; Izawa, Matthew R. M.; Reddy, Vishnu; Fieber-Beyer, Sherry K.; Pompilio, Loredana; van der Meer, Freek; Berger, Jeffrey A.; Bramble, Michael S.; Applin, Daniel M.
2017-03-01
Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical make up of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to working with spectra may find inadequate help or documentation in the scientific literature or in the software packages available for curve fitting. This problem also extends to the parameterization of spectra and the dissemination of derived metrics. Often, when derived metrics are reported, such as band centres, the discussion of exactly how the metrics were derived, or if there was any systematic curve fitting performed, is not included. Herein we provide both recommendations and methods for curve fitting and explanations of the terms and methods used. Techniques to curve fit spectral data of various types are demonstrated using simple-to-understand mathematics and equations written to be used in Microsoft Excel® software, free of macros, in a cut-and-paste fashion that allows one to curve fit spectra in a reasonably user-friendly manner. The procedures use empirical curve fitting, include visualizations, and ameliorates many of the unknowns one may encounter when using black-box commercial software. The provided framework is a comprehensive record of the curve fitting parameters used, the derived metrics, and is intended to be an example of a format for dissemination when curve fitting data.
ERIC Educational Resources Information Center
Hester, Yvette
Least squares methods are sophisticated mathematical curve fitting procedures used in all classical parametric methods. The linear least squares approximation is most often associated with finding the "line of best fit" or the regression line. Since all statistical analyses are correlational and all classical parametric methods are least…
Analyser-based phase contrast image reconstruction using geometrical optics.
Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A
2007-07-21
Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 μm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser.
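A minimal sketch of the rocking-curve fit follows; the symmetric Pearson VII below uses a half-width-at-half-maximum parameterization (m = 1 gives a Lorentzian, large m approaches a Gaussian), and the angle grid, noise level, and starting values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def pearson_vii(theta, i0, theta0, hwhm, m):
    # symmetric Pearson VII; the (2**(1/m) - 1) factor makes hwhm the half width
    return i0 / (1.0 + ((theta - theta0) / hwhm) ** 2 * (2.0 ** (1.0 / m) - 1.0)) ** m

theta = np.linspace(-20.0, 20.0, 81)  # analyser angle offset (hypothetical units)
rng = np.random.default_rng(0)
rocking = pearson_vii(theta, 1.0, 0.0, 6.0, 1.8) + rng.normal(0.0, 0.01, theta.size)

popt, _ = curve_fit(pearson_vii, theta, rocking, p0=(1.0, 0.0, 5.0, 1.5))
```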
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-01-01
A sophisticated non-linear multiparameter fitting program has been used to produce a best fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the Chi-Squared Matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 sec.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-05-01
A sophisticated nonlinear multiparameter fitting program was used to produce a best fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the "Chi-Squared Matrix" or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 s.
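The report's central idea, letting the standard masses float as fitted parameters with their known errors, is what modern errors-in-variables tools do; the sketch below uses SciPy's orthogonal distance regression as a stand-in for the VA02A-based program, with a hypothetical quadratic calibration form and made-up data and uncertainties.

```python
import numpy as np
from scipy import odr

def response(beta, mass):
    # hypothetical calibration form: counts = b0 + b1*mass + b2*mass**2
    return beta[0] + beta[1] * mass + beta[2] * mass ** 2

mass = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])                # mg standards
counts = np.array([120.0, 236.0, 464.0, 688.0, 905.0, 1118.0])
data = odr.RealData(mass, counts, sx=0.002 * mass, sy=0.01 * counts)

fit = odr.ODR(data, odr.Model(response), beta0=[0.0, 1100.0, 0.0]).run()
print(fit.beta, fit.sd_beta)  # calibration parameters and their error estimates
```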
Data reduction using cubic rational B-splines
NASA Technical Reports Server (NTRS)
Chou, Jin J.; Piegl, Les A.
1992-01-01
A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves, including intersection or silhouette lines. The algorithm is based on the convex hull and the variation-diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set, and if that is impossible it subdivides the data set and reconsiders the subset. After accepting the subset, the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a cubic Bezier segment. The algorithm applies this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy even in cases with large tolerances.
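The authors' subdivision algorithm is specific, but the overall goal, replacing many points with a compact cubic spline within a tolerance, can be sketched with SciPy's parametric smoothing spline; the point set and tolerance are hypothetical and this is not the paper's Bezier-segment procedure.

```python
import numpy as np
from scipy import interpolate

# hypothetical points sampled along a silhouette line
pts = np.array([[0.0, 0.0], [0.5, 0.4], [1.1, 0.9], [1.8, 1.1],
                [2.6, 1.0], [3.1, 0.6], [3.5, 0.0]])

tol = 0.01  # the smoothing factor s bounds the sum of squared residuals
tck, u = interpolate.splprep([pts[:, 0], pts[:, 1]], k=3, s=tol * len(pts))
curve = np.array(interpolate.splev(np.linspace(0.0, 1.0, 200), tck))
```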
Fitting integrated enzyme rate equations to progress curves with the use of a weighting matrix.
Franco, R; Aran, J M; Canela, E I
1991-01-01
A method is presented for fitting the product formed-time value pairs taken from progress curves to the integrated rate equation. The procedure is applied to the estimation of the kinetic parameters of the adenosine deaminase system. Simulation studies demonstrate the capabilities of this strategy. A copy of the FORTRAN77 program used can be obtained from the authors by request. PMID:2006914
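A stripped-down version of a progress-curve fit, without the paper's weighting matrix, can be built on the integrated Michaelis-Menten equation, Vmax*t = P + Km*ln(S0/(S0-P)), fitted here in its time-explicit form; S0, the data, and the starting values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

S0 = 100.0  # initial substrate concentration (hypothetical units)

def time_of_product(p, km, vmax):
    # integrated Michaelis-Menten rate equation, solved for time
    return (p + km * np.log(S0 / (S0 - p))) / vmax

p_obs = np.array([10.0, 25.0, 45.0, 60.0, 75.0, 85.0])   # product formed
t_obs = np.array([1.1, 3.0, 6.2, 9.4, 13.8, 18.5])       # time (min)

(km, vmax), _ = curve_fit(time_of_product, p_obs, t_obs, p0=(50.0, 10.0))
```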
An Empirical Fitting Method for Type Ia Supernova Light Curves: A Case Study of SN 2011fe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, WeiKang; Filippenko, Alexei V., E-mail: zwk@astro.berkeley.edu
We present a new empirical fitting method for the optical light curves of Type Ia supernovae (SNe Ia). We find that a variant broken-power-law function provides a good fit, with the simple assumption that the optical emission is approximately the blackbody emission of the expanding fireball. This function is mathematically analytic and is derived directly from the photospheric velocity evolution. When deriving the function, we assume that both the blackbody temperature and photospheric velocity are constant, but the final function is able to accommodate these changes during the fitting procedure. Applying it to the case study of SN 2011fe gives a surprisingly good fit that can describe the light curves from the first-light time to a few weeks after peak brightness, as well as over a large range of fluxes (∼5 mag, and even ∼7 mag in the g band). Since SNe Ia share similar light-curve shapes, this fitting method has the potential to fit most other SNe Ia and characterize their properties in large statistical samples such as those already gathered and in the near future as new facilities become available.
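The published function has its own parameterization, but a generic smoothly broken power law from the same family illustrates the idea: an early rise t^a1 bending to t^a2 around a break time, with a sharpness parameter controlling the transition. All parameter values below are hypothetical.

```python
import numpy as np

def smoothly_broken_power_law(t, amp, t_break, a1, a2, s):
    """Behaves as t**a1 for t << t_break and t**a2 for t >> t_break (s > 0)."""
    x = t / t_break
    return amp * x ** a1 * (1.0 + x ** s) ** ((a2 - a1) / s)

t = np.linspace(1.0, 40.0, 100)  # days since first light (hypothetical)
flux = smoothly_broken_power_law(t, 1.0, 18.0, 2.0, -2.0, 4.0)
```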
Kholeif, S A
2001-06-01
A new method belonging to the differential category for determining end points from potentiometric titration curves is presented. It uses a preprocessing step to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis is covered. The new method is generally applied to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves compatible with the equivalence-point category of methods, such as Gran or Fortuin, are also compared with the new method.
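The analytic step the abstract refers to, locating the extremum of a parabola through three points, is the standard inverse parabolic interpolation formula; a minimal sketch with hypothetical derivative values near the inflection follows.

```python
def parabola_extremum(x1, y1, x2, y2, x3, y3):
    """Abscissa of the extremum of the parabola through three points
    (the analytic solution used in inverse parabolic interpolation)."""
    num = (x2 - x1) ** 2 * (y2 - y3) - (x2 - x3) ** 2 * (y2 - y1)
    den = (x2 - x1) * (y2 - y3) - (x2 - x3) * (y2 - y1)
    return x2 - 0.5 * num / den

# hypothetical first-derivative estimates around a titration inflection
end_point_volume = parabola_extremum(9.8, 41.0, 10.0, 55.0, 10.2, 47.0)
```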
The prediction of acoustical particle motion using an efficient polynomial curve fit procedure
NASA Technical Reports Server (NTRS)
Marshall, S. E.; Bernhard, R.
1984-01-01
A procedure is examined whereby the acoustic modal parameters, natural frequencies and mode shapes, in the cavities of transportation vehicles are determined experimentally. The acoustic mode shapes are described in terms of the particle motion. The acoustic modal analysis procedure is tailored to existing minicomputer-based spectral analysis systems.
NASA Astrophysics Data System (ADS)
Khondok, Piyoros; Sakulkalavek, Aparporn; Suwansukho, Kajpanya
2018-03-01
Simplified and powerful image processing procedures are proposed to separate paddy of the KHAW DOK MALI 105 (Thai jasmine rice) variety from paddy of the RD6 sticky rice variety. The procedures consist of image thresholding, image chain coding and curve fitting using a polynomial function. From the fitting, three parameters of each variety (perimeter, area, and eccentricity) were calculated. Finally, the overall parameters were determined by using principal component analysis. The results show that these procedures can effectively separate the two varieties.
Miao, Zewei; Xu, Ming; Lathrop, Richard G; Wang, Yufei
2009-02-01
A review of the literature revealed that a variety of methods are currently used for fitting net assimilation of CO2-chloroplastic CO2 concentration (A-Cc) curves, resulting in considerable differences in estimating the A-Cc parameters [including maximum ribulose 1,5-bisphosphate carboxylase/oxygenase (Rubisco) carboxylation rate (Vcmax), potential light saturated electron transport rate (Jmax), leaf dark respiration in the light (Rd), mesophyll conductance (gm) and triose-phosphate utilization (TPU)]. In this paper, we examined the impacts of fitting methods on the estimations of Vcmax, Jmax, TPU, Rd and gm using grid search and non-linear fitting techniques. Our results suggested that the fitting methods significantly affected the predictions of Rubisco-limited (Ac), ribulose 1,5-bisphosphate-limited (Aj) and TPU-limited (Ap) curves and leaf photosynthesis velocities because of the inconsistent estimates of Vcmax, Jmax, TPU, Rd and gm, but they barely influenced the Jmax : Vcmax, Vcmax : Rd and Jmax : TPU ratios. In terms of fitting accuracy, simplicity of fitting procedures and sample size requirements, we recommend combining grid search and non-linear techniques to directly and simultaneously fit Vcmax, Jmax, TPU, Rd and gm with the whole A-Cc curve, in contrast to the conventional method, which fits Vcmax, Rd or gm first and then solves for Vcmax, Jmax and/or TPU with Vcmax, Rd and/or gm held as constants.
NASA Technical Reports Server (NTRS)
Rodriguez, Pedro I.
1986-01-01
A computer implementation of Prony's method of curve fitting by exponential functions is presented. The method, although more than one hundred years old, has not been utilized to its fullest capabilities due to the restriction that the time range must be given in equal increments in order to obtain the best curve fit for a given set of data. The procedure used in this paper utilizes the 3-dimensional capabilities of the Interactive Graphics Design System (I.G.D.S.) in order to obtain the equal time increments. The resultant information is then input into a computer program that solves directly for the exponential constants yielding the best curve fit. Once the exponential constants are known, a simple least squares solution can be applied to obtain the final form of the equation.
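For reference, the classical Prony procedure on equally spaced samples reduces to two linear solves and one polynomial root-finding step; a compact NumPy sketch follows (the sample data are hypothetical).

```python
import numpy as np

def prony(y, dt, m):
    """Fit y(t) ~ sum_j a_j * exp(s_j * t) to n equally spaced samples."""
    n = len(y)
    # 1) linear prediction: y[k+m] = -(c0*y[k] + ... + c(m-1)*y[k+m-1])
    A = np.column_stack([y[i:n - m + i] for i in range(m)])
    c = np.linalg.lstsq(A, -y[m:], rcond=None)[0]
    # 2) roots of z**m + c(m-1)*z**(m-1) + ... + c0 give the exponents
    z = np.roots(np.r_[1.0, c[::-1]])
    s = np.log(z.astype(complex)) / dt
    # 3) linear least squares for the amplitudes
    V = np.exp(np.outer(dt * np.arange(n), s))
    a = np.linalg.lstsq(V, y.astype(complex), rcond=None)[0]
    return a, s

t = 0.05 * np.arange(64)
a, s = prony(2.0 * np.exp(-1.3 * t) + 0.7 * np.exp(-0.2 * t), 0.05, 2)
```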
Multimodal determination of Rayleigh dispersion and attenuation curves using the circle fit method
NASA Astrophysics Data System (ADS)
Verachtert, R.; Lombaert, G.; Degrande, G.
2018-03-01
This paper introduces the circle fit method for the determination of multi-modal Rayleigh dispersion and attenuation curves as part of a Multichannel Analysis of Surface Waves (MASW) experiment. The wave field is transformed to the frequency-wavenumber (fk) domain using a discretized Hankel transform. In a Nyquist plot of the fk-spectrum, displaying the imaginary part against the real part, the Rayleigh wave modes correspond to circles. The experimental Rayleigh dispersion and attenuation curves are derived from the angular sweep of the central angle of these circles. The method can also be applied to the analytical fk-spectrum of the Green's function of a layered half-space in order to compute dispersion and attenuation curves, as an alternative to solving an eigenvalue problem. A MASW experiment is subsequently simulated for a site with a regular velocity profile and a site with a soft layer trapped between two stiffer layers. The performance of the circle fit method to determine the dispersion and attenuation curves is compared with the peak picking method and the half-power bandwidth method. The circle fit method is found to be the most accurate and robust method for the determination of the dispersion curves. When determining attenuation curves, the circle fit method and half-power bandwidth method are accurate if the mode exhibits a sharp peak in the fk-spectrum. Furthermore, simulated and theoretical attenuation curves determined with the circle fit method agree very well. A similar correspondence is not obtained when using the half-power bandwidth method. Finally, the circle fit method is applied to measurement data obtained for a MASW experiment at a site in Heverlee, Belgium. In order to validate the soil profile obtained from the inversion procedure, force-velocity transfer functions were computed and found in good correspondence with the experimental transfer functions, especially in the frequency range between 5 and 80 Hz.
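The circle-fitting step itself can be done with a simple algebraic least-squares (Kasa-type) fit to the Nyquist-plane samples of the fk-spectrum; this sketch shows only that step, not the authors' full angular-sweep analysis, and uses hypothetical complex samples.

```python
import numpy as np

def circle_fit(z):
    """Least-squares circle through complex samples z: x**2 + y**2 = c0*x + c1*y + c2."""
    x, y = z.real, z.imag
    A = np.column_stack([x, y, np.ones_like(x)])
    c0, c1, c2 = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)[0]
    xc, yc = 0.5 * c0, 0.5 * c1
    radius = np.sqrt(c2 + xc ** 2 + yc ** 2)
    return xc, yc, radius

# hypothetical fk-spectrum samples around a Rayleigh mode
phi = np.linspace(0.2, 2.8, 25)
z = (1.0 + 0.5 * np.exp(1j * phi)) + 0.01 * np.random.default_rng(1).normal(size=25)
xc, yc, r = circle_fit(z)
sweep = np.unwrap(np.angle((z.real - xc) + 1j * (z.imag - yc)))  # central angles
```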
On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions.
López, S; France, J; Odongo, N E; McBride, R A; Kebreab, E; AlZahal, O; McBride, B W; Dijkstra, J
2015-04-01
Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records corresponding to 122 first, 99 second, and 92 third parity individual lactation curves. The functions were fitted using nonlinear regression procedures, and their performance was assessed using goodness-of-fit statistics (coefficient of determination, residual mean squares, Akaike information criterion, and the correlation and concordance coefficients between observed and adjusted milk yields at several days in milk). Overall, all the growth functions evaluated showed an acceptable fit to the cumulative milk production curves, with the Richards equation ranking first (smallest Akaike information criterion) followed by the Morgan equation. Differences among the functions in their goodness-of-fit were enlarged when fitted to average curves by parity, where the sigmoidal functions with a variable point of inflection (Richards and Morgan) outperformed the other 4 equations. All the functions provided satisfactory predictions of milk yield (calculated from the first derivative of the functions) at different lactation stages, from early to late lactation. The Richards and Morgan equations provided the most accurate estimates of peak yield and total milk production per 305-d lactation, whereas the least accurate estimates were obtained with the logistic equation. In conclusion, classical growth functions (especially sigmoidal functions with a variable point of inflection) proved to be feasible alternatives to fit cumulative milk production curves of dairy cows, resulting in suitable statistical performance and accurate estimates of lactation traits.
Focusing of light through turbid media by curve fitting optimization
NASA Astrophysics Data System (ADS)
Gong, Changmei; Wu, Tengfei; Liu, Jietao; Li, Huijuan; Shao, Xiaopeng; Zhang, Jianqi
2016-12-01
The construction of wavefront phase plays a critical role in focusing light through turbid media. We introduce the curve fitting algorithm (CFA) into the feedback control procedure for wavefront optimization. Unlike the existing continuous sequential algorithm (CSA), the CFA locates the optimal phase by fitting a curve to the measured signals. Simulation results show that, similar to the genetic algorithm (GA), the proposed CFA technique is far less susceptible to the experimental noise than the CSA. Furthermore, only three measurements of feedback signals are enough for CFA to fit the optimal phase while obtaining a higher focal intensity than the CSA and the GA, dramatically shortening the optimization time by a factor of 3 compared with the CSA and the GA. The proposed CFA approach can be applied to enhance the focus intensity and boost the focusing speed in the fields of biological imaging, particle trapping, laser therapy, and so on, and might help to focus light through dynamic turbid media.
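If the feedback intensity is assumed to respond cosinusoidally to the phase of the test segment, I(phi) = a0 + a1*cos(phi - phi_opt), then three measurements at equally spaced phases determine phi_opt in closed form, which matches the three-measurement claim; the cosine response is a standard wavefront-shaping assumption, not a detail taken from the paper.

```python
import numpy as np

PHASES = np.array([0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0])  # test phases

def optimal_phase(intensities):
    """Closed-form fit of I(phi) = a0 + a1*cos(phi - phi_opt) to three samples."""
    s = np.dot(intensities, np.sin(PHASES))
    c = np.dot(intensities, np.cos(PHASES))
    return np.arctan2(s, c)  # phase that maximizes the focal signal

phi_opt = optimal_phase(np.array([0.80, 1.30, 0.95]))  # hypothetical readings
```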
Enhancements of Bayesian Blocks; Application to Large Light Curve Databases
NASA Technical Reports Server (NTRS)
Scargle, Jeff
2015-01-01
Bayesian Blocks are optimal piecewise linear representations (step-function fits) of light curves. The simple algorithm implementing this idea, using dynamic programming, has been extended to include more data modes and fitness metrics, multivariate analysis, and data on the circle (Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations, Scargle, Norris, Jackson and Chiang 2013, ApJ, 764, 167), as well as new results on background subtraction and refinement of the procedure for precise timing of transient events in sparse data. Example demonstrations will include exploratory analysis of the Kepler light curve archive in a search for "star-tickling" signals from extraterrestrial civilizations. (The Cepheid Galactic Internet, Learned, Kudritzki, Pakvasa, and Zee, 2008, arXiv:0809.0339; Walkowicz et al., in progress).
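The 2013 algorithm the abstract cites is available off the shelf; a minimal usage sketch with hypothetical event times is below (astropy's bayesian_blocks implements the Scargle et al. fitness functions).

```python
import numpy as np
from astropy.stats import bayesian_blocks

t = np.sort(np.random.default_rng(0).uniform(0.0, 100.0, 500))  # event times
edges = bayesian_blocks(t, fitness="events", p0=0.01)  # optimal block boundaries
```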
ERIC Educational Resources Information Center
Lazar, Ann A.; Zerbe, Gary O.
2011-01-01
Researchers often compare the relationship between an outcome and covariate for two or more groups by evaluating whether the fitted regression curves differ significantly. When they do, researchers need to determine the "significance region," or the values of the covariate where the curves significantly differ. In analysis of covariance (ANCOVA),…
Li, Dong-Sheng; Xu, Hui-Mian; Han, Chun-Qi; Li, Ya-Ming
2010-01-01
AIM: To determine the effect of three digestive tract reconstruction procedures on pouch function, after radical surgery undertaken because of gastric cancer, as assessed by radionuclide dynamic imaging. METHODS: As a measure of reservoir function, the emptying time of the gastric substitute was evaluated using a solid test meal labeled with technetium-99m (99mTc). Immediately after the meal, the patient was placed in front of a γ camera in a supine position and the radioactivity was measured over the whole abdomen every minute. A frame image was obtained. The emptying sequences were recorded by the microprocessor and then stored on a computer disk. Using a computer processing system, the half-emptying actual curve and the fitting curve of food containing isotope in the detected region were depicted, and the half-emptying actual curves of the three reconstruction procedures were directly compared. RESULTS: Of the three reconstruction procedures, the half-emptying time of food containing isotope in the Dual Braun type esophagojejunal anastomosis procedure (51.86 ± 6.43 min) was far closer to normal, significantly better than that of the proximal gastrectomy orthotopic reconstruction (30.07 ± 15.77 min, P = 0.002) and P type esophagojejunal anastomosis (27.88 ± 6.07 min, P = 0.001) methods. The half-emptying actual curve and fitting curve for the Dual Braun type esophagojejunal anastomosis were fairly similar, while those of the proximal gastrectomy orthotopic reconstruction and P type esophagojejunal anastomosis were obviously separated, which indicated poor food conservation in the reconstructed pouches. CONCLUSION: Dual Braun type esophagojejunal anastomosis is the most useful of the three procedures for improving food accommodation in patients with a pouch and can retard evacuation of solid food from the reconstructed pouch. PMID:20238408
Structural-Vibration-Response Data Analysis
NASA Technical Reports Server (NTRS)
Smith, W. R.; Hechenlaible, R. N.; Perez, R. C.
1983-01-01
Computer program developed as structural-vibration-response data analysis tool for use in dynamic testing of Space Shuttle. Program provides fast and efficient time-domain least-squares curve-fitting procedure for reducing transient response data to obtain structural model frequencies and dampings from free-decay records. Procedure simultaneously identifies frequencies, damping values, and participation factors for noisy multiple-response records.
ERIC Educational Resources Information Center
Campo, Antonio; Rodriguez, Franklin
1998-01-01
Presents two alternative computational procedures for solving the modified Bessel equation of zero order: the Frobenius method, and the power series method coupled with a curve fit. Students in heat transfer courses can benefit from these alternative procedures; a course on ordinary differential equations is the only mathematical background that…
Study on peak shape fitting method in radon progeny measurement.
Yang, Jinmin; Zhang, Lei; Abdumomin, Kadir; Tang, Yushi; Guo, Qiuju
2015-11-01
Alpha spectrum measurement is one of the most important methods to measure radon progeny concentration in the environment. However, the accuracy of this method is affected by peak tailing due to the energy losses of alpha particles. This article presents a peak shape fitting method that can overcome the peak tailing problem in most situations. On a typical measured alpha spectrum curve, consecutive peaks overlap even when their energies are not close to each other, and it is difficult to calculate the exact count of each peak. The peak shape fitting method uses a combination of Gaussian and exponential functions, which can depict the features of those peaks, to fit the measured curve. It provides the net counts of each peak explicitly, which were used in the Kerr calculation procedure for radon progeny concentration measurement. The results show that the fitted curve agrees well with the measured curve, and the influence of peak tailing is reduced. The method was further validated by the agreement between radon equilibrium equivalent concentrations based on this method and the measured values of some commercial radon monitors, such as EQF3220 and WLx. In addition, this method improves the accuracy of individual radon progeny concentration measurement. Especially for the ²¹⁸Po peak, after eliminating the peak tailing influence, the calculated ²¹⁸Po concentration was reduced by 21%.
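One common realization of a "Gaussian plus exponential tail" alpha-peak shape is the exponentially modified Gaussian with the tail on the low-energy side; the form below is a sketch under that assumption (the article's exact composition may differ), and overlapping peaks such as ²¹⁸Po and ²¹⁴Po would be fitted as a sum of such terms.

```python
import numpy as np
from scipy.special import erfc

def alpha_peak(e, area, mu, sigma, tau):
    """Gaussian of width sigma convolved with a low-energy exponential tail
    of scale tau; integrates to 'area'."""
    z = ((e - mu) / sigma + sigma / tau) / np.sqrt(2.0)
    return (area / (2.0 * tau)) * np.exp((e - mu) / tau
                                         + sigma ** 2 / (2.0 * tau ** 2)) * erfc(z)
```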
Possible Transit Timing Variations of the TrES-3 Planetary System
NASA Astrophysics Data System (ADS)
Jiang, Ing-Guey; Yeh, Li-Chin; Thakur, Parijat; Wu, Yu-Ting; Chien, Ping; Lin, Yi-Ling; Chen, Hong-Yu; Hu, Juei-Hwa; Sun, Zhao; Ji, Jianghui
2013-03-01
Five newly observed transit light curves of the TrES-3 planetary system are presented. Together with other light-curve data from the literature, 23 transit light curves in total, which cover an overall timescale of 911 epochs, have been analyzed through a standard procedure. From these observational data, the system's orbital parameters are determined and possible transit timing variations (TTVs) are investigated. Given that a null TTV produces a fit with reduced χ² = 1.52, our results agree with previous work that TTVs might not exist in these data. However, a one-frequency oscillating TTV model, giving a fit with reduced χ² = 0.93, does possess a statistically higher probability. It is thus concluded that future observations and dynamical simulations for this planetary system will be very important.
NASA Astrophysics Data System (ADS)
Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde
2006-03-01
European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However, this is time-consuming and has large inter-observer error. To overcome these problems a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting the output is not defined. This study evaluates methods of determining threshold contrast from the program, and compares these to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding, (B) psychometric curve fitting, (C) smoothing and interpolation, and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility, with a standard error in threshold contrast of 18.1 ± 0.7%. This was reduced to 8.4% by using a contrast-detail curve fitting procedure. Method (D) had the best reproducibility, with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of 3 human observers had an error of 4.4%, reduced to 2.9% by curve fitting. All automatic methods led to threshold contrasts that were lower than for humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 ± 0.04 (SEM) at 0.1 mm and 1.82 ± 0.06 at 0.25 mm for method (D). There were good correlations between the threshold contrast determined by humans and the automated methods.
Bartel, Thomas W.; Yaniv, Simone L.
1997-01-01
The 60 min creep data from National Type Evaluation Procedure (NTEP) tests performed at the National Institute of Standards and Technology (NIST) on 65 load cells have been analyzed in order to compare their creep and creep recovery responses, and to compare the 60 min creep with creep over shorter time periods. To facilitate this comparison the data were fitted to a multiple-term exponential equation, which adequately describes the creep and creep recovery responses of load cells. The use of such a curve fit reduces the effect of the random error in the indicator readings on the calculated values of the load cell creep. Examination of the fitted curves shows that the creep recovery responses, after inversion by a change in sign, are generally similar in shape to the creep response, but smaller in magnitude. The average ratio of the absolute value of the maximum creep recovery to the maximum creep is 0.86; however, no reliable correlation between creep and creep recovery can be drawn from the data. The fitted curves were also used to compare the 60 min creep of the NTEP analysis with the 30 min creep and other parameters calculated according to the Organisation Internationale de Métrologie Légale (OIML) R 60 analysis. The average ratio of the 30 min creep value to the 60 min value is 0.84. The OIML class C creep tolerance is less than 0.5 of the NTEP tolerance for classes III and III L. PMID:27805151
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of the lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, the Wood, Dhanoa and Sikka mixed models provided the best fit of the lactation curve for FPR in third-parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for the Dijkstra model in the third lactation, under-predicted the test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
NASA Astrophysics Data System (ADS)
Parente, Mario; Makarewicz, Heather D.; Bishop, Janice L.
2011-04-01
This study advances curve-fitting modeling of absorption bands of reflectance spectra and applies this new model to spectra of Martian meteorites ALH 84001 and EETA 79001 and data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). This study also details a recently introduced automated parameter initialization technique. We assess the performance of this automated procedure by comparing it to the currently available initialization method and perform a sensitivity analysis of the fit results to variation in initial guesses. We explore the issues related to the removal of the continuum, offer guidelines for continuum removal when modeling the absorptions, and explore different continuum-removal techniques. We further evaluate the suitability of curve fitting techniques using Gaussians/Modified Gaussians to decompose spectra into individual end-member bands. We show that nonlinear least squares techniques such as the Levenberg-Marquardt algorithm achieve comparable results to the MGM model (Sunshine and Pieters, 1993; Sunshine et al., 1990) for meteorite spectra. Finally we use Gaussian modeling to fit CRISM spectra of pyroxene and olivine-rich terrains on Mars. Analysis of CRISM spectra of two regions shows that the pyroxene-dominated rock spectra measured at Juventae Chasma were modeled well with low-Ca pyroxene, while the pyroxene-rich spectra acquired at Libya Montes required both low-Ca and high-Ca pyroxene for a good fit.
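A bare-bones version of the workflow, straight-line continuum removal followed by a multi-Gaussian decomposition, is sketched below; anchoring the continuum at the segment end points and using plain Gaussians in wavelength (rather than modified Gaussians in energy, as in the MGM) are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def continuum_removed(wavelength, reflectance):
    # straight-line continuum anchored at the first and last channels
    slope = (reflectance[-1] - reflectance[0]) / (wavelength[-1] - wavelength[0])
    continuum = reflectance[0] + slope * (wavelength - wavelength[0])
    return reflectance / continuum

def two_band_model(w, a1, c1, s1, a2, c2, s2):
    # two negative Gaussians model absorptions in continuum-removed reflectance
    return (1.0 - a1 * np.exp(-0.5 * ((w - c1) / s1) ** 2)
                - a2 * np.exp(-0.5 * ((w - c2) / s2) ** 2))

# fit with: curve_fit(two_band_model, w, continuum_removed(w, r), p0=...)
```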
NASA Technical Reports Server (NTRS)
Welker, Jean Edward
1991-01-01
Since the invention of maximum and minimum thermometers in the 18th century, diurnal temperature extrema have been recorded for air worldwide. At some stations, these extrema temperatures were also collected at various soil depths, and the behavior of these temperatures at a 10-cm depth at the Tifton Experimental Station in Georgia is presented. After a precipitation cooling event, the diurnal temperature maxima drop to a minimum value and then start a recovery to higher values (similar to thermal inertia). This recovery represents a measure of response to heating as a function of soil moisture and soil properties. Eight different curves, including power and exponential forms, were fitted to a wide variety of data sets for different stations and years. Both power and exponential curve fits were consistently found to be statistically accurate least-squares representations of the raw data recovery values. The predictive procedures used here were multivariate regression analyses, which are applicable to soils at a variety of depths besides the 10-cm depth presented.
NASA Astrophysics Data System (ADS)
Kreyca, J. F.; Falahati, A.; Kozeschnik, E.
2016-03-01
For industry, the mechanical properties of a material in the form of flow curves are essential input data for finite element simulations. Current practice is to obtain flow curves experimentally and to apply fitting procedures to obtain constitutive equations that describe the material response to external loading as a function of temperature and strain rate. Unfortunately, the experimental procedure for characterizing flow curves is complex and expensive, which is why the prediction of flow curves by computer modelling becomes increasingly important. In the present work, we introduce a state-parameter-based model that is capable of predicting the flow curves of an A6061 aluminium alloy in different heat-treatment conditions. The model is implemented in the thermo-kinetic software package MatCalc and takes into account precipitation kinetics, subgrain formation, dynamic recovery by spontaneous annihilation and dislocation climb. To validate the simulation results, a series of compression tests is performed on the thermo-mechanical simulator Gleeble 1500.
Equations for Automotive-Transmission Performance
NASA Technical Reports Server (NTRS)
Chazanoff, S.; Aston, M. B.; Chapman, C. P.
1984-01-01
A curve-fitting procedure ensures high confidence levels. A three-dimensional plot represents the performance of a small automatic transmission coasting in second gear. In the equation for the plot, PL is power loss, S is speed, and T is torque. The equations are applicable to manual and automatic transmissions over a wide range of speed, torque, and efficiency.
Statistically generated weighted curve fit of residual functions for modal analysis of structures
NASA Technical Reports Server (NTRS)
Bookout, P. S.
1995-01-01
A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to the present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second order polynomial curve fit is required to obtain the residual flexibility term. A weighting function of the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
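The core numerical step, down-weighting ragged regions with a variance-derived weight before a quadratic fit, can be sketched in a few lines; the five-point variance window is an illustrative choice, and numpy's polyfit expects weights proportional to 1/sigma.

```python
import numpy as np

def weighted_quadratic_fit(freq, residual, half_window=2):
    """Weight each point by the inverse local scatter of its neighbours,
    then fit a second-order polynomial to the residual function."""
    local_var = np.array([np.var(residual[max(0, i - half_window):i + half_window + 1])
                          for i in range(len(residual))])
    w = 1.0 / np.sqrt(local_var + 1e-12)  # polyfit weights ~ 1/sigma
    return np.polyfit(freq, residual, 2, w=w)
```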
Waveform fitting and geometry analysis for full-waveform lidar feature extraction
NASA Astrophysics Data System (ADS)
Tsai, Fuan; Lai, Jhe-Syuan; Cheng, Yi-Hsiu
2016-10-01
This paper presents a systematic approach that integrates spline curve fitting and geometry analysis to extract full-waveform LiDAR features for land-cover classification. The cubic smoothing spline algorithm is used to fit the waveform curve of the received LiDAR signals. After that, the local peak locations of the waveform curve are detected using a second derivative method. According to the detected local peak locations, commonly used full-waveform features such as full width at half maximum (FWHM) and amplitude can then be obtained. In addition, the number of peaks, time difference between the first and last peaks, and the average amplitude are also considered as features of LiDAR waveforms with multiple returns. Based on the waveform geometry, dynamic time-warping (DTW) is applied to measure the waveform similarity. The sum of the absolute amplitude differences that remain after time-warping can be used as a similarity feature in a classification procedure. An airborne full-waveform LiDAR data set was used to test the performance of the developed feature extraction method for land-cover classification. Experimental results indicate that the developed spline curve-fitting algorithm and geometry analysis can extract helpful full-waveform LiDAR features to produce better land-cover classification than conventional LiDAR data and feature extraction methods. In particular, the multiple-return features and the dynamic time-warping index can improve the classification results significantly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartman, J.S.; Gordon, R.L.; Lessor, D.L.
1981-08-01
Alternate measurement and data analysis procedures are discussed and compared for the application of reflective Nomarski differential interference contrast microscopy for the determination of surface slopes. The discussion includes the interpretation of a previously reported iterative procedure using the results of a detailed optical model and the presentation of a new procedure based on measured image intensity extrema. Surface slope determinations from these procedures are presented and compared with results from a previously reported curve fit analysis of image intensity data. The accuracy and advantages of the different procedures are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torello, David; Kim, Jin-Yeon; Qu, Jianmin
2015-03-31
This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β₁₁ is proposed based on a nonlinear least squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens, and a β₁₁(7075)/β₁₁(2024) ratio of 1.363 agrees well with previous literature and earlier work.
NASA Astrophysics Data System (ADS)
Ghnimi, Thouraya; Hassini, Lamine; Bagane, Mohamed
2016-12-01
The aim of this work is to determine the desorption isotherms and the drying kinetics of bay laurel leaves (Laurus nobilis L.). The desorption isotherms were performed at three temperature levels: 50, 60 and 70 °C and at water activity ranging from 0.057 to 0.88 using the static gravimetric method. Five sorption models were used to fit the desorption experimental isotherm data. It was found that the Kuhn model offers the best fit of the experimental moisture isotherms in the investigated ranges of temperature and water activity. The net isosteric heat of water desorption was evaluated using the Clausius-Clapeyron equation and was then best correlated to equilibrium moisture content by Tsami's empirical equation. Thin-layer convective drying curves of bay laurel leaves were obtained for temperatures of 45, 50, 60 and 70 °C, relative humidity of 5, 15, 30 and 45 % and air velocities of 1, 1.5 and 2 m/s. A non-linear Levenberg-Marquardt regression procedure was used to fit the drying curves with five semi-empirical mathematical models available in the literature, and R² and χ² were used to evaluate the goodness of fit of the models to the data. Based on the experimental drying curves, the drying characteristic curve (DCC) was established and fitted with a third-degree polynomial function. It was found that the Midilli-Kucuk model was the best semi-empirical model describing the thin-layer drying kinetics of bay laurel leaves. The effective moisture diffusivity and activation energy of bay laurel leaves were also identified.
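The best-performing drying model named in the abstract has a well-known four-parameter form, MR = a*exp(-k*t^n) + b*t; a minimal fit under that assumption, with hypothetical drying data and starting values, looks like this.

```python
import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    # Midilli-Kucuk thin-layer drying model
    return a * np.exp(-k * t ** n) + b * t

t = np.array([15.0, 30.0, 60.0, 120.0, 180.0, 240.0, 300.0])  # min (hypothetical)
mr = np.array([0.86, 0.72, 0.51, 0.27, 0.15, 0.08, 0.05])     # moisture ratio

popt, _ = curve_fit(midilli, t, mr, p0=(1.0, 0.01, 1.0, -1e-5), maxfev=5000)
```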
Modal analysis using a Fourier analyzer, curve-fitting, and modal tuning
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.; Chung, Y. T.
1981-01-01
The proposed modal test program differs from single-input methods in that preliminary data may be acquired using multiple inputs, and modal tuning procedures may be employed to define closely spaced frequency modes more accurately or to make use of frequency response functions (FRF's) which are based on several input locations. In some respects the proposed modal test program resembles earlier sine-sweep and sine-dwell testing in that broadband FRF's are acquired using several input locations, and tuning is employed to refine the modal parameter estimates. The major tasks performed in the proposed modal test program are outlined. Data acquisition and FFT processing, curve fitting, and modal tuning phases are described and examples are given to illustrate and evaluate them.
Oleson, Jacob J; Cavanaugh, Joseph E; McMurray, Bob; Brown, Grant
2015-01-01
In multiple fields of study, time series measured at high frequencies are used to estimate population curves that describe the temporal evolution of some characteristic of interest. These curves are typically nonlinear, and the deviations of each series from the corresponding curve are highly autocorrelated. In this scenario, we propose a procedure to compare the response curves for different groups at specific points in time. The method involves fitting the curves, performing potentially hundreds of serially correlated tests, and appropriately adjusting the overall alpha level of the tests. Our motivating application comes from psycholinguistics and the visual world paradigm. We describe how the proposed technique can be adapted to compare fixation curves within subjects as well as between groups. Our results lead to conclusions beyond the scope of previous analyses. PMID:26400088
Robotic partial nephrectomy - Evaluation of the impact of case mix on the procedural learning curve.
Roman, A; Ahmed, K; Challacombe, B
2016-05-01
Although robotic partial nephrectomy (RPN) is an emerging technique for the management of small renal masses, this approach is technically demanding. To date, there are limited data on the nature and progression of the learning curve (LC) in RPN. The aims were to analyse the impact of case mix on the RPN LC and to model the LC. The records of the first 100 RPNs performed at our institution by a single surgeon (B.C.) were analysed (June 2010-December 2013). Cases were split based on their Preoperative Aspects and Dimensions Used for an Anatomical (PADUA) score into the following groups: 6-7, 8-9 and >10. Using a split group (20 patients in each group) and incremental analysis, the mean, the curve of best fit and R² values were calculated for each group. Of 100 patients (F:28, M:72), the mean age was 56.4 ± 11.9 years. The numbers of patients in the PADUA score groups 6-7, 8-9 and >10 were 61, 32 and 7, respectively. An increase in the incidence of more complex cases throughout the cohort was evident within the 8-9 group (2010: 1 case, 2013: 16 cases). The learning process did not significantly affect the proxies used to assess surgical proficiency in this study (operative time and warm ischaemia time). Case difficulty is an important parameter that should be considered when evaluating procedural learning curves. There is not one well-fitting model that can be used to model the learning curve. With increasing experience, clinicians tend to operate on more difficult cases.
Scatter of X-rays on polished surfaces
NASA Technical Reports Server (NTRS)
Hasinger, G.
1981-01-01
In investigating the dispersion properties of telescope mirrors used in X-ray astronomy, the small-angle scattering characteristics of X-rays from statistically rough surfaces were examined. The mathematics and geometry of scattering theory are described. The measurement test assembly is described and results of measurements on samples of plane mirrors are given. Measurement results are evaluated. The direct beam, the convolution of the direct beam and the scattering halo, curve fitting by the method of least squares, various autocorrelation functions, results of the fitting procedure for small scattering angles, and deviations in the kernel of the scattering distribution are presented. A procedure for quality testing of mirror systems through diagnosis of rough surfaces is described.
The training and learning process of transseptal puncture using a modified technique.
Yao, Yan; Ding, Ligang; Chen, Wensheng; Guo, Jun; Bao, Jingru; Shi, Rui; Huang, Wen; Zhang, Shu; Wong, Tom
2013-12-01
As transseptal (TS) puncture has become an integral part of many types of cardiac interventional procedures, the technique, initially reported for measurement of left atrial pressure in the 1950s, continues to evolve. Our laboratory adopted a modified technique which uses only a coronary sinus catheter as the landmark for accomplishing TS punctures under fluoroscopy. The aim of this study was to prospectively evaluate the training and learning process for TS puncture guided by this modified technique. Guided by the training protocol, TS puncture was performed in 120 consecutive patients by three trainees without previous personal experience in TS catheterization and one experienced trainer as a controller. We analysed the following parameters: one-puncture success rate, total procedure time, fluoroscopic time, and radiation dose. The learning curve was analysed using curve-fitting methodology. The first attempt at TS crossing was successful in 74 (82%), a second attempt was successful in 11 (12%), and in 5 patients the interatrial septum could not be punctured. The average starting process time was 4.1 ± 0.8 min, and the estimated mean learning plateau was 1.2 ± 0.2 min. The estimated mean learning rate for process time was 25 ± 3 cases. Important aspects of the learning curve can be estimated by fitting inverse curves for TS puncture. The study demonstrated that this technique is a simple, safe, economical, and effective approach for learning TS puncture. Based on the statistical analysis, approximately 29 TS punctures will be needed for a trainee to pass the steepest area of the learning curve.
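The inverse-curve fit mentioned here has a simple closed form, time = plateau + rate/case; the sketch below generates synthetic times around the abstract's reported values (1.2 min plateau, 25-case learning rate) purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def inverse_curve(case, plateau, rate):
    # procedure time decays toward a plateau as case experience accumulates
    return plateau + rate / case

cases = np.arange(1.0, 41.0)
minutes = 1.2 + 25.0 / cases + np.random.default_rng(1).normal(0.0, 0.3, cases.size)

(plateau, rate), _ = curve_fit(inverse_curve, cases, minutes, p0=(1.0, 10.0))
```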
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1987-01-01
New, improved curve fits for the thermodynamic properties of equilibrium air have been developed. The curve fits are for pressure, speed of sound, temperature, entropy, enthalpy, density, and internal energy. These curve fits can be readily incorporated into new or existing computational fluid dynamics codes if real gas effects are desired. The curve fits are constructed from Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits. These improvements are due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25 000 K and densities from 10⁻⁷ to 10³ amagats.
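A Grabau-type transition function is essentially an exponential (logistic-like) switch that blends two polynomial branches so the piecewise surface stays smooth; the one-dimensional sketch below illustrates the device with hypothetical branch polynomials, not the actual air-property fits.

```python
import numpy as np

def transition_blend(x, poly_low, poly_high, x_t, width):
    """Blend two polynomial branches with a logistic transition centred at x_t."""
    g = 1.0 / (1.0 + np.exp(-(x - x_t) / width))
    return (1.0 - g) * np.polyval(poly_low, x) + g * np.polyval(poly_high, x)

x = np.linspace(0.0, 10.0, 50)
y = transition_blend(x, [0.1, 1.0], [0.4, -0.5], x_t=5.0, width=0.8)
```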
Global search in photoelectron diffraction structure determination using genetic algorithms
NASA Astrophysics Data System (ADS)
Viana, M. L.; Díez Muiño, R.; Soares, E. A.; Van Hove, M. A.; de Carvalho, V. E.
2007-11-01
Photoelectron diffraction (PED) is an experimental technique widely used to perform structural determinations of solid surfaces. Similarly to low-energy electron diffraction (LEED), structural determination by PED requires a fitting procedure between the experimental intensities and theoretical results obtained through simulations. Multiple scattering has been shown to be an effective approach for making such simulations. The quality of the fit can be quantified through the so-called R-factor. Therefore, the fitting procedure is, indeed, an R-factor minimization problem. However, the topography of the R-factor as a function of the structural and non-structural surface parameters to be determined is complex, and the task of finding the global minimum becomes tough, particularly for complex structures in which many parameters have to be adjusted. In this work we investigate the applicability of the genetic algorithm (GA) global optimization method to this problem. The GA is based on the evolution of species, and makes use of concepts such as crossover, elitism and mutation to perform the search. We show results of its application in the structural determination of three different systems: the Cu(111) surface through the use of energy-scanned experimental curves; the Ag(110)-c(2 × 2)-Sb system, in which a theory-theory fit was performed; and the Ag(111) surface for which angle-scanned experimental curves were used. We conclude that the GA is a highly efficient method to search for global minima in the optimization of the parameters that best fit the experimental photoelectron diffraction intensities to the theoretical ones.
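A compact sketch of the genetic-algorithm search described above, with elitism, uniform crossover, and mutation driving down a misfit; the quadratic "R-factor" stand-in and all population settings are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, -0.5, 2.0])           # "unknown" structural parameters

def r_factor(p):
    return np.sum((p - target) ** 2)          # placeholder misfit, not a real R-factor

pop = rng.uniform(-5.0, 5.0, size=(40, 3))    # initial random population
for generation in range(200):
    fitness = np.array([r_factor(p) for p in pop])
    elite = pop[np.argsort(fitness)[:8]]      # elitism: keep the 8 best
    parents = elite[rng.integers(0, 8, size=(32, 2))]
    mask = rng.random((32, 3)) < 0.5          # uniform crossover
    children = np.where(mask, parents[:, 0], parents[:, 1])
    mutate = rng.random((32, 1)) < 0.3        # mutate roughly 30% of children
    children = children + mutate * rng.normal(0.0, 0.1, children.shape)
    pop = np.vstack([elite, children])

best = pop[np.argmin([r_factor(p) for p in pop])]
print(best, r_factor(best))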
JMOSFET: A MOSFET parameter extractor with geometry-dependent terms
NASA Technical Reports Server (NTRS)
Buehler, M. G.; Moore, B. T.
1985-01-01
The parameters of the metal-oxide-silicon field-effect transistors (MOSFETs) included on the Combined Release and Radiation Effects Satellite (CRRES) test chips need to be extracted with a method that is simple but comprehensive enough for use in wafer acceptance, and sufficiently accurate for use in integrated circuits. A set of MOSFET parameter extraction procedures is developed that is directly linked to the MOSFET model equations and facilitates the use of simple, direct curve-fitting techniques. In addition, the major physical effects that affect MOSFET operation in the linear and saturation regions are included for devices fabricated in 1.2 to 3.0 micrometer CMOS technology. The fitting procedures were designed to establish single values for such parameters as threshold voltage and transconductance and to provide for slope matching between the linear and saturation regions of the MOSFET output current-voltage curves. Four different sizes of transistors that cover a rectangular region of the channel length-width plane are analyzed.
Drop shape visualization and contact angle measurement on curved surfaces.
Guilizzoni, Manfredo
2011-12-01
The shape and contact angles of drops on curved surfaces are experimentally investigated. Image processing, spline fitting and numerical integration are used to extract the drop contour in a number of cross-sections. The three-dimensional surfaces which describe the surface-air and drop-air interfaces can be visualized, and a simple procedure to determine the equilibrium contact angle starting from measurements on curved surfaces is proposed. Contact angles on flat surfaces serve as a reference term, and a procedure to measure them is proposed. This procedure is not as accurate as the axisymmetric drop shape analysis algorithms, but it has the advantage of requiring only a side view of the drop-surface couple and no further information. It can therefore be used also for fluids with unknown surface tension, and there is no need to measure the drop volume. Examples of application of the proposed techniques to distilled water drops on gemstones confirm that they can be useful for drop shape analysis and contact angle measurement on three-dimensional sculptured surfaces. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Mattei, G.; Ahluwalia, A.
2018-04-01
We introduce a new function, the apparent elastic modulus strain-rate spectrum E_app(ε̇), for the derivation of lumped-parameter constants for Generalized Maxwell (GM) linear viscoelastic models from stress-strain data obtained at various compressive strain rates ε̇. The E_app(ε̇) function was derived using the tangent modulus function obtained from the GM model stress-strain response to a constant-ε̇ input. Material viscoelastic parameters can be rapidly derived by fitting experimental E_app data obtained at different strain rates to the E_app(ε̇) function. This single-curve fitting returns viscoelastic constants similar to those of the original epsilon-dot method, which is based on a multi-curve global fitting procedure with shared parameters. Its low computational cost permits quick and robust identification of viscoelastic constants even when a large number of strain rates or replicates per strain rate are considered. This method is particularly suited for the analysis of bulk compression and nano-indentation data of soft (bio)materials.
Liquid-vapor relations for the system NaCl-H2O: summary of the P-T- x surface from 300° to 500°C
Bischoff, J.L.; Pitzer, Kenneth S.
1989-01-01
Experimental data on the vapor-liquid equilibrium relations of the system NaCl-H2O were compiled and compared in order to provide an improved estimate of the P-T-x surface between 300° and 500°C, a range over which the system changes from subcritical to critical behavior. Data for the three-phase curve (halite + liquid + vapor) and the NaCl-H2O critical curve were evaluated, and the best fits for these extrema were then used to guide selection of best fits for isothermal plots of the vapor-liquid region in between. Smoothing was carried out in an iterative procedure by replotting the best-fit data as isobars and then as isopleths until an internally consistent set of data was obtained. The results are presented in tabular form and will have application to theoretical modelling and to the understanding of two-phase behavior in saline geothermal systems.
NASA Astrophysics Data System (ADS)
Swensson, Richard G.; King, Jill L.; Good, Walter F.; Gur, David
2000-04-01
A constrained ROC formulation from probability summation is proposed for measuring observer performance in detecting abnormal findings on medical images. This assumes the observer's detection or rating decision on each image is determined by a latent variable that characterizes the specific finding (type and location) considered most likely to be a target abnormality. For positive cases, this 'maximum-suspicion' variable is assumed to be either the value for the actual target or for the most suspicious non-target finding, whichever is greater (more suspicious). Unlike the usual ROC formulation, this constrained formulation guarantees a 'well-behaved' ROC curve that always equals or exceeds chance-level decisions and cannot exhibit an upward 'hook.' Its estimated parameters specify the accuracy for separating positive from negative cases, and they also predict accuracy in locating or identifying the actual abnormal findings. The present maximum-likelihood procedure (runs on a PC with Windows 95 or NT) fits this constrained formulation to rating-ROC data using normal distributions with two free parameters. Fits of the conventional and constrained ROC formulations are compared for continuous and discrete-scale ratings of chest films in a variety of detection problems, both for localized lesions (nodules, rib fractures) and for diffuse abnormalities (interstitial disease, infiltrates or pneumothorax). The two fitted ROC curves are nearly identical unless the conventional ROC has an ill-behaved 'hook' below the constrained ROC.
Nonlinear Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Everhart, Joel L.; Badavi, Forooz F.
1989-01-01
Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, interactive curve-fitting routine based on description of quadratic expansion of the chi-square statistic. Utilizes nonlinear optimization algorithm calculating best statistically weighted values of parameters of fitting function so that chi-square is minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.
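A brief sketch of the weighted chi-square minimization such a routine performs, here via SciPy rather than the original FORTRAN 77 code; the exponential model and synthetic data are illustrative assumptions.

import numpy as np
from scipy.optimize import least_squares

def model(p, x):
    a, b = p
    return a * np.exp(b * x)

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x) + np.random.normal(0.0, 0.05, x.size)
sigma = np.full(x.size, 0.05)                    # measurement uncertainties

# minimize chi-square = sum(((y - f(x, p)) / sigma)^2) over parameters p
res = least_squares(lambda p: (y - model(p, x)) / sigma, x0=[1.0, 1.0])
chi_square = np.sum(res.fun ** 2)                # goodness-of-fit measure
print(res.x, chi_square)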
Rapid Inversion of Angular Deflection Data for Certain Axisymmetric Refractive Index Distributions
NASA Technical Reports Server (NTRS)
Rubinstein, R.; Greenberg, P. S.
1994-01-01
Certain functions useful for representing axisymmetric refractive-index distributions are shown to have exact solutions for the Abel transformation of the resulting angular deflection data. An advantage of this procedure over direct numerical Abel inversion is that least-squares curve fitting is a smoothing process, which reduces the noise sensitivity of the computation.
USDA-ARS?s Scientific Manuscript database
Despite considerable efforts in developing the curve-fitting protocol to evaluate the crystallinity index (CI) from X-ray diffraction (XRD) measurements, in its present state the XRD procedure can only provide a qualitative or semi-quantitative assessment of the amounts of crystalline or amorphous po...
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1986-01-01
New, improved curve fits for the thermodynamic properties of equilibrium air were developed. The curve fits are for p = p(e,ρ), a = a(e,ρ), T = T(e,ρ), s = s(e,ρ), T = T(p,ρ), h = h(p,ρ), ρ = ρ(p,s), e = e(p,s) and a = a(p,s). These curve fits can be readily incorporated into new or existing Computational Fluid Dynamics (CFD) codes if real-gas effects are desired. The curve fits were constructed using Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits appearing in NASA CR-2470. These improvements were due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10^-7 to 10^2 amagats (ρ/ρ0).
NASA Technical Reports Server (NTRS)
Cooper, D. B.; Yalabik, N.
1975-01-01
Approximation of noisy data in the plane by straight lines or elliptic or single-branch hyperbolic curve segments arises in pattern recognition, data compaction, and other problems. The efficient search for and approximation of data by such curves were examined. Recursive least-squares linear curve-fitting was used, and ellipses and hyperbolas are parameterized as quadratic functions in x and y. The error minimized by the algorithm is interpreted, and central processing unit (CPU) times for estimating parameters for fitting straight lines and quadratic curves were determined and compared. CPU time for data search was also determined for the case of straight line fitting. Quadratic curve fitting is shown to require about six times as much CPU time as does straight line fitting, and curves relating CPU time and fitting error were determined for straight line fitting. Results are derived on early sequential determination of whether or not the underlying curve is a straight line.
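A sketch of the linear least-squares conic fit described above: the curve is parameterized as a quadratic function in x and y, a·x² + b·xy + c·y² + d·x + e·y = 1, so the coefficients follow from a single linear solve; the noisy ellipse data are synthetic.

import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 50)
x = 3.0 * np.cos(t) + 0.1 * np.random.randn(t.size)   # noisy ellipse samples
y = 2.0 * np.sin(t) + 0.1 * np.random.randn(t.size)

A = np.column_stack([x**2, x*y, y**2, x, y])
coeffs, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
a, b, c, d, e = coeffs
# discriminant test: b^2 - 4ac < 0 for an ellipse, > 0 for a hyperbola
print(coeffs, b**2 - 4.0*a*c)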
2D Bayesian automated tilted-ring fitting of disc galaxies in large H I galaxy surveys: 2DBAT
NASA Astrophysics Data System (ADS)
Oh, Se-Heon; Staveley-Smith, Lister; Spekkens, Kristine; Kamphuis, Peter; Koribalski, Bärbel S.
2018-01-01
We present a novel algorithm based on a Bayesian method for 2D tilted-ring analysis of disc galaxy velocity fields. Compared to the conventional algorithms based on a chi-squared minimization procedure, this new Bayesian-based algorithm suffers less from local minima of the model parameters even with highly multimodal posterior distributions. Moreover, the Bayesian analysis, implemented via Markov Chain Monte Carlo sampling, only requires broad ranges of posterior distributions of the parameters, which makes the fitting procedure fully automated. This feature will be essential when performing kinematic analysis on the large number of resolved galaxies expected to be detected in neutral hydrogen (H I) surveys with the Square Kilometre Array and its pathfinders. The so-called 2D Bayesian Automated Tilted-ring fitter (2DBAT) implements Bayesian fits of 2D tilted-ring models in order to derive rotation curves of galaxies. We explore 2DBAT performance on (a) artificial H I data cubes built based on representative rotation curves of intermediate-mass and massive spiral galaxies, and (b) Australia Telescope Compact Array H I data from the Local Volume H I Survey. We find that 2DBAT works best for well-resolved galaxies with intermediate inclinations (20° < i < 70°), complementing 3D techniques better suited to modelling inclined galaxies.
NASA Astrophysics Data System (ADS)
Oh, Jun-Seok; Szili, Endre J.; Ogawa, Kotaro; Short, Robert D.; Ito, Masafumi; Furuta, Hiroshi; Hatta, Akimitsu
2018-01-01
Plasma-activated water (PAW) is receiving much attention in biomedical applications because of its reported potent bactericidal properties. Reactive oxygen and nitrogen species (RONS) that are generated in water upon plasma exposure are thought to be the key components in PAW that destroy bacterial and cancer cells. In addition to developing applications for PAW, it is also necessary to better understand the RONS chemistry in PAW in order to tailor PAW to achieve a specific biological response. With this in mind, we previously developed a UV-vis spectroscopy method using an automated curve fitting routine to quantify the changes in H2O2, NO2−, NO3− (the major long-lived RONS in PAW), and O2 concentrations. A major advantage of UV-vis is that it can take multiple measurements during plasma activation. We used the UV-vis procedure to accurately quantify the changes in the concentrations of these RONS and O2 in PAW. However, we had not yet provided an in-depth commentary on how we perform the curve fitting procedure or its implications. Therefore, in this study, we provide greater detail of how we use the curve fitting routine to derive the RONS and O2 concentrations in PAW. PAW was generated by treatment with a helium plasma jet. In addition, we employ UV-vis to study how the plasma jet exposure time and treatment distance affect the RONS chemistry and the amount of O2 dissolved in PAW. We show that the plasma jet exposure time principally affects the total RONS concentration but not the relative ratios of RONS, whereas the treatment distance affects both the total RONS concentration and the relative RONS concentrations.
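A minimal sketch of the kind of spectral decomposition such a routine performs: the measured absorbance is modeled, via Beer-Lambert, as a non-negative combination of reference spectra for the individual species. The Gaussian "reference spectra" below are placeholders, not measured extinction data, and the species assignment is assumed.

import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(200.0, 400.0, 201)    # nm

def band(mu, s):
    # placeholder absorption band, standing in for a reference spectrum
    return np.exp(-0.5 * ((wavelengths - mu) / s) ** 2)

refs = np.column_stack([band(240, 20), band(210, 10),
                        band(300, 15), band(250, 30)])  # H2O2, NO2-, NO3-, O2 (assumed)
true_c = np.array([0.5, 0.2, 0.1, 0.05])
measured = refs @ true_c + 0.001 * np.random.randn(wavelengths.size)

conc, residual = nnls(refs, measured)           # non-negative least squares
print(conc)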
Investigations Regarding Anesthesia during Hypovolemic Conditions.
1982-09-25
For each level of hemoglobin, the equation was "normalized" to a pH of 7.400 for a BE of zero and a PCO2 of 40.0 torr, Orr et al. (171...) ... the shifted BE values. Curve nomogram: using the equations resulting from the above curve-fitting procedure, we calculated the relationship between pH and log PCO2 for a given BE (i.e., pH = m_i log PCO2 + b_i). Solve dX/d(pH_ind) = 0 for pH_ind, where X = (pH_1 - pH_ind)^2 ...
Deletion Diagnostics for the Generalised Linear Mixed Model with independent random effects
Ganguli, B.; Roy, S. Sen; Naskar, M.; Malloy, E. J.; Eisen, E. A.
2015-01-01
The Generalised Linear Mixed Model (GLMM) is widely used for modelling environmental data. However, such data are prone to influential observations which can distort the estimated exposure-response curve particularly in regions of high exposure. Deletion diagnostics for iterative estimation schemes commonly derive the deleted estimates based on a single iteration of the full system holding certain pivotal quantities such as the information matrix to be constant. In this paper, we present an approximate formula for the deleted estimates and Cook’s distance for the GLMM which does not assume that the estimates of variance parameters are unaffected by deletion. The procedure allows the user to calculate standardised DFBETAs for mean as well as variance parameters. In certain cases, such as when using the GLMM as a device for smoothing, such residuals for the variance parameters are interesting in their own right. In general, the procedure leads to deleted estimates of mean parameters which are corrected for the effect of deletion on variance components as estimation of the two sets of parameters is interdependent. The probabilistic behaviour of these residuals is investigated and a simulation based procedure suggested for their standardisation. The method is used to identify influential individuals in an occupational cohort exposed to silica. The results show that failure to conduct post model fitting diagnostics for variance components can lead to erroneous conclusions about the fitted curve and unstable confidence intervals. PMID:26626135
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
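A sketch of the technique just described, assuming the decay model y = A·exp(B·t): a linear fit to log(y) supplies the nominal estimates, and Taylor-series linearization (a Gauss-Newton correction step) is applied repeatedly until the corrections fall below a preset criterion. Data and tolerances are illustrative.

import numpy as np

t = np.linspace(0.0, 5.0, 30)
y = 4.0 * np.exp(-0.8 * t) * (1.0 + 0.02 * np.random.randn(t.size))

# linear curve fit of log(y) gives the initial nominal estimates
B0, lnA0 = np.polyfit(t, np.log(y), 1)
p = np.array([np.exp(lnA0), B0])                 # [A, B]

for _ in range(10):                              # correction cycles
    A, B = p
    f = A * np.exp(B * t)
    J = np.column_stack([np.exp(B * t),          # df/dA
                         A * t * np.exp(B * t)]) # df/dB
    dp, *_ = np.linalg.lstsq(J, y - f, rcond=None)
    p += dp
    if np.max(np.abs(dp / p)) < 1e-8:            # predetermined criterion
        break
print(p)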
Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin
2018-04-20
An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012); doi:10.1117/1.JBO.17.10.107003] and phantom [J. Biomed. Opt. 19, 077002 (2014); doi:10.1117/1.JBO.19.7.077002] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method: (1) high computational intensity and (2) convergence to local minima. The parameter estimation procedure contained a novel initial estimation step to obtain an initial guess, which was used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table was used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the following fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing not only of DRS but also of related approaches such as near-infrared spectroscopy.
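A condensed sketch of the two-step idea: a coarse precomputed lookup table supplies the initial guess (step 1), which then seeds a local iterative fit (step 2). The forward model below is a simple stand-in, not a diffuse-reflectance model, and all parameter names are assumptions.

import numpy as np
from scipy.optimize import least_squares

def forward(p, x):
    # placeholder forward model with two parameters (e.g., absorption, thickness)
    mua, thickness = p
    return np.exp(-mua * x) * (1.0 - np.exp(-thickness * x))

x = np.linspace(0.1, 2.0, 50)
measured = forward((0.7, 1.3), x) + np.random.normal(0.0, 0.002, x.size)

# step 1: initial estimation by nearest neighbour in a lookup table
grid = [(a, b) for a in np.linspace(0.1, 2.0, 20)
               for b in np.linspace(0.5, 3.0, 20)]
table = np.array([forward(p, x) for p in grid])
p0 = grid[int(np.argmin(np.sum((table - measured) ** 2, axis=1)))]

# step 2: iterative fitting seeded by the lookup-table estimate
fit = least_squares(lambda p: forward(p, x) - measured, x0=p0)
print(fit.x)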
A method for evaluating models that use galaxy rotation curves to derive the density profiles
NASA Astrophysics Data System (ADS)
de Almeida, Álefe O. F.; Piattella, Oliver F.; Rodrigues, Davi C.
2016-11-01
There are some approaches, based either on General Relativity (GR) or on modified gravity, that use galaxy rotation curves to derive the matter density of the corresponding galaxy, and this procedure would indicate either a partial or a complete elimination of dark matter in galaxies. Here we review these approaches, clarify the difficulties of this inverted procedure, present a method for evaluating such approaches, and use it to test two specific approaches that are based on GR: the Cooperstock-Tieu (CT) and the Balasin-Grumiller (BG) approaches. Using this new method, we find that neither of the tested approaches can satisfactorily fit the observational data without dark matter. The CT approach results can be significantly improved if some dark matter is considered, while for the BG approach no usual dark matter halo can improve its results.
NASA Astrophysics Data System (ADS)
Rangel-Kuoppa, Victor-Tapio; Albor-Aguilera, María-de-Lourdes; Hérnandez-Vásquez, César; Flores-Márquez, José-Manuel; González-Trujillo, Miguel-Ángel; Contreras-Puente, Gerardo-Silverio
2018-04-01
A new proposal for the extraction of the shunt resistance (Rsh) and saturation current (Isat) from a current-voltage (I-V) measurement of a solar cell, within the one-diode model, is given. First, the Cheung method is extended to obtain the series resistance (Rs), the ideality factor (n) and an upper limit for Isat. In this article, which is Part 1 of two, two procedures are proposed to obtain fitting values for Rsh and Isat within some voltage range. These two procedures are applied to two simulated I-V curves (one in darkness and the other under illumination) to recover the known solar cell parameters Rsh, Rs, n, Isat and the light current Ilig, and to test the accuracy of the method. The method is compared with two different common parameter extraction methods. These three procedures are used and compared in Part 2 on the I-V curves of CdS-CdTe and CIGS-CdS solar cells.
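A sketch of the Cheung-type linearization underlying the series-resistance step: for a diode with series resistance, dV/d(ln I) = Rs·I + n·kT/q, so a straight-line fit of dV/d(ln I) against I yields Rs from the slope and the ideality factor n from the intercept. The synthetic I-V data are illustrative only.

import numpy as np

kT_q = 0.02585                        # thermal voltage at ~300 K (V)
n_true, Rs_true, Isat = 1.5, 2.0, 1e-9
I = np.logspace(-6, -2, 40)           # forward currents (A)
V = n_true * kT_q * np.log(I / Isat) + Rs_true * I

dV_dlnI = np.gradient(V, np.log(I))   # numerical derivative on a log grid
slope, intercept = np.polyfit(I, dV_dlnI, 1)
print("Rs =", slope, " n =", intercept / kT_q)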
Takehira, Rieko; Momose, Yasunori; Yamamura, Shigeo
2010-10-15
A pattern-fitting procedure using the X-ray diffraction pattern was applied to the quantitative analysis of a binary system of crystalline pharmaceuticals in tablets. Orthorhombic crystals of isoniazid (INH) and mannitol (MAN) were used for the analysis. Tablets were prepared under various compression pressures using a direct compression method with various compositions of INH and MAN. Assuming that the X-ray diffraction pattern of the INH-MAN system consists of diffraction intensities from the respective crystals, the observed diffraction intensities were fitted to an analytic expression based on X-ray diffraction theory and separated into two intensities from the INH and MAN crystals by a nonlinear least-squares procedure. After separation, the contents of INH were determined by using the optimized normalization constants for INH and MAN. A correction parameter including all the factors that are beyond experimental control was required for quantitative analysis without a calibration curve. The pattern-fitting procedure made it possible to determine crystalline phases in the range of 10-90% (w/w) INH content. Further, certain characteristics of the crystals in the tablets, such as the preferred orientation, size of crystallite, and lattice disorder, were determined simultaneously. This method can be applied to analyze compounds whose crystal structures are known. It is a potentially powerful tool for the quantitative phase analysis and characterization of crystals in tablets and powders using X-ray diffraction patterns. Copyright 2010 Elsevier B.V. All rights reserved.
Fitting Richards' curve to data of diverse origins
Johnson, D.H.; Sargeant, A.B.; Allen, S.H.
1975-01-01
Published techniques for fitting data to nonlinear growth curves are briefly reviewed; most techniques require knowledge of the shape of the curve. A flexible growth curve developed by Richards (1959) is discussed as an alternative when the shape is unknown. The shape of this curve is governed by a specific parameter which can be estimated from the data. We describe in detail the fitting of a diverse set of longitudinal and cross-sectional data to Richards' growth curve for the purpose of determining the age of red fox (Vulpes vulpes) pups on the basis of right hind foot length. The fitted curve is found suitable for pups less than approximately 80 days old. The curve is extrapolated to pre-natal growth and shown to be appropriate only for about 10 days prior to birth.
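A sketch of fitting Richards' flexible growth curve, assuming one common parameterization, y(t) = A·(1 + d·exp(-k·(t - t0)))^(-1/d), in which the shape parameter d is estimated from the data along with A, k and t0; the synthetic foot-length data and start values are illustrative.

import numpy as np
from scipy.optimize import curve_fit

def richards(t, A, k, t0, d):
    return A * (1.0 + d * np.exp(-k * (t - t0))) ** (-1.0 / d)

age = np.linspace(0.0, 80.0, 40)                    # days
foot = richards(age, 150.0, 0.08, 25.0, 0.5) \
       + np.random.normal(0.0, 2.0, age.size)       # synthetic lengths (mm)

params, _ = curve_fit(richards, age, foot,
                      p0=(150.0, 0.1, 20.0, 1.0), maxfev=10000)
print(params)   # estimated A, k, t0 and shape parameter d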
NASA Astrophysics Data System (ADS)
Yuste, S. B.; Abad, E.; Baumgaertner, A.
2016-07-01
We address the problem of diffusion on a comb whose teeth display varying lengths. Specifically, the length ℓ of each tooth is drawn from a probability distribution displaying power-law behavior at large ℓ, P(ℓ) ∼ ℓ^−(1+α) (α > 0). To start with, we focus on the computation of the anomalous diffusion coefficient for the subdiffusive motion along the backbone. This quantity is subsequently used as an input to compute concentration recovery curves mimicking fluorescence recovery after photobleaching experiments in comblike geometries such as spiny dendrites. Our method is based on the mean-field description provided by the well-tested continuous time random-walk approach for the random-comb model, and the obtained analytical result for the diffusion coefficient is confirmed by numerical simulations of a random walk with finite steps in time and space along the backbone and the teeth. We subsequently incorporate retardation effects arising from binding-unbinding kinetics into our model and obtain a scaling law characterizing the corresponding change in the diffusion coefficient. Finally, we show that recovery curves obtained with the help of the analytical expression for the anomalous diffusion coefficient cannot be fitted perfectly by a model based on scaled Brownian motion, i.e., a standard diffusion equation with a time-dependent diffusion coefficient. However, differences between the exact curves and such fits are small, thereby providing justification for the practical use of models relying on scaled Brownian motion as a fitting procedure for recovery curves arising from particle diffusion in comblike systems.
[Comparison among various software for LMS growth curve fitting methods].
Han, Lin; Wu, Wenhong; Wei, Qiuxia
2015-03-01
To explore methods of realizing skewness-median-coefficient of variation (LMS) growth curve fitting in different software packages, and to identify the optimal growth curve statistical method for grassroots child and adolescent health workers. Routine physical examination data on head circumference for normal infants aged 3, 6, 9 and 12 months in Baotou City were analyzed. The statistical packages SAS, R, STATA and SPSS were used to fit the LMS growth curves, and the results were evaluated in terms of user convenience, learning effort, user interface, display of results, and software updating and maintenance. The growth curve fitting results showed the same calculated outcomes, and each statistical package had its own advantages and disadvantages. With all evaluation aspects taken into consideration, R excelled the other packages in LMS growth curve fitting, and has the advantage for grassroots child and adolescent health workers.
On the convexity of ROC curves estimated from radiological test results.
Pesce, Lorenzo L; Metz, Charles E; Berbaum, Kevin S
2010-08-01
Although an ideal observer's receiver operating characteristic (ROC) curve must be convex — i.e., its slope must decrease monotonically — published fits to empirical data often display "hooks." Such fits sometimes are accepted on the basis of an argument that experiments are done with real, rather than ideal, observers. However, the fact that ideal observers must produce convex curves does not imply that convex curves describe only ideal observers. This article aims to identify the practical implications of nonconvex ROC curves and the conditions that can lead to empirical or fitted ROC curves that are not convex. This article views nonconvex ROC curves from historical, theoretical, and statistical perspectives, which we describe briefly. We then consider population ROC curves with various shapes and analyze the types of medical decisions that they imply. Finally, we describe how sampling variability and curve-fitting algorithms can produce ROC curve estimates that include hooks. We show that hooks in population ROC curves imply the use of an irrational decision strategy, even when the curve does not cross the chance line, and therefore usually are untenable in medical settings. Moreover, we sketch a simple approach to improve any nonconvex ROC curve by adding statistical variation to the decision process. Finally, we sketch how to test whether hooks present in ROC data are likely to have been caused by chance alone and how some hooked ROCs found in the literature can be easily explained as fitting artifacts or modeling issues. In general, ROC curve fits that show hooks should be looked on with suspicion unless other arguments justify their presence. 2010 AUR. Published by Elsevier Inc. All rights reserved.
On the convexity of ROC curves estimated from radiological test results
Pesce, Lorenzo L.; Metz, Charles E.; Berbaum, Kevin S.
2010-01-01
Rationale and Objectives Although an ideal observer’s receiver operating characteristic (ROC) curve must be convex — i.e., its slope must decrease monotonically — published fits to empirical data often display “hooks.” Such fits sometimes are accepted on the basis of an argument that experiments are done with real, rather than ideal, observers. However, the fact that ideal observers must produce convex curves does not imply that convex curves describe only ideal observers. This paper aims to identify the practical implications of non-convex ROC curves and the conditions that can lead to empirical and/or fitted ROC curves that are not convex. Materials and Methods This paper views non-convex ROC curves from historical, theoretical and statistical perspectives, which we describe briefly. We then consider population ROC curves with various shapes and analyze the types of medical decisions that they imply. Finally, we describe how sampling variability and curve-fitting algorithms can produce ROC curve estimates that include hooks. Results We show that hooks in population ROC curves imply the use of an irrational decision strategy, even when the curve doesn’t cross the chance line, and therefore usually are untenable in medical settings. Moreover, we sketch a simple approach to improve any non-convex ROC curve by adding statistical variation to the decision process. Finally, we sketch how to test whether hooks present in ROC data are likely to have been caused by chance alone and how some hooked ROCs found in the literature can be easily explained as fitting artifacts or modeling issues. Conclusion In general, ROC curve fits that show hooks should be looked upon with suspicion unless other arguments justify their presence. PMID:20599155
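A small sketch of the binormal "hook" discussed in both records above: the fitted binormal ROC is y = Φ(a + b·Φ⁻¹(x)), which is properly convex only when b = 1; the check below looks for a non-monotone slope. The parameter values are illustrative.

import numpy as np
from scipy.stats import norm

a, b = 1.2, 0.6                                # illustrative binormal parameters
fpf = np.linspace(1e-6, 1.0 - 1e-6, 1000)      # false-positive fraction grid
tpf = norm.cdf(a + b * norm.ppf(fpf))          # binormal ROC curve

slope = np.gradient(tpf, fpf)
proper = np.all(np.diff(slope) <= 1e-9)        # slope must decrease monotonically
print("no hook:", proper)                      # False here, since b != 1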
Curve fitting methods for solar radiation data modeling
NASA Astrophysics Data System (ADS)
Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder
2014-10-01
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting method will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
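A sketch of the two-term Gaussian fit named above as one of the best performers, applied to irradiance-like data, with RMSE and R² computed as goodness-of-fit statistics; the synthetic data and start values are illustrative.

import numpy as np
from scipy.optimize import curve_fit

def gauss2(x, a1, b1, c1, a2, b2, c2):
    return (a1 * np.exp(-((x - b1) / c1) ** 2)
            + a2 * np.exp(-((x - b2) / c2) ** 2))

hour = np.linspace(6.0, 19.0, 27)
irradiance = gauss2(hour, 800, 13, 3, 150, 10, 1.5) \
             + np.random.normal(0.0, 20.0, hour.size)   # synthetic W/m^2

p, _ = curve_fit(gauss2, hour, irradiance,
                 p0=(700, 12, 3, 100, 10, 2), maxfev=20000)
resid = irradiance - gauss2(hour, *p)
rmse = np.sqrt(np.mean(resid ** 2))
r2 = 1.0 - np.sum(resid ** 2) / np.sum((irradiance - irradiance.mean()) ** 2)
print(rmse, r2)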
Curve fitting methods for solar radiation data modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my
2014-10-24
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting method will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
Sando, Steven K.; Driscoll, Daniel G.; Parrett, Charles
2008-01-01
Numerous users, including the South Dakota Department of Transportation, have continuing needs for peak-flow information for the design of highway infrastructure and many other purposes. This report documents results from a cooperative study between the South Dakota Department of Transportation and the U.S. Geological Survey to provide an update of peak-flow frequency estimates for South Dakota. Estimates of peak-flow magnitudes for 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals are reported for 272 streamflow-gaging stations, which include most gaging stations in South Dakota with 10 or more years of systematic peak-flow records through water year 2001. Recommended procedures described in Bulletin 17B were used as primary guidelines for developing peak-flow frequency estimates. The computer program PEAKFQ developed by the U.S. Geological Survey was used to run the frequency analyses. Flood frequencies for all stations were initially analyzed by using standard Bulletin 17B default procedures for fitting the log-Pearson III distribution. The resulting preliminary frequency curves were then plotted on a log-probability scale, and fits of the curves with systematic data were evaluated. In many cases, results of the default Bulletin 17B analyses were determined to be satisfactory. In other cases, however, the results could be improved by using various alternative procedures for frequency analysis. Alternative procedures for some stations included adjustments to skew coefficients or use of user-defined low-outlier criteria. Peak-flow records for many gaging stations are strongly influenced by low- or zero-flow values. This situation often results in a frequency curve that plots substantially above the systematic record data points at the upper end of the frequency curve. Adjustments to low-outlier criteria reduced the influence of very small peak flows and generally focused the analyses on the upper parts of the frequency curves (10- to 500-year recurrence intervals). The most common alternative procedures involved several different methods to extend systematic records, which was done primarily to address biases resulting from nonrepresentative climatic conditions during several specific periods of record and to reduce inconsistencies among multiple gaging stations along common stream channels with different periods of record. In some cases, records for proximal stations could be combined directly. In other cases, the two-station comparison procedure recommended in Bulletin 17B was used to adjust the mean and standard deviation of the logs of the systematic data for a target station on the basis of correlation with concurrent records from a nearby long-term index station. In some other cases, a 'mixed-station procedure' was used to adjust the log-distributional parameters for a target station, on the basis of correlation with one or more index stations, for the purpose of fitting the log-Pearson III distribution. Historical adjustment procedures were applied to peak-flow frequency analyses for 17 South Dakota gaging stations. A historical adjustment period extending back to 1881 (121 years) was used for 12 gaging stations in the James and Big Sioux River Basins, and various other adjustment periods were used for additional stations. Large peak flows that occurred in 1969 and 1997 accounted for 13 of the 17 historical adjustments. Other years for which historical peak flows were used include 1957, 1962, 1992, and 2001. 
A regional mixed-population analysis was developed to address complications associated with many high outliers for the Black Hills region. This analysis included definition of two populations of flood events. The population of flood events that composes the main body of peak flows for a given station is considered the 'ordinary-peaks population,' and the population of unusually large peak flows that plot substantially above the main body of peak flows on a log-probability scale is co
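A sketch of the Bulletin 17B-style log-Pearson Type III fit at the core of these frequency analyses: fit the distribution to the base-10 logs of the annual peaks by the method of moments, then read recurrence-interval quantiles from it. Weighted skew, outlier tests, and the historical and mixed-station adjustments described above are omitted; the peak-flow values are synthetic.

import numpy as np
from scipy import stats

peaks = np.array([120., 340., 95., 510., 220., 180., 760., 150.,
                  430., 280., 60., 900., 310., 200., 140.])  # annual peaks (ft^3/s)
logq = np.log10(peaks)
mean, std = logq.mean(), logq.std(ddof=1)
skew = stats.skew(logq, bias=False)              # station skew of the logs

for T in (2, 10, 100):                           # recurrence intervals (years)
    q = stats.pearson3.ppf(1.0 - 1.0 / T, skew, loc=mean, scale=std)
    print(f"{T:>3}-year peak: {10 ** q:.0f} ft^3/s")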
Fukuda, David H; Smith, Abbie E; Kendall, Kristina L; Cramer, Joel T; Stout, Jeffrey R
2012-02-01
The purpose of this study was to evaluate the use of critical velocity (CV) and isoperformance curves as an alternative to the Army Physical Fitness Test (APFT) two-mile running test. Seventy-eight men and women (mean ± SE; age: 22.1 ± 0.34 years; VO2max: 46.1 ± 0.82 mL/kg/min) volunteered to participate in this study. A VO2max test and four treadmill running bouts to exhaustion at varying intensities were completed. The relationship between total distance and time-to-exhaustion was tracked for each exhaustive run to determine CV and anaerobic running capacity. A VO2max prediction equation (coefficient of determination: 0.805; standard error of the estimate: 3.2377 mL/kg/min) was developed using these variables. Isoperformance curves were constructed for men and women to correspond with two-mile run times from APFT standards. Individual CV and anaerobic running capacity values were plotted and compared to isoperformance curves for APFT 2-mile run scores. Fifty-four individuals were determined to receive passing scores from this assessment. Physiological profiles identified from this procedure can be used to assess specific aerobic or anaerobic training needs. With the use of time-to-exhaustion as opposed to the time-trial format used in the two-mile run test, pacing strategies may be limited. The combination of variables from the CV test and isoperformance curves provides an alternative to standardized time-trial testing.
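A sketch of the distance-time relationship underlying the CV test: total distance covered equals the anaerobic running capacity (ARC) plus CV times the time-to-exhaustion, so a straight-line fit of distance against time gives CV as the slope and ARC as the intercept. The four exhaustive-run values below are illustrative.

import numpy as np

time_s = np.array([150.0, 240.0, 420.0, 700.0])    # times to exhaustion (s)
dist_m = np.array([800.0, 1200.0, 1950.0, 3100.0]) # distances covered (m)

cv, arc = np.polyfit(time_s, dist_m, 1)            # distance = ARC + CV * time
print(f"CV = {cv:.2f} m/s, ARC = {arc:.0f} m")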
Non-Boltzmann Modeling for Air Shock-Layer Radiation at Lunar-Return Conditions
NASA Technical Reports Server (NTRS)
Johnston, Christopher O.; Hollis, Brian R.; Sutton, Kenneth
2008-01-01
This paper investigates the non-Boltzmann modeling of the radiating atomic and molecular electronic states present in lunar-return shock-layers. The Master Equation is derived for a general atom or molecule while accounting for a variety of excitation and de-excitation mechanisms. A new set of electronic-impact excitation rates is compiled for N, O, and N2+, which are the main radiating species for most lunar-return shock-layers. Based on these new rates, a novel approach of curve-fitting the non-Boltzmann populations of the radiating atomic and molecular states is developed. This new approach provides a simple and accurate method for calculating the atomic and molecular non-Boltzmann populations while avoiding the matrix inversion procedure required for the detailed solution of the Master Equation. The radiative flux values predicted by the present detailed non-Boltzmann model and the approximate curve-fitting approach are shown to agree within 5% for the Fire 1634 s case.
Pawlikowski, Marek; Jankowski, Krzysztof; Skalski, Konstanty
2018-05-30
A new constitutive model for human trabecular bone is presented in this study. As the model is based on indentation tests performed on single trabeculae, it is formulated at the microscale. The constitutive law takes into account the non-linear viscoelasticity of the tissue. The elastic response is described by the hyperelastic Mooney-Rivlin model, while the viscoelastic effects are considered by means of the hereditary integral, in which stress depends on both time and strain. The material constants in the constitutive equation are identified on the basis of stress relaxation tests and indentation tests using a curve-fitting procedure. The constitutive model is implemented into the finite element package Abaqus® by means of a UMAT subroutine. The curve-fitting error is low, and the viscoelastic behaviour of the tissue predicted by the proposed constitutive model corresponds well to the realistic response of trabecular bone. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Prediction Analysis for Measles Epidemics
NASA Astrophysics Data System (ADS)
Sumi, Ayako; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi; Olsen, Lars Folke; Kobayashi, Nobumichi
2003-12-01
A newly devised procedure of prediction analysis, which is a linearized version of the nonlinear least squares method combined with the maximum entropy spectral analysis method, was proposed. This method was applied to time series data of measles case notification in several communities in the UK, USA and Denmark. The dominant spectral lines observed in each power spectral density (PSD) can be safely assigned as fundamental periods. The optimum least squares fitting (LSF) curve calculated using these fundamental periods can essentially reproduce the underlying variation of the measles data. An extension of the LSF curve can be used to predict measles case notification quantitatively. Some discussions including a predictability of chaotic time series are presented.
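A sketch of the least-squares fitting step described above: once the fundamental periods have been read off the maximum entropy power spectrum, fitting a trend plus sinusoids at those fixed periods is an ordinary linear least-squares problem, and the fitted curve can be extended forward for prediction. The weekly series and the 52- and 104-week periods are illustrative assumptions.

import numpy as np

t = np.arange(520)                            # weeks
data = (100.0 + 0.05 * t
        + 30.0 * np.sin(2 * np.pi * t / 52.0)
        + 15.0 * np.sin(2 * np.pi * t / 104.0)
        + np.random.normal(0.0, 5.0, t.size)) # synthetic case counts

periods = [52.0, 104.0]                       # dominant PSD lines (assumed)
cols = [np.ones_like(t, float), t.astype(float)]
for P in periods:
    cols += [np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)]
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X, data, rcond=None)
lsf_curve = X @ coef                          # optimum LSF curve; extend t to predict
print(coef[:2])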
NASA Technical Reports Server (NTRS)
Schiess, James R.; Kerr, Patricia A.; Smith, Olivia C.
1988-01-01
Smooth curves drawn among plotted data easily. Rational-Spline Approximation with Automatic Tension Adjustment algorithm leads to flexible, smooth representation of experimental data. "Tension" denotes mathematical analog of mechanical tension in spline or other mechanical curve-fitting tool, and "spline" denotes mathematical generalization of such a tool. Program differs from usual spline under tension in that it allows user to specify different values of tension between adjacent pairs of knots rather than constant tension over entire range of data. Subroutines use automatic adjustment scheme that varies tension parameter for each interval until maximum deviation of spline from line joining knots is less than or equal to amount specified by user. Procedure frees user from drudgery of adjusting individual tension parameters while still giving control over local behavior of spline.
Curve fitting air sample filter decay curves to estimate transuranic content.
Hayes, Robert B; Chiou, Hung Cheng
2004-01-01
By testing industry-standard techniques for radon progeny evaluation on air sample filters, a new technique is developed to evaluate transuranic activity on air filters by curve fitting the decay curves. The industry method modified here is simply the use of filter activity measurements at different times to estimate the air concentrations of radon progeny. The primary modification was to look not for specific radon progeny values but rather for transuranic activity. By using a method that provides reasonably conservative estimates of the transuranic activity present on a filter, some credit for the decay curve shape can be taken. Rigorous statistical analysis of the curve fits to over 65 samples having no transuranic activity, taken over a 10-month period, allowed optimization of the fitting function and of quality tests for this purpose.
Amato, Ernesto; Campennì, Alfredo; Leotta, Salvatore; Ruggeri, Rosaria M; Baldari, Sergio
2016-06-01
Radioiodine therapy is an effective and safe treatment of hyperthyroidism due to Graves' disease, toxic adenoma, or toxic multinodular goiter. We compared the outcomes of a traditional calculation method based on an analytical fit of the uptake curve and subsequent dose calculation with the MIRD approach, and an alternative computation approach based on a formulation implemented in a public-access website, searching for the best timing of radioiodine uptake measurements in pre-therapeutic dosimetry. We report on sixty-nine hyperthyroid patients who were treated after pre-therapeutic dosimetry calculated by fitting a six-point uptake curve (3-168 h). In order to evaluate the results of the radioiodine treatment, patients were followed up to sixty-four months after treatment (mean 47.4 ± 16.9). Patient dosimetry was then retrospectively recalculated with the two above-mentioned methods. Several time schedules for uptake measurements were considered, with different timings and total numbers of points. Early time schedules, sampling uptake up to 48 h, do not allow an accurate treatment plan to be set up, while schedules including the measurement at one week give significantly better results. The analytical fit procedure applied to the three-point time schedule 3(6)-24-168 h gave results significantly more accurate than the website approach exploiting either the same schedule or the single measurement at 168 h. Consequently, the best strategy among those considered is to sample the uptake at 3(6)-24-168 h and carry out an analytical fit of the curve, while extra measurements at 48 and 72 h add only marginal improvements in the accuracy of therapeutic activity determination. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-09-01
The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectra of five different concentrations of CuSO4·5H2O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. A calibration curve between the intensity and concentration of Cu was then established. The error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and can be applied to the decomposition and correction of overlapping peaks in LIBS spectra.
The curvature of sensitometric curves for Kodak XV-2 film irradiated with photon and electron beams.
van Battum, L J; Huizenga, H
2006-07-01
Sensitometric curves of Kodak XV-2 film, obtained over a period of ten years with various types of equipment, have been analyzed for both photon and electron beams. The sensitometric slope in the dataset varies by more than a factor of 2, which is attributed mainly to variations in developer conditions. In the literature, the single-hit equation has been proposed as a model for the sensitometric curve, with the sensitivity and maximum optical density as parameters. In this work, the single-hit equation has been translated into a polynomial-like function with the sensitometric slope and curvature as parameters. The model has been applied to fit the sensitometric data. If the dataset is fitted for each single sensitometric curve separately, a large variation is observed for both fit parameters. When sensitometric curves are fitted simultaneously, it appears that all curves can be fitted adequately with a sensitometric curvature that is related to the sensitometric slope. When fitting each curve separately, measurement uncertainty apparently hides this relation. The relation appears to depend only on the type of densitometer used. No significant differences between beam energies or beam modalities are observed. Using the intrinsic relation between slope and curvature in fitting sensitometric data, e.g., for pretreatment verification of intensity-modulated radiotherapy, will increase the accuracy of the sensitometric curve. A calibration at a single dose point, together with a predetermined densitometer-dependent parameter ODmax, will be adequate to find the actual relation between optical density and dose.
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for curve fitting data to an exponential equation (y = A·e^(B·t) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = A·x^B + C and the general geometric growth equation y = A·k^(B·t) + C.
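A sketch of the non-iterative idea: for y = A·e^(B·t) + C the derivative satisfies dy/dt = B·(y - C), so a straight-line fit of the discrete derivative against y yields B and C directly, after which A follows from one more linear fit. This illustrates the principle under those assumptions; it is not the original implementation.

import numpy as np

t = np.linspace(0.0, 4.0, 200)
y = 3.0 * np.exp(-1.2 * t) + 0.5 + np.random.normal(0.0, 0.005, t.size)

dydt = np.gradient(y, t)                 # discrete-calculus derivative
B, neg_BC = np.polyfit(y, dydt, 1)       # dy/dt = B*y - B*C
C = -neg_BC / B
A = np.linalg.lstsq(np.exp(B * t)[:, None], y - C, rcond=None)[0][0]
print(A, B, C)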
NASA Astrophysics Data System (ADS)
Ramasahayam, Veda Krishna Vyas; Diwakar, Anant; Bodi, Kowsik
2017-11-01
To study the flow of high-temperature air in vibrational and chemical equilibrium, accurate models for the thermodynamic state and transport phenomena are required. In the present work, the performance of a state-equation model and two mixing rules for determining equilibrium air thermodynamic and transport properties is compared with that of curve fits. The thermodynamic state model considers 11 species and computes flow chemistry by an iterative process; the mixing rules considered for viscosity are those of Wilke and Armaly-Sutton. The curve fits of Srinivasan, which are based on Grabau-type transition functions, are chosen for comparison. A two-dimensional Navier-Stokes solver was developed to simulate high-enthalpy flows, with numerical fluxes computed by AUSM+-up. The accuracy of the state-equation model and curve fits for thermodynamic properties is determined using hypersonic inviscid flow over a circular cylinder. The performance of the mixing rules and curve fits for viscosity is compared using hypersonic laminar boundary layer prediction on a flat plate. It is observed that steady-state solutions from the state-equation model and curve fits match each other. Though the curve fits are significantly faster, the state-equation model is more general and can be adapted to any flow composition.
Testing Modified Newtonian Dynamics with Low Surface Brightness Galaxies: Rotation Curve FITS
NASA Astrophysics Data System (ADS)
de Blok, W. J. G.; McGaugh, S. S.
1998-11-01
We present modified Newtonian dynamics (MOND) fits to 15 rotation curves of low surface brightness (LSB) galaxies. Good fits are readily found, although for a few galaxies minor adjustments to the inclination are needed. Reasonable values for the stellar mass-to-light ratios are found, as well as an approximately constant value for the total (gas and stars) mass-to-light ratio. We show that the LSB galaxies investigated here lie on the one, unique Tully-Fisher relation, as predicted by MOND. The scatter on the Tully-Fisher relation can be completely explained by the observed scatter in the total mass-to-light ratio. We address the question of whether MOND can fit any arbitrary rotation curve by constructing a plausible fake model galaxy. While MOND is unable to fit this hypothetical galaxy, a normal dark-halo fit is readily found, showing that dark matter fits are much less selective in producing fits. The good fits to rotation curves of LSB galaxies support MOND, especially because these are galaxies with large mass discrepancies deep in the MOND regime.
del Moral, F; Vázquez, J A; Ferrero, J J; Willisch, P; Ramírez, R D; Teijeiro, A; López Medina, A; Andrade, B; Vázquez, J; Salvador, F; Medal, D; Salgado, M; Muñoz, V
2009-09-01
Modern radiotherapy uses complex treatments that necessitate more complex quality assurance procedures. As a continuous medium, GafChromic EBT films offer suitable features for such verification. However, their sensitometric curve is not fully understood in terms of classical theoretical models. In fact, measured optical densities and those predicted by the classical models differ significantly, and this difference increases systematically with wider dose ranges. Thus, achieving the accuracy required for intensity-modulated radiotherapy (IMRT) by classical methods is not possible, precluding their use. As a result, experimental parametrizations, such as polynomial fits, are replacing phenomenological expressions in modern investigations. This article focuses on identifying new theoretical ways to describe sensitometric curves and on evaluating the quality of fit for experimental data based on four proposed models. A whole mathematical formalism starting with a geometrical version of the classical theory is used to develop new expressions for the sensitometric curves. General results from percolation theory are also used. A flat-bed-scanner-based method was chosen for the film analysis. Different tests were performed, such as consistency of the numeric results for the proposed model and double examination using data from independent researchers. Results show that the percolation-theory-based model provides the best theoretical explanation for the sensitometric behavior of GafChromic films. The different sizes of active centers or monomer crystals of the film are the basis of this model, allowing acquisition of information about the internal structure of the films. Values for the mean size of the active centers were obtained in accordance with technical specifications. In this model, the dynamics of the interaction between the active centers of GafChromic film and radiation is also characterized by means of its interaction cross-section value. The percolation model fulfills the accuracy requirements for quality-control procedures when large ranges of doses are used and offers a physical explanation for the film response.
Determination of heat transfer coefficients in plastic French straws plunged in liquid nitrogen.
Santos, M Victoria; Sansinena, M; Chirife, J; Zaritzky, N
2014-12-01
The knowledge of the thermodynamic process during the cooling of reproductive biological systems is important to assess and optimize cryopreservation procedures. The time-temperature curve of a sample immersed in liquid nitrogen enables the calculation of cooling rates and helps to determine whether the sample vitrifies or undergoes a phase change transition. When dealing with cryogenic liquids, the temperature difference between the solid and the sample is high enough to cause boiling of the liquid, and the sample can undergo different regimes such as film and/or nucleate pool boiling. In the present work, the surface heat transfer coefficients (h) for plastic French straws plunged in liquid nitrogen were determined from measured time-temperature curves. When straws filled with ice were used, the cooling curve showed an abrupt slope change, which was attributed to the transition from film to nucleate pool boiling. The h value that fitted each stage of the cooling process was calculated using a numerical finite element program that solves the heat transfer partial differential equation under transient conditions. In the cooling process corresponding to the film boiling regime, the h that best fitted the experimental results was h = 148.12 ± 5.4 W/(m²·K), and for nucleate boiling h = 1355 ± 51 W/(m²·K). These values were further validated by predicting the time-temperature curve for French straws filled with a biological fluid system (bovine semen-extender) which undergoes freezing. Good agreement was obtained between the experimental and predicted temperature profiles, further confirming the accuracy of the h values previously determined for the ice-filled straw. These coefficients were corroborated using literature correlations. The determination of the boiling regimes that govern the cooling process when plunging straws in liquid nitrogen constitutes an important issue when trying to optimize cryopreservation procedures. Furthermore, this information can lead to improvements in the design of cooling devices in the cryobiology field. Copyright © 2014 Elsevier Inc. All rights reserved.
Evaluating Model Fit for Growth Curve Models: Integration of Fit Indices from SEM and MLM Frameworks
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.; Taylor, Aaron B.
2009-01-01
Evaluating overall model fit for growth curve models involves 3 challenging issues. (a) Three types of longitudinal data with different implications for model fit may be distinguished: balanced on time with complete data, balanced on time with data missing at random, and unbalanced on time. (b) Traditional work on fit from the structural equation…
Estimating the Area Under ROC Curve When the Fitted Binormal Curves Demonstrate Improper Shape.
Bandos, Andriy I; Guo, Ben; Gur, David
2017-02-01
The "binormal" model is the most frequently used tool for parametric receiver operating characteristic (ROC) analysis. The binormal ROC curves can have "improper" (non-concave) shapes that are unrealistic in many practical applications, and several tools (eg, PROPROC) have been developed to address this problem. However, due to the general robustness of binormal ROCs, the improperness of the fitted curves might carry little consequence for inferences about global summary indices, such as the area under the ROC curve (AUC). In this work, we investigate the effect of severe improperness of fitted binormal ROC curves on the reliability of AUC estimates when the data arise from an actually proper curve. We designed theoretically proper ROC scenarios that induce severely improper shape of fitted binormal curves in the presence of well-distributed empirical ROC points. The binormal curves were fitted using maximum likelihood approach. Using simulations, we estimated the frequency of severely improper fitted curves, bias of the estimated AUC, and coverage of 95% confidence intervals (CIs). In Appendix S1, we provide additional information on percentiles of the distribution of AUC estimates and bias when estimating partial AUCs. We also compared the results to a reference standard provided by empirical estimates obtained from continuous data. We observed up to 96% of severely improper curves depending on the scenario in question. The bias in the binormal AUC estimates was very small and the coverage of the CIs was close to nominal, whereas the estimates of partial AUC were biased upward in the high specificity range and downward in the low specificity range. Compared to a non-parametric approach, the binormal model led to slightly more variable AUC estimates, but at the same time to CIs with more appropriate coverage. The improper shape of the fitted binormal curve, by itself, ie, in the presence of a sufficient number of well-distributed points, does not imply unreliable AUC-based inferences. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Sánchez-Jiménez, Pedro E; Pérez-Maqueda, Luis A; Perejón, Antonio; Criado, José M
2013-02-05
This paper provides some clarifications regarding the use of model-fitting methods of kinetic analysis for estimating the activation energy of a process, in response to some results recently published in Chemistry Central Journal. The model-fitting methods of Arrhenius and Šatava are used to determine the activation energy of a single simulated curve. It is shown that most kinetic models correctly fit the data, each providing a different value for the activation energy; therefore, the correct activation energy cannot be determined from a single non-isothermal curve. On the other hand, when a set of curves recorded under different heating schedules is used, the correct kinetic parameters can be clearly discerned. Thus, the activation energy and the kinetic model cannot be unambiguously determined from a single experimental curve recorded under non-isothermal conditions, and the use of a set of curves recorded under different heating schedules is mandatory if model-fitting methods are employed.
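To illustrate the paper's point, the sketch below simulates a single first-order curve at one heating rate and fits it with two different model functions via the Coats-Redfern linearization; both models line up well but return different apparent activation energies. The kinetic forms and parameter values are standard textbook choices, not taken from the paper.

```python
# Sketch: fit one simulated non-isothermal curve with two different
# kinetic models via the Coats-Redfern linearization
#   ln[g(alpha)/T^2] = const - E/(R*T),
# showing that both fit well but yield different activation energies.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314           # J/(mol K)
E_true = 150e3      # J/mol, true activation energy (illustrative)
A_true = 1e12       # 1/s
beta = 10.0 / 60.0  # heating rate, K/s (10 K/min)

def dadT(T, a):     # first-order (F1) kinetics under linear heating
    return (A_true / beta) * np.exp(-E_true / (R * T)) * (1.0 - a)

sol = solve_ivp(dadT, (400.0, 900.0), [1e-8], dense_output=True, rtol=1e-8)
T = np.linspace(500.0, 750.0, 200)
alpha = np.clip(sol.sol(T)[0], 1e-6, 0.999)

models = {"F1": lambda a: -np.log(1 - a),        # first order
          "R2": lambda a: 1 - np.sqrt(1 - a)}    # contracting area
for name, g in models.items():
    mask = (alpha > 0.05) & (alpha < 0.95)
    slope, _ = np.polyfit(1.0 / T[mask], np.log(g(alpha[mask]) / T[mask]**2), 1)
    print(f"{name}: apparent E = {-slope * R / 1e3:.1f} kJ/mol")
```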
Color difference threshold determination for acrylic denture base resins.
Ren, Jiabao; Lin, Hong; Huang, Qingmei; Liang, Qifan; Zheng, Gang
2015-01-01
This study aimed to set evaluation indicators, i.e., perceptibility and acceptability color difference thresholds, of color stability for acrylic denture base resins for a spectrophotometric assessment method, which offers an alternative to the visual method described in ISO 20795-1:2013. A total of 291 disk specimens 50±1 mm in diameter and 0.5±0.1 mm thick were prepared (ISO 20795-1:2013) and processed through radiation tests in an accelerated aging chamber (ISO 7491:2000) for increasing times of 0 to 42 hours. Color alterations were measured with a spectrophotometer and evaluated using the CIE L*a*b* colorimetric system. Color differences were calculated with the CIEDE2000 color difference formula. Thirty-two dental professionals without color vision deficiencies completed perceptibility and acceptability assessments under controlled conditions in vitro. An S-curve fitting procedure was used to analyze the 50:50% perceptibility and acceptability thresholds. Furthermore, perceptibility and acceptability against the differences of the three color attributes, lightness, chroma, and hue, were also investigated. According to the S-curve fitting procedure, the 50:50% perceptibility threshold was 1.71 ΔE00 (r²=0.88) and the 50:50% acceptability threshold was 4.00 ΔE00 (r²=0.89). Within the limitations of this study, 1.71/4.00 ΔE00 could be used as perceptibility/acceptability thresholds for acrylic denture base resins.
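The 50:50% thresholds come from an S-curve fit of acceptance proportions against ΔE00. A minimal version, assuming a logistic form (the study's exact S-curve is not specified here) and made-up ratings, looks like this:

```python
# Sketch: 50:50% acceptability threshold from an S-curve (logistic) fit
# of the proportion of "acceptable" judgments against Delta-E00.
# The exact S-curve form used in the study may differ; data are made up.
import numpy as np
from scipy.optimize import curve_fit

dE = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0])
p_accept = np.array([0.97, 0.95, 0.85, 0.70, 0.52, 0.30, 0.15, 0.04])

def s_curve(x, x0, s):
    return 1.0 / (1.0 + np.exp((x - x0) / s))   # decreasing logistic

(x0, s), _ = curve_fit(s_curve, dE, p_accept, p0=[4.0, 1.0])
print(f"50:50% acceptability threshold ~ {x0:.2f} dE00")
```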
Quantifying and Reducing Curve-Fitting Uncertainty in Isc
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-06-14
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
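As a baseline for the localized fits discussed above, the sketch below performs an ordinary straight-line regression of I-V points in a window near V = 0 and reports Isc (the intercept) with its standard uncertainty from the fit covariance; the Bayesian, evidence-based window selection is not reproduced, and the data are synthetic.

```python
# Sketch: straight-line fit of I-V points near short circuit to estimate
# Isc (the current-axis intercept) and its fit uncertainty. This is the
# classical localized regression only; the paper's Bayesian window
# selection is not reproduced here. Data are synthetic.
import numpy as np

V = np.linspace(0.0, 0.1, 15)                            # volts, near V = 0
I = 5.00 - 0.8 * V + np.random.normal(0, 2e-3, V.size)   # synthetic I-V points

coef, cov = np.polyfit(V, I, 1, cov=True)   # linear fit with covariance matrix
isc, isc_std = coef[1], np.sqrt(cov[1, 1])  # intercept at V = 0
print(f"Isc = {isc:.4f} A +/- {isc_std:.4f} A (fit uncertainty only)")
```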
ERIC Educational Resources Information Center
Alexander, John W., Jr.; Rosenberg, Nancy S.
This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…
Cassette, Philippe
2016-03-01
In Liquid Scintillation Counting (LSC), the scintillating source is part of the measurement system, and its detection efficiency varies with the scintillator used, the vial, the volume, and the chemistry of the sample. The detection efficiency is generally determined using a quenching curve describing, for a specific radionuclide, the relationship between a quenching index given by the counter and the detection efficiency. A set of quenched LS standard sources is prepared by adding a quenching agent, and the quenching index and detection efficiency are determined for each source. Then a simple formula is fitted to the experimental points to define the quenching curve function. The paper describes a software package specifically devoted to the determination of quenching curves with uncertainties. The experimental measurements are described by their quenching index and detection efficiency, with uncertainties on both quantities. Random Gaussian fluctuations of these experimental measurements are sampled, and a polynomial or logarithmic function is fitted to each fluctuation by χ² minimization. This Monte Carlo procedure is repeated many times, and finally the arithmetic mean and the experimental standard deviation of each parameter are calculated, together with the covariances between these parameters. Using these parameters, the detection efficiency corresponding to an arbitrary quenching index within the measured range can be calculated. The associated uncertainty is calculated with the law of propagation of variances, including the covariance terms. Copyright © 2015 Elsevier Ltd. All rights reserved.
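The Monte Carlo procedure described above can be sketched in a few lines: perturb each (quenching index, efficiency) point with Gaussian noise, fit a polynomial to each draw, then use the parameter means and covariances to evaluate the efficiency and its uncertainty at an arbitrary quench index. The data values below are illustrative, not from the paper.

```python
# Sketch of the Monte Carlo quenching-curve procedure: Gaussian-perturb
# each (quenching index, efficiency) point, fit a polynomial per draw,
# then propagate the parameter covariance to an arbitrary quench index.
import numpy as np

q = np.array([350., 400., 450., 500., 550.])    # quenching index (illustrative)
uq = np.full(q.shape, 5.0)                      # its standard uncertainty
eff = np.array([0.55, 0.65, 0.72, 0.78, 0.82])  # detection efficiency
ueff = np.full(eff.shape, 0.01)

deg, n_mc = 2, 5000
rng = np.random.default_rng(1)
params = np.empty((n_mc, deg + 1))
for i in range(n_mc):                            # one fit per random draw
    qi = rng.normal(q, uq)
    ei = rng.normal(eff, ueff)
    params[i] = np.polyfit(qi, ei, deg)

p_mean = params.mean(axis=0)
p_cov = np.cov(params, rowvar=False)             # includes covariance terms

q0 = 480.0                                       # arbitrary quench index
x = np.array([q0**2, q0, 1.0])                   # gradient of the polynomial
e0 = x @ p_mean
u0 = np.sqrt(x @ p_cov @ x)                      # law of propagation of variances
print(f"efficiency({q0:.0f}) = {e0:.3f} +/- {u0:.3f}")
```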
Least-Squares Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Kantak, Anil V.
1990-01-01
The Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes the polynomial providing the least-squares best fit to uniformly spaced data. It enables the user to specify either the tolerable least-squares error in the fit or the degree of the polynomial. AKLSQF returns the polynomial and the actual least-squares-fit error incurred in the operation. Data are supplied to the routine either by direct keyboard entry or via a file. Written for an IBM PC XT/AT or compatible using Microsoft's Quick Basic compiler.
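AKLSQF's two entry modes (target error or fixed degree) can be mimicked in a few lines of modern code; this sketch raises the polynomial degree until a requested least-squares error is met, then reports the fit error actually incurred. The data are synthetic.

```python
# Sketch of AKLSQF-style behavior: increase the polynomial degree until
# the least-squares error drops below a user-specified tolerance, then
# report the polynomial and the error actually incurred.
import numpy as np

x = np.linspace(0.0, 1.0, 21)                    # uniformly spaced data
y = np.sin(2.5 * x) + 0.01 * np.random.randn(x.size)
tol = 0.02                                       # tolerable RMS fit error

for deg in range(1, 11):
    p = np.polyfit(x, y, deg)
    rms = np.sqrt(np.mean((np.polyval(p, x) - y) ** 2))
    if rms <= tol:
        break
print(f"degree {deg}, RMS error {rms:.4f}, coefficients {np.round(p, 3)}")
```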
NASA Astrophysics Data System (ADS)
Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke
2008-08-01
A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into major-interference region and minor-interference region. Different approximating functions are then constructed for two kinds of regions respectively. For the major interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by curve-fitting method. For the minor interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.
Toward a Micro-Scale Acoustic Direction-Finding Sensor with Integrated Electronic Readout
2013-06-01
[Front-matter residue from the thesis PDF: lists of figures and tables, including "measurements with curve fits" (Fig. 2.9), "Failure testing" (Fig. 2.10), "Sensor parameters" (Table 2.1), and "Curve fit parameters" (Table 2.2). A surviving text fragment notes that the quantity of interest is the elastic stiffness, and that in a typical nanoindentation test the loading curve is nonlinear due to combined plastic and elastic deformation.]
Transonic Compressor: Program System TXCO for Data Acquisition and On-Line Reduction.
1980-10-01
[OCR residue from the report's Fortran listing and flow chart: a date/time routine (MON/DAY/YEAR/HOUR/MIN/SEC), comments on linear curve fits, the variable SECON (real intercept of a linear curve fit, as returned by subroutine CURVE), and a flow chart of subroutine CALIB.]
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2015-01-01
A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern-matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function that fits the measured cross-power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data were obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude-squared coherence data. The procedure also provides an estimate of the cross-spectrum phase offset.
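The core of the idea, fitting a line to the cross-power-spectrum phase and reading the time delay from its slope, can be sketched without the adaptive cost function. The relation used below (delay = -slope/(2π) for scipy's cross-spectrum convention) is standard; the paper's gradient search is not reproduced, and the signals are synthetic.

```python
# Sketch: estimate the time delay between two coherent signals from the
# slope of the cross-power-spectrum phase: delay = -slope / (2*pi).
import numpy as np
from scipy.signal import csd, coherence

fs, true_delay = 1000.0, 0.012                  # Hz, seconds (synthetic)
t = np.arange(0, 10, 1 / fs)
x = np.random.randn(t.size)
y = np.roll(x, int(round(true_delay * fs))) + 0.3 * np.random.randn(t.size)

f, Pxy = csd(x, y, fs=fs, nperseg=1024)
_, Cxy = coherence(x, y, fs=fs, nperseg=1024)
band = (f > 0) & (Cxy > 0.5)                    # keep only coherent bins
phase = np.unwrap(np.angle(Pxy[band]))
slope, _ = np.polyfit(f[band], phase, 1)        # linear fit of the phase
print(f"estimated delay = {-slope / (2 * np.pi) * 1e3:.2f} ms "
      f"(true {true_delay * 1e3:.1f} ms)")
```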
NASA Astrophysics Data System (ADS)
Conley, A.; Goldhaber, G.; Wang, L.; Aldering, G.; Amanullah, R.; Commins, E. D.; Fadeyev, V.; Folatelli, G.; Garavini, G.; Gibbons, R.; Goobar, A.; Groom, D. E.; Hook, I.; Howell, D. A.; Kim, A. G.; Knop, R. A.; Kowalski, M.; Kuznetsova, N.; Lidman, C.; Nobili, S.; Nugent, P. E.; Pain, R.; Perlmutter, S.; Smith, E.; Spadafora, A. L.; Stanishev, V.; Strovink, M.; Thomas, R. C.; Wood-Vasey, W. M.; Supernova Cosmology Project
2006-06-01
We present measurements of Ωm and ΩΛ from a blind analysis of 21 high-redshift supernovae using a new technique (CMAGIC) for fitting the multicolor light curves of Type Ia supernovae, first introduced by Wang and coworkers. CMAGIC takes advantage of the remarkably simple behavior of Type Ia supernovae on color-magnitude diagrams and has several advantages over current techniques based on maximum magnitudes. Among these are a reduced sensitivity to host galaxy dust extinction, a shallower luminosity-width relation, and the relative simplicity of the fitting procedure. This allows us to provide a cross-check of previous supernova cosmology results, despite the fact that current data sets were not observed in a manner optimized for CMAGIC. We describe the details of our novel blindness procedure, which is designed to prevent experimenter bias. The data are broadly consistent with the picture of an accelerating universe and agree with a flat universe within 1.7 σ, including systematics. We also compare the CMAGIC results directly with those of a maximum magnitude fit to the same supernovae, finding that CMAGIC favors more acceleration at the 1.6 σ level, including systematics and the correlation between the two measurements. A fit for w assuming a flat universe yields a value that is consistent with a cosmological constant within 1.2 σ.
Sensitivity of Fit Indices to Misspecification in Growth Curve Models
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.
2010-01-01
This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…
Edge detection and mathematic fitting for corneal surface with Matlab software.
Di, Yue; Li, Mei-Yan; Qiao, Tong; Lu, Na
2017-01-01
To select the optimal edge detection method for identifying the corneal surface, and to compare three curve-fitting equations, using Matlab software. Fifteen subjects were recruited. The corneal images from optical coherence tomography (OCT) were imported into Matlab software. Five edge detection methods (Canny, Log, Prewitt, Roberts, Sobel) were used to identify the corneal surface. Then two manual identification methods (ginput and getpts) were applied to identify the edge coordinates, and the differences among these methods were compared. A binomial curve (y=Ax²+Bx+C), a polynomial curve [p(x)=p₁xⁿ+p₂xⁿ⁻¹+…+pₙx+pₙ₊₁] and a conic section (Ax²+Bxy+Cy²+Dx+Ey+F=0) were used for curve fitting of the corneal surface, and the relative merits of the three fitted curves were analyzed. Finally, the eccentricity (e) obtained by corneal topography and by the conic section were compared with a paired t-test. All five edge detection algorithms produced continuous coordinates indicating the edge of the corneal surface. The ordinates from manual identification were close to the inside of the actual edges. The binomial curve was greatly affected by tilt angle. The polynomial curve lacked geometrical properties and was unstable. The conic section could calculate the tilted symmetry axis, eccentricity, circle center, etc. There were no significant differences between the 'e' values from corneal topography and the conic section (t=0.9143, P=0.3760>0.05). It is feasible to simulate the corneal surface with a mathematical curve in Matlab software. Edge detection has better repeatability and higher efficiency. The manual identification approach is an indispensable complement to detection. Polynomial and conic sections are both alternative methods for corneal curve fitting; the conic curve was the optimal choice based on its specific geometrical properties.
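Fitting the general conic Ax²+Bxy+Cy²+Dx+Ey+F=0 to edge coordinates reduces to a linear least-squares problem once one coefficient is normalized. A minimal sketch, with F fixed to -1 (valid only when the curve does not pass through the origin) and synthetic edge points in place of detected ones:

```python
# Sketch: least-squares fit of a general conic
#   A x^2 + B xy + C y^2 + D x + E y + F = 0, normalized to F = -1
# (assumes the curve does not pass through the origin). Edge points are
# synthetic; in the study they come from Canny/Log/Prewitt/... edges.
import numpy as np

theta = np.linspace(0.2, np.pi - 0.2, 80)
x = 6.0 * np.cos(theta) + 0.5              # synthetic "corneal" arc
y = 3.5 * np.sin(theta) + 8.0

M = np.column_stack([x * x, x * y, y * y, x, y])
coef, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)  # solves F = -1
A, B, C, D, E = coef
print("conic coefficients:", np.round(coef, 4))
print("discriminant B^2-4AC =", round(B * B - 4 * A * C, 4), "(<0 for an ellipse)")
```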
Measurement of regional cerebral blood flow with copper-62-PTSM and a three-compartment model.
Okazawa, H; Yonekura, Y; Fujibayashi, Y; Mukai, T; Nishizawa, S; Magata, Y; Ishizu, K; Tamaki, N; Konishi, J
1996-07-01
We evaluated quantitatively 62Cu-labeled pyruvaldehyde bis(N4-methylthiosemicarbazone) copper II (62Cu-PTSM) as a brain perfusion tracer for positron emission tomography (PET). For quantitative measurement, the octanol extraction method is needed to correct for arterial radioactivity in estimating the lipophilic input function, but the procedure is not practical for clinical studies. To measure regional cerebral blood flow (rCBF) by 62Cu-PTSM with simple arterial blood sampling, a standard curve of the octanol extraction ratio and a three-compartment model were applied. We performed both 15O-labeled water PET and 62Cu-PTSM PET with dynamic data acquisition and arterial sampling in six subjects. Data obtained in 10 subjects studied previously were used for the standard octanol extraction curve. Arterial activity was measured and corrected to obtain the true input function using the standard curve. Graphical analysis (Gjedde-Patlak plot), with the data for each subject fitted by a straight regression line, suggested that 62Cu-PTSM can be analyzed by the three-compartment model with negligible K4. Using this model, K1-K3 were estimated from curve fitting of the cerebral time-activity curve and the corrected input function. The fractional uptake of 62Cu-PTSM was corrected to rCBF with the individual extraction at steady state calculated from K1-K3. The influx rates (Ki) obtained from the three-compartment model and graphical analyses were compared for validation of the model. A comparison of rCBF values obtained from 62Cu-PTSM and 15O-water studies demonstrated excellent correlation. The results suggest the potential feasibility of quantitation of cerebral perfusion with 62Cu-PTSM accompanied by dynamic PET and simple arterial sampling.
Optical Rotation Curves and Linewidths for Tully-Fisher Applications
NASA Astrophysics Data System (ADS)
Courteau, Stephane
1997-12-01
We present optical long-slit rotation curves for 304 northern Sb-Sc UGC galaxies from a sample designed for Tully-Fisher (TF) applications. Matching r-band photometry exists for each galaxy. We describe the procedures of rotation curve (RC) extraction and construction of optical profiles analogous to 21 cm integrated linewidths. More than 20% of the galaxies were observed twice or more, allowing for a proper determination of systematic errors. Various measures of maximum rotational velocity to be used as input in the TF relation are tested on the basis of their repeatability, minimization of TF scatter, and match with 21 cm linewidths. The best measure of TF velocity, V2.2, is given at the location of peak rotational velocity of a pure exponential disk. An alternative measure to V2.2, which makes no assumption about the luminosity profile or shape of the rotation curve, is Vhist, the 20% width of the velocity histogram, though the match with 21 cm linewidths is not as good. We show that optical TF calibrations yield internal scatter comparable to, if not smaller than, the best calibrations based on single-dish 21 cm radio linewidths. Even though resolved H I RCs are more extended than their optical counterparts, a tight match between optical and radio linewidths exists since the bulk of the H I surface density is enclosed within the optical radius. We model the 304 RCs presented here plus a sample of 958 curves from Mathewson et al. (1992, ApJS, 81, 413) with various fitting functions. An arctan function provides an adequate simple fit (not accounting for non-circular motions and spiral arms). More elaborate empirical models may yield a better match at the expense of strong covariances. We caution against physical or "universal" parametrizations for TF applications.
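The arctan model referenced above is commonly written V(r) = V0 + (2/π)·Vc·arctan(r/rt); fitting it to an observed rotation curve is a short nonlinear least-squares exercise. The form and the synthetic data below are for illustration.

```python
# Sketch: fit an arctan rotation-curve model,
#   V(r) = V0 + (2/pi) * Vc * arctan(r / rt),
# to a measured rotation curve. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def arctan_rc(r, v0, vc, rt):
    return v0 + (2.0 / np.pi) * vc * np.arctan(r / rt)

r = np.linspace(0.3, 15.0, 40)                                  # radius, kpc
v = arctan_rc(r, 10.0, 220.0, 1.5) + np.random.normal(0, 5.0, r.size)

(v0, vc, rt), _ = curve_fit(arctan_rc, r, v, p0=[0.0, 200.0, 1.0])
print(f"V0={v0:.1f} km/s, Vc={vc:.1f} km/s, rt={rt:.2f} kpc")
```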
Materials and Modulators for 3D Displays
2002-08-01
[Figure-caption residue: a plot at 1243 nm in which 0, 180, and 360 deg correspond to parallel polarization, with a cos²(θ) fit to the data (dashed curve) plus a constant; curves for 10 µs dwell time (dashed bold) versus the static case (thin dashed); Figure 20, schematics of a free-space setup; and a two-photon spectrum of rhodamine B whose two peaks are fit by two Lorentzian curves.]
Automated generation of influence functions for planar crack problems
NASA Technical Reports Server (NTRS)
Sire, Robert A.; Harris, David O.; Eason, Ernest D.
1989-01-01
A numerical procedure for the generation of influence functions for Mode I planar problems is described. The resulting influence functions are in a form for convenient evaluation of stress-intensity factors for complex stress distributions. Crack surface displacements are obtained by a least-squares solution of the Williams eigenfunction expansion for displacements in a cracked body. Discrete values of the influence function, evaluated using the crack surface displacements, are curve fit using an assumed functional form. The assumed functional form includes appropriate limit-behavior terms for very deep and very shallow cracks. Continuous representation of the influence function provides a convenient means for evaluating stress-intensity factors for arbitrary stress distributions by numerical integration. The procedure is demonstrated for an edge-cracked strip and a radially cracked disk. Comparisons with available published results demonstrate the accuracy of the procedure.
NASA Astrophysics Data System (ADS)
Bogani, F.; Borchi, E.; Bruzzi, M.; Leroy, C.; Sciortino, S.
1997-02-01
The thermoluminescent (TL) response of Chemical Vapour Deposited (CVD) diamond films to beta irradiation has been investigated. A numerical curve-fitting procedure, calibrated by means of a set of LiF TLD100 experimental spectra, has been developed to deconvolute the complex structured TL glow curves. The values of the activation energy and of the frequency factor related to each of the TL peaks involved have been determined. The TL response of the CVD diamond films to beta irradiation has been compared with the TL response of a set of LiF TLD100 and TLD700 dosimeters. The results have been discussed and compared in view of an assessment of the efficiency of CVD diamond films in future applications as in vivo dosimeters.
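Deconvolution of a structured glow curve fits it as a sum of single peaks; for first-order kinetics each peak follows the Randall-Wilkins expression. The sketch below fits one synthetic first-order peak (the E and s values are illustrative, and a real glow curve would be fitted as a sum of such peaks plus background).

```python
# Sketch: first-order (Randall-Wilkins) TL glow peak,
#   I(T) = n0*s*exp(-E/kT) * exp(-(s/beta) * integral_{T0}^{T} exp(-E/kT') dT'),
# fitted to a synthetic glow curve. E and s values are illustrative.
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.optimize import curve_fit

k = 8.617e-5    # Boltzmann constant, eV/K
beta = 1.0      # heating rate, K/s
T = np.linspace(300.0, 600.0, 600)

def rw_peak(T, n0, E, log10s):          # log10(s) helps the optimizer
    s = 10.0 ** log10s
    boltz = np.exp(-E / (k * T))
    integ = cumulative_trapezoid(boltz, T, initial=0.0)
    return n0 * s * boltz * np.exp(-(s / beta) * integ)

I_meas = rw_peak(T, 1e6, 1.1, 11.0) * (1 + 0.02 * np.random.randn(T.size))

popt, _ = curve_fit(rw_peak, T, I_meas, p0=[8e5, 1.05, 10.8])
print(f"E = {popt[1]:.3f} eV, s = 1e{popt[2]:.2f} 1/s")
```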
NASA Technical Reports Server (NTRS)
Yim, John T.
2017-01-01
A survey of low energy xenon ion impact sputter yields was conducted to provide a more coherent baseline set of sputter yield data and accompanying fits for electric propulsion integration. Data uncertainties are discussed and different available curve fit formulas are assessed for their general suitability. A Bayesian parameter fitting approach is used with a Markov chain Monte Carlo method to provide estimates for the fitting parameters while characterizing the uncertainties for the resulting yield curves.
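A Bayesian fit of a yield curve with Markov chain Monte Carlo can be sketched with the emcee sampler. The threshold-power yield form below is a placeholder, not necessarily the fit formula assessed in the survey, and the data points and uncertainties are synthetic.

```python
# Sketch: Bayesian curve fitting with MCMC (emcee). The yield form
# Y(E) = a * (E - Eth)^p is a placeholder, not the survey's fit formula.
import numpy as np
import emcee

E = np.array([30., 50., 80., 120., 200., 350., 500.])     # ion energy, eV
Y = np.array([0.002, 0.01, 0.04, 0.09, 0.20, 0.38, 0.52]) # synthetic yields
sigma = 0.15 * Y + 1e-3                                   # assumed uncertainties

def log_prob(theta):
    a, Eth, p = theta
    if not (0 < a < 1 and 0 < Eth < 30 and 0 < p < 3):
        return -np.inf                                    # flat priors
    model = a * np.clip(E - Eth, 1e-9, None) ** p
    return -0.5 * np.sum(((Y - model) / sigma) ** 2)

ndim, nwalkers = 3, 32
p0 = np.array([0.01, 20.0, 1.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 3000, progress=False)
chain = sampler.get_chain(discard=1000, flat=True)
print("posterior medians (a, Eth, p):", np.median(chain, axis=0))
```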
Li, Hui
2009-03-01
To construct standardized growth data and curves based on weight, length/height, and head circumference for Chinese children under 7 years of age. Random cluster sampling was used. The fourth national growth survey of children under 7 years in the nine cities (Beijing, Harbin, Xi'an, Shanghai, Nanjing, Wuhan, Fuzhou, Guangzhou and Kunming) of China was performed in 2005, and from this survey, data from 69 760 urban healthy boys and girls were used to set up the database for weight-for-age, height-for-age (length was measured for children under 3 years) and head circumference-for-age. The quality of the anthropometric data was ensured by rigorous methods of data collection and standardized procedures across study sites. The LMS method, based on the Box-Cox normal transformation and cubic-spline smoothing techniques, was chosen for fitting the raw data according to the study design and data features, and standardized values of any percentile and standard deviation were obtained from the fitted L, M and S parameters. Length-for-age and height-for-age standards were constructed by fitting the same model, but the final curves reflected the 0.7 cm average difference between these two measurements. A set of systematic diagnostic tools was used to detect possible biases in estimated percentiles or standard deviation curves, including the χ² test, which was used to evaluate the goodness of fit. The 3rd, 10th, 25th, 50th, 75th, 90th and 97th smoothed percentiles and -3, -2, -1, 0, +1, +2, +3 SD values and curves of weight-for-age, length/height-for-age and head circumference-for-age for boys and girls aged 0-7 years were produced. The Chinese child growth charts were slightly higher than the WHO child growth standards. The newly established growth charts represent the growth level of healthy and well-nourished Chinese children. The sample size was very large and national, the data were of high quality, and the smoothing method is internationally accepted. The new Chinese growth charts are recommended as the Chinese child growth standards for use in China in the 21st century.
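With fitted L, M, S values at a given age, any percentile follows from the standard LMS back-transformation X = M(1 + L·S·z)^(1/L) (or M·exp(S·z) when L = 0). A minimal sketch, with made-up parameter values rather than values from the study:

```python
# Sketch: turning fitted LMS parameters into percentile values via
#   X = M * (1 + L*S*z)**(1/L)  (L != 0),   X = M * exp(S*z)  (L == 0).
# L, M, S values below are made up for illustration.
import numpy as np
from scipy.stats import norm

L, M, S = -0.35, 9.6, 0.11      # hypothetical weight-for-age LMS at one age
percentiles = [3, 10, 25, 50, 75, 90, 97]

for p in percentiles:
    z = norm.ppf(p / 100.0)
    x = M * np.exp(S * z) if L == 0 else M * (1 + L * S * z) ** (1.0 / L)
    print(f"P{p:>2}: {x:.2f} kg")
```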
Temperature-based death time estimation with only partially known environmental conditions.
Mall, Gita; Eckl, Mona; Sinicina, Inga; Peschel, Oliver; Hubig, Michael
2005-07-01
Temperature-based death time determination relies on mathematical model curves of postmortem rectal cooling. All mathematical models require knowledge of the environmental conditions. In medico-legal practice, homicide is sometimes not immediately suspected at the death scene but only afterwards, during external examination of the body; the environmental temperature at the death scene then remains unknown or can only be roughly reconstructed. In such cases the question arises whether it is possible to estimate the time since death from rectal temperature data alone, recorded over a longer time span. The present study theoretically deduces formulae which are independent of the initial and environmental temperatures and thus proves that the information needed for death time estimation is contained in the rectal temperature data. Since the environmental temperature at the death scene may differ from that during the temperature recording, an additional assumption has to be used: the body core is thermally well isolated from the environment, so the rectal temperature decrease after a sudden change of environmental temperature will continue for some time at a rate similar to that before the sudden change. The present study further provides a curve-fitting procedure for such scenarios. The procedure was tested on rectal cooling data from 35 corpses using the most commonly applied model, that of Henssge. In all cases the time of death was exactly known. After admission to the medico-legal institute, the bodies were kept at a constant environmental temperature for 12-36 h and the rectal temperatures were recorded continuously. The curve-fitting procedure led to valid estimates of the time since death in all experiments despite the unknown environmental conditions before admission to the institute. The estimation bias was investigated statistically. The 95% confidence intervals amounted to +/-4 h, which seems reasonable compared to the 95% confidence intervals of the Henssge model with known environmental temperature. The presented method may be of use for determining the time since death even in cases in which the environmental temperature and rectal temperature at the death scene have unintentionally not been recorded.
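A generic version of such a curve-fitting step, fitting a two-exponential cooling model to the recorded rectal temperatures with the unknown environmental temperature as a free parameter, might look as follows. This is not Henssge's specific parameterization, and all values are synthetic.

```python
# Sketch: fit a generic double-exponential cooling model,
#   T(t) = Te + c1*exp(-k1*t) + c2*exp(-k2*t),
# to recorded rectal temperatures, treating the environmental
# temperature Te as a free parameter. NOT Henssge's parameterization.
import numpy as np
from scipy.optimize import curve_fit

def cooling(t, Te, c1, k1, c2, k2):
    return Te + c1 * np.exp(-k1 * t) + c2 * np.exp(-k2 * t)

t = np.linspace(0.0, 24.0, 49)                      # hours of recording
T_rect = cooling(t, 21.0, 14.0, 0.12, 2.0, 0.8) \
         + np.random.normal(0.0, 0.05, t.size)      # synthetic data

p0 = [20.0, 12.0, 0.1, 1.0, 1.0]
popt, _ = curve_fit(cooling, t, T_rect, p0=p0, maxfev=20000)
print("fitted Te = %.1f degC, rate constants k1=%.3f, k2=%.3f 1/h"
      % (popt[0], popt[2], popt[4]))
```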
2017-11-01
[Abstract fragment and appendix residue: light was sent from light-emitting diodes (LEDs) of 5 colors (green, red, white, amber, and blue), and Experiment 1 involved controlled laboratory measurements. Appendix figures show red and green LED calibration curves with quadratic curve fits and R² values; appendix tables list the corresponding red and green LED calibration measurements.]
Methods for the Precise Locating and Forming of Arrays of Curved Features into a Workpiece
Gill, David Dennis; Keeler, Gordon A.; Serkland, Darwin K.; Mukherjee, Sayan D.
2008-10-14
Methods for manufacturing high precision arrays of curved features (e.g. lenses) in the surface of a workpiece are described utilizing orthogonal sets of inter-fitting locating grooves to mate a workpiece to a workpiece holder mounted to the spindle face of a rotating machine tool. The matching inter-fitting groove sets in the workpiece and the chuck allow precisely and non-kinematically indexing the workpiece to locations defined in two orthogonal directions perpendicular to the turning axis of the machine tool. At each location on the workpiece a curved feature can then be on-center machined to create arrays of curved features on the workpiece. The averaging effect of the corresponding sets of inter-fitting grooves provide for precise repeatability in determining, the relative locations of the centers of each of the curved features in an array of curved features.
Wu, Yiping; Zhao, Xiaohua; Chen, Chen; He, Jiayuan; Rong, Jian; Ma, Jianming
2016-10-01
In China, the Chevron alignment sign on highways is a vertical rectangle with a white arrow and border on a blue background, which differs from its counterpart in other countries. Moreover, little research has been devoted to the effectiveness of China's Chevron signs; there is still no practical method to quantitatively describe the impact of Chevron signs on driver performance in roadway curves. In this paper, a driving simulator experiment collected data on the driving performance of 30 young male drivers as they navigated on 29 different horizontal curves under different conditions (presence of Chevron signs, curve radius and curve direction). To address the heterogeneity issue in the data, three models were estimated and tested: a pooled data linear regression model, a fixed effects model, and a random effects model. According to the Hausman Test and Akaike Information Criterion (AIC), the random effects model offers the best fit. The current study explores the relationship between driver performance (i.e., vehicle speed and lane position) and horizontal curves with respect to the horizontal curvature, presence of Chevron signs, and curve direction. This study lays a foundation for developing procedures and guidelines that would allow more uniform and efficient deployment of Chevron signs on China's highways. Copyright © 2016 Elsevier Ltd. All rights reserved.
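A random-effects specification of the kind selected here (a random intercept per driver, capturing the repeated-measures heterogeneity) can be fit with statsmodels. The column names and data below are hypothetical, standing in for the simulator measurements.

```python
# Sketch: random-effects (random intercept per driver) model of speed
# against curve attributes, of the kind favored by the Hausman test and
# AIC above. Column names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_drivers, n_curves = 30, 29
df = pd.DataFrame({
    "driver": np.repeat(np.arange(n_drivers), n_curves),
    "radius": np.tile(rng.uniform(100, 1000, n_curves), n_drivers),
    "chevron": np.tile(rng.integers(0, 2, n_curves), n_drivers),
    "left_turn": np.tile(rng.integers(0, 2, n_curves), n_drivers),
})
driver_eff = rng.normal(0, 3, n_drivers)[df["driver"]]   # driver heterogeneity
df["speed"] = (60 + 0.02 * df["radius"] - 4 * df["chevron"]
               + driver_eff + rng.normal(0, 2, len(df)))

model = smf.mixedlm("speed ~ radius + chevron + left_turn",
                    df, groups=df["driver"]).fit()
print(model.summary())
```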
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of the distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived, which is solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
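The algorithm sketched in the abstract, linearize the fitting function, derive n simultaneous linear equations from the expansion of chi-square, and solve them by matrix algebra, corresponds to the generic Gauss-Newton iteration below. This is an illustration of the technique, not NLINEAR's Fortran source.

```python
# Sketch of the algorithm described above: linearize the fitting
# function, form the simultaneous linear (normal) equations implied by
# the quadratic expansion of chi-square, and solve them by matrix
# algebra. Generic Gauss-Newton iteration, not NLINEAR's source.
import numpy as np

def fit(f, x, y, sigma, p, n_iter=20):
    """Minimize chi^2 = sum(((y - f(x, p)) / sigma)^2) over p."""
    p = np.asarray(p, dtype=float)
    for _ in range(n_iter):
        r = (y - f(x, p)) / sigma                  # weighted residuals
        J = np.empty((x.size, p.size))             # numerical Jacobian
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(1.0, abs(p[j]))
            J[:, j] = (f(x, p + dp) - f(x, p)) / (dp[j] * sigma)
        delta = np.linalg.solve(J.T @ J, J.T @ r)  # normal equations
        p = p + delta
    return p, float(np.sum(r ** 2))

model = lambda x, p: p[0] * np.exp(-p[1] * x)
x = np.linspace(0, 4, 30)
y = model(x, [2.0, 0.7]) + np.random.normal(0, 0.02, x.size)
p_hat, chi2 = fit(model, x, y, 0.02, [1.0, 1.0])
print("parameters:", p_hat.round(3), " chi^2:", round(chi2, 2))
```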
Joosen, Ronny V L; Kodde, Jan; Willems, Leo A J; Ligterink, Wilco; van der Plas, Linus H W; Hilhorst, Henk W M
2010-04-01
Over the past few decades, seed physiology research has contributed to many important scientific discoveries and has provided valuable tools for the production of high-quality seeds. An important instrument for this type of research is the accurate quantification of germination; however, gathering cumulative germination data is a very laborious task that is often prohibitive to the execution of large experiments. In this paper we present the germinator package: a simple, highly cost-efficient and flexible procedure for high-throughput automatic scoring and evaluation of germination that can be implemented without the use of complex robotics. The germinator package contains three modules: (i) design of the experimental setup, with various options to replicate and randomize samples; (ii) automatic scoring of germination based on the color contrast between the protruding radicle and the seed coat on a single image; and (iii) curve fitting of cumulative germination data and the extraction, recap and visualization of the various germination parameters. The curve-fitting module enables analysis of general cumulative germination data and can be used for all plant species. We show that the automatic scoring system works for Arabidopsis thaliana and Brassica spp. seeds, but it is likely to be applicable to other species as well. In this paper we show the accuracy, reproducibility and flexibility of the germinator package. We have successfully applied it to evaluate natural variation for salt tolerance in a large population of recombinant inbred lines and were able to identify several quantitative trait loci for salt tolerance. Germinator is a low-cost package that allows the monitoring of several thousand germination tests, several times a day, by a single person.
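Cumulative germination curves of this kind are commonly summarized by fitting a sigmoidal Hill-type function and reading off parameters such as the maximum germination and the time to 50% germination. The generic form below may differ from germinator's exact parameterization, and the counts are synthetic.

```python
# Sketch: fit a Hill-type function to cumulative germination counts,
#   G(t) = Gmax * t^h / (t50^h + t^h),
# and extract the maximum germination Gmax and the time t50 to half of
# it. This generic form may differ from germinator's parameterization.
import numpy as np
from scipy.optimize import curve_fit

def hill(t, gmax, t50, h):
    return gmax * t**h / (t50**h + t**h)

t = np.arange(0, 120, 6.0)                         # hours
g = hill(t, 92.0, 48.0, 6.0) + np.random.normal(0, 2.0, t.size)

(gmax, t50, h), _ = curve_fit(hill, t, g, p0=[100.0, 40.0, 4.0])
print(f"Gmax = {gmax:.1f}%, t50 = {t50:.1f} h, slope h = {h:.1f}")
```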
Determination of tire cross-sectional geometric characteristics from a digitally scanned image
NASA Astrophysics Data System (ADS)
Danielson, Kent T.
1995-08-01
A semi-automated procedure is described for the accurate determination of geometrical characteristics using a scanned image of the tire cross-section. The procedure can be useful for cases when CAD drawings are not available or when a description of the actual cured tire is desired. Curves representing the perimeter of the tire cross-section are determined by an edge tracing scheme, and the plyline and cord-end positions are determined by locations of color intensities. The procedure provides an accurate description of the perimeter of the tire cross-section and the locations of plylines and cord-ends. The position, normals, and curvatures of the cross-sectional surface are included in this description. The locations of the plylines provide the necessary information for determining the ply thicknesses and relative position to a reference surface. Finally, the locations of the cord-ends provide a means to calculate the cord-ends per inch (epi). Menu driven software has been developed to facilitate the procedure using the commercial code, PV-Wave by Visual Numerics, Inc., to display the images. From a single user interface, separate modules are executed for image enhancement, curve fitting the edge trace of the cross-sectional perimeter, and determining the plyline and cord-end locations. The code can run on SUN or SGI workstations and requires the use of a mouse to specify options or identify items on the scanned image.
Robinson, B F; Mervis, C B
1998-03-01
The early lexical and grammatical development of one male child is examined with growth curves and dynamic-systems modeling procedures. Lexical development followed a pattern of logistic growth (R² = .98). Lexical and plural development shared the following characteristics: plural growth began only after a threshold was reached in vocabulary size, and lexical growth slowed as plural growth increased. As plural use reached full mastery, lexical growth began to increase again. It was hypothesized that a precursor model (P. van Geert, 1991) would fit these data. Subsequent testing indicated that the precursor model, modified to incorporate brief yet intensive plural growth, provided a suitable fit. The value of the modified precursor model for the explication of processes implicated in language development is discussed.
Modal vector estimation for closely spaced frequency modes
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.; Chung, Y. T.; Blair, M.
1982-01-01
Techniques for obtaining improved modal vector estimates for systems with closely spaced frequency modes are discussed. In describing the dynamical behavior of a complex structure modal parameters are often analyzed: undamped natural frequency, mode shape, modal mass, modal stiffness and modal damping. From both an analytical standpoint and an experimental standpoint, identification of modal parameters is more difficult if the system has repeated frequencies or even closely spaced frequencies. The more complex the structure, the more likely it is to have closely spaced frequencies. This makes it difficult to determine valid mode shapes using single shaker test methods. By employing band selectable analysis (zoom) techniques and by employing Kennedy-Pancu circle fitting or some multiple degree of freedom (MDOF) curve fit procedure, the usefulness of the single shaker approach can be extended.
Atmospheric particulate analysis using angular light scattering
NASA Technical Reports Server (NTRS)
Hansen, M. Z.
1980-01-01
Using the light scattering matrix elements measured by a polar nephelometer, a procedure for estimating the characteristics of atmospheric particulates was developed. A theoretical library data set of scattering matrices derived from Mie theory was tabulated for a range of values of the size parameter and refractive index typical of atmospheric particles. Integration over the size parameter yielded the scattering matrix elements for a variety of hypothesized particulate size distributions. A least squares curve fitting technique was used to find a best fit from the library data for the experimental measurements. This was used as a first guess for a nonlinear iterative inversion of the size distributions. A real index of 1.50 and an imaginary index of -0.005 are representative of the smoothed inversion results for the near ground level atmospheric aerosol in Tucson.
NASA Astrophysics Data System (ADS)
Horvath, Sarah; Myers, Sam; Ahlers, Johnathon; Barnes, Jason W.
2017-10-01
Stellar seismic activity produces variations in brightness that introduce oscillations into transit light curves, which can create challenges for traditional fitting models. These oscillations disrupt baseline stellar flux values and potentially mask transits. We develop a model that removes these oscillations from transit light curves by minimizing the significance of each oscillation in frequency space. By removing stellar variability, we prepare each light curve for traditional fitting techniques. We apply our model to the δ Scuti star KOI-976 and demonstrate that our variability-subtraction routine successfully allows for measuring bulk system characteristics using traditional light curve fitting. These results open a new window for characterizing bulk system parameters of planets orbiting seismically active stars.
Neutron Multiplicity: LANL W Covariance Matrix for Curve Fitting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
2016-12-08
In neutron multiplicity counting one may fit a curve by minimizing an objective function, χ²ₙ. The objective function includes the inverse of an n by n matrix of covariances, W. The inverse of the W matrix has a closed-form solution. In addition, W⁻¹ is a tridiagonal matrix. The closed form and tridiagonal nature allow for a simpler expression of the objective function χ²ₙ. Minimization of this simpler expression provides the optimal parameters for the fitted curve.
NASA Astrophysics Data System (ADS)
Parolai, S.; Picozzi, M.; Richwalski, S. M.; Milkereit, C.
2005-01-01
Seismic noise contains information on the local S-wave velocity structure, which can be obtained from the phase-velocity dispersion curve by means of array measurements. The H/V ratio from single stations also contains information on the average S-wave velocity and the total thickness of the sedimentary cover. A joint inversion of the two data sets therefore might allow the final model to be well constrained. We propose a scheme that does not require a starting model because a genetic algorithm is used. Furthermore, we tested two cost functions suitable for our data set, using a priori and data-driven weighting; the latter was more appropriate in our case. In addition, we consider the influence of higher modes on the data sets and use a suitable forward modeling procedure. Using real data we show that the joint inversion indeed allows for a better fit to the observed data than using the dispersion curve alone.
Regionalisation of low flow frequency curves for the Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Mamun, Abdullah A.; Hashim, Alias; Daoud, Jamal I.
2010-02-01
Regional maps and equations for the magnitude and frequency of 1-, 7- and 30-day low flows were derived and are presented in this paper. The river gauging stations of neighbouring catchments that produced similar low flow frequency curves were grouped together. As such, the Peninsular Malaysia was divided into seven low flow regions. Regional equations were developed using the multivariate regression technique. An empirical relationship was developed for mean annual minimum flow as a function of catchment area, mean annual rainfall and mean annual evaporation. The regional equations exhibited good coefficients of determination (R² > 0.90). Three low flow frequency curves showing the low, mean and high limits for each region were proposed based on a graphical best-fit technique. Knowing the catchment area, mean annual rainfall and evaporation in the region, design low flows of different durations can be easily estimated for ungauged catchments. This procedure is expected to overcome the problem of data unavailability in estimating low flows in the Peninsular Malaysia.
NASA Astrophysics Data System (ADS)
Asal, Eren Karsu; Polymeris, George S.; Gultekin, Serdar; Kitis, George
2018-06-01
Thermoluminescence (TL) techniques are very useful in research on persistent luminescence (PL) phosphors. They give information about the existence of energy levels within the forbidden band, their activation energies, kinetic orders, lifetimes, etc. The TL glow curve of the Sr4Al14O25:Eu2+,Dy3+ persistent phosphor consists of two well-separated glow peaks. The TL techniques used to evaluate the activation energy were the initial rise method, prompt isothermal decay (PID) of TL of each peak at elevated temperatures, and glow-curve fitting. The behavior of the PID curves of the two peaks is very different. According to the results of the PID procedure and the subsequent data analysis, it is suggested that the mechanism behind the low-temperature peak is a delocalized transition, whereas the mechanism behind the high-temperature peak is a localized transition involving tunneling recombination between the electron trap and the luminescence center.
Methods for scalar-on-function regression.
Reiss, Philip T; Goldsmith, Jeff; Shang, Han Lin; Ogden, R Todd
2017-08-01
Recent years have seen an explosion of activity in the field of functional data analysis (FDA), in which curves, spectra, images, etc. are considered as basic functional data units. A central problem in FDA is how to fit regression models with scalar responses and functional data points as predictors. We review some of the main approaches to this problem, categorizing the basic model types as linear, nonlinear and nonparametric. We discuss publicly available software packages, and illustrate some of the procedures by application to a functional magnetic resonance imaging dataset.
Biological growth functions describe published site index curves for Lake States timber species.
Allen L. Lundgren; William A. Dolid
1970-01-01
Two biological growth functions, an exponential-monomolecular function and a simple monomolecular function, have been fit to published site index curves for 11 Lake States tree species: red, jack, and white pine, balsam fir, white and black spruce, tamarack, white-cedar, aspen, red oak, and paper birch. Both functions closely fit all published curves except those for...
Howard, Robert W
2014-09-01
The power law of practice holds that a power function best interrelates skill performance and amount of practice. However, the law's validity and generality are moot. Some researchers argue that it is an artifact of averaging individual exponential curves while others question whether the law generalizes to complex skills and to performance measures other than response time. The present study tested the power law's generality to development over many years of a very complex cognitive skill, chess playing, with 387 skilled participants, most of whom were grandmasters. A power or logarithmic function best fit grouped data but individuals showed much variability. An exponential function usually was the worst fit to individual data. Groups differing in chess talent were compared and a power function best fit the group curve for the more talented players while a quadratic function best fit that for the less talented. After extreme amounts of practice, a logarithmic function best fit grouped data but a quadratic function best fit most individual curves. Individual variability is great and the power law or an exponential law are not the best descriptions of individual chess skill development. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Losekamm, M. J.; Milde, M.; Pöschl, T.; Greenwald, D.; Paul, S.
2017-02-01
Traditional radiation detectors can either measure the total radiation dose omnidirectionally (dosimeters) or determine the characteristics of incoming particles within a narrow field of view (spectrometers). Instantaneous measurements of anisotropic fluxes thus require several detectors, resulting in bulky setups. The Multi-purpose Active-target Particle Telescope (MAPT), employing a new detection principle, is designed to measure particle fluxes omnidirectionally and to be simultaneously a dosimeter and a spectrometer. It consists of an active core of scintillating fibers whose light output is measured by silicon photomultipliers, and it fits into a cube with an edge length of 10 cm. It identifies particles using extended Bragg curve spectroscopy, with sensitivity to charged particles with kinetic energies above 25 MeV. MAPT's unique layout results in a geometrical acceptance of approximately 800 cm² sr and an angular resolution of less than 6°, which can be improved by track-fitting procedures. In a beam test of a simplified prototype, the energy resolution was found to be less than 1 MeV for protons with energies between 30 and 70 MeV. Possible applications of MAPT include the monitoring of radiation environments in spacecraft and beam monitoring in medical facilities.
Promoting convergence: The Phi spiral in abduction of mouse corneal behaviors
Rhee, Jerry; Nejad, Talisa Mohammad; Comets, Olivier; Flannery, Sean; Gulsoy, Eine Begum; Iannaccone, Philip; Foster, Craig
2015-01-01
Why do mouse corneal epithelial cells display spiraling patterns? We want to provide an explanation for this curious phenomenon by applying an idealized problem solving process. Specifically, we applied complementary line-fitting methods to measure transgenic epithelial reporter expression arrangements displayed on three mature, live enucleated globes to clarify the problem. Two prominent logarithmic curves were discovered, one of which displayed the ϕ ratio, an indicator of an optimal configuration in phyllotactic systems. We then utilized two different computational approaches to expose our current understanding of the behavior. In one procedure, which involved an isotropic mechanics-based finite element method, we successfully produced logarithmic spiral curves of maximum shear strain based pathlines but computed dimensions displayed pitch angles of 35° (ϕ spiral is ∼17°), which was altered when we fitted the model with published measurements of coarse collagen orientations. We then used model-based reasoning in context of Peircean abduction to select a working hypothesis. Our work serves as a concise example of applying a scientific habit of mind and illustrates nuances of executing a common method to doing integrative science. © 2014 Wiley Periodicals, Inc. Complexity 20: 22–38, 2015 PMID:25755620
A new calibration code for the JET polarimeter.
Gelfusa, M; Murari, A; Gaudio, P; Boboc, A; Brombin, M; Orsitto, F P; Giovannozzi, E
2010-05-01
An equivalent model of the JET polarimeter is presented, which overcomes the drawbacks of previous versions of the fitting procedures used to provide calibrated results. First, the signal-processing electronics has been simulated, to confirm that it is still working within the original specifications. Then the effective optical path of both the vertical and lateral chords has been implemented to produce the calibration curves. The first-principles approach to the model has yielded a unique procedure which can be applied to any manual calibration and remains valid until the following one. The optical model of the chords is then applied to derive the plasma measurements. The results are in good agreement with the estimates of the most advanced full-wave propagation code available and have been benchmarked against other diagnostics. The devised procedure has proved to work properly also for the most recent campaigns and high-current experiments.
NASA Technical Reports Server (NTRS)
Rodrigues, C. V.; Magalhaes, A. M.; Coyne, G. V.
1995-01-01
We study the dust in the Small Magellanic Cloud using our polarization and extinction data (Paper 1) and existing dust models. The data suggest that the monotonic SMC extinction curve is related to values of lambda(sub max), the wavelength of maximum polarization, which are on the average smaller than the mean for the Galaxy. On the other hand, AZV 456, a star with an extinction similar to that for the Galaxy, shows a value of lambda(sub max) similar to the mean for the Galaxy. We discuss simultaneous dust model fits to extinction and polarization. Fits to the wavelength dependent polarization data are possible for stars with small lambda(sub max). In general, they imply dust size distributions which are narrower and have smaller mean sizes compared to typical size distributions for the Galaxy. However, stars with lambda(sub max) close to the Galactic norm, which also have a narrower polarization curve, cannot be fit adequately. This holds true for all of the dust models considered. The best fits to the extinction curves are obtained with a power law size distribution by assuming that the cylindrical and spherical silicate grains have a volume distribution which is continuous from the smaller spheres to the larger cylinders. The size distribution for the cylinders is taken from the fit to the polarization. The 'typical', monotonic SMC extinction curve can be fit well with graphite and silicate grains if a small fraction of the SMC carbon is locked up in the grain. However, amorphous carbon and silicate grains also fit the data well. AZV456, which has an extinction curve similar to that for the Galaxy, has a UV bump which is too blue to be fit by spherical graphite grains.
Modelling Schumann resonances from ELF measurements using non-linear optimization methods
NASA Astrophysics Data System (ADS)
Castro, Francisco; Toledo-Redondo, Sergio; Fornieles, Jesús; Salinas, Alfonso; Portí, Jorge; Navarro, Enrique; Sierra, Pablo
2017-04-01
Schumann resonances (SR) can be found in planetary atmospheres, inside the cavity formed by the conducting surface of the planet and the lower ionosphere. They are a powerful tool to investigate both the electric processes that occur in the atmosphere and the characteristics of the surface and the lower ionosphere. In this study, the measurements were obtained at the ELF (Extremely Low Frequency) Juan Antonio Morente station located in the national park of Sierra Nevada. The first three modes, contained in the frequency band between 6 and 25 Hz, are considered. For each time series recorded by the station, the amplitude spectrum was estimated by using Bartlett averaging. Then, the central frequencies and amplitudes of the SRs were obtained by fitting the spectrum with non-linear functions. In the poster, a study of non-linear unconstrained optimization methods applied to the estimation of the Schumann resonances is presented. Non-linear fitting, also known as optimization, is the procedure followed to obtain the Schumann resonances from the natural electromagnetic noise. The optimization methods that have been analysed are: Levenberg-Marquardt, Conjugate Gradient, Gradient, Newton and Quasi-Newton. The function that the different methods fit to the data is three Lorentzian curves plus a straight line; Gaussian curves have also been considered. The conclusions of this study are as follows: i) natural electromagnetic noise is better fitted using Lorentzian functions; ii) the measurement bandwidth can accelerate the convergence of the optimization method; iii) the Gradient method converges most slowly and has the highest mean squared error (MSE) between the measurement and the fitted function, whereas the Levenberg-Marquardt, Conjugate Gradient and Quasi-Newton methods give similar results (the Newton method presents a higher MSE); iv) there are differences in the MSE between the parameters that define the fit function, and an interval from 1% to 5% has been found.
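The fit function described, three Lorentzians plus a straight line, can be set up directly for any of the optimizers compared; a Levenberg-Marquardt version via scipy's curve_fit, applied to a synthetic spectrum rather than station data, is sketched below.

```python
# Sketch: fit three Lorentzians plus a straight line to an ELF amplitude
# spectrum in the 6-25 Hz band. Here Levenberg-Marquardt (scipy's
# default for unconstrained problems) is used; spectrum is synthetic.
import numpy as np
from scipy.optimize import curve_fit

def model(f, a, b, *lor):        # lor = (A1, f1, g1, A2, f2, g2, A3, f3, g3)
    out = a + b * f
    for A, f0, g in zip(lor[0::3], lor[1::3], lor[2::3]):
        out = out + A * g**2 / ((f - f0)**2 + g**2)
    return out

f = np.linspace(6.0, 25.0, 400)
p_true = [0.1, 0.002, 1.0, 7.8, 1.0, 0.6, 14.1, 1.3, 0.4, 20.3, 1.6]
spec = model(f, *p_true) + np.random.normal(0, 0.02, f.size)

p0 = [0.0, 0.0, 0.8, 8.0, 1.0, 0.5, 14.0, 1.0, 0.3, 20.0, 1.5]
popt, _ = curve_fit(model, f, spec, p0=p0, maxfev=20000)
print("central frequencies:", np.round(popt[3::3], 2), "Hz")
```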
NASA Astrophysics Data System (ADS)
Lu, Jun; Xiao, Jun; Gao, Dong Jun; Zong, Shu Yu; Li, Zhu
2018-03-01
In the production of locomotive wheel-sets under the Association of American Railroads (AAR) standard, the press-fit curve is the most important basis for judging the reliability of wheel-set assembly. In the past, most production enterprises relied mainly on manual inspection to determine assembly quality, and cases of misjudgment have occurred. For this reason, research on the standard was carried out, and automatic judgment of the press-fit curve was analysed and designed, so as to provide guidance for locomotive wheel-set production based on the AAR standard.
Soil water retention and maximum capillary drive from saturation to oven dryness
Morel-Seytoux, Hubert J.; Nimmo, John R.
1999-01-01
This paper provides an alternative method to describe the water retention curve over a range of water contents from saturation to oven dryness. It makes two modifications to the standard Brooks and Corey [1964] (B-C) description, one at each end of the suction range. One expression proposed by Rossi and Nimmo [1994] is used in the high-suction range to a zero residual water content. (This Rossi-Nimmo modification to the Brooks-Corey model provides a more realistic description of the retention curve at low water contents.) Near zero suction the second modification eliminates the region where there is a change in suction with no change in water content. Tests on seven soil data sets, using three distinct analytical expressions for the high-, medium-, and low-suction ranges, show that the experimental water retention curves are well fitted by this composite procedure. The high-suction range of saturation contributes little to the maximum capillary drive, defined to a good approximation for a soil water and air system as H_cM = ∫_0^∞ k_rw dh_c, where k_rw is the relative permeability (or conductivity) to water and h_c is the capillary suction, a positive quantity in unsaturated soils. As a result, the modification suggested to describe the high-suction range does not significantly affect the equivalence between B-C and van Genuchten [1980] parameters presented earlier. However, the shape of the retention curve near "natural saturation" has a significant impact on the value of the capillary drive. The estimate using the Brooks-Corey power law, extended to zero suction, will exceed that obtained with the new procedure by 25 to 30%. It is not possible to tell which procedure is appropriate. Tests on another data set, for which relative conductivity data are available, support the view of the authors that measurements of a retention curve coupled with a speculative curve of relative permeability as from a capillary model are not sufficient to accurately determine the (maximum) capillary drive. The capillary drive is a dynamic scalar, whereas the retention curve is of a static character. Only measurements of infiltration rates with time can determine the capillary drive with precision for a given soil.
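A minimal sketch of the capillary-drive integral above, assuming a Brooks-Corey relative permeability k_rw(h_c) = (h_b/h_c)^(2+3λ) beyond the air-entry suction h_b (parameter values are illustrative, not from the paper's data sets); the quadrature result can be checked against the closed form h_b + h_b/(1+3λ).

```python
import numpy as np
from scipy.integrate import quad

hb, lam = 20.0, 0.4   # air-entry suction [cm] and pore-size index (illustrative)

def krw(h):
    # Brooks-Corey relative permeability as a function of capillary suction
    return 1.0 if h <= hb else (hb / h) ** (2.0 + 3.0 * lam)

H1, _ = quad(krw, 0.0, hb)        # fully conducting range, contributes hb
H2, _ = quad(krw, hb, np.inf)     # power-law tail
print("HcM numeric :", H1 + H2)
print("HcM analytic:", hb + hb / (1.0 + 3.0 * lam))
```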
Methodology for the AutoRegressive Planet Search (ARPS) Project
NASA Astrophysics Data System (ADS)
Feigelson, Eric; Caceres, Gabriel; ARPS Collaboration
2018-01-01
The detection of periodic signals of transiting exoplanets is often impeded by the presence of aperiodic photometric variations. This variability is intrinsic to the host star in space-based observations (typically arising from magnetic activity) and from observational conditions in ground-based observations. The most common statistical procedures to remove stellar variations are nonparametric, such as wavelet decomposition or Gaussian Processes regression. However, many stars display variability with autoregressive properties, wherein later flux values are correlated with previous ones. Provided the time series is evenly spaced, parametric autoregressive models can prove very effective. Here we present the methodology of the Autoregressive Planet Search (ARPS) project which uses Autoregressive Integrated Moving Average (ARIMA) models to treat a wide variety of stochastic short-memory processes, as well as nonstationarity. Additionally, we introduce a planet-search algorithm to detect periodic transits in the time-series residuals after application of ARIMA models. Our matched-filter algorithm, the Transit Comb Filter (TCF), replaces the traditional box-fitting step. We construct a periodogram based on the TCF to concentrate the signal of these periodic spikes. Various features of the original light curves, the ARIMA fits, the TCF periodograms, and folded light curves at peaks of the TCF periodogram can then be collected to provide constraints for planet detection. These features provide input into a multivariate classifier when a training set is available. The ARPS procedure has been applied to NASA's Kepler mission observations of ~200,000 stars (Caceres, Dissertation Talk, this meeting) and will be applied in the future to other datasets.
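For flavor, a minimal sketch of the ARIMA step only (the TCF periodogram and classifier are beyond a few lines); the light curve, dip depth, and model order here are invented, not the ARPS pipeline's choices.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
n = 2000
flux = np.cumsum(0.001 * rng.standard_normal(n))   # nonstationary stellar trend
flux[::200] -= 0.01                                # crude periodic transit dips

fit = ARIMA(flux, order=(2, 1, 1)).fit()           # difference once, AR(2), MA(1)
resid = fit.resid                                  # residuals fed to the transit search
print("AIC:", fit.aic, " residual std:", resid.std())
```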
Statistical aspects of modeling the labor curve.
Zhang, Jun; Troendle, James; Grantz, Katherine L; Reddy, Uma M
2015-06-01
In a recent review by Cohen and Friedman, several statistical questions on modeling labor curves were raised. This article illustrates that asking data to fit a preconceived model or letting a sufficiently flexible model fit observed data is the main difference in principles of statistical modeling between the original Friedman curve and our average labor curve. An evidence-based approach to construct a labor curve and establish normal values should allow the statistical model to fit observed data. In addition, the presence of the deceleration phase in the active phase of an average labor curve was questioned. Forcing a deceleration phase to be part of the labor curve may have artificially raised the speed of progression in the active phase with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor because of variations in individual labor pattern and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and contemporary obstetric population into account. Copyright © 2015 Elsevier Inc. All rights reserved.
On the reduction of occultation light curves. [stellar occultations by planets
NASA Technical Reports Server (NTRS)
Wasserman, L.; Veverka, J.
1973-01-01
The two basic methods of reducing occultation light curves - curve fitting and inversion - are reviewed and compared. It is shown that the curve fitting methods have severe problems of nonuniqueness. In addition, in the case of occultation curves dominated by spikes, it is not clear that such solutions are meaningful. The inversion method does not suffer from these drawbacks. Methods of deriving temperature profiles from refractivity profiles are then examined. It is shown that, although the temperature profiles are sensitive to small errors in the refractivity profile, accurate temperatures can be obtained, particularly at the deeper levels of the atmosphere. The ambiguities that arise when the occultation curve straddles the turbopause are briefly discussed.
Adediran, S A; Ratkowsky, D A; Donaghy, D J; Malau-Aduli, A E O
2012-09-01
Fourteen lactation models were fitted to average and individual cow lactation data from pasture-based dairy systems in the Australian states of Victoria and Tasmania. The models included a new "log-quadratic" model, and a major objective was to evaluate and compare the performance of this model with the other models. Nine empirical and 5 mechanistic models were first fitted to average test-day milk yield of Holstein-Friesian dairy cows using the nonlinear procedure in SAS. Two additional semiparametric models were fitted using a linear model in ASReml. To investigate the influence of days to first test-day and the number of test-days, 5 of the best-fitting models were then fitted to individual cow lactation data. Model goodness of fit was evaluated using criteria such as the residual mean square, the distribution of residuals, the correlation between actual and predicted values, and the Wald-Wolfowitz runs test. Goodness of fit was similar in all but one of the models in terms of fitting average lactation, but the models differed in their ability to predict individual lactations. In particular, the widely used incomplete gamma model displayed this failing most clearly. The new log-quadratic model was robust in fitting average and individual lactations, was less affected by sampled data, and was more parsimonious in having only 3 parameters, each of which lends itself to biological interpretation. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Navascues, M. A.; Sebastian, M. V.
Fractal interpolants of Barnsley are defined for any continuous function defined on a real compact interval. The uniform distance between the function and its approximant is bounded in terms of the vertical scale factors. As a general result, the density of the affine fractal interpolation functions of Barnsley in the space of continuous functions in a compact interval is proved. A method of data fitting by means of fractal interpolation functions is proposed. The procedure is applied to the quantification of cognitive brain processes. In particular, the increase in the complexity of the electroencephalographic signal produced by the execution of a test of visual attention is studied. The experiment was performed on two types of children: a healthy control group and a set of children diagnosed with an attention deficit disorder.
An Apparatus for Sizing Particulate Matter in Solid Rocket Motors.
1984-06-01
accurately measured. A curve for sizing polydispersions was presented which was used by Cramer and Hansen [Refs. 2, 12]. Two phase flow losses are often... (Recoverable list-of-figures entries: 18. 5 Micron Polystyrene, Curve Fit; 19. 5 Micron Polystyrene, Two Angle Method; 20. 10 Micron Polystyrene, Curve Fit; 21. 10 Micron Polystyrene, Two Angle Method; 22. 20 Micron Polystyrene, Curve Fit.)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokolov, M.A.; Nanstad, R.K.
1999-10-01
The current provisions for determination of the upward temperature shift of the lower-bound static fracture toughness curve due to irradiation of reactor pressure vessel steels are based on the assumption that they are the same as the Charpy 41-J shifts as a consequence of irradiation. The objective of this paper is to evaluate this assumption relative to data reported in open publications. Depending on the specific source, different sizes of fracture toughness specimens, procedures of the K{sub Jc} determination, and fitting functions were used. It was anticipated that the scatter might be reduced by using a consistent approach to analyze the published data. A method employing Weibull statistics is applied to analyze original fracture toughness data of unirradiated and irradiated pressure vessel steels. Application of the master curve concept is used to determine shifts of fracture toughness transition curves. A hyperbolic tangent function is used to fit Charpy absorbed energy data. The fracture toughness shifts are compared to Charpy impact shifts evaluated with various criteria. Linear regression analysis showed that for weld metals, on average, the fracture toughness shift is the same as the Charpy 41-J temperature shift, while for base metals, on average, the fracture toughness shift at 41 J is 16% greater than the shift of the Charpy 41-J transition temperature, with both correlations having relatively large 95% confidence intervals.
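A hedged sketch of the hyperbolic tangent fit mentioned above, on synthetic Charpy data (the coefficients and noise level are invented): fit E(T) = A + B tanh((T - T0)/C) and solve for the temperature at which the fitted curve crosses 41 J.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

def charpy(T, A, B, T0, C):
    # hyperbolic tangent transition curve for Charpy absorbed energy
    return A + B * np.tanh((T - T0) / C)

T = np.linspace(-150.0, 150.0, 25)                     # test temperatures, degC
rng = np.random.default_rng(2)
E = charpy(T, 70.0, 65.0, -40.0, 35.0) + 3.0 * rng.standard_normal(T.size)

popt, _ = curve_fit(charpy, T, E, p0=[70, 65, -30, 30])
T41 = brentq(lambda t: charpy(t, *popt) - 41.0, -150.0, 150.0)
print("41-J transition temperature: %.1f degC" % T41)
```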
NASA Astrophysics Data System (ADS)
Espinoza, Néstor; Jordán, Andrés
2016-04-01
Very precise measurements of exoplanet transit light curves from both ground- and space-based observatories now make it possible to fit the limb-darkening coefficients in the transit-fitting procedure rather than fix them to theoretical values. This strategy has been shown to give better results, as fixing the coefficients to theoretical values can give rise to important systematic errors which directly impact the physical properties of the system derived from such light curves, such as the planetary radius. However, studies of the effect of limb-darkening assumptions on the retrieved parameters have mostly focused on the widely used quadratic limb-darkening law, leaving out other proposed laws that are either simpler or better descriptions of model intensity profiles. In this work, we show that laws such as the logarithmic, square-root and three-parameter laws do a better job than the quadratic and linear laws when deriving parameters from transit light curves, both in terms of bias and precision, for a wide range of situations. We therefore recommend studying which law to use on a case-by-case basis. We provide code to guide the decision of when to use each of these laws and to select the optimal one in a mean-square error sense, which we note depends on both stellar and transit parameters. Finally, we demonstrate that the so-called exponential law is non-physical, as it typically produces negative intensities close to the limb and should therefore not be used.
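As a toy illustration of the law-comparison idea (not the authors' published code; the coefficients and the reference intensity profile are invented, not taken from a model atmosphere grid), one can fit two laws to the same profile and compare mean-square errors.

```python
import numpy as np
from scipy.optimize import curve_fit

mu = np.linspace(0.02, 1.0, 100)                           # mu = cos(viewing angle)
I_model = 1.0 - 0.55 * (1 - mu) - 0.25 * mu * np.log(mu)   # stand-in profile

def quadratic(mu, u1, u2):
    return 1 - u1 * (1 - mu) - u2 * (1 - mu) ** 2

def logarithmic(mu, a, b):
    return 1 - a * (1 - mu) - b * mu * np.log(mu)

for name, law in [("quadratic", quadratic), ("logarithmic", logarithmic)]:
    p, _ = curve_fit(law, mu, I_model)
    mse = np.mean((law(mu, *p) - I_model) ** 2)
    print(f"{name:12s} MSE = {mse:.2e}")
```

Because the stand-in profile here is itself logarithmic, the logarithmic law fits it essentially exactly while the quadratic law leaves a residual, mirroring the paper's point that law choice matters.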
Brain size growth in wild and captive chimpanzees (Pan troglodytes).
Cofran, Zachary
2018-05-24
Despite many studies of chimpanzee brain size growth, intraspecific variation is under-explored. Brain size data from chimpanzees of the Taï Forest and the Yerkes Primate Research Center enable a unique glimpse into brain growth variation, as age at death is known for individuals, allowing cross-sectional growth curves to be estimated. Because Taï chimpanzees are from the wild but Yerkes apes are captive, potential environmental effects on neural development can also be explored. Previous research has revealed differences in growth and health between wild and captive primates, but such habitat effects have yet to be investigated for brain growth. Here, I use an iterative curve fitting procedure to estimate brain growth and regression parameters for each population, statistically comparing growth models using bootstrapped confidence intervals. Yerkes and Taï brain sizes overlap at all ages, although the sole Taï newborn is at the low end of captive neonatal variation. Growth rate and duration are statistically indistinguishable between the two populations. Resampling the Yerkes sample to match the Taï sample size and age group composition shows that ontogenetic variation in the two groups is remarkably similar despite the latter's limited size. Best fit growth curves for each sample indicate cessation of brain size growth at around 2 years, earlier than has previously been reported. The overall similarity between wild and captive chimpanzees points to the canalization of brain growth in this species. © 2018 Wiley Periodicals, Inc.
Lo, Po-Han; Tsou, Mei-Yung; Chang, Kuang-Yi
2015-09-01
Patient-controlled epidural analgesia (PCEA) is commonly used for pain relief after total knee arthroplasty (TKA). This study aimed to model the trajectory of analgesic demand over time after TKA and explore its influential factors using latent curve analysis. Data were retrospectively collected from 916 patients receiving unilateral or bilateral TKA and postoperative PCEA. PCEA demands during 12-hour intervals for 48 hours were directly retrieved from infusion pumps. Potentially influential factors of PCEA demand, including age, height, weight, body mass index, sex, and infusion pump settings, were also collected. A latent curve analysis with 2 latent variables, the intercept (baseline) and slope (trend), was applied to model the changes in PCEA demand over time. The effects of influential factors on these 2 latent variables were estimated to examine how these factors interacted with time to alter the trajectory of PCEA demand over time. On average, the difference in analgesic demand between the first and second 12-hour intervals was only 15% of that between the first and third 12-hour intervals. No significant difference in PCEA demand was noted between the third and fourth 12-hour intervals. Aging tended to decrease the baseline PCEA demand but body mass index and infusion rate were positively correlated with the baseline. Only sex significantly affected the trend parameter and male individuals tended to have a smoother decreasing trend of analgesic demands over time. Patients receiving bilateral procedures did not consume more analgesics than their unilateral counterparts. Goodness of fit analysis indicated acceptable model fit to the observed data. Latent curve analysis provided valuable information about how analgesic demand after TKA changed over time and how patient characteristics affected its trajectory.
Space-Based Observation Technology
2000-10-01
Conan, V. Michau, and S. Salem. Regularized multiframe myopic deconvolution from wavefront sensing. In Propagation through the Atmosphere III... specified false alarm rate PFA. Proceeding with curve fitting, one obtains a best-fit curve "10.1y14.2 - 0.2" as the detector for the target
Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel
2012-01-01
For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ("in vivo-like") input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a "high-throughput" model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
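A simplified sketch in the same spirit (using the classic leaky integrate-and-fire closed form rather than the paper's AdEx-derived expressions; all values are invented): fit f(I) = 1/(t_ref + τ ln(I/(I - I_th))) above the rheobase I_th to measured f-I points.

```python
import numpy as np
from scipy.optimize import curve_fit

def fi_curve(I, t_ref, tau, I_th):
    # closed-form LIF firing rate; zero below the rheobase current I_th
    I = np.asarray(I, float)
    f = np.zeros_like(I)
    supra = I > I_th
    f[supra] = 1.0 / (t_ref + tau * np.log(I[supra] / (I[supra] - I_th)))
    return f

I = np.linspace(0.0, 2.0, 21)            # injected current, nA (illustrative)
rng = np.random.default_rng(3)
rates = np.clip(fi_curve(I, 0.002, 0.02, 0.5) + rng.normal(0, 1.0, I.size), 0, None)

popt, _ = curve_fit(fi_curve, I, rates, p0=[0.002, 0.02, 0.4])
print("t_ref [s], tau [s], I_th [nA]:", popt)
```

As in the paper, fitting a closed-form rate expression avoids numerically integrating the model's differential equations, which is what makes the procedure fast.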
Laurson, Kelly R; Saint-Maurice, Pedro F; Welk, Gregory J; Eisenmann, Joey C
2017-08-01
Laurson, KR, Saint-Maurice, PF, Welk, GJ, and Eisenmann, JC. Reference curves for field tests of musculoskeletal fitness in U.S. children and adolescents: The 2012 NHANES National Youth Fitness Survey. J Strength Cond Res 31(8): 2075-2082, 2017-The purpose of the study was to describe current levels of musculoskeletal fitness (MSF) in U.S. youth by creating nationally representative age-specific and sex-specific growth curves for handgrip strength (including relative and allometrically scaled handgrip), modified pull-ups, and the plank test. Participants in the National Youth Fitness Survey (n = 1,453) were tested on MSF, aerobic capacity (via submaximal treadmill test), and body composition (body mass index [BMI], waist circumference, and skinfolds). Using LMS regression, age-specific and sex-specific smoothed percentile curves of MSF were created and existing percentiles were used to assign age-specific and sex-specific z-scores for aerobic capacity and body composition. Correlation matrices were created to assess the relationships between z-scores on MSF, aerobic capacity, and body composition. At younger ages (3-10 years), boys scored higher than girls for handgrip strength and modified pull-ups, but not for the plank. By ages 13-15, differences between the boys and girls curves were more pronounced, with boys scoring higher on all tests. Correlations between tests of MSF and aerobic capacity were positive and low-to-moderate in strength. Correlations between tests of MSF and body composition were negative, excluding absolute handgrip strength, which was inversely related to other MSF tests and aerobic capacity but positively associated with body composition. The growth curves herein can be used as normative reference values or a starting point for creating health-related criterion reference standards for these tests. Comparisons with prior national surveys of physical fitness indicate that some components of MSF have likely decreased in the United States over time.
NASA Technical Reports Server (NTRS)
Alston, D. W.
1981-01-01
The objective of this research was to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to remove the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
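A toy sketch of that procedure (synthetic data; the report's force, moment, and pressure measurements and ANOVA tables are not reproduced here): raise the polynomial order of a least-squares fit until the residuals no longer carry a quadratic trend.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(-1, 1, 40)
y = 2.0 + 1.5 * x + 0.8 * x**2 + 0.05 * rng.standard_normal(x.size)

for order in (1, 2, 3):
    coef = np.polyfit(x, y, order)
    resid = y - np.polyval(coef, x)
    # correlation of residuals with x^2 flags a leftover quadratic effect
    r = np.corrcoef(resid, x**2)[0, 1]
    print(f"order {order}: residual SD = {resid.std():.3f}, corr with x^2 = {r:+.2f}")
```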
Białek, Marianna
2015-05-01
Physiotherapy for stabilization of idiopathic scoliosis angle in growing children remains controversial. Notably, little data on effectiveness of physiotherapy in children with Early Onset Idiopathic Scoliosis (EOIS) has been published. The aim of this study was to check the results of FITS physiotherapy in a group of children with EOIS. The charts of the patients archived in a prospectively collected database were retrospectively reviewed. The inclusion criteria were: diagnosis of EOIS based on spine radiography, age below 10 years, both girls and boys, Cobb angle between 11° and 30°, Risser zero, FITS therapy, no other treatment (bracing), and a follow-up of at least 2 years from the initiation of the treatment. The criteria were as follows: for curve progression, a Cobb angle increase of 6° or more; for curve stabilization, a Cobb angle within 5° of the initial radiograph; for curve correction, a Cobb angle decrease of 6° or more at the final follow-up radiograph. There were 41 children with EOIS, 36 girls and 5 boys, mean age 7.7±1.3 years (range 4 to 9 years), who started FITS therapy. The curve pattern was single thoracic (5 children), single thoracolumbar (22 children) or double thoracic/thoracolumbar (14 children), giving a total of 55 structural curvatures. The minimum follow-up was 2 years after initiation of the FITS treatment, the maximum was 16 years (mean 4.8 years). At follow-up the mean age was 12.5±3.4 years. Out of 41 children, 10 had passed the pubertal growth spurt at the final follow-up and 31 were still immature and continued FITS therapy. Out of 41 children, 27 improved, 13 were stable, and one progressed. Out of 55 structural curves, 32 improved, 22 were stable and one progressed. For the 55 structural curves, the Cobb angle significantly decreased from 18.0°±5.4° at first assessment to 12.5°±6.3° at last evaluation, p<0.0001, paired t-test. The angle of trunk rotation decreased significantly from 4.7°±2.9° to 3.2°±2.5° at last evaluation, p<0.0001, paired t-test. FITS physiotherapy was effective in preventing curve progression in children with EOIS. Final postpubertal follow-up data is needed.
Learning curves for urological procedures: a systematic review.
Abboudi, Hamid; Khan, Mohammed Shamim; Guru, Khurshid A; Froghi, Saied; de Win, Gunter; Van Poppel, Hendrik; Dasgupta, Prokar; Ahmed, Kamran
2014-10-01
To determine the number of cases a urological surgeon must complete to achieve proficiency for various urological procedures. The MEDLINE, EMBASE and PsycINFO databases were systematically searched for studies published up to December 2011. Studies pertaining to learning curves of urological procedures were included. Two reviewers independently identified potentially relevant articles. Procedure name, statistical analysis, procedure setting, number of participants, outcomes and learning curves were analysed. Forty-four studies described the learning curve for different urological procedures. The learning curve for open radical prostatectomy ranged from 250 to 1000 cases and for laparoscopic radical prostatectomy from 200 to 750 cases. The learning curve for robot-assisted laparoscopic prostatectomy (RALP) has been reported to be 40 procedures as a minimum number. Robot-assisted radical cystectomy has a documented learning curve of 16-30 cases, depending on which outcome variable is measured. Irrespective of previous laparoscopic experience, there is a significant reduction in operating time (P = 0.008), estimated blood loss (P = 0.008) and complication rates (P = 0.042) after 100 RALPs. The available literature can act as a guide to the learning curves of trainee urologists. Although the learning curve may vary among individual surgeons, a consensus should exist for the minimum number of cases to achieve proficiency. The complexities associated with defining procedural competence are vast. The majority of learning curve trials have focused on the latest surgical techniques and there is a paucity of data pertaining to basic urological procedures. © 2013 The Authors. BJU International © 2013 BJU International.
NASA Technical Reports Server (NTRS)
Suttles, J. T.; Sullivan, E. M.; Margolis, S. B.
1974-01-01
Curve-fit formulas are presented for the stagnation-point radiative heating rate, cooling factor, and shock standoff distance for inviscid flow over blunt bodies at conditions corresponding to high-speed earth entry. The data which were curve fitted were calculated by using a technique which utilizes a one-strip integral method and a detailed nongray radiation model to generate a radiatively coupled flow-field solution for air in chemical and local thermodynamic equilibrium. The ranges of free-stream parameters considered were altitudes from about 55 to 70 km and velocities from about 11 to 16 km/sec. Spherical bodies with nose radii from 30 to 450 cm and elliptical bodies with major-to-minor axis ratios of 2, 4, and 6 were treated. Power-law formulas are proposed and a least-squares logarithmic fit is used to evaluate the constants. It is shown that the data can be described in this manner with an average deviation of about 3 percent (or less) and a maximum deviation of about 10 percent (or less). The curve-fit formulas provide an effective and economic means for making preliminary design studies for situations involving high-speed earth entry.
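The least-squares logarithmic fit can be sketched as follows (the exponents, variables, and noise below are placeholders, not the paper's fitted constants): taking logs of a power law q = C R^a V^b reduces the fit to ordinary linear regression.

```python
import numpy as np

rng = np.random.default_rng(5)
R = rng.uniform(30, 450, 50)          # nose radius, cm (illustrative range)
V = rng.uniform(11, 16, 50)           # entry velocity, km/s
q = 10.0 * R**0.5 * V**8.0 * np.exp(0.03 * rng.standard_normal(50))

# log q = log C + a log R + b log V  -> linear least squares
A = np.column_stack([np.ones(50), np.log(R), np.log(V)])
coef, *_ = np.linalg.lstsq(A, np.log(q), rcond=None)
print("C, a, b =", np.exp(coef[0]), coef[1], coef[2])
```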
Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian
2016-01-01
Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model e.g. a Gaussian or other bell-shaped curves to the measured responses to a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to determine reliably the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well-fitted, that the best model generally varies between neurons and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without the need of fitting any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features such as preferred stimulus direction or attentional gain modulations which are in agreement with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus-specific. Based on these proofs-of-concept, we conclude that our data-driven methods can reliably extract relevant tuning information from neuronal recordings, including cells whose seemingly haphazard response curves defy conventional fitting approaches. PMID:26785378
Numerical computation of viscous flow around bodies and wings moving at supersonic speeds
NASA Technical Reports Server (NTRS)
Tannehill, J. C.
1984-01-01
Research in aerodynamics is discussed. The development of equilibrium air curve fits; computation of hypersonic rarefied leading edge flows; computation of 2-D and 3-D blunt body laminar flows with an impinging shock; development of a two-dimensional or axisymmetric real gas blunt body code; a study of an over-relaxation procedure for the MacCormack finite-difference scheme; computation of 2-D blunt body turbulent flows with an impinging shock; computation of supersonic viscous flow over delta wings at high angles of attack; and computation of the Space Shuttle Orbiter flowfield are discussed.
Dust in the Small Magellanic Cloud
NASA Technical Reports Server (NTRS)
Rodrigues, C. V.; Coyne, G. V.; Magalhaes, A. M.
1995-01-01
We discuss simultaneous dust model fits to our extinction and polarization data for the Small Magellanic Cloud (SMC) using existing dust models. Dust model fits to the wavelength dependent polarization are possible for stars with small lambda(sub max). They generally imply size distributions which are narrower and have smaller average sizes compared to those in the Galaxy. The best fits for the extinction curves are obtained with a power law size distribution. The typical, monotonic SMC extinction curve can be well fit with graphite and silicate grains if a small fraction of the SMC carbon is locked up in the grains. Amorphous carbon and silicate grains also fit the data well.
Zhai, Xuetong; Chakraborty, Dev P
2017-06-01
The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets (possibly degenerate) with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition, contributing two peaks, and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates and the covariance matrix of the parameters, and other statistics. A limited simulation validation of the method was performed. CORCBM and CORROC2 were applied to two datasets containing nine readers each contributing paired interpretations. CORCBM successfully fitted the data for all readers, whereas CORROC2 failed to fit a degenerate dataset. All fits were visually reasonable. All CORCBM fits were proper, whereas all CORROC2 fits were improper. CORCBM and CORROC2 were in agreement (a) in declaring only one of the nine readers as having significantly different performances in the two modalities; (b) in estimating higher correlations for diseased cases than for nondiseased ones; and (c) in finding that the intermodality correlation estimates for nondiseased cases were consistent between the two methods. All CORCBM fits yielded higher area under curve (AUC) than the CORROC2 fits, consistent with the fact that a proper ROC model like CORCBM is based on a likelihood-ratio-equivalent decision variable, and consequently yields higher performance than the binormal model-based CORROC2. The method gave satisfactory fits to four simulated datasets. CORCBM is a robust method for fitting paired ROC datasets, always yielding proper ROC curves, and able to fit degenerate datasets. © 2017 American Association of Physicists in Medicine.
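For intuition, a sketch of the single-condition CBM ROC implied by the description above, with illustrative parameter values (this is not CORCBM itself): nondiseased scores are N(0,1), diseased scores are the mixture α N(μ,1) + (1-α) N(0,1), and sweeping a threshold ζ traces a proper ROC.

```python
import numpy as np
from scipy.stats import norm

alpha, mu = 0.7, 2.0                  # mixing fraction and visible-disease shift
zeta = np.linspace(-5, 8, 400)        # decision thresholds

fpf = norm.sf(zeta)                                     # P(score > zeta | nondiseased)
tpf = (1 - alpha) * norm.sf(zeta) + alpha * norm.sf(zeta - mu)

auc = np.trapz(tpf[::-1], fpf[::-1])                    # integrate with FPF increasing
print("AUC (numeric)   : %.3f" % auc)
# closed form for this mixture: (1-alpha)/2 + alpha*Phi(mu/sqrt(2))
print("AUC (closed form): %.3f" % ((1 - alpha) / 2 + alpha * norm.cdf(mu / np.sqrt(2))))
```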
Milky Way Kinematics. II. A Uniform Inner Galaxy H I Terminal Velocity Curve
NASA Astrophysics Data System (ADS)
McClure-Griffiths, N. M.; Dickey, John M.
2016-11-01
Using atomic hydrogen (H I) data from the VLA Galactic Plane Survey, we measure the H I terminal velocity as a function of longitude for the first quadrant of the Milky Way. We use these data, together with our previous work on the fourth Galactic quadrant, to produce a densely sampled, uniformly measured, rotation curve of the northern and southern Milky Way between 3 kpc < R < 8 kpc. We determine a new joint rotation curve fit for the first and fourth quadrants, which is consistent with the fit we published in McClure-Griffiths & Dickey and can be used for estimating kinematic distances interior to the solar circle. Structure in the rotation curves is now exquisitely well defined, showing significant velocity structure on lengths of ~200 pc, which is much greater than the spatial resolution of the rotation curve. Furthermore, the shape of the rotation curves for the first and fourth quadrants, even after subtraction of a circular rotation fit, shows a surprising degree of correlation with a roughly sinusoidal pattern between 4.2 < R < 7 kpc.
ERIC Educational Resources Information Center
Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.
1994-01-01
Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)
Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka
2016-01-01
Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.
Fitting milk production curves through nonlinear mixed models.
Piccardi, Monica; Macchiavelli, Raúl; Funes, Ariel Capitaine; Bó, Gabriel A; Balzarini, Mónica
2017-05-01
The aim of this work was to fit and compare three non-linear models (Wood, MilkBot and diphasic) for lactation curves from two approaches: with and without a cow random effect. Knowing the behaviour of lactation curves is critical for decision-making in a dairy farm. Knowledge of the model of milk production progress along each lactation is necessary not only at the mean population level (dairy farm), but also at the individual level (cow-lactation). The fits were made in a group of high production and reproduction dairy farms, in first and third lactations in cool seasons. A total of 2167 complete lactations were involved, of which 984 were first lactations and the remaining ones third lactations (19,382 milk yield tests). PROC NLMIXED in SAS was used to make the fits and estimate the model parameters. The diphasic model proved to be computationally complex and barely practical. Regarding the classical Wood and MilkBot models, although the information criteria suggest the selection of MilkBot, the differences in the estimation of production indicators did not show a significant improvement. The Wood model was found to be a good option for fitting the expected value of lactation curves. Furthermore, all three models fitted better when the subject (cow) random effect was considered, which is related to the magnitude of production. The random effect improved the predictive potential of the models, but it did not have a significant effect on the production indicators derived from the lactation curves, such as milk yield and days in milk to peak.
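A hedged sketch of fitting Wood's lactation curve y(t) = a t^b e^(-ct) to synthetic test-day yields (the paper fits nonlinear mixed models in SAS PROC NLMIXED, which this single-curve example does not attempt); the peak of the Wood curve falls at t = b/c.

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    # Wood (incomplete gamma) lactation curve
    return a * t**b * np.exp(-c * t)

t = np.arange(5, 305, 30.0)                    # days in milk at each test day
rng = np.random.default_rng(7)
y = wood(t, 18.0, 0.25, 0.004) + rng.normal(0, 0.8, t.size)

(a, b, c), _ = curve_fit(wood, t, y, p0=[15, 0.2, 0.003])
peak_day = b / c
print(f"peak at day {peak_day:.0f}, yield {wood(peak_day, a, b, c):.1f} kg")
```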
Stream-temperature characteristics in Georgia
Dyar, T.R.; Alhadeff, S. Jack
1997-01-01
Stream-temperature measurements for 198 periodic and 22 daily record stations were analyzed using a harmonic curve-fitting procedure. Statistics of data from 78 selected stations were used to compute a statewide stream-temperature harmonic equation, derived using latitude, drainage area, and altitude, for natural streams having drainage areas greater than about 40 square miles. Based on the 1955-84 reference period, the equation may be used to compute long-term natural harmonic stream-temperature coefficients to within about 0.4 °C on average. Basin-by-basin summaries of observed long-term stream-temperature characteristics are included for selected stations and river reaches, particularly along Georgia's mainstem streams. Changes in the stream-temperature regimen caused by the effects of development, principally impoundments and thermal power plants, are shown by comparing harmonic curves and coefficients from the estimated natural values to the observed modified-condition values.
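A compact sketch of harmonic curve fitting of this kind (the coefficients are invented, not the Georgia values): regress temperature on sine and cosine of the annual cycle, then recover the harmonic mean, amplitude, and phase.

```python
import numpy as np

rng = np.random.default_rng(8)
d = np.arange(0, 365, 7.0)            # day of year, weekly samples
T = 17.0 + 8.0 * np.sin(2 * np.pi * d / 365 + 2.5) + rng.normal(0, 0.5, d.size)

# T(d) = M + s*sin(wd) + c*cos(wd) is linear in (M, s, c)
w = 2 * np.pi / 365
X = np.column_stack([np.ones_like(d), np.sin(w * d), np.cos(w * d)])
M, s, c = np.linalg.lstsq(X, T, rcond=None)[0]

A, phi = np.hypot(s, c), np.arctan2(c, s)   # amplitude and phase of the harmonic
print(f"mean {M:.1f} C, amplitude {A:.1f} C, phase {phi:.2f} rad")
```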
Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure
NASA Technical Reports Server (NTRS)
Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark
2009-01-01
High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
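A simplified single-mode sketch of the decay-method idea under stated assumptions (synthetic impulse response; a real measurement would be band-filtered first, and the paper's automated procedure handles ensembles of points): backward-integrate the squared response, fit a line to the early dB decay, and convert the reverberation time to a loss factor via eta = 2.2/(f*T60).

```python
import numpy as np

fs, f0, eta_true = 8192, 500.0, 0.01          # sample rate, mode frequency, loss factor
t = np.arange(0, 2.0, 1 / fs)
h = np.exp(-np.pi * f0 * eta_true * t) * np.sin(2 * np.pi * f0 * t)

edc = np.cumsum(h[::-1] ** 2)[::-1]           # Schroeder backward integration
edc_db = 10 * np.log10(edc / edc[0])

use = edc_db > -25                            # fit only the early, clean decay
slope = np.polyfit(t[use], edc_db[use], 1)[0] # dB per second
T60 = -60.0 / slope

print("estimated loss factor:", 2.2 / (f0 * T60), " true:", eta_true)
```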
Photographic photometry with Iris diaphragm photometers
NASA Technical Reports Server (NTRS)
Schaefer, B. E.
1981-01-01
A general method is presented for solving problems encountered in the analysis of Iris diaphragm photometer (IDP) data. The method is used to derive the general shape of the calibration curve, allowing both a more accurate fit to the IDP data for comparison stars and extrapolation to magnitude ranges for which no comparison stars are measured. The profile of starlight incident and the characteristic curve of the plate are both assumed and then used to derive the profile of the star image. An IDP reading is then determined for each star image. A procedure for correcting the effects of a nonconstant background fog level on the plate is also demonstrated. Additional applications of the method are made in the appendix to determine the relation between the radius of a photographic star image and the star's magnitude, and to predict the IDP reading of the 'point of optimum density'.
Rapid learning curve assessment in an ex vivo training system for microincisional glaucoma surgery.
Dang, Yalong; Waxman, Susannah; Wang, Chao; Parikh, Hardik A; Bussel, Igor I; Loewen, Ralitsa T; Xia, Xiaobo; Lathrop, Kira L; Bilonick, Richard A; Loewen, Nils A
2017-05-09
Increasing prevalence and cost of glaucoma have increased the demand for surgeons well trained in newer, microincisional surgery. These procedures occur in a highly confined space, making them difficult to learn by observation or assistance alone as is currently done. We hypothesized that our ex vivo outflow model is sensitive enough to allow computing individual learning curves to quantify progress and refine techniques. Seven trainees performed nine trabectome-mediated ab interno trabeculectomies in pig eyes (n = 63). An expert surgeon rated the procedure using an Operating Room Score (ORS). The extent of outflow beds accessed was measured with canalograms. Data was fitted using mixed effect models. ORS reached a half-maximum on an asymptote after only 2.5 eyes. Surgical time decreased by 1.4 minutes per eye in a linear fashion. The ablation arc followed an asymptotic function with a half-maximum inflection point after 5.3 eyes. Canalograms revealed that this progress did not correlate well with improvement in outflow, suggesting instead that about 30 eyes are needed for true mastery. This inexpensive pig eye model provides a safe and effective microsurgical training model and allows objective quantification of outcomes for the first time.
Normalized inverse characterization of sound absorbing rigid porous media.
Zieliński, Tomasz G
2015-06-01
This paper presents a methodology for the inverse characterization of sound absorbing rigid porous media, based on standard measurements of the surface acoustic impedance of a porous sample. The model parameters need to be normalized to have a robust identification procedure which fits the model-predicted impedance curves with the measured ones. Such a normalization provides a substitute set of dimensionless (normalized) parameters unambiguously related to the original model parameters. Moreover, two scaling frequencies are introduced, however, they are not additional parameters and for different, yet reasonable, assumptions of their values, the identification procedure should eventually lead to the same solution. The proposed identification technique uses measured and computed impedance curves for a porous sample not only in the standard configuration, that is, set to the rigid termination piston in an impedance tube, but also with air gaps of known thicknesses between the sample and the piston. Therefore, all necessary analytical formulas for sound propagation in double-layered media are provided. The methodology is illustrated by one numerical test and by two examples based on the experimental measurements of the acoustic impedance and absorption of porous ceramic samples of different thicknesses and a sample of polyurethane foam.
Intensity Conserving Spectral Fitting
NASA Technical Reports Server (NTRS)
Klimchuk, J. A.; Patsourakos, S.; Tripathi, D.
2015-01-01
The detailed shapes of spectral line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. Detection of asymmetries in solar coronal emission lines is one example. Removal of line blends is another. We have developed an iterative procedure that corrects for this effect. It can be used with any fitting function, but we employ a cubic spline in a new analysis routine called Intensity Conserving Spline Interpolation (ICSI). As the name implies, it conserves the observed intensity within each wavelength bin, which ordinary fits do not. Given the rapid convergence, speed of computation, and ease of use, we suggest that ICSI be made a standard component of the processing pipeline for spectroscopic data.
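A conceptual sketch of the intensity-conserving idea (not the authors' ICSI routine; the bin width, line profile, and iteration count are illustrative): repeatedly add the difference between observed and model bin averages to the spline knot values until the fit reproduces the binned intensities.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def bin_average(spline, centers, width, n=11):
    # average the spline across each wavelength bin by simple sampling
    offsets = np.linspace(-width / 2, width / 2, n)
    return np.mean([spline(centers + o) for o in offsets], axis=0)

centers = np.linspace(0.0, 10.0, 21)              # bin-center wavelengths
width = centers[1] - centers[0]
observed = np.exp(-(centers - 5.2) ** 2 / 0.8)    # observed bin-averaged profile

knots = observed.copy()
for _ in range(20):
    spline = CubicSpline(centers, knots)
    knots += observed - bin_average(spline, centers, width)  # enforce conservation

err = bin_average(CubicSpline(centers, knots), centers, width) - observed
print("max bin-average mismatch:", np.abs(err).max())
```

An ordinary spline through the bin centers would not satisfy this conservation property when the profile is curved, which is exactly the error the iteration removes.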
NASA Astrophysics Data System (ADS)
Daniel, D. Joseph; Ramasamy, P.; Ramaseshan, R.; Kim, H. J.; Kim, Sunghwan; Bhagavannarayana, G.; Cheon, Jong-Kyu
2017-10-01
Polycrystalline compounds of LiBaF3 were synthesized using a conventional solid state reaction route and the phase purity was confirmed using the powder X-ray diffraction technique. A single crystal was grown from the melt using the vertical Bridgman technique. Rocking curve measurements were carried out to study the structural perfection of the grown crystal. The single peak of the diffraction curve clearly reveals that the grown crystal was free from structural grain boundaries. The low temperature thermoluminescence of the X-ray irradiated sample was analyzed and four distinguishable peaks were found, with maximum temperatures at 18, 115, 133 and 216 K. The activation energy (E) and frequency factor (s) for the individual peaks were studied using the peak shape method and a computerized curve fitting method combined with the Tmax-Tstop procedure. Nanoindentation was employed to study the mechanical behaviour of the crystal. The indentation modulus and Vickers hardness of the grown crystal have values of 135.15 GPa and 680.81, respectively, under a maximum indentation load of 10 mN.
Díaz Alonso, Fernando; González Ferradás, Enrique; Sánchez Pérez, Juan Francisco; Miñana Aznar, Agustín; Ruiz Gimeno, José; Martínez Alonso, Jesús
2006-09-21
A number of models have been proposed to calculate overpressure and impulse from accidental industrial explosions. When the blast is produced by ignition of a vapour cloud, the TNO Multi-Energy model is widely used. From the curves given by this model, data are fitted to obtain equations showing the relationship between overpressure, impulse and distance. These equations, referred to herein as characteristic curves, can be fitted by means of power equations, which depend on explosion energy and charge strength. Characteristic curves allow the determination of overpressure and impulse at each distance.
NASA Astrophysics Data System (ADS)
Gupta, A.; Singh, P. J.; Gaikwad, D. Y.; Udupa, D. V.; Topkar, A.; Sahoo, N. K.
2018-02-01
An experimental setup is developed for the trace level detection of heavy water (HDO) using the off axis-integrated cavity output spectroscopy technique. The absorption spectrum of water samples is recorded in the spectral range of 7190.7 cm-1-7191.5 cm-1 with the diode laser as the light source. From the recorded water vapor absorption spectrum, the heavy water concentration is determined from the HDO and water line. The effect of cavity gain nonlinearity with per pass absorption is studied. The signal processing and data fitting procedure is devised to obtain linear calibration curves by including nonlinear cavity gain effects into the calculation. Initial calibration of mirror reflectivity is performed by measurements on the natural water sample. The signal processing and data fitting method has been validated by the measurement of the HDO concentration in water samples over a wide range from 20 ppm to 2280 ppm showing a linear calibration curve. The average measurement time is about 30 s. The experimental technique presented in this paper could be applied for the development of a portable instrument for the fast measurement of water isotopic composition in heavy water plants and for the detection of heavy water leak in pressurized heavy water reactors.
NASA Astrophysics Data System (ADS)
Podder, M. S.; Majumder, C. B.
2016-11-01
The optimization of the biosorption/bioaccumulation process for both As(III) and As(V) has been investigated using the biosorbent, a biofilm of Corynebacterium glutamicum MTCC 2745 supported on a granular activated carbon/MnFe2O4 composite (MGAC). The presence of functional groups on the cell wall surface of the biomass that may interact with the metal ions was demonstrated by FT-IR. To determine the most appropriate correlation for the equilibrium curves, isotherm studies were performed for As(III) and As(V) using 30 isotherm models, employing non-linear regression for the curve fitting analysis. The pattern of biosorption/bioaccumulation fitted well with the Vieth-Sladek isotherm model for As(III) and the Brouers-Sotolongo and Fritz-Schlunder-V isotherm models for As(V). The maximum biosorption/bioaccumulation capacities estimated using the Langmuir model were 2584.668 mg/g for As(III) and 2651.675 mg/g for As(V) at 30 °C temperature and 220 min contact time. The results showed that As(III) and As(V) removal was strongly pH-dependent, with an optimum pH value of 7.0. D-R isotherm studies indicated that ion exchange might play a prominent role.
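A hedged sketch of nonlinear isotherm fitting of the kind used above, for the Langmuir form q = qmax*b*C/(1 + b*C) only (concentrations and capacities below are illustrative, not the As(III)/As(V) data; the study compared 30 such models).

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, qmax, b):
    # Langmuir isotherm: uptake q [mg/g] vs. equilibrium concentration C [mg/L]
    return qmax * b * C / (1 + b * C)

C = np.array([5, 10, 25, 50, 100, 200, 400.0])
rng = np.random.default_rng(9)
q = langmuir(C, 2500.0, 0.01) + rng.normal(0, 25, C.size)

(qmax, b), _ = curve_fit(langmuir, C, q, p0=[2000, 0.005])
print(f"qmax = {qmax:.0f} mg/g, b = {b:.4f} L/mg")
```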
Measuring Systematic Error with Curve Fits
ERIC Educational Resources Information Center
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
A curve fitting method for solving the flutter equation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Cooper, J. L.
1972-01-01
A curve fitting approach was developed to solve the flutter equation for the critical flutter velocity. The psi versus nu curves are approximated by cubic and quadratic equations. The curve fitting technique utilizes the first and second derivatives of psi with respect to nu. The method was tested on two structures, one having six times the total mass of the other. The algorithm never showed any tendency to diverge from the solution. The average time for the computation of a flutter velocity was 3.91 seconds on an IBM Model 50 computer for an accuracy of five percent. For values of nu close to the critical root of the flutter equation, the algorithm converged on the first attempt. The maximum number of iterations for convergence to the critical flutter velocity was five, with an assumed value of nu relatively distant from the actual crossover.
Probability Density Functions of Observed Rainfall in Montana
NASA Technical Reports Server (NTRS)
Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.
1995-01-01
The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Advances in technology make image analysis on large samples of radar echoes possible, and the data provided by such an analysis readily allow development of distributions of radar reflectivity factors (and, by extension, rain rate). Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, a single PDF would exist for all cases, or many PDFs would share the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases would validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89 percent of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a good fit.
NASA Astrophysics Data System (ADS)
Madi, Raneem; Huibert de Rooij, Gerrit; Mielenz, Henrike; Mai, Juliane
2018-02-01
Few parametric expressions for the soil water retention curve are suitable for dry conditions. Furthermore, expressions for the soil hydraulic conductivity curves associated with parametric retention functions can behave unrealistically near saturation. We developed a general criterion for water retention parameterizations that ensures physically plausible conductivity curves. Only 3 of the 18 tested parameterizations met this criterion without restrictions on the parameters of a popular conductivity curve parameterization; a fourth required one parameter to be fixed. We estimated parameters by shuffled complex evolution (SCE) with the objective function tailored to the various observation methods used to obtain retention curve data. We fitted the four parameterizations with physically plausible conductivities as well as the most widely used parameterization. The performance of the resulting 12 combinations of retention and conductivity curves was assessed in a numerical study with 751 days of semiarid atmospheric forcing applied to unvegetated, uniform, 1 m freely draining columns for four textures. Choosing different parameterizations had a minor effect on evaporation, but cumulative bottom fluxes varied by up to an order of magnitude between them. This highlights the need for a careful selection of the soil hydraulic parameterization, which ideally relies not only on goodness of fit to static soil water retention data but also on hydraulic conductivity measurements. Parameter fits for 21 soils showed that extrapolations into the dry range of the retention curve often became physically more realistic when the parameterization had a logarithmic dry branch, particularly in fine-textured soils where high residual water contents would otherwise be fitted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waszczak, Adam; Chang, Chan-Kao; Cheng, Yu-Chi
We fit 54,296 sparsely sampled asteroid light curves in the Palomar Transient Factory survey to a combined rotation plus phase-function model. Each light curve consists of 20 or more observations acquired in a single opposition. Using 805 asteroids in our sample that have reference periods in the literature, we find that the reliability of our fitted periods is a complicated function of the period, amplitude, apparent magnitude, and other light-curve attributes. Using the 805-asteroid ground-truth sample, we train an automated classifier to estimate (along with manual inspection) the validity of the remaining ∼53,000 fitted periods. By this method we find that 9033 of our light curves (of ∼8300 unique asteroids) have “reliable” periods. Subsequent consideration of asteroids with multiple light-curve fits indicates a 4% contamination in these “reliable” periods. For 3902 light curves with sufficient phase-angle coverage and either a reliable fit period or low amplitude, we examine the distribution of several phase-function parameters, none of which are bimodal though all correlate with the bond albedo and with visible-band colors. Comparing the theoretical maximal spin rate of a fluid body with our amplitude versus spin-rate distribution suggests that, if held together only by self-gravity, most asteroids are in general less dense than ∼2 g cm⁻³, while C types have a lower limit of between 1 and 2 g cm⁻³. These results are in agreement with previous density estimates. For 5-20 km diameters, S types rotate faster and have lower amplitudes than C types. If both populations share the same angular momentum, this may indicate the two types’ differing ability to deform under rotational stress. Lastly, we compare our absolute magnitudes (and apparent-magnitude residuals) to those of the Minor Planet Center’s nominal (G = 0.15, rotation-neglecting) model; our phase-function plus Fourier-series fitting reduces asteroid photometric rms scatter by a factor of ∼3.
Modeling two strains of disease via aggregate-level infectivity curves.
Romanescu, Razvan; Deardon, Rob
2016-04-01
Well-formulated models of disease spread, and efficient methods to fit them to observed data, are powerful tools for aiding the surveillance and control of infectious diseases. Our project considers the problem of the simultaneous spread of two related strains of disease in a context where spatial location is the key driver of disease spread. We start our modeling work with the individual-level models (ILMs) of disease transmission and extend these models to accommodate the competing spread of the pathogens in a two-tier hierarchical population (whose levels we refer to as 'farm' and 'animal'). The postulated interference mechanism between the two strains is a period of cross-immunity following infection. We also present a framework for speeding up the computationally intensive process of fitting the ILM to data, typically done using Markov chain Monte Carlo (MCMC) in a Bayesian framework, by turning the inference into a two-stage process. First, we approximate the number of animals infected on a farm over time by infectivity curves. These curves are fit to data sampled from farms using maximum likelihood estimation; then, conditional on the fitted curves, Bayesian MCMC inference proceeds for the remaining parameters. Finally, we use posterior predictive distributions of salient epidemic summary statistics to assess the fitted model.
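As a minimal sketch of the first stage of this two-stage scheme, the code below fits a logistic infectivity curve to hypothetical farm-level infection counts by maximum likelihood; the counts, the logistic form, and the assumed Poisson noise model are illustrative choices, not the paper's specification.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical counts of infected animals on one farm at 10 sampling days.
t = np.array([2., 4., 6., 8., 10., 12., 14., 16., 18., 20.])
y = np.array([1, 3, 8, 18, 30, 41, 47, 50, 51, 52])

def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))

def neg_log_lik(params):
    K, r, t0 = params
    mu = np.clip(logistic(t, K, r, t0), 1e-9, None)
    # Poisson log-likelihood (up to a constant), negated for minimization.
    return np.sum(mu - y * np.log(mu))

fit = minimize(neg_log_lik, x0=[60.0, 0.5, 9.0], method="Nelder-Mead")
print("K, r, t0 =", fit.x)
```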
NASA Astrophysics Data System (ADS)
Ji, Zhong-Ye; Zhang, Xiao-Fang
2018-01-01
The mathematical relation between the beam quality β factor of a high-energy laser and the wavefront aberration of the laser beam is important in the beam-quality control theory of high-energy laser weapon systems. Numerical simulation is used to obtain this relation. First, Zernike representations of typical distorted atmospheric wavefront aberrations caused by Kolmogorov turbulence are generated. The corresponding beam quality β factors of the different distorted wavefronts are then calculated numerically through the fast Fourier transform, so that the statistical relationship between the beam quality β factors and the wavefront aberrations can be established from the calculated results. Finally, curve fitting is used to establish a mathematical relationship between these two parameters; the result shows a quadratic relation between the beam quality β factor and the wavefront aberration. Three fitted curves, in which the wavefront aberrations consist of Zernike polynomials of orders 20, 36 and 60, respectively, are established to express the relationship between the beam quality β factor and atmospheric wavefront aberrations of different spatial frequencies.
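A minimal sketch of the final fitting step, assuming invented (RMS wavefront aberration, β) pairs in place of the paper's simulated values:

```python
import numpy as np

# Invented pairs of RMS wavefront aberration (waves) and computed beta factor.
rms_aberration = np.array([0.05, 0.10, 0.20, 0.30, 0.45, 0.60])
beta = np.array([1.02, 1.08, 1.30, 1.65, 2.45, 3.60])

# Quadratic least-squares fit, matching the quadratic relation reported above.
c2, c1, c0 = np.polyfit(rms_aberration, beta, 2)
print(f"beta ~ {c2:.2f}*w**2 + {c1:.2f}*w + {c0:.2f}")
```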
Tensor-guided fitting of subduction slab depths
Bazargani, Farhad; Hayes, Gavin P.
2013-01-01
Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data‐fitting approach to address the problem. Earthquakes and active‐source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth’s curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angelini, G.; Lanza, E.; Rozza Dionigi, A.
1983-05-01
The measurement of cerebral blood flow (CBF) by extracranial detection of the radioactivity of 133Xe injected into an internal carotid artery has proved to be of considerable value for the investigation of cerebral circulation in conscious rabbits. Methods are described for calculating CBF from the 133Xe clearance curves, including exponential analysis (two-component model), the initial slope method, and the stochastic method. The different methods of curve analysis were compared in order to evaluate their fit to the theoretical model. The initial slope and stochastic methods, compared with the biexponential model, underestimate the CBF by 35% and 46%, respectively. Furthermore, the validity of recording the clearance curve for 10 min was tested by comparing these CBF values with those obtained from the whole curve. CBF values calculated with the shortened procedure are overestimated by 17%. A correlation exists between the ''10 min'' CBF values and the CBF calculated from the whole curve; in spite of this, the values are not accurate for limited animal populations or for single animals. The extent of the two main compartments into which the CBF is divided was also measured. There is no correlation between CBF values and the extent of the relative compartment, suggesting that these two parameters correspond to different biological entities.
Establishing the Learning Curve of Robotic Sacral Colpopexy in a Start-up Robotics Program.
Sharma, Shefali; Calixte, Rose; Finamore, Peter S
2016-01-01
To determine the learning curve of the following segments of a robotic sacral colpopexy: preoperative setup, operative time, postoperative transition, and room turnover. A retrospective cohort study to determine the number of cases needed to reach points of efficiency in the various segments of a robotic sacral colpopexy (Canadian Task Force II-2). A university-affiliated community hospital. Women who underwent robotic sacral colpopexy at our institution from 2009 to 2013 comprise the study population. Patient characteristics and operative reports were extracted from electronic medical records and from a patient database maintained since the inception of the robotics program at Winthrop University Hospital. Based on the additional procedures performed, 4 groups of patients (A-D) were created. Learning curves for each of the segment times of interest were created using penalized basis spline (B-spline) regression. Operative time was further analyzed using an inverse curve and sequential grouping. A total of 176 patients were eligible. Nonparametric tests detected no difference in procedure times between the 4 groups (A-D) of patients. The preoperative and postoperative points of efficiency were 108 and 118 cases, respectively. The operative points of proficiency and efficiency were 25 and 36 cases, respectively. Operative time was further analyzed using an inverse curve, which revealed that after 11 cases the surgeon had reached 90% of the learning plateau. Sequential grouping revealed no significant improvement in operative time after 60 cases. Turnover time could not be assessed because of incomplete data. There is a difference in the operative time learning curve for robotic sacral colpopexy depending on the statistical analysis used. The learning curve of the operative segment showed an improvement in operative time between 25 and 36 cases when using B-spline regression. When the data for operative time were fitted to an inverse curve, a learning rate of 11 cases was appreciated. Using sequential grouping to describe the data, no improvement in operative time was seen after 60 cases. Ultimately, we believe that efficiency in operative time is attained after 30 to 60 cases when performing robotic sacral colpopexy. The learning curve for preoperative setup and postoperative transition, which is reflective of anesthesia and nursing staff, was approximately 110 cases. Copyright © 2016 AAGL. Published by Elsevier Inc. All rights reserved.
MODELING GALACTIC EXTINCTION WITH DUST AND 'REAL' POLYCYCLIC AROMATIC HYDROCARBONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mulas, Giacomo; Casu, Silvia; Cecchi-Pestellini, Cesare
We investigate the remarkable apparent variety of galactic extinction curves by modeling extinction profiles with core-mantle grains and a collection of single polycyclic aromatic hydrocarbons. Our aim is to translate a synthetic description of dust into physically well-grounded building blocks through the analysis of a statistically relevant sample of different extinction curves. All flavors of observed extinction curves, ranging from the average galactic extinction curve to virtually 'bumpless' profiles, can be described by the present model. We prove that a mixture of a relatively small number (54 species in 4 charge states each) of polycyclic aromatic hydrocarbons can reproduce the features of the extinction curve in the ultraviolet, dismissing an old objection to the contribution of polycyclic aromatic hydrocarbons to the interstellar extinction curve. Despite the large number of free parameters (at most the 54 × 4 column densities of each species in each ionization state included in the molecular ensemble, plus the 9 parameters defining the physical properties of classical particles), we can strongly constrain some physically relevant properties such as the total number of C atoms in all species and the mean charge of the mixture. Such properties are found to be largely independent of the adopted dust model, whose variation produces effects that are orthogonal to those brought about by the molecular component. Finally, the fitting procedure, together with some physical sense, suggests (but does not require) the presence of an additional component of chemically different very small carbonaceous grains.
NASA Astrophysics Data System (ADS)
Li, Xin; Tang, Li; Lin, Hai-Nan
2017-05-01
We compare six models (the baryonic model, two dark matter models, two modified Newtonian dynamics models, and one modified gravity model) in accounting for galaxy rotation curves. For the dark matter models, we assume an NFW profile and a core-modified profile for the dark halo, respectively. For the modified Newtonian dynamics models, we discuss Milgrom's MOND theory with two different interpolation functions, the standard and the simple. For the modified gravity, we focus on Moffat's MSTG theory. We fit these models to the observed rotation curves of 9 high-surface-brightness (HSB) and 9 low-surface-brightness (LSB) galaxies. We apply the Bayesian Information Criterion and the Akaike Information Criterion to test the goodness-of-fit of each model. It is found that none of the six models can fit all the galaxy rotation curves well. Two galaxies are best fitted by the baryonic model without invoking nonluminous dark matter. MOND fits the largest number of galaxies, and only one galaxy is best fitted by the MSTG model. The core-modified model fits about half the LSB galaxies well but no HSB galaxies, while the NFW model fits only a small fraction of HSB galaxies and no LSB galaxies. This may imply that the oversimplified NFW and core-modified profiles cannot model the postulated dark matter haloes well. Supported by the Fundamental Research Funds for the Central Universities (106112016CDJCR301206), the National Natural Science Fund of China (11305181, 11547305 and 11603005), and the Open Project Program of the State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y5KF181CJ1)
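For readers unfamiliar with the criteria, a minimal sketch of how AIC and BIC can be computed from a chi-square fit (a common convention, up to an additive constant; the numbers are invented):

```python
import numpy as np

def aic_bic(chi2, k, n):
    """Information criteria from a least-squares fit with Gaussian errors:
    chi2 = minimum chi-square, k = free parameters, n = data points."""
    return chi2 + 2 * k, chi2 + k * np.log(n)

# Hypothetical comparison of two rotation-curve models for one galaxy.
print(aic_bic(chi2=42.0, k=2, n=40))   # fewer-parameter model
print(aic_bic(chi2=35.0, k=4, n=40))   # halo model with extra parameters
```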
Barret, E; Sanchez-Salas, R; Ercolani, M; Forgues, A; Rozet, F; Galiano, M; Cathelineau, X
2011-06-01
The objective of this manuscript is to provide an evidence-based analysis of the current status and future perspectives of robotic laparoendoscopic single-site surgery (R-LESS). A PubMed search has been performed for all relevant urological literature regarding natural orifice transluminal endoscopic surgery (NOTES) and laparoendoscopic single-site surgery (LESS). All clinical and investigative reports for robotic LESS and NOTES procedures in the urological literature have been considered. A significant number of clinical urological procedures have been successfully completed utilizing R-LESS procedures. The available experience is limited to referral centers, where the case volume is sufficient to help overcome the challenges and learning curve of LESS surgery. The robotic interface remains the best fit for LESS procedures but its mode of use continues to evolve in attempts to improve surgical technique. We stand today at the dawn of R-LESS surgery, but this approach may well become the standard of care in the near future. Further technological development is needed to allow widespread adoption of the technique.
AKLSQF - LEAST SQUARES CURVE FITTING
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial that least-squares fits uniformly spaced data easily and efficiently. The program allows the user either to specify the tolerable least squares error of the fit or to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least-squares fitted using orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fit up to a 100-degree polynomial. All computations in the program are carried out in double precision format for real numbers and in long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
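A minimal modern sketch of the same degree-escalation strategy (using numpy's ordinary polynomial fit rather than AKLSQF's orthogonal factorial polynomials and Stirling-number reduction):

```python
import numpy as np

def fit_to_tolerance(x, y, tol, max_degree=100):
    """Raise the polynomial degree until the least-squares error meets tol."""
    for degree in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, degree)
        err = np.sum((np.polyval(coeffs, x) - y) ** 2)
        if err <= tol:
            return coeffs, degree, err
    return coeffs, max_degree, err

x = np.linspace(0.0, 1.0, 21)     # uniformly spaced data, as AKLSQF expects
y = np.sin(2 * np.pi * x)         # sample data
coeffs, degree, err = fit_to_tolerance(x, y, tol=1e-4)
print(degree, err)
```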
Students' Models of Curve Fitting: A Models and Modeling Perspective
ERIC Educational Resources Information Center
Gupta, Shweta
2010-01-01
The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…
Vajuvalli, Nithin N; Nayak, Krupa N; Geethanath, Sairam
2014-01-01
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is widely used in the diagnosis of cancer and is also a promising tool for monitoring tumor response to treatment. The Tofts model has become a standard for the analysis of DCE-MRI. The process of curve fitting employed in the Tofts equation to obtain the pharmacokinetic (PK) parameters is time-consuming for high resolution scans. Current work demonstrates a frequency-domain approach applied to the standard Tofts equation to speed-up the process of curve-fitting in order to obtain the pharmacokinetic parameters. The results obtained show that using the frequency domain approach, the process of curve fitting is computationally more efficient compared to the time-domain approach.
NASA Technical Reports Server (NTRS)
Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.
1991-01-01
The computer codes developed here provide self-consistent thermodynamic and transport properties of equilibrium air for temperatures from 500 to 30,000 K over a pressure range of 10⁻⁴ to 10⁻² atm. These properties are computed through the use of temperature-dependent curve fits for discrete values of pressure, with interpolation employed for intermediate values of pressure. The curve fits are based on mixture values calculated from an 11-species air model. The individual species properties used in the mixture relations are obtained from a recent study by the present authors. A review and discussion of the sources and accuracy of the curve-fit data used herein are given in NASA RP 1260.
On cyclic yield strength in definition of limits for characterisation of fatigue and creep behaviour
NASA Astrophysics Data System (ADS)
Gorash, Yevgen; MacKenzie, Donald
2017-06-01
This study proposes cyclic yield strength as a potential characteristic of safe design for structures operating under fatigue and creep conditions. Cyclic yield strength is defined on a cyclic stress-strain curve, while monotonic yield strength is defined on a monotonic curve. Both strength values are identified using a two-step procedure in which the experimental stress-strain curves are fitted with the Ramberg-Osgood and Chaboche material models. A typical S-N curve in the stress-life approach to fatigue analysis has a distinctive minimum-stress lower bound, the fatigue endurance limit. Comparison of cyclic strength and fatigue limit reveals that they are approximately equal. Thus, safe fatigue design is guaranteed in the purely elastic domain defined by the cyclic yielding. A typical long-term strength curve in the time-to-failure approach to creep analysis has two inflections corresponding to the cyclic and monotonic strengths. These inflections separate three domains on the long-term strength curve, which are characterised by different creep fracture modes and creep deformation mechanisms. Therefore, safe creep design is guaranteed in the linear creep domain with brittle failure mode defined by the cyclic yielding. These assumptions are confirmed using three structural steels for normal- and high-temperature applications. The advantage of using cyclic yield strength for characterisation of fatigue and creep strength is the relatively quick experimental identification: the total duration of the cyclic tests needed to identify a cyclic stress-strain curve is much less than the typical durations of fatigue and creep rupture tests at stress levels around the cyclic yield strength.
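As a minimal sketch of the first fitting step, the code below fits a Ramberg-Osgood cyclic stress-strain curve to invented data, assuming a known elastic modulus; the parameter values and data are illustrative only:

```python
import numpy as np
from scipy.optimize import curve_fit

E = 200e3  # Young's modulus in MPa, assumed known for the steel

def ramberg_osgood(sigma, K, n):
    # total strain amplitude = elastic part + plastic part
    return sigma / E + (sigma / K) ** (1.0 / n)

# Invented cyclic stress (MPa) / strain-amplitude pairs.
stress = np.array([150., 250., 350., 420., 480., 520.])
strain = np.array([0.00075, 0.00128, 0.00204, 0.00307, 0.00475, 0.00660])

(K_fit, n_fit), _ = curve_fit(ramberg_osgood, stress, strain,
                              p0=[1000.0, 0.15],
                              bounds=([100.0, 0.01], [5000.0, 0.5]))
print(f"K' = {K_fit:.0f} MPa, n' = {n_fit:.3f}")
```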
[An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].
Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu
2016-04-01
The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. First, the first-order derivative of the original ECG signal is calculated, and the maximum and minimum points of each beat are obtained; these are treated as the positions of the fiducial points. The original ECG is then fed into a high-pass filter with a 1.5 Hz cutoff frequency. The difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. Cubic spline interpolation is then fitted through the fiducial points, and the fitted curve is the baseline drift curve. For the two simulated test cases, the correlation coefficients between the fitted curve from the presented algorithm and the simulated curve were increased by 0.242 and 0.13 compared with those from the traditional cubic spline interpolation algorithm. For the clinical baseline drift data, the average correlation coefficient from the presented algorithm reached 0.972.
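A minimal sketch of the spline-subtraction idea on synthetic data (the beat-like signal, drift, and fiducial points are all fabricated; in the paper the fiducial amplitudes come from the difference between the original and the 1.5 Hz high-pass-filtered signals):

```python
import numpy as np
from scipy.interpolate import CubicSpline

fs = 360.0                                        # sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
drift = 0.3 * np.sin(2 * np.pi * 0.2 * t)         # slow baseline wander
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63 + drift   # crude beat-like spikes

# Fabricated fiducial points, one per beat, with amplitudes taken here
# directly from the known drift purely for illustration.
fid_t = np.arange(0.4, 10.0, 1.0 / 1.2)
fid_amp = 0.3 * np.sin(2 * np.pi * 0.2 * fid_t)

baseline = CubicSpline(fid_t, fid_amp)(t)   # fitted baseline drift curve
corrected = ecg - baseline
```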
Graphic tracings of condylar paths and measurements of condylar angles.
el-Gheriani, A S; Winstanley, R B
1989-01-01
A study was carried out to determine the accuracy of different methods of measuring condylar inclination from graphical recordings of condylar paths. Thirty subjects made protrusive mandibular movements while condylar inclination was recorded on a graph paper card; a mandibular facebow and an intraoral central bearing plate facilitated the procedure. The first method, based on tangents, proved too variable to be of value in measuring condylar angles. The spline curve fitting technique was shown to be accurate, but its clinical use may prove complex. The mathematical method was more practical and overcame the variability of the tangent method. Other conclusions regarding condylar inclination are outlined.
Study of Dynamic Characteristics of Aeroelastic Systems Utilizing Randomdec Signatures
NASA Technical Reports Server (NTRS)
Chang, C. S.
1975-01-01
The feasibility of utilizing the random decrement method in conjunction with a signature analysis procedure to determine the dynamic characteristics of an aeroelastic system, for the purpose of on-line prediction of the potential onset of flutter, was examined. Digital computer programs were developed to simulate sampled response signals of a two-mode aeroelastic system. Simulated response data were used to test the random decrement method. A special curve-fit approach was developed for analyzing the resulting signatures. A number of numerical 'experiments' were conducted on the combined processes. The method is capable of determining frequency and damping values accurately from randomdec signatures of carefully selected lengths.
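A minimal sketch of the random decrement idea itself (a common level-crossing variant, not necessarily the study's exact triggering rule; the two-mode signal is simulated here):

```python
import numpy as np

def randomdec(x, trigger, length):
    """Average all segments of x that start where the signal up-crosses
    the trigger level; the average is the randomdec signature."""
    starts = np.where((x[:-1] < trigger) & (x[1:] >= trigger))[0]
    starts = starts[starts + length <= len(x)]
    segments = np.stack([x[s:s + length] for s in starts])
    return segments.mean(axis=0)

# Simulated two-mode response with measurement noise (illustrative).
fs = 200.0
t = np.arange(0.0, 60.0, 1.0 / fs)
x = (np.sin(2 * np.pi * 3.0 * t) + 0.5 * np.sin(2 * np.pi * 7.5 * t)
     + 0.3 * np.random.default_rng(0).standard_normal(t.size))
signature = randomdec(x, trigger=x.std(), length=int(2 * fs))
```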
NASA Astrophysics Data System (ADS)
Reolon, David; Jacquot, Maxime; Verrier, Isabelle; Brun, Gérald; Veillas, Colette
2006-12-01
In this paper we propose group refractive index measurement with a spectral interferometric setup using a broadband supercontinuum generated in an air-silica microstructured optical fibre (MOF) pumped with a picosecond pulsed microchip laser. This source provides high fringe visibility for dispersion measurements by Spectroscopic Analysis of White Light Interferograms (SAWLI). The phase calculation is performed by a wavelet transform procedure combined with a curve fit of the recorded channelled spectrum intensity. This approach provides high-resolution, absolute group refractive index measurements along one line of the sample by recording a single 2D spectral interferogram without mechanical scanning.
A study of Lusitano mare lactation curve with Wood's model.
Santos, A S; Silvestre, A M
2008-02-01
Milk yield and composition data from 7 nursing Lusitano mares (450 to 580 kg of body weight and 2 to 9 parities) were used in this study (5 measurements per mare for milk yield and 8 measurements for composition). Wood's lactation model was used to describe the milk, fat, protein, and lactose lactation curves. Mean values for the concentration of major milk components across the lactation period (180 d) were 5.9 g/kg of fat, 18.4 g/kg of protein, and 60.8 g/kg of lactose. Milk fat and protein (g/kg) decreased and lactose (g/kg) increased during the 180 d of lactation. The curves for milk protein and lactose yields (g) were similar in shape to the milk yield curve; protein yield peaked at 307 g on d 10 and lactose at 816 g on d 45. The fat (g) curve differed in shape from the milk, protein, and lactose yield curves. Total production of the major milk constituents throughout the 180 d of lactation was estimated to be 12.0, 36.1, and 124 kg for fat, protein, and lactose, respectively. The algebraic model, fitted to the data by a nonlinear regression procedure, resulted in reasonable prediction curves for milk yield (adjusted R² of 0.89) and the major constituents (adjusted R² ranging from 0.89 to 0.95). The lactation curves of major milk constituents in Lusitano mares were similar, both in shape and values, to those found in other horse breeds. The established curves facilitate the estimation of milk yield and the variation of milk constituents at different stages of lactation for both nursing and dairy mares, providing important information relative to weaning time and foal supplementation.
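A minimal sketch of fitting Wood's model by nonlinear regression; the daily-yield numbers are invented, not the mares' data:

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood's lactation curve: yield = a * t**b * exp(-c * t)."""
    return a * t**b * np.exp(-c * t)

# Invented daily milk-yield measurements (kg/d) at days in milk.
day = np.array([10., 30., 60., 90., 120., 150., 180.])
milk = np.array([16.5, 15.8, 14.2, 12.6, 11.1, 9.8, 8.6])

(a, b, c), _ = curve_fit(wood, day, milk, p0=[15.0, 0.1, 0.005])
peak_day = b / c   # Wood's curve peaks where its derivative vanishes, t = b/c
print(a, b, c, peak_day)
```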
Methods for Performing Survival Curve Quality-of-Life Assessments.
Sumner, Walton; Ding, Eric; Fischer, Irene D; Hagen, Michael D
2014-08-01
Many medical decisions involve an implied choice between alternative survival curves, typically with differing quality of life. Common preference assessment methods neglect this structure, creating some risk of distortions. Survival curve quality-of-life assessments (SQLA) were developed from Gompertz survival curves fitted to the general population's survival. An algorithm was developed to generate relative discount rate-utility (DRU) functions from a standard survival curve and health state and an equally attractive alternative curve and state. A least-mean-squared-distance algorithm was developed to describe how nearly three or more DRU functions intersect. These techniques were implemented in a program called X-Trade and tested. SQLA scenarios can portray realistic treatment choices. A side-effect scenario portrays one prototypical choice: to extend life while experiencing some loss, such as an amputation. A risky-treatment scenario portrays procedures with an initial mortality risk. A time-trade scenario mimics conventional time tradeoffs. Each SQLA scenario yields DRU functions with distinctive shapes, such as sigmoid curves or vertical lines. One SQLA can imply a discount rate or utility if the other value is known and both values are temporally stable. Two SQLA exercises imply a unique discount rate and utility if the inferred DRU functions intersect. Three or more SQLA results can quantify uncertainty or inconsistency in discount rate and utility estimates. Pilot studies suggested that many subjects could learn to interpret survival curves and do SQLA, although SQLA confuse some people. Compared with SQLA, standard gambles quantify very low utilities more easily, and time tradeoffs are simpler for high utilities. When discount rates approach zero, time tradeoffs are as informative as and easier to do than SQLA. SQLA may complement conventional utility assessment methods. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Morlot, Thomas; Perret, Christian; Favre, Anne-Catherine; Jalbert, Jonathan
2014-09-01
A rating curve is used to estimate the discharge in rivers indirectly, based on water level measurements. The discharge values obtained from a rating curve include uncertainties related to the direct stage-discharge measurements (gaugings) used to build the curve, the quality of fit of the curve to these measurements, and the constant changes in river bed morphology. Moreover, the uncertainty of discharges estimated from a rating curve increases with the “age” of the rating curve, so the level of uncertainty at a given point in time is particularly difficult to assess. A “dynamic” method has been developed to compute rating curves and their associated uncertainties, making it possible to regenerate streamflow data with uncertainty estimates. The method is based on historical gaugings at hydrometric stations: a rating curve is computed for each gauging and a model of the uncertainty is fitted for each of them. The uncertainty model takes into account the uncertainty in the measurement of the water level, the quality of fit of the curve, the uncertainty of the gaugings, and the increase in the uncertainty of discharge estimates with the age of the rating curve, computed with a variographic analysis (Jalbert et al., 2011). The presented dynamic method can answer important questions in the field of hydrometry, such as “How many gaugings a year are required to produce streamflow data with an average uncertainty of X%?” and “When, and in what range of water flow rates, should these gaugings be carried out?”. The Rocherousse hydrometric station (France, Haute-Durance watershed, 946 km²) is used as an example throughout the paper; other stations are used to illustrate certain points.
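As a rough sketch of the elementary step underlying such methods, a classical power-law rating curve can be fitted to a handful of gaugings (all values invented; the paper's uncertainty model and variographic aging analysis are not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def rating(h, a, h0, b):
    """Classical power-law rating curve Q = a * (h - h0)**b."""
    return a * np.clip(h - h0, 1e-6, None) ** b

# Invented gaugings: stage h (m) and measured discharge Q (m3/s).
h = np.array([0.45, 0.62, 0.80, 1.05, 1.33, 1.70, 2.10])
q = np.array([1.1, 2.6, 4.9, 9.4, 15.8, 27.0, 42.5])

(a, h0, b), cov = curve_fit(rating, h, q, p0=[10.0, 0.2, 1.8])
sigma = np.sqrt(np.diag(cov))   # first-order parameter uncertainties
```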
Hybrid Micro-Electro-Mechanical Tunable Filter
2007-09-01
Fragment: the report notes that the developers used surface micromachining techniques to build the micromirror structure over the CMOS addressing circuitry; design inputs include the DBRs, microcavity composition, initial air gap, contact layers, and substrate. Dispersion data are curve-fitted, or a continuous, wavelength-dependent representation of material dispersion is generated from measurements, before the filter is designed manually.
Consideration of Wear Rates at High Velocities
2010-03-01
Fragment (list of figures and nomenclature): Strain vs. Three-dimensional Model; Example Single Asperity Wear Rate Integral; Third Stage Slipper Accumulated Frictional Heating; Surface Temperature Third Stage Slipper; Melt Depth Example; A3S and B3S are coefficients for the frictional heat curve fit of the third stage slipper.
Using quasars as standard clocks for measuring cosmological redshift.
Dai, De-Chang; Starkman, Glenn D; Stojkovic, Branislav; Stojkovic, Dejan; Weltman, Amanda
2012-06-08
We report hitherto unnoticed patterns in quasar light curves. We characterize segments of the quasar's light curves with the slopes of the straight lines fit through them. These slopes appear to be directly related to the quasars' redshifts. Alternatively, using only global shifts in time and flux, we are able to find significant overlaps between the light curves of different pairs of quasars by fitting the ratio of their redshifts. We are then able to reliably determine the redshift of one quasar from another. This implies that one can use quasars as standard clocks, as we explicitly demonstrate by constructing two independent methods of finding the redshift of a quasar from its light curve.
49 CFR 385.119 - Applicability of safety fitness and enforcement procedures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 5 2012-10-01 2012-10-01 false Applicability of safety fitness and enforcement... REGULATIONS SAFETY FITNESS PROCEDURES Safety Monitoring System for Mexico-Domiciled Carriers § 385.119 Applicability of safety fitness and enforcement procedures. At all times during which a Mexico-domiciled motor...
49 CFR 385.119 - Applicability of safety fitness and enforcement procedures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 5 2013-10-01 2013-10-01 false Applicability of safety fitness and enforcement... REGULATIONS SAFETY FITNESS PROCEDURES Safety Monitoring System for Mexico-Domiciled Carriers § 385.119 Applicability of safety fitness and enforcement procedures. At all times during which a Mexico-domiciled motor...
49 CFR 385.717 - Applicability of safety fitness and enforcement procedures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 5 2012-10-01 2012-10-01 false Applicability of safety fitness and enforcement... REGULATIONS SAFETY FITNESS PROCEDURES Safety Monitoring System for Non-North American Carriers § 385.717 Applicability of safety fitness and enforcement procedures. At all times during which a non-North America...
49 CFR 385.717 - Applicability of safety fitness and enforcement procedures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 5 2013-10-01 2013-10-01 false Applicability of safety fitness and enforcement... REGULATIONS SAFETY FITNESS PROCEDURES Safety Monitoring System for Non-North American Carriers § 385.717 Applicability of safety fitness and enforcement procedures. At all times during which a non-North America...
49 CFR 385.119 - Applicability of safety fitness and enforcement procedures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 5 2014-10-01 2014-10-01 false Applicability of safety fitness and enforcement... REGULATIONS SAFETY FITNESS PROCEDURES Safety Monitoring System for Mexico-Domiciled Carriers § 385.119 Applicability of safety fitness and enforcement procedures. At all times during which a Mexico-domiciled motor...
49 CFR 385.717 - Applicability of safety fitness and enforcement procedures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 5 2014-10-01 2014-10-01 false Applicability of safety fitness and enforcement... REGULATIONS SAFETY FITNESS PROCEDURES Safety Monitoring System for Non-North American Carriers § 385.717 Applicability of safety fitness and enforcement procedures. At all times during which a non-North America...
49 CFR 385.119 - Applicability of safety fitness and enforcement procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 5 2010-10-01 2010-10-01 false Applicability of safety fitness and enforcement... REGULATIONS SAFETY FITNESS PROCEDURES Safety Monitoring System for Mexico-Domiciled Carriers § 385.119 Applicability of safety fitness and enforcement procedures. At all times during which a Mexico-domiciled motor...
49 CFR 385.717 - Applicability of safety fitness and enforcement procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 5 2011-10-01 2011-10-01 false Applicability of safety fitness and enforcement... REGULATIONS SAFETY FITNESS PROCEDURES Safety Monitoring System for Non-North American Carriers § 385.717 Applicability of safety fitness and enforcement procedures. At all times during which a non-North America...
49 CFR 385.119 - Applicability of safety fitness and enforcement procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 5 2011-10-01 2011-10-01 false Applicability of safety fitness and enforcement... REGULATIONS SAFETY FITNESS PROCEDURES Safety Monitoring System for Mexico-Domiciled Carriers § 385.119 Applicability of safety fitness and enforcement procedures. At all times during which a Mexico-domiciled motor...
49 CFR 385.717 - Applicability of safety fitness and enforcement procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 5 2010-10-01 2010-10-01 false Applicability of safety fitness and enforcement... REGULATIONS SAFETY FITNESS PROCEDURES Safety Monitoring System for Non-North American Carriers § 385.717 Applicability of safety fitness and enforcement procedures. At all times during which a non-North America...
Crystal structure optimisation using an auxiliary equation of state
NASA Astrophysics Data System (ADS)
Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron
2015-11-01
Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy-volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other "beyond" density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.
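A minimal sketch of the kind of equation-of-state fit described above, using a generic third-order Birch-Murnaghan form (one common choice, not necessarily the authors' implementation; the E(V) points are invented, as a DFT code might produce them):

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(v, e0, v0, b0, b0p):
    """Third-order Birch-Murnaghan energy-volume equation of state."""
    eta = (v0 / v) ** (2.0 / 3.0)
    return e0 + 9.0 * v0 * b0 / 16.0 * (
        (eta - 1.0) ** 3 * b0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta))

# Invented E(V) points (eV, Angstrom^3).
v = np.array([18.0, 19.0, 20.0, 21.0, 22.0, 23.0])
e = np.array([-8.42, -8.51, -8.55, -8.54, -8.50, -8.43])

p0 = [e.min(), v[np.argmin(e)], 1.0, 4.0]
(e0, v0, b0, b0p), _ = curve_fit(birch_murnaghan, v, e, p0=p0)
print(f"V0 = {v0:.2f} A^3, B0 = {b0:.2f} eV/A^3")
```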
Surface fitting three-dimensional bodies
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.
1974-01-01
The geometry of general three-dimensional bodies is generated from coordinates of points in several cross sections. Since these points may not be smooth, they are divided into segments and general conic sections are curve fit in a least-squares sense to each segment of a cross section. The conic sections are then blended in the longitudinal direction by fitting parametric cubic-spline curves through coordinate points which define the conic sections in the cross-sectional planes. Both the cross-sectional and longitudinal curves may be modified by specifying particular segments as straight lines and slopes at selected points. Slopes may be continuous or discontinuous and finite or infinite. After a satisfactory surface fit has been obtained, cards may be punched with the data necessary to form a geometry subroutine package for use in other computer programs. At any position on the body, coordinates, slopes and second partial derivatives are calculated. The method is applied to a blunted 70 deg delta wing, and it was found to generate the geometry very well.
ERIC Educational Resources Information Center
Ferrer, Emilio; Hamagami, Fumiaki; McArdle, John J.
2004-01-01
This article offers different examples of how to fit latent growth curve (LGC) models to longitudinal data using a variety of different software programs (i.e., LISREL, Mx, Mplus, AMOS, SAS). The article shows how the same model can be fitted using both structural equation modeling and multilevel software, with nearly identical results, even in…
ERIC Educational Resources Information Center
St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane
2009-01-01
To date, there have been no studies comparing parametric and nonparametric Item Characteristic Curve (ICC) estimation methods on the effectiveness of Person-Fit Statistics (PFS). The primary aim of this study was to determine if the use of ICCs estimated by nonparametric methods would increase the accuracy of item response theory-based PFS for…
Improvements in Spectrum's fit to program data tool.
Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John
2017-04-01
The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived either by fitting to seroprevalence surveillance and survey data or by generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections or people living with HIV, and AIDS-related deaths. This article describes the development and application of the fit to program data (FPD) tool in the Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV, or AIDS-related deaths. Inputs can be adjusted for the proportion undiagnosed or for misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best-fitting curve, and the asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results, and a double logistic curve adequately described observed trends in all but four countries, where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.
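A minimal sketch of fitting a double-logistic incidence-like trend by least squares (one plausible rise-and-decline parameterization; the case counts and the exact functional form are assumptions, not Spectrum's specification):

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, a1, r1, t1, a2, r2, t2):
    """Rising logistic minus a delayed logistic: rises, then declines."""
    return (a1 / (1 + np.exp(-r1 * (t - t1)))
            - a2 / (1 + np.exp(-r2 * (t - t2))))

# Invented yearly counts of newly reported HIV cases, 1990-2015.
year = np.arange(1990, 2016)
cases = np.array([120, 210, 340, 560, 850, 1180, 1450, 1600, 1640, 1590,
                  1480, 1350, 1230, 1120, 1030, 950, 890, 840, 800, 770,
                  740, 720, 700, 690, 680, 670])

params, _ = curve_fit(double_logistic, year - 1990, cases,
                      p0=[1700, 0.6, 6, 1000, 0.4, 12], maxfev=20000)
```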
NASA Astrophysics Data System (ADS)
Gentile, G.; Famaey, B.; de Blok, W. J. G.
2011-03-01
We present an analysis of 12 high-resolution galactic rotation curves from The HI Nearby Galaxy Survey (THINGS) in the context of modified Newtonian dynamics (MOND). These rotation curves were selected to be the most reliable for mass modelling, and they are the highest quality rotation curves currently available for a sample of galaxies spanning a wide range of luminosities. We fit the rotation curves with the "simple" and "standard" interpolating functions of MOND, and we find that the "simple" function yields better results. We also redetermine the value of a0, and find a median value very close to the one determined in previous studies, a0 = (1.22 ± 0.33) × 10⁻⁸ cm s⁻². Leaving the distance as a free parameter within the uncertainty of its best independently determined value leads to excellent quality fits for 75% of the sample. Among the three exceptions, two are also known to give relatively poor fits in Newtonian dynamics plus dark matter. The remaining case (NGC 3198) presents some tension between the observations and the MOND fit, which might, however, be explained by the presence of non-circular motions, by a small distance, or by a value of a0 at the lower end of our best-fit interval, 0.9 × 10⁻⁸ cm s⁻². The best-fit stellar M/L ratios are generally in remarkable agreement with the predictions of stellar population synthesis models. We also show that the narrow range of gravitational accelerations found to be generated by dark matter in galaxies is consistent with the narrow range of additional gravity predicted by MOND.
Eng, Kevin H; Schiller, Emily; Morrell, Kayla
2015-11-03
Researchers developing biomarkers for cancer prognosis from quantitative gene expression data are often faced with an odd methodological discrepancy: while Cox's proportional hazards model, the appropriate and popular technique, produces a continuous and relative risk score, it is hard to cast the estimate in clear clinical terms like median months of survival and percent of patients affected. To produce a familiar Kaplan-Meier plot, researchers commonly make the decision to dichotomize a continuous (often unimodal and symmetric) score. It is well known in the statistical literature that this procedure induces significant bias. We illustrate the liabilities of common techniques for categorizing a risk score and discuss alternative approaches. We promote the use of the restricted mean survival (RMS) and the corresponding RMS curve that may be thought of as an analog to the best fit line from simple linear regression. Continuous biomarker workflows should be modified to include the more rigorous statistical techniques and descriptive plots described in this article. All statistics discussed can be computed via standard functions in the Survival package of the R statistical programming language. Example R language code for the RMS curve is presented in the appendix.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaczmarski, Krzysztof; Guiochon, Georges A
The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data, and the evaluation of the errors made, are critical. Three chromatographic methods were evaluated and their accuracies compared: frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM). Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N = 500, 1000, 2000, and 10,000). The data points were then fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records, and (2) the errors made in selecting an incorrect isotherm model and fitting the experimental data to it. Both errors decrease significantly with increasing column efficiency for FA and FACP, but not for PM.
NASA Technical Reports Server (NTRS)
Everhart, Joel L.
1996-01-01
Orifice-to-orifice inconsistencies in data acquired with an electronically scanned pressure system at the beginning of a wind tunnel experiment forced modifications to the standard instrument calibration procedures. These modifications included a large increase in the number of calibration points, which allowed a critical examination of the calibration curve-fit process and a subsequent post-test reduction of the pressure data. Evaluation of these data has resulted in an improved functional representation of the pressure-voltage signature of electronically scanned pressure sensors, which can reduce the errors due to the calibration curve fit to under 0.10 percent of reading, compared with the manufacturer-specified 0.10 percent of full scale. Application of the improved calibration function allows a more rational selection of the calibration set-point pressures: these pressures should be adjusted to achieve a voltage output that matches the physical shape of the pressure-voltage signature of the sensor, in lieu of the more traditional approach in which a calibration pressure is specified and the resulting sensor voltage is recorded. The fifteen calibrations acquired over the two-week duration of the wind tunnel test were further used to perform a preliminary statistical assessment of the variation in the calibration process. The results allowed the estimation of the bias uncertainty for a single instrument calibration, and they form the precursor for more extensive and more controlled studies in the laboratory.
Robotic Mitral Valve Repair: The Learning Curve.
Goodman, Avi; Koprivanac, Marijan; Kelava, Marta; Mick, Stephanie L; Gillinov, A Marc; Rajeswaran, Jeevanantham; Brzezinski, Anna; Blackstone, Eugene H; Mihaljevic, Tomislav
Adoption of robotic mitral valve surgery has been slow, likely in part because of its perceived technical complexity and a poorly understood learning curve. We sought to correlate changes in technical performance and outcome with surgeon experience in the "learning curve" part of our series. From 2006 to 2011, two surgeons undertook robotically assisted mitral valve repair in 458 patients (intent-to-treat); 404 procedures were completed entirely robotically (as-treated). Learning curves were constructed by modeling surgical sequence number semiparametrically with flexible penalized spline smoothing best-fit curves. Operative efficiency, reflecting technical performance, improved for (1) operating room time for case 1 to cases 200 (early experience) and 400 (later experience), from 414 to 364 to 321 minutes (12% and 22% decrease, respectively), (2) cardiopulmonary bypass time, from 148 to 102 to 91 minutes (31% and 39% decrease), and (3) myocardial ischemic time, from 119 to 75 to 68 minutes (37% and 43% decrease). Composite postoperative complications, reflecting safety, decreased from 17% to 6% to 2% (63% and 85% decrease). Intensive care unit stay decreased from 32 to 28 to 24 hours (13% and 25% decrease). Postoperative stay fell from 5.2 to 4.5 to 3.8 days (13% and 27% decrease). There were no in-hospital deaths. Predischarge mitral regurgitation of less than 2+, reflecting effectiveness, was achieved in 395 (97.8%), without correlation to experience; return-to-work times did not change substantially with experience. Technical efficiency of robotic mitral valve repair improves with experience and permits its safe and effective conduct.
2017-01-01
Purpose. The purpose of this study is to evaluate the learning curve of surgery with the InterTan intramedullary nail in treating femoral intertrochanteric fractures, to provide useful information and experience for surgeons who decide to learn a new procedure. Methods. We retrospectively analyzed data from 53 patients who underwent surgery using an InterTan intramedullary nail at our hospital between July 2012 and September 2015. Negative exponential curve-fit regression analysis was used to evaluate the learning curve. According to the 90% learning milestone, patients were divided into two groups and the outcomes were compared. Results. The mean operative time was 69.28 (95% CI 64.57 to 74.00) minutes; with the accumulation of surgical experience, the operative time gradually decreased, and 90% of the potential improvement was expected after 18 cases. Significant differences were found between the two groups in operative time, intraoperative blood loss, hospital stay, and Harris hip score (p = 0.009, p = 0.000, p = 0.030, and p = 0.002, respectively). Partial weight-bearing time, fracture union time, tip-apex distance, and the number of blood transfusions and complications were similar between the two groups (p > 0.5). Conclusion. This study demonstrated that the learning curve of surgery with the InterTan intramedullary nail is acceptable and that 90% of the expert's proficiency level is achieved at around 18 cases. PMID:28503572
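A minimal sketch of a negative exponential learning-curve fit and the 90% milestone it implies (simulated operative times, not the study's data; with these invented parameters the milestone happens to land near 18 cases):

```python
import numpy as np
from scipy.optimize import curve_fit

def learning(n, plateau, drop, rate):
    """Negative exponential learning curve for operative time (min)."""
    return plateau + drop * np.exp(-n / rate)

# Simulated operative times for 53 consecutive cases (invented parameters).
case = np.arange(1, 54)
rng = np.random.default_rng(1)
time = learning(case, 60.0, 35.0, 8.0) + rng.normal(0.0, 5.0, case.size)

(plateau, drop, rate), _ = curve_fit(learning, case, time, p0=[60, 30, 10])
n90 = rate * np.log(10.0)   # case at which 90% of the improvement is reached
print(round(n90))
```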
Phenomenological study of the ionisation density-dependence of TLD-100 peak 5a.
Brandan, Maria-Ester; Angeles, Oscar; Mercado-Uribe, Hilda
2006-01-01
Horowitz and collaborators have reported evidence on the structure of TLD-100 peak 5. A satellite peak, called 5a, has been singled out as arising from localised electron-hole recombination in a trap/luminescent centre; its emission mechanism would be geminate recombination and, therefore, its population would depend on the ionisation density of the incident radiation. We report a phenomenological study of the strengths of peaks 4, 5a and 5 for glow curves previously measured at UNAM for gammas, electrons and low-energy ions. The deconvolution procedure followed strict rules to ensure that the glow curve, in which the presence of peak 5a is not visually noticeable, is decomposed in a consistent fashion, maintaining fixed widths and relative temperature differences between all the peaks. We find no improvement in the quality of the fit after inclusion of peak 5a. The relative contribution of peak 5a with respect to peak 5 does not seem to correlate with the linear energy transfer of the radiation.
Ademi, Abdulakim; Grozdanov, Anita; Paunović, Perica; Dimitrov, Aleksandar T
2015-01-01
Summary A model consisting of an equation that includes the graphene thickness distribution is used to calculate theoretical (002) X-ray diffraction (XRD) peak intensities. An analysis was performed on graphene samples produced by two different electrochemical procedures: electrolysis in aqueous electrolyte and electrolysis in molten salts, both using a nonstationary current regime. Herein, the model is enhanced by partitioning the corresponding 2θ interval, resulting in significantly improved accuracy. The model curves obtained fit the XRD intensity curves of the studied graphene samples excellently. The equation parameters make it possible to calculate the j-layer graphene region coverage of the samples, and hence the number of graphene layers. The results of the thorough analysis agree with the number of graphene layers calculated from Raman spectra C-peak positions and indicate that the graphene samples studied are few-layered. PMID:26665083
Validation and application of single breath cardiac output determinations in man
NASA Technical Reports Server (NTRS)
Loeppky, J. A.; Fletcher, E. R.; Myhre, L. G.; Luft, U. C.
1986-01-01
Cardiac output estimates obtained by a single-breath technique (Qsb) in healthy males during supine rest and during exercise on a bicycle ergometer were compared with cardiac output obtained by the direct Fick method (QF). The single-breath maneuver consisted of a slow exhalation to near residual volume following an inspiration somewhat deeper than normal. The Qsb calculations incorporated an equation of the CO2 dissociation curve and a 'moving spline' sequential curve-fitting technique to calculate the instantaneous R from points on the original expirogram. The resulting linear regression equation indicated a 24-percent underestimation of QF by the Qsb technique. After applying a correction, the Qsb-QF relationship was improved. A subsequent study during upright rest and exercise to 80 percent of VO2(max) in 6 subjects indicated a close linear relationship between Qsb and VO2 for all 95 values obtained, with slope and intercept close to those in published studies in which invasive cardiac output measurements were used.
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1986-01-01
Computational predictions of turbulent flow in sharply curved 180 degree turn around ducts are presented. The CNS2D computer code is used to solve the equations of motion for two-dimensional incompressible flows transformed to a nonorthogonal body-fitted coordinate system. This procedure incorporates the pressure velocity correction algorithm SIMPLE-C to iteratively solve a discretized form of the transformed equations. A multiple scale turbulence model based on simplified spectral partitioning is employed to obtain closure. Flow field predictions utilizing the multiple scale model are compared to features predicted by the traditional single scale k-epsilon model. Tuning parameter sensitivities of the multiple scale model applied to turn around duct flows are also determined. In addition, a wall function approach based on a wall law suitable for incompressible turbulent boundary layers under strong adverse pressure gradients is tested. Turn around duct flow characteristics utilizing this modified wall law are presented and compared to results based on a standard wall treatment.
Costa, Paulo R; Caldas, Linda V E
2002-01-01
This work presents the development and evaluation of modern techniques for calculating radiation protection barriers in clinical radiographic facilities. Our methodology uses realistic primary and scattered spectra. The primary spectra were computer simulated using a waveform generalization and a semiempirical model (the Tucker-Barnes-Chakraborty model). The scattered spectra were obtained from published data. An analytical function was used to produce attenuation curves from polychromatic radiation for specified kVp, waveform, and filtration. The results of this analytical function are given in ambient dose equivalent units. The attenuation curves were obtained by applying Archer's model to computer simulation data. The parameters for the best fit of the model to primary and secondary radiation data from different radiographic procedures were determined, yielding an optimized shielding-calculation model for any radiographic room. The shielding costs were about 50% lower than those calculated using the traditional method based on Report No. 49 of the National Council on Radiation Protection and Measurements.
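Archer's model is commonly written as a three-parameter transmission curve B(x) = [(1 + β/α)·e^(αγx) - β/α]^(-1/γ). The sketch below fits that form to synthetic transmission data with scipy; the parameter values and thicknesses are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def archer(x, a, b, g):
    """Archer's broad-beam transmission through a barrier of thickness x."""
    return ((1.0 + b / a) * np.exp(a * g * x) - b / a) ** (-1.0 / g)

x_mm = np.linspace(0.0, 3.0, 13)               # barrier thickness (mm), hypothetical
trans = archer(x_mm, 2.0, 8.0, 0.6)            # synthetic "measured" transmission data

(a, b, g), _ = curve_fit(archer, x_mm, trans, p0=[1.5, 5.0, 0.5], maxfev=20000)
print(f"alpha = {a:.3f} /mm, beta = {b:.3f} /mm, gamma = {g:.3f}")
```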
On the derivation of flow rating curves in data-scarce environments
NASA Astrophysics Data System (ADS)
Manfreda, Salvatore
2018-07-01
River monitoring is a critical issue for hydrological modelling, which relies strongly on flow rating curves (FRCs). In most cases, these functions are derived by least-squares fitting, which usually yields good performance indices even when based on a limited range of data that especially lack high-flow observations. In this context, cross-section geometry is a controlling factor that is not fully exploited in classical approaches. In fact, river discharge is obtained as the product of two factors: 1) the area of the wetted cross-section and 2) the cross-sectionally averaged velocity. Both factors can be expressed as functions of the river stage, defining a viable alternative for the derivation of FRCs. This makes it possible to exploit information about cross-section geometry, limiting, at least partially, the uncertainty in the extrapolation of discharge at higher flow values. Numerical analyses and field data confirm the reliability of the proposed procedure for the derivation of FRCs.
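A minimal sketch of the proposed idea, assuming stage h is measured above the zero-flow datum so that both the wetted area A(h) and the mean velocity v(h) can be approximated by simple power laws; the gaugings below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Stage h assumed measured above the zero-flow datum, so A(h) and v(h) are plain power laws
power_law = lambda h, c, d: c * h ** d

h = np.array([0.4, 0.7, 1.0, 1.5, 2.0])       # stage (m), hypothetical gaugings
A = np.array([2.1, 4.6, 7.8, 14.0, 21.0])     # surveyed wetted area (m^2)
v = np.array([0.35, 0.53, 0.69, 0.93, 1.15])  # measured mean velocity (m/s)

(ca, da), _ = curve_fit(power_law, h, A, p0=[5.0, 1.5])
(cv, dv), _ = curve_fit(power_law, h, v, p0=[0.7, 0.7])
Q = lambda stage: power_law(stage, ca, da) * power_law(stage, cv, dv)  # Q = A(h) * v(h)
print(f"rating: Q(h) = {ca*cv:.2f} * h^{da+dv:.2f};  Q(3.0 m) = {Q(3.0):.1f} m^3/s")
```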
MacDonald, Gordon A; Veneman, P Alexander; Placencia, Diogenes; Armstrong, Neal R
2012-11-27
We demonstrate mapping of electrical properties of heterojunctions of a molecular semiconductor (copper phthalocyanine, CuPc) and a transparent conducting oxide (indium-tin oxide, ITO), on 20-500 nm length scales, using a conductive-probe atomic force microscopy technique, scanning current spectroscopy (SCS). SCS maps are generated for CuPc/ITO heterojunctions as a function of ITO activation procedures and modification with variable chain length alkyl-phosphonic acids (PAs). We correlate differences in small length scale electrical properties with the performance of organic photovoltaic cells (OPVs) based on CuPc/C(60) heterojunctions, built on these same ITO substrates. SCS maps the "ohmicity" of ITO/CuPc heterojunctions, creating arrays of spatially resolved current-voltage (J-V) curves. Each J-V curve is fit with modified Mott-Gurney expressions, mapping a fitted exponent (γ), where deviations from γ = 2.0 suggest nonohmic behavior. ITO/CuPc/C(60)/BCP/Al OPVs built on nonactivated ITO show mainly nonohmic SCS maps and dark J-V curves with increased series resistance (R(S)), lowered fill-factors (FF), and diminished device performance, especially near the open-circuit voltage. Nearly optimal behavior is seen for OPVs built on oxygen-plasma-treated ITO contacts, which showed SCS maps comparable to heterojunctions of CuPc on clean Au. For ITO electrodes modified with PAs there is a strong correlation between PA chain length and the degree of ohmicity and uniformity of electrical response in ITO/CuPc heterojunctions. ITO electrodes modified with 6-8 carbon alkyl-PAs show uniform and nearly ohmic SCS maps, coupled with acceptable CuPc/C(60) OPV performance. ITO modified with C14 and C18 alkyl-PAs shows dramatic decreases in FF, increases in R(S), and greatly enhanced recombination losses.
Atwood, E.L.
1958-01-01
Response bias errors are studied by comparing questionnaire responses from waterfowl hunters using four large public hunting areas with actual hunting data from these areas during two hunting seasons. To the extent that the data permit, the sources of error in the responses were studied and the contribution of each type to the total error was measured. Response bias errors, including both prestige and memory bias, were found to be very large compared with non-response and sampling errors. Good fits were obtained between the seasonal kill distribution of the actual hunting data and the negative binomial distribution, and a good fit was obtained between the distribution of total season hunting activity and the semi-logarithmic curve. A comparison of the actual seasonal distributions with the questionnaire response distributions revealed that the prestige and memory bias errors are both positive. The comparisons also revealed the tendency for memory bias errors to occur at digit frequencies divisible by five and for prestige bias errors to occur at frequencies that are multiples of the legal daily bag limit. A graphical adjustment of the response distributions was carried out by developing a smooth curve from those frequency classes not included in the predictably biased frequency classes referred to above. Group averages were used in constructing the curve, as suggested by Ezekiel [1950]. The technique described is highly efficient at reducing response bias errors in hunter questionnaire responses on seasonal waterfowl kill in large samples. The graphical method is less efficient at removing response bias errors in responses on seasonal hunting activity, where an average of 60 percent of the error was removed.
Doona, Christopher J; Feeherry, Florence E; Ross, Edward W
2005-04-15
Predictive microbial models generally rely on the growth of bacteria in laboratory broth to approximate the microbial growth kinetics expected to take place in actual foods under identical environmental conditions. Sigmoidal functions such as the Gompertz or logistic equation accurately model the typical microbial growth curve from the lag to the stationary phase and provide the mathematical basis for estimating parameters such as the maximum growth rate (MGR). Stationary-phase data can begin to show a decline and make it difficult to discern which data to include in the analysis of the growth curve, a factor that influences the calculated values of the growth parameters. In contradistinction, the quasi-chemical kinetics model provides additional capabilities in microbial modelling and fits growth-death kinetics (all four phases of the microbial lifecycle, continuously) for a general set of microorganisms in a variety of actual food substrates. The quasi-chemical model is a set of ordinary differential equations (ODEs) derived from a hypothetical four-step chemical mechanism involving an antagonistic metabolite (quorum sensing), and it successfully fits the kinetics of pathogens (Staphylococcus aureus, Escherichia coli and Listeria monocytogenes) in various foods (bread, turkey meat, ham and cheese) as functions of different hurdles (a(w), pH, temperature and anti-microbial lactate). The calculated value of the MGR depends on whether growth-death data or only growth data are used in the fitting procedure. The quasi-chemical kinetics model is also exploited for use with the novel food processing technology of high-pressure processing. The high-pressure inactivation kinetics of E. coli are explored in a model food system over the pressure (P) range of 207-345 MPa (30,000-50,000 psi) and the temperature (T) range of 30-50 degrees C. At relatively low combinations of P and T, the inactivation curves are non-linear and exhibit a shoulder prior to a more rapid rate of microbial destruction. In the higher P, T regime, the inactivation plots tend to be linear. In all cases, the quasi-chemical model successfully fit the linear and curvilinear inactivation plots for E. coli in model food systems. The experimental data and the quasi-chemical mathematical model described herein are candidates for inclusion in ComBase, the developing database that combines data and models from the USDA Pathogen Modeling Program and the UK Food MicroModel.
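Since the abstract cites sigmoidal functions such as the Gompertz equation as the usual basis for estimating the MGR, here is a hedged sketch of a Gompertz fit in the Zwietering parameterization (asymptote A, maximum rate mu, lag lam); the growth data are synthetic, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    """Zwietering-form Gompertz: asymptote A (log10 increase), max rate mu, lag lam."""
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

t = np.linspace(0, 24, 13)                                    # hours
rng = np.random.default_rng(5)
y = gompertz(t, 3.5, 0.4, 4.0) + rng.normal(0, 0.05, t.size)  # synthetic log10(N/N0)

(A, mu, lam), _ = curve_fit(gompertz, t, y, p0=[3.0, 0.5, 3.0])
print(f"asymptote = {A:.2f} log10, MGR = {mu:.2f} log10/h, lag = {lam:.1f} h")
```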
Development of a program to fit data to a new logistic model for microbial growth.
Fujikawa, Hiroshi; Kano, Yoshihiro
2009-06-01
Recently we developed a mathematical model for microbial growth in food. The model successfully predicted microbial growth at various patterns of temperature. In this study, we developed a program to fit data to the model with a spread sheet program, Microsoft Excel. Users can instantly get curves fitted to the model by inputting growth data and choosing the slope portion of a curve. The program also could estimate growth parameters including the rate constant of growth and the lag period. This program would be a useful tool for analyzing growth data and further predicting microbial growth.
Zhu, Mingping; Chen, Aiqing
2017-01-01
This study aimed to compare within-subject blood pressure (BP) variabilities between different measurement techniques. Cuff pressures from three repeated BP measurements were obtained from 30 normotensive and 30 hypertensive subjects. Automatic BPs were determined from the pulses whose normalised peak amplitude exceeded a threshold (0.5 for SBP, 0.7 for DBP, and 1.0 for MAP). They were also determined from the cuff pressures associated with the same thresholds on a polynomial curve fitted to the oscillometric pulse peaks. Finally, the standard deviation (SD) of the three repeats and its coefficient of variability (CV) were compared between the two automatic techniques. For the normotensive group, polynomial curve fitting significantly reduced the SD of repeats from 3.6 to 2.5 mmHg for SBP and from 3.7 to 2.1 mmHg for MAP, and reduced the CV from 3.0% to 2.2% for SBP and from 4.3% to 2.4% for MAP (all P < 0.01). For the hypertensive group, the SD of repeats decreased from 6.5 to 5.5 mmHg for SBP and from 6.7 to 4.2 mmHg for MAP, and the CV decreased from 4.2% to 3.6% for SBP and from 5.8% to 3.8% for MAP (all P < 0.05). In conclusion, polynomial curve fitting of oscillometric pulses reduced automatic BP measurement variability. PMID:28785580
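A rough sketch of the polynomial-envelope idea: fit a polynomial to the oscillometric pulse-peak amplitudes versus cuff pressure and read SBP, DBP, and MAP off the fitted curve at the stated amplitude ratios. The deflation data and the polynomial order are assumptions for illustration.

```python
import numpy as np

# Hypothetical deflation data: cuff pressure (mmHg) and oscillometric pulse-peak amplitude
cuff_p = np.array([160, 150, 140, 130, 120, 110, 100, 90, 80, 70, 60])
amp    = np.array([0.15, 0.30, 0.55, 0.80, 0.97, 1.00, 0.90, 0.75, 0.55, 0.35, 0.20])

env = np.poly1d(np.polyfit(cuff_p, amp, 4))    # 4th-order polynomial envelope
grid = np.linspace(cuff_p.min(), cuff_p.max(), 2000)
ratio = env(grid) / env(grid).max()

map_bp = grid[np.argmax(env(grid))]            # MAP at the envelope maximum (ratio 1.0)
above, below = grid > map_bp, grid < map_bp
sbp = grid[above][np.argmin(np.abs(ratio[above] - 0.5))]   # SBP: ratio 0.5 above MAP
dbp = grid[below][np.argmin(np.abs(ratio[below] - 0.7))]   # DBP: ratio 0.7 below MAP
print(f"SBP ~ {sbp:.0f} mmHg, MAP ~ {map_bp:.0f} mmHg, DBP ~ {dbp:.0f} mmHg")
```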
A Multi-year Multi-passband CCD Photometric Study of the W UMa Binary EQ Tauri
NASA Astrophysics Data System (ADS)
Alton, K. B.
2009-12-01
A revised ephemeris and updated orbital period for EQ Tau have been determined from newly acquired (2007-2009) CCD-derived photometric data. A Roche-type model based on the Wilson-Devinney code produced simultaneous theoretical fits of light curve data in three passbands by invoking cold spots on the primary component. These new model fits, along with similar light curve data for EQ Tau collected during the previous six seasons (2000-2006), provided a rare opportunity to follow the seasonal appearance of star spots on a W UMa binary system over nine consecutive years. Fixed values for q, Ω1,2, T1, T2, and i based upon the mean of eleven separately determined model fits produced for this system are hereafter proposed for future light curve modeling of EQ Tau. With the exception of the 2001 season, all other light curves produced since then required a spotted solution to address the flux asymmetry exhibited by this binary system at Max I and Max II. At least one cold spot on the primary appears in seven out of twelve light curves for EQ Tau produced over the last nine years, whereas in six instances two cold spots on the primary star were invoked to improve the model fit. Solutions using a hot spot were less common and involved positioning a single spot on the primary constituent during the 2001-2002, 2002-2003, and 2005-2006 seasons.
Method and apparatus for air-coupled transducer
NASA Technical Reports Server (NTRS)
Song, Junho (Inventor); Chimenti, Dale E. (Inventor)
2010-01-01
An air-coupled transducer includes an ultrasonic transducer body with a backing fixture at its radiation end. There is a flexible backplate conformingly fit to the backing fixture and a thin membrane (preferably a metallized polymer) conformingly fit to the flexible backplate. In one embodiment, the backing fixture and the flexible backplate are both spherically curved. The flexible backplate is preferably patterned with pits or depressions.
Tromberg, B.J.; Tsay, T.T.; Berns, M.W.; Svaasand, L.O.; Haskell, R.C.
1995-06-13
Optical measurement of turbid media, that is, media characterized by multiple light scattering, is provided through an apparatus and method for exposing a sample to a modulated laser beam. The light beam is modulated at a fundamental frequency and at a plurality of integer harmonics thereof. Modulated light is returned from the sample and preferentially detected at cross (beat) frequencies slightly higher than the fundamental frequency and its integer harmonics. The received radiance at the beat frequencies is compared against a reference signal to provide a measure of the phase lag of the radiance and the modulation ratio relative to a reference beam. The phase and modulation amplitude are then provided as a frequency spectrum by an array processor, to which a computer applies a complete curve fit in the case of highly scattering samples or a linear curve fit below a predetermined frequency in the case of highly absorptive samples. The curve fit in any case is determined by the absorption and scattering coefficients together with the concentration of the active substance in the sample. Therefore, curve fitting to the frequency spectrum can be used for both qualitative and quantitative analysis of substances in the sample even though the sample is highly turbid.
Kargarian-Marvasti, Sadegh; Rimaz, Shahnaz; Abolghasemi, Jamileh; Heydari, Iraj
2017-01-01
The Cox proportional hazards model is the most common method for analyzing the effects of several variables on survival time. However, under certain circumstances, parametric models give more precise estimates than Cox regression. The purpose of this study was to compare the performance of Cox and parametric models in a survival analysis of factors affecting the event time of neuropathy in patients with type 2 diabetes. This study included 371 patients with type 2 diabetes without neuropathy who were registered at Fereydunshahr diabetes clinic. Subjects were followed up for the development of neuropathy from 2006 to March 2016. To investigate the factors influencing the event time of neuropathy, variables significant in the univariate model (P < 0.20) were entered into the multivariate Cox and parametric models (P < 0.05). In addition, the Akaike information criterion (AIC) and the area under the ROC curve were used to evaluate the relative goodness of fit of the models and the efficiency of each procedure, respectively. Statistical computing was performed using R software version 3.2.3 (UNIX platforms, Windows and MacOS). Using the Kaplan-Meier method, the survival time to neuropathy was estimated as 76.6 ± 5 months after the initial diagnosis of diabetes. After multivariate analysis with the Cox and parametric models, ethnicity, high-density lipoprotein and family history of diabetes were identified as predictors of the event time of neuropathy (P < 0.05). According to the AIC, the log-normal model, with the lowest value, was the best-fitting model among the Cox and parametric models. According to the comparison of survival receiver operating characteristic curves, the log-normal model was considered the most efficient and best-fitting model.
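A minimal sketch of such a model comparison, assuming the Python lifelines package (whose CoxPHFitter exposes AIC_partial_ and whose parametric AFT fitters expose AIC_); the toy data frame stands in for the registry covariates. One caveat worth noting: a Cox partial-likelihood AIC is not strictly comparable with full-likelihood AICs, so the cleanest comparison is among the parametric candidates.

```python
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter, LogNormalAFTFitter

# Toy stand-in for the registry: months to neuropathy (T), event flag (E), two covariates
df = pd.DataFrame({
    "T":   [12, 45, 76, 30, 88, 54, 23, 67, 90, 41, 60, 15],
    "E":   [1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1],
    "hdl": [38, 52, 41, 35, 60, 44, 33, 55, 58, 40, 49, 36],
    "fam": [1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1],
})

cox = CoxPHFitter().fit(df, duration_col="T", event_col="E")
wei = WeibullAFTFitter().fit(df, duration_col="T", event_col="E")
lgn = LogNormalAFTFitter().fit(df, duration_col="T", event_col="E")

print("Cox partial AIC:", round(cox.AIC_partial_, 1))
print("Weibull AIC:    ", round(wei.AIC_, 1))
print("Log-normal AIC: ", round(lgn.AIC_, 1))   # lowest AIC = preferred parametric model
```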
Fitting Photometry of Blended Microlensing Events
NASA Astrophysics Data System (ADS)
Thomas, Christian L.; Griest, Kim
2006-03-01
We reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g., Woźniak & Paczyński) that this is a difficult proposition because of the degeneracy of the blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (peak region and wings) of high-magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction, and we study the importance of non-Gaussian errors in the fit results. The bias and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate microlensing optical depth.
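For reference, a blended point-lens (Paczyński) light curve can be written as F(t) = fs·A(u(t)) + (1 - fs), with A(u) = (u² + 2)/(u·√(u² + 4)) and u(t) = √(u0² + ((t - t0)/tE)²). The sketch below fits this model to synthetic photometry; it illustrates the blend-fraction fit itself, not the paper's degeneracy analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def blended_lc(t, t0, u0, tE, fs):
    """Paczynski point-lens curve with blend fraction fs, normalised to unit baseline."""
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
    A = (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))
    return fs * A + (1.0 - fs)

t = np.linspace(-60.0, 60.0, 300)                      # days, synthetic sampling
rng = np.random.default_rng(0)
flux = blended_lc(t, 0.0, 0.15, 25.0, 0.6) + rng.normal(0, 0.01, t.size)

popt, _ = curve_fit(blended_lc, t, flux, p0=[1.0, 0.3, 20.0, 0.9],
                    bounds=([-30, 0.01, 1.0, 0.0], [30, 2.0, 100.0, 1.0]))
print("t0, u0, tE, fs =", np.round(popt, 3))
```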
Zhou, Wu
2014-01-01
The accurate contour delineation of the target and/or organs at risk (OAR) is essential in treatment planning for image-guided radiation therapy (IGRT). Although many automatic contour delineation approaches have been proposed, few of them fulfill the requirements of applications in terms of accuracy and efficiency. Moreover, clinicians would like to analyze the characteristics of regions of interest (ROIs) and adjust contours manually during IGRT, so an interactive tool for contour delineation is necessary in such cases. In this work, a novel curve-fitting approach for interactive contour delineation is proposed. It allows users to quickly improve contours with a simple mouse click. Initially, a region containing the object of interest is selected in the image; the program then automatically selects important control points from the region boundary, and Hermite cubic curves are used to fit the control points. The optimized curve can then be revised by moving its control points interactively. Several curve-fitting methods are presented for comparison. Finally, to improve the accuracy of contour delineation, a curve refinement based on the maximum gradient magnitude is proposed: all points on the curve are automatically moved towards the positions of maximum gradient magnitude. Experimental results show that Hermite cubic curves and the gradient-based curve refinement perform best on the proposed platform in terms of accuracy, robustness, and computation time. Experimental results on real medical images demonstrate the efficiency, accuracy, and robustness of the proposed process in clinical applications. PACS number: 87.53.Tf PMID:24423846
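A minimal sketch of closed-contour construction with Hermite cubic curves, using Catmull-Rom-style tangents (half the central difference of neighbouring control points); the control points are hypothetical mouse clicks, and the tangent rule is an assumption, since the paper does not specify one.

```python
import numpy as np

def hermite_segment(p0, p1, m0, m1, n=20):
    """Evaluate one cubic Hermite segment from p0 to p1 with end tangents m0, m1."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2*t**3 - 3*t**2 + 1       # Hermite basis functions
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p0 + h10*m0 + h01*p1 + h11*m1

def closed_hermite(points, n=20):
    """Closed contour through control points with Catmull-Rom tangents."""
    P = np.asarray(points, float)
    m = 0.5 * (np.roll(P, -1, axis=0) - np.roll(P, 1, axis=0))
    segs = [hermite_segment(P[i], P[(i+1) % len(P)], m[i], m[(i+1) % len(P)], n)
            for i in range(len(P))]
    return np.vstack(segs)

ctrl = [(0, 0), (4, 1), (6, 4), (3, 6), (-1, 4)]   # hypothetical clicked control points
curve = closed_hermite(ctrl)
print(curve.shape)   # (100, 2): densely sampled closed contour
```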
NASA Astrophysics Data System (ADS)
Salim, Samir; Boquien, Médéric; Lee, Janice C.
2018-05-01
We study the dust attenuation curves of 230,000 individual galaxies in the local universe, ranging from quiescent to intensely star-forming systems, using GALEX, SDSS, and WISE photometry calibrated on the Herschel ATLAS. We use a new method of constraining SED fits with infrared luminosity (SED+LIR fitting), and parameterized attenuation curves determined with the CIGALE SED-fitting code. Attenuation curve slopes and UV bump strengths are reasonably well constrained independently from one another. We find that Aλ/AV attenuation curves exhibit a very wide range of slopes that are on average as steep as the curve slope of the Small Magellanic Cloud (SMC). The slope is a strong function of optical opacity. Opaque galaxies have shallower curves, in agreement with recent radiative transfer models. The dependence of slopes on the opacity produces an apparent dependence on stellar mass: more massive galaxies have shallower slopes. Attenuation curves exhibit a wide range of UV bump amplitudes, from none to Milky Way (MW)-like, with an average strength one-third that of the MW bump. Notably, local analogs of high-redshift galaxies have an average curve that is somewhat steeper than the SMC curve, with a modest UV bump that can be, to first order, ignored, as its effect on the near-UV magnitude is 0.1 mag. Neither the slopes nor the strengths of the UV bump depend on gas-phase metallicity. Functional forms for attenuation laws are presented for normal star-forming galaxies, high-z analogs, and quiescent galaxies. We release the catalog of associated star formation rates and stellar masses (GALEX-SDSS-WISE Legacy Catalog 2).
Characterizing the UV-to-NIR shape of the dust attenuation curve of IR luminous galaxies up to z ˜ 2
NASA Astrophysics Data System (ADS)
Lo Faro, B.; Buat, V.; Roehlly, Y.; Alvarez-Marquez, J.; Burgarella, D.; Silva, L.; Efstathiou, A.
2017-12-01
In this work, we investigate the far-ultraviolet (UV) to near-infrared (NIR) shape of the dust attenuation curve of a sample of IR-selected dust obscured (ultra)luminous IR galaxies at z ∼ 2. The spectral energy distributions (SEDs) are fitted with Code Investigating GALaxy Emission, a physically motivated spectral-synthesis model based on energy balance. Its flexibility allows us to test a wide range of different analytical prescriptions for the dust attenuation curve, including the well-known Calzetti and Charlot & Fall curves, and modified versions of them. The attenuation curves computed under the assumption of our reference double power-law model are in very good agreement with those derived, in previous works, with radiative transfer (RT) SED fitting. We investigate the position of our galaxies in the IRX-β diagram and find this to be consistent with greyer slopes, on average, in the UV. We also find evidence for a flattening of the attenuation curve in the NIR with respect to more classical Calzetti-like recipes. This larger NIR attenuation yields larger derived stellar masses from SED fitting, by a median factor of ∼1.4 and up to a factor ∼10 for the most extreme cases. The star formation rate appears instead to be more dependent on the total amount of attenuation in the galaxy. Our analysis highlights the need for a flexible attenuation curve when reproducing the physical properties of a large variety of objects.
Measurements and modeling of long-path 12CH4 spectra in the 4800-5300 cm-1 region
NASA Astrophysics Data System (ADS)
Nikitin, A. V.; Thomas, X.; Régalia, L.; Daumont, L.; Rey, M.; Tashkun, S. A.; Tyuterev, Vl. G.; Brown, L. R.
2014-05-01
A new study of 12CH4 line positions and intensities was performed for the lower portion of the Tetradecad region between 4800 and 5300 cm-1 using long-path (1603 m) spectra of a normal sample of CH4 at three pressures, recorded with the Fourier transform spectrometer in Reims, France. Line positions and intensities were retrieved by least-squares curve-fitting procedures and analyzed using the effective Hamiltonian and the effective dipole moment expressed in terms of irreducible tensor operators adapted to spherical-top molecules. An existing spectrum of enriched 13CH4 was used to discern the isotopic lines. A new measured linelist produced positions and intensities for 5851 features (a factor of two more than prior work). Assignments were made for 46% of these; 2725 experimental line positions and 1764 selected line intensities were fitted with RMS standard deviations of 0.004 cm-1 and 7.3%, respectively. The RMS of prior intensity fits of the lower Tetradecad was a factor of two worse. The sum of observed intensities between 4800 and 5300 cm-1 fell within 5% of the value predicted from variational calculations.
Text vectorization based on character recognition and character stroke modeling
NASA Astrophysics Data System (ADS)
Fan, Zhigang; Zhou, Bingfeng; Tse, Francis; Mu, Yadong; He, Tao
2014-03-01
In this paper, a text vectorization method is proposed using OCR (Optical Character Recognition) and character stroke modeling. This is based on the observation that for a particular character, its font glyphs may have different shapes, but often share same stroke structures. Like many other methods, the proposed algorithm contains two procedures, dominant point determination and data fitting. The first one partitions the outlines into segments and second one fits a curve to each segment. In the proposed method, the dominant points are classified as "major" (specifying stroke structures) and "minor" (specifying serif shapes). A set of rules (parameters) are determined offline specifying for each character the number of major and minor dominant points and for each dominant point the detection and fitting parameters (projection directions, boundary conditions and smoothness). For minor points, multiple sets of parameters could be used for different fonts. During operation, OCR is performed and the parameters associated with the recognized character are selected. Both major and minor dominant points are detected as a maximization process as specified by the parameter set. For minor points, an additional step could be performed to test the competing hypothesis and detect degenerated cases.
Horn, P L; Neil, H L; Paul, L J; Marriott, P
2010-11-01
Age validation of bluenose Hyperoglyphe antarctica was sought using the independent bomb-radiocarbon chronometer procedure. Radiocarbon (14C) levels were measured in core micro-samples from 12 otoliths that had been aged using a zone-count method. The core 14C measurement for each fish was compared with the value on a surface-water reference curve for the calculated birth year of the fish. There was good agreement, indicating that the line-count ageing method described here is not substantially biased. A second micro-sample was also taken near the edge of nine of the otolith cross-sections to help define a bomb-carbon curve for waters deeper than 200-300 m. There appears to be a 10 to 15 year lag in the time it takes the 14C to reach the waters where adult H. antarctica are concentrated. The maximum estimated age of this species was 76 years, and females grow significantly larger than males. Von Bertalanffy growth curves were estimated, and although they fit the available data reasonably well, the lack of aged juvenile fish renders the K and t0 parameters biologically meaningless. Consequently, curves likely to better represent population growth were estimated by forcing t0 to be -0.5.
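A hedged sketch of the final fitting step, forcing t0 = -0.5 as described; the age-length pairs are invented for illustration and are not the paper's otolith data.

```python
import numpy as np
from scipy.optimize import curve_fit

T0 = -0.5   # fixed, as in the study, to compensate for the missing aged juveniles

def vb_fixed_t0(age, linf, k):
    """Von Bertalanffy growth with t0 forced: L = Linf * (1 - exp(-K * (age - t0)))."""
    return linf * (1.0 - np.exp(-k * (age - T0)))

age = np.array([8, 12, 18, 25, 33, 42, 55, 70])       # hypothetical otolith ages (yr)
length = np.array([54, 63, 71, 77, 80, 82, 83, 84])   # hypothetical fish lengths (cm)

(linf, k), _ = curve_fit(vb_fixed_t0, age, length, p0=[85.0, 0.1])
print(f"Linf = {linf:.1f} cm, K = {k:.3f}/yr (t0 fixed at {T0})")
```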
Sperlich, Alexander; Werner, Arne; Genz, Arne; Amy, Gary; Worch, Eckhard; Jekel, Martin
2005-03-01
Breakthrough curves (BTCs) for the adsorption of arsenate and salicylic acid onto granulated ferric hydroxide (GFH) in fixed-bed adsorbers were experimentally determined and modeled using the homogeneous surface diffusion model (HSDM). The input parameters for the HSDM, the Freundlich isotherm constants and the mass transfer coefficients for film and surface diffusion, were experimentally determined. The BTC for salicylic acid revealed a shape typical of trace organic compound adsorption onto activated carbon, and model results agreed well with the experimental curves. Unlike salicylic acid, the arsenate BTCs showed a non-ideal shape, leveling off at c/c0 of approximately 0.6. Model results based on the experimentally derived parameters over-predicted the point of arsenic breakthrough for all simulated curves, lab-scale or full-scale, and were unable to capture the shape of the curve. Using a much lower surface diffusion coefficient D(S) for modeling improved the fit of the later stages of the BTC shape, pointing to a time-dependent D(S). The mechanism of this time dependence is still unknown. Surface precipitation is discussed as a possible removal mechanism for arsenate besides pure adsorption, interfering with the determination of the Freundlich constants and D(S). Rapid small-scale column tests (RSSCTs) proved to be a powerful experimental alternative to the modeling procedure for arsenic.
SEEK: A FORTRAN optimization program using a feasible directions gradient search
NASA Technical Reports Server (NTRS)
Savage, M.
1995-01-01
This report describes the use of the computer program SEEK, which works in conjunction with two user-written subroutines and an input data file to perform an optimization procedure on a user's problem. The optimization method uses a modified feasible-directions gradient technique. SEEK is written in ANSI standard Fortran 77, has an object size of about 46K bytes, and can be used on a personal computer running DOS. The report discusses the optimization method and illustrates the program's use with four example problems: a bushing design, a helical coil spring design, a gear mesh design, and a two-parameter Weibull life-reliability curve fit.
Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis
2014-01-01
The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175
Phytoplankton productivity in relation to light intensity: A simple equation
Peterson, D.H.; Perry, M.J.; Bencala, K.E.; Talbot, M.C.
1987-01-01
A simple exponential equation is used to describe photosynthetic rate as a function of light intensity for a variety of unicellular algae and higher plants, where photosynthesis is proportional to (1 - e^(-ΨI)). The parameter Ψ (= Ik^(-1)) is derived by a simultaneous curve-fitting method, where I is incident quantum-flux density. The exponential equation is tested against a wide range of data and is found to adequately describe P vs. I curves. The errors associated with the photosynthetic parameters are calculated. A simplified statistical (Poisson) model of photon capture provides a biophysical basis for the equation and for its ability to fit a range of light intensities. The exponential equation provides a non-subjective, simultaneous curve-fitting estimate of photosynthetic efficiency (α) that is less ambiguous than subjective methods, which assume that a linear region of the P vs. I curve is readily identifiable. The photosynthetic parameters Ψ and α are widely used in aquatic studies to define photosynthesis at low quantum flux. These parameters are particularly important in estuarine environments, where high suspended-material concentrations and high diffuse-light extinction coefficients are commonly encountered.
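A minimal sketch of the simultaneous curve fit, assuming the reconstructed form P = Pmax·(1 - e^(-ΨI)), so that the efficiency α = Pmax·Ψ falls out of the fit rather than from a subjectively chosen linear region; the irradiance-photosynthesis pairs are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def pi_curve(I, pmax, psi):
    """Exponential P-I curve: P = Pmax * (1 - exp(-psi * I)), with psi = 1/Ik."""
    return pmax * (1.0 - np.exp(-psi * I))

I = np.array([0, 25, 50, 100, 200, 400, 800, 1600], float)   # quantum flux, synthetic
P = np.array([0, 1.9, 3.4, 5.6, 8.1, 9.6, 9.9, 10.0])        # photosynthetic rate, synthetic

(pmax, psi), _ = curve_fit(pi_curve, I, P, p0=[10.0, 0.01])
alpha = pmax * psi    # initial-slope efficiency, estimated from the fit, not by eye
print(f"Pmax = {pmax:.2f}, psi = {psi:.4f}, alpha = {alpha:.4f}")
```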
The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting
NASA Astrophysics Data System (ADS)
Tao, Zhang; Li, Zhang; Dingjun, Chen
On the basis of the idea of quadratic (second-order) curve fitting, the number and scale of Chinese E-commerce sites are analyzed. A preventing-increase model is introduced in this paper, and the model parameters are solved with the Matlab software. The validity of the preventing-increase model is confirmed through a numerical experiment. The experimental results show that the precision of the preventing-increase model is satisfactory.
Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting
NASA Technical Reports Server (NTRS)
Badavi, F. F.; Everhart, Joel L.
1987-01-01
This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine based on P. R. Bevington's description of the quadratic expansion of the chi-square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of chi-square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived and then solved by matrix algebra. A brief description of the code is presented, along with the limited number of changes required to customize the program for a particular task. To evaluate the performance of the method and the goodness of nonlinear curve fitting, two typical engineering problems are examined and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.
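A compact sketch of the underlying iteration: linearize the model, build Bevington's curvature matrix alpha = JᵀJ and gradient vector beta = Jᵀr from a numerical Jacobian, and solve the resulting simultaneous linear equations for the parameter increment. The demo model and data are assumptions, not the report's examples.

```python
import numpy as np

def gauss_newton_step(f, params, x, y, sigma, eps=1e-6):
    """One iteration of the quadratic chi-square expansion: solve alpha @ da = beta."""
    p = np.asarray(params, float)
    resid = (y - f(x, *p)) / sigma
    J = np.empty((x.size, p.size))
    for k in range(p.size):                      # numerical Jacobian of the model
        dp = np.zeros_like(p)
        dp[k] = eps * max(1.0, abs(p[k]))
        J[:, k] = (f(x, *(p + dp)) - f(x, *(p - dp))) / (2 * dp[k] * sigma)
    alpha = J.T @ J                              # curvature matrix
    beta = J.T @ resid                           # gradient vector
    return p + np.linalg.solve(alpha, beta)

# Demo on a decaying exponential with constant measurement error
model = lambda x, a, b: a * np.exp(-b * x)
x = np.linspace(0, 5, 40)
y = model(x, 3.0, 0.8) + np.random.default_rng(2).normal(0, 0.05, x.size)
p = np.array([1.0, 0.3])
for _ in range(10):
    p = gauss_newton_step(model, p, x, y, sigma=0.05)
print("fitted a, b:", np.round(p, 3))
```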
Videodensitometric Methods for Cardiac Output Measurements
NASA Astrophysics Data System (ADS)
Mischi, Massimo; Kalker, Ton; Korsten, Erik
2003-12-01
Cardiac output is often measured by indicator dilution techniques, usually based on dye or cold saline injections. The development of more stable ultrasound contrast agents (UCAs) is leading to new noninvasive indicator dilution methods. However, several problems concerning the interpretation of dilution curves as detected by ultrasound transducers have arisen. This paper presents a method for blood flow measurements based on UCA dilution. Dilution curves are determined by real-time densitometric analysis of the video output of an ultrasound scanner and are automatically fitted with the Local Density Random Walk model. A new fitting algorithm based on multiple linear regression is developed. Calibration, that is, the relation between videodensity and UCA concentration, is modelled by in vitro experimentation. The flow measurement system is validated by in vitro perfusion of SonoVue contrast agent. The results show an accurate dilution curve fit and flow estimation, with coefficients of determination larger than 0.95 and 0.99, respectively.
NASA Technical Reports Server (NTRS)
Johnson, T. J.; Harding, A. K.; Venter, C.
2012-01-01
Pulsed gamma rays have been detected with the Fermi Large Area Telescope (LAT) from more than 20 millisecond pulsars (MSPs), some of which were discovered in radio observations of bright, unassociated LAT sources. We have fit the radio and gamma-ray light curves of 19 LAT-detected MSPs in the context of geometric, outer-magnetospheric emission models assuming the retarded vacuum dipole magnetic field, using a Markov chain Monte Carlo maximum-likelihood technique. We find that, in many cases, the models are able to reproduce the observed light curves well and provide constraints on the viewing geometries that are in agreement with those from radio polarization measurements. Additionally, for some MSPs we constrain the altitudes of both the gamma-ray and radio emission regions. The best-fit magnetic inclination angles are found to cover a broader range than those of non-recycled gamma-ray pulsars.
Batsoulis, A N; Nacos, M K; Pappas, C S; Tarantilis, P A; Mavromoustakos, T; Polissiou, M G
2004-02-01
Hemicellulose samples were isolated from kenaf (Hibiscus cannabinus L.). Hemicellulosic fractions usually contain a variable percentage of uronic acids. The uronic acid content (expressed as polygalacturonic acid) of the isolated hemicelluloses was determined by diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method. A linear relationship between uronic acid content and the sum of the peak areas at 1745, 1715, and 1600 cm-1 was established, with a high correlation coefficient (0.98). The deconvolution analysis using the curve-fitting method allowed the elimination of spectral interferences from other cell-wall components. The method was compared with an established spectrophotometric method and found equivalent in accuracy and repeatability (t-test, F-test). It is applicable to the analysis of natural or synthetic mixtures and/or crude substances, and is simple, rapid, and nondestructive to the samples.
Kanter, Michael H; Huang, Yii-Chieh; Kally, Zina; Gordon, Margo A; Meltzer, Charles
2018-06-01
A well-documented association exists between higher surgeon volumes and better outcomes for many procedures, but surgeons may be reluctant to change practice patterns without objective, credible, and near real-time data on their performance. In addition, published thresholds for procedure volumes may be biased or perceived as arbitrary; typical reports compare surgeons grouped into discrete procedure-volume categories, even though the volume-outcomes relationship is likely continuous. The concentration curves methodology, which has been used to analyze whether health outcomes vary with socioeconomic status, was adapted to explore the association between procedure volume and outcomes as a continuous relationship, so that data for all surgeons within a health care organization could be included. Using widely available software and requiring minimal analytic expertise, this approach plots cumulative percentages of two variables of interest against each other and assesses the characteristics of the resulting curve. Organization-specific relationships between surgeon volumes and outcomes were examined for three example types of procedures: uncomplicated hysterectomies, infant circumcisions, and total thyroidectomies. The concentration index was used to assess whether outcomes were equally distributed irrespective of volumes. For all three procedures, the concentration curve methodology identified associations between surgeon procedure volumes and selected outcomes that were specific to the organization. The concentration indices confirmed the higher prevalence of the examined outcomes among low-volume surgeons. The curves supported organizational discussions about surgical quality. Concentration curves require minimal resources to identify organization- and procedure-specific relationships between surgeon procedure volumes and outcomes and can support quality improvement.
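A minimal sketch of the adapted method, under the sign convention (an assumption here) that a positive index means the examined outcomes are concentrated among low-volume surgeons; the volumes and complication counts are invented.

```python
import numpy as np

def concentration_index(volumes, events):
    """Rank surgeons by volume, accumulate shares of procedures and outcomes, and
    return twice the area between the concentration curve and the diagonal."""
    order = np.argsort(volumes)
    v = np.asarray(volumes, float)[order]
    e = np.asarray(events, float)[order]
    x = np.concatenate([[0.0], np.cumsum(v) / v.sum()])   # cumulative share of procedures
    y = np.concatenate([[0.0], np.cumsum(e) / e.sum()])   # cumulative share of outcomes
    gap = y - x
    return 2.0 * float(np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(x)))  # trapezoid rule

# Hypothetical surgeons: annual volumes and complication counts
vol = [5, 8, 12, 20, 35, 60, 90]
complications = [4, 5, 5, 6, 5, 4, 3]
print(f"concentration index = {concentration_index(vol, complications):+.3f}")
# positive here: complications fall disproportionately on low-volume surgeons
```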
Observational evidence of dust evolution in galactic extinction curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cecchi-Pestellini, Cesare; Casu, Silvia; Mulas, Giacomo
Although structural and optical properties of hydrogenated amorphous carbons are known to respond to varying physical conditions, most conventional extinction models are basically curve fits with modest predictive power. We compare an evolutionary model of the physical properties of carbonaceous grain mantles with their determination by homogeneously fitting observationally derived Galactic extinction curves with the same physically well-defined dust model. We find that a large sample of observed Galactic extinction curves are compatible with the evolutionary scenario underlying such a model, requiring physical conditions fully consistent with the standard density, temperature, radiation field intensity, and average age of diffuse interstellar clouds. Hence, through the study of interstellar extinction we may, in principle, understand the evolutionary history of the diffuse interstellar clouds.
UTM, a universal simulator for lightcurves of transiting systems
NASA Astrophysics Data System (ADS)
Deeg, Hans
2009-02-01
The Universal Transit Modeller (UTM) is a light-curve simulator for all kinds of transiting or eclipsing configurations between arbitrary numbers of several types of objects, which may be stars, planets, planetary moons, and planetary rings. Applications of UTM to date have mainly been in the generation of light curves for the testing of detection algorithms. For the preparation of such tests for the Corot mission, a special version has been used to generate multicolour light curves in Corot's passbands. A separate fitting program, UFIT (Universal Fitter), is part of the UTM distribution and may be used to derive best fits to light curves for any set of continuously variable parameters. UTM/UFIT is written in IDL and its source is released in the public domain under the GNU General Public License.
The effect of semirigid dressings on below-knee amputations.
MacLean, N; Fick, G H
1994-07-01
The effect of using semirigid dressings (SRDs) on the residual limb of individuals who have had below-knee amputations as a consequence of peripheral vascular disease was investigated, with the primary question being: Does the time to readiness for prosthetic fitting for patients treated with the SRDs differ from that of patients treated with soft dressings? Forty patients entered the study and were alternately assigned to one of two groups. Nineteen patients were assigned to the SRD group, and 21 patients were assigned to the soft dressing group. The time from surgery to readiness for prosthetic fitting was recorded for each patient. Kaplan-Meier survival curves were generated for each group, and the results were analyzed with the log-rank test. There was a difference between the two curves, and an examination of the curves suggests that the expected time to readiness for prosthetic fitting for patients treated with the SRDs would be less than half that of patients treated with soft dressings. The results suggest that a patient may be ready for prosthetic fitting sooner if treated with SRDs instead of soft dressings.
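A hedged sketch of the survival comparison, assuming the Python lifelines package; the times to readiness for prosthetic fitting are synthetic and, for simplicity, treated as fully observed (no censoring).

```python
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical days from surgery to readiness for prosthetic fitting
srd  = [28, 31, 35, 40, 42, 45, 50, 52, 55, 60]           # semirigid dressing group
soft = [60, 72, 80, 85, 90, 95, 100, 110, 120, 130]       # soft dressing group

kmf = KaplanMeierFitter()
kmf.fit(srd, label="semirigid dressing")
median_srd = kmf.median_survival_time_
kmf.fit(soft, label="soft dressing")
median_soft = kmf.median_survival_time_

res = logrank_test(srd, soft)
print(f"median time to fitting: SRD {median_srd} d vs soft {median_soft} d, "
      f"log-rank p = {res.p_value:.4f}")
```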
Derivation of error sources for experimentally derived heliostat shapes
NASA Astrophysics Data System (ADS)
Cumpston, Jeff; Coventry, Joe
2017-06-01
Data gathered using photogrammetry that represents the surface and structure of a heliostat mirror panel is investigated in detail. A curve-fitting approach that allows the retrieval of four distinct mirror error components, while prioritizing the best fit possible to paraboloidal terms in the curve fitting equation, is presented. The angular errors associated with each of the four surfaces are calculated, and the relative magnitude for each of them is given. It is found that in this case, the mirror had a significant structural twist, and an estimate of the improvement to the mirror surface quality in the case of no twist was made.
Kukke, Sahana N; Paine, Rainer W; Chao, Chi-Chao; de Campos, Ana C; Hallett, Mark
2014-06-01
The purpose of this study is to develop a method to reliably characterize multiple features of the corticospinal system in a more efficient manner than typically done in transcranial magnetic stimulation studies. Forty transcranial magnetic stimulation pulses of varying intensity were given over the first dorsal interosseous motor hot spot in 10 healthy adults. The first dorsal interosseous motor-evoked potential size was recorded during rest and activation to create recruitment curves. The Boltzmann sigmoidal function was fit to the data, and parameters relating to maximal motor-evoked potential size, curve slope, and stimulus intensity leading to half-maximal motor-evoked potential size were computed from the curve fit. Good to excellent test-retest reliability was found for all corticospinal parameters at rest and during activation with 40 transcranial magnetic stimulation pulses. Through the use of curve fitting, important features of the corticospinal system can be determined with fewer stimuli than typically used for the same information. Determining the recruitment curve provides a basis to understand the state of the corticospinal system and select subject-specific parameters for transcranial magnetic stimulation testing quickly and without unnecessary exposure to magnetic stimulation. This method can be useful in individuals who have difficulty in maintaining stillness, including children and patients with motor disorders.
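The Boltzmann sigmoidal function referred to above is commonly parameterized as MEP(I) = MEPmax / (1 + e^((I50 - I)/k)), whose maximal slope MEPmax/(4k) occurs at I50. A sketch with synthetic recruitment data:

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(i, mep_max, i50, k):
    """Boltzmann sigmoid: MEP amplitude vs. stimulus intensity (% stimulator output)."""
    return mep_max / (1.0 + np.exp((i50 - i) / k))

intensity = np.linspace(35, 85, 40)            # hypothetical stimulus intensities (%)
rng = np.random.default_rng(7)
mep = boltzmann(intensity, 2.4, 55.0, 4.0) * rng.lognormal(0, 0.2, intensity.size)  # mV

(mep_max, i50, k), _ = curve_fit(boltzmann, intensity, mep, p0=[2.0, 55.0, 5.0])
peak_slope = mep_max / (4.0 * k)    # maximum recruitment-curve slope, reached at I50
print(f"MEPmax = {mep_max:.2f} mV, I50 = {i50:.1f}%, peak slope = {peak_slope:.3f} mV/%")
```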
Mattucci, Stephen F E; Cronin, Duane S
2015-01-01
Experimental testing of cervical spine ligaments provides important data for advanced numerical modeling and injury prediction; however, accurate characterization of individual ligament response and determination of average mechanical properties for specific ligaments have not been adequately addressed in the literature. Existing methods are limited by a number of arbitrary choices made during the curve fits that often misrepresent the characteristic shape response of the ligaments, which is important for incorporation into numerical models to produce a biofidelic response. A method was developed to represent the mechanical properties of individual ligaments using a piece-wise curve fit with first-derivative continuity between adjacent regions. The method was applied to published data for cervical spine ligaments and preserved the shape response (toe, linear, and traumatic regions) up to failure, for strain rates of 0.5 s⁻¹, 20 s⁻¹, and 150-250 s⁻¹, to determine the average force-displacement curves. Individual ligament coefficients of determination were 0.989 to 1.000, demonstrating excellent fit. This study produced a novel method in which a set of scattered experimental ligament material property data was fit using a characteristic curve approach with toe, linear, and traumatic regions, as often observed in ligaments and tendons; the approach could be applied to other biological material data with a similar characteristic shape. The resultant average cervical spine ligament curves provide an accurate representation of the raw test data and the expected material property effects corresponding to varying deformation rates.
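A minimal illustration of a piece-wise fit with first-derivative continuity: a quadratic toe region through the origin joined to a linear region whose slope equals the quadratic's slope at the junction. This two-region form is a simplification of the paper's three-region (toe, linear, traumatic) method, and the force-displacement data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def toe_linear(x, a, x1):
    """Quadratic toe, then linear region; the line takes the quadratic's slope at x1,
    which enforces first-derivative continuity at the junction."""
    x = np.asarray(x, float)
    toe = a * x**2
    lin = a * x1**2 + 2.0 * a * x1 * (x - x1)
    return np.where(x <= x1, toe, lin)

disp = np.linspace(0, 3.0, 30)                                      # mm, hypothetical
rng = np.random.default_rng(3)
force = toe_linear(disp, 8.0, 1.2) + rng.normal(0, 1.0, disp.size)  # N, synthetic

(a, x1), _ = curve_fit(toe_linear, disp, force, p0=[5.0, 1.0])
print(f"toe coefficient a = {a:.2f} N/mm^2, toe-to-linear transition at {x1:.2f} mm")
```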
Forgetting Curves: Implications for Connectionist Models
ERIC Educational Resources Information Center
Sikstrom, Sverker
2002-01-01
Forgetting in long-term memory, as measured in a recall or a recognition test, is faster for items encoded more recently than for items encoded earlier. Data on forgetting curves fit a power function well. In contrast, many connectionist models predict either exponential decay or completely flat forgetting curves. This paper suggests a…
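To make the comparison concrete, the sketch below fits both a power function and an exponential to hypothetical retention data and compares residuals; with power-law-like data, the exponential fit is visibly worse, echoing the contrast the paper draws with many connectionist models.

```python
import numpy as np
from scipy.optimize import curve_fit

power = lambda t, a, b: a * t ** (-b)         # power-function forgetting curve
expon = lambda t, a, b: a * np.exp(-b * t)    # exponential-decay forgetting curve

t = np.array([1, 2, 4, 8, 16, 32, 64], float)      # retention interval (days), hypothetical
recall = np.array([0.82, 0.71, 0.62, 0.54, 0.47, 0.41, 0.36])

for name, f, p0 in [("power", power, [0.8, 0.2]), ("exponential", expon, [0.8, 0.05])]:
    params, _ = curve_fit(f, t, recall, p0=p0)
    sse = np.sum((recall - f(t, *params)) ** 2)
    print(f"{name:12s} params = {np.round(params, 3)}  SSE = {sse:.4f}")
```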
Nonlinear Growth Models in M"plus" and SAS
ERIC Educational Resources Information Center
Grimm, Kevin J.; Ram, Nilam
2009-01-01
Nonlinear growth curves or growth curves that follow a specified nonlinear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this article we describe how a variety of sigmoid curves can be fit using the M"plus" structural modeling program and the nonlinear…
On the Early-Time Excess Emission in Hydrogen-Poor Superluminous Supernovae
NASA Technical Reports Server (NTRS)
Vreeswijk, Paul M.; Leloudas, Giorgos; Gal-Yam, Avishay; De Cia, Annalisa; Perley, Daniel A.; Quimby, Robert M.; Waldman, Roni; Sullivan, Mark; Yan, Lin; Ofek, Eran O.;
2017-01-01
We present the light curves of the hydrogen-poor superluminous supernovae (SLSNe I) PTF 12dam and iPTF 13dcc, discovered by the (intermediate) Palomar Transient Factory. Both show excess emission at early times and a slowly declining light curve at late times. The early bump in PTF 12dam is very similar in duration (approximately 10 days) and brightness relative to the main peak (2-3 mag fainter) compared to that observed in other SLSNe I. In contrast, the long-duration (greater than 30 days) early excess emission in iPTF 13dcc, whose brightness competes with that of the main peak, appears to be of a different nature. We construct bolometric light curves for both targets, and fit a variety of light-curve models to both the early bump and main peak in an attempt to understand the nature of these explosions. Even though the slope of the late-time decline in the light curves of both SLSNe is suggestively close to that expected from the radioactive decay of 56Ni and 56Co, the amount of nickel required to power the full light curves is too large considering the estimated ejecta mass. The magnetar model including an increasing escape fraction provides a reasonable description of the PTF 12dam observations. However, neither the basic nor the double-peaked magnetar model is capable of reproducing the light curve of iPTF 13dcc. A model combining a shock breakout in an extended envelope with late-time magnetar energy injection provides a reasonable fit to the iPTF 13dcc observations. Finally, we find that the light curves of both PTF 12dam and iPTF 13dcc can be adequately fit with the model involving interaction with the circumstellar medium.
The relationship between offspring size and fitness: integrating theory and empiricism.
Rollinson, Njal; Hutchings, Jeffrey A
2013-02-01
How parents divide the energy available for reproduction between size and number of offspring has a profound effect on parental reproductive success. Theory indicates that the relationship between offspring size and offspring fitness is of fundamental importance to the evolution of parental reproductive strategies: this relationship predicts the optimal division of resources between size and number of offspring, it describes the fitness consequences for parents that deviate from optimality, and its shape can predict the most viable type of investment strategy in a given environment (e.g., conservative vs. diversified bet-hedging). Many previous attempts to estimate this relationship and the corresponding value of optimal offspring size have been frustrated by a lack of integration between theory and empiricism. In the present study, we draw from C. Smith and S. Fretwell's classic model to explain how a sound estimate of the offspring size–fitness relationship can be derived with empirical data. We evaluate what measures of fitness can be used to model the offspring size–fitness curve and optimal size, as well as which statistical models should and should not be used to estimate offspring size–fitness relationships. To construct the fitness curve, we recommend that offspring fitness be measured as survival up to the age at which the instantaneous rate of offspring mortality becomes random with respect to initial investment. Parental fitness is then expressed in ecologically meaningful, theoretically defensible, and broadly comparable units: the number of offspring surviving to independence. Although logistic and asymptotic regression have been widely used to estimate offspring size–fitness relationships, the former provides relatively unreliable estimates of optimal size when offspring survival and sample sizes are low, and the latter is unreliable under all conditions. We recommend that the Weibull-1 model be used to estimate this curve because it provides modest improvements in prediction accuracy under experimentally relevant conditions.
Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H
2017-11-01
A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective to further validate the model. The a priori criteria for validation were (1) the resulting model can simulate the measured data correctly (i.e. goodness of fit), and (2) this is achieved without needing extreme, probably non-physiological parameter values. We used a least squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes, and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme parameter value changes. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.
Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto
2018-03-01
High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. Typhimurium) in a low acid mamey pulp at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2 °C were obtained. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (adjusted R² > 0.956, root mean square error < 0.290) in the modeling and the lowest Akaike information criterion value. Exponential-logistic and exponential decay models, and Bigelow-type and empirical models for the b'(P) and n(P) parameters, respectively, were tested as alternative secondary models. The process validation considered the two- and one-step nonlinear regressions for making predictions of the survival fraction; both regression types provided an adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model integrated with the exponential-logistic and exponential decay secondary models as a function of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter, taking d = 5 (t_5) as the criterion of a 5 log10 (5D) reduction; the desired 5D reductions in both microorganisms are attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min, respectively, for the one- or two-step nonlinear procedure.
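The Weibull primary model referenced above can be fit directly in log-survival form. Below is a minimal sketch, not the authors' code, using the common Mafart parameterization log10(N/N0) = -(t/delta)^p with invented illustrative survival data; the t(5D) calculation mirrors the 5-log10 reduction criterion described in the abstract.

```python
# Minimal sketch of fitting the Weibull survival model to inactivation data.
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survival(t, delta, p):
    """Mafart parameterization: log10(N/N0) reached after time t (min)."""
    return -(t / delta) ** p

# Hypothetical survival data at one pressure level: time (min) vs log10(N/N0)
t = np.array([0.5, 1, 2, 4, 6, 8])
log_s = np.array([-0.8, -1.5, -2.4, -3.6, -4.3, -4.8])

(delta, p), _ = curve_fit(weibull_log_survival, t, log_s, p0=(1.0, 0.5))
t5 = delta * 5 ** (1 / p)  # time to a 5-log10 (5D) reduction
print(f"delta = {delta:.3f} min, p = {p:.3f}, t(5D) = {t5:.2f} min")
```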
On the Methodology of Studying Aging in Humans
1961-01-01
prediction of death rates The relation of death rate to age has been extensively studied for over 100 years. As an illustration recent death rates for...log death rates appear to be linear, the simpler Gompertz curve fits closely. While on this subject of the Makeham-Gompertz function, it should be...Makeham-Gompertz curve to 5 year age specific death rates. Each fitting provided estimates of the parameters α, β, and log c for each of the five year
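For context, the Makeham-Gompertz hazard referred to above is mu(x) = alpha + beta*c^x. The sketch below, using invented illustrative rates, shows how the parameters alpha, beta, and log c could be estimated from 5-year age-specific death rates; it is not the original fitting procedure.

```python
# Hedged sketch of fitting the Makeham-Gompertz hazard to death rates.
import numpy as np
from scipy.optimize import curve_fit

def makeham(x, alpha, beta, c):
    return alpha + beta * c ** x

age = np.array([42.5, 47.5, 52.5, 57.5, 62.5, 67.5, 72.5])    # bin midpoints
rate = np.array([0.004, 0.006, 0.009, 0.014, 0.022, 0.035, 0.055])  # invented

(alpha, beta, c), _ = curve_fit(makeham, age, rate,
                                p0=(0.001, 1e-4, 1.09), maxfev=10000)
print(f"alpha = {alpha:.5f}, beta = {beta:.2e}, log c = {np.log10(c):.4f}")
```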
Moderate Levels of Activation Lead to Forgetting In the Think/No-Think Paradigm
Detre, Greg J.; Natarajan, Annamalai; Gershman, Samuel J.; Norman, Kenneth A.
2013-01-01
Using the think/no-think paradigm (Anderson & Green, 2001), researchers have found that suppressing retrieval of a memory (in the presence of a strong retrieval cue) can make it harder to retrieve that memory on a subsequent test. This effect has been replicated numerous times, but the size of the effect is highly variable. Also, it is unclear from a neural mechanistic standpoint why preventing recall of a memory now should impair your ability to recall that memory later. Here, we address both of these puzzles using the idea, derived from computational modeling and studies of synaptic plasticity, that the function relating memory activation to learning is U-shaped, such that moderate levels of memory activation lead to weakening of the memory and higher levels of activation lead to strengthening. According to this view, forgetting effects in the think/no-think paradigm occur when the suppressed item activates moderately during the suppression attempt, leading to weakening; the effect is variable because sometimes the suppressed item activates strongly (leading to strengthening) and sometimes it does not activate at all (in which case no learning takes place). To test this hypothesis, we ran a think/no-think experiment where participants learned word-picture pairs; we used pattern classifiers, applied to fMRI data, to measure how strongly the picture associates were activating when participants were trying not to retrieve these associates, and we used a novel Bayesian curve-fitting procedure to relate this covert neural measure of retrieval to performance on a later memory test. In keeping with our hypothesis, the curve-fitting procedure revealed a nonmonotonic relationship between memory activation (as measured by the classifier) and subsequent memory, whereby moderate levels of activation of the to-be-suppressed item led to diminished performance on the final memory test, and higher levels of activation led to enhanced performance on the final test. PMID:23499722
Ghosn, Marwan; Ibrahim, Tony; El Rassy, Elie; Nassani, Najib; Ghanem, Sassine; Assi, Tarek
2017-03-01
Comprehensive geriatric assessment (CGA) is a complex and interdisciplinary approach to evaluate the health status of elderly patients. The Karnofsky Performance Scale (KPS) and Physical Performance Test (PPT) are less time-consuming tools that measure functional status. This study was designed to assess and compare abridged geriatric assessment (GA), KPS and PPT as predictive tools of mortality in elderly patients with cancer. This prospective interventional study included all individuals aged >70 years who were diagnosed with cancer during the study period. Subjects were interviewed directly using a procedure that included a clinical test and a questionnaire composed of the KPS, PPT and abridged GA. Overall survival (OS) was the primary endpoint. The log rank test was used to compare survival curves, and Cox's regression model (forward procedure) was used for multivariate survival analysis. One hundred patients were included in this study. Abridged GA was the only tool found to predict mortality [median OS for unfit patients (at least two impairments) 467 days vs 1030 days for fit patients; p=0.04]. Patients defined as fit by mean PPT score (>20) had worse median OS (560 vs 721 days); however, this difference was not significant (p=0.488 on log rank). Although median OS did not differ significantly between patients with low (≤80) and high (>80) KPS scores (467 and 795 days, respectively; p=0.09), survival curves diverged after nearly 120 days of follow-up. Visual and hearing impairments were the only components of abridged GA of prognostic value. Neither KPS nor PPT was shown to predict mortality in elderly patients with cancer, whereas abridged GA was predictive. This study suggests a possible role for visual and hearing assessment as screening for patients requiring CGA. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Julianto, E. A.; Suntoro, W. A.; Dewi, W. S.; Partoyo
2018-03-01
Climate change has been reported to exacerbate land resource degradation, including soil fertility decline. Appropriate validation of soil fertility evaluation models could reduce the risk of climate change effects on plant cultivation. This study aims to assess the validity of a soil fertility evaluation model using a graphical approach. The models evaluated were the Indonesian Soil Research Center (PPT) version model, the FAO Unesco version model, and the Kyuma version model. Each model was then correlated with rice production (dry grain weight/GKP). The goodness of fit of each model, together with the regression coefficient (R²), can be used to evaluate the quality and validity of a model. This research used the EViews 9 program with a graphical approach. The results produced three curves: actual, fitted, and residual. If the actual and fitted curves are widely apart or irregular, the quality of the model is poor, or there are many other factors that are still not included in the model (large residual), and vice versa. Indeed, if the actual and fitted curves show exactly the same shape, all factors have already been included in the model. Modification of the standard soil fertility evaluation models can improve the quality and validity of a model.
Oscar, T P
2008-09-01
Mapping the number and distribution of Salmonella on poultry carcasses will help guide better design of processing procedures to reduce or eliminate this human pathogen from poultry. A selective plating medium with multiple antibiotics (xylose-lysine agar medium [XL] containing N-(2-hydroxyethyl)piperazine-N'-(2-ethanesulfonic acid) and the antibiotics chloramphenicol, ampicillin, tetracycline, and streptomycin [XLH-CATS]) and a multiple-antibiotic-resistant strain (ATCC 700408) of Salmonella Typhimurium definitive phage type 104 (DT104) were used to develop an enumeration method for mapping the number and distribution of Salmonella Typhimurium DT104 on the carcasses of young chickens in the Cornish game hen class. The enumeration method was based on the concept that the time to detection by drop plating on XLH-CATS during incubation of whole chicken parts in buffered peptone water would be inversely related to the initial log number (N0) of Salmonella Typhimurium DT104 on the chicken part. The sampling plan for mapping involved dividing the chicken into 12 parts, which ranged in average size from 36 to 80 g. To develop the enumeration method, whole parts were spot inoculated with 0 to 6 log Salmonella Typhimurium DT104, incubated in 300 ml of buffered peptone water, and detected on XLH-CATS by drop plating. An inverse relationship between detection time on XLH-CATS and N0 was found (r = -0.984). The standard curve was similar for the individual chicken parts and therefore, a single standard curve for all 12 chicken parts was developed. The final standard curve, which contained a 95% prediction interval for providing stochastic results for N0, had high goodness of fit (r² = 0.968) and was N0 (log) = 7.78 ± 0.61 - (0.995 × detection time). Ninety-five percent of N0 were within ±0.61 log of the standard curve. The enumeration method and sampling plan will be used in future studies to map changes in the number and distribution of Salmonella on carcasses of young chickens fed the DT104 strain used in standard curve development and subjected to different processing procedures.
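A minimal sketch of applying the reported standard curve: it converts a detection time (in the units of the original curve) into the initial log number N0, with the ±0.61 log bound interpreted as the 95% prediction interval stated in the abstract. The function name is hypothetical.

```python
# Apply the reported standard curve: N0 (log) = 7.78 +/- 0.61 - 0.995 * DT
def log_n0(detection_time):
    """Return (lower, center, upper) of the 95% prediction interval for N0."""
    center = 7.78 - 0.995 * detection_time
    return center - 0.61, center, center + 0.61

lo, mid, hi = log_n0(5.0)
print(f"N0 ~ {mid:.2f} log (95% PI {lo:.2f} to {hi:.2f})")
```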
Is the learning curve endless? One surgeon's experience with robotic prostatectomy
NASA Astrophysics Data System (ADS)
Patel, Vipul; Thaly, Rahul; Shah, Ketul
2007-02-01
Introduction: After performing 1,000 robotic prostatectomies we reflected back on our experience to determine what defined the learning curve and the essential elements that were the keys to surmounting it. Method: We retrospectively assessed our experience to attempt to define the learning curve(s), key elements of the procedure, technical refinements and changes in technology that facilitated our progress. Result: The initial learning curve to achieve basic competence and the ability to smoothly perform the procedure in less than 4 hours with acceptable outcomes was approximately 25 cases. A second learning curve was present between 75-100 cases as we approached more complicated patients. At 200 cases we were comfortably able to complete the procedure routinely in less than 2.5 hours with no specific step of the procedure hindering our progression. At 500 cases we had the introduction of new instrumentation (4th arm, bipolar Maryland, monopolar scissors) that changed our approach to the bladder neck and neurovascular bundle dissection. The most challenging part of the procedure was the bladder neck dissection. Conclusion: There is no single parameter that can be used to assess or define the learning curve. We used a combination of factors to form our subjective definition; these included operative time, smoothness of technical progression during the case, and clinical outcomes. The further our case experience progressed, the more we expected of our outcomes; thus we continually modified our technique and hence embarked upon yet another learning curve.
Physical fitness reference standards in fibromyalgia: The al-Ándalus project.
Álvarez-Gallardo, I C; Carbonell-Baeza, A; Segura-Jiménez, V; Soriano-Maldonado, A; Intemann, T; Aparicio, V A; Estévez-López, F; Camiletti-Moirón, D; Herrador-Colmenero, M; Ruiz, J R; Delgado-Fernández, M; Ortega, F B
2017-11-01
We aimed (1) to report age-specific physical fitness levels in people with fibromyalgia of a representative sample from Andalusia; and (2) to compare the fitness levels of people with fibromyalgia with non-fibromyalgia controls. This cross-sectional study included 468 (21 men) patients with fibromyalgia and 360 (55 men) controls. The fibromyalgia sample was geographically representative of southern Spain. Physical fitness was assessed with the Senior Fitness Test battery plus the handgrip test. We applied the Generalized Additive Model for Location, Scale and Shape to calculate percentile curves for women and fitted mean curves using a linear regression for men. Our results show that people with fibromyalgia performed worse in all fitness tests than controls (P < 0.001) in all age ranges (P < 0.001). This study provides a comprehensive description of age-specific physical fitness levels among patients with fibromyalgia and controls in a large sample of patients with fibromyalgia from southern Spain. Physical fitness levels of people with fibromyalgia from Andalusia are very low in comparison with age-matched healthy controls. This information could be useful for correctly interpreting physical fitness assessments and for helping health care providers identify individuals at risk of losing physical independence. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Nongaussian distribution curve of heterophorias among children.
Letourneau, J E; Giroux, R
1991-02-01
The purpose of this study was to measure the distribution curve of horizontal and vertical phorias among children. Kolmogorov-Smirnov goodness-of-fit tests showed that these distribution curves were not Gaussian among (N = 2048) 6- to 13-year-old children. The distribution curves of horizontal phoria at far and of vertical phorias at far and at near were leptokurtic; the distribution curve of horizontal phoria at near was platykurtic. No variation of the distribution curve of heterophorias with age was observed. Comparisons of any individual findings with the general distribution curve should take the non-Gaussian distribution curve of heterophorias into account.
Ingerle, D.; Meirer, F.; Pepponi, G.; Demenev, E.; Giubertoni, D.; Wobrauschek, P.; Streli, C.
2014-01-01
The continuous downscaling of the process size for semiconductor devices pushes the junction depths and consequentially the implantation depths to the top few nanometers of the Si substrate. This motivates the need for sensitive methods capable of analyzing dopant distribution, total dose and possible impurities. X-ray techniques utilizing the external reflection of X-rays are very surface sensitive, hence providing a non-destructive tool for process analysis and control. X-ray reflectometry (XRR) is an established technique for the characterization of single- and multi-layered thin film structures with layer thicknesses in the nanometer range. XRR spectra are acquired by varying the incident angle in the grazing incidence regime while measuring the specular reflected X-ray beam. The shape of the resulting angle-dependent curve is correlated to changes of the electron density in the sample, but does not provide direct information on the presence or distribution of chemical elements in the sample. Grazing Incidence XRF (GIXRF) measures the X-ray fluorescence induced by an X-ray beam incident under grazing angles. The resulting angle dependent intensity curves are correlated to the depth distribution and mass density of the elements in the sample. GIXRF provides information on contaminations, total implanted dose and to some extent on the depth of the dopant distribution, but is ambiguous with regard to the exact distribution function. Both techniques use similar measurement procedures and data evaluation strategies, i.e. optimization of a sample model by fitting measured and calculated angle curves. Moreover, the applied sample models can be derived from the same physical properties, like atomic scattering/form factors and elemental concentrations; a simultaneous analysis is therefore a straightforward approach. This combined analysis in turn reduces the uncertainties of the individual techniques, allowing a determination of dose and depth profile of the implanted elements with drastically increased confidence level. Silicon wafers implanted with Arsenic at different implantation energies were measured by XRR and GIXRF using a combined, simultaneous measurement and data evaluation procedure. The data were processed using a self-developed software package (JGIXA), designed for simultaneous fitting of GIXRF and XRR data. The results were compared with depth profiles obtained by Secondary Ion Mass Spectrometry (SIMS). PMID:25202165
NASA Astrophysics Data System (ADS)
Su, Ray Kai Leung; Lee, Chien-Liang
2013-06-01
This study presents a seismic fragility analysis and ultimate spectral displacement assessment of regular low-rise masonry infilled (MI) reinforced concrete (RC) buildings using a coefficient-based method. The coefficient-based method does not require a complicated finite element analysis; instead, it is a simplified procedure for assessing the spectral acceleration and displacement of buildings subjected to earthquakes. A regression analysis was first performed to obtain the best-fitting equations for the inter-story drift ratio (IDR) and period shift factor of low-rise MI RC buildings in response to the peak ground acceleration of earthquakes using published results obtained from shaking table tests. Both spectral acceleration- and spectral displacement-based fragility curves under various damage states (in terms of IDR) were then constructed using the coefficient-based method. Finally, the spectral displacements of low-rise MI RC buildings at the ultimate (or near-collapse) state obtained from this paper and the literature were compared. The simulation results indicate that the fragility curves obtained from this study and other previous work correspond well. Furthermore, most of the spectral displacements of low-rise MI RC buildings at the ultimate state from the literature fall within the bounded spectral displacements predicted by the coefficient-based method.
NASA Astrophysics Data System (ADS)
Zhang, Min; Katsumata, Akitoshi; Muramatsu, Chisako; Hara, Takeshi; Suzuki, Hiroki; Fujita, Hiroshi
2014-03-01
Periodontal disease is a typical dental disease that affects many adults. The presence of alveolar bone resorption, which can be observed on dental panoramic radiographs, is one of the most important signs of the progression of periodontal disease. Automatically evaluating alveolar-bone resorption is of important clinical significance in dental radiology. The purpose of this study was to propose a novel system for automated alveolar-bone-resorption evaluation from digital dental panoramic radiographs for the first time. The proposed system enables visualization and quantitative evaluation of the degree of alveolar bone resorption surrounding the teeth. It has the following procedures: (1) pre-processing of a test image; (2) detection of tooth root apices with a Gabor filter and curve fitting for the root apex line; (3) detection of features related to alveolar bone by using an image phase congruency map and template matching, and curve fitting for the alveolar line; (4) detection of the occlusion line with a selected Gabor filter; (5) finally, evaluation of the quantitative alveolar-bone-resorption degree in the area surrounding the teeth by simply computing the average ratio of the height of the alveolar bone to the height of the teeth. The proposed scheme was applied to 30 patient cases of digital panoramic radiographs, with alveolar bone resorption of different stages. Our initial trial on these test cases indicates that the quantitative evaluation results are correlated with the alveolar-bone-resorption degree, although the performance still needs further improvement. Therefore it has potential clinical practicability.
Andrzejewska, Anna; Kaczmarski, Krzysztof; Guiochon, Georges
2009-02-13
The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N=500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
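As a hedged illustration of the model-fitting step described above (not the authors' code), the sketch below fits Langmuir and bi-Langmuir isotherm models to synthetic adsorption data points such as FA would produce; all parameter values are invented.

```python
# Fit Langmuir and bi-Langmuir isotherms to synthetic adsorption data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, qs, b):
    return qs * b * c / (1 + b * c)

def bilangmuir(c, qs1, b1, qs2, b2):
    return qs1 * b1 * c / (1 + b1 * c) + qs2 * b2 * c / (1 + b2 * c)

c = np.linspace(0.1, 10, 20)                     # mobile-phase conc. (g/L)
q = bilangmuir(c, 60, 0.08, 20, 1.5)             # synthetic "measured" data
q += np.random.default_rng(0).normal(0, 0.2, c.size)

p_lang, _ = curve_fit(langmuir, c, q, p0=(80, 0.5))
p_bi, _ = curve_fit(bilangmuir, c, q, p0=(50, 0.1, 30, 1.0))

# Compare residual sums of squares of the two candidate models
for name, f, p in [("Langmuir", langmuir, p_lang),
                   ("bi-Langmuir", bilangmuir, p_bi)]:
    rss = np.sum((q - f(c, *p)) ** 2)
    print(name, "RSS =", round(rss, 3))
```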
Human phase response curve to a 1 h pulse of bright white light
St Hilaire, Melissa A; Gooley, Joshua J; Khalsa, Sat Bir S; Kronauer, Richard E; Czeisler, Charles A; Lockley, Steven W
2012-01-01
The phase resetting response of the human circadian pacemaker to light depends on the timing of exposure and is described by a phase response curve (PRC). The current study aimed to construct a PRC for a 1 h exposure to bright white light (∼8000 lux) and to compare this PRC to a <3 lux dim background light PRC. These data were also compared to a previously completed 6.7 h bright white light PRC and a <15 lux dim background light PRC constructed under similar conditions. Participants were randomized for exposure to 1 h of either bright white light (n = 18) or <3 lux dim background light (n = 18) scheduled at 1 of 18 circadian phases. Participants completed constant routine (CR) procedures in dim light (<3 lux) before and after the light exposure to assess circadian phase. Phase shifts were calculated as the difference in timing of dim light melatonin onset (DLMO) during pre- and post-stimulus CRs. Exposure to 1 h of bright white light induced a Type 1 PRC with a fitted peak-to-trough amplitude of 2.20 h. No discernible PRC was observed in the <3 lux dim background light PRC. The fitted peak-to-trough amplitude of the 1 h bright light PRC was ∼40% of that for the 6.7 h PRC despite representing only 15% of the light exposure duration, consistent with previous studies showing a non-linear duration–response function for the effects of light on circadian resetting. PMID:22547633
14 CFR 302.211 - Procedures in certificate cases involving initial or continuing fitness.
Code of Federal Regulations, 2012 CFR
2012-01-01
... initial or continuing fitness. 302.211 Section 302.211 Aeronautics and Space OFFICE OF THE SECRETARY... Disposition of Applications § 302.211 Procedures in certificate cases involving initial or continuing fitness... applicant's fitness to operate. Where such applications propose the operation of scheduled service in...
UTM: Universal Transit Modeller
NASA Astrophysics Data System (ADS)
Deeg, Hans J.
2014-12-01
The Universal Transit Modeller (UTM) is a light-curve simulator for all kinds of transiting or eclipsing configurations between arbitrary numbers of several types of objects, which may be stars, planets, planetary moons, and planetary rings. A separate fitting program, UFIT (Universal Fitter) is part of the UTM distribution and may be used to derive best fits to light-curves for any set of continuously variable parameters. UTM/UFIT is written in IDL code and its source is released in the public domain under the GNU General Public License.
Sasaki, Miho; Sumi, Misa; Eida, Sato; Katayama, Ikuo; Hotokezaka, Yuka; Nakamura, Takashi
2014-01-01
Intravoxel incoherent motion (IVIM) imaging can characterize diffusion and perfusion of normal and diseased tissues, and IVIM parameters are conventionally determined using a cumbersome least-squares method. We evaluated a simple technique for the determination of IVIM parameters using geometric analysis of the multiexponential signal decay curve as an alternative to the least-squares method for the diagnosis of head and neck tumors. Pure diffusion coefficients (D), microvascular volume fraction (f), perfusion-related incoherent microcirculation (D*), and a perfusion parameter that is heavily weighted towards extravascular space (P) were determined geometrically (Geo D, Geo f, and Geo P) or by the least-squares method (Fit D, Fit f, and Fit D*) in normal structures and 105 head and neck tumors. The IVIM parameters were compared for their levels and diagnostic abilities between the 2 techniques. The IVIM parameters could not be determined in 14 tumors with the least-squares method alone and in 4 tumors with the geometric and least-squares methods. The geometric IVIM values were significantly different (p<0.001) from Fit values (+2±4% and −7±24% for D and f values, respectively). Geo D and Fit D differentiated between lymphomas and SCCs with similar efficacy (78% and 80% accuracy, respectively). Stepwise approaches using combinations of Geo D and Geo P, Geo D and Geo f, or Fit D and Fit D* differentiated between pleomorphic adenomas, Warthin tumors, and malignant salivary gland tumors with the same efficacy (91% accuracy = 21/23). However, a stepwise differentiation using Fit D and Fit f was less effective (83% accuracy = 19/23). Considering the cumbersome procedures of the least-squares method compared with the geometric method, we concluded that the geometric determination of IVIM parameters can be an alternative to the least-squares method in the diagnosis of head and neck tumors. PMID:25402436
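The authors' geometric construction is not reproduced here; the following hedged sketch shows the closely related segmented estimate of D and f from the standard IVIM biexponential S(b)/S0 = f·exp(-b·D*) + (1-f)·exp(-b·D), with invented signal values and a hypothetical b-value cutoff.

```python
# Segmented IVIM estimate: fit D from the high-b log-linear tail, then f.
import numpy as np

b = np.array([0, 50, 100, 200, 400, 600, 800, 1000])          # s/mm^2
s = 1000 * (0.12 * np.exp(-b * 0.015) + 0.88 * np.exp(-b * 0.0009))

high = b >= 200                      # perfusion assumed fully decayed here
slope, intercept = np.polyfit(b[high], np.log(s[high]), 1)
D = -slope                           # pure diffusion coefficient (mm^2/s)
f = 1 - np.exp(intercept) / s[0]     # microvascular volume fraction
print(f"D = {D:.2e} mm^2/s, f = {f:.3f}")
```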
Fong, Youyi; Yu, Xuesong
2016-01-01
Many modern serial dilution assays are based on fluorescence intensity (FI) readouts. We study optimal transformation model choice for fitting five-parameter logistic (5PL) curves to FI-based serial dilution assay data. We first develop a generalized least squares-pseudolikelihood type algorithm for fitting heteroscedastic logistic models. Next we show that the 5PL and log 5PL functions can approximate each other well. We then compare four 5PL models with different choices of log transformation and variance modeling through a Monte Carlo study and real data. Our findings are that the optimal choice depends on the intended use of the fitted curves. PMID:27642502
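A minimal sketch (with invented data) of fitting a 5PL curve of the form d + (a-d)/(1+(x/c)^b)^g to FI readouts; it ignores the heteroscedastic variance modeling that is the paper's focus and simply illustrates the curve family.

```python
# Fit a five-parameter logistic (5PL) curve to serial-dilution FI data.
import numpy as np
from scipy.optimize import curve_fit

def fpl(x, a, d, c, b, g):
    """5PL: a = lower asymptote, d = upper, c = mid, b = slope, g = asymmetry."""
    return d + (a - d) / (1 + (x / c) ** b) ** g

conc = np.array([0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100])
fi = fpl(conc, 50, 30000, 5, 1.2, 0.8)
fi *= 1 + np.random.default_rng(1).normal(0, 0.03, fi.size)   # ~3% CV noise

popt, _ = curve_fit(fpl, conc, fi, p0=(fi.min(), fi.max(), 5, 1, 1),
                    maxfev=20000)
print("fitted 5PL parameters:", np.round(popt, 3))
```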
NASA Technical Reports Server (NTRS)
Chamberlain, D. M.; Elliot, J. L.
1997-01-01
We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10^-4 of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10^-4, with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.
Computer modeling the fatigue crack growth rate behavior of metals in corrosive environments
NASA Technical Reports Server (NTRS)
Richey, Edward, III; Wilson, Allen W.; Pope, Jonathan M.; Gangloff, Richard P.
1994-01-01
The objective of this task was to develop a method to digitize FCP (fatigue crack propagation) kinetics data, generally presented in terms of extensive da/dN–ΔK pairs, to produce a file for subsequent linear superposition or curve-fitting analysis. The method that was developed is specific to the Numonics 2400 Digitablet and is comparable to commercially available software products such as Digimatic™. Experiments demonstrated that the errors introduced by the photocopying of literature data, and digitization, are small compared to those inherent in laboratory methods to characterize FCP in benign and aggressive environments. The digitizing procedure was employed to obtain fifteen crack growth rate data sets for several aerospace alloys in aggressive environments.
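As a hedged example of the downstream curve-fitting analysis that the digitized da/dN–ΔK files feed into, the sketch below performs a Paris-law fit, da/dN = C·(ΔK)^m, by linear regression in log space; the data pairs are invented.

```python
# Paris-law fit to digitized fatigue crack growth data (illustrative values).
import numpy as np

dk = np.array([5, 7, 10, 14, 20, 28])                    # MPa*sqrt(m)
dadn = np.array([2e-9, 8e-9, 3e-8, 1e-7, 4e-7, 1.5e-6])  # m/cycle

m, log_c = np.polyfit(np.log10(dk), np.log10(dadn), 1)   # slope = m
print(f"m = {m:.2f}, C = {10**log_c:.2e}")
```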
NASA Technical Reports Server (NTRS)
Marconi, F.; Salas, M.; Yaeger, L.
1976-01-01
A numerical procedure has been developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second order accurate finite difference scheme is used to integrate the three dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.
NASA Technical Reports Server (NTRS)
Marconi, F.; Yaeger, L.
1976-01-01
A numerical procedure was developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second-order accurate finite difference scheme is used to integrate the three-dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine-Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.
Numerical solution of Space Shuttle Orbiter flow field including real gas effects
NASA Technical Reports Server (NTRS)
Prabhu, D. K.; Tannehill, J. C.
1984-01-01
The hypersonic, laminar flow around the Space Shuttle Orbiter has been computed for both an ideal gas (gamma = 1.2) and equilibrium air using a real-gas, parabolized Navier-Stokes code. This code employs a generalized coordinate transformation; hence, it places no restrictions on the orientation of the solution surfaces. The initial solution in the nose region was computed using a 3-D, real-gas, time-dependent Navier-Stokes code. The thermodynamic and transport properties of equilibrium air were obtained from either approximate curve fits or a table look-up procedure. Numerical results are presented for flight conditions corresponding to the STS-3 trajectory. The computed surface pressures and convective heating rates are compared with data from the STS-3 flight.
Cai, Jing; Li, Shan; Zhang, Haixin; Zhang, Shuoxin; Tyree, Melvin T
2014-01-01
Vulnerability curves (VCs) generally can be fitted to the Weibull equation; however, a growing number of VCs appear to be recalcitrant, that is, deviate from a Weibull but seem to fit dual Weibull curves. We hypothesize that dual Weibull curves in Hippophae rhamnoides L. are due to different vessel diameter classes, inter-vessel hydraulic connections or vessels versus fibre tracheids. We used dye staining techniques, hydraulic measurements and quantitative anatomy measurements to test these hypotheses. The fibres contribute 1.3% of the total stem conductivity, which eliminates the hypothesis that fibre tracheids account for the second Weibull curve. Nevertheless, the staining pattern of vessels and fibre tracheids suggested that fibres might function as a hydraulic bridge between adjacent vessels. We also argue that fibre bridges are safer than vessel-to-vessel pits and put forward the concept as a new paradigm. Hence, we tentatively propose that the first Weibull curve may be accounted for by vessels connected to each other directly by pit fields, while the second Weibull curve is associated with vessels that are connected almost exclusively by fibre bridges. Further research is needed to test the concept of fibre bridge safety in species that have recalcitrant or normal Weibull curves. © 2013 John Wiley & Sons Ltd.
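A hedged sketch (invented PLC data, not the authors' procedure) of fitting a single Weibull vulnerability curve, PLC(P) = 100·(1 - exp(-(P/b)^c)), versus a dual Weibull in which a weight w mixes two such components:

```python
# Single vs dual Weibull vulnerability-curve fits (illustrative data).
import numpy as np
from scipy.optimize import curve_fit

def weibull_plc(p, b, c):
    return 100 * (1 - np.exp(-((p / b) ** c)))

def dual_weibull_plc(p, w, b1, c1, b2, c2):
    return w * weibull_plc(p, b1, c1) + (1 - w) * weibull_plc(p, b2, c2)

press = np.linspace(0.5, 7, 14)                        # xylem tension (MPa)
plc = dual_weibull_plc(press, 0.6, 1.5, 3.0, 5.0, 6.0) # synthetic dual shape

p1, _ = curve_fit(weibull_plc, press, plc, p0=(3, 2))
p2, _ = curve_fit(dual_weibull_plc, press, plc, p0=(0.5, 1.0, 2.0, 4.0, 4.0),
                  bounds=([0, 0.1, 0.5, 0.1, 0.5], [1, 10, 10, 10, 10]))

rss1 = np.sum((plc - weibull_plc(press, *p1)) ** 2)
rss2 = np.sum((plc - dual_weibull_plc(press, *p2)) ** 2)
print(f"single RSS = {rss1:.2f}, dual RSS = {rss2:.2f}")
```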
NASA Technical Reports Server (NTRS)
Elliott, R. D.; Werner, N. M.; Baker, W. M.
1975-01-01
The Aerodynamic Data Analysis and Integration System (ADAIS), developed as a highly interactive computer graphics program capable of manipulating large quantities of data such that addressable elements of a data base can be called up for graphic display, compared, curve fit, stored, retrieved, differenced, etc., was described. The general nature of the system is evidenced by the fact that limited usage has already occurred with data bases consisting of thermodynamic, basic loads, and flight dynamics data. Productivity using ADAIS of five times that for conventional manual methods of wind tunnel data analysis is routinely achieved. In wind tunnel data analysis, data from one or more runs of a particular test may be called up and displayed along with data from one or more runs of a different test. Curves may be faired through the data points by any of four methods, including cubic spline and least squares polynomial fit up to seventh order.
NASA Astrophysics Data System (ADS)
Cai, Gaoshen; Wu, Chuanyu; Gao, Zepu; Lang, Lihui; Alexandrov, Sergei
2018-05-01
An elliptical warm/hot sheet bulging test under different temperatures and pressure rates was carried out to predict the Al-alloy sheet forming limit during warm/hot sheet hydroforming. Using relevant ultimate-strain formulas to calculate and process the experimental data, forming limit curves (FLCs) in the tension-tension state of strain (TTSS) area are obtained. Combining these with the basic experimental data obtained by uniaxial tensile tests under conditions equivalent to the bulging test, complete forming limit diagrams (FLDs) of the Al-alloy are established. Using a quadratic polynomial curve-fitting method, the material constants of the fitting function are calculated and a prediction model equation for the sheet metal forming limit is established, by which the corresponding forming limit curves in the TTSS area can be obtained. The bulging test and fitting results indicated that the sheet metal FLCs obtained were very accurate. Also, the model equation can be used to guide warm/hot sheet bulging tests.
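A minimal sketch of the quadratic polynomial fit described above, using invented minor/major limit-strain pairs in the tension-tension region:

```python
# Quadratic polynomial fit to forming-limit strain pairs (illustrative data).
import numpy as np

minor = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])   # minor strain
major = np.array([0.32, 0.30, 0.29, 0.30, 0.33, 0.37])   # limit major strain

c2, c1, c0 = np.polyfit(minor, major, 2)
flc = lambda e2: c2 * e2 ** 2 + c1 * e2 + c0              # fitted FLC
print(f"major = {c2:.3f}*minor^2 + {c1:+.3f}*minor + {c0:+.3f}")
```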
Ding, Tao; Li, Cheng; Huang, Can; ...
2017-01-09
Here, in order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve the global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.
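A hedged sketch of the curve-fitting idea in the first step: sample the slave (distribution-network) optimum at several boundary injections and fit a quadratic surrogate cost function for the master model. The slave solve below is a hypothetical stand-in stub, not an actual OPF.

```python
# Fit a quadratic surrogate cost function for a slave distribution network.
import numpy as np

def slave_optimum(q_boundary):
    """Stand-in for a distribution-network OPF solve (hypothetical)."""
    return 1.8 + 0.4 * q_boundary + 0.25 * q_boundary ** 2   # losses (MW)

q_samples = np.linspace(-1.0, 1.0, 9)       # boundary reactive power (p.u.)
cost_samples = np.array([slave_optimum(q) for q in q_samples])

a2, a1, a0 = np.polyfit(q_samples, cost_samples, 2)          # surrogate
print(f"slave cost ~ {a2:.3f}*q^2 + {a1:.3f}*q + {a0:.3f}")
```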
Yang, Xiaolan; Hu, Xiaolei; Xu, Bangtian; Wang, Xin; Qin, Jialin; He, Chenxiong; Xie, Yanling; Li, Yuanli; Liu, Lin; Liao, Fei
2014-06-17
A fluorometric titration approach was proposed for the calibration of the quantity of monoclonal antibody (mcAb) via the quenching of the fluorescence of tryptophan residues. It applies to purified mcAbs recognizing tryptophan-deficient epitopes, haptens nonfluorescent at 340 nm under excitation at 280 nm, or fluorescent haptens bearing excitation valleys near 280 nm and excitation peaks near 340 nm that serve as Förster-resonance-energy-transfer (FRET) acceptors of tryptophan. Titration probes were epitopes/haptens themselves or conjugates of nonfluorescent haptens or tryptophan-deficient epitopes with FRET acceptors of tryptophan. Under excitation at 280 nm, titration curves were recorded as fluorescence specific for the FRET acceptors or for mcAbs at 340 nm. To quantify the binding site of a mcAb, a universal model considering both static and dynamic quenching by either type of probe was proposed for fitting to the titration curve. This was straightforward for fitting to fluorescence specific for the FRET acceptors but encountered nonconvergence for fitting to fluorescence of mcAbs at 340 nm. As a solution, (a) the maximum of the absolute values of the first-order derivatives of a titration curve as fluorescence at 340 nm was estimated from the best-fit model for a probe level of zero, and (b) the molar quantity of the binding site of the mcAb was estimated via consecutive fitting to the same titration curve by utilizing such a maximum as an approximation of the slope for the linear response of fluorescence at 340 nm to quantities of the mcAb. This fluorometric titration approach was proven effective with one mcAb specific for six-histidine and another for penicillin G.
Incorporating Nonstationarity into IDF Curves across CONUS from Station Records and Implications
NASA Astrophysics Data System (ADS)
Wang, K.; Lettenmaier, D. P.
2017-12-01
Intensity-duration-frequency (IDF) curves are widely used for engineering design of storm-affected structures. Current practice is that IDF curves are based on observed precipitation extremes fit to a stationary probability distribution (e.g., the extreme value family). However, there is increasing evidence of nonstationarity in station records. We apply the Mann-Kendall trend test to over 1000 stations across the CONUS at a 0.05 significance level, and find that about 30% of stations tested have significant nonstationarity for at least one duration (1-, 2-, 3-, 6-, 12-, 24-, and 48-hours). We fit the station records to a GEV distribution with time-varying location and scale parameters using a Bayesian methodology and compare the fit of stationary versus nonstationary GEV distributions to observed precipitation extremes. Within our fitted nonstationary GEV distributions, we compare distributions with a time-varying location parameter versus distributions with both time-varying location and scale parameters. For distributions with two time-varying parameters, we pay particular attention to instances where location and scale trends have opposing directions. Finally, we use the mathematical framework based on the work of Koutsoyiannis to generate IDF curves based on the fitted GEV distributions and discuss the implications that using time-varying parameters may have on simple scaling relationships. We apply the above methods to evaluate how frequency statistics based on a stationary assumption compare to those that incorporate nonstationarity for both short- and long-term projects. Overall, we find that neglecting nonstationarity can lead to under- or over-estimates (depending on the trend for the given duration and region) of important statistics such as the design storm.
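A minimal sketch (synthetic annual maxima, not station data) of fitting a GEV with a linearly time-varying location parameter by direct likelihood maximization; note that scipy's shape parameter c equals -ξ in the usual EVT convention, and the Bayesian machinery of the study is not reproduced here.

```python
# Nonstationary GEV fit: mu(t) = mu0 + mu1 * t, by maximum likelihood.
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(2)
years = np.arange(60)
annmax = genextreme.rvs(-0.1, loc=20 + 0.05 * years, scale=5,
                        random_state=rng)          # synthetic annual maxima

def nll(theta):
    mu0, mu1, sigma, c = theta    # c is scipy's shape (= -xi in EVT texts)
    if sigma <= 0:
        return np.inf
    return -genextreme.logpdf(annmax, c, loc=mu0 + mu1 * years,
                              scale=sigma).sum()

res = minimize(nll, x0=(annmax.mean(), 0.0, annmax.std(), -0.1),
               method="Nelder-Mead")
mu0, mu1, sigma, c = res.x
print(f"location trend mu1 = {mu1:.4f} per year")
```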
Motulsky, Harvey J; Brown, Ronald E
2006-01-01
Background Nonlinear regression, like linear regression, assumes that the scatter of data around the ideal curve follows a Gaussian or normal distribution. This assumption leads to the familiar goal of regression: to minimize the sum of the squares of the vertical or Y-value distances between the points and the curve. Outliers can dominate the sum-of-the-squares calculation, and lead to misleading results. However, we know of no practical method for routinely identifying outliers when fitting curves with nonlinear regression. Results We describe a new method for identifying outliers when fitting data with nonlinear regression. We first fit the data using a robust form of nonlinear regression, based on the assumption that scatter follows a Lorentzian distribution. We devised a new adaptive method that gradually becomes more robust as the method proceeds. To define outliers, we adapted the false discovery rate approach to handling multiple comparisons. We then remove the outliers, and analyze the data using ordinary least-squares regression. Because the method combines robust regression and outlier removal, we call it the ROUT method. When analyzing simulated data, where all scatter is Gaussian, our method detects (falsely) one or more outliers in only about 1–3% of experiments. When analyzing data contaminated with one or several outliers, the ROUT method performs well at outlier identification, with an average False Discovery Rate less than 1%. Conclusion Our method, which combines a new method of robust nonlinear regression with a new method of outlier identification, identifies outliers from nonlinear curve fits with reasonable power and few false positives. PMID:16526949
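Not the ROUT implementation itself: the hedged sketch below shows the first step, robust nonlinear regression under an assumed Lorentzian (Cauchy) scatter model, followed by a crude residual-based outlier flag standing in for the paper's FDR-based test.

```python
# Robust nonlinear regression with Lorentzian (Cauchy) loss, then flag outliers.
import numpy as np
from scipy.optimize import least_squares

def model(p, x):
    return p[0] * np.exp(-p[1] * x)            # example decay curve

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 40)
y = model([10, 0.4], x) + rng.normal(0, 0.2, x.size)
y[5] += 4.0                                    # plant one gross outlier

fit = least_squares(lambda p: model(p, x) - y, x0=[5, 1],
                    loss="cauchy", f_scale=0.2)
resid = model(fit.x, x) - y
rsdr = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust scale
outliers = np.abs(resid) > 3 * rsdr            # crude stand-in for FDR test
print("flagged points:", np.where(outliers)[0])
```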
Maghrabi, Mufeed; Al-Abdullah, Tariq; Khattari, Ziad
2018-03-24
The two heating rates method (originally developed for first-order glow peaks) was used for the first time to evaluate the activation energy (E) from glow peaks obeying mixed-order (MO) kinetics. The derived expression for E has an insignificant additional term (on the scale of a few meV) when compared with the first-order case. Hence, the original expression for E using the two heating rates method can be used with excellent accuracy in the case of MO glow peaks. In addition, we derived a simple analytical expression for the MO parameter. The present procedure has the advantage that the MO parameter can now be evaluated using an analytical expression instead of using the graphical representation between the geometrical factor and the MO parameter as given by the existing peak shape methods. The applicability of the derived expressions to real samples was demonstrated for the glow curve of a Li2B4O7:Mn single crystal. The obtained parameters compare very well with those obtained by glow curve fitting and with the available published data.
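For reference, the classical two-heating-rates estimate for a first-order peak is E = (kT1T2/(T1-T2))·ln[(β1/β2)(T2/T1)²]; per the abstract, the mixed-order correction adds only a few meV, so the same expression applies to MO peaks to good accuracy. A minimal sketch with invented peak temperatures, not data from Li2B4O7:Mn:

```python
# Two-heating-rates activation-energy estimate (classical first-order form).
import numpy as np

K_B = 8.617e-5                       # Boltzmann constant (eV/K)

def activation_energy(t1, beta1, t2, beta2):
    """T1, T2: peak temperatures (K) at heating rates beta1 > beta2 (K/s)."""
    return (K_B * t1 * t2 / (t1 - t2)) * np.log((beta1 / beta2) * (t2 / t1) ** 2)

print(f"E = {activation_energy(492.0, 2.0, 480.0, 1.0):.3f} eV")  # ~1.09 eV
```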
Are minimally invasive procedures harder to acquire than conventional surgical procedures?
Hiemstra, Ellen; Kolkman, Wendela; le Cessie, Saskia; Jansen, Frank Willem
2011-01-01
It is frequently suggested that minimally invasive surgery (MIS) is harder to acquire than conventional surgery. To test this hypothesis, residents' learning curves for both types of surgical skills are compared. Residents were assessed using the general global rating scale of the OSATS (Objective Structured Assessment of Technical Skills) for every procedure they performed as primary surgeon during a 3-month clinical rotation in gynecological surgery. Nine postgraduate-year-4 residents collected a total of 319 OSATS during the 2-year, 3-month investigation period. These assessments concerned 129 MIS (laparoscopic and hysteroscopic) and 190 conventional (open abdominal and vaginal) procedures. Learning curves (in this study defined as OSATS score plotted against procedure-specific caseload) for MIS and conventional surgery were compared using a linear mixed model. The MIS curve proved to be steeper than the conventional curve (1.77 vs. 0.75 OSATS points per assessed procedure; 95% CI 1.19-2.35 vs. 0.15-1.35, p < 0.01). Basic MIS procedures do not seem harder to acquire during residency than conventional surgical procedures. This may have resulted from the incorporation of structured MIS training programs in residency. Hopefully, this will lead to a more successful implementation of advanced MIS procedures. Copyright © 2010 S. Karger AG, Basel.
Park, Yung; Ha, Joong Won; Lee, Yun Tae; Sung, Na Young
2014-06-01
Multiple studies have reported favorable short-term results after treatment of spondylolisthesis and other degenerative lumbar diseases with minimally invasive transforaminal lumbar interbody fusion. However, to our knowledge, results at a minimum of 5 years have not been reported. We determined (1) changes to the Oswestry Disability Index, (2) frequency of radiographic fusion, (3) complications and reoperations, and (4) the learning curve associated with minimally invasive transforaminal lumbar interbody fusion at minimum 5-year followup. We reviewed our first 124 patients who underwent minimally invasive transforaminal lumbar interbody fusion to treat low-grade spondylolisthesis and degenerative lumbar diseases and did not need a major deformity correction. This represented 63% (124 of 198) of the transforaminal lumbar interbody fusion procedures we performed for those indications during the study period (2003-2007). Eighty-three (67%) patients had complete 5-year followup. Plain radiographs and CT scans were evaluated by two reviewers. Trends of surgical time, blood loss, and hospital stay over time were examined by logarithmic curve-fit regression analysis to evaluate the learning curve. At 5 years, mean Oswestry Disability Index improved from 60 points preoperatively to 24 points and 79 of 83 patients (95%) had improvement of greater than 10 points. At 5 years, 67 of 83 (81%) achieved radiographic fusion, including 64 of 72 patients (89%) who had single-level surgery. Perioperative complications occurred in 11 of 124 patients (9%), and another surgical procedure was performed in eight of 124 patients (6.5%) involving the index level and seven of 124 patients (5.6%) at adjacent levels. There were slowly decreasing trends of surgical time and hospital stay only in single-level surgery and almost no change in intraoperative blood loss over time, suggesting a challenging learning curve. Oswestry Disability Index scores improved for patients with spondylolisthesis and degenerative lumbar diseases treated with minimally invasive transforaminal lumbar interbody fusion at minimum 5-year followup. We suggest this procedure is reasonable for properly selected patients with these indications; however, traditional approaches should still be performed for patients with high-grade spondylolisthesis, patients with a severely collapsed disc space and no motion seen on the dynamic radiographs, patients who need multilevel decompression and arthrodesis, and patients with kyphoscoliosis needing correction. Level IV, therapeutic study. See the Instructions for Authors for a complete description of levels of evidence.
Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee
2013-07-01
Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S
2018-01-01
The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model. PMID:29351554
Mizuno, Ju; Mohri, Satoshi; Yokoyama, Takeshi; Otsuji, Mikiya; Arita, Hideko; Hanaoka, Kazuo
2017-02-01
Varying temperature affects cardiac systolic and diastolic function and the left ventricular (LV) pressure-time curve (PTC) waveform that includes information about LV inotropism and lusitropism. Our proposed half-logistic (h-L) time constants obtained by fitting using h-L functions for four segmental phases (Phases I-IV) in the isovolumic LV PTC are more useful indices for estimating LV inotropism and lusitropism during contraction and relaxation periods than the mono-exponential (m-E) time constants at normal temperature. In this study, we investigated whether the superiority of the goodness of h-L fits remained even at hypothermia and hyperthermia. Phases I-IV in the isovolumic LV PTCs in eight excised, cross-circulated canine hearts at 33, 36, and 38 °C were analyzed using h-L and m-E functions and the least-squares method. The h-L and m-E time constants for Phases I-IV significantly shortened with increasing temperature. Curve fitting using h-L functions was significantly better than that using m-E functions for Phases I-IV at all temperatures. Therefore, the superiority of the goodness of h-L fit vs. m-E fit remained at all temperatures. As LV inotropic and lusitropic indices, temperature-dependent h-L time constants could be more useful than m-E time constants for Phases I-IV.
The Biasing Effects of Unmodeled ARMA Time Series Processes on Latent Growth Curve Model Estimates
ERIC Educational Resources Information Center
Sivo, Stephen; Fan, Xitao; Witta, Lea
2005-01-01
The purpose of this study was to evaluate the robustness of estimated growth curve models when there is stationary autocorrelation among manifest variable errors. The results suggest that when, in practice, growth curve models are fitted to longitudinal data, alternative rival hypotheses to consider would include growth models that also specify…
Rodríguez-Álvarez, María Xosé; Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen; Tahoces, Pablo G
2018-03-01
Prior to using a diagnostic test in a routine clinical setting, the rigorous evaluation of its diagnostic accuracy is essential. The receiver-operating characteristic curve is the measure of accuracy most widely used for continuous diagnostic tests. However, the possible impact of extra information about the patient (or even the environment) on diagnostic accuracy also needs to be assessed. In this paper, we focus on an estimator for the covariate-specific receiver-operating characteristic curve based on direct regression modelling and nonparametric smoothing techniques. This approach defines the class of generalised additive models for the receiver-operating characteristic curve. The main aim of the paper is to offer new inferential procedures for testing the effect of covariates on the conditional receiver-operating characteristic curve within the above-mentioned class. Specifically, two different bootstrap-based tests are suggested to check (a) the possible effect of continuous covariates on the receiver-operating characteristic curve and (b) the presence of factor-by-curve interaction terms. The validity of the proposed bootstrap-based procedures is supported by simulations. To facilitate the application of these new procedures in practice, an R-package, known as npROCRegression, is provided and briefly described. Finally, data derived from a computer-aided diagnostic system for the automatic detection of tumour masses in breast cancer is analysed.
Parametric analysis of ATM solar array.
NASA Technical Reports Server (NTRS)
Singh, B. K.; Adkisson, W. B.
1973-01-01
The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.
Doherty, Patrick; Welch, Arthur; Tharpe, Jason; Moore, Camille; Ferry, Chris
2017-05-30
Studies have shown that a significant learning curve may be associated with adopting minimally invasive transforaminal lumbar interbody fusion (MIS TLIF) with bilateral pedicle screw fixation (BPSF). Accordingly, several hybrid TLIF techniques have been proposed as surrogates to the accepted BPSF technique, asserting that fewer fixation points or less disruptive fixation may decrease the learning curve while still maintaining the minimally disruptive benefits. TLIF with interspinous process fixation (ISPF) is one such surrogate procedure. However, despite perceived ease of adaptability given the favorable proximity of the spinous processes, no evidence exists demonstrating whether or not the technique may possess its own inherent learning curve. The purpose of this study was to determine whether an intraoperative learning curve for one- and two-level TLIF + ISPF may exist for a single lead surgeon. Seventy-four consecutive patients who received one- or two-level TLIF with rigid ISPF by a single lead surgeon were retrospectively reviewed. It was the first TLIF + ISPF case series for the lead surgeon. Intraoperative blood loss (EBL), hospitalization length-of-stay (LOS), fluoroscopy time, and postoperative complications were collected. EBL, LOS, and fluoroscopy time were modeled as a function of case number using multiple linear regression methods. A change point was included in each model to allow the trajectory of the outcomes to change during the duration of the case series. These change points were determined using profile likelihood methods. Models were fit using the maximum likelihood estimates for the change points. Age, sex, body mass index (BMI), and the number of treated levels were included as covariates. EBL, LOS, and fluoroscopy time did not significantly differ by age, sex, or BMI (p ≥ 0.12). Only EBL differed significantly by the number of levels (p = 0.026). The case number was not a significant predictor of EBL, LOS, or fluoroscopy time (p ≥ 0.21). At the time of data collection (mean time from surgery: 13.3 months), six patients had undergone revision due to interbody migration. No ISPF device complications were observed. Study outcomes support the idea that TLIF + ISPF can be a readily adopted procedure without a significant intraoperative learning curve. However, the authors emphasize that further assessment of long-term healing outcomes is essential in fully characterizing both the efficacy and the learning curve for the TLIF + ISPF technique.
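The change-point search can be illustrated with a short Python sketch: a piecewise-linear model is fit for each candidate change point, and the candidate minimizing the error sum of squares (the Gaussian profile-likelihood maximum) is kept. Data and names are hypothetical.

```python
# Profile-likelihood change-point search for an outcome (e.g., EBL) as a
# function of case number, using a continuous hinge term for the slope change.
import numpy as np

def piecewise_sse(case, y, cp):
    # slope allowed to change at case == cp
    X = np.column_stack([np.ones_like(case), case,
                         np.clip(case - cp, 0, None)])
    beta, sse, *_ = np.linalg.lstsq(X, y, rcond=None)
    return sse[0] if sse.size else np.inf

case = np.arange(1, 75, dtype=float)           # 74 consecutive cases
y = 250 - 0.8 * case + np.random.default_rng(1).normal(0, 20, case.size)

candidates = np.arange(5, 70)                  # interior change points only
sses = [piecewise_sse(case, y, cp) for cp in candidates]
best_cp = candidates[int(np.argmin(sses))]
print("profile-likelihood change point at case", best_cp)
```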
An Investigation of Item Fit Statistics for Mixed IRT Models
ERIC Educational Resources Information Center
Chon, Kyong Hee
2009-01-01
The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…
An Empirical Investigation of Methods for Assessing Item Fit for Mixed Format Tests
ERIC Educational Resources Information Center
Chon, Kyong Hee; Lee, Won-Chan; Ansley, Timothy N.
2013-01-01
Empirical information regarding performance of model-fit procedures has been a persistent need in measurement practice. Statistical procedures for evaluating item fit were applied to real test examples that consist of both dichotomously and polytomously scored items. The item fit statistics used in this study included the PARSCALE's G[squared],…
Function approximation and documentation of sampling data using artificial neural networks.
Zhang, Wenjun; Barrion, Albert
2006-11-01
Biodiversity studies in ecology often begin with the fitting and documentation of sampling data. This study was conducted to approximate functions for sampling data and to document the sampling information using artificial neural network algorithms, based on invertebrate data sampled in an irrigated rice field. Three types of sampling data, i.e., the curve of species richness vs. sample size, the rarefaction curve, and the curve of mean abundance of newly sampled species vs. sample size, are fitted and documented using a BP (backpropagation) network and an RBF (radial basis function) network. For comparison, the Arrhenius model, the rarefaction model, and the power function were tested for their ability to fit these data. The results show that the BP and RBF networks fit the data better than these models, with smaller errors. BP and RBF networks can fit non-linear functions (sampling data) with specified accuracy and do not require mathematical assumptions. In addition to interpolation, the BP network can be used to extrapolate the functions, and the asymptote of the sampling data can be drawn. The BP network takes longer to train and its results are less stable compared to the RBF network. The RBF network requires more neurons to fit functions and generally may not be used to extrapolate them. The mathematical function for sampling data can be exactly fitted using artificial neural network algorithms by adjusting the desired accuracy and maximum iterations. The total number of functional species of invertebrates in the tropical irrigated rice field is extrapolated as 140 to 149 using the trained BP network, similar to the observed richness.
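As an illustrative sketch (synthetic data, scikit-learn in place of the authors' implementation), a species-accumulation curve can be fit with both a power-function model and a small backpropagation network:

```python
# Fit a species-accumulation curve two ways: a power-function (Arrhenius-type)
# model via nonlinear least squares, and a one-hidden-layer neural network.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.neural_network import MLPRegressor

n = np.arange(1, 101, dtype=float)                 # sample size
s = 140 * (1 - np.exp(-0.03 * n))                  # species richness (synthetic)
s += np.random.default_rng(2).normal(0, 2, n.size)

# Power-function model S = c * n^z
(c, z), _ = curve_fit(lambda n, c, z: c * n**z, n, s, p0=[10, 0.3])

# Backpropagation network fit to the same curve
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=20000,
                   tol=1e-7, random_state=0)
net.fit(n.reshape(-1, 1), s)
print("power fit:", c, z, "  NN prediction at n=120:",
      net.predict([[120.0]])[0])   # extrapolation: use with care, as noted above
```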
Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.
2016-01-01
Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
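The fixed-lifetime strategy can be sketched in a few lines of Python: with both lifetimes pinned, only the amplitude and the short-lifetime fraction remain free, which is what stabilizes low-count fits. The values below are illustrative, not the paper's instrument settings.

```python
# Two-component fluorescence decay fit with fixed lifetimes: only the scale
# and the short-lifetime fraction are free parameters. Illustrative values.
import numpy as np
from scipy.optimize import curve_fit

TAU1, TAU2 = 0.4, 2.5        # ns, assumed known (e.g., free/bound NADH)

def decay(t, a, frac):
    return a * (frac * np.exp(-t / TAU1) + (1 - frac) * np.exp(-t / TAU2))

t = np.linspace(0, 10, 42)   # 42 time bins, as in the binning result above
rng = np.random.default_rng(3)
counts = rng.poisson(decay(t, 700.0, 0.7))          # low photon count regime

(a, frac), _ = curve_fit(decay, t, counts, p0=[500.0, 0.5],
                         bounds=([0, 0], [np.inf, 1]))
print("fitted fraction of short-lifetime component:", frac)
```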
Coral-Ghanem, Cleusa; Alves, Milton Ruiz
2008-01-01
To evaluate the clinical performance of Monocurve and Bicurve (Soper-McGuire design) rigid gas-permeable contact lens fitting in patients with keratoconus. A prospective and randomized comparative clinical trial was conducted with a minimum follow-up of six months in two groups of 63 patients. One group was fitted with Monocurve contact lenses and the other with the Bicurve Soper-McGuire design. Study variables included the fluorescein pattern of the lens-to-cornea fitting relationship, location and morphology of the cone, presence and degree of punctate keratitis and other corneal surface alterations, topographic changes, visual acuity for distance corrected with contact lenses, and survival analysis for remaining with the same contact lens design during the study. During the follow-up there was a decrease in the number of eyes with advanced and central cones fitted with Monocurve lenses, and an increase in those fitted with the Soper-McGuire design. In the Monocurve group, a flattening of both the steepest and the flattest keratometric curve was observed. In the Soper-McGuire group, a steepening of the flattest keratometric curve and a flattening of the steepest keratometric curve were observed. There was a decrease in best-corrected visual acuity with contact lens in the Monocurve group. Survival rates (remaining with the same lens design) were 60.32% for the Monocurve lens and 71.43% for the Soper-McGuire lens at a mean follow-up of six months. This study showed that, owing to the changes observed in corneal topography, the same contact lens design did not provide an ideal fitting for all patients during the follow-up period. The Soper-McGuire lenses performed better than the Monocurve lenses in advanced and central keratoconus.
3D spherical-cap fitting procedure for (truncated) sessile nano- and micro-droplets & -bubbles.
Tan, Huanshu; Peng, Shuhua; Sun, Chao; Zhang, Xuehua; Lohse, Detlef
2016-11-01
In the study of nanobubbles, nanodroplets or nanolenses immobilised on a substrate, a cross-section of a spherical cap is widely applied to extract geometrical information from atomic force microscopy (AFM) topographic images. In this paper, we have developed a comprehensive 3D spherical-cap fitting procedure (3D-SCFP) to extract morphologic characteristics of complete or truncated spherical caps from AFM images. Our procedure integrates several advanced digital image analysis techniques to construct a 3D spherical-cap model, from which the geometrical parameters of the nanostructures are extracted automatically by a simple algorithm. The procedure takes into account all valid data points in the construction of the 3D spherical-cap model to achieve high fidelity in morphology analysis. We compare our 3D fitting procedure with the commonly used 2D cross-sectional profile fitting method to determine the contact angle of a complete spherical cap and a truncated spherical cap. The results from 3D-SCFP are consistent and accurate, while 2D fitting is unavoidably arbitrary in the selection of the cross-section and has a much lower number of data points on which the fitting can be based, which in addition is biased to the top of the spherical cap. We expect that the developed 3D spherical-cap fitting procedure will find many applications in imaging analysis.
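The core geometric step of such a procedure can be sketched as an algebraic least-squares sphere fit in Python; this is an assumed simplification, not the authors' full 3D-SCFP pipeline, and the data file is hypothetical.

```python
# Algebraic least-squares sphere through all valid (x, y, z) surface points,
# from which cap radius and contact angle follow. Sketch only.
import numpy as np

def fit_sphere(pts):
    """pts: (N, 3) array of surface coordinates from the AFM image."""
    x, y, z = pts.T
    # Sphere: x^2 + y^2 + z^2 = 2*x0*x + 2*y0*y + 2*z0*z + d, a linear system
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones(len(pts))])
    b = x**2 + y**2 + z**2
    (x0, y0, z0, d), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(d + x0**2 + y0**2 + z0**2)
    return np.array([x0, y0, z0]), r

# Contact angle of a cap sitting on the substrate plane z = 0:
center, r = fit_sphere(np.loadtxt("cap_points.xyz"))   # hypothetical file
theta = np.degrees(np.arccos(-center[2] / r))          # center below plane -> theta < 90
print("radius:", r, "contact angle (deg):", theta)
```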
Procedures for setting curve advisory speed.
DOT National Transportation Integrated Search
2009-08-01
The procedures described in this handbook are intended to improve consistency in curve signing and driver compliance with the advisory speed. The handbook describes guidelines for determining when an advisory speed is needed, criteria for identifying...
Region-of-interest determination and bit-rate conversion for H.264 video transcoding
NASA Astrophysics Data System (ADS)
Huang, Shu-Fen; Chen, Mei-Juan; Tai, Kuang-Han; Li, Mian-Shiuan
2013-12-01
This paper presents a video bit-rate transcoder for baseline profile in H.264/AVC standard to fit the available channel bandwidth for the client when transmitting video bit-streams via communication channels. To maintain visual quality for low bit-rate video efficiently, this study analyzes the decoded information in the transcoder and proposes a Bayesian theorem-based region-of-interest (ROI) determination algorithm. In addition, a curve fitting scheme is employed to find the models of video bit-rate conversion. The transcoded video will conform to the target bit-rate by re-quantization according to our proposed models. After integrating the ROI detection method and the bit-rate transcoding models, the ROI-based transcoder allocates more coding bits to ROI regions and reduces the complexity of the re-encoding procedure for non-ROI regions. Hence, it not only keeps the coding quality but improves the efficiency of the video transcoding for low target bit-rates and makes the real-time transcoding more practical. Experimental results show that the proposed framework gets significantly better visual quality.
A pulsed injection parahydrogen generator and techniques for quantifying enrichment.
Feng, Bibo; Coffey, Aaron M; Colon, Raul D; Chekmenev, Eduard Y; Waddell, Kevin W
2012-01-01
A device is presented for efficiently enriching parahydrogen by pulsed injection of ambient hydrogen gas. Hydrogen input to the generator is pulsed at high pressure to a catalyst chamber making thermal contact with the cold head of a closed-cycle cryocooler maintained between 15 and 20 K. The system enables fast production (0.9 standard liters per minute) and allows for a wide range of production targets. Production rates can be systematically adjusted by varying the actuation sequence of high-pressure solenoid valves, which are controlled via an open source microcontroller to sample all combinations between fast and thorough enrichment by varying duration of hydrogen contact in the catalyst chamber. The entire enrichment cycle from optimization to quantification and storage kinetics are also described. Conversion of the para spin-isomer to orthohydrogen in borosilicate tubes was measured at 8 min intervals over a period of 64 h with a 12 T NMR spectrometer. These relaxation curves were then used to extract initial enrichment by exploiting the known equilibrium (relaxed) distribution of spin isomers with linear least squares fitting to a single exponential decay curve with an estimated error less than or equal to 1%. This procedure is time-consuming, but requires only one sample pressurized to atmosphere. Given that tedious matching to external references are unnecessary with this procedure, we find it to be useful for periodic inspection of generator performance. The equipment and procedures offer a variation in generator design that eliminate the need to meter flow while enabling access to increased rates of production. These tools for enriching and quantifying parahydrogen have been in steady use for 3 years and should be helpful as a template or as reference material for building and operating a parahydrogen production facility. Copyright © 2011 Elsevier Inc. All rights reserved. PMID:22188975
Design data for radars based on 13.9 GHz Skylab scattering coefficient measurements
NASA Technical Reports Server (NTRS)
Moore, R. K. (Principal Investigator)
1974-01-01
The author has identified the following significant results. Measurements made at 13.9 GHz with the radar scatterometer on Skylab have been combined to produce median curves of the variation of scattering coefficient with angle of incidence out to 45 deg. Because of the large number of observations, and the large area averaged for each measured data point, these curves may be used as a new design base for radars. A reasonably good fit at larger angles is obtained using the theoretical expression based on an exponential height correlation function and also using Lambert's law. For angles under 10 deg, a different fit based on the exponential correlation function, and a fit based on geometric optics expressions are both reasonably valid.
Foveal Curvature and Asymmetry Assessed Using Optical Coherence Tomography.
VanNasdale, Dean A; Eilerman, Amanda; Zimmerman, Aaron; Lai, Nicky; Ramsey, Keith; Sinnott, Loraine T
2017-06-01
The aims of this study were to use cross-sectional optical coherence tomography imaging and custom curve fitting software to evaluate and model the foveal curvature as a spherical surface and to compare the radius of curvature in the horizontal and vertical meridians and test the sensitivity of this technique to anticipated meridional differences. Six 30-degree foveal-centered radial optical coherence tomography cross-section scans were acquired in the right eye of 20 clinically normal subjects. Cross sections were manually segmented, and custom curve fitting software was used to determine foveal pit radius of curvature using the central 500, 1000, and 1500 μm of the foveal contour. Radius of curvature was compared across different fitting distances. Root mean square error was used to determine goodness of fit. The radius of curvature was compared between the horizontal and vertical meridians for each fitting distance. The radius of curvature was significantly different when comparing each of the three fitting distances (P < .01 for each comparison). The average radii of curvature were 970 μm (95% confidence interval [CI], 913 to 1028 μm), 1386 μm (95% CI, 1339 to 1439 μm), and 2121 μm (95% CI, 2066 to 2183 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. Root mean square error was also significantly different when comparing each fitting distance (P < .01 for each comparison). The average root mean square errors were 2.48 μm (95% CI, 2.41 to 2.53 μm), 6.22 μm (95% CI, 5.77 to 6.60 μm), and 13.82 μm (95% CI, 12.93 to 14.58 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. The radius of curvature differed significantly between the horizontal and vertical meridians only at the 1000- and 1500-μm fitting distances (P < .01 for each), with the horizontal meridian being flatter than the vertical. The foveal contour can be modeled as a sphere with low curve fitting error over a limited distance, and this technique is capable of detecting subtle foveal contour differences between meridians.
A New Approach to the Internal Calibration of Reverberation-Mapping Spectra
NASA Astrophysics Data System (ADS)
Fausnaugh, M. M.
2017-02-01
We present a new procedure for the internal (night-to-night) calibration of time-series spectra, with specific applications to optical AGN reverberation mapping data. The traditional calibration technique assumes that the narrow [O iii] λ5007 emission-line profile is constant in time; given a reference [O iii] λ5007 line profile, nightly spectra are aligned by fitting for a wavelength shift, a flux rescaling factor, and a change in the spectroscopic resolution. We propose the following modifications to this procedure: (1) we stipulate a constant spectral resolution for the final calibrated spectra, (2) we employ a more flexible model for changes in the spectral resolution, and (3) we use a Bayesian modeling framework to assess uncertainties in the calibration. In a test case using data for MCG+08-11-011, these modifications result in a calibration precision of ~1 millimagnitude, which is approximately a factor of five improvement over the traditional technique. At this level, other systematic issues (e.g., the nightly sensitivity functions and Fe II contamination) limit the final precision of the observed light curves. We implement this procedure as a python package (mapspec), which we make available to the community.
Computational study of Ca, Sr and Ba under pressure
NASA Astrophysics Data System (ADS)
Jona, F.; Marcus, P. M.
2006-05-01
A first-principles procedure for the calculation of equilibrium properties of crystals under hydrostatic pressure is applied to Ca, Sr and Ba. The procedure is based on minimizing the Gibbs free energy G (at zero temperature) with respect to the structure at a given pressure p, and hence does not require the equation of state to fix the pressure. The calculated lattice constants of Ca, Sr and Ba are shown to be generally closer to measured values than previous calculations using other procedures. In particular for Ba, where careful and extensive pressure data are available, the calculated lattice parameters fit measurements to about 1% in three different phases, both cubic and hexagonal. Rigid-lattice transition pressures between phases which come directly from the crossing of G(p) curves are not close to measured transition pressures. One reason is the need to include zero-point energy (ZPE) of vibration in G. The ZPE of cubic phases is calculated with a generalized Debye approximation and applied to Ca and Sr, where it produces significant shifts in transition pressures. An extensive tabulation is given of structural parameters and elastic constants from the literature, including both theoretical and experimental results.
Compound windows of the Hénon-map
NASA Astrophysics Data System (ADS)
Lorenz, Edward N.
2008-08-01
For the two-parameter second-order Hénon map, the shapes and locations of the periodic windows (continua of parameter values for which solutions x0, x1, … can be stably periodic, embedded in larger regions where chaotic solutions or solutions of other periods prevail) are found by a random searching procedure and displayed graphically. Many windows have a typical shape, consisting of a central “body” from which four narrow “antennae” extend. Such windows, to be called compound windows, are often arranged in bands, to be called window streets, that are made up largely of small detected but poorly resolved compound windows. For each fundamental subwindow (the portion of a window where a fundamental period prevails) a stability measure U is introduced; where the solution is stable, |U|<1. Curves of constant U are found by numerical integration. Along one line in parameter space the Hénon map reduces to the one-parameter first-order logistic map, and two antennae from each compound window intersect this line. The curves where U=1 and U=-1 that bound either antenna are close together within these intersections, but, as either curve with U=-1 leaves the line, it diverges from the curve where U=1, crosses the other curve where U=-1, and nears the other curve where U=1, forming another antenna. The region bounded by the numerically determined curves coincides with the subwindow as found by random searching. A fourth-degree equation for an idealized curve of constant U is established. Points in parameter space producing periodic solutions where x0=xm=0, for given values of m, are found to lie on Cantor sets of curves that closely fit the window streets. Points producing solutions where x0=xm=0 and satisfying a third condition, approximating the condition that xn be bounded as n→-∞, lie on curves, to be called street curves of order m, that approximate individual members of the Cantor set and individual window streets. Compound windows of period m+m′ tend to occur near the intersections of street curves of orders m and m′. Some exceptions to what appear to be fairly general results are noted. The exceptions render it difficult to establish general theorems.
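A random search of the kind described can be sketched in Python using the standard Hénon form x_{n+1} = 1 - a*x_n^2 + y_n, y_{n+1} = b*x_n, with orbit stability judged from the eigenvalues of the product of Jacobians along the orbit (the role played by the stability measure U); the parameter ranges and tolerances below are illustrative.

```python
# Random search for stable periodic windows of the Hénon map. An orbit of
# period m is stable when both eigenvalues of the product of Jacobians along
# the orbit lie inside the unit circle.
import numpy as np

def stable_period(a, b, n_trans=1000, n_check=64, tol=1e-9):
    x, y = 0.0, 0.0
    for _ in range(n_trans):                       # discard transient
        x, y = 1 - a * x * x + y, b * x
        if abs(x) > 1e6:
            return None                            # orbit diverged
    xs, ys = [x], [y]
    for _ in range(n_check):
        x, y = 1 - a * x * x + y, b * x
        xs.append(x); ys.append(y)
    for m in range(1, n_check):                    # smallest recurrence found
        if abs(xs[m] - xs[0]) < tol and abs(ys[m] - ys[0]) < tol:
            J = np.eye(2)
            for k in range(m):                     # product of Jacobians
                J = np.array([[-2 * a * xs[k], 1.0], [b, 0.0]]) @ J
            if np.all(np.abs(np.linalg.eigvals(J)) < 1):
                return m
            return None
    return None

rng = np.random.default_rng(0)
for _ in range(20):                                # random search in (a, b)
    a, b = rng.uniform(1.0, 1.4), rng.uniform(0.2, 0.4)
    m = stable_period(a, b)
    if m:
        print(f"stable period {m} window near a={a:.4f}, b={b:.4f}")
```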
Water in the atmosphere of HD 209458b from 3.6-8 μm IRAC photometric observations in primary transit
NASA Astrophysics Data System (ADS)
Beaulieu, J. P.; Kipping, D. M.; Batista, V.; Tinetti, G.; Ribas, I.; Carey, S.; Noriega-Crespo, J. A.; Griffith, C. A.; Campanella, G.; Dong, S.; Tennyson, J.; Barber, R. J.; Deroo, P.; Fossey, S. J.; Liang, D.; Swain, M. R.; Yung, Y.; Allard, N.
2010-12-01
The hot Jupiter HD 209458b was observed during primary transit at 3.6, 4.5, 5.8 and 8.0 μm using the Infrared Array Camera (IRAC) on the Spitzer Space Telescope. We describe the procedures we adopted to correct for the systematic effects present in the IRAC data and the subsequent analysis. The light curves, including limb-darkening effects, were fitted using Markov Chain Monte Carlo and prayer-bead Monte Carlo techniques, obtaining almost identical results. The final depth measurements obtained by a combined Markov Chain Monte Carlo fit are at 3.6 μm, 1.469 ± 0.013 and 1.448 ± 0.013 per cent; at 4.5 μm, 1.478 ± 0.017 per cent; at 5.8 μm, 1.549 ± 0.015 per cent; and at 8.0 μm, 1.535 ± 0.011 per cent. Our results clearly indicate the presence of water in the planetary atmosphere. Our broad-band photometric measurements with IRAC prevent us from determining the additional presence of other molecules such as CO, CO2 and methane, for which spectroscopy is needed. While water vapour with a mixing ratio of ? combined with thermal profiles retrieved from the day side may provide a very good fit to our observations, this data set alone is unable to resolve completely the degeneracy between water abundance and atmospheric thermal profile.
Outburst-related period changes of recurrent nova CI aquilae
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, R. E.; Honeycutt, R. K., E-mail: honey@astro.indiana.edu, E-mail: rewilson@ufl.edu
2014-11-01
Pre-outburst and post-outburst light curves and post-outburst eclipse timings are analyzed to measure any period (P) change related to nova CI Aql's outburst of early 2000 and a mean post-outburst dP/dt, which then lead to estimates of the accreting component's rate of mass (M) change and its overall outburst-related change of mass over roughly a decade of observations. We apply a recently developed procedure for unified analysis of three timing-related data types (light curves, radial velocities, and eclipse timings), although with only light curves and timings in this case. Fits to the data are reasonably good without need for a disk in the light-curve model, although the disk certainly exists and has an important role in our post-outburst mass flow computations. Initial experiments showed that, although there seems to be an accretion hot spot, it has essentially no effect on derived outburst-related ΔP or on post-outburst dP/dt. Use of atomic time (HJED) in place of HJD also has essentially nil effect on ΔP and dP/dt. We find ΔP consistently negative in various types of solutions, although at best only marginally significant statistically in any one experiment. Pre-outburst HJD₀ and P results are given, as are post-outburst HJD₀, P, and dP/dt, with light curves and eclipse times as joint input, and also with only eclipse time input. Post-outburst dP/dt is negative at about 2.4σ. Explicit formulae for mass transfer rates and epoch-to-epoch mass change are developed and applied.
Han, Hyung Joon; Choi, Sae Byeol; Park, Man Sik; Lee, Jin Suk; Kim, Wan Bae; Song, Tae Jin; Choi, Sang Yong
2011-07-01
Single port laparoscopic surgery has come to the forefront of minimally invasive surgery. For those familiar with conventional techniques, however, this type of operation demands a different type of eye/hand coordination and involves unfamiliar working instruments. Herein, the authors describe the learning curve and the clinical outcomes of single port laparoscopic cholecystectomy for 150 consecutive patients with benign gallbladder disease. All patients underwent single port laparoscopic cholecystectomy using a homemade glove port by one of five operators with different levels of experiences of laparoscopic surgery. The learning curve for each operator was fitted using the non-linear ordinary least squares method based on a non-linear regression model. Mean operating time was 77.6 ± 28.5 min. Fourteen patients (6.0%) were converted to conventional laparoscopic cholecystectomy. Complications occurred in 15 patients (10.0%), as follows: bile duct injury (n = 2), surgical site infection (n = 8), seroma (n = 2), and wound pain (n = 3). One operator achieved a learning curve plateau at 61.4 min per procedure after 8.5 cases and his time improved by 95.3 min as compared with initial operation time. Younger surgeons showed significant decreases in mean operation time and achieved stable mean operation times. In particular, younger surgeons showed significant decreases in operation times after 20 cases. Experienced laparoscopic surgeons can safely perform single port laparoscopic cholecystectomy using conventional or angled laparoscopic instruments. The present study shows that an operator can overcome the single port laparoscopic cholecystectomy learning curve in about eight cases.
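A hedged sketch of such a plateau fit in Python follows, using an exponential learning-curve model with the abstract's plateau and improvement values as illustrative ground truth; this is one plausible parameterization, not necessarily the authors' exact model.

```python
# Learning-curve fit: operating time as an exponentially decaying function of
# case number; the asymptote is the plateau and the decay rate locates it.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(n, plateau, drop, rate):
    return plateau + drop * np.exp(-rate * n)

cases = np.arange(1, 31, dtype=float)
minutes = learning_curve(cases, 61.4, 95.3, 0.35)          # illustrative truth
minutes += np.random.default_rng(4).normal(0, 8, cases.size)

(plateau, drop, rate), _ = curve_fit(learning_curve, cases, minutes,
                                     p0=[60, 90, 0.3])
n_plateau = 3.0 / rate            # ~95% of the improvement achieved
print(f"plateau {plateau:.1f} min, reached after ~{n_plateau:.1f} cases")
```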
Comparative testing of dark matter models with 15 HSB and 15 LSB galaxies
NASA Astrophysics Data System (ADS)
Kun, E.; Keresztes, Z.; Simkó, A.; Szűcs, G.; Gergely, L. Á.
2017-12-01
Context. We assemble a database of 15 high surface brightness (HSB) and 15 low surface brightness (LSB) galaxies, for which surface brightness density and spectroscopic rotation curve data are both available and representative for various morphologies. We use this dataset to test the Navarro-Frenk-White, the Einasto, and the pseudo-isothermal sphere dark matter models. Aims: We investigate the compatibility of the pure baryonic model and baryonic plus one of the three dark matter models with observations on the assembled galaxy database. When a dark matter component improves the fit with the spectroscopic rotational curve, we rank the models according to the goodness of fit to the datasets. Methods: We constructed the spatial luminosity density of the baryonic component based on the surface brightness profile of the galaxies. We estimated the mass-to-light (M/L) ratio of the stellar component through a previously proposed color-mass-to-light ratio relation (CMLR), which yields stellar masses independent of the photometric band. We assumed an axisymmetric baryonic mass model with variable axis ratios together with one of the three dark matter models to provide the theoretical rotational velocity curves, and we compared them with the dataset. In a second attempt, we addressed the question whether the dark component could be replaced by a pure baryonic model with fitted M/L ratios, varied over ranges consistent with CMLR relations derived from the available stellar population models. We employed the Akaike information criterion to establish the performance of the best-fit models. Results: For 7 galaxies (2 HSB and 5 LSB), neither model fits the dataset within the 1σ confidence level. For the other 23 cases, one of the models with dark matter explains the rotation curve data best. According to the Akaike information criterion, the pseudo-isothermal sphere emerges as most favored in 14 cases, followed by the Navarro-Frenk-White (6 cases) and the Einasto (3 cases) dark matter models. We find that the pure baryonic model with fitted M/L ratios falls within the 1σ confidence level for 10 HSB and 2 LSB galaxies, at the price of growing the M/Ls on average by a factor of two, but the fits are inferior compared to the best-fitting dark matter model.
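For the pseudo-isothermal case, the fit can be sketched as follows in Python; the data file, baryonic curve, and starting values are placeholders for the paper's mass models.

```python
# Rotation-curve fit with a pseudo-isothermal halo,
#   v_halo^2(r) = 4*pi*G*rho0*rc^2 * (1 - (rc/r)*arctan(r/rc)),
# added in quadrature to a fixed baryonic contribution.
import numpy as np
from scipy.optimize import curve_fit

G = 4.30091e-6   # kpc (km/s)^2 / Msun

def v_total(r, rho0, rc, v_bar):
    v_halo2 = 4 * np.pi * G * rho0 * rc**2 * (1 - (rc / r) * np.arctan(r / rc))
    return np.sqrt(v_bar**2 + v_halo2)

# r [kpc], v_obs, v_err [km/s], v_bar from the photometry-based mass model
r, v_obs, v_err, v_bar = np.loadtxt("rotation_curve.dat", unpack=True)

popt, pcov = curve_fit(lambda r, rho0, rc: v_total(r, rho0, rc, v_bar),
                       r, v_obs, sigma=v_err, p0=[1e7, 2.0],
                       absolute_sigma=True)
chi2 = np.sum(((v_obs - v_total(r, *popt, v_bar)) / v_err) ** 2)
print("rho0 [Msun/kpc^3], rc [kpc]:", popt, " chi2/dof:", chi2 / (r.size - 2))
```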
Three-dimensional trend mapping from wire-line logs
Doveton, J.H.; Ke-an, Z.
1985-01-01
Mapping of lithofacies and porosities of stratigraphic units is complicated because these properties vary in three dimensions. The method of moments was proposed by Krumbein and Libby (1957) as a technique to aid in resolving this problem. Moments are easily computed from wireline logs and are simple statistics which summarize vertical variation in a log trace. Combinations of moment maps have proved useful in understanding vertical and lateral changes in lithology of sedimentary rock units. Although moments have meaning both as statistical descriptors and as mechanical properties, they also define polynomial curves which approximate lithologic changes as a function of depth. These polynomials can be fitted by least-squares methods, partitioning major trends in rock properties from fine-scale fluctuations. Analysis of variance yields the degree of fit of any polynomial and measures the proportion of vertical variability expressed by any moment or combination of moments. In addition, polynomial curves can be differentiated to determine depths at which pronounced expressions of facies occur and to determine the locations of boundaries between major lithologic subdivisions. Moments can be estimated at any location in an area by interpolating from log moments at control wells. A matrix algebra operation then converts moment estimates to coefficients of a polynomial function which describes a continuous curve of lithologic variation with depth. If this procedure is applied to a grid of geographic locations, the result is a model of variability in three dimensions. Resolution of the model is determined largely by the number of moments used in its generation. The method is illustrated with an analysis of lithofacies in the Simpson Group of south-central Kansas; the three-dimensional model is shown as cross sections and slice maps. In this study, the gamma-ray log is used as a measure of shaliness of the unit. However, the method is general and can be applied, for example, to suites of neutron, density, or sonic logs to produce three-dimensional models of porosity in reservoir rocks. © 1985 Plenum Publishing Corporation.
Longo, M L; Vargas Junior, F M; Cansian, K; Souza, M R; Burim, P C; Silva, A L A; Costa, C M; Seno, L O
2018-04-14
The main objective of this research was to conduct an exploratory study of the lactation curve in order to characterize the productive potential of Pantaneiro ewes and lambs. Fifty ewes were bred using four rams in two different mating seasons. The ewes were kept with their lambs on pasture of Brachiaria brizantha. Ewe body score, ewe weight, and lamb weight were evaluated. Milk sampling was performed every week. In the morning before milk collections, the ewes were treated with 1 IU of oxytocin (intramuscular) for complete milking. Lambs were separated from the ewes for 4 h and milk collections were performed. The total milk production over 24 h was estimated by multiplying the production of this period (4 h) by 6. The data were analyzed using the MIXED procedure (P < 0.05) in SAS. Milk production data were fitted using Wood's incomplete gamma function, and lamb growth data were fitted using the Gompertz equation. The average milk production of the ewes was 1.03 kg/day. Younger ewes had the lowest milk production (18 = 798 ± 330, 24 = 1001 ± 440, 36 = 1100 ± 490, and 48 = 1106 ± 490 g/day). Ewe body score at lambing affected initial milk production (1.0 = 816 ± 660, 1.5 = 1089 ± 105, and 2.0 = 1424 ± 1600 g/day). Lambs were weaned with an average weight of 20.3 kg. Daily weight gain from birth to weaning was 181 g. Locally adapted Pantaneiro ewes showed a linearly decreasing lactation curve, with reduced production from the second week of lactation. Overall, evaluation of dairy production and lamb performance revealed great variation, denoting potential for selection.
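Wood's curve is straightforward to fit with standard tools; a minimal Python sketch follows (hypothetical data file and starting values).

```python
# Fit Wood's incomplete gamma lactation curve, y(t) = a * t^b * exp(-c*t),
# to weekly milk-yield records.
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    return a * t**b * np.exp(-c * t)

week, milk = np.loadtxt("ewe_milk.dat", unpack=True)   # t in weeks, kg/day
(a, b, c), _ = curve_fit(wood, week, milk, p0=[1.0, 0.2, 0.05])

t_peak = b / c                       # time of peak yield (when b, c > 0)
print(f"peak {wood(t_peak, a, b, c):.2f} kg/day at week {t_peak:.1f}")
# A fitted b <= 0 with c > 0 reproduces the monotonically decreasing
# curves reported above for Pantaneiro ewes.
```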
Day-night fluctuation of pulse oximetry: an exploratory study in pediatric inpatients.
Vargas, Mario H; Heyaime-Lalane, Jorge; Pérez-Rodríguez, Liliana; Zúñiga-Vázquez, Guillermo; Furuya, María Elena Y
2008-01-01
Pulse oximetry is a simple and non-invasive procedure widely used nowadays in clinical practice. However, it is unclear whether SpO2 values are constant throughout the 24 hours of the day or have periodic fluctuations. In the present study we evaluated whether progressive day-night variations of SpO2 values occur in children. Pulse oximetry (Nonin 2500) was carried out approximately every 2 hours during a 24-hour period in pediatric patients hospitalized due to different diseases but without acute or chronic respiratory diseases. Measurements were analyzed through the cosinor method (sinusoidal curve fitting). A total of 131 patients (23 days to 16 years old) were studied. A sinusoidal fitting of the SpO2 values was accomplished in 84.7% of children. According to these curves, maximal SpO2 values occurred in the late afternoon [4:53 PM (3:49-5:32 PM), median (quartile 1-quartile 3)], while minimal values appeared in the first hours of the day [3:06 AM (2:12-4:08 AM)]. This pattern was the same in sleeping or awake children. More than half of these sinusoidal curves had a period near 24 hours (between 20 and 28 hours). An additional finding was that maximal and minimal SpO2 values diminished with age (approximately 0.15 and approximately 0.13% SpO2 per year, respectively). In children less than six years old, the 5th percentile of SpO2 values was 93.8% in the late afternoon and 89.8% in the early hours of the day, while the corresponding figures for older children were 91.0% and 88.5%, respectively. Our results suggest that, regardless of the influence of sleep, in most children SpO2 follows a progressive fluctuation during a 24-hour cycle, a pattern suggestive of a circadian rhythm. A prospective study in healthy children is warranted.
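The cosinor method reduces to linear least squares once the period is fixed; a minimal Python sketch with hypothetical readings:

```python
# Single-cosinor fit: SpO2 regressed on cosine and sine terms of a 24-h
# period, giving MESOR, amplitude, and acrophase (clock time of the maximum).
import numpy as np

t = np.array([0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22], dtype=float)  # h
spo2 = np.array([95.0, 94.6, 94.4, 94.9, 95.5, 96.0, 96.4, 96.7,
                 97.0, 96.8, 96.2, 95.5])

w = 2 * np.pi / 24.0
X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
mesor, beta, gamma = np.linalg.lstsq(X, spo2, rcond=None)[0]

amplitude = np.hypot(beta, gamma)
acrophase_h = (np.arctan2(gamma, beta) / w) % 24   # hour of peak SpO2
print(f"MESOR {mesor:.2f}%, amplitude {amplitude:.2f}%, "
      f"peak at ~{acrophase_h:.1f} h")
```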
Yamamura, S; Momose, Y
2001-01-16
A pattern-fitting procedure for quantitative analysis of crystalline pharmaceuticals in solid dosage forms using X-ray powder diffraction data is described. This method is based on a procedure for pattern-fitting in crystal structure refinement, and observed X-ray scattering intensities were fitted to analytical expressions including some fitting parameters, i.e. scale factor, peak positions, peak widths and degree of preferred orientation of the crystallites. All fitting parameters were optimized by the non-linear least-squares procedure. Then the weight fraction of each component was determined from the optimized scale factors. In the present study, well-crystallized binary systems, zinc oxide-zinc sulfide (ZnO-ZnS) and salicylic acid-benzoic acid (SA-BA), were used as the samples. In analysis of the ZnO-ZnS system, the weight fraction of ZnO or ZnS could be determined quantitatively in the range of 5-95% in the case of both powders and tablets. In analysis of the SA-BA systems, the weight fraction of SA or BA could be determined quantitatively in the range of 20-80% in the case of both powders and tablets. Quantitative analysis applying this pattern-fitting procedure showed better reproducibility than other X-ray methods based on the linear or integral intensities of particular diffraction peaks. Analysis using this pattern-fitting procedure also has the advantage that the preferred orientation of the crystallites in solid dosage forms can be also determined in the course of quantitative analysis.
AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves
NASA Astrophysics Data System (ADS)
Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.
2017-02-01
ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
Luthra, Suvitesh; Ramady, Omar; Monge, Mary; Fitzsimons, Michael G; Kaleta, Terry R; Sundt, Thoralf M
2015-06-01
Markers of operating room (OR) efficiency in cardiac surgery are focused on "knife to skin" and "start time tardiness." These do not evaluate the middle and later parts of the cardiac surgical pathway. The purpose of this analysis was to evaluate knife to skin time as an efficiency marker in cardiac surgery. We looked at knife to skin time, procedure time, and transfer times in the cardiac operational pathway for their correlation with predefined indices of operational efficiency (Index of Operation Efficiency - InOE, Surgical Index of Operational Efficiency - sInOE). A regression analysis was performed to test the goodness of fit of the regression curves estimated for InOE relative to the times on the operational pathway. The mean knife to skin time was 90.6 ± 13 minutes (23% of total OR time). The mean procedure time was 282 ± 123 minutes (71% of total OR time). Utilization efficiencies were highest for aortic valve replacement and coronary artery bypass grafting and lowest for complex aortic procedures. There were no significant procedure-specific or team-specific differences for standard procedures. Procedure times correlated the strongest with InOE (r = -0.98, p < 0.01). Compared to procedure times, knife to skin is not as strong an indicator of efficiency. A statistically significant linear dependence on InOE was observed with procedure times only. Procedure times are a better marker of OR efficiency than knife to skin in cardiac cases. Strategies to increase OR utilization and efficiency should address procedure times in addition to knife to skin times. © 2015 Wiley Periodicals, Inc.
Photometric Supernova Classification with Machine Learning
NASA Astrophysics Data System (ADS)
Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.
2016-08-01
Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
Quantifying the cognitive cost of laparo-endoscopic single-site surgeries: Gaze-based indices.
Di Stasi, Leandro L; Díaz-Piedra, Carolina; Ruiz-Rabelo, Juan Francisco; Rieiro, Héctor; Sanchez Carrion, Jose M; Catena, Andrés
2017-11-01
Despite the growing interest concerning the laparo-endoscopic single-site surgery (LESS) procedure, LESS presents multiple difficulties and challenges that are likely to increase the surgeon's cognitive cost, in terms of both cognitive load and performance. Nevertheless, there is currently no objective index capable of assessing the surgeon's cognitive cost while performing LESS. We assessed whether gaze-based indices might offer unique and unbiased measures to quantify LESS complexity and its cognitive cost; we expect the assessment of surgeons' cognitive cost to improve patient safety by enabling measurement of fitness-for-duty and reducing surgeon overload. Using a wearable eye tracker device, we measured gaze entropy and velocity of surgical trainees and attending surgeons during two surgical procedures (LESS vs. multiport laparoscopy surgery [MPS]). None of the participants had previous experience with LESS. They performed two exercises with different complexity levels (Low: Pattern Cut vs. High: Peg Transfer). We also collected performance and subjective data. LESS caused higher cognitive demand than MPS, as indicated by increased gaze entropy in both surgical trainees and attending surgeons (the exploration pattern became more random). Furthermore, gaze velocity was higher (the exploration pattern became more rapid) for the LESS procedure independently of the surgeon's expertise. Perceived task complexity and laparoscopic accuracy confirmed the gaze-based results. Gaze-based indices have great potential as objective and non-intrusive measures to assess surgeons' cognitive cost and fitness-for-duty. Furthermore, gaze-based indices might play a relevant role in defining future guidelines on surgeons' examinations to mark their achievements during the entire training (e.g. analyzing surgical learning curves). Copyright © 2017 Elsevier Ltd. All rights reserved.
Multi-Filter Photometric Analysis of Three β Lyrae-type Eclipsing Binary Stars
NASA Astrophysics Data System (ADS)
Gardner, T.; Hahs, G.; Gokhale, V.
2015-12-01
We present light curve analysis of three variable stars, ASAS J105855+1722.2, NSVS 5066754, and NSVS 9091101. These objects are selected from a list of β Lyrae candidates published by Hoffman et al. (2008). Light curves are generated using data collected at the 31-inch NURO telescope at Lowell Observatory in Flagstaff, Arizona in three filters: Bessell B, V, and R. Additional observations were made using the 14-inch Meade telescope at the Truman State Observatory in Kirksville, Missouri using Baader R, G, and B filters. In this paper, we present the light curves for these three objects and generate a truncated eight-term Fourier fit to these light curves. We use the Fourier coefficients from this fit to confirm ASAS J105855+1722.2 and NSVS 5066754 as β Lyrae type systems, and suggest that NSVS 9091101 may instead be an RR Lyrae-type system. We measure the O'Connell effect observed in two of these systems (ASAS J105855+1722.2 and NSVS 5066754), and quantify this effect by calculating the "Light Curve Asymmetry" (LCA) and the "O'Connell Effect Ratio" (OER).
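The truncated Fourier fit described here is a linear least-squares problem; the sketch below fits an eight-term series to hypothetical folded photometry. The phase grid, magnitudes, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
phase = np.linspace(0, 1, 200, endpoint=False)     # folded phase, hypothetical
mag = (12 + 0.3 * np.cos(2 * np.pi * phase)
          + 0.05 * np.sin(4 * np.pi * phase)
          + rng.normal(scale=0.01, size=phase.size))

N = 8                                              # truncated eight-term fit
cols = [np.ones_like(phase)]
for k in range(1, N + 1):
    cols += [np.cos(2 * np.pi * k * phase), np.sin(2 * np.pi * k * phase)]
A = np.column_stack(cols)
coeffs, *_ = np.linalg.lstsq(A, mag, rcond=None)   # a0, a1, b1, ..., a8, b8
print("first coefficients:", coeffs[:5])
```

Ratios of the fitted Fourier coefficients are what typically drive the classification step.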
Half a billion surgical cases: Aligning surgical delivery with best-performing health systems.
Shrime, Mark G; Daniels, Kimberly M; Meara, John G
2015-07-01
Surgical delivery varies 200-fold across countries. No direct correlation exists, however, between surgical delivery and health outcomes, making it difficult to pinpoint a goal for surgical scale-up. This report determines the amount of surgery that would be delivered worldwide if the world aligned itself with countries providing the best health outcomes. Annual rates of surgical delivery have been published previously for 129 countries. Five health outcomes were plotted against reported surgical delivery. Univariate and multivariate polynomial regression curves were fit, and the optimal point on each regression curve was determined by solving for first-order conditions. The country closest to the optimum for each health outcome was taken as representative of the best-performing health system. Monetary inputs to and surgical procedures provided by these systems were scaled to the global population. For 3 of the 5 health outcomes, optima could be found. Globally, 315 million procedures currently are provided annually. If global delivery mirrored the 3 best-performing countries, between 360 million and 460 million cases would be provided annually. With population growth, this will increase to approximately half a billion cases by 2030. Health systems delivering these outcomes spend approximately 10% of their GDP on health. This is the first study to provide empirical evidence for the surgical output that an ideal health system would provide. Our results project ideal delivery worldwide of approximately 550 million annual surgical cases by 2030. Copyright © 2015 Elsevier Inc. All rights reserved.
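The optimum-finding step, fitting a polynomial regression and solving the first-order condition, can be sketched as follows; the surgical-delivery and outcome numbers are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical data: surgical delivery (cases per 100,000) vs. a health outcome.
surgery_rate = np.array([500, 1500, 4000, 8000, 12000, 20000, 28000])
outcome = np.array([40.0, 55, 68, 74, 76, 75, 73])

p = np.polynomial.Polynomial.fit(surgery_rate, outcome, deg=2)
crit = p.deriv().roots().real                       # first-order condition
crit = crit[(crit > surgery_rate.min()) & (crit < surgery_rate.max())]
print("delivery rate at the optimum:", crit)
```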
[The application of non-annealing thermoluminescent dosimetry (TLD)].
Wu, J M; Chen, C S; Lan, R H
1993-06-01
Conventional use of thermoluminescence (TL) in radiation dosimetry is very time-consuming, as it requires repeated preheating and annealing procedures. In an attempt to simplify these procedures, we conducted an experiment on non-annealing TL dosimetry; this article reports the results. We used lithium fluoride (LiF) chips (TLD-100) in polystyrene under Co-60 exposure, with readout on a HARSHAW 4000 TL reading system. The TL response was analyzed for linearity, reproducibility, and fading. Because the non-annealing TL response was greatly influenced by residual electrons, the TLD calibration curves were separated into two parts: (1) a high dose region (HDR, 50-1500 cGy) and (2) a low dose region (LDR, 0-50 cGy). When TL dosimeters were exposed to a single high dose (about 500 cGy), the HDR response was reproducible within 3% and showed good linearity. For the LDR, we had to discard the tail of the glow curve in the high-temperature region, after which good linearity and reproducibility were obtained. Furthermore, fading without annealing was apparently larger than with annealing; we could keep the influence of fading within 1% by taking the TL reading one hour after exposure. In addition, a combined photon and electron exposure was also measured by non-annealing TL dosimetry, and the results were compatible with Co-60 exposure in the same system.
Toward Continuous GPS Carrier-Phase Time Transfer: Eliminating the Time Discontinuity at an Anomaly
Yao, Jian; Levine, Judah; Weiss, Marc
2015-01-01
The wide application of Global Positioning System (GPS) carrier-phase (CP) time transfer is limited by the problem of boundary discontinuity (BD). The discontinuity has two categories. One is "day boundary discontinuity," which has been studied extensively and can be solved by multiple methods [1–8]. The other category, called "anomaly boundary discontinuity (anomaly-BD)," comes from a GPS data anomaly. The anomaly can be a data gap (i.e., missing data), a GPS measurement error (i.e., bad data), or a cycle slip. An initial study of the anomaly-BD showed that the discontinuity can be fixed if the anomaly lasts no more than 20 min, using a polynomial curve-fitting strategy to repair the anomaly [9]. However, the data anomaly sometimes lasts longer than 20 min, so a better curve-fitting strategy is needed; in addition, a cycle slip, as another type of data anomaly, can occur and lead to an anomaly-BD. To solve these problems, this paper proposes a new strategy: satellite-clock-aided curve fitting with cycle-slip detection. This strategy applies the satellite clock correction to the GPS data and then performs polynomial curve fitting on the code and phase data, as before. Our study shows that the phase-data residual is only ~3 mm for all GPS satellites. The new strategy also detects and counts cycle slips by searching for the minimum curve-fitting residual. Extensive examples show that this new strategy enables us to repair up to a 40-min GPS data anomaly, regardless of whether the anomaly is due to a data gap, a cycle slip, or a combination of the two. We also find that interference with the GPS signal, known as "jamming," can lead to a time-transfer error, and that the new strategy can compensate for jamming outages, eliminating the impact of jamming on time transfer. As a whole, we greatly improve the robustness of GPS CP time transfer. PMID:26958451
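A toy version of the repair idea, under simplifying assumptions (a smooth polynomial phase trend and a single post-gap segment): fit a polynomial around the anomaly and pick the integer cycle offset that minimizes the fitting residual. All signals and constants below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 60.0, 0.5)                          # minutes
phase = 0.02 * t**2 + 1.3 * t + rng.normal(scale=0.01, size=t.size)
gap = (t > 25) & (t < 45)                            # simulated data anomaly
phase[t >= 45] += 7.0                                # true (unknown) cycle slip

def residual(n):
    # Remove a candidate integer slip from the post-gap segment, then fit.
    corrected = phase[~gap] - n * (t[~gap] >= 45)
    fit = np.polyval(np.polyfit(t[~gap], corrected, 3), t[~gap])
    return np.sum((fit - corrected) ** 2)

best = min(range(-20, 21), key=residual)
print("estimated cycle slip:", best)                 # expect 7
```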
Zhang, Yong; Green, Christopher T.; Baeumer, Boris
2014-01-01
Time-nonlocal transport models can describe non-Fickian diffusion observed in geological media, but the physical meaning of parameters can be ambiguous, and most applications are limited to curve-fitting. This study explores methods for predicting the parameters of a temporally tempered Lévy motion (TTLM) model for transient sub-diffusion in mobile–immobile like alluvial settings represented by high-resolution hydrofacies models. The TTLM model is a concise multi-rate mass transfer (MRMT) model that describes a linear mass transfer process where the transfer kinetics and late-time transport behavior are controlled by properties of the host medium, especially the immobile domain. The intrinsic connection between the MRMT and TTLM models helps to estimate the main time-nonlocal parameters in the TTLM model (which are the time scale index, the capacity coefficient, and the truncation parameter) either semi-analytically or empirically from the measurable aquifer properties. Further applications show that the TTLM model captures the observed solute snapshots, the breakthrough curves, and the spatial moments of plumes up to the fourth order. Most importantly, the a priori estimation of the time-nonlocal parameters outside of any breakthrough fitting procedure provides a reliable “blind” prediction of the late-time dynamics of subdiffusion observed in a spectrum of alluvial settings. Predictability of the time-nonlocal parameters may be due to the fact that the late-time subdiffusion is not affected by the exact location of each immobile zone, but rather is controlled by the time spent in immobile blocks surrounding the pathway of solute particles. Results also show that the effective dispersion coefficient has to be fitted due to the scale effect of transport, and the mean velocity can differ from local measurements or volume averages. The link between medium heterogeneity and time-nonlocal parameters will help to improve model predictability for non-Fickian transport in alluvial settings.
Evaluation of the swelling behaviour of iota-carrageenan in monolithic matrix tablets.
Kelemen, András; Buchholcz, Gyula; Sovány, Tamás; Pintye-Hódi, Klára
2015-08-10
The swelling properties of monolithic matrix tablets containing iota-carrageenan were studied at different pH values, with measurements of the swelling force and characterization of the profile of the swelling curve. The swelling force meter was linked to a PC by an RS232 cable and the measured data were evaluated with self-developed software. The monitor displayed the swelling force vs. time curve with the important parameters, which could be fitted with an Analysis menu. In the case of iota-carrageenan matrix tablets, it was concluded that the pH and the pressure did not influence the swelling process, and the first section of the swelling curve could be fitted by the Korsmeyer-Peppas equation. Copyright © 2015 Elsevier B.V. All rights reserved.
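A minimal sketch of fitting the Korsmeyer-Peppas power law, F(t) = k·t^n, to the first section of a swelling curve; the time points and force fractions are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1, 2, 4, 8, 15, 30, 60.0])                  # minutes, hypothetical
F = np.array([0.05, 0.08, 0.13, 0.21, 0.30, 0.44, 0.60])  # fraction of maximum

kp = lambda t, k, n: k * t**n                             # Korsmeyer-Peppas law
(k, n), _ = curve_fit(kp, t, F, p0=(0.05, 0.5))
print(f"k = {k:.3f}, n = {n:.2f}")    # n characterizes the transport mechanism
```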
NASA Astrophysics Data System (ADS)
Pang, Liping; Goltz, Mark; Close, Murray
2003-01-01
In this note, we applied the temporal moment solutions of [Das and Kluitenberg, 1996. Soil Sci. Am. J. 60, 1724] for one-dimensional advective-dispersive solute transport with linear equilibrium sorption and first-order degradation for time pulse sources to analyse soil column experimental data. Unlike most other moment solutions, these solutions consider the interplay of degradation and sorption. This permits estimation of a first-order degradation rate constant using the zeroth moment of column breakthrough data, as well as estimation of the retardation factor or sorption distribution coefficient of a degrading solute using the first moment. The method of temporal moment (MOM) formulae was applied to analyse breakthrough data from a laboratory column study of atrazine, hexazinone and rhodamine WT transport in volcanic pumice sand, as well as experimental data from the literature. Transport and degradation parameters obtained using the MOM were compared to parameters obtained by fitting breakthrough data from an advective-dispersive transport model with equilibrium sorption and first-order degradation, using the nonlinear least-square curve-fitting program CXTFIT. The results derived from using the literature data were also compared with estimates reported in the literature using different equilibrium models. The good agreement suggests that the MOM could provide an additional useful means of parameter estimation for transport involving equilibrium sorption and first-order degradation. We found that the MOM fitted breakthrough curves with tailing better than curve fitting. However, the MOM analysis requires complete breakthrough curves and relatively frequent data collection to ensure the accuracy of the moments obtained from the breakthrough data.
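The zeroth and first temporal moments at the heart of the MOM can be computed by simple numerical quadrature, as sketched below on a synthetic breakthrough curve. Converting the mean arrival time into a retardation factor requires the column length and pore-water velocity, which are omitted here.

```python
import numpy as np
from scipy.integrate import trapezoid

t = np.linspace(0, 50, 200)                            # h, hypothetical
c = np.exp(-0.5 * ((t - 20) / 4.0) ** 2)               # outlet concentration

m0 = trapezoid(c, t)                 # zeroth moment -> degradation/mass loss
m1 = trapezoid(t * c, t)             # first moment
print(f"m0 = {m0:.3f}, mean arrival time = {m1 / m0:.2f} h")
```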
Fitting the post-keratoplasty cornea with hydrogel lenses.
Katsoulos, Costas; Nick, Vasileiou; Lefteris, Karageorgiadis; Theodore, Mousafeiropoulos
2009-02-01
We report two patients who had undergone penetrating keratoplasty (three eyes in total) and who were fitted with hydrogel lenses. In the first case, a 28-year-old male presented with an interest in contact lens fitting. He had undergone corneal transplantation in both eyes about 5 years earlier. After topographies and trial fitting were performed, reverse geometry hydrogel lenses were chosen because of the globular geometry of the cornea, the resultant instability of RGPs, and personal preference. In the second case, a 26-year-old female who had also undergone penetrating keratoplasty was fitted with a hydrogel toric lens of high cylinder in the right eye. The final hydrogel lenses for the first patient incorporated a custom tricurve design, in which the second curve was steeper than the base curve and the third curve flatter than the second but still steeper than the first. Visual acuity was 6/7.5 RE and a mediocre 6/15 LE (OU 6/7.5). The second patient achieved 6/4.5 acuity RE with the high-cylinder hydrogel toric lens. In corneas exhibiting extreme protrusion, such as keratoglobus and some cases after penetrating keratoplasty, curvatures are so extreme and the cornea so globular that the fitting options narrow to sclerals, small-diameter RGPs, and reverse geometry hydrogel lenses, chosen to improve lens and optical stability. In selected cases such as these, a large-diameter inverse geometry RGP may be fitted only if the eyelid shape and tension permit it. The first case demonstrates that hydrogel lenses are a viable option when the patient has no interest in RGPs and can, in certain cases, improve vision to satisfactory levels. In other cases, graft toricity may be so high that the practitioner needs to employ hydrogel torics with large amounts of cylinder to correct vision. In such cases, the patient should be closely monitored to avoid complications from hypoxia.
Experimental characterization of wingtip vortices in the near field using smoke flow visualizations
NASA Astrophysics Data System (ADS)
Serrano-Aguilera, J. J.; García-Ortiz, J. Hermenegildo; Gallardo-Claros, A.; Parras, L.; del Pino, C.
2016-08-01
In order to predict the axial development of wingtip vortex strength, an accurate theoretical model is required. Several experimental techniques have been used to that end, e.g. PIV or hot-wire anemometry, but they imply a significant cost and effort. For this reason, we have performed experiments using the smoke-wire technique to visualize smoke streaks in six planes perpendicular to the main stream flow direction. Using this visualization technique, we obtained quantitative information regarding the vortex velocity field by means of Batchelor's model for two chord-based Reynolds numbers, Re_c = 3.33 × 10^4 and Re_c = 10^5. This theoretical vortex model was then introduced into the integration of the ordinary differential equations that describe the temporal evolution of streak lines as a function of two parameters: the swirl number, S, and the virtual axial origin, z̄_0. We applied two different procedures to minimize the distance between experimental and theoretical flow patterns: individual curve fitting at six different control planes in the streamwise direction, and global curve fitting over all the control planes simultaneously. Both sets of results have been compared with those provided by del Pino et al. (Phys Fluids 23(013):602, 2011b. doi: 10.1063/1.3537791), finding good agreement. Finally, we observed a weak influence of the Reynolds number on the values of S and z̄_0 at low-to-moderate Re_c. This experimental technique is proposed as a low-cost alternative for characterizing wingtip vortices based on flow visualizations.
On the design and optimisation of new fractal antenna using PSO
NASA Astrophysics Data System (ADS)
Rani, Shweta; Singh, A. P.
2013-10-01
An optimisation technique for a newly shaped fractal structure using particle swarm optimisation with curve fitting is presented in this article. The aim of the particle swarm optimisation is to find the geometry of the antenna for the required user-defined frequency. To assess the effectiveness of the presented method, a set of representative numerical simulations has been performed, and the results are compared with measurements from experimental prototypes built according to the design specifications produced by the optimisation procedure. The proposed fractal antenna resonates at the 5.8 GHz industrial, scientific and medical band, which is suitable for wireless telemedicine applications. The antenna characteristics have been studied using extensive numerical simulations and are experimentally verified. The antenna exhibits well-defined radiation patterns over the band.
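A minimal particle swarm optimizer of the kind used for such antenna tuning is sketched below; the map from geometry parameters to resonant frequency (resonance) is a made-up stand-in for the electromagnetic simulation, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)
target = 5.8                          # GHz, desired resonant frequency

def resonance(x):
    # Hypothetical surrogate for the geometry -> frequency simulation.
    return 3.0 + 2.0 * x[0] + 1.5 * x[1] ** 2

def objective(x):
    return (resonance(x) - target) ** 2

n, dim = 30, 2
pos = rng.uniform(0, 2, (n, dim))
vel = np.zeros((n, dim))
pbest, pval = pos.copy(), np.apply_along_axis(objective, 1, pos)
for _ in range(100):
    gbest = pbest[np.argmin(pval)]
    vel = (0.7 * vel + 1.5 * rng.random((n, dim)) * (pbest - pos)
                     + 1.5 * rng.random((n, dim)) * (gbest - pos))
    pos = pos + vel
    val = np.apply_along_axis(objective, 1, pos)
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
print("best geometry:", pbest[np.argmin(pval)])
```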
Dung, Van Than; Tjahjowidodo, Tegoeh
2017-01-01
B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, and computer numerical control. Recently, demands have emerged, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points in the sampled data. The most challenging task in these cases is identifying the number of knots and their respective locations in non-uniform space at the lowest possible computational cost. This paper presents a new strategy for fitting any form of curve with B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data are split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is then obtained by solving the ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and deterministic parametric functions. This paper also discusses benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fit any type of curve, ranging from smooth to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
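A simplified sketch of the two-step idea: a bisection pass places coarse knots where a local fit exceeds a tolerance, then a least-squares spline is fit on those knots. It uses scipy's LSQUnivariateSpline and skips the paper's optimization of knot locations and continuity levels.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

x = np.linspace(0, 1, 400)
y = np.sign(x - 0.5) * np.abs(x - 0.5) ** 0.7    # synthetic curve with a cusp

def coarse_knots(lo, hi, tol=1e-3):
    # Bisect until a local cubic fit meets the tolerance (step one).
    seg = np.polyfit(x[lo:hi], y[lo:hi], 3)
    if np.max(np.abs(np.polyval(seg, x[lo:hi]) - y[lo:hi])) < tol or hi - lo < 10:
        return []
    mid = (lo + hi) // 2
    return coarse_knots(lo, mid, tol) + [x[mid]] + coarse_knots(mid, hi, tol)

knots = coarse_knots(0, x.size)
spl = LSQUnivariateSpline(x, y, knots, k=3)      # ordinary least-squares fit
print(len(knots), "knots, max error:", np.max(np.abs(spl(x) - y)))
```

Knots cluster automatically around the cusp, which is the behavior the paper's non-uniform knot placement is after.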
NASA Astrophysics Data System (ADS)
Katz, Harley; Lelli, Federico; McGaugh, Stacy S.; Di Cintio, Arianna; Brook, Chris B.; Schombert, James M.
2017-04-01
Cosmological N-body simulations predict dark matter (DM) haloes with steep central cusps (e.g. NFW). This contradicts observations of gas kinematics in low-mass galaxies that imply the existence of shallow DM cores. Baryonic processes such as adiabatic contraction and gas outflows can, in principle, alter the initial DM density profile, yet their relative contributions to the halo transformation remain uncertain. Recent high-resolution, cosmological hydrodynamic simulations by Di Cintio et al. (DC14) predict that inner density profiles depend systematically on the ratio of stellar-to-DM mass (M*/Mhalo). Using a Markov Chain Monte Carlo approach, we test the NFW and the M*/Mhalo-dependent DC14 halo models against a sample of 147 galaxy rotation curves from the new Spitzer Photometry and Accurate Rotation Curves data set. These galaxies all have extended H I rotation curves from radio interferometry as well as accurate stellar-mass-density profiles from near-infrared photometry. The DC14 halo profile provides markedly better fits to the data compared to the NFW profile. Unlike NFW, the DC14 halo parameters found in our rotation-curve fits naturally fall within two standard deviations of the mass-concentration relation predicted by Λ cold dark matter (ΛCDM) and the stellar mass-halo mass relation inferred from abundance matching with few outliers. Halo profiles modified by baryonic processes are therefore more consistent with expectations from ΛCDM cosmology and provide better fits to galaxy rotation curves across a wide range of galaxy properties than do halo models that neglect baryonic physics. Our results offer a solution to the decade-long cusp-core discrepancy.
NASA Astrophysics Data System (ADS)
Mandel, Kaisey S.; Scolnic, Daniel M.; Shariff, Hikmatali; Foley, Ryan J.; Kirshner, Robert P.
2017-06-01
Conventional Type Ia supernova (SN Ia) cosmology analyses currently use a simplistic linear regression of magnitude versus color and light curve shape, which does not model intrinsic SN Ia variations and host galaxy dust as physically distinct effects, resulting in low color-magnitude slopes. We construct a probabilistic generative model for the dusty distribution of extinguished absolute magnitudes and apparent colors as the convolution of an intrinsic SN Ia color-magnitude distribution and a host galaxy dust reddening-extinction distribution. If the intrinsic color-magnitude (M_B versus B - V) slope β_int differs from the host galaxy dust law R_B, this convolution results in a specific curve of mean extinguished absolute magnitude versus apparent color. The derivative of this curve smoothly transitions from β_int in the blue tail to R_B in the red tail of the apparent color distribution. The conventional linear fit approximates this effective curve near the average apparent color, resulting in an apparent slope β_app between β_int and R_B. We incorporate these effects into a hierarchical Bayesian statistical model for SN Ia light curve measurements, and analyze a data set of SALT2 optical light curve fits of 248 nearby SNe Ia at z < 0.10. The conventional linear fit gives β_app ≈ 3. Our model finds β_int = 2.3 ± 0.3 and a distinct dust law of R_B = 3.8 ± 0.3, consistent with the average for Milky Way dust, while correcting a systematic distance bias of ~0.10 mag in the tails of the apparent color distribution. Finally, we extend our model to examine the SN Ia luminosity-host mass dependence in terms of intrinsic and dust components.
Infectivity of RNA from inactivated poliovirus.
Nuanualsuwan, Suphachai; Cliver, Dean O
2003-03-01
During inactivation of poliovirus type 1 (PV-1) by exposure to UV, hypochlorite, and heat (72 degrees C), the infectivity of the virus was compared with that of its RNA. DEAE-dextran (1-mg/ml concentration in Dulbecco's modified Eagle medium buffered with 0.05 M Tris, pH 7.4) was used to facilitate transfecting PV-1 RNA into FRhK-4 host cells. After interaction of PV-1 RNA with cell monolayer at room temperature (21 to 22 degrees C) for 20 min, the monolayers were washed with 5 ml of Hanks balanced salt solution. The remainder of the procedure was the same as that for the conventional plaque technique, which was also used for quantifying the PV-1 whole-particle infectivity. Plaque formation by extracted RNA was approximately 100,000-fold less efficient than that by whole virions. The slopes of best-fit regression lines of inactivation curves for virion infectivity and RNA infectivity were compared to determine the target of inactivation. For UV and hypochlorite inactivation the slopes of inactivation curves of virion infectivity and RNA infectivity were not statistically different. However, the difference of slopes of inactivation curves of virion infectivity and RNA infectivity was statistically significant for thermal inactivation. The results of these experiments indicate that viral RNA is a primary target of UV and hypochlorite inactivations but that the sole target of thermal inactivation is the viral capsid.
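Comparing the slopes of two best-fit inactivation lines can be done with an interaction term in a single regression, as sketched below on synthetic titers; this is a generic approach, not necessarily the exact statistical test used in the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
t = np.tile(np.arange(0, 10.0), 2)               # exposure time
group = np.repeat([0.0, 1.0], 10)                # 0 = virion, 1 = RNA
log_titer = -0.8 * t - 0.04 * t * group + rng.normal(0, 0.1, 20)

X = sm.add_constant(np.column_stack([t, group, t * group]))
fit = sm.OLS(log_titer, X).fit()
print("slope-difference p-value:", fit.pvalues[3])   # interaction term
```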
NASA Astrophysics Data System (ADS)
Syrejshchikova, T. I.; Gryzunov, Yu. A.; Smolina, N. V.; Komar, A. A.; Uzbekov, M. G.; Misionzhnik, E. J.; Maksimova, N. M.
2010-05-01
The efficiency of therapy for psychiatric diseases is estimated using fluorescence measurements of the conformational changes of human serum albumin in the course of medical treatment. The fluorescence decay curves of the CAPIDAN probe (N-carboxyphenylimide of dimethylaminonaphthalic acid) in blood serum are measured. The probe binds specifically to the albumin drug binding sites and exhibits fluorescence as a reporter ligand. A variation in the conformation of the albumin molecule substantially affects the CAPIDAN fluorescence decay curve on the subnanosecond time scale. A subnanosecond pulsed laser or a PicoQuant LED excitation source and a fast photon detector with a time resolution of about 50 ps are used for the kinetic measurements. The blood sera of ten patients suffering from depression and treated at the Institute of Psychiatry were clinically tested beforehand. Blood for analysis was taken from each patient prior to treatment and during the third week of treatment. For the ten patients, analysis of the fluorescence decay curves of the probe in blood serum using three-exponential fitting shows that the difference between the amplitudes of the decay-function component corresponding to the long-lived (9 ns) fluorescence of the probe before and after the therapeutic procedure differs reliably from zero at the 1% significance level (p < 0.01).
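A sketch of the three-exponential decay fitting step, ignoring the instrument response function; the decay trace, lifetimes, and amplitudes are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, t1, a2, t2, a3, t3):
    return a1 * np.exp(-t / t1) + a2 * np.exp(-t / t2) + a3 * np.exp(-t / t3)

t = np.linspace(0.05, 40, 500)                               # ns
y = tri_exp(t, 0.5, 0.4, 0.3, 3.0, 0.2, 9.0)                 # synthetic decay
y += np.random.default_rng(4).normal(scale=2e-3, size=t.size)

p, _ = curve_fit(tri_exp, t, y, p0=(0.4, 0.5, 0.3, 2.5, 0.25, 8.0), maxfev=20000)
print("lifetimes (ns):", p[1::2])
print("amplitudes:   ", p[0::2])
```

The quantity tracked in the study corresponds to the amplitude of the longest-lived (~9 ns) component.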
Peripheral absolute threshold spectral sensitivity in retinitis pigmentosa.
Massof, R W; Johnson, M A; Finkelstein, D
1981-01-01
Dark-adapted spectral sensitivities were measured in the peripheral retinas of 38 patients diagnosed as having typical retinitis pigmentosa (RP) and in 3 normal volunteers. The patients included those having autosomal dominant and autosomal recessive inheritance patterns. Results were analysed by comparisons with the CIE standard scotopic spectral visibility function and with Judd's modification of the photopic spectral visibility function, with consideration of contributions from changes in spectral transmission of preretinal media. The data show 3 general patterns. One group of patients had absolute threshold spectral sensitivities that were fit by Judd's photopic visibility curve. Absolute threshold spectral sensitivities for a second group of patients were fit by a normal scotopic spectral visibility curve. The third group of patients had absolute threshold spectral sensitivities that were fit by a combination of scotopic and photopic spectral visibility curves. The autosomal dominant and autosomal recessive modes of inheritance were represented in each group of patients. These data indicate that RP patients have normal rod and/or cone spectral sensitivities, and support the subclassification of patients described previously by Massof and Finkelstein. PMID:7459312
10 CFR 26.27 - Written policy and procedures.
Code of Federal Regulations, 2011 CFR
2011-01-01
... NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Program Elements § 26.27 Written policy and... respond to an emergency, the procedure must— (A) Require a determination of fitness by breath alcohol... require him or her to be subject to this subpart, if the results of the determination of fitness indicate...
10 CFR 26.27 - Written policy and procedures.
Code of Federal Regulations, 2010 CFR
2010-01-01
... NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Program Elements § 26.27 Written policy and... respond to an emergency, the procedure must— (A) Require a determination of fitness by breath alcohol... require him or her to be subject to this subpart, if the results of the determination of fitness indicate...
Aleatory Uncertainty and Scale Effects in Computational Damage Models for Failure and Fragmentation
2014-09-01
[Fragment from the indexed report; ellipses mark gaps in the source record.] Compared with larger specimens, small specimens have, on average, higher strengths; equivalently, curves for small specimens fall below those of larger ones ... the material strength associated with each realization parameter R in Equation (7), and strength distribution curves associated with multiple ... size effects in brittle media [58], which applies micromorphological dimensional analysis to obtain a universal curve that closely fits rate-dependent ...
1985-05-01
[Fragment from the indexed report; the source record interleaves two columns and ellipses mark gaps.] ... distribution was assumed to be the beam response to the microwave ... evaluation of phase shift through best fit of theoretical curves and experimental ... vibration sidebands, acceleration (as shown in the lower calculated curve), high-temperature exposure, and thermal vacuum; two of the curves show actual phase ... conclude that the method of measuring phase noise with spectrum estimation is workable and has no limitation in principle. From the curve it has been ...
McCoul, Edward D; Singh, Ameet; Anand, Vijay K; Tabaee, Abtin
2012-04-01
The surgical management options for eustachian tube dysfunction have historically been limited. The goal of the current study was to evaluate the technical considerations, learning curve, and potential barriers for balloon dilation of the eustachian tube (BDET) as an alternative treatment modality. Prospective preclinical trial of BDET in a cadaver model. A novel balloon catheter device was used for eustachian tube dilation. Twenty-four BDET procedures were performed by three independent rhinologists with no prior experience with the procedure (eight procedures per surgeon). The duration and number of attempts of the individual steps and overall procedure were recorded. Endoscopic examination of the eustachian tube was performed after each procedure, and the surgeon was asked to rate the subjective difficulty on a five-point scale. Successful completion of the procedure occurred in each case. The overall mean duration of the procedure was 284 seconds, and a mean number of 1.15 attempts were necessary to perform the individual steps. The mean subjective procedure difficulty was noted as somewhat easy. Statistically shorter duration and subjectively easier procedure were noted in the second compared to the first half of the series, indicating a favorable learning curve. Linear fissuring within the eustachian tube lumen without submucosal disruption (nine procedures, 37%) and with submucosal disruption (five procedures, 21%) were noted. The significance of these physical findings is unclear. Preclinical testing of BDET is associated with favorable duration, learning curve, and overall ease of completion. Clinical trials are necessary to evaluate safety and efficacy. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.
Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L
2010-08-05
Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variation of model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1 σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes. It increases with increasing depths above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
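A sketch of fitting a single-target single-hit response to calibration points. The functional form below (background plus saturating exponential) is one common parameterization consistent with the abstract's background/saturation/slope description, and the dose-density pairs are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

dose = np.array([16, 32, 48, 64, 96, 128.0])              # cGy, hypothetical
od = np.array([0.25, 0.43, 0.58, 0.71, 0.92, 1.08])       # optical density

def sts_hit(D, od0, od_sat, k):
    # Background + saturating exponential (single-target single-hit form).
    return od0 + od_sat * (1.0 - np.exp(-k * D))

(od0, od_sat, k), _ = curve_fit(sts_hit, dose, od, p0=(0.1, 1.5, 0.01))
print(f"background {od0:.2f}, saturation {od_sat:.2f}, slope {k:.4f} /cGy")
```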
The effect of dimethylsulfoxide on the water transport response of rat hepatocytes during freezing.
Smith, D J; Schulte, M; Bischof, J C
1998-10-01
Successful improvement of cryopreservation protocols for cells in suspension requires knowledge of how such cells respond to the biophysical stresses of freezing (intracellular ice formation, water transport) while in the presence of a cryoprotective agent (CPA). This work investigates the biophysical water transport response in a clinically important cell type--isolated hepatocytes--during freezing in the presence of dimethylsulfoxide (DMSO). Sprague-Dawley rat liver hepatocytes were frozen in Williams E media supplemented with 0, 1, and 2 M DMSO, at rates of 5, 10, and 50 degrees C/min. The water transport was measured by cell volumetric changes as assessed by cryomicroscopy and image analysis. Assuming that water is the only species transported under these conditions, a water transport model of the form dV/dT = f(Lpg([CPA]), ELp([CPA]), T(t)) was curve-fit to the experimental data to obtain the biophysical parameters of water transport--the reference hydraulic permeability (Lpg) and activation energy of water transport (ELp)--for each DMSO concentration. These parameters were estimated two ways: (1) by curve-fitting the model to the average volume of the pooled cell data, and (2) by curve-fitting individual cell volume data and averaging the resulting parameters. The experimental data showed that less dehydration occurs during freezing at a given rate in the presence of DMSO at temperatures between 0 and -10 degrees C. However, dehydration was able to continue at lower temperatures (< -10 degrees C) in the presence of DMSO. The values of Lpg and ELp obtained using the individual cell volume data both decreased from their non-CPA values--4.33 x 10(-13) m3/N-s (2.69 microns/min-atm) and 317 kJ/mol (75.9 kcal/mol), respectively--to 0.873 x 10(-13) m3/N-s (0.542 micron/min-atm) and 137 kJ/mol (32.8 kcal/mol), respectively, in 1 M DMSO and 0.715 x 10(-13) m3/N-s (0.444 micron/min-atm) and 107 kJ/mol (25.7 kcal/mol), respectively, in 2 M DMSO. The trends in the pooled volume values for Lpg and ELp were very similar, but the overall fit was considered worse than for the individual volume parameters. A unique way of presenting the curve-fitting results supports a clear trend of reduction of both biophysical parameters in the presence of DMSO, and no clear trend in cooling rate dependence of the biophysical parameters. In addition, these results suggest that close proximity of the experimental cell volume data to the equilibrium volume curve may significantly reduce the efficiency of the curve-fitting process.
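Fitting (Lpg, ELp) requires integrating the transport ODE inside a least-squares loop, as sketched below. The osmotic driving force used here is a schematic stand-in for the full Mazur water-transport formulation, and the cooling rate, reference temperature and data are illustrative rather than taken from the study.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

Rgas, Tref = 8.314, 273.15        # J/mol-K, reference temperature (K)
B, Vb = 5.0 / 60.0, 0.4           # cooling rate (K/s), inactive volume fraction

def model(T_grid, Lpg, ELp):
    def dVdT(T, V):
        Lp = Lpg * np.exp(-(ELp / Rgas) * (1.0 / T - 1.0 / Tref))  # Arrhenius
        return (Lp / B) * (V[0] - Vb)     # schematic osmotic driving force
    sol = solve_ivp(dVdT, (T_grid[0], T_grid[-1]), [1.0], t_eval=T_grid)
    return sol.y[0]

T_data = np.linspace(273.15, 253.15, 15)              # cooling from 0 to -20 C
V_data = model(T_data, 2e-3, 3e4) + np.random.default_rng(5).normal(0, 0.01, 15)

res = least_squares(lambda p: model(T_data, *p) - V_data,
                    x0=(1e-3, 2e4), x_scale=(1e-3, 1e4))
print("fitted Lpg, ELp:", res.x)
```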
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
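A minimal maximum-likelihood fit of a cumulative-Gaussian psychometric function to binary responses from a simulated run, without the bias-reduction term the paper introduces; data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(6)
stim = rng.uniform(-4, 4, 300)                         # stimulus levels visited
resp = rng.random(300) < norm.cdf((stim - 0.5) / 1.5)  # true mu=0.5, sigma=1.5

def nll(p):
    mu, log_sigma = p
    q = np.clip(norm.cdf((stim - mu) / np.exp(log_sigma)), 1e-9, 1 - 1e-9)
    return -np.sum(resp * np.log(q) + (~resp) * np.log(1 - q))

fit = minimize(nll, x0=(0.0, 0.0), method="Nelder-Mead")
print("mu, sigma:", fit.x[0], np.exp(fit.x[1]))
```

Replacing Nelder-Mead with a generalized linear model fit, as the paper discusses, changes the optimizer but not the likelihood being maximized.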
Three-dimension reconstruction based on spatial light modulator
NASA Astrophysics Data System (ADS)
Deng, Xuejiao; Zhang, Nanyang; Zeng, Yanan; Yin, Shiliang; Wang, Weiyu
2011-02-01
Three-dimensional reconstruction, an important research direction in computer graphics, is widely used in related fields such as industrial design and manufacturing, construction, aerospace, and biology. With this technology, a three-dimensional digital point cloud can be obtained from two-dimensional images and used to simulate the three-dimensional structure of a physical object for further study. At present, three-dimensional point cloud data are mainly obtained with adaptive optics systems using a Shack-Hartmann sensor and phase-shifting digital holography. For surface fitting, many methods are available, such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems encountered in three-dimensional reconstruction are the extraction of feature points and the curve-fitting arithmetic. To solve these problems, we first calculate the surface normal vector of each pixel in the light source coordinate system; these vectors are then converted to image coordinates through a coordinate transformation, yielding the expected 3D point cloud. Secondly, after de-noising and repair, feature points are selected and fitted with Zernike polynomials to obtain a fitting function for the surface topography, thereby reconstructing the object's three-dimensional topography. In this paper, a new three-dimensional reconstruction algorithm is proposed, with which the topography can be estimated from grayscale values at different sample points. Simulations and experimental results show that the new algorithm has a strong fitting capability, especially for large-scale objects.
Vaas, Lea A I; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter
2012-01-01
The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed '-omics' techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. Our results form the basis for a freely available R package for the analysis of PM data.
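In the spirit of the described spline approach (the paper's implementation is in R), here is a Python sketch of fitting a smoothing spline to a synthetic respiration curve and reading off growth-curve-like parameters:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(7)
t = np.linspace(0, 48, 97)                             # hours
y = 250 / (1 + np.exp(-(t - 16) / 3.0)) + rng.normal(0, 5, t.size)

spl = UnivariateSpline(t, y, s=t.size * 25.0)          # smoothing spline
dense = np.linspace(0, 48, 2000)
print("A (max height):", spl(dense).max())
print("mu (max slope):", spl.derivative()(dense).max())
```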
Comparison between two scalar field models using rotation curves of spiral galaxies
NASA Astrophysics Data System (ADS)
Fernández-Hernández, Lizbeth M.; Rodríguez-Meza, Mario A.; Matos, Tonatiuh
2018-04-01
Scalar fields have been used as candidates for dark matter in the universe, from axions with masses ∼10^-5 eV down to ultra-light scalar fields. Axions behave as cold dark matter, while for ultra-light scalar fields galaxies are Bose-Einstein condensate drops; the ultra-light case is also called the scalar field dark matter model. In this work we study rotation curves of low surface brightness spiral galaxies using two scalar field models: the Gross-Pitaevskii Bose-Einstein condensate in the Thomas-Fermi approximation and a scalar field solution of the Klein-Gordon equation. We also use the zero-disk approximation galaxy model, in which photometric data are not considered and only the scalar field dark matter contribution to the rotation curve is taken into account. From the best-fitting analysis of the galaxy catalog we use, we found the range of values of the fitting parameters: the length scale and the central density. The worst fitting results (values of χ²_red much greater than 1, on average) were for the Thomas-Fermi model, i.e., the scalar field dark matter model fits the rotation curves of the analysed galaxies better than the Thomas-Fermi approximation. To complete our analysis we compute from the fitting parameters the mass of the scalar field models and two astrophysical quantities of interest: the dynamical dark matter mass within 300 pc and the characteristic central surface density of the dark matter models. We found that the central mass within 300 pc is ≈10^7 M_⊙, independent of the dark matter model, in agreement with previously reported results; on the contrary, the value of the characteristic central surface density does depend on the dark matter model.
Protofit: A program for determining surface protonation constants from titration data
NASA Astrophysics Data System (ADS)
Turner, Benjamin F.; Fein, Jeremy B.
2006-11-01
Determining the surface protonation behavior of natural adsorbents is essential to understand how they interact with their environments. ProtoFit is a tool for analysis of acid-base titration data and optimization of surface protonation models. The program offers a number of useful features including: (1) enables visualization of adsorbent buffering behavior; (2) uses an optimization approach independent of starting titration conditions or initial surface charge; (3) does not require an initial surface charge to be defined or to be treated as an optimizable parameter; (4) includes an error analysis intrinsically as part of the computational methods; and (5) generates simulated titration curves for comparison with observation. ProtoFit will typically be run through ProtoFit-GUI, a graphical user interface providing user-friendly control of model optimization, simulation, and data visualization. ProtoFit calculates an adsorbent proton buffering value as a function of pH from raw titration data (including pH and volume of acid or base added). The data is reduced to a form where the protons required to change the pH of the solution are subtracted out, leaving protons exchanged between solution and surface per unit mass of adsorbent as a function of pH. The buffering intensity function Qads* is calculated as the instantaneous slope of this reduced titration curve. Parameters for a surface complexation model are obtained by minimizing the sum of squares between the modeled (i.e. simulated) buffering intensity curve and the experimental data. The variance in the slope estimate, intrinsically produced as part of the Qads* calculation, can be used to weight the sum of squares calculation between the measured buffering intensity and a simulated curve. Effects of analytical error on data visualization and model optimization are discussed. Examples are provided of using ProtoFit for data visualization, model optimization, and model evaluation.
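The buffering intensity Qads* as the instantaneous slope of the reduced titration curve can be approximated numerically, as in this sketch on hypothetical titration data:

```python
import numpy as np

pH = np.linspace(3, 10, 40)
q_ads = -0.8 * np.tanh((pH - 6.5) / 1.2)   # protons on surface, mmol/g (toy)

Q_star = -np.gradient(q_ads, pH)           # buffering intensity vs. pH
print("peak buffering near pH", pH[np.argmax(Q_star)])
```

The local variance of this slope estimate is what ProtoFit uses to weight the sum of squares against a simulated curve.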
Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI
NASA Astrophysics Data System (ADS)
Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.
2017-12-01
Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least square (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments 'plasma' and 'interstitial volume' and their exchange in terms of plasma flow and capillary permeability. The model function can be defined by either a system of two coupled differential equations or a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness and computation speed, depending on parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured and a theoretical population based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated by either numeric integration with the Runge-Kutta method or convolution. Furthermore two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed in order to investigate the validity of our findings in real patient data. The convolution approach yields improved results in precision and robustness of determined parameters. Precision and stability are limited in curves with low blood flow. The model parameter v_e shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach in several curve types. The convolution excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated by usage of the differential equations. Fitting with the convolution approach is superior in computational time, with better stability and accuracy at the same time.
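A sketch of why the convolution representation is fast: each model evaluation is a single discrete convolution of the arterial input function with an impulse-response function. The biexponential IRF below is illustrative, not the exact 2CXM solution.

```python
import numpy as np

dt = 1.0                                             # s, sampling interval
t = np.arange(0, 300, dt)
aif = (t / 10.0) * np.exp(-t / 12.0)                 # toy arterial input
irf = 0.3 * np.exp(-t / 25.0) + 0.05 * np.exp(-t / 180.0)  # illustrative IRF

tissue = dt * np.convolve(aif, irf)[: t.size]        # one convolution per eval
print("peak tissue concentration:", tissue.max())
```

Inside an NLLS loop this replaces a full Runge-Kutta integration per parameter trial, which is where the reported speedup comes from.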
Maximum safe speed estimation using planar quintic Bezier curve with C2 continuity
NASA Astrophysics Data System (ADS)
Ibrahim, Mohamad Fakharuddin; Misro, Md Yushalify; Ramli, Ahmad; Ali, Jamaludin Md
2017-08-01
This paper describes an alternative way of estimating the design speed, i.e., the maximum speed at which a vehicle can safely travel on a road, using curvature information from Bezier curve fitting on a map. We tested the method on a route along Tun Sardon Road, Balik Pulau, Penang, Malaysia. We propose using piecewise planar quintic Bezier curves, satisfying curvature continuity between joined curves, to map the road. From the derivatives of the quintic Bezier curve, the curvature was calculated and the design speed derived. A higher-order Bezier curve is used here because a higher-degree curve gives users more freedom to control its shape than a lower-degree one.
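A sketch of the speed-from-curvature step: evaluate the first and second derivatives of a quintic Bezier curve, form the planar curvature κ, and bound the speed by v = sqrt(μg/κ). The control points and side-friction factor μ are invented for illustration.

```python
import math
import numpy as np

# Hypothetical control points of a planar quintic Bezier road segment (m).
P = np.array([[0, 0], [25, 0], [50, 10], [70, 30], [80, 55], [85, 80.0]])

def bezier_point(Q, u):
    # de Casteljau-free evaluation via the Bernstein basis.
    n = len(Q) - 1
    return sum(math.comb(n, i) * (1 - u) ** (n - i) * u**i * Q[i]
               for i in range(n + 1))

d1 = 5 * np.diff(P, axis=0)          # control points of B'(u)
d2 = 4 * np.diff(d1, axis=0)         # control points of B''(u)

mu, g = 0.15, 9.81                   # assumed side-friction factor, gravity
for u in np.linspace(0.05, 0.95, 5):
    b1, b2 = bezier_point(d1, u), bezier_point(d2, u)
    kappa = abs(b1[0] * b2[1] - b1[1] * b2[0]) / np.linalg.norm(b1) ** 3
    print(f"u = {u:.2f}: v_max = {np.sqrt(mu * g / kappa) * 3.6:.0f} km/h")
```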
Craniofacial Reconstruction Using Rational Cubic Ball Curves
Majeed, Abdul; Mt Piah, Abd Rahni; Gobithaasan, R. U.; Yahya, Zainor Ridzuan
2015-01-01
This paper proposes the reconstruction of craniofacial fractures using rational cubic Ball curves. Ball curves are chosen for their computational efficiency compared with Bezier curves. The main steps are conversion of Digital Imaging and Communications in Medicine (Dicom) images to binary images, boundary extraction and corner point detection, Ball curve fitting with a genetic algorithm, and conversion of the final solution back to Dicom format. The last section illustrates a real case of craniofacial reconstruction using the proposed method, which clearly indicates its applicability. A Graphical User Interface (GUI) has also been developed for practical application. PMID:25880632
NASA Astrophysics Data System (ADS)
Nelson, Daniel A.; Jacobs, Gustaaf B.; Kopriva, David A.
2016-08-01
The effect of curved-boundary representation on the physics of the separated flow over a NACA 65(1)-412 airfoil is thoroughly investigated. A method is presented to approximate curved boundaries with a high-order discontinuous-Galerkin spectral element method for the solution of the Navier-Stokes equations. Multiblock quadrilateral element meshes are constructed with the grid generation software GridPro. The boundary of a NACA 65(1)-412 airfoil, defined by a cubic natural spline, is piecewise-approximated by isoparametric polynomial interpolants that represent the edges of boundary-fitted elements. Direct numerical simulation of the airfoil is performed on a coarse mesh and fine mesh with polynomial orders ranging from four to twelve. The accuracy of the curve fitting is investigated by comparing the flows computed on curved-sided meshes with those given by straight-sided meshes. Straight-sided meshes yield irregular wakes, whereas curved-sided meshes produce a regular Karman street wake. Straight-sided meshes also produce lower lift and higher viscous drag as compared with curved-sided meshes. When the mesh is refined by reducing the sizes of the elements, the lift decrease and viscous drag increase are less pronounced. The differences in the aerodynamic performance between the straight-sided meshes and the curved-sided meshes are concluded to be the result of artificial surface roughness introduced by the piecewise-linear boundary approximation provided by the straight-sided meshes.
Early-Time Observations of the GRB 050319 Optical Transient
NASA Astrophysics Data System (ADS)
Quimby, R. M.; Rykoff, E. S.; Yost, S. A.; Aharonian, F.; Akerlof, C. W.; Alatalo, K.; Ashley, M. C. B.; Göğüş, E.; Güver, T.; Horns, D.; Kehoe, R. L.; Kızıloğlu, Ü.; Mckay, T. A.; Özel, M.; Phillips, A.; Schaefer, B. E.; Smith, D. A.; Swan, H. F.; Vestrand, W. T.; Wheeler, J. C.; Wren, J.
2006-03-01
We present the unfiltered ROTSE-III light curve of the optical transient associated with GRB 050319 beginning 4 s after the cessation of γ-ray activity. We fit a power-law function to the data using the revised trigger time given by Chincarini and coworkers, and a smoothly broken power law to the data using the original trigger disseminated through the GCN notices. Including the RAPTOR data from Woźniak and coworkers, the best-fit power-law indices are α = -0.854 ± 0.014 for the single power law and α1 = -0.364 (+0.020/-0.019), α2 = -0.881 (+0.030/-0.031), with a break at tb = 418 (+31/-30) s for the smoothly broken fit. We discuss the fit results, with emphasis placed on the importance of knowing the true start time of the optical transient for this multipeaked burst. As Swift continues to provide prompt GRB locations, it becomes more important to answer the question "when does the afterglow begin?" in order to correctly interpret the light curves.
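A hedged sketch of fitting a smoothly broken power law (the standard Beuermann form, with a fixed smoothness parameter) to an afterglow light curve is given below; times, fluxes, and starting values are synthetic, not the ROTSE-III/RAPTOR data.

```python
# Smoothly broken power-law fit sketch; F ~ t**a1 early, F ~ t**a2 late.
import numpy as np
from scipy.optimize import curve_fit

def broken_pl(t, A, a1, a2, tb, s=3.0):
    # smoothness s held fixed; only A, a1, a2, tb are fitted (p0 has length 4)
    return A * ((t/tb)**(-a1*s) + (t/tb)**(-a2*s))**(-1.0/s)

t = np.geomspace(30, 1e4, 60)                                  # seconds
f = broken_pl(t, 10.0, -0.36, -0.88, 420) * np.random.lognormal(0, 0.03, t.size)

popt, _ = curve_fit(broken_pl, t, f, p0=[5, -0.3, -1.0, 300])
print("A, alpha1, alpha2, t_break:", popt)
```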
Investigation of the Failure Modes in a Metal Matrix Composite under Thermal Cycling
1989-12-01
[Report front matter; partially recoverable contents and figure list. Sections include: Material Characteristics; Sectioning and SEM Photography; Residual Stress Analysis using .TCAN (name garbled). Figures include: Specimen Fitted with Strain Gages; Modulus and Poisson's Ratio versus Thermal Cycles; Stress/Strain Curve for Uncycled Specimen; Stress/Strain Curve for Specimen 8 (5250 Cycles); Comparison of Uncycled to Cycled Stress/Strain Curves.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeler, C; Bronk, L; UT Graduate School of Biomedical Sciences at Houston, Houston, TX
2015-06-15
Purpose: High throughput in vitro experiments assessing cell survival following proton radiation indicate that both the alpha and the beta parameters of the linear quadratic model increase with increasing proton linear energy transfer (LET). We investigated the relative biological effectiveness (RBE) of double-strand break (DSB) induction as a means of explaining the experimental results. Methods: Experiments were performed with two lung cancer cell lines and a range of proton LET values (0.94-19.4 keV/µm) using an experimental apparatus designed to irradiate cells in a 96-well plate such that each column encounters protons of different dose-averaged LET (LETd). Traditional linear quadratic survival curve fitting was performed, and alpha, beta, and RBE values obtained. Survival curves were also fit with a model incorporating RBE of DSB induction as the sole fit parameter. Fitted values of the RBE of DSB induction were then compared to values obtained using Monte Carlo Damage Simulation (MCDS) software and energy spectra calculated with Geant4. Other parameters including alpha, beta, and number of DSBs were compared to those obtained from traditional fitting. Results: Survival curve fitting with RBE of DSB induction yielded alpha and beta parameters that increase with proton LETd, which follows from the standard method of fitting; however, relying on a single fit parameter provided more consistent trends. The fitted values of RBE of DSB induction increased beyond what is predicted from MCDS data above proton LETd of approximately 10 keV/µm. Conclusion: In order to accurately model in vitro proton irradiation experiments performed with high throughput methods, the RBE of DSB induction must increase more rapidly than predicted by MCDS above LETd of 10 keV/µm. This can be explained by considering the increased complexity of DSBs or the nature of intra-track pairwise DSB interactions in this range of LETd values. NIH Grant 2U19CA021239-35.
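The "traditional" linear-quadratic fitting step referred to above can be illustrated in a few lines; the doses and surviving fractions below are synthetic stand-ins, not the study's measurements.

```python
# Minimal sketch of linear-quadratic survival-curve fitting:
# S(D) = exp(-(alpha*D + beta*D**2)).
import numpy as np
from scipy.optimize import curve_fit

def lq(D, alpha, beta):
    return np.exp(-(alpha * D + beta * D**2))

dose = np.array([0, 1, 2, 4, 6, 8], float)                    # Gy
surv = np.array([1.0, 0.75, 0.52, 0.21, 0.06, 0.015])         # surviving fraction

(alpha, beta), _ = curve_fit(lq, dose, surv, p0=[0.2, 0.02])
print(f"alpha={alpha:.3f} /Gy  beta={beta:.4f} /Gy^2  alpha/beta={alpha/beta:.1f} Gy")
```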
Rotation curve for the Milky Way galaxy in conformal gravity
NASA Astrophysics Data System (ADS)
O'Brien, James G.; Moss, Robert J.
2015-05-01
Galactic rotation curves have proven to be the testing ground for dark matter bounds in galaxies, and our own Milky Way is one of many large spiral galaxies that must follow the same models. Over the last decade, the rotation of the Milky Way galaxy has been studied and extended by many authors. Since conformal gravity has now successfully fit the rotation curves of almost 140 galaxies, we present here the fit to our own Milky Way. However, the Milky Way is not just an ordinary galaxy to append to our list; instead, it provides a robust test of a fundamental difference between conformal gravity rotation curves and standard cold dark matter models. It was shown by Mannheim and O'Brien that in conformal gravity the presence of a quadratic potential causes the rotation curve to eventually fall off after its flat portion. This effect can currently be seen in only a select few galaxies whose rotation curves have been studied well beyond a few multiples of the optical galactic scale length. Due to the recent work of Sofue et al. and Kundu et al., the rotation curve of the Milky Way has now been studied to a degree where we can test the predicted fall-off in the conformal gravity rotation curve. We find that, like the other galaxies already studied in conformal gravity, we obtain excellent agreement with the rotational data, and the prediction includes the eventual fall-off at large distances from the galactic center.
NASA Astrophysics Data System (ADS)
Martí-Vidal, I.; Marcaide, J. M.; Alberdi, A.; Guirado, J. C.; Pérez-Torres, M. A.; Ros, E.
2011-02-01
We report on a simultaneous modelling of the expansion and radio light curves of the supernova SN 1993J. We developed a simulation code capable of generating synthetic expansion and radio light curves of supernovae by taking into consideration the evolution of the expanding shock, magnetic fields, and relativistic electrons, as well as the finite sensitivity of the interferometric arrays used in the observations. Our software successfully fits all the available radio data of SN 1993J with a standard emission model for supernovae, which is extended with some physical considerations, such as an evolution in the opacity of the ejecta material, a radial decline in the magnetic fields within the radiating region, and a changing radial density profile for the circumstellar medium starting from day 3100 after the explosion.
PHOTOMETRIC SUPERNOVA CLASSIFICATION WITH MACHINE LEARNING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lochner, Michelle; Peiris, Hiranya V.; Lahav, Ofer
Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
Radial dependence of the dark matter distribution in M33
NASA Astrophysics Data System (ADS)
López Fune, E.; Salucci, P.; Corbelli, E.
2017-06-01
The stellar and gaseous mass distributions, as well as the extended rotation curve, in the nearby galaxy M33 are used to derive the radial distribution of dark matter density in the halo and to test cosmological models of galaxy formation and evolution. Two methods are examined to constrain the dark mass density profiles. The first method deals directly with fitting the rotation curve data in the range of galactocentric distances 0.24 ≤ r ≤ 22.72 kpc. Using the results of collisionless Λ cold dark matter numerical simulations, we confirm that the Navarro-Frenk-White (NFW) dark matter profile provides a better fit to the rotation curve data than the cored Burkert (BRK) profile. The second method relies on the local equation of centrifugal equilibrium and on the rotation curve slope. In the aforementioned range of distances, we fit the observed velocity profile, using a function that has a rational dependence on the radius, and we derive the slope of the rotation curve. Then, we infer the effective matter densities. In the radial range 9.53 ≤ r ≤ 22.72 kpc, the uncertainties induced by the luminous matter (stars and gas) become negligible, because the dark matter density dominates, and we can determine locally the radial distribution of dark matter. With this second method, we tested the NFW and BRK dark matter profiles and we can confirm that both profiles are compatible with the data, even though in this case the cored BRK density profile provides a more reasonable value for the baryonic-to-dark matter ratio.
Translucency thresholds for dental materials.
Salas, Marianne; Lucena, Cristina; Herrera, Luis Javier; Yebra, Ana; Della Bona, Alvaro; Pérez, María M
2018-05-12
To determine the translucency acceptability and perceptibility thresholds for dental resin composites using the CIEDE2000 and CIELAB color difference formulas. A 30-observer panel performed perceptibility and acceptability judgments on 50 pairs of resin composite discs (diameter: 10 mm; thickness: 1 mm). Disc pair differences for the Translucency Parameter (ΔTP) were calculated using both color difference formulas (ΔTP00 ranged from 0.11 to 7.98, and ΔTPab ranged from 0.01 to 12.79). A Takagi-Sugeno-Kang (TSK) fuzzy approximation was used as the fitting procedure. From the resultant fitting curves, the 95% confidence intervals were estimated and the 50:50% translucency perceptibility and acceptability thresholds (TPT and TAT) were calculated. Differences between thresholds were statistically analyzed using Student's t tests (α=0.05). The CIEDE2000 50:50% TPT was 0.62 and TAT was 2.62. Corresponding CIELAB values were 1.33 and 4.43, respectively. Translucency perceptibility and acceptability thresholds were significantly different using both color difference formulas (p=0.01 for TPT and p=0.005 for TAT). The CIEDE2000 color difference formula provided a better data fit than the CIELAB formula. The visual translucency difference thresholds determined with the CIEDE2000 color difference formula can serve as reference values in the selection of resin composites and the evaluation of their clinical performance.
How Monte Carlo heuristics aid to identify the physical processes of drug release kinetics.
Lecca, Paola
2018-01-01
We implement a Monte Carlo heuristic algorithm to model drug release from a solid dosage form. We show that with Monte Carlo simulations it is possible to identify and explain the causes of the unsatisfactory predictive power of current drug release models. It is well known that the power-law and exponential models, as well as those derived from or inspired by them, accurately reproduce only the first 60% of the release curve of a drug from a dosage form. In this study, using Monte Carlo simulation approaches, we show that these models fit almost the entire release profile quite accurately when the release kinetics is not governed by the coexistence of different physico-chemical mechanisms. We show that the accuracy of the traditional models is comparable with that of Monte Carlo heuristics when these heuristics approximate and oversimplify the phenomenology of drug release. This observation suggests developing and using novel Monte Carlo simulation heuristics able to describe the complexity of the release kinetics, and consequently to generate data more similar to those observed in real experiments. Implementing Monte Carlo simulation heuristics of the drug release phenomenology may be much more straightforward and efficient than hypothesizing and implementing complex mathematical models of the physical processes involved in drug release from scratch. Identifying and understanding through simulation heuristics which processes of this phenomenology reproduce the observed data, and then formalizing them mathematically, may allow avoiding time-consuming, trial-and-error regression procedures. Three bullet points highlight the customization of the procedure:
•An efficient heuristic based on Monte Carlo methods for simulating drug release from a solid dosage form is presented. It specifies the model of the physical process in a simple but accurate way through the formula for the Monte Carlo Micro Step (MCS) time interval.
•Given the experimentally observed drug release curve, we point out how Monte Carlo heuristics can be integrated in an evolutionary algorithmic approach to infer the MCS model best fitting the observed data, and thus the observed release kinetics.
•The software implementing the method is written in R, one of the free languages most widely used in the bioinformatics community.
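A toy version of such a heuristic is sketched below under strong simplifying assumptions: a one-dimensional matrix, a fixed MCS time step, and release at both slab boundaries. None of these choices is taken from the paper; the point is only to show how a random-walk heuristic generates a cumulative release curve.

```python
# Illustrative Monte Carlo heuristic for drug release from a 1D matrix.
import numpy as np

rng = np.random.default_rng(0)
L, n_mol, n_steps = 50, 2000, 20000
pos = rng.integers(1, L, n_mol)          # initial positions inside the matrix
released = np.zeros(n_steps)
alive = np.ones(n_mol, dtype=bool)       # True while still inside the matrix

for step in range(n_steps):
    pos[alive] += rng.choice([-1, 1], alive.sum())   # one MCS: random walk
    alive &= (pos > 0) & (pos < L)                   # boundary reached -> released
    released[step] = n_mol - alive.sum()

frac = released / n_mol                  # cumulative release curve vs. MCS steps
print("released fraction at 25/50/100% of run:",
      frac[n_steps//4], frac[n_steps//2], frac[-1])
```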
Data Validation in the Kepler Science Operations Center Pipeline
NASA Technical Reports Server (NTRS)
Wu, Hayley; Twicken, Joseph D.; Tenenbaum, Peter; Clarke, Bruce D.; Li, Jie; Quintana, Elisa V.; Allen, Christopher; Chandrasekaran, Hema; Jenkins, Jon M.; Caldwell, Douglas A.;
2010-01-01
We present an overview of the Data Validation (DV) software component and its context within the Kepler Science Operations Center (SOC) pipeline and the overall Kepler Science mission. The SOC pipeline performs a transiting planet search on the corrected light curves for over 150,000 targets across the focal plane array. We discuss the DV strategy for automated validation of Threshold Crossing Events (TCEs) generated in the transiting planet search. For each TCE, a transiting planet model is fitted to the target light curve. A multiple planet search is conducted by repeating the transiting planet search on the residual light curve after the model flux has been removed; if an additional detection occurs, a planet model is fitted to the new TCE. A suite of automated tests is performed after all planet candidates have been identified. We describe a centroid motion test to determine the significance of the motion of the target photocenter during transit and to estimate the coordinates of the transit source within the photometric aperture; a series of eclipsing binary discrimination tests on the parameters of the planet model fits to all transits and to the sequences of odd and even transits; and a statistical bootstrap to assess the likelihood that the TCE would have been generated purely by chance given the target light curve with all transits removed. Keywords: photometry, data validation, Kepler, Earth-size planets
Artifact detection in electrodermal activity using sparse recovery
NASA Astrophysics Data System (ADS)
Kelsey, Malia; Palumbo, Richard Vincent; Urbaneja, Alberto; Akcakaya, Murat; Huang, Jeannie; Kleckner, Ian R.; Barrett, Lisa Feldman; Quigley, Karen S.; Sejdic, Ervin; Goodwin, Matthew S.
2017-05-01
Electrodermal Activity (EDA), a peripheral index of sympathetic nervous system activity, is a primary measure used in psychophysiology. EDA is widely accepted as an indicator of physiological arousal, and it has been shown to reveal when psychologically novel events occur. Traditionally, EDA data are collected in controlled laboratory experiments. However, recent developments in wireless biosensing have led to an increase in out-of-lab studies. This transition to ambulatory data collection has introduced challenges. In particular, artifacts such as wearer motion, changes in temperature, and electrical interference can be misidentified as true EDA responses. The inability to distinguish artifact from signal hinders analyses of ambulatory EDA data. Though manual procedures for identifying and removing EDA artifacts exist, they are time-consuming, which is problematic for the types of longitudinal data sets represented in modern ambulatory studies. This manuscript presents a novel technique to automatically identify and remove artifacts in EDA data using curve fitting and sparse recovery methods. Our method was evaluated using labeled data to determine the accuracy of artifact identification. Procedures, results, conclusions, and future directions are presented.
Fernández-Varea, J M; Andreo, P; Tabata, T
1996-07-01
Average penetration depths and detour factors of 1-50 MeV electrons in water and plastic materials have been computed by means of analytical calculation, within the continuous-slowing-down approximation and including multiple scattering, and using the Monte Carlo codes ITS and PENELOPE. Results are compared to detour factors from alternative definitions previously proposed in the literature. Different procedures used in low-energy electron-beam dosimetry to convert ranges and depths measured in plastic phantoms into water-equivalent ranges and depths are analysed. A new simple and accurate scaling method, based on Monte Carlo-derived ratios of average electron penetration depths and thus incorporating the effect of multiple scattering, is presented. Data are given for most plastics used in electron-beam dosimetry together with a fit which extends the method to any other low-Z plastic material. A study of scaled depth-dose curves and mean energies as a function of depth for some plastics of common usage shows that the method improves the consistency and results of other scaling procedures in dosimetry with electron beams at therapeutic energies.
A probabilistic seismic risk assessment procedure for nuclear power plants: (I) Methodology
Huang, Y.-N.; Whittaker, A.S.; Luco, N.
2011-01-01
A new procedure for probabilistic seismic risk assessment of nuclear power plants (NPPs) is proposed. This procedure modifies current procedures using tools developed recently for performance-based earthquake engineering of buildings. The proposed procedure uses (a) response-based fragility curves to represent the capacity of structural and nonstructural components of NPPs, (b) nonlinear response-history analysis to characterize the demands on those components, and (c) Monte Carlo simulations to determine the damage state of the components. The use of response- rather than ground-motion-based fragility curves enables the curves to be independent of seismic hazard and closely related to component capacity. The use of the Monte Carlo procedure enables the correlation in the responses of components to be directly included in the risk assessment. An example of the methodology is presented in a companion paper to demonstrate its use and provide the technical basis for aspects of the methodology.
The mass of the black hole in 1A 0620-00, revisiting the ellipsoidal light curve modelling
NASA Astrophysics Data System (ADS)
van Grunsven, Theo F. J.; Jonker, Peter G.; Verbunt, Frank W. M.; Robinson, Edward L.
2017-12-01
The mass distribution of stellar-mass black holes can provide important clues to supernova modelling, but observationally it is still ill constrained. Therefore, it is of importance to make black hole mass measurements as accurate as possible. The X-ray transient 1A 0620-00 is well studied, with a published black hole mass of 6.61 ± 0.25 M⊙, based on an orbital inclination i of 51.0° ± 0.9°. This was obtained by Cantrell et al. (2010) as an average of independent fits to V-, I- and H-band light curves. In this work, we perform an independent check on the value of i by re-analysing existing YALO/SMARTS V-, I- and H-band photometry, using different modelling software and fitting strategy. Performing a fit to the three light curves simultaneously, we obtain a value for i of 54.1° ± 1.1°, resulting in a black hole mass of 5.86 ± 0.24 M⊙. Applying the same model to the light curves individually, we obtain 58.2° ± 1.9°, 53.6° ± 1.6° and 50.5° ± 2.2° for V-, I- and H-band, respectively, where the differences in best-fitting i are caused by the contribution of the residual accretion disc light in the three different bands. We conclude that the mass determination of this black hole may still be subject to systematic effects exceeding the statistical uncertainty. Obtaining more accurate masses would be greatly helped by continuous phase-resolved spectroscopic observations simultaneous with photometry.
NASA Astrophysics Data System (ADS)
Milani, G.; Milani, F.
A GUI software package (GURU) for fitting experimental rheometer curves of Natural Rubber (NR) vulcanized with sulphur at different curing temperatures is presented. Experimental data are automatically loaded into GURU from an Excel spreadsheet coming from the output of the experimental machine (moving die rheometer). To fit the experimental data, the general reaction scheme proposed by Han and co-workers for NR vulcanized with sulphur is considered. From the simplified kinetic scheme adopted, a closed-form solution can be found for the crosslink density, with the only limitation that the induction period is excluded from the computations. Three kinetic constants must be determined in such a way as to minimize the absolute error between the normalized experimental data and the numerical prediction. Usually, this result is achieved by means of standard least-squares data fitting. By contrast, GURU works interactively by means of a Graphical User Interface (GUI) to minimize the error and allows an interactive calibration of the kinetic constants by means of sliders. A simple mouse click on the sliders assigns a value to each kinetic constant and provides a visual comparison between the numerical and experimental curves. Users can thus find optimal values of the constants by means of a classic trial-and-error strategy. An experimental case of technical relevance is shown as a benchmark.
Applying a Hypoxia-Incorporating TCP Model to Experimental Data on Rat Sarcoma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruggieri, Ruggero, E-mail: ruggieri.ruggero@gmail.com; Stavreva, Nadejda; Naccarato, Stefania
2012-08-01
Purpose: To verify whether a tumor control probability (TCP) model which mechanistically incorporates acute and chronic hypoxia is able to describe animal in vivo dose-response data, exhibiting tumor reoxygenation. Methods and Materials: The investigated TCP model accounts for tumor repopulation, reoxygenation of chronic hypoxia, and fluctuating oxygenation of acute hypoxia. Using the maximum likelihood method, the model is fitted to Fischer-Moulder data on Wag/Rij rats, inoculated with rat rhabdomyosarcoma BA1112, and irradiated in vivo using different fractionation schemes. This data set is chosen because two of the experimental dose-response curves exhibit an inverse dose behavior, which is interpreted as due to reoxygenation. The tested TCP model is complex, and therefore in vivo cell survival data on the same BA1112 cell line from Reinhold were added to the Fischer-Moulder data and fitted simultaneously with a corresponding cell survival function. Results: The obtained fit to the combined Fischer-Moulder-Reinhold data was statistically acceptable. The best-fit values of the model parameters for which information exists were in the range of published values. The cell survival curves of well-oxygenated and hypoxic cells, computed using the best-fit values of the radiosensitivities and the initial number of clonogens, were in good agreement with the corresponding in vitro and in situ experiments of Reinhold. The best-fit values of most of the hypoxia-related parameters were used to recompute the TCP for non-small cell lung cancer patients as a function of the number of fractions, TCP(n). Conclusions: The investigated TCP model adequately describes animal in vivo data exhibiting tumor reoxygenation. The TCP(n) curve computed for non-small cell lung cancer patients with the best-fit values of most of the hypoxia-related parameters confirms the previously obtained abrupt reduction in TCP for n < 10, thus warning against the adoption of severely hypofractionated schedules.
Mathematical and Statistical Software Index.
1986-08-01
[Index fragment; partially recoverable. Routine descriptions include: (geometric) mean; HMEAN - harmonic mean; MEDIAN - median; MODE - mode; QUANT - quantiles; OGIVE - distribution curve; IQRNG - interpercentile range; RANGE - range. Keyword entries include: multiphase pivoting algorithm, cross-classification, multiple discriminant analysis, cross-tabulation, multiple-objective model, curve fitting. Program entries include *RANGEX (Correct Correlations for Curtailment of Range) and *RUMMAGE II (Analysis ...).]
A Software Tool for the Rapid Analysis of the Sintering Behavior of Particulate Bodies
2017-11-01
[Abstract fragment; partially recoverable: ... bounded by a region that the user selects via cross hairs. Future plot analysis features, such as more complicated curve fitting and modeling functions ... Cited reference: German RM, "Grain growth behavior of tungsten heavy alloys based on the master sintering curve concept," Metallurgical and Materials Transactions A.]
The utility of laboratory animal data in toxicology depends upon the ability to generalize the results quantitatively to humans. To compare the acute behavioral effects of inhaled toluene in humans to those in animals, dose-effect curves were fitted by meta-analysis of published...
Annual variation in the atmospheric radon concentration in Japan.
Kobayashi, Yuka; Yasuoka, Yumi; Omori, Yasutaka; Nagahama, Hiroyuki; Sanada, Tetsuya; Muto, Jun; Suzuki, Toshiyuki; Homma, Yoshimi; Ihara, Hayato; Kubota, Kazuhito; Mukai, Takahiro
2015-08-01
Anomalous atmospheric variations in radon related to earthquakes have been observed in hourly exhaust-monitoring data from radioisotope institutes in Japan. The extraction of seismic anomalous radon variations would be greatly aided by understanding the normal pattern of variation in radon concentrations. Using atmospheric daily minimum radon concentration data from five sampling sites, we show that a sinusoidal regression curve can be fitted to the data. In addition, we identify areas where the atmospheric radon variation is significantly affected by the variation in atmospheric turbulence and the onshore-offshore pattern of Asian monsoons. Furthermore, by comparing the sinusoidal regression curve for the normal annual (seasonal) variations at the five sites to the sinusoidal regression curve for a previously published dataset of radon values at the five Japanese prefectures, we can estimate the normal annual variation pattern. By fitting sinusoidal regression curves to the previously published dataset containing sites in all Japanese prefectures, we find that 72% of the Japanese prefectures satisfy the requirements of the sinusoidal regression curve pattern. Using the normal annual variation pattern of atmospheric daily minimum radon concentration data, these prefectures are suitable areas for obtaining anomalous radon variations related to earthquakes.
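The sinusoidal regression described above amounts to a three-parameter fit; a minimal sketch with synthetic daily-minimum data follows (all values and units are assumptions, not the study's measurements).

```python
# Sketch of sinusoidal regression for an annual (seasonal) variation:
# C(d) = mean + amp * sin(2*pi*d/365.25 + phase).
import numpy as np
from scipy.optimize import curve_fit

def seasonal(d, mean, amp, phase):
    return mean + amp * np.sin(2*np.pi*d/365.25 + phase)

days = np.arange(0, 3*365)                          # three years of daily minima
conc = seasonal(days, 6.0, 2.0, 1.2) + np.random.normal(0, 0.6, days.size)

popt, _ = curve_fit(seasonal, days, conc, p0=[5, 1, 0])
print("mean, amplitude, phase:", popt)              # e.g. Bq/m^3 and rad (assumed)
```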
Derbidge, Renatus; Feiten, Linus; Conradt, Oliver; Heusser, Peter; Baumgartner, Stephan
2013-01-01
Photographs of mistletoe (Viscum album L.) berries taken by a permanently fixed camera during their development in autumn were subjected to an outline shape analysis by fitting path curves using a mathematical algorithm from projective geometry. During growth and maturation the shape of mistletoe berries can be described by a set of such path curves, making it possible to extract changes of shape using one parameter, called Lambda. Lambda describes the outline shape of a path curve. Here we present methods and software to capture and measure these changes of form over time. The present paper describes the software used to automate a number of tasks, including contour recognition, optimization of the contour fit via hill-climbing, derivation of the path curves, computation of Lambda, and blinding of the pictures for the operator. The validity of the program is demonstrated by results from three independent measurements showing circadian rhythm in mistletoe berries. The program is available as open source and will be applied in a project to analyze the chronobiology of shape in mistletoe berries and the buds of their host trees. PMID:23565255
A mathematical function for the description of nutrient-response curve
Ahmadi, Hamed
2017-01-01
Several mathematical equations have been proposed for modeling nutrient-response curves in animals and humans, justified by goodness of fit and/or by the underlying biological mechanism. In this paper, a functional form of a generalized quantitative model based on the Rayleigh distribution principle for the description of nutrient-response phenomena is derived. The three parameters governing the curve (a) have biological interpretations, (b) may be used to calculate reliable estimates of nutrient-response relationships, and (c) provide the basis for deriving relationships between nutrient and physiological responses. The new function was successfully applied to fit nutritional data obtained from 6 experiments covering a wide range of nutrients and responses. An evaluation and comparison were also carried out on simulated data sets to check the suitability of the new model and of a four-parameter logistic model for describing nutrient responses. This study indicates the usefulness and wide applicability of the newly introduced, simple and flexible model when applied as a quantitative approach to characterizing nutrient-response curves. This new mathematical way to describe nutritional-response data, with some useful biological interpretations, has potential to be used as an alternative approach for modeling nutrient-response curves to estimate nutrient efficiency and requirements. PMID:29161271
Runoff Potentiality of a Watershed through SCS and Functional Data Analysis Technique
Adham, M. I.; Shirazi, S. M.; Othman, F.; Rahman, S.; Yusop, Z.; Ismail, Z.
2014-01-01
Runoff potentiality of a watershed was assessed based on identifying curve number (CN), soil conservation service (SCS), and functional data analysis (FDA) techniques. Daily discrete rainfall data were collected from weather stations in the study area and analyzed through the lowess method for curve smoothing. As runoff data represent a periodic pattern in each watershed, a Fourier series was introduced to fit the smooth curve of eight watersheds. Seven terms of the Fourier series were used for watersheds 5 and 8, while eight terms were used for the rest of the watersheds to obtain the best fit of the data. Bootstrapping smooth curve analysis reveals that watersheds 1, 2, 3, 6, 7, and 8 have monthly mean runoffs of 29, 24, 22, 23, 26, and 27 mm, respectively, and these watersheds would likely contribute to surface runoff in the study area. The purpose of this study was to transform runoff data into a smooth curve representing the surface runoff pattern and mean runoff of each watershed through statistical methods. This study provides information on the runoff potentiality of each watershed and also provides input data for hydrological modeling. PMID:25152911
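As an illustration of the Fourier-series fitting step, the hedged sketch below fits a truncated series with seven harmonics to a synthetic smoothed daily runoff curve by linear least squares; the data and the annual period are assumptions, not the study's watershed records.

```python
# Truncated Fourier-series fit (K harmonics) by linear least squares.
import numpy as np

days = np.arange(365)
w = 2*np.pi*days/365.0
# synthetic smoothed runoff curve [mm] standing in for lowess output
runoff = 30 + 20*np.sin(w - 1.0) + 5*np.sin(3*w) + np.random.normal(0, 2, 365)

K = 7   # number of Fourier terms, as used for watersheds 5 and 8
X = np.column_stack([np.ones(days.size)] +
                    [f(k*w) for k in range(1, K+1) for f in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(X, runoff, rcond=None)
fit = X @ coef
print("max abs residual [mm]:", np.abs(runoff - fit).max())
```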
NASA Astrophysics Data System (ADS)
Li, Zhengxiang; Gonzalez, J. E.; Yu, Hongwei; Zhu, Zong-Hong; Alcaniz, J. S.
2016-02-01
We apply two methods, Gaussian processes and a nonparametric smoothing procedure, to reconstruct the Hubble parameter H(z) as a function of redshift from 15 measurements of the expansion rate obtained from age estimates of passively evolving galaxies. These reconstructions enable us to derive the luminosity distance to a given redshift z, calibrate the light-curve fitting parameters accounting for the (unknown) intrinsic magnitude of type Ia supernovae (SNe Ia), and construct cosmological model-independent Hubble diagrams of SNe Ia. In order to test the compatibility between the reconstructed functions of H(z), we perform a statistical analysis considering the latest SNe Ia sample, the so-called joint light-curve compilation. We find that, for the Gaussian processes, the reconstructed functions of Hubble parameter versus redshift, and thus the subsequent analysis of SNe Ia calibrations and cosmological implications, are sensitive to the prior mean function. However, for the nonparametric smoothing method, the reconstructed functions do not depend on an initial guess model and consistently require high values of H0, in excellent agreement with recent measurements of this quantity from Cepheids and other local distance indicators.
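The sketch below illustrates a Gaussian-process reconstruction of H(z) in the spirit of the first method; the data points, the RBF kernel, and the (implicit, normalized) mean function are stand-in assumptions, which is worth stressing given the paper's finding that results are sensitive to the prior mean.

```python
# GP reconstruction of H(z) from cosmic-chronometer-style data (synthetic).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

z = np.array([0.1, 0.3, 0.5, 0.9, 1.3, 1.75])[:, None]
H = np.array([69, 77, 88, 110, 135, 170.0])          # km/s/Mpc (invented)
sig = np.array([5, 6, 7, 9, 12, 15.0])               # 1-sigma errors (invented)

gp = GaussianProcessRegressor(kernel=ConstantKernel(1e4) * RBF(1.0),
                              alpha=sig**2,          # per-point noise variance
                              normalize_y=True)
gp.fit(z, H)

zs = np.linspace(0, 2, 50)[:, None]
Hs, Hs_std = gp.predict(zs, return_std=True)
print(f"reconstructed H0 = {Hs[0]:.1f} +/- {Hs_std[0]:.1f} km/s/Mpc")
```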
Goede, Simon L; Leow, Melvin Khee-Shing
2013-01-01
This treatise investigates error sources in measurements applicable to the hypothalamus-pituitary-thyroid (HPT) system of analysis for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in thyroid function test (TFT) measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainty are (1) diurnal variations in [TSH]; (2) TFT measurement variations influenced by the timing of thyroid medications; (3) error sensitivity in the ranges of [TSH] and [FT4] (laboratory assay dependent); (4) rounding/truncation of decimals in [FT4], which in turn amplifies curve-fitting errors in the [TSH] domain in the lower [FT4] range; and (5) memory effects (a rate-independent hysteresis effect). Once the main uncertainties in TFTs are identified and analyzed, we can find the most acceptable model space with which to construct the best HP function and the related set point area.
Monte Carlo simulation of β-γ coincidence system using plastic scintillators in 4π geometry
NASA Astrophysics Data System (ADS)
Dias, M. S.; Piuvezam-Filho, H.; Baccarelli, A. M.; Takeda, M. N.; Koskinas, M. F.
2007-09-01
A modified version of a Monte Carlo code called Esquema, developed at the Nuclear Metrology Laboratory in IPEN, São Paulo, Brazil, has been applied to simulate a 4πβ(PS)-γ coincidence system designed for primary radionuclide standardisation. This system consists of a plastic scintillator in 4π geometry, for alpha or electron detection, coupled to a NaI(Tl) counter for gamma-ray detection. The response curves for monoenergetic electrons and photons had been calculated previously with the PENELOPE code and were applied as input data to the code Esquema. The latter code simulates all the disintegration processes, from the precursor nucleus to the ground state of the daughter radionuclide. As a result, the curve of observed disintegration rate as a function of the beta-efficiency parameter can be simulated. A least-squares fit between the experimental activity values and the Monte Carlo calculation provided the actual radioactive source activity, without the need for conventional extrapolation procedures. Application of this methodology to ⁶⁰Co and ¹³³Ba radioactive sources is presented and shows results in good agreement with a conventional 4πβ(PC)-γ proportional-counter coincidence system.
Can Tooth Preparation Design Affect the Fit of CAD/CAM Restorations?
Roperto, Renato Cassio; Oliveira, Marina Piolli; Porto, Thiago Soares; Ferreira, Lais Alaberti; Melo, Lucas Simino; Akkus, Anna
2017-03-01
The purpose of this study was to evaluate whether the marginal fit of computer-aided design and computer-aided manufacturing (CAD/CAM) restorations can be affected by different tooth preparation designs. Twenty-six typodont (plastic) teeth were divided into two groups (n = 13) according to the occlusal curvature of the tooth preparation: group 1 (control; flat occlusal design) and group 2 (curved occlusal design). Scanning of the preparations was performed, and crowns were milled from ceramic blocks. Crowns were cemented using epoxy glue on the pulpal floor only, and finger pressure was applied for 1 minute. On completion of the cementation step, the misfit between the restoration and abutment was measured by microphotography and the silicone replica technique using light-body silicone material on the mesial, distal, buccal, and lingual surfaces. Two-way ANOVA did not reveal a statistical difference between the flat (83.61 ± 50.72) and curved (79.04 ± 30.97) preparation designs. Buccal, mesial, lingual, and distal sites on the curved-design preparation showed smaller gaps than on the flat design. No difference was found on flat preparations among the mesial, buccal, and distal sites (P < .05). The lingual aspect did not differ from the distal site but showed a statistically significant difference from the mesial and buccal sites (P < .05). Difference in occlusal design did not significantly impact the marginal fit. Marginal fit was significantly affected by the location of the margin; lingual and distal locations exhibited greater margin gap values than buccal and mesial sites regardless of the preparation design.
NASA Astrophysics Data System (ADS)
Michel, Claude; Andréassian, Vazken; Perrin, Charles
2005-02-01
This paper unveils major inconsistencies in the age-old and yet efficient Soil Conservation Service Curve Number (SCS-CN) procedure. Our findings are based on an analysis of the continuous soil moisture accounting procedure implied by the SCS-CN equation. It is shown that several flaws plague the original SCS-CN procedure, the most important one being a confusion between intrinsic parameter and initial condition. A change of parameterization and a more complete assessment of the initial condition lead to a renewed SCS-CN procedure, while keeping the acknowledged efficiency of the original method.
Permutation tests for goodness-of-fit testing of mathematical models to experimental data.
Fişek, M Hamit; Barlas, Zeynep
2013-03-01
This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models against data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition, carried out in the "standardized experimental situation" associated with the program, to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test, which has been the primary statistical tool for assessing goodness of fit in the EST research program but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test proposed as an alternative to the chi-square test, being simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives and of Kendall's tau-b, a rank-order correlation coefficient. The fifth procedure is a new rank-order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, which we believe may be more useful than other rank-order tests for assessing the goodness of fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study from an experimental paradigm different from the expectation states paradigm, the "network exchange" paradigm, and describe how our procedures may be applied to this data set.
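To make the flavor of these resampling procedures concrete, the sketch below implements a hedged stand-in for the "Average Absolute Deviation" statistic: the mean absolute gap between observed proportions and model-predicted probabilities, with a null distribution built by simulating data under the model. This is a parametric-bootstrap variant; the paper's exact resampling scheme may differ, and all numbers are invented.

```python
# Resampling goodness-of-fit sketch in the spirit of the AAD test.
import numpy as np

rng = np.random.default_rng(1)
p_model = np.array([0.62, 0.70, 0.77, 0.85])   # model-predicted P(stay) per condition
n_obs   = np.array([40, 40, 40, 40])           # trials per condition (invented)
k_obs   = np.array([22, 30, 33, 36])           # observed "stay" counts (invented)

def aad(k, n, p):
    # average absolute deviation of observed proportions from the model
    return np.mean(np.abs(k / n - p))

t_obs = aad(k_obs, n_obs, p_model)
t_null = np.array([aad(rng.binomial(n_obs, p_model), n_obs, p_model)
                   for _ in range(10000)])
print(f"AAD = {t_obs:.3f}, p-value = {(t_null >= t_obs).mean():.3f}")
```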
Error reduction in three-dimensional metrology combining optical and touch probe data
NASA Astrophysics Data System (ADS)
Gerde, Janice R.; Christens-Barry, William A.
2010-08-01
Analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS) is partly based on identifying the boundary ("parting line") between the "external surface area upper" (ESAU) and the sample's sole. Often, that boundary is obscured. We establish the parting line as the curved intersection between the sample's outer surface and its insole surface. The outer surface is determined by discrete point cloud coordinates obtained using a laser scanner. The insole surface is defined by point cloud data obtained using a touch-probe device, a coordinate measuring machine (CMM). Because these point cloud data sets do not overlap spatially, a polynomial surface is fitted to the insole data and extended to intersect a mesh fitted to the outer-surface point cloud. This line of intersection defines the ESAU boundary, permitting further fractional area calculations to proceed. The defined parting line location is sensitive to the polynomial used to fit the experimental data, and extrapolation to the intersection with the ESAU can heighten this sensitivity. We discuss a methodology for transforming these data into a common reference frame. Three error scenarios are considered: measurement error in the point cloud coordinates; error from fitting a polynomial surface to a point cloud and then extrapolating beyond the data set; and error from the reference-frame transformation. These error sources can influence calculated surface areas. We describe experiments to assess error magnitude, the sensitivity of calculated results to these errors, and ways to minimize the error impact on calculated quantities. Ultimately, we must ensure that statistical error from these procedures is minimized and within acceptance criteria.
SU-E-T-223: Computed Radiography Dose Measurements of External Radiotherapy Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aberle, C; Kapsch, R
2015-06-15
Purpose: To obtain quantitative, two-dimensional dose measurements of external radiotherapy beams with a computed radiography (CR) system and to derive volume correction factors for ionization chambers in small fields. Methods: A commercial Kodak ACR2000i CR system with Kodak Flexible Phosphor Screen HR storage foils was used. Suitable measurement conditions and procedures were established. Several corrections were derived, including image fading, length-scale corrections and long-term stability corrections. Dose calibration curves were obtained for cobalt, 4 MV, 8 MV and 25 MV photons, and for 10 MeV, 15 MeV and 18 MeV electrons in a water phantom. Inherent measurement inhomogeneities were studied, as well as the directional dependence of the response. Finally, 2D scans with ionization chambers were directly compared to CR measurements, and volume correction factors were derived. Results: Dose calibration curves (0.01 Gy to 7 Gy) were obtained for multiple photon and electron beam qualities. For each beam quality, the calibration curves can be described by a single fit equation over the whole dose range. The energy dependence of the dose response was determined. The length scale on the images was adjusted scan-by-scan, typically by 2 percent horizontally and by 3 percent vertically. The remaining inhomogeneities after the system's standard calibration procedure were corrected for. After correction, the homogeneity is on the order of a few percent. The storage foils can be rotated by up to 30 degrees without a significant effect on the measured signal. First results on the determination of volume correction factors were obtained. Conclusion: With CR, quantitative, two-dimensional dose measurements with high spatial resolution (sub-mm) can be obtained over a large dose range. In order to make use of these advantages, several calibrations, corrections and supporting measurements are needed. This work was funded by the European Metrology Research Programme (EMRP) project HLT09 MetrExtRT Metrology for Radiotherapy using Complex Radiation Fields.
High pressure melting curve of platinum up to 35 GPa
NASA Astrophysics Data System (ADS)
Patel, Nishant N.; Sunder, Meenakshi
2018-04-01
The melting curve of platinum (Pt) has been measured up to 35 GPa using our laboratory-based laser-heated diamond anvil cell (LHDAC) facility. The laser speckle method was employed to detect the onset of melting. The high-pressure melting curve of Pt obtained in the present study is compared with previously reported experimental and theoretical results. The measured melting curve agrees well, within experimental error, with the results of Kavner et al. The experimental data fitted with the Simon equation give (∂Tm/∂P) ≈ 25 K/GPa at P ≈ 1 MPa.
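A minimal sketch of the Simon-equation fit follows, assuming the common Simon-Glatzel form Tm = T0(1 + P/a)^(1/c); the melting points below are invented stand-ins for the LHDAC data, and the ambient-pressure slope follows analytically as dTm/dP = T0/(a*c).

```python
# Simon(-Glatzel) melting-curve fit sketch with synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def simon(P, T0, a, c):
    return T0 * (1.0 + P / a)**(1.0 / c)

P  = np.array([0.0001, 5, 10, 15, 20, 25, 30, 35])                  # GPa
Tm = np.array([2041, 2160, 2270, 2370, 2460, 2550, 2630, 2700.0])   # K (synthetic)

(T0, a, c), _ = curve_fit(simon, P, Tm, p0=[2040, 20, 3])
# analytic low-pressure slope of the fitted curve
print(f"dTm/dP at ambient pressure ~ {T0/(a*c):.1f} K/GPa")
```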
[Keratoconus special soft contact lens fitting].
Yamazaki, Ester Sakae; da Silva, Vanessa Cristina Batista; Morimitsu, Vagner; Sobrinho, Marcelo; Fukushima, Nelson; Lipener, César
2006-01-01
To evaluate the fitting and use of a soft contact lens in keratoconic patients. Retrospective study of 80 eyes of 66 patients fitted with a special soft contact lens for keratoconus at the Contact Lens Section of UNIFESP and in private clinics. Keratoconus was classified according to degree of disease severity by keratometric pattern. Age, gender, diagnosis, keratometry, visual acuity, spherical equivalent (SE), base curve, and clinical indication were recorded. The mean age of the 66 patients (80 eyes) was 29 years; 51.5% were men and 48.5% women. By severity group, 15.0% of eyes were incipient, 53.7% moderate, 26.3% advanced and 5.0% severe. The majority of eyes of patients using the contact lenses (91.25%) achieved visual acuity better than 20/40. Of the 80 eyes, 58% were fitted with lenses of spherical power (mean -5.45 diopters) and 41% with spherocylindrical power (from -0.5 to -5.00 cylindrical diopters). The most frequent base curve was 7.6, used in 61% of the eyes. The main reasons for fitting this special lens were reduced tolerance and the poor fitting pattern achieved with other lenses. The special soft contact lens is useful for fitting difficult keratoconic patients, offering comfort and improving visual rehabilitation, and may allow more patients to postpone the need for corneal transplantation.
Why "suboptimal" is optimal: Jensen's inequality and ectotherm thermal preferences.
Martin, Tara Laine; Huey, Raymond B
2008-03-01
Body temperature (Tb) profoundly affects the fitness of ectotherms. Many ectotherms use behavior to control Tb within narrow limits. These temperatures are assumed to be optimal and therefore to match the body temperature (Trmax) that maximizes fitness (r). We develop an optimality model and find that the optimal body temperature (To) should not be centered at Trmax but shifted to a lower temperature. This finding seems paradoxical but results from two considerations relating to Jensen's inequality, which deals with how variance and skew influence integrals of nonlinear functions. First, ectotherms are not perfect thermoregulators and so experience a range of Tb. Second, temperature-fitness curves are asymmetric, such that a Tb higher than Trmax depresses fitness more than a Tb displaced an equivalent amount below Trmax. Our model makes several predictions. The magnitude of the optimal shift (Trmax - To) should increase with the degree of asymmetry of the temperature-fitness curve and with Tb variance. Deviations should be relatively large for thermal specialists but insensitive to whether fitness increases with Trmax ("hotter is better"). Asymmetric (left-skewed) Tb distributions reduce the magnitude of the optimal shift but do not eliminate it. Comparative data (insects, lizards) support key predictions. Thus, "suboptimal" is optimal.
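A short numerical check of this argument (not from the paper) is given below: with a generic asymmetric performance curve and normally distributed Tb around a behavioral target, expected fitness peaks at a target below Trmax, exactly as Jensen's inequality predicts. The particular curve shape and spread are invented for illustration.

```python
# Numerical illustration: optimal thermal target lies below Trmax.
import numpy as np

def fitness(T, Trmax=35.0):
    # asymmetric performance curve: gentle decline below Trmax, steep above
    return np.where(T <= Trmax,
                    np.exp(-((T - Trmax)/8.0)**2),
                    np.exp(-((T - Trmax)/2.0)**2))

sigma = 2.0                                   # SD of realized body temperatures
targets = np.linspace(25, 38, 400)
Tb_noise = np.random.default_rng(2).normal(0, sigma, 20000)

expected = [fitness(t + Tb_noise).mean() for t in targets]
print("optimal target To =", round(targets[int(np.argmax(expected))], 2),
      "< Trmax = 35.0")
```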
Parameterizing sorption isotherms using a hybrid global-local fitting procedure.
Matott, L Shawn; Singh, Anshuman; Rabideau, Alan J
2017-05-01
Predictive modeling of the transport and remediation of groundwater contaminants requires an accurate description of the sorption process, which is usually provided by fitting an isotherm model to site-specific laboratory data. Commonly used calibration procedures, listed in order of increasing sophistication, include: trial-and-error, linearization, non-linear regression, global search, and hybrid global-local search. Given the considerable variability in fitting procedures applied in published isotherm studies, we investigated the importance of algorithm selection through a series of numerical experiments involving 13 previously published sorption datasets. These datasets, considered representative of the state of the art for isotherm experiments, had been previously analyzed using trial-and-error, linearization, or non-linear regression methods. The isotherm expressions were re-fit using a 3-stage hybrid global-local search procedure (i.e. global search using particle swarm optimization followed by Powell's derivative-free local search method and Gauss-Marquardt-Levenberg non-linear regression). The re-fitted expressions were then compared to previously published fits in terms of the optimized weighted sum of squared residuals (WSSR) fitness function, the final estimated parameters, and the influence on contaminant transport predictions, where easily computed concentration-dependent contaminant retardation factors served as a surrogate measure of likely transport behavior. Results suggest that many of the previously published calibrated isotherm parameter sets were local minima. In some cases, the updated hybrid global-local search yielded order-of-magnitude reductions in the fitness function. In particular, of the candidate isotherms, the Polanyi-type models were most likely to benefit from the use of the hybrid fitting procedure. In some cases, improvements in fitness function were associated with slight (<10%) changes in parameter values, but in other cases significant (>50%) changes in parameter values were noted. Despite these differences, the influence of isotherm misspecification on contaminant transport predictions was quite variable and difficult to predict from inspection of the isotherms.
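The sketch below illustrates the hybrid global-local idea on a simple Freundlich isotherm, with scipy's differential_evolution standing in for the paper's particle swarm stage (scipy has no built-in PSO) and Powell's method as the local refiner; the data, bounds, and residual weighting are invented for illustration.

```python
# Hybrid global-local fit of a Freundlich isotherm q = Kf * C**n.
import numpy as np
from scipy.optimize import differential_evolution, minimize

C = np.array([0.05, 0.1, 0.5, 1, 5, 10, 50.0])       # aqueous conc. (invented)
q = np.array([1.2, 1.9, 5.0, 7.6, 19.0, 28.0, 70.0])  # sorbed conc. (invented)

def wssr(p):
    Kf, n = p
    return np.sum(((q - Kf * C**n) / q)**2)           # relatively weighted SSR

glob = differential_evolution(wssr, bounds=[(1e-3, 100), (0.1, 1.5)], seed=3)
loc = minimize(wssr, glob.x, method="Powell")         # local refinement stage
print("global stage:", glob.x, "-> refined:", loc.x, " WSSR:", loc.fun)
```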
Calvi, Andrea; Ferrari, Alberto; Sbuelz, Luca; Goldoni, Andrea; Modesti, Silvio
2016-05-19
Multi-walled carbon nanotubes (CNTs) have been grown in situ on a SiO2 substrate and used as gas sensors. For this purpose, the voltage response of the CNTs as a function of time has been used to detect H2 and CO2 at various concentrations by supplying a constant current to the system. The analysis of both the adsorption and desorption curves has revealed two different exponential behaviours in each curve. The study of the characteristic times, obtained from fits to the data, has allowed us to separately identify chemisorption and physisorption processes on the CNTs.
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Shi, Xiaodong; Udpa, Lalita; Deng, Yiming
2018-05-01
Magnetic Barkhausen noise (MBN) was measured in low-carbon steels, and the relationship between carbon content and a parameter extracted from the MBN signal has been investigated. The parameter is extracted experimentally by fitting the original profiles with two Gaussian curves. The gap between the two peaks (ΔG) of the fitted Gaussian curves shows a clear linear relationship with the carbon content of the samples in the experiment. The result has been validated against simulations using the Monte Carlo method. To ensure the sensitivity of the measurement, the advanced multi-objective optimization algorithm Non-dominated Sorting Genetic Algorithm III (NSGA-III) has been used to optimize the sensor's magnetic core.
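The parameter-extraction step can be sketched as follows: fit the MBN envelope with a sum of two Gaussians and read off the gap ΔG between the fitted peak centres. The profile below is synthetic, and the code is only a minimal illustration of the procedure described above.

```python
# Two-Gaussian fit of an MBN envelope and extraction of the peak gap ΔG.
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(x, A1, m1, s1, A2, m2, s2):
    return (A1*np.exp(-(x-m1)**2/(2*s1**2)) +
            A2*np.exp(-(x-m2)**2/(2*s2**2)))

x = np.linspace(-1, 1, 400)                    # applied field, normalized
y = two_gauss(x, 1.0, -0.15, 0.18, 0.7, 0.25, 0.22) \
    + np.random.normal(0, 0.01, x.size)        # synthetic noisy envelope

p, _ = curve_fit(two_gauss, x, y, p0=[1, -0.2, 0.2, 0.5, 0.2, 0.2])
delta_G = abs(p[4] - p[1])                     # gap between the two peak centres
print(f"ΔG = {delta_G:.3f} (the quantity correlated with carbon content)")
```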
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency-hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese yen swaps market and the US dollar yield market.
The light curve of SN 1987A revisited: constraining production masses of radioactive nuclides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seitenzahl, Ivo R.; Timmes, F. X.; Magkotsios, Georgios, E-mail: ivo.seitenzahl@anu.edu.au
2014-09-01
We revisit the evidence for the contribution of the long-lived radioactive nuclides ⁴⁴Ti, ⁵⁵Fe, ⁵⁶Co, ⁵⁷Co, and ⁶⁰Co to the UVOIR light curve of SN 1987A. We show that the V-band luminosity constitutes a roughly constant fraction of the bolometric luminosity between 900 and 1900 days, and we obtain an approximate bolometric light curve out to 4334 days by scaling the late-time V-band data by a constant factor where no bolometric light curve data are available. Considering the five most relevant decay chains starting at ⁴⁴Ti, ⁵⁵Co, ⁵⁶Ni, ⁵⁷Ni, and ⁶⁰Co, we perform a least-squares fit to the constructed composite bolometric light curve. For the nickel isotopes, we obtain best-fit values of M(⁵⁶Ni) = (7.1 ± 0.3) × 10⁻² M⊙ and M(⁵⁷Ni) = (4.1 ± 1.8) × 10⁻³ M⊙. Our best-fit ⁴⁴Ti mass is M(⁴⁴Ti) = (0.55 ± 0.17) × 10⁻⁴ M⊙, which is in disagreement with the much higher (3.1 ± 0.8) × 10⁻⁴ M⊙ recently derived from INTEGRAL observations. The associated uncertainties far exceed the best-fit values for ⁵⁵Co and ⁶⁰Co and, as a result, we only give upper limits on the production masses of M(⁵⁵Co) < 7.2 × 10⁻³ M⊙ and M(⁶⁰Co) < 1.7 × 10⁻⁴ M⊙. Furthermore, we find that the leptonic channels in the decay of ⁵⁷Co (internal conversion and Auger electrons) are a significant contribution and constitute up to 15.5% of the total luminosity. Consideration of the kinetic energy of these electrons is essential in lowering our best-fit nickel isotope production ratio to [⁵⁷Ni/⁵⁶Ni] = 2.5 ± 1.1, which is still somewhat high but is in agreement with gamma-ray observations and model predictions.
K-S Test for Goodness of Fit and Waiting Times for Fatal Plane Accidents
ERIC Educational Resources Information Center
Gwanyama, Philip Wagala
2005-01-01
The Kolmogorov-Smirnov (K-S) test for goodness of fit was developed by Kolmogorov in 1933 [1] and Smirnov in 1939 [2]. Its procedures are suitable for testing the goodness of fit of a data set for most probability distributions regardless of sample size [3-5]. These procedures, modified for the exponential distribution by Lilliefors [5] and…
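A small sketch of the test the abstract discusses, applied to synthetic exponential waiting times; note that estimating the rate from the same sample biases the plain K-S p-value, which is exactly what Lilliefors' modification corrects:

```python
# Sketch: K-S goodness-of-fit check of waiting times against an exponential
# distribution. Data are synthetic; use Lilliefors tables for a valid p-value
# when the rate is estimated from the sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
waits = rng.exponential(scale=30.0, size=100)      # waiting times (days)

scale_hat = waits.mean()                           # MLE of the mean (1/lambda)
d_stat, p_naive = stats.kstest(waits, "expon", args=(0, scale_hat))
print(f"D = {d_stat:.3f}, naive p = {p_naive:.3f}")
```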
Noakes, Kimberley F.; Bissett, Ian P.; Pullan, Andrew J.; Cheng, Leo K.
2014-01-01
Three anatomically realistic meshes of the pelvic floor and anal canal regions, suitable for finite element analysis, have been developed to provide a framework with which to examine, via finite element analysis, the mechanics of normal function within the pelvic floor. Two cadaver-based meshes were produced using the Visible Human Project (male and female) cryosection data sets, and a third mesh was produced from MR image data of a live subject. The Visible Man (VM) mesh included 10 different pelvic structures, while the Visible Woman and MRI meshes contained 14 and 13 structures, respectively. Each image set was digitized, and finite element meshes were then created using an iterative fitting procedure with smoothing weights calculated from L-curves. These weights produced accurate geometric meshes of each pelvic structure, with average root mean square (RMS) fitting errors of less than 1.15 mm. The Visible Human cadaveric data provided high-resolution images; however, the cadaveric meshes lacked the normal dynamic form of living tissue and suffered from artifacts related to postmortem changes. The lower-resolution MRI mesh was able to accurately portray the structure of the living subject and paves the way for dynamic, functional modeling. PMID:18317929
NASA Astrophysics Data System (ADS)
Tian, J.; Krauß, T.; d'Angelo, P.
2017-05-01
Automatic rooftop extraction is one of the most challenging problems in remote sensing image analysis. Classical 2D image processing techniques are expensive due to the large number of features required to locate buildings. This problem can be avoided when 3D information is available. In this paper, we show how to fuse the spectral and height information of stereo imagery to achieve efficient and robust rooftop extraction. In the first step, the digital terrain model (DTM), and in turn the normalized digital surface model (nDSM), is generated using a newly developed step-edge approach. In the second step, initial building locations and rooftop boundaries are derived by removing low-elevation pixels and those high-elevation pixels that are more likely to be trees or shadows. This boundary then serves as the initial level-set function, which is further refined to fit the best possible boundaries through distance-regularized level-set curve evolution. During the fitting procedure, an edge-based active contour model is adopted and implemented using edge indicators extracted from the panchromatic image. The performance of the proposed approach is tested using WorldView-2 satellite data captured over Munich.
AAA gunner model based on observer theory. [predicting a gunner's tracking response]
NASA Technical Reports Server (NTRS)
Kou, R. S.; Glass, B. C.; Day, C. N.; Vikmanis, M. M.
1978-01-01
The Luenberger observer theory is used to develop a predictive model of a gunner's tracking response in antiaircraft artillery systems. This model is composed of an observer, a feedback controller, and a remnant element. An important feature of the model is that its structure is simple; hence a computer simulation requires only a short execution time. A parameter identification program based on the least-squares curve-fitting method and the Gauss-Newton gradient algorithm is developed to determine the parameter values of the gunner model. Thus, a systematic procedure exists for identifying model parameters for a given antiaircraft tracking task. Model predictions of tracking errors are compared with human tracking data obtained from manned simulation experiments. Model predictions are in excellent agreement with the empirical data for several flyby and maneuvering target trajectories.
Hot, cold, and annual reference atmospheres for Edwards Air Force Base, California (1975 version)
NASA Technical Reports Server (NTRS)
Johnson, D. L.
1975-01-01
Reference atmospheres pertaining to summer (hot), winter (cold), and mean annual conditions for Edwards Air Force Base, California, are presented from the surface to 90 km altitude (700 km for the annual model). Computed values of pressure, kinetic temperature, virtual temperature, and density, together with relative differences (percentage departures from the Edwards Reference Atmospheres, 1975 (ERA-75)) of the atmospheric parameters versus altitude, are tabulated in 250 m increments. Hydrostatic and gas law equations were used in conjunction with radiosonde and rocketsonde thermodynamic data in determining the vertical structure of these atmospheric models. The thermodynamic parameters were all subjected to a fifth-degree least-squares curve-fit procedure, and the resulting coefficients were incorporated into Univac 1108 computer subroutines so that any quantity may be recomputed at any desired altitude using these subroutines.
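The fifth-degree fit described here is straightforward to reproduce in outline; the sketch below fits a toy temperature profile (not ERA-75 data) over 250 m increments and then recomputes the quantity at an arbitrary altitude from the six stored coefficients:

```python
# Sketch: fifth-degree least-squares fit of a (synthetic) temperature
# profile versus altitude, mirroring the coefficient-based recomputation.
import numpy as np

alt_km = np.arange(0.0, 90.25, 0.25)               # 250 m increments
temp_k = 288.0 - 6.5 * np.clip(alt_km, 0, 11) + 0.02 * alt_km**2  # toy profile

coeffs = np.polyfit(alt_km, temp_k, deg=5)         # six coefficients
print(np.polyval(coeffs, 37.5))                    # recompute at any altitude
```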
Rapid fading of optical afterglows as evidence for beaming in gamma-ray bursts
NASA Astrophysics Data System (ADS)
Huang, Y. F.; Dai, Z. G.; Lu, T.
2000-03-01
Based on the refined dynamical model proposed by us earlier for beamed gamma-ray burst ejecta, we carry out a detailed numerical procedure to study those gamma-ray bursts with rapidly fading afterglows (i.e., ∼ t^-2). It is found that the optical afterglows of GRB 970228, 980326, 980519, 990123, 990510, and 991208 can be satisfactorily fitted if the gamma-ray burst ejecta are highly collimated, with a universal initial half opening angle θ₀ ~ 0.1. The obvious light curve break observed in GRB 990123 is due to the relativistic-Newtonian transition of the beamed ejecta, and the rapidly fading optical afterglows come from synchrotron emission during the mildly relativistic and non-relativistic phases. We strongly suggest that the rapid fading of afterglows currently observed in some gamma-ray bursts is evidence for beaming in these cases.
Dynamic Analysis of Recalescence Process and Interface Growth of Eutectic Fe82B17Si1 Alloy
NASA Astrophysics Data System (ADS)
Fan, Y.; Liu, A. M.; Chen, Z.; Li, P. Z.; Zhang, C. H.
2018-03-01
By employing the glass fluxing technique in combination with cyclical superheating, the microstructural evolution of the undercooled Fe82B17Si1 alloy over the attainable undercooling range was studied. With increasing undercooling, the cooling curves were found to change from one recalescence to two recalescences, and then back to one recalescence. The two types of cooling curves were fitted by the break equation and the Johnson-Mehl-Avrami-Kolmogorov model. Based on the cooling curves at different undercoolings, the recalescence rate was calculated with the multi-logistic growth model and the Boettinger-Coriell-Trivedi model. Both the recalescence features and the interface growth kinetics of the eutectic Fe82B17Si1 alloy were explored. The fitting results were consistent with the microstructural evolution observed by TEM (SAED), SEM, and XRD. Finally, the relationship between microstructure and hardness was also investigated.
Garrido, M; Larrechi, M S; Rius, F X
2006-02-01
This study describes the combination of multivariate curve resolution-alternating least squares with a kinetic modeling strategy for obtaining the kinetic rate constants of a curing reaction of epoxy resins. The reaction between phenyl glycidyl ether and aniline is monitored by near-infrared spectroscopy under isothermal conditions for several initial molar ratios of the reagents. The data for all experiments, arranged in a column-wise augmented data matrix, are analyzed using multivariate curve resolution-alternating least squares. The concentration profiles recovered are fitted to a chemical model proposed for the reaction. The selection of the kinetic model is assisted by the information contained in the recovered concentration profiles. The nonlinear fitting provides the kinetic rate constants. The optimized rate constants are in agreement with values reported in the literature.
NASA Astrophysics Data System (ADS)
Szalai, Robert; Ehrhardt, David; Haller, George
2017-06-01
In a nonlinear oscillatory system, spectral submanifolds (SSMs) are the smoothest invariant manifolds tangent to linear modal subspaces of an equilibrium. Amplitude-frequency plots of the dynamics on SSMs provide the classic backbone curves sought in experimental nonlinear model identification. We develop here a methodology to compute analytically both the shape of SSMs and their corresponding backbone curves from a data-assimilating model fitted to experimental vibration signals. This model identification utilizes Takens's delay-embedding theorem, as well as a least-squares fit to the Taylor expansion of the sampling map associated with that embedding. The SSMs are then constructed for the sampling map using the parametrization method for invariant manifolds, which assumes that the manifold is an embedding of, rather than a graph over, a spectral subspace. Using examples of both synthetic and real experimental data, we demonstrate that this approach reproduces backbone curves with high accuracy.
NASA Astrophysics Data System (ADS)
Meng, Xiao; Wang, Lai; Hao, Zhibiao; Luo, Yi; Sun, Changzheng; Han, Yanjun; Xiong, Bing; Wang, Jian; Li, Hongtao
2016-01-01
Efficiency droop is currently one of the most actively researched problems for GaN-based light-emitting diodes (LEDs). In this work, a differential carrier lifetime measurement system is optimized to accurately determine the carrier lifetime (τ) of blue and green LEDs under different injection currents (I). By fitting the τ-I curves and the efficiency droop curves of the LEDs according to the ABC carrier rate equation model, the impact of Auger recombination and carrier leakage on efficiency droop can be characterized simultaneously. For the samples used in this work, it is found that the experimental τ-I curves cannot be described by Auger recombination alone. Instead, satisfactory fitting results are obtained by taking both carrier leakage and carrier delocalization into account, which implies that carrier leakage plays the more significant role in efficiency droop at high injection levels.
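A sketch of the ABC-model bookkeeping behind such fits, under the common assumptions R(n) = An + Bn² + Cn³, differential lifetime 1/τ = A + 2Bn + 3Cn², and I = qV·R(n); the coefficients and active volume below are illustrative, and the leakage/delocalization terms the authors add are omitted:

```python
# Sketch: generate tau(n) and I(n) from an assumed ABC model and recover the
# coefficients by nonlinear least squares. All values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

Q, VOL = 1.602e-19, 1e-12          # electron charge (C), active volume (cm^3)
A, B, C = 1e7, 1e-10, 1e-29        # "true" coefficients for synthetic data

n = np.logspace(17, 19, 40)                        # carrier density (cm^-3)
current = Q * VOL * (A * n + B * n**2 + C * n**3)  # injection current (A)
tau = 1.0 / (A + 2 * B * n + 3 * C * n**2)         # differential lifetime (s)

def tau_of_n(n, a, b, c):
    return 1.0 / (a + 2 * b * n + 3 * c * n**2)

popt, _ = curve_fit(tau_of_n, n, tau, p0=[1e7, 1e-10, 1e-29])
print(popt)    # recovered A, B, C; pair with `current` to plot a tau-I curve
```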
ROC analysis of diagnostic performance in liver scintigraphy.
Fritz, S L; Preston, D F; Gallagher, J H
1981-02-01
Studies on the accuracy of liver scintigraphy for the detection of metastases were assembled from 38 sources in the medical literature. An ROC curve was fitted to the observed values of sensitivity and specificity using an algorithm developed by Ogilvie and Creelman. This ROC curve fitted the data better than average sensitivity and specificity values in each of four subsets of the data. For the subset dealing with Tc-99m sulfur colloid scintigraphy, performed for detection of suspected metastases and containing data on 2800 scans from 17 independent series, it was not possible to reject the hypothesis that interobserver variation was entirely due to the use of different decision thresholds by the reporting clinicians. Thus the ROC curve obtained is a reasonable baseline estimate of the performance potentially achievable in today's clinical setting. Comparison of new reports with these data is possible, but is limited by the small sample sizes in most reported series.
Goodford, P J; St-Louis, J; Wootton, R
1978-01-01
1. Oxygen dissociation curves have been measured for human haemoglobin solutions with different concentrations of the allosteric effectors 2,3-diphosphoglycerate, adenosine triphosphate and inositol hexaphosphate. 2. Each effector produces a concentration dependent right shift of the oxygen dissociation curve, but a point is reached where the shift is maximal and increasing the effector concentration has no further effect. 3. Mathematical models based on the Monod, Wyman & Changeux (1965) treatment of allosteric proteins have been fitted to the data. For each compound the simple two-state model and its extension to take account of subunit inequivalence were shown to be inadequate, and a better fit was obtained by allowing the effector to lower the oxygen affinity of the deoxy conformational state as well as binding preferentially to this conformation. PMID:722582
NASA Astrophysics Data System (ADS)
Repetto, P.; Martínez-García, E. E.; Rosado, M.; Gabbasov, R.
2018-06-01
In this paper, we derive a novel circular velocity relation for a test particle in a 3D gravitational potential, applicable to every system of curvilinear coordinates suitable to be reduced to orthogonal form. As an illustration of the potential of the derived circular velocity expression, we perform a rotation curve analysis of UGC 8490 and UGC 9753 and estimate the total and dark matter mass of these two galaxies under the assumption that their respective dark matter haloes have spherical, prolate, or oblate spheroidal mass distributions. We employ stellar population synthesis models and the total H I density map to obtain the stellar and H I+He+metals rotation curves of both galaxies. The subtraction of the stellar plus gas rotation curves from the observed rotation curves of UGC 8490 and UGC 9753 yields the dark matter circular velocity curves of both galaxies. We fit the dark matter rotation curves of UGC 8490 and UGC 9753 with the newly established circular velocity formula specialized to spherical, prolate, and oblate spheroidal mass distributions, considering the Navarro-Frenk-White, Burkert, Di Cintio, Einasto, and Stadel dark matter haloes. Our principal findings are as follows: globally, the cored dark matter profiles (Burkert and Einasto) prevail over the cuspy ones (Navarro-Frenk-White and Di Cintio). Also, spherical/oblate dark matter models fit the dark matter rotation curves of both galaxies better than prolate dark matter haloes.
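For the spherical case, the fitting step reduces to nonlinear least squares on a halo circular-velocity profile; a minimal sketch with an NFW halo and synthetic data (units kpc, km/s, solar masses) follows:

```python
# Sketch: fit an NFW circular-velocity profile to a synthetic 'dark matter
# rotation curve' (observed minus stars+gas). Values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

G = 4.30091e-6   # gravitational constant in kpc (km/s)^2 / M_sun

def v_nfw(r, rho0, rs):
    x = r / rs
    m_enc = 4 * np.pi * rho0 * rs**3 * (np.log(1 + x) - x / (1 + x))
    return np.sqrt(G * m_enc / r)

r = np.linspace(0.5, 15.0, 30)                     # radius (kpc)
v_dm = v_nfw(r, 8e6, 4.0) + np.random.default_rng(2).normal(0, 2, r.size)

(rho0, rs), _ = curve_fit(v_nfw, r, v_dm, p0=[1e7, 3.0])
print(f"rho0 = {rho0:.3g} Msun/kpc^3, rs = {rs:.2f} kpc")
```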
Interpolation and Polynomial Curve Fitting
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2014-01-01
Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…
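The uniqueness claim is easy to demonstrate numerically: four points in general position pin down exactly one cubic, obtained by solving the Vandermonde system (the points below are arbitrary):

```python
# Sketch: four points determine a unique cubic; solve the Vandermonde system
# and verify the polynomial passes through every point.
import numpy as np

pts = np.array([(0, 1), (1, 3), (2, 2), (4, 5)], dtype=float)
x, y = pts[:, 0], pts[:, 1]

coeffs = np.linalg.solve(np.vander(x), y)          # degree-3 coefficients
assert np.allclose(np.polyval(coeffs, x), y)       # exact interpolation
print(coeffs)
```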
Bayesian Analysis of Longitudinal Data Using Growth Curve Models
ERIC Educational Resources Information Center
Zhang, Zhiyong; Hamagami, Fumiaki; Wang, Lijuan Lijuan; Nesselroade, John R.; Grimm, Kevin J.
2007-01-01
Bayesian methods for analyzing longitudinal data in social and behavioral research are recommended for their ability to incorporate prior information in estimating simple and complex models. We first summarize the basics of Bayesian methods before presenting an empirical example in which we fit a latent basis growth curve model to achievement data…
Item Response Theory with Estimation of the Latent Density Using Davidian Curves
ERIC Educational Resources Information Center
Woods, Carol M.; Lin, Nan
2009-01-01
Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…
Catmull-Rom Curve Fitting and Interpolation Equations
ERIC Educational Resources Information Center
Jerome, Lawrence
2010-01-01
Computer graphics and animation experts have been using the Catmull-Rom smooth curve interpolation equations since 1974, but the vector and matrix equations can be derived and simplified using basic algebra, resulting in a simple set of linear equations with constant coefficients. A variety of uses of Catmull-Rom interpolation are demonstrated,…
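The simplification the abstract refers to comes down to a cubic in t with constant coefficients; a sketch of the standard uniform Catmull-Rom segment (sample points arbitrary) is:

```python
# Sketch: evaluate a uniform Catmull-Rom segment. The curve interpolates p1
# and p2; p0 and p3 only shape the end tangents.
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

pts = [np.array(p, dtype=float) for p in [(0, 0), (1, 2), (3, 3), (4, 1)]]
curve = [catmull_rom(*pts, t) for t in np.linspace(0, 1, 5)]
print(np.round(curve, 3))    # starts at (1, 2), ends at (3, 3)
```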
Educating about Sustainability while Enhancing Calculus
ERIC Educational Resources Information Center
Pfaff, Thomas J.
2011-01-01
We give an overview of why it is important to include sustainability in mathematics classes and provide specific examples of how to do this for a calculus class. We illustrate that when students use "Excel" to fit curves to real data, fundamentally important questions about sustainability become calculus questions about those curves. (Contains 5…
Lopes, Fernando B; da Silva, Marcelo C; Marques, Ednira G; McManus, Concepta M
2012-12-01
This study was undertaken with the aim of estimating the genetic parameters and trends for asymptotic weight (A) and maturity rate (k) of Nellore cattle from northern Brazil. The data set was made available by the Brazilian Association of Zebu Breeders and was collected between the years 1997 and 2007. The Von Bertalanffy, Brody, Gompertz, and logistic nonlinear models were fitted by the Gauss-Newton method to weight-age data from 45,895 animals, collected quarterly from birth to 750 days of age. The curve parameters were analyzed using the GLM and CORR procedures. The estimation of (co)variance components and genetic parameters was carried out using the MTDFREML software. The estimated heritability coefficients were 0.21 ± 0.013 and 0.25 ± 0.014 for asymptotic weight and maturity rate, respectively. This indicates that selection for either trait should result in genetic progress in the herd. The genetic correlation between A and k was negative (-0.57 ± 0.03), indicating that selection of animals for high maturity rate should lead to lower asymptotic weight. The Von Bertalanffy function is adequate to establish the mean growth patterns and to predict the adult weight of Nellore cattle. This model is more accurate in predicting the birth weight of these animals and has a better overall fit. The prediction of adult weight using nonlinear functions can be accurate when growth curve parameters and their (co)variance components are estimated jointly. The model used in this study can be applied to the prediction of mature weight in herds where a portion of the animals are culled before they reach adult age.
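A sketch of the growth-curve step only (not the genetic analysis), assuming the weight form of the Von Bertalanffy function W(t) = A(1 - b·e^(-kt))³ and synthetic weights in place of the Nellore records; scipy's default least-squares routine stands in for the Gauss-Newton method:

```python
# Sketch: fit the Von Bertalanffy weight-age function to synthetic data and
# report asymptotic weight A and maturity rate k.
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, a, b, k):
    return a * (1.0 - b * np.exp(-k * t)) ** 3

t_days = np.linspace(0, 750, 25)
w = von_bertalanffy(t_days, 480.0, 0.6, 0.004)
w += np.random.default_rng(3).normal(0, 5, t_days.size)

(a, b, k), _ = curve_fit(von_bertalanffy, t_days, w, p0=[450, 0.5, 0.005])
print(f"A = {a:.1f} kg, k = {k:.4f} per day")
```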
Improving Bedload Transport Predictions by Incorporating Hysteresis
NASA Astrophysics Data System (ADS)
Crowe Curran, J.; Gaeuman, D.
2015-12-01
The importance of unsteady flow on sediment transport rates has long been recognized. However, the majority of sediment transport models were developed under steady flow conditions that did not account for changing bed morphologies and sediment transport during flood events. More recent research has used laboratory data and field data to quantify the influence of hysteresis on bedload transport and adjust transport models. In this research, these new methods are combined to improve further the accuracy of bedload transport rate quantification and prediction. The first approach defined reference shear stresses for hydrograph rising and falling limbs, and used these values to predict total and fractional transport rates during a hydrograph. From this research, a parameter for improving transport predictions during unsteady flows was developed. The second approach applied a maximum likelihood procedure to fit a bedload rating curve to measurements from a number of different coarse bed rivers. Parameters defining the rating curve were optimized for values that maximized the conditional probability of producing the measured bedload transport rate. Bedload sample magnitude was fit to a gamma distribution, and the probability of collecting N particles in a sampler during a given time step was described with a Poisson probability density function. Both approaches improved estimates of total transport during large flow events when compared to existing methods and transport models. Recognizing and accounting for the changes in transport parameters over time frames on the order of a flood or flood sequence influences the choice of method for parameter calculation in sediment transport calculations. Those methods that more tightly link the changing flow rate and bed mobility have the potential to improve bedload transport rates.
Radiative Heating Methodology for the Huygens Probe
NASA Technical Reports Server (NTRS)
Johnston, Christopher O.; Hollis, Brian R.; Sutton, Kenneth
2007-01-01
The radiative heating environment for the Huygens probe near peak heating conditions for Titan entry is investigated in this paper. The task of calculating the radiation-coupled flowfield, accounting for non-Boltzmann and non-optically thin radiation, is simplified to a rapid yet accurate calculation. This is achieved by using the viscous-shock layer (VSL) technique for the stagnation-line flowfield calculation and a modified smeared rotational band (SRB) model for the radiation calculation. These two methods provide a computationally efficient alternative to a Navier-Stokes flowfield and line-by-line radiation calculation. The results of the VSL technique are shown to provide an excellent comparison with the Navier-Stokes results of previous studies. It is shown that a conventional SRB approach is inadequate for the partially optically-thick conditions present in the Huygens shock-layer around the peak heating trajectory points. A simple modification is proposed to the SRB model that improves its accuracy in these partially optically-thick conditions. This modified approach, labeled herein as SRBC, is compared throughout this study with a detailed line-by-line (LBL) calculation and is shown to compare within 5% in all cases. The SRBC method requires many orders-of-magnitude less computational time than the LBL method, which makes it ideal for coupling to the flowfield. The application of a collisional-radiative (CR) model for determining the population of the CN electronic states, which govern the radiation for Huygens entry, is discussed and applied. The non-local absorption term in the CR model is formulated in terms of an escape factor, which is then curve-fit with temperature. Although the curve-fit is an approximation, it is shown to compare well with the exact escape factor calculation, which requires a computationally intensive iteration procedure.
Rosenberg, Justin F; Haulena, Martin; Phillips, Brianne E; Harms, Craig A; Lewbart, Gregory A; Lahner, Lesanna L; Papich, Mark G
2016-11-01
OBJECTIVE To determine population pharmacokinetics of enrofloxacin in purple sea stars (Pisaster ochraceus) administered an intracoelomic injection of enrofloxacin (5 mg/kg) or immersed in an enrofloxacin solution (5 mg/L) for 6 hours. ANIMALS 28 sea stars of undetermined age and sex. PROCEDURES The study had 2 phases. Twelve sea stars received an intracoelomic injection of enrofloxacin (5 mg/kg) or were immersed in an enrofloxacin solution (5 mg/L) for 6 hours during the injection and immersion phases, respectively. Two untreated sea stars were housed with the treated animals following enrofloxacin administration during both phases. Water vascular system fluid samples were collected from 4 sea stars and all controls at predetermined times during and after enrofloxacin administration. The enrofloxacin concentration in those samples was determined by high-performance liquid chromatography. For each phase, noncompartmental analysis of naïve averaged pooled samples was used to obtain initial parameter estimates; then, population pharmacokinetic analysis was performed that accounted for the sparse sampling technique used. RESULTS Injection phase data were best fit with a 2-compartment model; elimination half-life, peak concentration, area under the curve, and volume of distribution were 42.8 hours, 18.9 μg/mL, 353.8 μg•h/mL, and 0.25 L/kg, respectively. Immersion phase data were best fit with a 1-compartment model; elimination half-life, peak concentration, and area under the curve were 56 hours, 0.39 μg/mL, and 36.3 μg•h/mL, respectively. CONCLUSIONS AND CLINICAL RELEVANCE Results suggested that the described enrofloxacin administration resulted in water vascular system fluid drug concentrations expected to exceed the minimum inhibitory concentration for many bacterial pathogens.
On the mass of the compact object in the black hole binary A0620-00
NASA Technical Reports Server (NTRS)
Haswell, Carole A.; Robinson, Edward L.; Horne, Keith; Stiening, Rae F.; Abbott, Timothy M. C.
1993-01-01
Multicolor orbital light curves of the black hole candidate binary A0620-00 are presented. The light curves exhibit ellipsoidal variations and a grazing eclipse of the mass donor companion star by the accretion disk. Synthetic light curves were generated using realistic mass donor star fluxes and an isothermal blackbody disk. For mass ratios of q = M₁/M₂ = 5.0, 10.6, and 15.0, systematic searches were executed in parameter space for synthetic light curves that fit the observations. For each mass ratio, acceptable fits were found only for a small range of orbital inclinations. It is argued that the mass ratio is unlikely to exceed q = 10.6, and an upper limit of 0.8 solar masses is placed on the mass of the companion star. These constraints imply a primary mass M₁ between 4.16 ± 0.1 and 5.55 ± 0.15 solar masses. The lower limit on M₁ is more than 4-sigma above the mass of a maximally rotating neutron star, and constitutes further strong evidence in favor of a black hole primary in this system.
Comparison of three methods for wind turbine capacity factor estimation.
Ditkovich, Y; Kuperman, A
2014-01-01
Three approaches to calculating the capacity factor of fixed-speed wind turbines are reviewed and compared using a case study. The first, "quasiexact" approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. The second, "analytic" approach employs a continuous probability distribution function fitted to the wind data, as well as a continuous turbine power curve resulting from double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while being an approximation, can be solved analytically, thus providing valuable insight into the aspects affecting the capacity factor. Moreover, several other figures of merit of wind turbine performance may be derived based on the analytical approach. The third, "approximate" approach, valid for Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained by the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation.
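The "analytic" idea reduces to integrating a power curve against a wind-speed density; a sketch under Rayleigh winds, with an idealized cubic-ramp power curve and illustrative turbine numbers (not the paper's case study):

```python
# Sketch: capacity factor as the mean of the power curve under a Rayleigh
# wind-speed distribution, normalized by rated power.
import numpy as np
from scipy import integrate, stats

V_IN, V_RATED, V_OUT, P_RATED = 3.0, 12.0, 25.0, 2.0e6   # m/s and W

def power(v):
    ramp = P_RATED * (v**3 - V_IN**3) / (V_RATED**3 - V_IN**3)
    return np.where((v >= V_IN) & (v < V_RATED), ramp,
                    np.where((v >= V_RATED) & (v <= V_OUT), P_RATED, 0.0))

v_mean = 7.5                                        # site mean speed (m/s)
wind = stats.rayleigh(scale=v_mean * np.sqrt(2 / np.pi))

mean_power, _ = integrate.quad(lambda v: power(v) * wind.pdf(v), 0.0, 30.0)
print(f"Capacity factor ≈ {mean_power / P_RATED:.2f}")
```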
Understanding and Taking Control of Surgical Learning Curves.
Gofton, Wade T; Papp, Steven R; Gofton, Tyson; Beaulé, Paul E
2016-01-01
As surgical techniques continue to evolve, surgeons will have to integrate new skills into their practice. A learning curve is associated with the integration of any new procedure; therefore, it is important for surgeons who are incorporating a new technique into their practice to understand what the reported learning curve might mean for them and their patients. A learning curve should not be perceived as negative because it can indicate progress; however, surgeons need to understand how to optimize the learning curve to ensure progress with minimal patient risk. It is essential for surgeons who are implementing new procedures or skills to define potential learning curves, examine how a reported learning curve may relate to an individual surgeon's in-practice learning and performance, and suggest methods in which an individual surgeon can modify his or her specific learning curve in order to optimize surgical outcomes and patient safety. A defined personal learning contract may be a practical method for surgeons to proactively manage their individual learning curve and provide evidence of their efforts to safely improve surgical practice.
Scaling laws for light-weight optics
NASA Technical Reports Server (NTRS)
Valente, Tina M.
1990-01-01
Scaling laws for light-weight optical systems are examined. A cubic relationship between mirror diameter and weight has been suggested and used by many designers of optical systems as the best description for all light-weight mirrors. A survey of existing light-weight systems in the open literature has been made to clarify this issue. Fifty existing optical systems were surveyed, covering all varieties of light-weight mirrors, including glass and beryllium structured mirrors, contoured mirrors, and very thin solid mirrors. These mirrors were categorized, and the weight-to-diameter ratio was plotted to find a best-fit curve for each case. A curve-fitting program tested nineteen different equations and ranked the goodness of fit for each. The resulting relationship found for each light-weight mirror category helps to quantify light-weight optical systems and methods of fabrication, and provides comparisons between mirror types.
Barnard, M.; Venter, C.; Harding, A. K.
2018-01-01
We performed geometric pulsar light curve modeling using static, retarded vacuum, and offset polar cap (PC) dipole B-fields (the latter is characterized by a parameter ε), in conjunction with standard two-pole caustic (TPC) and outer gap (OG) emission geometries. The offset-PC dipole B-field mimics deviations from the static dipole (which corresponds to ε = 0). In addition to constant-emissivity geometric models, we also considered a slot gap (SG) E-field associated with the offset-PC dipole B-field and found that its inclusion leads to qualitatively different light curves. Solving the particle transport equation shows that the particle energy only becomes large enough to yield significant curvature radiation at large altitudes above the stellar surface, given this relatively low E-field. Therefore, particles do not always attain the radiation-reaction limit. Our overall optimal light curve fit is for the retarded vacuum dipole field and OG model, at an inclination angle α = 78 (+1/-1)° and observer angle ζ = 69 (+2/-1)°. For this B-field, the TPC model is statistically disfavored compared to the OG model. For the static dipole field, neither model is significantly preferred. We found that smaller values of ε are favored for the offset-PC dipole field when assuming constant emissivity, and larger ε values favored for variable emissivity, but not significantly so. When multiplying the SG E-field by a factor of 100, we found improved light curve fits, with α and ζ being closer to best fits from independent studies, as well as curvature radiation reaction at lower altitudes. PMID:29681648
Fatigue loading and R-curve behavior of a dental glass-ceramic with multiple flaw distributions.
Joshi, Gaurav V; Duan, Yuanyuan; Della Bona, Alvaro; Hill, Thomas J; St John, Kenneth; Griggs, Jason A
2013-11-01
To determine the effects of surface finish and mechanical loading on the rising toughness curve (R-curve) behavior of a fluorapatite glass-ceramic (IPS e.max ZirPress) and to determine a statistical model for fitting fatigue lifetime data with multiple flaw distributions. Rectangular beam specimens were fabricated by pressing. Two groups of specimens (n=30) with polished (15 μm) or air abraded surface were tested under rapid monotonic loading in oil. Additional polished specimens were subjected to cyclic loading at 2 Hz (n=44) and 10 Hz (n=36). All fatigue tests were performed using a fully articulated four-point flexure fixture in 37°C water. Fractography was used to determine the critical flaw size and estimate fracture toughness. To prove the presence of R-curve behavior, non-linear regression was used. Forward stepwise regression was performed to determine the effects on fracture toughness of different variables, such as initial flaw type, critical flaw size, critical flaw eccentricity, cycling frequency, peak load, and number of cycles. Fatigue lifetime data were fit to an exclusive flaw model. There was an increase in fracture toughness values with increasing critical flaw size for both loading methods (rapid monotonic loading and fatigue). The values for the fracture toughness ranged from 0.75 to 1.1 MPa·m^1/2, reaching a plateau at different critical flaw sizes based on loading method. Cyclic loading had a significant effect on the R-curve behavior. The fatigue lifetime distribution was dependent on the flaw distribution, and it fit well to an exclusive flaw model. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Fatigue loading and R-curve behavior of a dental glass-ceramic with multiple flaw distributions
Joshi, Gaurav V.; Duan, Yuanyuan; Bona, Alvaro Della; Hill, Thomas J.; John, Kenneth St.; Griggs, Jason A.
2013-01-01
Objectives To determine the effects of surface finish and mechanical loading on the rising toughness curve (R-curve) behavior of a fluorapatite glass-ceramic (IPS e.max ZirPress) and to determine a statistical model for fitting fatigue lifetime data with multiple flaw distributions. Materials and Methods Rectangular beam specimens were fabricated by pressing. Two groups of specimens (n=30) with polished (15 μm) or air abraded surface were tested under rapid monotonic loading in oil. Additional polished specimens were subjected to cyclic loading at 2 Hz (n=44) and 10 Hz (n=36). All fatigue tests were performed using a fully articulated four-point flexure fixture in 37°C water. Fractography was used to determine the critical flaw size and estimate fracture toughness. To prove the presence of R-curve behavior, non-linear regression was used. Forward stepwise regression was performed to determine the effects on fracture toughness of different variables, such as initial flaw type, critical flaw size, critical flaw eccentricity, cycling frequency, peak load, and number of cycles. Fatigue lifetime data were fit to an exclusive flaw model. Results There was an increase in fracture toughness values with increasing critical flaw size for both loading methods (rapid monotonic loading and fatigue). The values for the fracture toughness ranged from 0.75 to 1.1 MPa·m^1/2, reaching a plateau at different critical flaw sizes based on loading method. Significance Cyclic loading had a significant effect on the R-curve behavior. The fatigue lifetime distribution was dependent on the flaw distribution, and it fit well to an exclusive flaw model. PMID:24034441
NASA Technical Reports Server (NTRS)
Barnard, M.; Venter, C.; Harding, A. K.
2016-01-01
We performed geometric pulsar light curve modeling using static, retarded vacuum, and offset polar cap (PC) dipole B-fields (the latter is characterized by a parameter epsilon), in conjunction with standard two-pole caustic (TPC) and outer gap (OG) emission geometries. The offset-PC dipole B-field mimics deviations from the static dipole (which corresponds to epsilon equals 0). In addition to constant-emissivity geometric models, we also considered a slot gap (SG) E-field associated with the offset-PC dipole B-field and found that its inclusion leads to qualitatively different light curves. Solving the particle transport equation shows that the particle energy only becomes large enough to yield significant curvature radiation at large altitudes above the stellar surface, given this relatively low E-field. Therefore, particles do not always attain the radiation-reaction limit. Our overall optimal light curve fit is for the retarded vacuum dipole field and OG model, at an inclination angle alpha equals 78 plus or minus 1 degree and observer angle zeta equals 69 plus 2 degrees or minus 1 degree. For this B-field, the TPC model is statistically disfavored compared to the OG model. For the static dipole field, neither model is significantly preferred. We found that smaller values of epsilon are favored for the offset-PC dipole field when assuming constant emissivity, and larger epsilon values favored for variable emissivity, but not significantly so. When multiplying the SG E-field by a factor of 100, we found improved light curve fits, with alpha and zeta being closer to best fits from independent studies, as well as curvature radiation reaction at lower altitudes.
Time series modeling and forecasting using memetic algorithms for regime-switching models.
Bergmeir, Christoph; Triguero, Isaac; Molina, Daniel; Aznarte, José Luis; Benitez, José Manuel
2012-11-01
In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR is a combination of initial parameter estimation by a grid search procedure with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive to models commonly used in the field.
Dynamic Testing of Laterally Confined Concrete
1990-09-01
[Fragmentary report excerpt; only figure captions and scattered body text survive: "Example of Regression Fit by Equation (6) for Intermediate Confining Pressure (Dashed Curve)" and "Example of Regression Fit by Equation (6) for Highest Pressure Group (Dashed Curve)". The surviving text notes that the highest confining-pressure group, loaded at a moderate striker-bar impact speed of 420 in/sec (10.7 m/s), reached a peak stress of 124 MPa (18 ksi); one specimen end survived the highest-speed impact in the lowest confining-pressure group, with curves given in Appendix Figure A-15.]
Asaad, Celia O; Caraos, Gloriamaris L; Robles, Gerardo Jose M; Asa, Anie Day D C; Cobar, Maria Lucia C; Asaad, Al-Ahmadgaid
2016-01-01
The utility of a biological dosimeter based on the analysis of dicentrics is invaluable in the event of a radiological emergency, wherein the estimated absorbed dose of an exposed individual is crucial to the proper medical management of patients. The technique is also used for routine monitoring of occupationally exposed workers to determine radiation exposure. An in vitro irradiation study of human peripheral blood lymphocytes was conducted to establish a dose-response curve for radiation-induced dicentric aberrations. Blood samples were collected from volunteer donors and, together with optically stimulated luminescence (OSL) dosimeters, were irradiated at 0, 0.1, 0.25, 0.5, 0.75, 1, 2, 4, and 6 Gy using a cobalt-60 radiotherapy unit. Blood samples were cultured for 48 h, and the metaphase chromosomes were prepared following the procedure of the International Atomic Energy Agency's Emergency Preparedness and Response - Biodosimetry 2011 manual. At least 100 metaphases were scored for dicentric aberrations at each dose point. The data were analyzed using the R language. The results indicated that the distribution of dicentric cells followed a Poisson distribution, and the dose-response curve was established using the estimated model Y_dic = 0.0003 (±0.0003) + 0.0336 (±0.0115) × D + 0.0236 (±0.0054) × D². In this study, the reliability of the dose-response curve in estimating the absorbed dose was also validated at 2 and 4 Gy using OSL dosimeters. The data were fitted to the constructed curve. The results of the validation study showed that the estimates of the absorbed exposure doses were close to the true exposure doses.
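The quoted model is the standard linear-quadratic dose response Y = c + αD + βD²; a sketch of the fitting step with synthetic yields (maximum-likelihood Poisson fitting is the usual practice, plain least squares shown for brevity):

```python
# Sketch: fit Y = c + alpha*D + beta*D^2 to synthetic dicentric yields at the
# study's dose points.
import numpy as np
from scipy.optimize import curve_fit

dose = np.array([0, 0.1, 0.25, 0.5, 0.75, 1, 2, 4, 6])      # Gy
yields = 0.0003 + 0.034 * dose + 0.024 * dose**2            # dicentrics/cell
yields *= 1 + 0.05 * np.random.default_rng(4).normal(size=dose.size)

def lq(d, c, alpha, beta):
    return c + alpha * d + beta * d**2

popt, _ = curve_fit(lq, dose, yields, p0=[0.001, 0.03, 0.02])
print("c, alpha, beta =", popt.round(4))
```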
De Gori, Marco; Adamczewski, Benjamin; Jenny, Jean-Yves
2017-06-01
The purpose of the study was to use the cumulative summation (CUSUM) test to assess the learning curve during the introduction of a new surgical technique (patient-specific instrumentation) for total knee arthroplasty (TKA) in an academic department. The first 50 TKAs operated on at an academic department using patient-specific templates (PSTs) were scheduled to enter the study. All patients had a preoperative computed tomography scan to plan bone resections. The PSTs were positioned intraoperatively according to the best-fit technique, and their three-dimensional orientation was recorded by a navigation system. The position of the femoral and tibial PST was compared to the planned position for four items per component: coronal and sagittal orientation, and medial and lateral height of resection. Items were summarized to obtain knee, femur, and tibia PST scores, respectively. These scores were plotted in chronological order and included in a CUSUM analysis. The tested hypothesis was that the PST process for TKA was immediately under control after its introduction. The CUSUM test showed that positioning of the PST differed significantly from the target throughout the study. There was a significant difference between all scores and the maximal score. No case obtained the maximal score of eight points. The study was interrupted after 20 cases because of this negative evaluation. The CUSUM test is effective in monitoring the learning curve when introducing a new surgical procedure. Introducing PST for TKA in an academic department may be associated with a long-lasting learning curve. The study was registered on the ClinicalTrials.gov website (Identifier NCT02429245). Copyright © 2017 Elsevier B.V. All rights reserved.
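A minimal sketch of the CUSUM idea applied to a learning curve: the cumulative sum of (score - target) drifts downward while performance falls short of the target and flattens once it is reached. The scoring scale, target, and threshold below are illustrative, not the study's scheme:

```python
# Sketch: CUSUM monitoring of consecutive case scores against a target.
import numpy as np

rng = np.random.default_rng(5)
target = 7.0                                    # acceptable score (of 8)
scores = np.clip(4 + 0.15 * np.arange(20) + rng.normal(0, 1, 20), 0, 8)

cusum = np.cumsum(scores - target)
alarms = np.flatnonzero(cusum < -10.0)          # decision threshold h = 10
print(cusum.round(1))
print("out of control from case:", alarms[0] + 1 if alarms.size else "never")
```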
Lee, Dae-Woo; Jung, Ji-Eun; Yang, Yeon-Mi; Kim, Jae-Gon; Yi, Ho-Keun; Jeon, Jae-Gyu
2016-10-01
The aim of this study was to determine the pattern of the antibacterial activity of chlorhexidine digluconate (CHX) against mature Streptococcus mutans biofilms. Streptococcus mutans biofilms were formed on saliva-coated hydroxyapatite discs and then treated with 0-20% CHX once, three times, or five times (1 min per treatment) during the period of mature biofilm formation (beyond 46 h). After the treatments, the colony-forming unit (CFU) counts of the treated biofilms were determined. The pH values of the spent culture medium were also determined to investigate the change in pH resulting from the antibacterial activity of CHX. The relationships between CHX concentration and CFU counts, and between CHX concentration and culture-medium pH, relative to the number of treatments performed, were evaluated using a sigmoidal curve-fitting procedure. The changes in CFU counts and culture-medium pH followed sigmoidal curves and were dependent on the concentration of CHX (R² = 0.99). The sigmoidal curves were left-shifted with increasing number of treatments. Furthermore, the culture-medium pH of the treated biofilms increased as their CFU counts decreased. The lowest CHX concentration that increased culture-medium pH above the critical pH also decreased as the number of treatments increased. These results may provide fundamental information for selecting the appropriate CHX concentrations to treat S. mutans biofilms. © 2016 Eur J Oral Sci.
Reconstruction of quadratic curves in 3D using two or more perspective views: simulation studies
NASA Astrophysics Data System (ADS)
Kumar, Sanjeev; Sukavanam, N.; Balasubramanian, R.
2006-01-01
The shapes of many natural and man-made objects have planar and curvilinear surfaces. The images of such curves usually do not have sufficient distinctive features to apply conventional feature-based reconstruction algorithms. In this paper, we describe a method for reconstructing a quadratic curve in 3-D space as the intersection of two cones containing the respective projected curve images. The correspondence between this pair of projections of the curve is assumed to be established in this work. Using least-squares curve fitting, the parameters of each curve in 2-D space are found, and from these the 3-D quadratic curve is reconstructed. Relevant mathematical formulations and analytical solutions for obtaining the equation of the reconstructed curve are given. The results of the described reconstruction methodology are evaluated through simulation studies. This reconstruction methodology is applicable to LBW decisions in cricket, missile trajectory analysis, robotic vision, path planning, etc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashiwa, B. A.
2010-12-01
Abstract: A thermodynamically consistent and fully general equation-of-state (EOS) for multifield applications is described. EOS functions are derived from a Helmholtz free energy expressed as the sum of thermal (fluctuational) and collisional (condensed-phase) contributions; thus the free energy is of the Mie-Grüneisen [1] form. The phase-coexistence region is defined using a parameterized saturation curve by extending the form introduced by Guggenheim [2], which scales the curve relative to conditions at the critical point. We use the zero-temperature condensed-phase contribution developed by Barnes [3], which extends the Thomas-Fermi-Dirac equation to zero pressure. Thus, the functional form of the EOS could be called MGGB (for Mie-Grüneisen-Guggenheim-Barnes). Substance-specific parameters are obtained by fitting the low-density energy to data from the Sesame [4] library; fitting the zero-temperature pressure to the Sesame cold curve; and fitting the saturation curve and latent heat to laboratory data [5], if available. When suitable coexistence data, or Sesame data, are not available, we apply the Principle of Corresponding States [2]. Thus MGGB can be thought of as a numerical recipe for rendering the tabular Sesame EOS data in an analytic form that includes a proper coexistence region, and which permits the accurate calculation of derivatives associated with compressibility, expansivity, Joule coefficient, and specific heat, all of which are required for multifield applications.
Predicting long-term graft survival in adult kidney transplant recipients.
Pinsky, Brett W; Lentine, Krista L; Ercole, Patrick R; Salvalaggio, Paolo R; Burroughs, Thomas E; Schnitzler, Mark A
2012-07-01
The ability to accurately predict a population's long-term survival has important implications for quantifying the benefits of transplantation. To identify a model that can accurately predict a kidney transplant population's long-term graft survival, we retrospectively studied United Network for Organ Sharing data from 13,111 kidney-only transplants completed in 1988-1989. Nineteen-year death-censored graft survival (DCGS) projections were calculated and compared with the population's actual graft survival. The projection curves were created using a two-part estimation model that (1) fits a Kaplan-Meier survival curve immediately after transplant (Part A) and (2) uses truncated observational data to model a survival function for long-term projection (Part B). Projection curves were examined using varying amounts of time to fit both parts of the model. The accuracy of the projection curve was determined by examining whether predicted survival fell within the 95% confidence interval for the 19-year Kaplan-Meier survival, and by the sample size needed to detect the difference between projected and observed survival in a clinical trial. The 19-year DCGS was 40.7% (39.8-41.6%). Excellent predictability (41.3%) can be achieved when Part A is fit for three years and Part B is projected using two additional years of data. Although using less than five total years of data tended to overestimate the population's long-term survival, accurate prediction of long-term DCGS is possible with attention to the quantity of data used in the projection method.
Büchi, Dominik L; Ebler, Sabine; Hämmerle, Christoph H F; Sailer, Irena
2014-01-01
To test whether or not different types of CAD/CAM systems, processing zirconia in the densely sintered and in the pre-sintered stage, lead to differences in the accuracy of 4-unit anterior fixed dental prosthesis (FDP) frameworks, and to evaluate their efficiency. 40 curved anterior 4-unit FDP frameworks were manufactured with four different CAD/CAM systems: DCS Precident (DCS) (control group), Cercon (DeguDent) (test group 1), Cerec InLab (Sirona) (test group 2), KaVo Everest (KaVo) (test group 3). The DCS system was chosen as the control group because its zirconia frameworks are processed in the densely sintered stage, so there is no shrinkage of the zirconia during the manufacturing process. The initial fit of the frameworks was checked and adjusted to a subjectively similar level of accuracy by one dental technician, and the time taken for this was recorded. After cementation, the frameworks were embedded in resin, and the abutment teeth were cut in mesiodistal and orobuccal directions in four specimens. The thickness of the cement gap was measured at 50× (internal adaptation) and 200× (marginal adaptation) magnification. The measurement of accuracy was performed at four sites. Site 1: marginal adaptation, the marginal opening at the point of closest perpendicular approximation between the die and framework margin. Site 2: internal adaptation at the chamfer. Site 3: internal adaptation at the axial wall. Site 4: internal adaptation in the occlusal area. The data were analyzed descriptively using the ANOVA and Bonferroni/Dunn tests. The mean marginal adaptation (site 1) of the control group was 107 ± 26 μm; test group 1, 140 ± 26 μm; test group 2, 104 ± 40 μm; and test group 3, 95 ± 31 μm. Test group 1 showed a tendency to exhibit larger marginal gaps than the other groups; however, this difference was only significant when test groups 1 and 3 were compared (P = .0022; Bonferroni/Dunn test). Significantly more time was needed for the adjustment of the frameworks of test group 1 compared to the other test groups and the control group (21.1 min vs 3.8 min) (P < .0001; Bonferroni/Dunn test). For the adjustment of the frameworks of test groups 2 and 3, the same time was needed as for the frameworks of the control group. No differences in framework accuracy resulting from the different CAM and CAD/CAM procedures were found, however, only after adjustment of the fit by an experienced dental technician. Hence, the influence of a manual correction of the fit was crucial, and the effort differed for the tested systems. The CAM system led to lower initial accuracy of the frameworks than the CAD/CAM systems, which may be crucial for the dental laboratory. The stage of the zirconia materials used for the different CAD/CAM procedures, i.e., pre-sintered or densely sintered, exhibited no influence.
Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.
Cobbs, Gary
2012-08-16
Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions that reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single- and double-stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of the kinetic models are the same for solutions that are identical except possibly for different initial target concentrations. qPCR curves from such solutions are therefore analyzed by simultaneous non-linear curve fitting, with the same rate-constant values applying to all curves and each curve having a unique value for the initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give a better fit to observed qPCR data than other kinetic models in the literature. They also give better estimates of initial target concentration. Model 1 was found to be slightly more robust than Model 2, giving better estimates of initial target concentration when parameters were estimated from qPCR curves with very different initial target concentrations. Both models may be used to estimate the initial absolute concentration of target sequence when a standard curve is not available. It is argued that the kinetic approach to modeling and interpreting quantitative PCR data has the potential to give more precise estimates of the true initial target concentrations than other methods currently used for analysis of qPCR data. The two models presented here give a unified model of the qPCR process in that they explain the shape of the qPCR curve for a wide variety of initial target concentrations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebato, Yuki; Miyata, Tatsuhiko, E-mail: miyata.tatsuhiko.mf@ehime-u.ac.jp
Ornstein-Zernike (OZ) integral equation theory is known to overestimate the excess internal energy, U^ex, the pressure through the virial route, P_v, and the excess chemical potential, μ^ex, for one-component Lennard-Jones (LJ) fluids under the hypernetted chain (HNC) and Kovalenko-Hirata (KH) approximations. As one of the bridge correction methods to improve the precision of these thermodynamic quantities, it was shown in our previous paper that the method of apparently adjusting the σ parameter in the LJ potential is effective [T. Miyata and Y. Ebato, J. Molec. Liquids 217, 75 (2016)]. In our previous paper, we evaluated the actual variation in the σ parameter by using a fitting procedure to molecular dynamics (MD) results. In this article, we propose an alternative method to determine the actual variation in the σ parameter. The proposed method utilizes the condition that the virial and compressibility pressures coincide with each other. This method can correct OZ theory without a fitting procedure to MD results, and retains the form of the HNC and/or KH closure. We calculate the radial distribution function, pressure, excess internal energy, and excess chemical potential for one-component LJ fluids to check the performance of our proposed bridge function. We discuss the precision of these thermodynamic quantities by comparing with MD results. In addition, we also calculate a corrected gas-liquid coexistence curve based on a corrected KH-type closure and compare it with MD results.
Development and Assessment of a New Empirical Model for Predicting Full Creep Curves
Gray, Veronica; Whittaker, Mark
2015-01-01
This paper details the development and assessment of a new empirical creep model that belongs to the limited ranks of models reproducing full creep curves. The important features of the model are that it is fully standardised and universally applicable. By standardising, the user no longer chooses functions but rather fits one set of constants only. Testing it on seven contrasting materials and reproducing 181 creep curves, we demonstrate its universality. Curves from the new model and from Theta Projection are compared to one another using an assessment tool developed within this paper. PMID:28793458
Training, Simulation, the Learning Curve, and How to Reduce Complications in Urology.
Brunckhorst, Oliver; Volpe, Alessandro; van der Poel, Henk; Mottrie, Alexander; Ahmed, Kamran
2016-04-01
Urology is at the forefront of minimally invasive surgery to a great extent. These procedures produce additional learning challenges and possess a steep initial learning curve. Training and assessment methods in surgical specialties such as urology are known to lack clear structure and often rely on differing operative flow experienced by individuals and institutions. This article aims to assess current urology training modalities, to identify the role of simulation within urology, to define and identify the learning curves for various urologic procedures, and to discuss ways to decrease complications in the context of training. A narrative review of the literature was conducted through December 2015 using the PubMed/Medline, Embase, and Cochrane Library databases. Evidence of the validity of training methods in urology includes observation of a procedure, mentorship and fellowship, e-learning, and simulation-based training. Learning curves for various urologic procedures have been recommended based on the available literature. The importance of structured training pathways is highlighted, with integration of modular training to ensure patient safety. Valid training pathways are available in urology. The aim in urology training should be to combine all of the available evidence to produce procedure-specific curricula that utilise the vast array of training methods available to ensure that we continue to improve patient outcomes and reduce complications. The current evidence for different training methods available in urology, including simulation-based training, was reviewed, and the learning curves for various urologic procedures were critically analysed. Based on the evidence, future pathways for urology curricula have been suggested to ensure that patient safety is improved. Copyright © 2016 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Parihar, Vijay; Yadav, Y R; Kher, Yatin; Ratre, Shailendra; Sethi, Ashish; Sharma, Dhananjaya
2016-01-01
A steep learning curve is encountered initially in pure endoscopic procedures. The video telescopic operating monitor (VITOM), an advance in rigid-lens telescope systems, provides an alternative method for learning the basics of neuroendoscopy with the help of the familiar principles of microneurosurgery. The aim was to evaluate the clinical utility of VITOM as a learning tool for neuroendoscopy. The VITOM was used in 39 cranial and spinal procedures, and its utility as a tool for minimally invasive neurosurgery and neuroendoscopy during the initial learning curve was studied. The VITOM was used in 25 cranial and 14 spinal procedures. Image quality was comparable to that of the endoscope and microscope. Surgeons' comfort improved with VITOM. Frequent repositioning of the scope holder and the lack of stereopsis were initial limiting factors that were compensated for with repeated procedures. The VITOM was found useful in reducing the initial learning curve of neuroendoscopy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkin, Thomas J; Larson, Andrew; Ruth, Mark F
In light of the changing electricity resource mixes across the United States, an important question in electricity modeling is how additions and retirements of generation, including additions in variable renewable energy (VRE) generation, could impact markets by changing hourly wholesale energy prices. Instead of using resource-intensive production cost models (PCMs) or building and using simple generator supply curves, this analysis uses a 'top-down' approach based on regression analysis of hourly historical energy and load data to estimate the impact of supply changes on wholesale electricity prices, provided the changes are not so substantial that they fundamentally alter the market and the dispatch-order-driven behavior of non-retiring units. The rolling supply curve (RSC) method used in this report estimates the shape of the supply curve that fits historical hourly price and load data for given time intervals, such as two weeks, and then repeats this on a rolling basis through the year. These supply curves can then be modified on an hourly basis to reflect the impact of generation retirements or additions, including VRE, and then reapplied to the same load data to estimate the change in hourly electricity price. The choice of duration over which these RSCs are estimated has a significant impact on goodness of fit. For example, in PJM in 2015, moving from fitting one curve per year to 26 rolling two-week supply curves improves the standard error of the regression from $16/MWh to $6/MWh and the R-squared of the estimate from 0.48 to 0.76. We illustrate the potential use and value of the RSC method by estimating wholesale price effects under various generator retirement and addition scenarios, and we discuss potential limits of the technique, some of which are inherent. The ability to do this type of analysis is important to a wide range of market participants and other stakeholders, and it may have a role in complementing use of or providing calibrating insights to PCMs.
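The core of the RSC method, fitting one supply curve per rolling window and repeating through the year, can be sketched in a few lines. The exponential price-load form, non-overlapping two-week windows, and synthetic data below are illustrative assumptions, not the report's actual specification.

```python
import numpy as np

def rolling_supply_curves(load, price, hours_per_window=14 * 24):
    """Fit one exponential supply curve, price ~ a*exp(b*load), per window
    of hourly data by linear regression on log(price)."""
    fits = []
    for start in range(0, len(load) - hours_per_window + 1, hours_per_window):
        sl = slice(start, start + hours_per_window)
        b, log_a = np.polyfit(load[sl], np.log(price[sl]), 1)
        fits.append((np.exp(log_a), b))
    return fits

# Toy year of hourly load (GW) and price ($/MWh); synthetic, not PJM data
rng = np.random.default_rng(0)
hours = 26 * 14 * 24
load = 80 + 20 * np.sin(np.arange(hours) * 2 * np.pi / 24) + rng.normal(0, 2, hours)
price = 10 * np.exp(0.02 * load) * rng.lognormal(0, 0.1, hours)

curves = rolling_supply_curves(load, price)
print(len(curves), "two-week supply curves fitted")
```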
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiang; Sokolov, Mikhail A; Nanstad, Randy K
Material fracture toughness in the fully ductile region can be described by a J-integral vs. crack growth resistance curve (J-R curve). As a conventional J-R curve measurement method, the elastic unloading compliance (EUC) method becomes impractical for elevated-temperature testing due to relaxation of the material and a friction-induced back-up shape of the J-R curve. One alternative for J-R curve testing applies the Direct Current Potential Drop (DCPD) technique for measuring crack extension. However, besides crack growth, potential drop can also be influenced by plastic deformation, crack tip blunting, etc., and uncertainties exist in the current DCPD methodology, especially in differentiating potential drop due to stable crack growth from that due to material deformation. Thus, using DCPD for J-R curve determination remains a challenging task. In this study, a new adjustment procedure for applying DCPD to derive the J-R curve has been developed for conventional fracture toughness specimens, including compact tension, three-point bend, and disk-shaped compact specimens. Data analysis has been performed on Oak Ridge National Laboratory (ORNL) and American Society for Testing and Materials (ASTM) interlaboratory results covering different specimen thicknesses, test temperatures, and materials, to evaluate the applicability of the new DCPD adjustment procedure for J-R curve characterization. After applying the newly developed procedure, direct comparison between the DCPD method and the normalization method on the same specimens indicated close agreement for the overall J-R curves, as well as for the provisional values of fracture toughness near the onset of ductile crack extension, Jq, and the tearing modulus.
Dai, Cong; Jiang, Min; Sun, Ming-Jun; Cao, Qin
2018-05-01
Fecal immunochemical test (FIT) is a promising marker for assessment of inflammatory bowel disease activity. However, the utility of FIT for predicting mucosal healing (MH) of ulcerative colitis (UC) patients has yet to be clearly demonstrated. The objective of our study was to perform a diagnostic test accuracy meta-analysis evaluating the accuracy of FIT in predicting MH of UC patients. We systematically searched the databases from inception to November 2017 for studies that evaluated MH in UC. The methodological quality of each study was assessed according to the Quality Assessment of Diagnostic Accuracy Studies checklist. The extracted data were pooled using a summary receiver operating characteristic curve model. A random-effects model was used to summarize the diagnostic odds ratio, sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio. Six studies comprising 625 UC patients were included in the meta-analysis. The pooled sensitivity and specificity values for predicting MH in UC were 0.77 (95% confidence interval [CI], 0.72-0.81) and 0.81 (95% CI, 0.76-0.85), respectively. The FIT level had a high rule-in value (positive likelihood ratio, 3.79; 95% CI, 2.85-5.03) and a moderate rule-out value (negative likelihood ratio, 0.26; 95% CI, 0.16-0.43) for predicting MH in UC. The results of the receiver operating characteristic curve analysis (area under the curve, 0.88; standard error of the mean, 0.02) and diagnostic odds ratio (18.08; 95% CI, 9.57-34.13) also revealed improved discrimination for identifying MH in UC with FIT concentration. Our meta-analysis has found that FIT is a simple, reliable non-invasive marker for predicting MH in UC patients. © 2018 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
Gaudion, Sarah L; Doma, Kenji; Sinclair, Wade; Banyard, Harry G; Woods, Carl T
2017-07-01
Gaudion, SL, Doma, K, Sinclair, W, Banyard, HG, and Woods, CT. Identifying the physical fitness, anthropometric and athletic movement qualities discriminant of developmental level in elite junior Australian football: implications for the development of talent. J Strength Cond Res 31(7): 1830-1839, 2017-This study aimed to identify the physical fitness, anthropometric and athletic movement qualities discriminant of developmental level in elite junior Australian football (AF). From a total of 77 players, 2 groups were defined according to their developmental level; under 16 (U16) (n = 40, 15.6 to 15.9 years), and U18 (n = 37, 17.1 to 17.9 years). Players performed a test battery consisting of 7 physical fitness assessments, 2 anthropometric measurements, and a fundamental athletic movement assessment. A multivariate analysis of variance tested the main effect of developmental level (2 levels: U16 and U18) on the assessment criterions, whilst binary logistic regression models and receiver operating characteristic (ROC) curves were built to identify the qualities most discriminant of developmental level. A significant effect of developmental level was evident on 9 of the assessments (d = 0.27-0.88; p ≤ 0.05). However, it was a combination of body mass, dynamic vertical jump height (nondominant leg), repeat sprint time, and the score on the 20-m multistage fitness test that provided the greatest association with developmental level (Akaike's information criterion = 80.84). The ROC curve was maximized with a combined score of 180.7, successfully discriminating 89 and 60% of the U18 and U16 players, respectively (area under the curve = 79.3%). These results indicate that there are distinctive physical fitness and anthropometric qualities discriminant of developmental level within the junior AF talent pathway. Coaches should consider these differences when designing training interventions at the U16 level to assist with the development of prospective U18 AF players.
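The discriminant analysis described here, a binary logistic regression followed by an ROC curve and a maximizing cutoff, can be sketched as follows. The predictors, values, and labels are invented stand-ins for the study's four retained qualities, not the actual player data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

# Toy stand-in predictors for the four retained qualities; labels are
# U18 (1) vs U16 (0). All values are invented for illustration.
rng = np.random.default_rng(8)
n = 77
X = np.column_stack([
    rng.normal(75, 8, n),    # body mass, kg
    rng.normal(55, 6, n),    # dynamic vertical jump height, cm (nondominant leg)
    rng.normal(25, 1.5, n),  # repeat-sprint time, s
    rng.normal(12, 1.5, n),  # 20-m multistage fitness test score
])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(X[:, 3] - 12)))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(X)[:, 1]
fpr, tpr, thresholds = roc_curve(y, scores)
best = np.argmax(tpr - fpr)  # Youden's J picks the most discriminating cutoff
print(f"AUC = {roc_auc_score(y, scores):.2f}, cutoff score = {thresholds[best]:.2f}")
```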
TH-EF-207A-04: A Dynamic Contrast Enhanced Cone Beam CT Technique for Evaluation of Renal Functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Z; Shi, J; Yang, Y
Purpose: To develop a simple but robust method for the early detection and evaluation of renal functions using a dynamic contrast enhanced cone beam CT technique. Methods: Experiments were performed on an integrated imaging and radiation research platform developed by our lab. Animals (n=3) were anesthetized with a 20 μL Ketamine/Xylazine cocktail, and then received a 200 μL injection of the iodinated contrast agent Iopamidol via tail vein. Cone beam CT was acquired following contrast injection once per minute and up to 25 minutes. The cone beam CT was reconstructed with a dimension of 300×300×800 voxels at 130×130×130 μm voxel resolution. The middle kidney slices in the transverse and coronal planes were selected for image analysis. A double exponential function was used to fit the contrast enhanced signal intensity versus the time after contrast injection. Both pixel-based and region of interest (ROI)-based curve fitting were performed. Four parameters obtained from the curve fitting, namely the amplitude and flow constant for both the contrast wash-in and wash-out phases, were investigated for further analysis. Results: Robust curve fitting was demonstrated for both pixel-based (with R² > 0.8 for >85% of pixels within the kidney contour) and ROI-based (R² > 0.9 for all regions) analysis. Three different functional regions: renal pelvis, medulla and cortex, were clearly differentiated in the functional parameter map in the pixel-based analysis. ROI-based analysis showed the half-life T1/2 for the contrast wash-in and wash-out phases were 0.98±0.15 and 17.04±7.16, 0.63±0.07 and 17.88±4.51, and 1.48±0.40 and 10.79±3.88 minutes for the renal pelvis, medulla and cortex, respectively. Conclusion: A robust method based on dynamic contrast enhanced cone beam CT and double exponential curve fitting has been developed to analyze the renal functions for different functional regions. Future study will be performed to investigate the sensitivity of this technique in the detection of radiation induced kidney dysfunction.
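A pixel- or ROI-level fit of this kind can be sketched with scipy.optimize.curve_fit. The double-exponential parameterization below (independent wash-in and wash-out terms) is an assumed form, since the abstract does not give the exact equation; half-lives are recovered as ln 2 divided by the fitted rate constants.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed double-exponential enhancement model: a fast wash-in term minus
# a slow wash-out term (not necessarily the study's exact parameterization).
def double_exp(t, a_in, k_in, a_out, k_out):
    return a_in * (1 - np.exp(-k_in * t)) - a_out * (1 - np.exp(-k_out * t))

t = np.arange(1, 26.0)                        # one frame per minute, 25 min
true = double_exp(t, 900.0, 0.7, 600.0, 0.04)
signal = true + np.random.default_rng(1).normal(0, 10, t.size)  # toy ROI trace

popt, _ = curve_fit(double_exp, t, signal, p0=[800, 0.5, 500, 0.05], maxfev=10000)
t_half_in, t_half_out = np.log(2) / popt[1], np.log(2) / popt[3]
print(f"wash-in T1/2 = {t_half_in:.2f} min, wash-out T1/2 = {t_half_out:.2f} min")
```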
ERIC Educational Resources Information Center
Meijer, Rob R.; van Krimpen-Stoop, Edith M. L. A.
In this study a cumulative-sum (CUSUM) procedure from the theory of Statistical Process Control was modified and applied in the context of person-fit analysis in a computerized adaptive testing (CAT) environment. Six person-fit statistics were proposed using the CUSUM procedure, and three of them could be used to investigate the CAT in online test…
BOX-COUNTING DIMENSION COMPUTED BY α-DENSE CURVES
NASA Astrophysics Data System (ADS)
García, G.; Mora, G.; Redtwitz, D. A.
We introduce a method to reduce to the real case the calculation of the box-counting dimension of subsets of the unit cube I^n, n > 1. The procedure is based on the existence of special types of α-dense curves (a generalization of the space-filling curves) in I^n, called δ-uniform curves.
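For intuition, the box-counting dimension itself reduces to a log-log slope. Below is a minimal sketch for a point set in the unit square (a direct grid count, not the authors' α-dense-curve reduction).

```python
import numpy as np

def box_counting_dimension(points, eps_list):
    """Estimate the box-counting dimension of a point set in the unit square
    as the slope of log N(eps) versus log(1/eps)."""
    counts = []
    for eps in eps_list:
        # Assign each point to a box of side eps and count occupied boxes
        boxes = set(map(tuple, np.floor(points / eps).astype(int)))
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(eps_list)), np.log(counts), 1)
    return slope

# Sanity check on a smooth curve in I^2, whose dimension should be ~1
t = np.linspace(0, 1, 20000)
curve = np.column_stack([t, 0.5 + 0.3 * np.sin(6 * np.pi * t)])
print(box_counting_dimension(curve, [0.1, 0.05, 0.02, 0.01, 0.005]))
```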
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-16
... Under Secretary of Defense for Acquisition, Technology, & Logistics (USD(AT&L)), dated November 3, 2010... cost, share lines, and ceiling price. This regulation is not a "one-size-fits-all" mandate. However.../optimistic weighted average and ensure that their cost curves do not mirror cost-plus-fixed-fee cost curves...
Comparative Evaluation of Two Serial Gene Expression Experiments
Stuart G. Baker, 2014. This program fits biologically relevant response curves in a comparative analysis of two gene expression experiments involving the same genes under different scenarios and at least 12 responses. The program outputs gene pairs with biologically relevant response-curve shapes, including flat, linear, sigmoid, hockey stick, impulse, and step.
ERIC Educational Resources Information Center
Chien, Yu-Yi Grace
2016-01-01
The research described in this article concludes that the widely cited U-curve hypothesis is no longer supported by research data because the adjustment of international postgraduate students is a complex phenomenon that does not fit easily with attempts to define and categorize it. Methodological issues, different internal and external factors,…
Multivariate Epi-splines and Evolving Function Identification Problems
2015-04-15
such extrinsic information as well as observed function and subgradient values often evolve in applications, we establish conditions under which the... previous study [30] dealt with compact intervals of IR. Splines are intimately tied to optimization problems through their variational theory pioneered... approximation. Motivated by applications in curve fitting, regression, probability density estimation, variogram computation, financial curve construction
Fixture For Drilling And Tapping A Curved Workpiece
NASA Technical Reports Server (NTRS)
Espinosa, P. S.; Lockyer, R. T.
1992-01-01
Simple fixture guides drilling and tapping of holes in prescribed locations and orientations on workpiece having curved surface. Tool conceived for use in reworking complexly curved helicopter blades made of composite materials. Fixture is block of rigid foam with epoxy filler, custom-fitted to surface contour, containing bushings and sleeves at drilling and tapping sites. Bushings changed, so taps and drills of various sizes accommodated. In use, fixture secured to surface by hold-down bolts extending through sleeves and into threads in substrate.
Locally-Based Kernel PLS Smoothing for Non-Parametric Regression Curve Fitting
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)
2002-01-01
We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
Method for Making Measurements of the Post-Combustion Residence Time in a Gas Turbine Engine
NASA Technical Reports Server (NTRS)
Miles, Jeffrey H (Inventor)
2015-01-01
A system and method of measuring a residence time in a gas-turbine engine is provided, whereby the method includes placing pressure sensors at a combustor entrance and at a turbine exit of the gas-turbine engine and measuring a combustor pressure at the combustor entrance and a turbine exit pressure at the turbine exit. The method further includes computing cross-spectrum functions between a combustor pressure sensor signal from the measured combustor pressure and a turbine exit pressure sensor signal from the measured turbine exit pressure, applying a linear curve fit to the cross-spectrum functions, and computing a post-combustion residence time from the linear curve fit.
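The essential computation, a cross-spectrum between the two pressure signals followed by a linear fit to the unwrapped phase, can be sketched as follows. The sampling rate, delay, and fitting band are illustrative assumptions; for a pure delay τ the cross-spectrum phase is −2πfτ, so the fitted slope yields the residence time.

```python
import numpy as np
from scipy.signal import csd

fs = 5000.0                  # sample rate, Hz (assumed)
delay = 0.004                # 4 ms post-combustion residence time (toy value)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)

x = rng.normal(0, 1, t.size)                     # combustor-entrance pressure (toy broadband signal)
n = int(delay * fs)
y = np.roll(x, n) + rng.normal(0, 0.5, t.size)   # delayed, noisy turbine-exit pressure

# Cross-spectrum between the two sensor signals
f, Pxy = csd(x, y, fs=fs, nperseg=1024)
phase = np.unwrap(np.angle(Pxy))

# Linear curve fit to the phase; slope = -2*pi*tau for a pure delay
band = f < 500                                   # fit the low-frequency band
slope, _ = np.polyfit(f[band], phase[band], 1)
print(f"estimated residence time = {-slope / (2 * np.pi) * 1e3:.2f} ms")
```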
Surface Fitting for Quasi Scattered Data from Coordinate Measuring Systems.
Mao, Qing; Liu, Shugui; Wang, Sen; Ma, Xinhui
2018-01-13
Non-uniform rational B-spline (NURBS) surface fitting from data points is widely used in the fields of computer-aided design (CAD), medical imaging, cultural relic representation and object-shape detection. Usually, the measured data acquired from coordinate measuring systems is neither gridded nor completely scattered. The distribution of this kind of data is scattered in physical space, but the data points are stored in a way consistent with the order of measurement, so it is named quasi scattered data in this paper. Therefore the points can be organized into rows easily, but the number of points in each row is random. In order to overcome the difficulty of surface fitting from this kind of data, a new method based on resampling is proposed. It consists of three major steps: (1) NURBS curve fitting for each row, (2) resampling on the fitted curve and (3) surface fitting from the resampled data. An iterative projection optimization scheme is applied in the first and third steps to yield advisable parameterization and reduce the time cost of projection. A resampling approach based on parameters, local peaks and contour curvature is proposed to overcome the problems of node redundancy and high time consumption in the fitting of this kind of scattered data. Numerical experiments are conducted with both simulation and practical data, and the results show that the proposed method is fast, effective and robust. What's more, by analyzing the fitting results acquired from data with different degrees of scatter, it can be demonstrated that the error introduced by resampling is negligible and therefore the method is feasible.
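A simplified version of steps (1) and (2), fitting a parametric spline to each measured row and resampling it at uniform parameter values so that all rows share a common length, might look like the following. It uses SciPy's B-spline routines rather than full NURBS, and the toy data stand in for coordinate-measuring-system output.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def resample_row(points, n_samples=50):
    """Fit a parametric B-spline to one measured row (step 1) and resample it
    at uniform parameter values (step 2), so every row has the same length."""
    tck, _ = splprep(points.T, s=len(points) * 0.001)
    u_new = np.linspace(0, 1, n_samples)
    return np.array(splev(u_new, tck)).T

# Toy quasi-scattered data: rows with random point counts along a surface
rng = np.random.default_rng(3)
rows = []
for i in range(10):
    m = rng.integers(30, 60)            # random number of points per row
    x = np.sort(rng.uniform(0, 1, m))
    y = np.full(m, i / 9.0)
    z = np.sin(np.pi * x) * np.cos(np.pi * y)
    rows.append(np.column_stack([x, y, z]))

# Step 3 would fit a B-spline/NURBS surface to this now-gridded array
grid = np.stack([resample_row(r) for r in rows])
print(grid.shape)   # (10, 50, 3): equal-length rows ready for surface fitting
```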
Fitting Prony Series To Data On Viscoelastic Materials
NASA Technical Reports Server (NTRS)
Hill, S. A.
1995-01-01
Improved method of fitting Prony series to data on viscoelastic materials involves use of least-squares optimization techniques. Method based on optimization techniques yields closer correlation with data than traditional method. Involves no assumptions regarding the γ_i's and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in data. Curve-fitting problem treated as design-optimization problem and solved by use of partially-constrained-optimization techniques.
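One common least-squares realization fixes the Prony relaxation times on a logarithmic grid and solves for the weights as a nonnegative linear least-squares problem, sketched below with toy relaxation data. This is a standard formulation of the approach, not necessarily the paper's exact partially-constrained algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def fit_prony(t, g, taus):
    """Least-squares fit of a Prony series
        G(t) = G_inf + sum_i g_i * exp(-t / tau_i)
    with relaxation times fixed on a grid, solved as a nonnegative
    linear least-squares problem."""
    A = np.column_stack([np.ones_like(t)] + [np.exp(-t / tau) for tau in taus])
    coeffs, _ = nnls(A, g)
    return coeffs[0], coeffs[1:]        # G_inf, Prony weights g_i

# Toy relaxation data (invented, not the paper's material data)
t = np.logspace(-2, 3, 200)
g = 1.0 + 0.8 * np.exp(-t / 0.1) + 0.5 * np.exp(-t / 10.0)
taus = np.logspace(-2, 3, 6)            # one relaxation time per decade
g_inf, weights = fit_prony(t, g, taus)
print(g_inf, weights)
```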
Jirapinyo, Pichamol; Abidi, Wasif M; Aihara, Hiroyuki; Zaki, Theodore; Tsay, Cynthia; Imaeda, Avlin B; Thompson, Christopher C
2017-10-01
Preclinical simulator training has the potential to decrease endoscopic procedure time and patient discomfort. This study aims to characterize the learning curve of endoscopic novices in a part-task simulator and propose a threshold score for advancement to initial clinical cases. Twenty novices with no prior endoscopic experience underwent repeated endoscopic simulator sessions using the part-task simulator. Simulator scores were collected; their inverse was averaged and fit to an exponential curve. The incremental improvement after each session was calculated. Plateau was defined as the session after which incremental improvement in the simulator score model was less than 5%. Additionally, all participants filled out questionnaires regarding simulator experience after sessions 1, 5, 10, 15, and 20. A visual analog scale and NASA task load index were used to assess levels of comfort and demand. Twenty novices underwent 400 simulator sessions. Mean simulator scores at sessions 1, 5, 10, 15, and 20 were 78.5 ± 5.95, 176.5 ± 17.7, 275.55 ± 23.56, 347 ± 26.49, and 441.11 ± 38.14. The best-fit exponential model was [time/score] = 26.1 × [session #]^(−0.615); r² = 0.99. This corresponded to an incremental improvement in score of 35% after the first session, 22% after the second, 16% after the third, and so on. Incremental improvement dropped below 5% after the 12th session, corresponding to the predicted score of 265. Simulator training was related to higher comfort maneuvering an endoscope and increased readiness for supervised clinical endoscopy, both plateauing between sessions 10 and 15. Mental demand, physical demand, and frustration levels decreased with increased simulator training. Preclinical training using an endoscopic part-task simulator appears to increase comfort level and decrease mental and physical demand associated with endoscopy. Based on a rigorous model, we recommend that novices complete a minimum of 12 training sessions and obtain a simulator score of at least 265 to be best prepared for clinical endoscopy.
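The plateau criterion can be reproduced directly from the reported model. Treating the incremental improvement as the fractional drop in [time/score] from one session to the next recovers the abstract's 35%, 22%, and 16% sequence and locates the <5% threshold after the 12th session.

```python
# Reproduce the plateau analysis from the fitted power-law model
# time/score = 26.1 * session**-0.615 (as reported in the abstract).
def incremental_improvement(s):
    """Fractional drop in time/score going from session s-1 to session s."""
    model = lambda k: 26.1 * k ** -0.615
    return 1.0 - model(s) / model(s - 1)

for s in range(2, 5):
    print(f"session {s}: {incremental_improvement(s):.1%}")   # 34.7%, 22.1%, 16.2%

plateau = next(s for s in range(2, 100) if incremental_improvement(s) < 0.05)
print("improvement first drops below 5% going into session", plateau)  # 13, i.e. after 12 sessions
```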
Guidelines for using the Delphi Technique to develop habitat suitability index curves
Crance, Johnie H.
1987-01-01
Habitat Suitability Index (SI) curves are one method of presenting species habitat suitability criteria. The curves are often used with the Habitat Evaluation Procedures (HEP) and are necessary components of the Instream Flow Incremental Methodology (IFIM) (Armour et al. 1984). Bovee (1986) described three categories of SI curves or habitat suitability criteria based on the procedures and data used to develop the criteria. Category I curves are based on professional judgment, with little or no empirical data. Both Category II (utilization criteria) and Category III (preference criteria) curves have as their source data collected at locations where target species are observed or collected. Having Category II and Category III curves for all species of concern would be ideal. In reality, no SI curves are available for many species, and SI curves that require intensive field sampling often cannot be developed under prevailing constraints on time and costs. One alternative under these circumstances is the development and interim use of SI curves based on expert opinion. The Delphi technique (Pill 1971; Delbecq et al. 1975; Linstone and Turoff 1975) is one method used for combining the knowledge and opinions of a group of experts. The purpose of this report is to describe how the Delphi technique may be used to develop expert-opinion-based SI curves.
Automated reconstruction of rainfall events responsible for shallow landslides
NASA Astrophysics Data System (ADS)
Vessia, G.; Parise, M.; Brunetti, M. T.; Peruccacci, S.; Rossi, M.; Vennari, C.; Guzzetti, F.
2014-04-01
Over the last 40 years, many contributions have been devoted to identifying the empirical rainfall thresholds (e.g. intensity vs. duration ID, cumulated rainfall vs. duration ED, cumulated rainfall vs. intensity EI) for the initiation of shallow landslides, based on local as well as worldwide inventories. Although different methods to trace the threshold curves have been proposed and discussed in the literature, a systematic study to develop an automated procedure to select the rainfall event responsible for the landslide occurrence has rarely been addressed. Nonetheless, objective criteria for estimating the rainfall responsible for the landslide occurrence (effective rainfall) play a prominent role in the threshold values. In this paper, two criteria for the identification of the effective rainfall events are presented: (1) the first is based on the analysis of the time series of rainfall mean intensity values over one month preceding the landslide occurrence, and (2) the second on the analysis of the trend in the time function of the cumulated mean intensity series calculated from the rainfall records measured through rain gauges. The two criteria have been implemented in an automated procedure written in the R language. A sample of 100 shallow landslides collected in Italy by the CNR-IRPI research group from 2002 to 2012 has been used to calibrate the proposed procedure. The cumulated rainfall E and duration D of rainfall events that triggered the documented landslides are calculated through the new procedure and are fitted with a power law in the (D,E) diagram. The results are discussed by comparing the (D,E) pairs calculated by the automated procedure with those obtained by the expert method.
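Fitting the power law E = aD^b to the (D,E) pairs is a linear regression in log-log space. The original procedure is written in R; a minimal Python sketch with made-up values (the actual thresholds come from the 100-landslide Italian sample) is:

```python
import numpy as np

# Power-law fit E = a * D^b of cumulated rainfall E (mm) vs duration D (h),
# done as a linear regression in log-log space. Toy values, not CNR-IRPI data.
D = np.array([6.0, 12, 24, 48, 72, 120])     # durations, hours
E = np.array([25.0, 38, 60, 85, 110, 150])   # cumulated rainfall, mm

b, log_a = np.polyfit(np.log(D), np.log(E), 1)
a = np.exp(log_a)
print(f"E = {a:.1f} * D^{b:.2f}")
```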
Light-curve modelling constraints on the obliquities and aspect angles of the young Fermi pulsars
NASA Astrophysics Data System (ADS)
Pierbattista, M.; Harding, A. K.; Grenier, I. A.; Johnson, T. J.; Caraveo, P. A.; Kerr, M.; Gonthier, P. L.
2015-03-01
In more than four years of observation the Large Area Telescope on board the Fermi satellite has identified pulsed γ-ray emission from more than 80 young or middle-aged pulsars, in most cases providing light curves with high statistics. Fitting the observed profiles with geometrical models can provide estimates of the magnetic obliquity α and of the line of sight angle ζ, yielding estimates of the radiation beaming factor and radiated luminosity. Using different γ-ray emission geometries (Polar Cap, Slot Gap, Outer Gap, One Pole Caustic) and core plus cone geometries for the radio emission, we fit γ-ray light curves for 76 young or middle-aged pulsars and we jointly fit their γ-ray plus radio light curves when possible. We find that a joint radio plus γ-ray fit strategy is important to obtain (α,ζ) estimates that can explain simultaneously detectable radio and γ-ray emission: when the radio emission is available, the inclusion of the radio light curve in the fit leads to important changes in the (α,ζ) solutions. The most pronounced changes are observed for the Outer Gap and One Pole Caustic models, for which the γ-ray-only fit leads to underestimated α or ζ when the solution is found to the left or to the right of the main α-ζ plane diagonal, respectively. The intermediate-to-high altitude magnetosphere models, Slot Gap, Outer Gap, and One Pole Caustic, are favoured in explaining the observations. We find no apparent evolution of α on a time scale of 10⁶ years. For all emission geometries our derived γ-ray beaming factors are generally less than one and do not significantly evolve with the spin-down power. A more pronounced beaming factor vs. spin-down power correlation is observed for the Slot Gap model and radio-quiet pulsars and for the Outer Gap model and radio-loud pulsars. The beaming factor distributions exhibit a large dispersion that is less pronounced for the Slot Gap case and that decreases from radio-quiet to radio-loud solutions. For all models, the correlation between γ-ray luminosity and spin-down power is consistent with a square root dependence. The γ-ray luminosities obtained by using the beaming factors estimated in the framework of each model do not exceed the spin-down power. This suggests that assuming a beaming factor of one for all objects, as done in other studies, likely overestimates the real values. The data show a relation between the pulsar spectral characteristics and the width of the accelerator gap. The relation obtained in the case of the Slot Gap model is consistent with the theoretical prediction. Appendices are available in electronic form at http://www.aanda.org
Procedure for curve warning signing, delineation, and advisory speeds for horizontal curves.
DOT National Transportation Integrated Search
2010-09-30
Horizontal curves are relatively dangerous features, with collision rates at least 1.5 times that of comparable tangent sections on average. To help make these segments safer, this research developed consistent study methods with which field pers...
Roberts, Chris; Zoanetti, Nathan; Rothnie, Imogene
2009-04-01
The multiple mini-interview (MMI) was initially designed to test non-cognitive characteristics related to professionalism in entry-level students. However, it may be testing cognitive reasoning skills. Candidates to medical and dental schools come from diverse backgrounds and it is important for the validity and fairness of the MMI that these background factors do not impact on their scores. A suite of advanced psychometric techniques drawn from item response theory (IRT) was used to validate an MMI question bank in order to establish the conceptual equivalence of the questions. Bias against candidate subgroups of equal ability was investigated using differential item functioning (DIF) analysis. All 39 questions had a good fit to the IRT model. Of the 195 checklist items, none were found to have significant DIF after visual inspection of expected score curves, consideration of the number of applicants per category, and evaluation of the magnitude of the DIF parameter estimates. The question bank contains items that have been studied carefully in terms of model fit and DIF. Questions appear to measure a cognitive unidimensional construct, 'entry-level reasoning skills in professionalism', as suggested by goodness-of-fit statistics. The lack of items exhibiting DIF is encouraging in a contemporary high-stakes admission setting where candidates of diverse personal, cultural and academic backgrounds are assessed by common means. This IRT approach has potential to provide assessment designers with a quality control procedure that extends to the level of checklist items.
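The DIF screening step, comparing expected score curves for candidate subgroups of equal ability, can be illustrated with a 2PL item characteristic curve and a simple unsigned-area index. The parameter values and the area index below are illustrative assumptions, not the study's actual estimates or criterion.

```python
import numpy as np
from scipy.integrate import trapezoid

# 2PL item characteristic curve; DIF screening compares ICCs estimated
# separately for two candidate subgroups of equal underlying ability.
def icc_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 121)
ref = icc_2pl(theta, a=1.2, b=0.0)     # reference-group estimates (toy values)
focal = icc_2pl(theta, a=1.1, b=0.15)  # focal-group estimates (toy values)

# Unsigned area between the two curves as a crude DIF magnitude index
dif_area = trapezoid(np.abs(ref - focal), theta)
print(f"unsigned area between ICCs = {dif_area:.3f}")
```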
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.
2009-06-01
Spectroscopists have long attempted to summarize what they know about small molecules in terms of a knowledge of potential energy curves or surfaces. For most of the past century, this involved deducing polynomial-expansion force-field coefficients from energy level expressions fitted to experimental data, or for diatomic molecules, by generating tables of many-digit RKR turning points from such expressions. In recent years, however, it has become increasingly common either to use high-level ab initio calculations to compute the desired potentials, or to determine parametrized global analytic potential functions from direct fits to spectroscopic data. In the former case, this invoked a need for robust, flexible, compact, and 'portable' analytic potentials for summarizing the information contained in the (sometimes very large numbers of) ab initio points, and making them 'user friendly'. In the latter case, the same properties are required for potentials used in the least-squares fitting procedure. In both cases, there is also a cardinal need for potential function forms that extrapolate sensibly beyond the range of the experimental data or ab initio points. This talk will describe some recent developments in this area, and make a case for what is arguably the 'best' general-purpose analytic potential function form now available. Applications to both diatomic molecules and simple polyatomic molecules will be discussed.
NASA Astrophysics Data System (ADS)
Graur, Or; Zurek, David R.; Rest, Armin; Seitenzahl, Ivo R.; Shappee, Benjamin J.; Fisher, Robert; Guillochon, James; Shara, Michael M.; Riess, Adam G.
2018-06-01
The late-time light curves of Type Ia supernovae (SNe Ia), observed >900 days after explosion, present the possibility of a new diagnostic for SN Ia progenitor and explosion models. First, however, we must discover what physical process (or processes) leads to the slow-down of the light curve relative to a pure ⁵⁶Co decay, as observed in SNe 2011fe, 2012cg, and 2014J. We present Hubble Space Telescope observations of SN 2015F, taken ≈600–1040 days past maximum light. Unlike those of the three other SNe Ia, the light curve of SN 2015F remains consistent with being powered solely by the radioactive decay of ⁵⁶Co. We fit the light curves of these four SNe Ia in a consistent manner and measure possible correlations between the light-curve stretch, a proxy for the intrinsic luminosity of the SN, and the parameters of the physical model used in the fit. We propose a new, late-time Phillips-like correlation between the stretch of the SNe and the shape of their late-time light curves, which we parameterize as the difference between their pseudo-bolometric luminosities at 600 and 900 days: ΔL₉₀₀ = log(L₆₀₀/L₉₀₀). Our analysis is based on only four SNe, so a larger sample is required to test the validity of this correlation. If true, this model-independent correlation provides a new way to test which physical process lies behind the slow-down of SN Ia light curves >900 days after explosion, and, ultimately, fresh constraints on the various SN Ia progenitor and explosion models.
REFLECTED LIGHT CURVES, SPHERICAL AND BOND ALBEDOS OF JUPITER- AND SATURN-LIKE EXOPLANETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyudina, Ulyana; Kopparla, Pushkar; Ingersoll, Andrew P.
Reflected light curves observed for exoplanets indicate that a few of them host bright clouds. We estimate how the light curve and total stellar heating of a planet depend on forward and backward scattering in the clouds based on Pioneer and Cassini spacecraft images of Jupiter and Saturn. We fit analytical functions to the local reflected brightnesses of Jupiter and Saturn depending on the planet’s phase. These observations cover broadbands at 0.59–0.72 and 0.39–0.5 μm, and narrowbands at 0.938 (atmospheric window), 0.889 (CH₄ absorption band), and 0.24–0.28 μm. We simulate the images of the planets with a ray-tracing model, and disk-integrate them to produce the full-orbit light curves. For Jupiter, we also fit the modeled light curves to the observed full-disk brightness. We derive spherical albedos for Jupiter and Saturn, and for planets with Lambertian and Rayleigh-scattering atmospheres. Jupiter-like atmospheres can produce light curves that are a factor of two fainter at half-phase than the Lambertian planet, given the same geometric albedo at transit. The spherical albedo is typically lower than for a Lambertian planet by up to a factor of ∼1.5. The Lambertian assumption will underestimate the absorption of the stellar light and the equilibrium temperature of the planetary atmosphere. We also compare our light curves with the light curves of solid bodies: the moons Enceladus and Callisto. Their strong backscattering peak within a few degrees of opposition (secondary eclipse) can lead to an even stronger underestimate of the stellar heating.
Shu-Jiang, Liu; Zhan-Ying, Chen; Yin-Zhong, Chang; Shi-Lian, Wang; Qi, Li; Yuan-Qing, Fan
2013-10-11
Multidimensional gas chromatography is widely applied to atmospheric xenon monitoring for the Comprehensive Nuclear-Test-Ban Treaty (CTBT). To improve the capability for xenon sampling from the atmosphere, sampling techniques have been investigated in detail. The sampling techniques are designed using xenon outflow curves, which are influenced by many factors; the injection condition is one of the key factors that could influence the xenon outflow curves. In this paper, the xenon outflow curve of single-pulse injection in two-dimensional gas chromatography has been measured and fitted as an exponentially modified Gaussian distribution. An inference formula of the xenon outflow curve for six-pulse injection is derived, and the inference formula is also tested by comparison with its fitted formula of the xenon outflow curve. As a result, the curves of both the one-pulse and six-pulse injections obey the exponentially modified Gaussian distribution when the activated carbon column's temperature is 26°C and the carrier-gas flow rate is 35.6 mL min⁻¹. The retention time of the xenon peak for one-pulse injection is 215 min, and the peak width is 138 min. For the six-pulse injection, however, the retention time is delayed to 255 min, and the peak width broadens to 222 min. According to the inferred formula of the xenon outflow curve for the six-pulse injection, the inferred retention time is 243 min, the relative deviation of the retention time is 4.7%, and the inferred peak width is 225 min, with a relative deviation of 1.3%. Copyright © 2013 Elsevier B.V. All rights reserved.
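SciPy's exponnorm distribution provides the exponentially modified Gaussian directly, so fitting an outflow curve reduces to a curve_fit call. The amplitude, shape, and timing values below are illustrative, not the measured xenon data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import exponnorm

# Exponentially modified Gaussian peak: area A times the EMG pdf;
# K = tau/sigma controls the exponential tailing.
def emg(t, A, K, mu, sigma):
    return A * exponnorm.pdf(t, K, loc=mu, scale=sigma)

t = np.linspace(0, 600, 601)                    # minutes
true = emg(t, 1000.0, 3.0, 200.0, 30.0)         # toy peak, not the xenon data
obs = true + np.random.default_rng(4).normal(0, 0.05, t.size)

popt, _ = curve_fit(emg, t, obs, p0=[800, 2, 210, 40], bounds=(0, np.inf))
retention = t[np.argmax(emg(t, *popt))]         # peak position of the fitted curve
print(f"fitted retention time ≈ {retention:.0f} min")
```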
NASA Astrophysics Data System (ADS)
Mandel, Kaisey; Scolnic, Daniel; Shariff, Hikmatali; Foley, Ryan; Kirshner, Robert
2017-01-01
Inferring peak optical absolute magnitudes of Type Ia supernovae (SN Ia) from distance-independent measures such as their light curve shapes and colors underpins the evidence for cosmic acceleration. SN Ia with broader, slower declining optical light curves are more luminous (“broader-brighter”) and those with redder colors are dimmer. But the “redder-dimmer” color-luminosity relation widely used in cosmological SN Ia analyses confounds its two separate physical origins. An intrinsic correlation arises from the physics of exploding white dwarfs, while interstellar dust in the host galaxy also makes SN Ia appear dimmer and redder. Conventional SN Ia cosmology analyses currently use a simplistic linear regression of magnitude versus color and light curve shape, which does not model intrinsic SN Ia variations and host galaxy dust as physically distinct effects, resulting in low color-magnitude slopes. We construct a probabilistic generative model for the dusty distribution of extinguished absolute magnitudes and apparent colors as the convolution of an intrinsic SN Ia color-magnitude distribution and a host galaxy dust reddening-extinction distribution. If the intrinsic color-magnitude (M_B vs. B−V) slope β_int differs from the host galaxy dust law R_B, this convolution results in a specific curve of mean extinguished absolute magnitude vs. apparent color. The derivative of this curve smoothly transitions from β_int in the blue tail to R_B in the red tail of the apparent color distribution. The conventional linear fit approximates this effective curve near the average apparent color, resulting in an apparent slope β_app between β_int and R_B. We incorporate these effects into a hierarchical Bayesian statistical model for SN Ia light curve measurements, and analyze a dataset of SALT2 optical light curve fits of 277 nearby SN Ia at z < 0.10. The conventional linear fit obtains β_app ≈ 3. Our model finds β_int = 2.2 ± 0.3 and a distinct dust law of R_B = 3.7 ± 0.3, consistent with the average for Milky Way dust, while correcting a systematic distance bias of ~0.10 mag in the tails of the apparent color distribution. This research is supported by NSF grants AST-156854, AST-1211196, and NASA grant NNX15AJ55G.
Bonnet, Benjamin; Jourdan, Franck; du Cailar, Guilhem; Fesler, Pierre
2017-08-01
End-systolic left ventricular (LV) elastance (E_es) has been previously calculated and validated invasively using LV pressure-volume (P-V) loops. Noninvasive methods have been proposed, but clinical application remains complex. The aims of the present study were to 1) estimate E_es according to modeling of the LV P-V curve during ejection ("ejection P-V curve" method) and validate our method with existing published LV P-V loop data and 2) test the clinical applicability of noninvasively detecting a difference in E_es between normotensive and hypertensive subjects. On the basis of the ejection P-V curve and a linear relationship between elastance and time during ejection, we used a nonlinear least-squares method to fit the pressure waveform. We then computed the slope and intercept of time-varying elastance as well as the volume intercept (V_0). As a validation, 22 P-V loops obtained from previous invasive studies were digitized and analyzed using the ejection P-V curve method. To test clinical applicability, ejection P-V curves were obtained from 33 hypertensive subjects and 32 normotensive subjects with carotid tonometry and real-time three-dimensional echocardiography during the same procedure. A good univariate relationship (r² = 0.92, P < 0.005) and good limits of agreement were found between the invasive calculation of E_es and our new proposed ejection P-V curve method. In hypertensive patients, an increase in arterial elastance (E_a) was compensated by a parallel increase in E_es without change in E_a/E_es. In addition, the clinical reproducibility of our method was similar to that of another noninvasive method. In conclusion, E_es and V_0 can be estimated noninvasively from modeling of the P-V curve during ejection. This approach was found to be reproducible and sensitive enough to detect an expected increase in LV contractility in hypertensive patients. Because of its noninvasive nature, this methodology may have clinical implications in various disease states. NEW & NOTEWORTHY The use of real-time three-dimensional echocardiography-derived left ventricular volumes in conjunction with carotid tonometry was found to be reproducible and sensitive enough to detect expected differences in left ventricular elastance in arterial hypertension. Because of its noninvasive nature, this methodology may have clinical implications in various disease states. Copyright © 2017 the American Physiological Society.
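The fitting step can be sketched as follows, under the assumed parameterization P(t) = (E0 + slope·t)(V(t) − V_0) with elastance linear in time during ejection; the volume trace, pressure values, and starting guesses are invented, and the paper's exact formulation may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy ejection-phase data: time (s) and LV volume (mL), e.g. from 3D echo
t = np.linspace(0.0, 0.3, 60)
V = 130.0 - 200.0 * t * (1.0 - t / 0.6)          # monotonically falling volume

# Linear time-varying elastance during ejection (assumed parameterization):
# P(t) = E(t) * (V(t) - V0), with E(t) = E0 + slope*t
def pressure_model(t_arr, E0, slope, V0):
    Vt = np.interp(t_arr, t, V)
    return (E0 + slope * t_arr) * (Vt - V0)

P_obs = pressure_model(t, 0.5, 3.0, 15.0) + np.random.default_rng(7).normal(0, 1.0, t.size)
(E0, slope, V0), _ = curve_fit(pressure_model, t, P_obs, p0=[1.0, 1.0, 0.0])
print(f"E_es = {E0 + slope * t[-1]:.2f} mmHg/mL, V0 = {V0:.1f} mL")
```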
The S-curve for forecasting waste generation in construction projects.
Lu, Weisheng; Peng, Yi; Chen, Xi; Skitmore, Martin; Zhang, Xiaoling
2016-10-01
Forecasting construction waste generation is the yardstick of any effort by policy-makers, researchers, practitioners and the like to manage construction and demolition (C&D) waste. This paper develops and tests an S-curve model to indicate cumulative waste generation as a project progresses. Using 37,148 disposal records generated from 138 building projects in Hong Kong in four consecutive years from January 2011 to June 2015, a wide range of potential S-curve models are examined, and as a result, the formula that best fits the historical data set is found. The S-curve model is then further linked to project characteristics using artificial neural networks (ANNs) so that it can be used to forecast waste generation in future construction projects. It was found that, among the S-curve models, the cumulative logistic distribution is the best formula to fit the historical data. Meanwhile, contract sum, location, public-private nature, and duration can be used to forecast construction waste generation. The study provides contractors with not only an S-curve model to forecast overall waste generation before a project commences, but also a detailed baseline to benchmark and manage waste during the course of construction. The major contribution of this paper is to the body of knowledge in the field of construction waste generation forecasting. By examining it with an S-curve model, the study elevates construction waste management to a level equivalent to project cost management, where the model has already been readily accepted as a standard tool. Copyright © 2016 Elsevier Ltd. All rights reserved.
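Since the best-fitting form was the cumulative logistic distribution, fitting reduces to estimating three parameters: total waste, steepness, and the midpoint of generation. A sketch with invented project data follows; the ANN stage linking the parameters to contract sum and other project characteristics is omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

# Cumulative logistic S-curve for accumulated waste vs project progress
# (the paper's best-fitting form; all constants here are illustrative).
def s_curve(p, W_total, k, p_mid):
    return W_total / (1.0 + np.exp(-k * (p - p_mid)))

progress = np.linspace(0.05, 1.0, 20)            # fraction of project duration
waste = s_curve(progress, 5000.0, 9.0, 0.55)     # tonnes, invented records
waste += np.random.default_rng(5).normal(0, 50, progress.size)

popt, _ = curve_fit(s_curve, progress, waste, p0=[4000, 5, 0.5])
print("total waste forecast: %.0f t, midpoint at %.0f%% progress" % (popt[0], 100 * popt[2]))
```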
2017-01-01
Modeling of microbial inactivation by high hydrostatic pressure (HHP) requires a plot of the log microbial count or survival ratio versus time data under a constant pressure and temperature. However, at low pressure and temperature values, very long holding times are needed to obtain measurable inactivation. Since time has a significant effect on the cost of HHP processing, it may be reasonable to fix the time at an appropriate value and quantify the inactivation with respect to pressure. Such a plot is called a dose-response curve, and it may be more beneficial than the traditional inactivation modeling since short holding times with different pressure values can be selected and used for the modeling of HHP inactivation. For this purpose, 49 dose-response curves (with at least a 4-log₁₀ reduction and ≥5 data points including the atmospheric pressure value (P = 0.1 MPa), and with holding time ≤10 min) for HHP inactivation of microorganisms obtained from published studies were fitted with four different models, namely the Discrete model, Shoulder model, Fermi equation, and Weibull model, and the pressure value needed for a 5-log₁₀ reduction (P5) was calculated for all the models above. The Shoulder model and Fermi equation produced exactly the same parameter and P5 values, while the Discrete model produced similar or sometimes the exact same parameter values as the Fermi equation. The Weibull model produced the worst fit (had the lowest adjusted determination coefficient (R²_adj) and highest mean square error (MSE) values), while the Fermi equation had the best fit (the highest R²_adj and lowest MSE values). Parameters of the models and also P5 values of each model can be useful for the further experimental design of HHP processing and also for the comparison of the pressure resistance of different microorganisms. Further experiments can be done to verify the P5 values at given conditions. The procedure given in this study can also be extended to enzyme inactivation by HHP. PMID:28880255
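As an illustration of the procedure, the sketch below fits a Fermi-type dose-response model to toy log-survival data and solves for P5, the pressure giving a 5-log₁₀ reduction. The functional form is one common version of the Fermi equation, and the data points are invented.

```python
import numpy as np
from scipy.optimize import brentq, curve_fit

# Fermi-type dose-response: log10 survival ratio vs pressure at a fixed
# holding time (one common form; Pc is the critical pressure, k the spread).
def fermi_log10(P, Pc, k):
    return -np.log10(1.0 + np.exp((P - Pc) / k))

P = np.array([0.1, 100, 200, 300, 400, 500])          # MPa, includes atmospheric point
logS = np.array([0.0, -0.1, -0.9, -2.4, -4.1, -5.8])  # toy log10 survival data

popt, _ = curve_fit(fermi_log10, P, logS, p0=[250, 50])
P5 = brentq(lambda p: fermi_log10(p, *popt) + 5.0, 100, 800)  # 5-log10 reduction
print(f"P5 = {P5:.0f} MPa")
```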
Improvements in prevalence trend fitting and incidence estimation in EPP 2013
Brown, Tim; Bao, Le; Eaton, Jeffrey W.; Hogan, Daniel R.; Mahy, Mary; Marsh, Kimberly; Mathers, Bradley M.; Puckett, Robert
2014-01-01
Objective: Describe modifications to the latest version of the Joint United Nations Programme on AIDS (UNAIDS) Estimation and Projection Package component of Spectrum (EPP 2013) to improve prevalence fitting and incidence trend estimation in national epidemics and global estimates of HIV burden. Methods: Key changes made under the guidance of the UNAIDS Reference Group on Estimates, Modelling and Projections include: availability of a range of incidence calculation models and guidance for selecting a model; a shift to reporting the Bayesian median instead of the maximum likelihood estimate; procedures for comparison and validation against reported HIV and AIDS data; incorporation of national surveys as an integral part of the fitting and calibration procedure, allowing survey trends to inform the fit; improved antenatal clinic calibration procedures in countries without surveys; adjustment of national antiretroviral therapy reports used in the fitting to include only those aged 15–49 years; better estimates of mortality among people who inject drugs; and enhancements to speed fitting. Results: The revised models in EPP 2013 allow closer fits to observed prevalence trend data and reflect improving understanding of HIV epidemics and associated data. Conclusion: Spectrum and EPP continue to adapt to make better use of the existing data sources, incorporate new sources of information in their fitting and validation procedures, and correct for quantifiable biases in inputs as they are identified and understood. These adaptations provide countries with better calibrated estimates of incidence and prevalence, which increase epidemic understanding and provide a solid base for program and policy planning. PMID:25406747
Automated Estimation of the Orbital Parameters of Jupiter's Moons
NASA Astrophysics Data System (ADS)
Western, Emma; Ruch, Gerald T.
2016-01-01
Every semester the Physics Department at the University of St. Thomas has the Physics 104 class complete a Jupiter lab. This involves taking around twenty images of Jupiter and its moons with the telescope at the University of St. Thomas Observatory over the course of a few nights. The students then take each image and find the distance from each moon to Jupiter and plot the distances versus the elapsed time for the corresponding image. Students use the plot to fit four sinusoidal curves of the moons of Jupiter. I created a script that automates this process for the professor. It takes the list of images and creates a region file used by the students to measure the distance from the moons to Jupiter, a png image that is the graph of all the data points and the fitted curves of the four moons, and a csv file that contains the list of images, the date and time each image was taken, the elapsed time since the first image, and the distances to Jupiter for Io, Europa, Ganymede, and Callisto. This is important because it lets the professor spend more time working with the students and answering questions as opposed to spending time fitting the curves of the moons on the graph, which can be time consuming.
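The automated fit for each moon amounts to a sinusoid in elapsed time. A minimal sketch for one moon, with an assumed period near Io's and synthetic measurements in place of the students' distance data, is:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sinusoidal model for a moon's apparent distance from Jupiter vs elapsed time
def moon_orbit(t, A, P, phi):
    return A * np.sin(2 * np.pi * t / P + phi)

# Synthetic observations standing in for Io (period ~42.5 h):
# hours elapsed since the first image, distances in Jupiter diameters
t_obs = np.linspace(0, 96, 20)
d_obs = moon_orbit(t_obs, 3.0, 42.5, 0.8) + np.random.default_rng(6).normal(0, 0.1, 20)

popt, _ = curve_fit(moon_orbit, t_obs, d_obs, p0=[3, 40, 0])
print(f"amplitude {popt[0]:.2f} diameters, period {popt[1]:.1f} h")
```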