Sample records for curve fitting techniques

  1. Fitting Richards' curve to data of diverse origins

    USGS Publications Warehouse

    Johnson, D.H.; Sargeant, A.B.; Allen, S.H.

    1975-01-01

    Published techniques for fitting data to nonlinear growth curves are briefly reviewed; most require prior knowledge of the shape of the curve. A flexible growth curve developed by Richards (1959) is discussed as an alternative when the shape is unknown. The shape of this curve is governed by a specific parameter that can be estimated from the data. We describe in detail the fitting of a diverse set of longitudinal and cross-sectional data to Richards' growth curve for the purpose of determining the age of red fox (Vulpes vulpes) pups on the basis of right hind foot length. The fitted curve is found suitable for pups less than approximately 80 days old. The curve is extrapolated to prenatal growth and shown to be appropriate only for about 10 days prior to birth.
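
    The record above describes Richards' flexible growth curve, whose shape parameter is estimated from the data rather than assumed. As a minimal sketch (not the authors' code), the generalized-logistic parameterization below, with hypothetical synthetic foot-length data standing in for the red fox measurements, can be fitted with SciPy:

    ```python
    # A minimal sketch of fitting a Richards-type growth curve with SciPy.
    # The parameterization (a generalized logistic with shape parameter nu)
    # is one common form of Richards (1959); the data are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    def richards(t, A, k, t0, nu):
        """A: asymptote, k: growth rate, t0: inflection time,
        nu: shape parameter estimated from the data."""
        return A / (1.0 + nu * np.exp(-k * (t - t0))) ** (1.0 / nu)

    # Synthetic ages (days) and hind foot lengths (mm), illustration only.
    t = np.linspace(0, 80, 30)
    rng = np.random.default_rng(0)
    y = richards(t, 120.0, 0.08, 25.0, 0.5) + rng.normal(0, 2, t.size)

    popt, pcov = curve_fit(richards, t, y, p0=[120, 0.1, 20, 1.0], maxfev=10000)
    print("A, k, t0, nu =", popt)
    ```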

  2. Curve fitting air sample filter decay curves to estimate transuranic content.

    PubMed

    Hayes, Robert B; Chiou, Hung Cheng

    2004-01-01

    By testing industry-standard techniques for radon progeny evaluation on air sample filters, a new technique is developed to evaluate transuranic activity on air filters by curve fitting the decay curves. The industry method modified here is simply the use of filter activity measurements at different times to estimate the air concentrations of radon progeny. The primary modification was to estimate transuranic activity rather than specific radon progeny values. By using a method that provides reasonably conservative estimates of the transuranic activity present on a filter, some credit can be taken for the shape of the decay curve. Through rigorous statistical analysis of curve fits to over 65 samples with no transuranic activity, collected over a 10-month period, the fitting function and the associated quality tests were optimized for this purpose.

  3. Fitting the curve in Excel®: Systematic curve fitting of laboratory and remotely sensed planetary spectra

    NASA Astrophysics Data System (ADS)

    McCraig, Michael A.; Osinski, Gordon R.; Cloutis, Edward A.; Flemming, Roberta L.; Izawa, Matthew R. M.; Reddy, Vishnu; Fieber-Beyer, Sherry K.; Pompilio, Loredana; van der Meer, Freek; Berger, Jeffrey A.; Bramble, Michael S.; Applin, Daniel M.

    2017-03-01

    Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical make-up of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to working with spectra may find inadequate help or documentation in the scientific literature or in the software packages available for curve fitting. This problem also extends to the parameterization of spectra and the dissemination of derived metrics. Often, when derived metrics such as band centres are reported, the discussion of exactly how the metrics were derived, or whether any systematic curve fitting was performed, is not included. Herein we provide both recommendations and methods for curve fitting and explanations of the terms and methods used. Techniques to curve fit spectral data of various types are demonstrated using simple-to-understand mathematics and equations written to be used in Microsoft Excel® software, free of macros, in a cut-and-paste fashion that allows one to curve fit spectra in a reasonably user-friendly manner. The procedures use empirical curve fitting, include visualizations, and ameliorate many of the unknowns one may encounter when using black-box commercial software. The provided framework is a comprehensive record of the curve-fitting parameters used and the derived metrics, and is intended as an example format for dissemination when curve fitting data.

  4. Statistical model to perform error analysis of curve fits of wind tunnel test data using the techniques of analysis of variance and regression analysis

    NASA Technical Reports Server (NTRS)

    Alston, D. W.

    1981-01-01

    The objective of this research was to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. A least-squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased until the quadratic effect is removed from the residuals. Analysis of variance is then used to determine the magnitude and effect of the error factor associated with the experimental data.
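
    As a hedged illustration of the workflow described above (a least-squares fit whose order is raised until the quadratic effect disappears from the residuals), the following sketch uses synthetic stand-in data rather than the wind tunnel measurements:

    ```python
    # Fit with increasing polynomial order and check the residuals for a
    # remaining quadratic trend. Data are synthetic, not NASA's.
    import numpy as np

    alpha = np.linspace(-10, 10, 41)                 # angle of attack, deg
    rng = np.random.default_rng(1)
    cm = 0.02 + 0.01 * alpha - 0.0004 * alpha**2 + rng.normal(0, 0.002, alpha.size)

    for order in (1, 2, 3):
        coef = np.polyfit(alpha, cm, order)
        resid = cm - np.polyval(coef, alpha)
        # A small quadratic coefficient in the residuals indicates the
        # quadratic effect has been removed.
        quad = np.polyfit(alpha, resid, 2)[0]
        print(f"order {order}: residual RMS {resid.std():.5f}, quadratic term {quad:.2e}")
    ```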

  5. Real-Time Exponential Curve Fits Using Discrete Calculus

    NASA Technical Reports Server (NTRS)

    Rowe, Geoffrey

    2010-01-01

    An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power-law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
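
    A minimal sketch of the underlying idea, assuming uniformly sampled data: discrete differences isolate the exponential term so B follows from a log-linear fit without iteration, after which A and C come from an ordinary linear least-squares solve. This illustrates the approach, not the NASA implementation:

    ```python
    # Non-iterative exponential fit via discrete differences plus a
    # simple linear curve fit. Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(2)
    dt = 0.1
    t = np.arange(0, 5, dt)
    y = 3.0 * np.exp(-0.7 * t) + 1.5 + rng.normal(0, 0.01, t.size)

    # The discrete derivative of y is proportional to exp(B*t), so the
    # log of the first differences is linear in t with slope B.
    d = np.diff(y)
    B = np.polyfit(t[:-1], np.log(np.abs(d)), 1)[0]

    # With B fixed, y = A*exp(B*t) + C is linear in (A, C).
    M = np.column_stack([np.exp(B * t), np.ones_like(t)])
    (A, C), *_ = np.linalg.lstsq(M, y, rcond=None)
    print(f"A={A:.3f}, B={B:.3f}, C={C:.3f}")   # expect ~3, -0.7, 1.5
    ```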

  6. Decomposition of mineral absorption bands using nonlinear least squares curve fitting: Application to Martian meteorites and CRISM data

    NASA Astrophysics Data System (ADS)

    Parente, Mario; Makarewicz, Heather D.; Bishop, Janice L.

    2011-04-01

    This study advances curve-fitting modeling of absorption bands of reflectance spectra and applies this new model to spectra of Martian meteorites ALH 84001 and EETA 79001 and data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). This study also details a recently introduced automated parameter initialization technique. We assess the performance of this automated procedure by comparing it to the currently available initialization method and perform a sensitivity analysis of the fit results to variation in initial guesses. We explore the issues related to the removal of the continuum, offer guidelines for continuum removal when modeling the absorptions and explore different continuum-removal techniques. We further evaluate the suitability of curve fitting techniques using Gaussians/Modified Gaussians to decompose spectra into individual end-member bands. We show that nonlinear least squares techniques such as the Levenberg-Marquardt algorithm achieve results comparable to the MGM model (Sunshine and Pieters, 1993; Sunshine et al., 1990) for meteorite spectra. Finally we use Gaussian modeling to fit CRISM spectra of pyroxene- and olivine-rich terrains on Mars. Analysis of CRISM spectra of two regions shows that the pyroxene-dominated rock spectra measured at Juventae Chasma were modeled well with low-Ca pyroxene, while the pyroxene-rich spectra acquired at Libya Montes required both low-Ca and high-Ca pyroxene for a good fit.
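
    A minimal sketch of the decomposition step, assuming a continuum-removed spectrum and synthetic bands rather than CRISM or meteorite data: a sum of Gaussians is fitted with SciPy's Levenberg-Marquardt solver, with hand-picked initial guesses standing in for the paper's automated initialization:

    ```python
    # Decompose overlapping absorption bands into Gaussians using the
    # Levenberg-Marquardt solver in SciPy. Spectrum is synthetic.
    import numpy as np
    from scipy.optimize import least_squares

    wav = np.linspace(800, 1200, 200)               # wavelength, nm

    def gaussians(p, x):
        # p holds (amplitude, centre, width) triplets; negative Gaussians
        # model overlapping absorptions below the continuum.
        y = np.zeros_like(x)
        for a, c, w in p.reshape(-1, 3):
            y -= a * np.exp(-0.5 * ((x - c) / w) ** 2)
        return y

    rng = np.random.default_rng(3)
    truth = np.array([0.10, 900.0, 40.0, 0.15, 1050.0, 60.0])
    spec = gaussians(truth, wav) + rng.normal(0, 0.002, wav.size)

    p0 = np.array([0.05, 880.0, 30.0, 0.05, 1080.0, 50.0])  # initial guesses
    fit = least_squares(lambda p: gaussians(p, wav) - spec, p0, method="lm")
    print(fit.x.reshape(-1, 3))   # recovered (amplitude, centre, width)
    ```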

  7. On the Use of Nonparametric Item Characteristic Curve Estimation Techniques for Checking Parametric Model Fit

    ERIC Educational Resources Information Center

    Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey

    2009-01-01

    The purpose of this study was to assess the model fit of the two-parameter logistic model (2PL) through comparison with nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that the three nonparametric procedures implemented produced ICCs that are similar to those of the 2PL for items simulated to fit the 2PL. However, for misfitting items,…

  8. Comparison of the A-Cc curve fitting methods in determining maximum ribulose 1,5-bisphosphate carboxylase/oxygenase carboxylation rate, potential light-saturated electron transport rate and leaf dark respiration.

    PubMed

    Miao, Zewei; Xu, Ming; Lathrop, Richard G; Wang, Yufei

    2009-02-01

    A review of the literature revealed that a variety of methods are currently used for fitting net CO2 assimilation versus chloroplastic CO2 concentration (A-Cc) curves, resulting in considerable differences in estimates of the A-Cc parameters [including maximum ribulose 1,5-bisphosphate carboxylase/oxygenase (Rubisco) carboxylation rate (Vcmax), potential light-saturated electron transport rate (Jmax), leaf dark respiration in the light (Rd), mesophyll conductance (gm) and triose-phosphate utilization (TPU)]. In this paper, we examined the impacts of fitting methods on the estimates of Vcmax, Jmax, TPU, Rd and gm using grid search and non-linear fitting techniques. Our results suggested that the fitting methods significantly affected the predictions of Rubisco-limited (Ac), ribulose 1,5-bisphosphate-limited (Aj) and TPU-limited (Ap) curves and leaf photosynthesis velocities because of inconsistent estimates of Vcmax, Jmax, TPU, Rd and gm, but they barely influenced the Jmax : Vcmax, Vcmax : Rd and Jmax : TPU ratios. In terms of fitting accuracy, simplicity of fitting procedures and sample size requirements, we recommend combining grid search and non-linear techniques to directly and simultaneously fit Vcmax, Jmax, TPU, Rd and gm to the whole A-Cc curve, in contrast to the conventional method, which fits Vcmax, Rd or gm first and then solves for Vcmax, Jmax and/or TPU with Vcmax, Rd and/or gm held as constants.

  9. Financial model calibration using consistency hints.

    PubMed

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese yen swaps market and the US dollar yield market.

  10. LASR-Guided Variability Subtraction: The Linear Algorithm for Significance Reduction of Stellar Seismic Activity

    NASA Astrophysics Data System (ADS)

    Horvath, Sarah; Myers, Sam; Ahlers, Johnathon; Barnes, Jason W.

    2017-10-01

    Stellar seismic activity produces variations in brightness that introduce oscillations into transit light curves, which can create challenges for traditional fitting models. These oscillations disrupt baseline stellar flux values and potentially mask transits. We develop a model that removes these oscillations from transit light curves by minimizing the significance of each oscillation in frequency space. By removing stellar variability, we prepare each light curve for traditional fitting techniques. We apply our model to the δ Scuti star KOI-976 and demonstrate that our variability subtraction routine successfully allows for measuring bulk system characteristics using traditional light curve fitting. These results open a new window for characterizing bulk system parameters of planets orbiting seismically active stars.
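
    A hedged sketch of the general idea only (not the LASR code): iteratively locate the strongest oscillation in frequency space, fit its amplitude and phase by linear least squares, and subtract it from a synthetic light curve:

    ```python
    # Iterative subtraction of the dominant oscillations from a light
    # curve; a conceptual illustration, not the published algorithm.
    import numpy as np

    rng = np.random.default_rng(4)
    t = np.linspace(0, 10, 2000)                     # days
    flux = (1.0 + 0.004 * np.sin(2 * np.pi * 12.3 * t)
                + 0.002 * np.sin(2 * np.pi * 19.7 * t + 1.0)
                + rng.normal(0, 5e-4, t.size))

    def subtract_strongest(t, y, n_modes=2):
        for _ in range(n_modes):
            freqs = np.fft.rfftfreq(t.size, t[1] - t[0])
            power = np.abs(np.fft.rfft(y - y.mean()))
            f = freqs[np.argmax(power[1:]) + 1]      # skip the DC bin
            # Amplitude and phase at frequency f via linear least squares.
            M = np.column_stack([np.sin(2*np.pi*f*t), np.cos(2*np.pi*f*t),
                                 np.ones_like(t)])
            coef, *_ = np.linalg.lstsq(M, y, rcond=None)
            y = y - M[:, :2] @ coef[:2]              # remove the oscillation
        return y

    clean = subtract_strongest(t, flux)
    print("scatter before/after:", flux.std(), clean.std())
    ```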

  11. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated non-linear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the chi-squared matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 sec.

  12. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO3 standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the chi-squared matrix or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s.
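
    The two records above treat the standard masses themselves as fitted quantities with known errors. A modern analogue of that consistent weighting is orthogonal distance regression; the sketch below uses scipy.odr with illustrative calibration data (VA02A itself is a Harwell library routine and is not reproduced here):

    ```python
    # Errors-in-variables calibration fit: both the standard masses (x)
    # and the detector response (y) carry known uncertainties.
    import numpy as np
    from scipy import odr

    mass = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])        # mg, gravimetric
    counts = np.array([410, 790, 1620, 2380, 3210, 3990])  # detector response
    sx = 0.002 * mass          # 0.2% mass uncertainty of the standards
    sy = np.sqrt(counts)       # counting statistics as the system error

    model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
    data = odr.RealData(mass, counts, sx=sx, sy=sy)
    out = odr.ODR(data, model, beta0=[4000.0, 10.0]).run()
    print("slope, intercept:", out.beta)
    print("parameter std errors:", out.sd_beta)
    ```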

  13. Decomposition and correction of overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    PubMed

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower-residual result. In quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectrum of five different concentrations of CuSO4·5H2O solution were decomposed and corrected using curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. A calibration curve between the intensity and concentration of Cu was then established. The error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and can be applied to the decomposition and correction of overlapping peaks in LIBS spectra.

  14. Fitting Prony Series To Data On Viscoelastic Materials

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1995-01-01

    Improved method of fitting Prony series to data on viscoelastic materials involves use of least-squares optimization techniques. The method, based on optimization techniques, yields closer correlation with data than the traditional method. It involves no assumptions regarding the gamma'_i terms and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in the data. The curve-fitting problem is treated as a design-optimization problem and solved by use of partially constrained optimization techniques.
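
    As a hedged sketch of one common least-squares route to a Prony series G(t) = G_inf + sum_i g_i exp(-t/tau_i) (a simpler linear variant, not the NASA design-optimization method itself): fix log-spaced relaxation times and solve for non-negative coefficients, using synthetic relaxation data:

    ```python
    # Prony series fit with fixed relaxation times and non-negative
    # least squares for the coefficients. Relaxation data are synthetic.
    import numpy as np
    from scipy.optimize import nnls

    t = np.logspace(-2, 3, 60)                         # time, s
    rng = np.random.default_rng(15)
    G = (1.0 + 4.0 * np.exp(-t / 0.1) + 2.0 * np.exp(-t / 50.0)
             + rng.normal(0, 0.02, t.size))

    taus = np.logspace(-3, 4, 15)                      # fixed Prony times
    A = np.hstack([np.ones((t.size, 1)), np.exp(-t[:, None] / taus[None, :])])
    coef, resid = nnls(A, G)
    print("G_inf ~", coef[0], "active terms:", np.count_nonzero(coef[1:]))
    ```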

  15. Hybrid Micro-Electro-Mechanical Tunable Filter

    DTIC Science & Technology

    2007-09-01

    Figure 2.10), one can see the developers have used surface micromachining techniques to build the micromirror structure over the CMOS addressing...DBRs, microcavity composition, initial air gap, contact layers, substrate Dispersion Data Curve-fit dispersion data or generate dispersion function...measurements • Curve-fit the dispersion data or generate a continuous, wavelength-dependent, representation of material dispersion • Manually design the

  16. Analysis of Learning Curve Fitting Techniques.

    DTIC Science & Technology

    1987-09-01

    1986. 15. Neter, John and others. Applied Linear Regression Models. Homewood IL: Irwin, 19-33. 16. SAS User's Guide: Basics, Version 5 Edition. SAS... Linear Regression Techniques (15:23-52). Random errors are assumed to be normally distributed when using ordinary least-squares, according to Johnston...lot estimated by the improvement curve formula. For a more detailed explanation of the ordinary least-squares technique, see Neter et al., Applied

  17. Significantly Reduced Blood Pressure Measurement Variability for Both Normotensive and Hypertensive Subjects: Effect of Polynomial Curve Fitting of Oscillometric Pulses

    PubMed Central

    Zhu, Mingping; Chen, Aiqing

    2017-01-01

    This study aimed to compare within-subject blood pressure (BP) variabilities from different measurement techniques. Cuff pressures from three repeated BP measurements were obtained from 30 normotensive and 30 hypertensive subjects. Automatic BPs were determined from the pulses with normalised peak amplitude larger than a threshold (0.5 for SBP, 0.7 for DBP, and 1.0 for MAP). They were also determined from the cuff pressures associated with the above thresholds on a polynomial curve fitted to the oscillometric pulse peaks. Finally, the standard deviation (SD) of three repeats and its coefficient of variability (CV) were compared between the two automatic techniques. For the normotensive group, polynomial curve fitting significantly reduced SD of repeats from 3.6 to 2.5 mmHg for SBP and from 3.7 to 2.1 mmHg for MAP and reduced CV from 3.0% to 2.2% for SBP and from 4.3% to 2.4% for MAP (all P < 0.01). For the hypertensive group, SD of repeats decreased from 6.5 to 5.5 mmHg for SBP and from 6.7 to 4.2 mmHg for MAP, and CV decreased from 4.2% to 3.6% for SBP and from 5.8% to 3.8% for MAP (all P < 0.05). In conclusion, polynomial curve fitting of oscillometric pulses had the ability to reduce automatic BP measurement variability. PMID:28785580
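
    A minimal sketch of the polynomial-curve step described above, with synthetic pulse amplitudes: fit a polynomial to the oscillometric envelope versus cuff pressure and read SBP, MAP and DBP at the normalized-amplitude thresholds (0.5, 1.0, 0.7):

    ```python
    # Polynomial fit of the oscillometric envelope; BPs are read off the
    # fitted curve at fixed amplitude ratios. Pulse data are synthetic.
    import numpy as np

    cuff = np.linspace(160, 40, 60)                    # deflating cuff, mmHg
    rng = np.random.default_rng(5)
    env = np.exp(-0.5 * ((cuff - 95) / 25) ** 2) + rng.normal(0, 0.04, cuff.size)

    coef = np.polyfit(cuff, env, 4)                    # polynomial envelope fit
    grid = np.linspace(cuff.min(), cuff.max(), 2000)
    fit = np.polyval(coef, grid)
    fit = fit / fit.max()                              # normalise peak to 1.0

    p_map = grid[np.argmax(fit)]                       # MAP at ratio 1.0
    high, low = grid > p_map, grid < p_map
    p_sbp = grid[high][np.argmin(np.abs(fit[high] - 0.5))]  # SBP side
    p_dbp = grid[low][np.argmin(np.abs(fit[low] - 0.7))]    # DBP side
    print(f"SBP~{p_sbp:.0f}, MAP~{p_map:.0f}, DBP~{p_dbp:.0f} mmHg")
    ```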

  18. Curve fitting and modeling with splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of its use.

  19. A curve fitting method for solving the flutter equation. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Cooper, J. L.

    1972-01-01

    A curve fitting approach was developed to solve the flutter equation for the critical flutter velocity. The psi versus nu curves are approximated by cubic and quadratic equations. The curve fitting technique utilized the first and second derivatives of psi with respect to nu. The method was tested for two structures, one structure being six times the total mass of the other structure. The algorithm never showed any tendency to diverge from the solution. The average time for the computation of a flutter velocity was 3.91 seconds on an IBM Model 50 computer for an accuracy of five per cent. For values of nu close to the critical root of the flutter equation the algorithm converged on the first attempt. The maximum number of iterations for convergence to the critical flutter velocity was five with an assumed value of nu relatively distant from the actual crossover.

  20. Curve fits of predicted inviscid stagnation-point radiative heating rates, cooling factors, and shock standoff distances for hyperbolic earth entry

    NASA Technical Reports Server (NTRS)

    Suttles, J. T.; Sullivan, E. M.; Margolis, S. B.

    1974-01-01

    Curve-fit formulas are presented for the stagnation-point radiative heating rate, cooling factor, and shock standoff distance for inviscid flow over blunt bodies at conditions corresponding to high-speed earth entry. The data which were curve fitted were calculated by using a technique which utilizes a one-strip integral method and a detailed nongray radiation model to generate a radiatively coupled flow-field solution for air in chemical and local thermodynamic equilibrium. The ranges of free-stream parameters considered were altitudes from about 55 to 70 km and velocities from about 11 to 16 km/sec. Spherical bodies with nose radii from 30 to 450 cm and elliptical bodies with major-to-minor axis ratios of 2, 4, and 6 were treated. Power-law formulas are proposed and a least-squares logarithmic fit is used to evaluate the constants. It is shown that the data can be described in this manner with an average deviation of about 3 percent (or less) and a maximum deviation of about 10 percent (or less). The curve-fit formulas provide an effective and economic means for making preliminary design studies for situations involving high-speed earth entry.
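
    The least-squares logarithmic fit of a power-law formula reduces to a straight-line fit in log-log space. A minimal sketch with synthetic values (not the entry-heating data):

    ```python
    # Power-law fit q = a * R**b via least squares on the logarithms.
    import numpy as np

    R = np.array([30, 60, 120, 240, 450], dtype=float)        # nose radius, cm
    rng = np.random.default_rng(6)
    q = 5.0e3 * R**0.5 * np.exp(rng.normal(0, 0.03, R.size))  # heating rate

    b, ln_a = np.polyfit(np.log(R), np.log(q), 1)
    a = np.exp(ln_a)
    dev = 100 * np.abs(a * R**b - q) / q
    print(f"q = {a:.3g} * R^{b:.3f}; mean deviation {dev.mean():.1f}%")
    ```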

  1. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    PubMed

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation.

  2. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

    PubMed Central

    W. Hasan, W. Z.

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation. PMID:29351554

  3. The training and learning process of transseptal puncture using a modified technique.

    PubMed

    Yao, Yan; Ding, Ligang; Chen, Wensheng; Guo, Jun; Bao, Jingru; Shi, Rui; Huang, Wen; Zhang, Shu; Wong, Tom

    2013-12-01

    As the transseptal (TS) puncture has become an integral part of many types of cardiac interventional procedures, its technique, first reported for measurement of left atrial pressure in the 1950s, continues to evolve. Our laboratory adopted a modified technique which uses only a coronary sinus catheter as the landmark to accomplish TS punctures under fluoroscopy. The aim of this study was to prospectively evaluate the training and learning process for TS puncture guided by this modified technique. Guided by the training protocol, TS puncture was performed in 120 consecutive patients by three trainees without previous personal experience in TS catheterization, with one experienced trainer as a controller. We analysed the following parameters: one-puncture success rate, total procedure time, fluoroscopic time, and radiation dose. The learning curve was analysed using curve-fitting methodology. The first attempt at TS crossing was successful in 74 patients (82%), a second attempt was successful in 11 (12%), and in 5 patients the interatrial septum could not be punctured. The average starting process time was 4.1 ± 0.8 min, and the estimated mean learning plateau was 1.2 ± 0.2 min. The estimated mean learning rate for process time was 25 ± 3 cases. Important aspects of the learning curve can be estimated by fitting inverse curves for TS puncture. The study demonstrated that this technique is a simple, safe, economical, and effective approach for learning TS puncture. Based on the statistical analysis, approximately 29 TS punctures will be needed for a trainee to pass the steepest part of the learning curve.
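
    A hedged sketch of fitting an inverse learning curve of the kind described above, with synthetic case times rather than the transseptal puncture series:

    ```python
    # Inverse learning curve: procedure time ~ plateau + b / case_number.
    import numpy as np
    from scipy.optimize import curve_fit

    n = np.arange(1, 41)                                  # case number
    rng = np.random.default_rng(7)
    time = 1.2 + 12.0 / n + rng.normal(0, 0.3, n.size)    # minutes

    def inverse_curve(n, plateau, b):
        return plateau + b / n

    (plateau, b), _ = curve_fit(inverse_curve, n, time, p0=[2.0, 5.0])
    print(f"starting time ~{plateau + b:.1f} min, plateau ~{plateau:.1f} min")
    ```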

  4. Estimating sunspot number

    NASA Technical Reports Server (NTRS)

    Wilson, R. M.; Reichmann, E. J.; Teuber, D. L.

    1984-01-01

    An empirical method is developed to predict certain parameters of future solar activity cycles. Sunspot cycle statistics are examined, and curve fitting and linear regression analysis techniques are utilized.

  5. Focusing of light through turbid media by curve fitting optimization

    NASA Astrophysics Data System (ADS)

    Gong, Changmei; Wu, Tengfei; Liu, Jietao; Li, Huijuan; Shao, Xiaopeng; Zhang, Jianqi

    2016-12-01

    The construction of wavefront phase plays a critical role in focusing light through turbid media. We introduce the curve fitting algorithm (CFA) into the feedback control procedure for wavefront optimization. Unlike the existing continuous sequential algorithm (CSA), the CFA locates the optimal phase by fitting a curve to the measured signals. Simulation results show that, similar to the genetic algorithm (GA), the proposed CFA technique is far less susceptible to the experimental noise than the CSA. Furthermore, only three measurements of feedback signals are enough for CFA to fit the optimal phase while obtaining a higher focal intensity than the CSA and the GA, dramatically shortening the optimization time by a factor of 3 compared with the CSA and the GA. The proposed CFA approach can be applied to enhance the focus intensity and boost the focusing speed in the fields of biological imaging, particle trapping, laser therapy, and so on, and might help to focus light through dynamic turbid media.
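
    For a feedback signal that varies sinusoidally with the phase of one modulated segment, three measurements determine the optimum in closed form, which is one way to read the three-measurement claim above. The sketch uses the standard three-step phase-shifting solution; the paper's exact fitting function may differ:

    ```python
    # Closed-form optimal phase from three intensity samples of
    # I(theta) = a + b*cos(theta - phi0) at theta = 0, 2pi/3, 4pi/3.
    import numpy as np

    def optimal_phase(I1, I2, I3):
        # Solve a + b*cos(theta - phi0) through the three samples.
        return np.arctan2(np.sqrt(3.0) * (I2 - I3), 2.0 * I1 - I2 - I3)

    # Check against a known phase:
    phi0 = 2.1
    I = [1.0 + 0.5 * np.cos(th - phi0) for th in (0.0, 2*np.pi/3, 4*np.pi/3)]
    print(optimal_phase(*I))   # ~2.1, the phase that maximises the focus
    ```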

  6. Runoff potentiality of a watershed through SCS and functional data analysis technique.

    PubMed

    Adham, M I; Shirazi, S M; Othman, F; Rahman, S; Yusop, Z; Ismail, Z

    2014-01-01

    Runoff potentiality of a watershed was assessed by identifying the curve number (CN) through Soil Conservation Service (SCS) and functional data analysis (FDA) techniques. Daily discrete rainfall data were collected from weather stations in the study area and smoothed with the lowess method. As runoff data show a periodic pattern in each watershed, a Fourier series was fitted to the smoothed curve of each of the eight watersheds. Seven Fourier terms were used for watersheds 5 and 8, while 8 terms were used for the remaining watersheds, for the best fit of the data. Bootstrapping smooth curve analysis reveals that watersheds 1, 2, 3, 6, 7, and 8 have monthly mean runoffs of 29, 24, 22, 23, 26, and 27 mm, respectively, and these watersheds would likely contribute to surface runoff in the study area. The purpose of this study was to transform runoff data into a smooth curve representing the surface runoff pattern and mean runoff of each watershed through statistical methods. This study provides information on the runoff potentiality of each watershed and also provides input data for hydrological modeling.
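
    A truncated Fourier series is linear in its coefficients, so the fit reduces to a design matrix and linear least squares. A minimal sketch with synthetic monthly runoff values:

    ```python
    # Truncated Fourier series fit of a periodic runoff curve.
    import numpy as np

    month = np.arange(1, 13, dtype=float)
    rng = np.random.default_rng(8)
    runoff = (25 + 10 * np.sin(2 * np.pi * month / 12)
                 + 4 * np.cos(4 * np.pi * month / 12)
                 + rng.normal(0, 1.0, month.size))      # mm

    def fourier_design(x, n_terms, period=12.0):
        cols = [np.ones_like(x)]
        for k in range(1, n_terms + 1):
            cols += [np.sin(2 * np.pi * k * x / period),
                     np.cos(2 * np.pi * k * x / period)]
        return np.column_stack(cols)

    M = fourier_design(month, 3)                        # three harmonics here
    coef, *_ = np.linalg.lstsq(M, runoff, rcond=None)
    print("monthly mean runoff (constant term):", coef[0])
    ```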

  7. Runoff Potentiality of a Watershed through SCS and Functional Data Analysis Technique

    PubMed Central

    Adham, M. I.; Shirazi, S. M.; Othman, F.; Rahman, S.; Yusop, Z.; Ismail, Z.

    2014-01-01

    Runoff potentiality of a watershed was assessed by identifying the curve number (CN) through Soil Conservation Service (SCS) and functional data analysis (FDA) techniques. Daily discrete rainfall data were collected from weather stations in the study area and smoothed with the lowess method. As runoff data show a periodic pattern in each watershed, a Fourier series was fitted to the smoothed curve of each of the eight watersheds. Seven Fourier terms were used for watersheds 5 and 8, while 8 terms were used for the remaining watersheds, for the best fit of the data. Bootstrapping smooth curve analysis reveals that watersheds 1, 2, 3, 6, 7, and 8 have monthly mean runoffs of 29, 24, 22, 23, 26, and 27 mm, respectively, and these watersheds would likely contribute to surface runoff in the study area. The purpose of this study was to transform runoff data into a smooth curve representing the surface runoff pattern and mean runoff of each watershed through statistical methods. This study provides information on the runoff potentiality of each watershed and also provides input data for hydrological modeling. PMID:25152911

  8. Photometric Supernova Classification with Machine Learning

    NASA Astrophysics Data System (ADS)

    Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.

    2016-08-01

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.

  9. Temporal binning of time-correlated single photon counting data improves exponential decay fits and imaging speed

    PubMed Central

    Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.

    2016-01-01

    Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
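
    A hedged sketch of the temporal binning described above: sum groups of adjacent TCSPC time bins (here 252 of 256 bins, factor 6, giving 42 bins) and fit a two-component decay to the binned histogram. The decay is simulated, and instrument-response deconvolution is omitted:

    ```python
    # Temporal binning of a TCSPC decay followed by a biexponential fit.
    import numpy as np
    from scipy.optimize import curve_fit

    n_bins, window = 256, 12.5                         # bins, window in ns
    t = (np.arange(n_bins) + 0.5) * window / n_bins
    rng = np.random.default_rng(9)
    decay = rng.poisson(40 * np.exp(-t / 0.4) + 60 * np.exp(-t / 2.5))

    # Temporal binning: trim to a multiple of 6, then sum groups of 6.
    binned = decay[:252].reshape(42, 6).sum(axis=1)
    t_b = t[:252].reshape(42, 6).mean(axis=1)

    def biexp(t, a1, tau1, a2, tau2):
        return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

    p, _ = curve_fit(biexp, t_b, binned, p0=[300, 0.3, 300, 2.0])
    print("tau1, tau2 =", p[1], p[3])
    ```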

  10. An interactive user-friendly approach to surface-fitting three-dimensional geometries

    NASA Technical Reports Server (NTRS)

    Cheatwood, F. Mcneil; Dejarnette, Fred R.

    1988-01-01

    A surface-fitting technique has been developed which addresses two problems with existing geometry packages: computer storage requirements and the time required of the user for the initial setup of the geometry model. Coordinates of cross sections are fit using segments of general conic sections. The next step is to blend the cross-sectional curve-fits in the longitudinal direction using general conics to fit specific meridional half-planes. Provisions are made to allow the fitting of fuselages and wings so that entire wing-body combinations may be modeled. This report includes the development of the technique along with a User's Guide for the various menus within the program. Results for the modeling of the Space Shuttle and a proposed Aeroassist Flight Experiment geometry are presented.

  11. Videodensitometric Methods for Cardiac Output Measurements

    NASA Astrophysics Data System (ADS)

    Mischi, Massimo; Kalker, Ton; Korsten, Erik

    2003-12-01

    Cardiac output is often measured by indicator dilution techniques, usually based on dye or cold saline injections. Developments of more stable ultrasound contrast agents (UCA) are leading to new noninvasive indicator dilution methods. However, several problems concerning the interpretation of dilution curves as detected by ultrasound transducers have arisen. This paper presents a method for blood flow measurements based on UCA dilution. Dilution curves are determined by real-time densitometric analysis of the video output of an ultrasound scanner and are automatically fitted by the Local Density Random Walk model. A new fitting algorithm based on multiple linear regression is developed. Calibration, that is, the relation between videodensity and UCA concentration, is modelled by in vitro experimentation. The flow measurement system is validated by in vitro perfusion of SonoVue contrast agent. The results show an accurate dilution curve fit and flow estimation with determination coefficient larger than 0.95 and 0.99, respectively.

  12. Modeling and Maximum Likelihood Fitting of Gamma-Ray and Radio Light Curves of Millisecond Pulsars Detected with Fermi

    NASA Technical Reports Server (NTRS)

    Johnson, T. J.; Harding, A. K.; Venter, C.

    2012-01-01

    Pulsed gamma rays have been detected with the Fermi Large Area Telescope (LAT) from more than 20 millisecond pulsars (MSPs), some of which were discovered in radio observations of bright, unassociated LAT sources. We have fit the radio and gamma-ray light curves of 19 LAT-detected MSPs in the context of geometric, outer-magnetospheric emission models assuming the retarded vacuum dipole magnetic field, using a Markov chain Monte Carlo maximum likelihood technique. We find that, in many cases, the models are able to reproduce the observed light curves well and provide constraints on the viewing geometries that are in agreement with those from radio polarization measurements. Additionally, for some MSPs we constrain the altitudes of both the gamma-ray and radio emission regions. The best-fit magnetic inclination angles are found to cover a broader range than those of non-recycled gamma-ray pulsars.

  13. Application of separable parameter space techniques to multi-tracer PET compartment modeling.

    PubMed

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-02-07

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.

  14. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Jeff L.; Morey, A. Michael; Kadrmas, Dan J.

    2016-02-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
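
    A minimal sketch of the separable (variable projection) idea in the two records above: for fixed nonlinear rate constants the model is linear in its amplitudes, so the inner problem is a linear least-squares solve and the outer search runs only over the rates. The two-exponential curve here is a generic stand-in, not a PET compartment model:

    ```python
    # Separable least squares (variable projection) for a sum of
    # exponentials: linear amplitudes are projected out analytically.
    import numpy as np
    from scipy.optimize import minimize

    t = np.linspace(0.1, 60, 120)                      # minutes
    rng = np.random.default_rng(10)
    y = 5.0*np.exp(-0.05*t) + 2.0*np.exp(-0.5*t) + rng.normal(0, 0.05, t.size)

    def projected_sse(rates):
        if np.any(rates <= 0):                         # keep rates physical
            return np.inf
        M = np.exp(-np.outer(t, rates))                # basis for these rates
        amps, *_ = np.linalg.lstsq(M, y, rcond=None)   # inner linear solve
        return np.sum((M @ amps - y) ** 2)

    res = minimize(projected_sse, x0=[0.1, 1.0], method="Nelder-Mead")
    M = np.exp(-np.outer(t, res.x))
    amps, *_ = np.linalg.lstsq(M, y, rcond=None)
    print("rates:", res.x, "amplitudes:", amps)
    ```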

  15. TH-EF-207A-04: A Dynamic Contrast Enhanced Cone Beam CT Technique for Evaluation of Renal Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Z; Shi, J; Yang, Y

    Purpose: To develop a simple but robust method for the early detection and evaluation of renal function using a dynamic contrast enhanced cone beam CT technique. Methods: Experiments were performed on an integrated imaging and radiation research platform developed by our lab. Animals (n=3) were anesthetized with 20 uL Ketamine/Xylazine cocktail, and then received a 200 uL injection of the iodinated contrast agent Iopamidol via tail vein. Cone beam CT was acquired following contrast injection once per minute for up to 25 minutes. The cone beam CT was reconstructed with a dimension of 300×300×800 voxels at 130×130×130 um voxel resolution. The middle kidney slices in the transverse and coronal planes were selected for image analysis. A double exponential function was used to fit the contrast-enhanced signal intensity versus the time after contrast injection. Both pixel-based and region of interest (ROI)-based curve fitting were performed. Four parameters obtained from the curve fitting, namely the amplitude and flow constant for both the contrast wash-in and wash-out phases, were investigated for further analysis. Results: Robust curve fitting was demonstrated for both pixel-based (with R^2 > 0.8 for >85% of pixels within the kidney contour) and ROI-based (R^2 > 0.9 for all regions) analysis. Three different functional regions (renal pelvis, medulla, and cortex) were clearly differentiated in the functional parameter map in the pixel-based analysis. ROI-based analysis showed that the half-lives T1/2 for the contrast wash-in and wash-out phases were 0.98±0.15 and 17.04±7.16, 0.63±0.07 and 17.88±4.51, and 1.48±0.40 and 10.79±3.88 minutes for the renal pelvis, medulla, and cortex, respectively. Conclusion: A robust method based on dynamic contrast enhanced cone beam CT and double exponential curve fitting has been developed to analyze renal function in different functional regions. Future studies will investigate the sensitivity of this technique in the detection of radiation-induced kidney dysfunction.

  16. PHOTOMETRIC SUPERNOVA CLASSIFICATION WITH MACHINE LEARNING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lochner, Michelle; Peiris, Hiranya V.; Lahav, Ofer

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.

  17. Characterization of acid functional groups of carbon dots by nonlinear regression data fitting of potentiometric titration curves

    NASA Astrophysics Data System (ADS)

    Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.

    2016-05-01

    The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discuss fitting potentiometric titration curves using a nonlinear regression method based on the Levenberg-Marquardt algorithm. The results obtained by statistical treatment of the titration curve data showed that the best fit was obtained by considering the presence of five Brønsted-Lowry acids on the surface of the carbon dots, with ionization constants characteristic of carboxylic acids, cyclic esters, phenolic and pyrone-like groups. The total amount of oxygenated acid groups obtained was 5 mmol g^-1, with approximately 65% (∼2.9 mmol g^-1) originating from groups with pKa < 6. The methodology showed good reproducibility and stability, with standard deviations below 5%. The nature of the groups was independent of small variations in experimental conditions, i.e. the mass of carbon dots titrated and the initial concentration of the HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool to characterize the complex acid-base properties of these intriguing nanoparticles.

  18. Note: Index of refraction measurement using the Fresnel equations.

    PubMed

    McClymer, J P

    2014-08-01

    The real part of the refractive index is measured from 1.30 to above 3.00 without the use of index-matching fluids. This approach expands upon the Brewster angle technique: both S- and P-polarized light are used, and the full Fresnel equations are fitted to the data to extract the index of refraction using nonlinear curve fitting.
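
    A minimal sketch of the technique in the note above: compute S- and P-polarization reflectances from the Fresnel equations and fit the refractive index by nonlinear least squares, with measured reflectances simulated here for n = 1.52:

    ```python
    # Fit the refractive index n by matching Fresnel reflectances
    # for both polarizations. Measurements are simulated.
    import numpy as np
    from scipy.optimize import curve_fit

    def fresnel_R(theta_deg, n):
        th = np.radians(theta_deg)
        cos_t = np.sqrt(1.0 - (np.sin(th) / n) ** 2)   # transmitted angle
        rs = (np.cos(th) - n * cos_t) / (np.cos(th) + n * cos_t)
        rp = (n * np.cos(th) - cos_t) / (n * np.cos(th) + cos_t)
        return np.concatenate([rs**2, rp**2])          # stack R_s and R_p

    angles = np.linspace(10, 80, 15)
    rng = np.random.default_rng(11)
    R_meas = fresnel_R(angles, 1.52) + rng.normal(0, 0.002, 2 * angles.size)

    (n_fit,), _ = curve_fit(fresnel_R, angles, R_meas, p0=[1.4])
    print("fitted index:", n_fit)
    ```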

  19. Two imaging techniques for 3D quantification of pre-cementation space for CAD/CAM crowns.

    PubMed

    Rungruanganunt, Patchanee; Kelly, J Robert; Adams, Douglas J

    2010-12-01

    Internal three-dimensional (3D) "fit" of prostheses to prepared teeth is likely more important clinically than "fit" judged only at the level of the margin (i.e. marginal "opening"). This work evaluates two techniques for quantitatively defining 3D "fit", both using pre-cementation space impressions: X-ray microcomputed tomography (micro-CT) and quantitative optical analysis. Both techniques are of interest for comparison of CAD/CAM system capabilities and for documenting "fit" as part of clinical studies. Pre-cementation space impressions were taken of a single zirconia coping on its die using a low viscosity poly(vinyl siloxane) impression material. Calibration specimens of this material were fabricated between the measuring platens of a micrometre. Both calibration curves and pre-cementation space impression data sets were obtained by examination using micro-CT and quantitative optical analysis. Regression analysis was used to compare calibration curves with calibration sets. Micro-CT calibration data showed tighter 95% confidence intervals and was able to measure over a wider thickness range than for the optical technique. Regions of interest (e.g., lingual, cervical) were more easily analysed with optical image analysis and this technique was more suitable for extremely thin impression walls (<10-15μm). Specimen preparation is easier for micro-CT and segmentation parameters appeared to capture dimensions accurately. Both micro-CT and the optical method can be used to quantify the thickness of pre-cementation space impressions. Each has advantages and limitations but either technique has the potential for use as part of clinical studies or CAD/CAM protocol optimization. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. Visualization and curve-parameter estimation strategies for efficient exploration of phenotype microarray kinetics.

    PubMed

    Vaas, Lea A I; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter

    2012-01-01

    The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed '-omics' techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. Our results form the basis for a freely available R package for the analysis of PM data.
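
    A hedged sketch of the spline route discussed above, transplanted to Python (the authors' R package is not reproduced): smooth a synthetic respiration curve and read off growth-curve-style parameters (maximum slope, lag, and maximum value) from the fitted spline:

    ```python
    # Smoothing-spline extraction of growth-curve parameters from a
    # respiration curve. Curve is synthetic; lag is the x-intercept of
    # the tangent at the point of maximum slope.
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    t = np.linspace(0, 96, 97)                          # hours
    rng = np.random.default_rng(12)
    resp = 200.0 / (1.0 + np.exp(-(t - 30) / 6)) + rng.normal(0, 4, t.size)

    spl = UnivariateSpline(t, resp, s=t.size * 16.0)    # smoothing spline
    slope = spl.derivative()(t)
    i = np.argmax(slope)
    mu = float(slope[i])                                # maximum slope
    lam = float(t[i] - spl(t[i]) / mu)                  # lag time
    A = float(spl(t).max())                             # maximum respiration
    print(f"lag~{lam:.1f} h, max slope~{mu:.2f}, max~{A:.0f}")
    ```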

  1. Visualization and Curve-Parameter Estimation Strategies for Efficient Exploration of Phenotype Microarray Kinetics

    PubMed Central

    Vaas, Lea A. I.; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter

    2012-01-01

    Background: The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed ‘-omics’ techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. Methodology: The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. Conclusions: We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. Our results form the basis for a freely available R package for the analysis of PM data. PMID:22536335

  2. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    PubMed Central

    Zhang, Jeff L; Morey, A Michael; Kadrmas, Dan J

    2016-01-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. PMID:26788888

  3. JMFA2—a graphically interactive Java program that fits microfibril angle X-ray diffraction data

    Treesearch

    Steve P. Verrill; David E. Kretschmann; Victoria L. Herian

    2006-01-01

    X-ray diffraction techniques have the potential to dramatically decrease the time required to determine microfibril angles. In this paper, we discuss the latest version of a curve-fitting tool that permits us to reduce the time required to evaluate MFA X-ray diffraction patterns. Further, because this tool reflects the underlying physics more accurately than existing...

  4. Simplified curve fits for the thermodynamic properties of equilibrium air

    NASA Technical Reports Server (NTRS)

    Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.

    1987-01-01

    New, improved curve fits for the thermodynamic properties of equilibrium air have been developed. The curve fits are for pressure, speed of sound, temperature, entropy, enthalpy, density, and internal energy. These curve fits can be readily incorporated into new or existing computational fluid dynamics codes if real gas effects are desired. The curve fits are constructed from Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits. These improvements are due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10⁻⁷ to 10³ amagats.

  5. Ultrasonic velocity profiling rheometry based on a widened circular Couette flow

    NASA Astrophysics Data System (ADS)

    Shiratori, Takahisa; Tasaka, Yuji; Oishi, Yoshihiko; Murai, Yuichi

    2015-08-01

    We propose a new rheometry for characterizing the rheological properties of fluids. The technique produces flow curves, which represent the relationship between the fluid shear rate and shear stress. Flow curves are obtained by measuring the circumferential velocity distribution of tested fluids in a circular Couette system, using an ultrasonic velocity profiling technique. By adopting a widened gap of concentric cylinders, a designed range of the shear rate is obtained so that velocity profile measurement along a single line directly acquires flow curves. To reduce the effect of ultrasonic noise on resultant flow curves, several fitting functions and variable transforms are examined to best approximate the velocity profile without introducing a priori rheological models. Silicone oil, polyacrylamide solution, and yogurt were used to evaluate the applicability of this technique. These substances are purposely targeted as examples of Newtonian fluids, shear thinning fluids, and opaque fluids with unknown rheological properties, respectively. We find that fourth-order Chebyshev polynomials provide the most accurate representation of flow curves in the context of model-free rheometry enabled by ultrasonic velocity profiling.
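
A minimal sketch of the reported fourth-order Chebyshev representation: fit a circumferential velocity profile u(r) with a Chebyshev polynomial and differentiate it to obtain the local shear rate in the gap. The Couette geometry, Newtonian profile, and noise level below are illustrative assumptions, not the paper's data.

```python
import numpy as np
from numpy.polynomial import Chebyshev

r = np.linspace(0.02, 0.07, 40)                      # radius across the gap [m]
omega_i, r_i, r_o = 10.0, 0.02, 0.07                 # inner cylinder rotating
u = omega_i * r_i**2 / (r_o**2 - r_i**2) * (r_o**2 / r - r)  # Newtonian profile
u += np.random.default_rng(2).normal(0, 0.002, r.size)       # ultrasonic noise

fit = Chebyshev.fit(r, u, deg=4)                     # model-free polynomial fit
du_dr = fit.deriv()
shear_rate = du_dr(r) - fit(r) / r                   # gamma_dot = du/dr - u/r
print(shear_rate[:5])
```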

  6. Inference and analysis of xenon outflow curves under multi-pulse injection in two-dimensional chromatography.

    PubMed

    Shu-Jiang, Liu; Zhan-Ying, Chen; Yin-Zhong, Chang; Shi-Lian, Wang; Qi, Li; Yuan-Qing, Fan

    2013-10-11

    Multidimensional gas chromatography is widely applied to atmospheric xenon monitoring for the Comprehensive Nuclear-Test-Ban Treaty (CTBT). To improve the capability for xenon sampling from the atmosphere, sampling techniques have been investigated in detail. The sampling techniques are designed around xenon outflow curves, which are influenced by many factors; the injection condition is one of the key factors that can influence the xenon outflow curves. In this paper, the xenon outflow curve of single-pulse injection in two-dimensional gas chromatography has been tested and fitted with an exponentially modified Gaussian distribution. An inference formula of the xenon outflow curve for six-pulse injection is derived, and the inference formula is tested against the fitted formula of the xenon outflow curve. As a result, the curves of both the one-pulse and six-pulse injections obey the exponentially modified Gaussian distribution when the temperature of the activated carbon column is 26°C and the flow rate of the carrier gas is 35.6 mL min⁻¹. The retention time of the xenon peak for one-pulse injection is 215 min, and the peak width is 138 min. For the six-pulse injection, however, the retention time is delayed to 255 min, and the peak width broadens to 222 min. According to the inferred formula of the xenon outflow curve for the six-pulse injection, the inferred retention time is 243 min, with a relative deviation of 4.7%, and the inferred peak width is 225 min, with a relative deviation of 1.3%. Copyright © 2013 Elsevier B.V. All rights reserved.
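
For concreteness, a hedged sketch of fitting an elution peak with an exponentially modified Gaussian, as used for the single-pulse outflow curve above. The parameterization (area A, centre mu, width sigma, tail tau) and the synthetic data are assumptions, not the paper's values.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg(t, A, mu, sigma, tau):
    # Exponentially modified Gaussian: Gaussian(mu, sigma) convolved with
    # an exponential tail of timescale tau; A is the peak area.
    arg = (mu - t) / (np.sqrt(2) * sigma) + sigma / (np.sqrt(2) * tau)
    return (A / (2 * tau)) * np.exp((mu - t) / tau + sigma**2 / (2 * tau**2)) * erfc(arg)

t = np.linspace(0, 600, 601)                          # minutes
y = emg(t, 100.0, 215.0, 30.0, 60.0)
y += np.random.default_rng(3).normal(0, 0.01, t.size)

popt, pcov = curve_fit(emg, t, y, p0=[80, 200, 20, 40])
print("A, mu, sigma, tau =", popt)
```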

  7. Recalcitrant vulnerability curves: methods of analysis and the concept of fibre bridges for enhanced cavitation resistance.

    PubMed

    Cai, Jing; Li, Shan; Zhang, Haixin; Zhang, Shuoxin; Tyree, Melvin T

    2014-01-01

    Vulnerability curves (VCs) generally can be fitted to the Weibull equation; however, a growing number of VCs appear to be recalcitrant, that is, they deviate from a single Weibull but seem to fit dual Weibull curves. We hypothesize that dual Weibull curves in Hippophae rhamnoides L. are due to different vessel diameter classes, inter-vessel hydraulic connections or vessels versus fibre tracheids. We used dye staining techniques, hydraulic measurements and quantitative anatomy measurements to test these hypotheses. The fibres contribute 1.3% of the total stem conductivity, which eliminates the hypothesis that fibre tracheids account for the second Weibull curve. Nevertheless, the staining pattern of vessels and fibre tracheids suggested that fibres might function as a hydraulic bridge between adjacent vessels. We also argue that fibre bridges are safer than vessel-to-vessel pits and put forward the concept as a new paradigm. Hence, we tentatively propose that the first Weibull curve may be accounted for by vessels connected to each other directly by pit fields, while the second Weibull curve is associated with vessels that are connected almost exclusively by fibre bridges. Further research is needed to test the concept of fibre bridge safety in species that have recalcitrant or normal Weibull curves. © 2013 John Wiley & Sons Ltd.
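
A minimal sketch, not the authors' analysis: fitting percent loss of conductivity against xylem tension with a single Weibull curve and with a two-component (dual) Weibull mixture of the kind invoked for recalcitrant curves. Data and starting values are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull(P, b, c):
    # Single Weibull vulnerability curve: PLC as a function of tension P.
    return 100.0 * (1.0 - np.exp(-(P / b) ** c))

def dual_weibull(P, w, b1, c1, b2, c2):
    # Weighted mixture of two Weibull components.
    return w * weibull(P, b1, c1) + (1.0 - w) * weibull(P, b2, c2)

P = np.linspace(0.1, 8.0, 30)                         # tension (MPa)
y = dual_weibull(P, 0.6, 1.5, 3.0, 5.0, 6.0)
y += np.random.default_rng(4).normal(0, 2.0, P.size)

p1, _ = curve_fit(weibull, P, y, p0=[3.0, 2.0])
p2, _ = curve_fit(dual_weibull, P, y, p0=[0.5, 1.0, 2.0, 4.0, 4.0],
                  bounds=([0, 0.1, 0.5, 0.1, 0.5], [1, 10, 10, 10, 10]))
print("single:", p1, "dual:", p2)
```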

  8. Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting, based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. Our aim is to apply the methodology to smoothing experimental data where some knowledge about the approximate shape, local inhomogeneities, or points where the desired function changes its curvature is known a priori or can be derived from the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.

  9. Enrollment Projection within a Decision-Making Framework.

    ERIC Educational Resources Information Center

    Armstrong, David F.; Nunley, Charlene Wenckowski

    1981-01-01

    Two methods used to predict enrollment at Montgomery College in Maryland are compared and evaluated, and the administrative context in which they are used is considered. The two methods involve time series analysis (curve fitting) and indicator techniques (yield from components). (MSE)

  10. Nonlinear Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.; Badavi, Forooz F.

    1989-01-01

    Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve-fitting routine based on a description of the quadratic expansion of the chi-square (χ²) statistic. Utilizes nonlinear optimization algorithm to calculate best statistically weighted values of parameters of fitting function such that χ² is minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.
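
NLINEAR itself is interactive FORTRAN 77; the following is only a hedged Python sketch of the same ingredients: a statistically weighted nonlinear fit minimizing chi-square, with parameter errors from the covariance matrix and chi-square per degree of freedom as the goodness-of-fit figure. The model and data are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    return a * np.exp(-b * x) + c

x = np.linspace(0, 10, 50)
sigma = np.full(x.size, 0.05)                        # measurement uncertainties
y = model(x, 2.0, 0.7, 0.3) + np.random.default_rng(5).normal(0, 0.05, x.size)

popt, pcov = curve_fit(model, x, y, p0=[1, 1, 0], sigma=sigma, absolute_sigma=True)
resid = (y - model(x, *popt)) / sigma
chi2 = np.sum(resid**2)
dof = x.size - popt.size
print("params:", popt)
print("1-sigma errors:", np.sqrt(np.diag(pcov)))
print(f"chi2/dof = {chi2:.1f}/{dof}")                # goodness of fit
```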

  11. Dynamic Analysis of Recalescence Process and Interface Growth of Eutectic Fe82B17Si1 Alloy

    NASA Astrophysics Data System (ADS)

    Fan, Y.; Liu, A. M.; Chen, Z.; Li, P. Z.; Zhang, C. H.

    2018-03-01

    By employing the glass fluxing technique in combination with cyclical superheating, the microstructural evolution of the undercooled Fe82B17Si1 alloy over the obtained undercooling range was studied. With increasing undercooling, the cooling curves exhibited a transition from one recalescence to two recalescences, followed by a return to one recalescence. The two types of cooling curves were fitted by the break equation and the Johnson-Mehl-Avrami-Kolmogorov model. Based on the cooling curves at different undercoolings, the recalescence rate was calculated by the multi-logistic growth model and the Boettinger-Coriel-Trivedi model. Both the recalescence features and the interface growth kinetics of the eutectic Fe82B17Si1 alloy were explored. The fitting results were consistent with the microstructural evolution observed using TEM (SAED), SEM and XRD. Finally, the relationship between the microstructure and hardness was also investigated.
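
A hedged sketch of a Johnson-Mehl-Avrami-Kolmogorov fit of the transformed fraction, X(t) = 1 - exp(-k t^n); the paper's actual reduction of the cooling curves is more involved, and the data here are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def jmak(t, k, n):
    # JMAK kinetics: transformed fraction versus time since nucleation.
    return 1.0 - np.exp(-k * t**n)

t = np.linspace(0.01, 2.0, 60)                       # time since nucleation (ms)
X = jmak(t, 1.8, 2.5) + np.random.default_rng(6).normal(0, 0.01, t.size)

(k, n), _ = curve_fit(jmak, t, X, p0=[1.0, 2.0])
print(f"k = {k:.2f}, Avrami exponent n = {n:.2f}")
```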

  12. UNSUPERVISED TRANSIENT LIGHT CURVE ANALYSIS VIA HIERARCHICAL BAYESIAN INFERENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, N. E.; Soderberg, A. M.; Betancourt, M., E-mail: nsanders@cfa.harvard.edu

    2015-02-10

    Historically, light curve studies of supernovae (SNe) and other transient classes have focused on individual objects with copious and high signal-to-noise observations. In the nascent era of wide field transient searches, objects with detailed observations are decreasing as a fraction of the overall known SN population, and this strategy sacrifices the majority of the information contained in the data about the underlying population of transients. A population level modeling approach, simultaneously fitting all available observations of objects in a transient sub-class of interest, fully mines the data to infer the properties of the population and avoids certain systematic biases. We present a novel hierarchical Bayesian statistical model for population level modeling of transient light curves, and discuss its implementation using an efficient Hamiltonian Monte Carlo technique. As a test case, we apply this model to the Type IIP SN sample from the Pan-STARRS1 Medium Deep Survey, consisting of 18,837 photometric observations of 76 SNe, corresponding to a joint posterior distribution with 9176 parameters under our model. Our hierarchical model fits provide improved constraints on light curve parameters relevant to the physical properties of their progenitor stars relative to modeling individual light curves alone. Moreover, we directly evaluate the probability for occurrence rates of unseen light curve characteristics from the model hyperparameters, addressing observational biases in survey methodology. We view this modeling framework as an unsupervised machine learning technique with the ability to maximize scientific returns from data to be collected by future wide field transient searches like LSST.

  13. Simplified curve fits for the thermodynamic properties of equilibrium air

    NASA Technical Reports Server (NTRS)

    Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.

    1986-01-01

    New improved curve fits for the thermodynamic properties of equilibrium air were developed. The curve fits are for p = p(e,ρ), a = a(e,ρ), T = T(e,ρ), s = s(e,ρ), T = T(p,ρ), h = h(p,ρ), ρ = ρ(p,s), e = e(p,s) and a = a(p,s). These curve fits can be readily incorporated into new or existing Computational Fluid Dynamics (CFD) codes if real-gas effects are desired. The curve fits were constructed using Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits appearing in NASA CR-2470. These improvements were due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10⁻⁷ to 100 amagats (ρ/ρ₀).

  14. On the cost of approximating and recognizing a noise perturbed straight line or a quadratic curve segment in the plane. [central processing units

    NASA Technical Reports Server (NTRS)

    Cooper, D. B.; Yalabik, N.

    1975-01-01

    Approximation of noisy data in the plane by straight lines or elliptic or single-branch hyperbolic curve segments arises in pattern recognition, data compaction, and other problems. The efficient search for and approximation of data by such curves were examined. Recursive least-squares linear curve-fitting was used, and ellipses and hyperbolas were parameterized as quadratic functions in x and y. The error minimized by the algorithm is interpreted, and central processing unit (CPU) times for estimating parameters for fitting straight lines and quadratic curves were determined and compared. CPU time for data search was also determined for the case of straight line fitting. Quadratic curve fitting is shown to require about six times as much CPU time as straight line fitting, and curves relating CPU time and fitting error were determined for straight line fitting. Results are derived on the early sequential determination of whether or not the underlying curve is a straight line.
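
A minimal sketch of recursive least-squares line fitting of the kind used above: the estimate and its covariance are updated one sample at a time, so a fit (and its error) is available after every new data point for sequential decisions. Data and noise are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0, 10, 100)
y = 1.5 * x + 2.0 + rng.normal(0, 0.3, x.size)

theta = np.zeros(2)                # [slope, intercept]
P = np.eye(2) * 1e6                # large initial covariance (weak prior)
for xi, yi in zip(x, y):
    phi = np.array([xi, 1.0])              # regressor for y = a*x + b
    K = P @ phi / (1.0 + phi @ P @ phi)    # gain
    theta = theta + K * (yi - phi @ theta) # innovation update
    P = P - np.outer(K, phi @ P)           # covariance update
print("slope, intercept =", theta)
```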

  15. Measurement and fitting techniques for the assessment of material nonlinearity using nonlinear Rayleigh waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torello, David; Kim, Jin-Yeon; Qu, Jianmin

    2015-03-31

    This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β₁₁ is proposed based on a nonlinear least squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens, and a measured ratio β₁₁(7075)/β₁₁(2024) of 1.363 agrees well with previous literature and earlier work.

  16. Three-dimensional simulation of human teeth and its application in dental education and research.

    PubMed

    Koopaie, Maryam; Kolahdouz, Sajad

    2016-01-01

    Background: A comprehensive database, comprising geometry and properties of human teeth, is needed for dentistry education and dental research. The aim of this study was to create a three-dimensional model of human teeth to improve dental E-learning and dental research. Methods: In this study, cross-sectional images were used to build the three-dimensional model of the teeth. CT-Scan images were used in the first method. The spacing between the cross-sectional images was about 200 to 500 micrometers. The hard tissue margin was detected in each image using Matlab (R2009b) as image processing software. The images were transferred to Solidworks 2015 software. The tooth border curve was fitted with B-spline curves, using a least-squares curve-fitting algorithm. After transferring all curves for each tooth to Solidworks, the surface was created based on the surface fitting technique. This surface was meshed in Meshlab-v132 software, and optimization of the surface was done based on the remeshing technique. The mechanical properties of the teeth were applied to the dental model. Results: This study presented a methodology for communication between CT-Scan images and finite element and training software, through which modeling and simulation of the teeth were performed. In this study, cross-sectional images were used for modeling. According to the findings, the cost and time were reduced compared to other studies. Conclusion: The three-dimensional model method presented in this study facilitates the learning of dental students and dentists. Based on the three-dimensional model proposed in this study, designing and manufacturing implants and dental prostheses are possible.

  17. Three-dimensional simulation of human teeth and its application in dental education and research

    PubMed Central

    Koopaie, Maryam; Kolahdouz, Sajad

    2016-01-01

    Background: A comprehensive database, comprising geometry and properties of human teeth, is needed for dentistry education and dental research. The aim of this study was to create a three-dimensional model of human teeth to improve dental E-learning and dental research. Methods: In this study, cross-sectional images were used to build the three-dimensional model of the teeth. CT-Scan images were used in the first method. The spacing between the cross-sectional images was about 200 to 500 micrometers. The hard tissue margin was detected in each image using Matlab (R2009b) as image processing software. The images were transferred to Solidworks 2015 software. The tooth border curve was fitted with B-spline curves, using a least-squares curve-fitting algorithm. After transferring all curves for each tooth to Solidworks, the surface was created based on the surface fitting technique. This surface was meshed in Meshlab-v132 software, and optimization of the surface was done based on the remeshing technique. The mechanical properties of the teeth were applied to the dental model. Results: This study presented a methodology for communication between CT-Scan images and finite element and training software, through which modeling and simulation of the teeth were performed. In this study, cross-sectional images were used for modeling. According to the findings, the cost and time were reduced compared to other studies. Conclusion: The three-dimensional model method presented in this study facilitates the learning of dental students and dentists. Based on the three-dimensional model proposed in this study, designing and manufacturing implants and dental prostheses are possible. PMID:28491836

  18. Detecting time-specific differences between temporal nonlinear curves: Analyzing data from the visual world paradigm

    PubMed Central

    Oleson, Jacob J; Cavanaugh, Joseph E; McMurray, Bob; Brown, Grant

    2015-01-01

    In multiple fields of study, time series measured at high frequencies are used to estimate population curves that describe the temporal evolution of some characteristic of interest. These curves are typically nonlinear, and the deviations of each series from the corresponding curve are highly autocorrelated. In this scenario, we propose a procedure to compare the response curves for different groups at specific points in time. The method involves fitting the curves, performing potentially hundreds of serially correlated tests, and appropriately adjusting the overall alpha level of the tests. Our motivating application comes from psycholinguistics and the visual world paradigm. We describe how the proposed technique can be adapted to compare fixation curves within subjects as well as between groups. Our results lead to conclusions beyond the scope of previous analyses. PMID:26400088

  19. Observing Globular Cluster RR Lyrae Variables with the BYU West Mountain Observatory

    NASA Astrophysics Data System (ADS)

    Jeffery, E. J.; Joner, M. D.

    2016-06-01

    We have utilized the 0.9-meter telescope of the Brigham Young University West Mountain Observatory to secure data on six northern hemisphere globular clusters. Here we present representative observations of RR Lyrae stars located in these clusters, including light curves. We compare light curves produced using both DAOPHOT and ISIS software packages. Light curve fitting is done with FITLC. We find that for well-separated stars, DAOPHOT and ISIS provide comparable results. However, for stars within the cluster core, ISIS provides superior results. These improved techniques will allow us to better measure the properties of cluster variable stars.

  20. Measurement of N-Type 6H SiC Minority-Carrier Diffusion Lengths by Electron Bombardment of Schottky Barriers

    NASA Technical Reports Server (NTRS)

    Hubbard, S. M.; Tabib-Azar, M.; Bailey, S.; Rybicki, G.; Neudeck, P.; Raffaelle, R.

    2004-01-01

    Minority-carrier diffusion lengths of n-type 6H-SiC were measured using the electron-beam-induced current (EBIC) technique. Experimental values of primary beam current, EBIC, and beam voltage were obtained for a variety of SiC samples. These data were used to calculate experimental diode efficiency versus beam voltage curves. These curves were fit to theoretically calculated efficiency curves, and the diffusion length and metal layer thickness were extracted. The hole diffusion length in n-6H SiC was 0.93 +/- 0.15 microns.

  1. Observing RR Lyrae Variables in the M3 Globular Cluster with the BYU West Mountain Observatory (Abstract)

    NASA Astrophysics Data System (ADS)

    Joner, M. D.

    2016-06-01

    (Abstract only) We have utilized the 0.9-meter telescope of the Brigham Young University West Mountain Observatory to secure data on the northern hemisphere globular cluster NGC 5272 (M3). We made 216 observations in the V filter spaced between March and August 2012. We present light curves of the M3 RR Lyrae stars using different techniques. We compare light curves produced using DAOPHOT and ISIS software packages for stars in both the halo and core regions of this globular cluster. The light curve fitting is done using FITLC.

  2. Compact multi-band fluorescent microscope with an electrically tunable lens for autofocusing

    PubMed Central

    Wang, Zhaojun; Lei, Ming; Yao, Baoli; Cai, Yanan; Liang, Yansheng; Yang, Yanlong; Yang, Xibin; Li, Hui; Xiong, Daxi

    2015-01-01

    Autofocusing is a routine technique for redressing the focus drift that occurs in time-lapse microscopic image acquisition. To date, most automatic microscopes are designed around a distance-detection scheme to perform the autofocusing operation, which may suffer from the low contrast of the reflected signal due to the refractive index mismatch at the water/glass interface. To achieve high autofocusing speed with minimal motion artifacts, we developed a compact multi-band fluorescent microscope with an electrically tunable lens (ETL) device for autofocusing. A modified searching algorithm based on equidistant scanning and curve fitting is proposed, which no longer requires a single-peak focus curve and thus efficiently suppresses the impact of external disturbances. This technique enables us to achieve an autofocusing time as short as 170 ms and a reproducibility of over 97%. The imaging head of the microscope has dimensions of 12 cm × 12 cm × 6 cm. This portable instrument can easily fit inside standard incubators for real-time imaging of living specimens. PMID:26601001
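
A hedged sketch of the basic scan-and-fit autofocus step (the published algorithm is more robust and, unlike this sketch, does not require a single-peak focus curve): sample a sharpness metric at equidistant lens settings and take the vertex of a quadratic fit. The metric and the simulated response are stand-ins, not the instrument's interface.

```python
import numpy as np

def focus_metric(img):
    # Variance of finite differences as a simple sharpness score.
    return np.var(np.diff(img, axis=0)) + np.var(np.diff(img, axis=1))

def best_focus(positions, scores):
    # Quadratic fit over the scan; the vertex estimates the focus position.
    a, b, c = np.polyfit(positions, scores, 2)
    return -b / (2 * a) if a < 0 else positions[np.argmax(scores)]

pos = np.linspace(-1.0, 1.0, 9)                    # equidistant ETL scan values
true_focus = 0.23
scores = np.exp(-((pos - true_focus) / 0.5) ** 2)  # simulated focus response
scores += np.random.default_rng(8).normal(0, 0.01, pos.size)
print("estimated focus position:", best_focus(pos, scores))
```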

  3. Trend analyses with river sediment rating curves

    USGS Publications Warehouse

    Warrick, Jonathan A.

    2015-01-01

    Sediment rating curves, which are fitted relationships between river discharge (Q) and suspended-sediment concentration (C), are commonly used to assess patterns and trends in river water quality. In many of these studies it is assumed that rating curves have a power-law form (i.e., C = aQ^b, where a and b are fitted parameters). Two fundamental questions about the utility of these techniques are assessed in this paper: (i) How well do the parameters, a and b, characterize trends in the data? (ii) Are trends in rating curves diagnostic of changes to river water or sediment discharge? As noted in previous research, the offset parameter, a, is not an independent variable for most rivers, but rather strongly dependent on b and Q. Here it is shown that a is a poor metric for trends in the vertical offset of a rating curve, and a new parameter, â, as determined by the discharge-normalized power function [C = â(Q/Q_GM)^b], where Q_GM is the geometric mean of the Q values sampled, provides a better characterization of trends. However, these techniques must be applied carefully, because curvature in the relationship between log(Q) and log(C), which exists for many rivers, can produce false trends in â and b. Also, it is shown that trends in â and b are not uniquely diagnostic of river water or sediment supply conditions. For example, an increase in â can be caused by an increase in sediment supply, a decrease in water supply, or a combination of these conditions. Large changes in water and sediment supplies can occur without any change in the parameters, â and b. Thus, trend analyses using sediment rating curves must include additional assessments of the time-dependent rates and trends of river water, sediment concentrations, and sediment discharge.
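
The â parameter can be estimated directly; a minimal sketch with synthetic data: regress log(C) on log(Q/Q_GM), so the intercept is log(â), the concentration at the geometric-mean discharge.

```python
import numpy as np

rng = np.random.default_rng(9)
Q = 10 ** rng.uniform(0, 3, 200)                        # discharge samples
C = 0.05 * Q**1.4 * 10 ** rng.normal(0, 0.15, Q.size)   # noisy power law

Q_gm = np.exp(np.mean(np.log(Q)))                       # geometric mean of Q
b, log_ahat = np.polyfit(np.log(Q / Q_gm), np.log(C), 1)
a_hat = np.exp(log_ahat)
print(f"b = {b:.2f}, a-hat = {a_hat:.2f} (C at Q = Q_GM)")
```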

  4. Wall shear stress effects of different endodontic irrigation techniques and systems.

    PubMed

    Goode, Narisa; Khan, Sara; Eid, Ashraf A; Niu, Li-na; Gosier, Johnny; Susin, Lisiane F; Pashley, David H; Tay, Franklin R

    2013-07-01

    This study examined débridement efficacy as a result of wall shear stresses created by different irrigant delivery/agitation techniques in an inaccessible recess of a curved root canal model. A reusable, curved canal cavity containing a simulated canal fin was milled into mirrored titanium blocks. Calcium hydroxide (Ca(OH)2) paste was used as debris and loaded into the canal fin. The titanium blocks were bolted together to provide a fluid-tight seal. Sodium hypochlorite was delivered at a previously-determined flow rate of 1 mL/min that produced either negligible or no irrigant extrusion pressure into the periapex for all the techniques examined. Nine irrigation delivery/agitation techniques were examined: NaviTip passive irrigation control, Max-i-Probe(®) side-vented needle passive irrigation, manual dynamic agitation (MDA) using non-fitting and well-fitting gutta-percha points, EndoActivator™ sonic agitation with medium and large points, VPro™ EndoSafe™ irrigation system, VPro™ StreamClean™ continuous ultrasonic irrigation and EndoVac apical negative pressure irrigation. Débridement efficacies were analysed with Kruskal-Wallis ANOVA and Dunn's multiple comparisons tests (α=0.05). EndoVac was the only technique that removed more than 99% calcium hydroxide debris from the canal fin at the predefined flow rate. This group was significantly different (p<0.05) from the other groups that exhibited incomplete Ca(OH)2 removal. The ability of the EndoVac system to significantly clean more debris from a mechanically inaccessible recess of the model curved root canal may be caused by robust bubble formation during irrigant delivery, creating higher wall shear stresses by a two-phase air-liquid flow phenomenon that is well known in other industrial débridement systems. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Raster and vector processing for scanned linework

    USGS Publications Warehouse

    Greenlee, David D.

    1987-01-01

    An investigation of raster editing techniques, including thinning, filling, and node detecting, was performed by using specialized software. The techniques were based on encoding the state of the 3-by-3 neighborhood surrounding each pixel into a single byte. A prototypical method for converting the edited raster linework into vectors was also developed. Once vector representations of the lines were formed, they were formatted as a Digital Line Graph, and further refined by deletion of nonessential vertices and by smoothing with a curve-fitting technique.
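
A hedged sketch of the neighborhood-coding idea: pack the eight neighbors of each pixel into one byte so that thinning, filling, and node rules become table lookups on the code. The bit order and the end/node test below are illustrative choices, not the report's.

```python
import numpy as np

def neighborhood_codes(img):
    """img: 2D array of 0/1 pixels. Returns one uint8 code per pixel, with
    bit i set when the i-th neighbor (clockwise from top-left) is on.
    Edges wrap via np.roll, so treat border pixels with care."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(img.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
        code |= (neighbor > 0).astype(np.uint8) * np.uint8(1 << bit)
    return code

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 1:4] = 1                                   # a 3-pixel horizontal segment
codes = neighborhood_codes(img)
# Thinning/node rules become lookups on the code; e.g. count set bits:
n_neighbors = np.unpackbits(codes[img == 1]).reshape(-1, 8).sum(axis=1)
print(n_neighbors)                                # [1 2 1]: two ends, one link
```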

  6. Sediment data sources and estimated annual suspended-sediment loads of rivers and streams in Colorado

    USGS Publications Warehouse

    Elliott, J.G.; DeFeyter, K.L.

    1986-01-01

    Sources of sediment data collected by several government agencies through water year 1984 are summarized for Colorado. The U.S. Geological Survey has collected suspended-sediment data at 243 sites; these data are stored in the U.S. Geological Survey 's water data storage and retrieval system. The U.S. Forest Service has collected suspended-sediment and bedload data at an additional 225 sites, and most of these data are stored in the U.S. Environmental Protection Agency 's water-quality-control information system. Additional unpublished sediment data are in the possession of the collecting entities. Annual suspended-sediment loads were computed for 133 U.S. Geological Survey sediment-data-collection sites using the daily mean water-discharge/sediment-transport-curve method. Sediment-transport curves were derived for each site by one of three techniques: (1) Least-squares linear regression of all pairs of suspended-sediment and corresponding water-discharge data, (2) least-squares linear regression of data sets subdivided on the basis of hydrograph season; and (3) graphical fit to a logarithm-logarithm plot of data. The curve-fitting technique used for each site depended on site-specific characteristics. Sediment-data sources and estimates of annual loads of suspended, bed, and total sediment from several other reports also are summarized. (USGS)

  7. Point and path performance of light aircraft: A review and analysis

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.; Summey, D. C.; Johnson, W. D.

    1973-01-01

    The literature on methods for predicting the performance of light aircraft is reviewed. The methods discussed in the review extend from the classical instantaneous maximum or minimum technique to techniques for generating mathematically optimum flight paths. Classical point performance techniques are shown to be adequate in many cases but their accuracies are compromised by the need to use simple lift, drag, and thrust relations in order to get closed form solutions. Also the investigation of the effect of changes in weight, altitude, configuration, etc. involves many essentially repetitive calculations. Accordingly, computer programs are provided which can fit arbitrary drag polars and power curves with very high precision and which can then use the resulting fits to compute the performance under the assumption that the aircraft is not accelerating.
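
A minimal sketch of the polar-fitting ingredient (the referenced programs fit arbitrary polars to high precision; this assumes the classical parabolic form CD = CD0 + k·CL² and made-up wind-tunnel points):

```python
import numpy as np

CL = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
CD = np.array([0.0262, 0.0318, 0.0421, 0.0559, 0.0741, 0.0958])

A = np.column_stack([np.ones_like(CL), CL**2])    # design matrix [1, CL^2]
(CD0, k), *_ = np.linalg.lstsq(A, CD, rcond=None)
print(f"CD0 = {CD0:.4f}, k = {k:.4f}")
# Best L/D follows from the fitted polar: CL_opt = sqrt(CD0 / k).
print("CL for best L/D:", np.sqrt(CD0 / k))
```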

  8. EMPIRICALLY ESTIMATED FAR-UV EXTINCTION CURVES FOR CLASSICAL T TAURI STARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McJunkin, Matthew; France, Kevin; Schindhelm, Eric

    Measurements of extinction curves toward young stars are essential for calculating the intrinsic stellar spectrophotometric radiation. This flux determines the chemical properties and evolution of the circumstellar region, including the environment in which planets form. We develop a new technique using H₂ emission lines pumped by stellar Lyα photons to characterize the extinction curve by comparing the measured far-ultraviolet H₂ line fluxes with model H₂ line fluxes. The difference between model and observed fluxes can be attributed to the dust attenuation along the line of sight through both the interstellar and circumstellar material. The extinction curves are fit by a Cardelli et al. (1989) model, and the A_V(H₂) values for the 10 targets studied with good extinction fits range from 0.5 to 1.5 mag, with R_V values ranging from 2.0 to 4.7. A_V and R_V are found to be highly degenerate, suggesting that one or the other needs to be calculated independently. Column densities and temperatures for the fluorescent H₂ populations are also determined, with averages of log₁₀(N(H₂)) = 19.0 and T = 1500 K. This paper explores the strengths and limitations of the newly developed extinction curve technique in order to assess the reliability of the results and improve the method in the future.

  9. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    PubMed

    Dung, Van Than; Tjahjowidodo, Tegoeh

    2017-01-01

    B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, there has been demand, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points in the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and deterministic parametric functions. This paper also discusses the benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, ranging from smooth curves to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
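
A hedged sketch of the two-step flavor described: place coarse knots by bisecting segments whose local fit error exceeds a tolerance, then solve the ordinary least-squares B-spline with those fixed interior knots. The paper's knot-location and continuity optimization is omitted; data and tolerances are illustrative.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def coarse_knots(x, y, tol, lo=0, hi=None, knots=None):
    # Bisect [lo, hi] while a local cubic fit misses the data by more than tol.
    if hi is None:
        hi, knots = x.size - 1, []
    seg_x, seg_y = x[lo:hi + 1], y[lo:hi + 1]
    resid = seg_y - np.polyval(np.polyfit(seg_x, seg_y, 3), seg_x)
    if np.max(np.abs(resid)) > tol and hi - lo > 8:
        mid = (lo + hi) // 2
        coarse_knots(x, y, tol, lo, mid, knots)
        knots.append(x[mid])                 # in-order append keeps knots sorted
        coarse_knots(x, y, tol, mid, hi, knots)
    return knots

x = np.linspace(0, 4 * np.pi, 400)
y = np.sin(x) * np.exp(-0.1 * x) + np.random.default_rng(10).normal(0, 0.01, x.size)
t = coarse_knots(x, y, tol=0.05)
spl = LSQUnivariateSpline(x, y, t, k=3)      # least-squares fit with fixed knots
print(len(t), "interior knots, RMS error:",
      np.sqrt(np.mean((spl(x) - y) ** 2)))
```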

  10. SU-E-T-75: A Simple Technique for Proton Beam Range Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burgdorf, B; Kassaee, A; Garver, E

    2015-06-15

    Purpose: To develop a measurement-based technique to verify the range of proton beams for quality assurance (QA). Methods: We developed a simple technique to verify the proton beam range with in-house fabricated devices. Two separate devices were fabricated: a clear acrylic rectangular cuboid and a solid polyvinyl chloride (PVC) step wedge. For efficiency in our clinic, we used the rectangular cuboid for double scattering (DS) beams and the step wedge for pencil beam scanning (PBS) beams. These devices were added to our QA phantom to measure dose points along the distal fall-off region (between 80% and 20%) in addition to dose at mid-SOBP (spread out Bragg peak) using a two-dimensional parallel plate chamber array (MatriXX™, IBA Dosimetry, Schwarzenbruck, Germany). This method relies on the fact that the distal fall-off is linear and its slope does not vary with small changes in energy. Using a multi-layer ionization chamber (Zebra™, IBA Dosimetry), percent depth dose (PDD) curves were measured for our standard daily QA beams. The range (energy) for each beam was then varied (i.e. ±2 mm and ±5 mm) and additional PDD curves were measured. The distal fall-off of all PDD curves was fit to a linear equation. The distal fall-off measured dose for a particular beam was used in our linear equation to determine the beam range. Results: The linear fits of the fall-off region for the PDD curves, when varying the range by a few millimeters for a specific QA beam, yielded identical slopes. The calculated range based on measured point dose(s) in the fall-off region using the slope resulted in agreement within ±1 mm of the expected beam range. Conclusion: We developed a simple technique for accurately verifying the beam range for proton therapy QA programs.
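
A minimal sketch of the fall-off arithmetic with toy numbers: fit the linear 20-80% portion of a reference PDD, then convert a single measured dose at a known detector depth into a range shift. The PDD shape and values are illustrative, not clinical data.

```python
import numpy as np

depth = np.linspace(150, 170, 81)                        # mm
pdd = np.clip(100 * (162.0 - depth) / 8.0 + 50, 0, 100)  # toy linear fall-off

mask = (pdd >= 20) & (pdd <= 80)                         # distal fall-off window
slope, intercept = np.polyfit(depth[mask], pdd[mask], 1)

measured_dose = 46.0       # % dose read from the chamber array
depth_meas = 162.4         # mm, detector position in the QA phantom
expected = (measured_dose - intercept) / slope           # depth where PDD = dose
range_shift = depth_meas - expected
print(f"range shift = {range_shift:+.1f} mm")
```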

  11. Can Tooth Preparation Design Affect the Fit of CAD/CAM Restorations?

    PubMed

    Roperto, Renato Cassio; Oliveira, Marina Piolli; Porto, Thiago Soares; Ferreira, Lais Alaberti; Melo, Lucas Simino; Akkus, Anna

    2017-03-01

    The purpose of this study was to evaluate whether the marginal fit of restorations manufactured with computer-aided design and computer-aided manufacturing (CAD/CAM) systems can be affected by different tooth preparation designs. Twenty-six typodont (plastic) teeth were divided into two groups (n = 13) according to the occlusal curvature of the tooth preparation: group 1 (control group, flat occlusal design) and group 2 (curved occlusal design). Scanning of the preparations was performed, and crowns were milled using ceramic blocks. Blocks were cemented using epoxy glue on the pulpal floor only, and finger pressure was applied for 1 minute. On completion of the cementation step, marginal gaps between the restoration and abutment were measured by microphotography and the silicone replica technique, using light-body silicone material, on mesial, distal, buccal, and lingual surfaces. Two-way ANOVA did not reveal a statistical difference between flat (83.61 ± 50.72) and curved (79.04 ± 30.97) preparation designs. Buccal, mesial, lingual, and distal sites on the curved design preparation showed smaller gaps when compared with the flat design. No difference was found on flat preparations among mesial, buccal, and distal sites (P > .05). The lingual aspect showed no difference from the distal side but a statistically significant difference from the mesial and buccal sites (P < .05). Difference in occlusal design did not significantly impact the marginal fit. Marginal fit was significantly affected by the location of the margin; lingual and distal locations exhibited greater margin gap values compared with buccal and mesial sites regardless of the preparation design.

  12. [Comparison among various software for LMS growth curve fitting methods].

    PubMed

    Han, Lin; Wu, Wenhong; Wei, Qiuxia

    2015-03-01

    To explore methods for fitting skewness-median-coefficient of variation (LMS) growth curves with different software packages, and to identify the optimal growth curve statistical method for grass-roots child and adolescent health workers. Regular physical examination data of head circumference for normal infants aged 3, 6, 9 and 12 months in Baotou City were analyzed. Statistical software packages including SAS, R, STATA and SPSS were used to fit the LMS growth curves, and the results were evaluated with respect to user convenience, learning curve, user interface, display of results, and software updates and maintenance. All packages produced the same growth curve fitting results, and each had its own advantages and disadvantages. With all evaluation aspects in consideration, R excelled over the others in LMS growth curve fitting and is recommended for grass-roots child and adolescent health workers.

  13. Foveal Curvature and Asymmetry Assessed Using Optical Coherence Tomography.

    PubMed

    VanNasdale, Dean A; Eilerman, Amanda; Zimmerman, Aaron; Lai, Nicky; Ramsey, Keith; Sinnott, Loraine T

    2017-06-01

    The aims of this study were to use cross-sectional optical coherence tomography imaging and custom curve fitting software to evaluate and model the foveal curvature as a spherical surface, to compare the radius of curvature in the horizontal and vertical meridians, and to test the sensitivity of this technique to anticipated meridional differences. Six 30-degree foveal-centered radial optical coherence tomography cross-section scans were acquired in the right eye of 20 clinically normal subjects. Cross sections were manually segmented, and custom curve fitting software was used to determine the foveal pit radius of curvature using the central 500, 1000, and 1500 μm of the foveal contour. The radius of curvature was compared across the different fitting distances. Root mean square error was used to determine goodness of fit. The radius of curvature was compared between the horizontal and vertical meridians for each fitting distance. The radius of curvature was significantly different when comparing each of the three fitting distances (P < .01 for each comparison). The average radii of curvature were 970 μm (95% confidence interval [CI], 913 to 1028 μm), 1386 μm (95% CI, 1339 to 1439 μm), and 2121 μm (95% CI, 2066 to 2183 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. Root mean square error was also significantly different when comparing each fitting distance (P < .01 for each comparison). The average root mean square errors were 2.48 μm (95% CI, 2.41 to 2.53 μm), 6.22 μm (95% CI, 5.77 to 6.60 μm), and 13.82 μm (95% CI, 12.93 to 14.58 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. The radius of curvature differed between the horizontal and vertical meridians only at the 1000- and 1500-μm fitting distances (P < .01 for each), with the horizontal meridian being flatter than the vertical. The foveal contour can be modeled as a sphere with low curve fitting error over a limited distance, and the technique is capable of detecting subtle foveal contour differences between meridians.

  14. On the convexity of ROC curves estimated from radiological test results.

    PubMed

    Pesce, Lorenzo L; Metz, Charles E; Berbaum, Kevin S

    2010-08-01

    Although an ideal observer's receiver operating characteristic (ROC) curve must be convex-ie, its slope must decrease monotonically-published fits to empirical data often display "hooks." Such fits sometimes are accepted on the basis of an argument that experiments are done with real, rather than ideal, observers. However, the fact that ideal observers must produce convex curves does not imply that convex curves describe only ideal observers. This article aims to identify the practical implications of nonconvex ROC curves and the conditions that can lead to empirical or fitted ROC curves that are not convex. This article views nonconvex ROC curves from historical, theoretical, and statistical perspectives, which we describe briefly. We then consider population ROC curves with various shapes and analyze the types of medical decisions that they imply. Finally, we describe how sampling variability and curve-fitting algorithms can produce ROC curve estimates that include hooks. We show that hooks in population ROC curves imply the use of an irrational decision strategy, even when the curve does not cross the chance line, and therefore usually are untenable in medical settings. Moreover, we sketch a simple approach to improve any nonconvex ROC curve by adding statistical variation to the decision process. Finally, we sketch how to test whether hooks present in ROC data are likely to have been caused by chance alone and how some hooked ROCs found in the literature can be easily explained as fitting artifacts or modeling issues. In general, ROC curve fits that show hooks should be looked on with suspicion unless other arguments justify their presence. 2010 AUR. Published by Elsevier Inc. All rights reserved.

  15. On the convexity of ROC curves estimated from radiological test results

    PubMed Central

    Pesce, Lorenzo L.; Metz, Charles E.; Berbaum, Kevin S.

    2010-01-01

    Rationale and Objectives Although an ideal observer’s receiver operating characteristic (ROC) curve must be convex — i.e., its slope must decrease monotonically — published fits to empirical data often display “hooks.” Such fits sometimes are accepted on the basis of an argument that experiments are done with real, rather than ideal, observers. However, the fact that ideal observers must produce convex curves does not imply that convex curves describe only ideal observers. This paper aims to identify the practical implications of non-convex ROC curves and the conditions that can lead to empirical and/or fitted ROC curves that are not convex. Materials and Methods This paper views non-convex ROC curves from historical, theoretical and statistical perspectives, which we describe briefly. We then consider population ROC curves with various shapes and analyze the types of medical decisions that they imply. Finally, we describe how sampling variability and curve-fitting algorithms can produce ROC curve estimates that include hooks. Results We show that hooks in population ROC curves imply the use of an irrational decision strategy, even when the curve doesn’t cross the chance line, and therefore usually are untenable in medical settings. Moreover, we sketch a simple approach to improve any non-convex ROC curve by adding statistical variation to the decision process. Finally, we sketch how to test whether hooks present in ROC data are likely to have been caused by chance alone and how some hooked ROCs found in the literature can be easily explained as fitting artifacts or modeling issues. Conclusion In general, ROC curve fits that show hooks should be looked upon with suspicion unless other arguments justify their presence. PMID:20599155

  16. Curve fitting methods for solar radiation data modeling

    NASA Astrophysics Data System (ADS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  17. Curve fitting methods for solar radiation data modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my, E-mail: balbir@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: samsul-ariffin@petronas.com.my, E-mail: balbir@petronas.com.my

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  18. A method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
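
A hedged sketch of the described scheme for y = a·exp(b·x): nominal estimates from a log-linear fit, then iterated corrections from the Taylor-linearized model (Gauss-Newton) until a predetermined criterion is met. Data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(11)
x = np.linspace(0, 5, 40)
y = 3.0 * np.exp(-0.8 * x) + rng.normal(0, 0.02, x.size)

# Step 1: linear fit of log(y) gives nominal estimates (valid for y > 0).
b0, loga0 = np.polyfit(x, np.log(np.clip(y, 1e-6, None)), 1)
theta = np.array([np.exp(loga0), b0])                # [a, b]

# Step 2: iterate corrections from the linearized model.
for _ in range(20):
    a, b = theta
    f = a * np.exp(b * x)
    J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])  # df/da, df/db
    delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)            # correction
    theta += delta
    if np.linalg.norm(delta) < 1e-10:                # predetermined criterion
        break
print("a, b =", theta)
```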

  19. Energy functions for regularization algorithms

    NASA Technical Reports Server (NTRS)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, for example in surface reconstruction, edge detection, and optical flow estimation. Energy functions used in regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations and invariance with respect to parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance with rotation and parameterization.

  20. Turbine blade profile design method based on Bezier curves

    NASA Astrophysics Data System (ADS)

    Alexeev, R. A.; Tishchenko, V. A.; Gribin, V. G.; Gavrilov, I. Yu.

    2017-11-01

    In this paper, a technique for two-dimensional parametric blade profile design is presented. Bezier curves are used to create the profile geometry. The main feature of the proposed method is an adaptive approach to fitting the curves to given geometric conditions. The profile shape is calculated by a multi-dimensional minimization method with a number of restrictions imposed on the blade geometry. The proposed method has been used to describe the parametric geometry of a known blade profile. The baseline geometry was then modified by varying some parameters of the blade. A numerical calculation of the obtained designs was carried out. The results of the calculations show the efficiency of the chosen approach.
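
For reference, a minimal sketch of evaluating a 2D Bezier segment by de Casteljau's algorithm; the paper's contribution is the constrained optimization of the control points, which is not reproduced here, and the control values below are arbitrary.

```python
import numpy as np

def de_casteljau(ctrl, t):
    """ctrl: (n+1, 2) control points; t in [0, 1]. Returns the curve point."""
    pts = np.array(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]     # repeated linear blending
    return pts[0]

# Example cubic segment, e.g. part of a profile contour (illustrative).
ctrl = [(0.0, 0.0), (0.2, 0.4), (0.7, 0.5), (1.0, 0.1)]
curve = np.array([de_casteljau(ctrl, t) for t in np.linspace(0, 1, 50)])
print(curve[:3])
```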

  1. An empirical method for determining average soil infiltration rates and runoff, Powder River structural basin, Wyoming

    USGS Publications Warehouse

    Rankl, James G.

    1982-01-01

    This report describes a method to estimate infiltration rates of soils for use in estimating runoff from small basins. Average rainfall intensity is plotted against storm duration on log-log paper. Each rainfall event is designated as either a runoff or a nonrunoff event. A power-decay-type curve is visually fitted to separate the two types of rainfall events. This separation curve is an incipient-ponding curve, and its equation describes the infiltration parameters for a soil. For basins with more than one soil complex, only the incipient-ponding curve for the soil complex with the lowest infiltration rate can be defined using the separation technique. Incipient-ponding curves for soils with infiltration rates greater than the lowest curve are defined by ranking the soils according to their relative permeabilities and optimizing the curve position. For six test basins, computed total runoff for all events ranged from 16.6 percent less to 2.3 percent more than measured total runoff. (USGS)

  2. Instrumentation and signal processing for the detection of heavy water using the off-axis integrated cavity output spectroscopy technique

    NASA Astrophysics Data System (ADS)

    Gupta, A.; Singh, P. J.; Gaikwad, D. Y.; Udupa, D. V.; Topkar, A.; Sahoo, N. K.

    2018-02-01

    An experimental setup is developed for the trace-level detection of heavy water (HDO) using the off-axis integrated cavity output spectroscopy technique. The absorption spectrum of water samples is recorded in the spectral range of 7190.7–7191.5 cm⁻¹ with a diode laser as the light source. From the recorded water vapor absorption spectrum, the heavy water concentration is determined from the HDO and water lines. The effect of cavity gain nonlinearity with per-pass absorption is studied. The signal processing and data fitting procedure is devised to obtain linear calibration curves by including nonlinear cavity gain effects in the calculation. Initial calibration of the mirror reflectivity is performed by measurements on a natural water sample. The signal processing and data fitting method has been validated by measuring the HDO concentration in water samples over a wide range, from 20 ppm to 2280 ppm, showing a linear calibration curve. The average measurement time is about 30 s. The experimental technique presented in this paper could be applied to the development of a portable instrument for the fast measurement of water isotopic composition in heavy water plants and for the detection of heavy water leaks in pressurized heavy water reactors.

  3. A general dual-bolus approach for quantitative DCE-MRI.

    PubMed

    Kershaw, Lucy E; Cheng, Hai-Ling Margaret

    2011-02-01

    To present a dual-bolus technique for quantitative dynamic contrast-enhanced MRI (DCE-MRI) and show that it can give an arterial input function (AIF) measurement equivalent to that from a single-bolus protocol. Five rabbits were imaged using a dual-bolus technique applicable for high-resolution DCE-MRI, incorporating a time resolved imaging of contrast kinetics (TRICKS) sequence for rapid temporal sampling. AIFs were measured from both the low-dose prebolus and the high-dose main bolus in the abdominal aorta. In one animal, TRICKS and fast spoiled gradient echo (FSPGR) acquisitions were compared. The scaled prebolus AIF was shown to match the main bolus AIF, with 95% confidence intervals overlapping for fits of gamma-variate functions to the first pass and linear fits to the washout phase, with the exception of one case. The AIFs measured using TRICKS and FSPGR were shown to be equivalent in one animal. The proposed technique can capture even the rapid circulation kinetics in the rabbit aorta, and the scaled prebolus AIF is equivalent to the AIF from a high-dose injection. This allows separate measurements of the AIF and tissue uptake curves, meaning that each curve can then be acquired using a protocol tailored to its specific requirements. Copyright © 2011 Elsevier Inc. All rights reserved.
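
A hedged sketch of a first-pass gamma-variate fit of the kind applied to both AIFs above: C(t) = A·(t − t0)^α·exp(−(t − t0)/β) for t > t0, zero before. The parameter values and noise are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    # First-pass bolus shape; zero before arrival time t0.
    dt = np.clip(t - t0, 0, None)
    return A * dt**alpha * np.exp(-dt / beta)

t = np.linspace(0, 30, 120)                          # seconds
y = gamma_variate(t, 5.0, 4.0, 2.2, 1.8)
y += np.random.default_rng(12).normal(0, 0.05, t.size)

popt, _ = curve_fit(gamma_variate, t, y, p0=[3, 3, 2, 2],
                    bounds=([0, 0, 0.1, 0.1], [50, 10, 10, 10]))
print("A, t0, alpha, beta =", popt)
```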

  4. Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI

    NASA Astrophysics Data System (ADS)

    Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.

    2017-12-01

    Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least square (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments ‘plasma’ and ‘interstitial volume’ and their exchange in terms of plasma flow and capillary permeability. The model function can be defined by either a system of two coupled differential equations or a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness and computation speed, depending on parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured and a theoretical population based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated by either numeric integration with the Runge-Kutta method or convolution. Furthermore two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed in order to investigate the validity of our findings in real patient data. The convolution approach yields improved results in precision and robustness of determined parameters. Precision and stability are limited in curves with low blood flow. The model parameter ve shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach in several curve types. The convolution excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated by usage of the differential equations. Fitting with the convolution approach is superior in computational time, with better stability and accuracy at the same time.

  5. Crystal growth, structural, low temperature thermoluminescence and mechanical properties of cubic fluoroperovskite single crystal (LiBaF3)

    NASA Astrophysics Data System (ADS)

    Daniel, D. Joseph; Ramasamy, P.; Ramaseshan, R.; Kim, H. J.; Kim, Sunghwan; Bhagavannarayana, G.; Cheon, Jong-Kyu

    2017-10-01

Polycrystalline compounds of LiBaF3 were synthesized using a conventional solid state reaction route and the phase purity was confirmed using the powder X-ray diffraction technique. A single crystal was grown from the melt using the vertical Bridgman technique. Rocking curve measurements were carried out to study the structural perfection of the grown crystal. The single peak of the diffraction curve clearly reveals that the grown crystal was free from structural grain boundaries. The low temperature thermoluminescence of the X-ray irradiated sample was analyzed, and four distinguishable peaks were found, with maximum temperatures at 18, 115, 133 and 216 K. The activation energy (E) and frequency factor (s) for the individual peaks were determined using the peak shape method and a computerized curve fitting method combined with the Tmax-Tstop procedure. The nanoindentation technique was employed to study the mechanical behaviour of the crystal. The indentation modulus and Vickers hardness of the grown crystal were 135.15 GPa and 680.81, respectively, under a maximum indentation load of 10 mN.

  6. A smoothing algorithm using cubic spline functions

    NASA Technical Reports Server (NTRS)

    Smith, R. E., Jr.; Price, J. M.; Howser, L. M.

    1974-01-01

    Two algorithms are presented for smoothing arbitrary sets of data. They are the explicit variable algorithm and the parametric variable algorithm. The former would be used where large gradients are not encountered because of the smaller amount of calculation required. The latter would be used if the data being smoothed were double valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
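
    A minimal sketch of the least-squares cubic-spline idea, assuming synthetic data: SciPy's LSQUnivariateSpline stands in for the report's algorithm, with user-chosen interior "junction" (knot) points that can be moved to improve the fit while keeping the first and second derivatives continuous.

```python
# Least-squares cubic spline smoothing with adjustable interior knots.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + np.random.default_rng(2).normal(0.0, 0.1, x.size)

knots = [2.5, 5.0, 7.5]                          # adjustable junction points
spline = LSQUnivariateSpline(x, y, knots, k=3)   # cubic least-squares fit

print(spline.get_residual())                     # sum of squared residuals
print(spline.derivative()(x[:5]))                # continuous first derivative
```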

  7. Risk factors for unsuccessful acetabular press-fit fixation at primary total hip arthroplasty.

    PubMed

    Brulc, U; Antolič, V; Mavčič, B

    2017-11-01

The surgeon at primary total hip arthroplasty sometimes cannot achieve sufficient cementless acetabular press-fit fixation and must resort to other fixation methods. Despite the predominant use of cementless cups, this issue is not fully clarified; we therefore performed a large retrospective study to: (1) identify patient-, implant- or surgeon-related risk factors for unsuccessful intraoperative press-fit; (2) check for correlation between surgeons' volume of operated cases and the press-fit success rate. The expectation was that unsuccessful intra-operative press-fit occurs more often in older female patients, with particular implants, due to the learning curve and in low-volume surgeons. A retrospective observational cohort of prospectively collected intraoperative data (2009-2016) included all primary total hip arthroplasty patients with implant brands that offered acetabular press-fit fixation only. Press-fit was considered successful if the acetabular component was of the same implant brand as the femoral component, without additional screws or cement. Logistic regression models for unsuccessful acetabular press-fit included patients' gender/age/operated side, implant, surgeon, approach (posterior n=1206, direct-lateral n=871) and surgery date (i.e. learning curve). In 2077 patients (mean 65.5 years, 1093 females, 1163 right hips), three different implant brands (973 ABG-II™-Stryker, 646 EcoFit™ Implantcast, 458 Procotyl™ L-Wright) were implanted by eight surgeons. Their unsuccessful press-fit fixation rates ranged from 3.5% to 23.7%. Older age (odds ratio 1.01 [95% CI: 0.99-1.02]), female gender (2.87 [95% CI: 2.11-3.91]), right side (1.44 [95% CI: 1.08-1.92]), surgery date (0.90 [95% CI: 1.08-1.92]) and particular implants were significant risk factors only in the three surgeons with less successful surgical technique (higher rates of unsuccessful press-fit with Procotyl™-L and EcoFit™ [P=0.01]). The direct-lateral hip approach had a lower rate of unsuccessful press-fit than the posterior hip approach (P<0.01), but there was no correlation between surgeons' volume and rate of successful press-fit (Spearman's rho=0.10, P=0.82). A subcohort of 961 patients with 5-7 years of follow-up indicated higher early/late cup revision rates after unsuccessful press-fit. Success of press-fit fixation depends entirely on the surgeon and surgical approach. With proper operative technique the unsuccessful press-fit fixation rate should be below 5%, and the impact of patients' characteristics or implants on press-fit fixation is then insignificant. The large between-surgeon variability in operative technique observed in the present study emphasizes the need for surgeon-specific data stratification in arthroplasty studies and indicates the possibility of falsely attributing clinically observed phenomena to patient-related factors in pooled data from large centers or hip arthroplasty registers. Level III, retrospective observational case-control study. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  8. An accurate surface topography restoration algorithm for white light interferometry

    NASA Astrophysics Data System (ADS)

    Yuan, He; Zhang, Xiangchao; Xu, Min

    2017-10-01

As an important measuring technique, white light interferometry enables fast, non-contact measurement and is therefore widely used in the field of ultra-precision engineering. However, traditional algorithms for recovering surface topography have flaws and limitations. In this paper, we propose a new algorithm to solve these problems: a combination of the Fourier transform and an improved polynomial fitting method. Because the white light interference signal is usually expressed as a cosine signal whose amplitude is modulated by a Gaussian function, its fringe visibility is not constant and varies with scanning position. The interference signal is first processed by the Fourier transform; the positive frequency part is then selected and moved back to the center of the amplitude-frequency curve. To restore the surface morphology, a polynomial fitting method is used to fit the amplitude curve after the inverse Fourier transform and obtain the corresponding topography information. The new method is then compared to the traditional algorithms, and it is shown that the aforementioned drawbacks can be effectively overcome. The relative error is less than 0.8%.
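
    A sketch of the described recovery chain on a simulated correlogram: FFT, keep the positive-frequency lobe (the analytic signal), inverse-FFT to get the fringe envelope, then a local polynomial fit to locate the envelope peak, i.e. the surface height. The signal parameters are illustrative.

```python
# Envelope recovery via FFT plus polynomial peak fitting.
import numpy as np

z = np.linspace(-5.0, 5.0, 1024)                 # scan position (um)
z0, lam, sigma = 0.7, 0.6, 1.2                   # height, wavelength, coherence
sig = np.exp(-((z - z0) / sigma) ** 2) * np.cos(4 * np.pi * (z - z0) / lam)

S = np.fft.fft(sig)
S[S.size // 2 + 1:] = 0.0                        # drop negative frequencies
S[1:S.size // 2] *= 2.0                          # analytic-signal scaling
env = np.abs(np.fft.ifft(S))                     # fringe envelope

i = int(np.argmax(env))                          # coarse peak location
c = np.polyfit(z[i - 5:i + 6], env[i - 5:i + 6], 2)
print(-c[1] / (2 * c[0]))                        # refined peak, approx z0
```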

  9. [Experimental study and correction of the absorption and enhancement effect between Ti, V and Fe].

    PubMed

    Tuo, Xian-Guo; Mu, Ke-Liang; Li, Zhe; Wang, Hong-Hui; Luo, Hui; Yang, Jian-Bo

    2009-11-01

The absorption and enhancement effects in X-ray fluorescence analysis of the elements Ti, V and Fe were studied in the present paper. Three simulated binary systems of Ti-V, Ti-Fe and V-Fe samples were prepared and measured by the X-ray fluorescence analysis technique using an HPGe semiconductor detector, and relation curves between the normalized coefficient of element count rate, R(K), and element content, W(K), were obtained from the experiment. Analysis of the degree of the absorption and enhancement effects between each pair of elements showed that the effect between Ti and V is relatively pronounced, while it is much weaker in Ti-Fe and V-Fe. A mathematical correction method based on exponential fitting was then used to fit the R(K)-W(K) curves and obtain a functional relation between X-ray fluorescence count rate and content. Three groups of Ti-V binary samples were used to test the fitting method, and the relative errors for Ti and V were less than 0.2% compared to the actual results.

  10. A cloud physics investigation utilizing Skylab data

    NASA Technical Reports Server (NTRS)

    Alishouse, J.; Jacobowitz, H.; Wark, D. (Principal Investigator)

    1975-01-01

The author has identified the following significant results. The Lowtran 2 program, the S191 spectral response, and the solar spectrum were used to compute the expected absorption by the 2.0 micron band for a variety of cloud pressure levels and solar zenith angles. Analysis of the three long-wavelength data channels continued, in which it was found necessary to impose a minimum radiance criterion. It was also found necessary to modify the computer program to permit the computation of mean values and standard deviations for selected subsets of data on a given tape. A technique for computing the integrated absorption in the A band was devised. The technique normalizes the relative maximum at approximately 0.78 micron to the solar irradiance curve and then adjusts the relative maximum at approximately 0.74 micron to fit the solar curve.

  11. The curvature of sensitometric curves for Kodak XV-2 film irradiated with photon and electron beams.

    PubMed

    van Battum, L J; Huizenga, H

    2006-07-01

Sensitometric curves of Kodak XV-2 film, obtained over a period of ten years with various types of equipment, have been analyzed for both photon and electron beams. The sensitometric slope in the dataset varies by more than a factor of 2, which is attributed mainly to variations in developer conditions. In the literature, the single-hit equation has been proposed as a model for the sensitometric curve, with the sensitivity and maximum optical density as parameters. In this work, the single-hit equation has been translated into a polynomial-like function with the sensitometric slope and curvature as parameters. The model has been applied to fit the sensitometric data. If each sensitometric curve in the dataset is fitted separately, a large variation is observed for both fit parameters. When the sensitometric curves are fitted simultaneously, it appears that all curves can be fitted adequately with a sensitometric curvature that is related to the sensitometric slope; when each curve is fitted separately, measurement uncertainty apparently hides this relation. This relation appears to depend only on the type of densitometer used. No significant differences between beam energies or beam modalities are observed. Using the intrinsic relation between slope and curvature in fitting sensitometric data, e.g., for pretreatment verification of intensity-modulated radiotherapy, will increase the accuracy of the sensitometric curve. A calibration at a single dose point, together with a predetermined densitometer-dependent parameter ODmax, will be adequate to find the actual relation between optical density and dose.
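
    A sketch of a single-hit saturation fit, assuming the common form OD(D) = ODmax * (1 - exp(-a*D)): the slope at D = 0 is a*ODmax, and the curvature follows from the same two parameters. The dose/OD values below are synthetic.

```python
# Fitting the single-hit saturation model to synthetic sensitometric data.
import numpy as np
from scipy.optimize import curve_fit

def single_hit(D, ODmax, a):
    return ODmax * (1.0 - np.exp(-a * D))

dose = np.array([0.0, 10.0, 20.0, 40.0, 80.0, 160.0, 320.0])   # cGy
od = single_hit(dose, 3.6, 0.006)
od += np.random.default_rng(3).normal(0.0, 0.02, dose.size)

(ODmax, a), _ = curve_fit(single_hit, dose, od, p0=[3.0, 0.01])
print(ODmax, a, ODmax * a)   # saturation OD, rate constant, initial slope
```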

12. Comparison of Thermodynamic and Transport Property Models for Computing Equilibrium High Enthalpy Flows

    NASA Astrophysics Data System (ADS)

    Ramasahayam, Veda Krishna Vyas; Diwakar, Anant; Bodi, Kowsik

    2017-11-01

To study the flow of high temperature air in vibrational and chemical equilibrium, accurate models for the thermodynamic state and transport phenomena are required. In the present work, the performance of a state equation model and two mixing rules for determining equilibrium air thermodynamic and transport properties is compared with that of curve fits. The thermodynamic state model considers 11 species and computes the flow chemistry by an iterative process; the mixing rules considered for viscosity are those of Wilke and Armaly-Sutton. The curve fits of Srinivasan, which are based on Grabau-type transition functions, are chosen for comparison. A two-dimensional Navier-Stokes solver was developed to simulate high enthalpy flows, with numerical fluxes computed by AUSM+-up. The accuracy of the state equation model and the curve fits for thermodynamic properties is assessed using hypersonic inviscid flow over a circular cylinder. The performance of the mixing rules and the curve fits for viscosity is compared using hypersonic laminar boundary layer prediction on a flat plate. It is observed that the steady state solutions from the state equation model and the curve fits match each other. Though the curve fits are significantly faster, the state equation model is more general and can be adapted to any flow composition.

13. Testing Modified Newtonian Dynamics with Low Surface Brightness Galaxies: Rotation Curve Fits

    NASA Astrophysics Data System (ADS)

    de Blok, W. J. G.; McGaugh, S. S.

    1998-11-01

We present modified Newtonian dynamics (MOND) fits to 15 rotation curves of low surface brightness (LSB) galaxies. Good fits are readily found, although for a few galaxies minor adjustments to the inclination are needed. Reasonable values for the stellar mass-to-light ratios are found, as well as an approximately constant value for the total (gas and stars) mass-to-light ratio. We show that the LSB galaxies investigated here lie on the one, unique Tully-Fisher relation, as predicted by MOND. The scatter on the Tully-Fisher relation can be completely explained by the observed scatter in the total mass-to-light ratio. We address the question of whether MOND can fit any arbitrary rotation curve by constructing a plausible fake model galaxy. While MOND is unable to fit this hypothetical galaxy, a normal dark-halo fit is readily found, showing that dark matter halo fits are much less selective. The good fits to rotation curves of LSB galaxies support MOND, especially because these are galaxies with large mass discrepancies deep in the MOND regime.

  14. Regionalisation of low flow frequency curves for the Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Mamun, Abdullah A.; Hashim, Alias; Daoud, Jamal I.

    2010-02-01

Regional maps and equations for the magnitude and frequency of 1-, 7- and 30-day low flows were derived and are presented in this paper. River gauging stations of neighbouring catchments that produced similar low flow frequency curves were grouped together; Peninsular Malaysia was thus divided into seven low flow regions. Regional equations were developed using the multivariate regression technique. An empirical relationship was developed for mean annual minimum flow as a function of catchment area, mean annual rainfall and mean annual evaporation. The regional equations exhibited good coefficients of determination (R2 > 0.90). Three low flow frequency curves showing the low, mean and high limits for each region were proposed based on a graphical best-fit technique. Knowing the catchment area, mean annual rainfall and evaporation in the region, design low flows of different durations can be easily estimated for ungauged catchments. This procedure is expected to overcome the problem of data unavailability in estimating low flows in Peninsular Malaysia.
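
    A sketch of the regional regression idea under a power-law assumption, MAM = c * A^b1 * R^b2 * E^b3, fitted by ordinary least squares in log space. The five sample catchments and all coefficients below are invented for illustration.

```python
# Log-linear multivariate regression for mean annual minimum (MAM) flow.
import numpy as np

A = np.array([120.0, 450.0, 80.0, 900.0, 300.0])        # catchment area (km^2)
R = np.array([2400.0, 2100.0, 2600.0, 1900.0, 2300.0])  # mean annual rainfall (mm)
E = np.array([1400.0, 1500.0, 1350.0, 1550.0, 1450.0])  # mean annual evaporation (mm)
MAM = np.array([1.8, 5.2, 1.1, 8.9, 3.6])               # mean annual min flow (m^3/s)

X = np.column_stack([np.ones(A.size), np.log(A), np.log(R), np.log(E)])
beta, *_ = np.linalg.lstsq(X, np.log(MAM), rcond=None)

resid = np.log(MAM) - X @ beta
r2 = 1.0 - np.sum(resid**2) / np.sum((np.log(MAM) - np.log(MAM).mean())**2)
print(beta, r2)
```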

  15. Prompt isothermal decay properties of the Sr4Al14O25 co-doped with Eu2+ and Dy3+ persistent luminescent phosphor

    NASA Astrophysics Data System (ADS)

    Asal, Eren Karsu; Polymeris, George S.; Gultekin, Serdar; Kitis, George

    2018-06-01

Thermoluminescence (TL) techniques are very useful in research on persistent luminescence (PL) phosphors. They give information about the existence of energy levels within the forbidden band, their activation energies, kinetic order, lifetimes, etc. The TL glow curve of the Sr4Al14O25:Eu2+,Dy3+ persistent phosphor consists of two well-separated glow peaks. The TL techniques used to evaluate the activation energy were the initial rise method, prompt isothermal decay (PID) of TL of each peak at elevated temperatures, and glow-curve fitting. The behavior of the PID curves of the two peaks is very different. According to the results of the PID procedure and the subsequent data analysis, it is suggested that the mechanism behind the low temperature peak is a delocalized transition, whereas the mechanism behind the high temperature peak is a localized transition involving tunneling recombination between the electron trap and the luminescence center.

  16. Quantitative evaluation method of the threshold adjustment and the flat field correction performances of hybrid photon counting pixel detectors

    NASA Astrophysics Data System (ADS)

    Medjoubi, K.; Dawiec, A.

    2017-12-01

A simple method is proposed in this work for quantitative evaluation of the quality of the threshold adjustment and the flat-field correction of Hybrid Photon Counting (HPC) pixel detectors. The approach is based on the Photon Transfer Curve (PTC), i.e. the measurement of the standard deviation of the signal in flat-field images. Fixed pattern noise (FPN), easily identifiable in the curve, is linked to the residual threshold dispersion, sensor inhomogeneity and the remnant errors of the flat-fielding technique. The analytical expression of the signal-to-noise-ratio curve is developed for HPC detectors and successfully used as a fit function applied to experimental data obtained with the XPAD detector. The FPN, quantified by the photon response non-uniformity (PRNU), is measured for different configurations (threshold adjustment method and flat-fielding technique), and the method is shown to be useful for identifying the settings that give the best image quality from a commercial or R&D detector.

  17. Beta/alpha continuous air monitor

    DOEpatents

    Becker, Gregory K.; Martz, Dowell E.

    1989-01-01

    A single deep layer silicon detector in combination with a microcomputer, recording both alpha and beta activity and the energy of each pulse, distinguishing energy peaks using a novel curve fitting technique to reduce the natural alpha counts in the energy region where plutonium and other transuranic alpha emitters are present, and using a novel algorithm to strip out radon daughter contribution to actual beta counts.

  18. The impact of vessel size on vulnerability curves: data and models for within-species variability in saplings of aspen, Populus tremuloides Michx.

    PubMed

    Cai, Jing; Tyree, Melvin T

    2010-07-01

The objective of this study was to quantify the relationship between vulnerability to cavitation and vessel diameter within a species. We measured vulnerability curves (VCs: percentage loss of hydraulic conductivity versus tension) in aspen stems and measured vessel-size distributions. Measurements were done on seed-grown, 4-month-old aspen (Populus tremuloides Michx) grown in a greenhouse. VCs of stem segments were measured using a centrifuge technique and by a staining technique that allowed a VC to be constructed based on vessel diameter size-classes (D). Vessel-based VCs were also fitted to Weibull cumulative distribution functions (CDF), which provided best-fit values of the Weibull CDF constants (c and b) and P50 = the tension causing 50% loss of hydraulic conductivity. We show that P50 = 6.166D^(-0.3134) (R^2 = 0.995) and that b and 1/c are both linear functions of D with R^2 > 0.95. The results are discussed in terms of models of VCs based on vessel D size-classes and in terms of concepts such as the 'pit area hypothesis' and vessel pathway redundancy.
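
    A sketch of the Weibull vulnerability-curve fit named above, with PLC(T) = 100 * (1 - exp(-(T/b)^c)) and the derived P50 = b * ln(2)^(1/c). The tension/PLC values are synthetic, not the paper's data.

```python
# Fitting a Weibull CDF vulnerability curve and recovering P50.
import numpy as np
from scipy.optimize import curve_fit

def weibull_vc(T, b, c):
    return 100.0 * (1.0 - np.exp(-((T / b) ** c)))

T = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])          # xylem tension (MPa)
plc = np.array([2.0, 8.0, 22.0, 48.0, 72.0, 90.0, 97.0])   # % loss of conductivity

(b, c), _ = curve_fit(weibull_vc, T, plc, p0=[2.0, 3.0])
print(b, c, b * np.log(2.0) ** (1.0 / c))                  # b, c and derived P50
```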

  19. Modal vector estimation for closely spaced frequency modes

    NASA Technical Reports Server (NTRS)

    Craig, R. R., Jr.; Chung, Y. T.; Blair, M.

    1982-01-01

    Techniques for obtaining improved modal vector estimates for systems with closely spaced frequency modes are discussed. In describing the dynamical behavior of a complex structure modal parameters are often analyzed: undamped natural frequency, mode shape, modal mass, modal stiffness and modal damping. From both an analytical standpoint and an experimental standpoint, identification of modal parameters is more difficult if the system has repeated frequencies or even closely spaced frequencies. The more complex the structure, the more likely it is to have closely spaced frequencies. This makes it difficult to determine valid mode shapes using single shaker test methods. By employing band selectable analysis (zoom) techniques and by employing Kennedy-Pancu circle fitting or some multiple degree of freedom (MDOF) curve fit procedure, the usefulness of the single shaker approach can be extended.

  20. Comparative research on activation technique for GaAs photocathodes

    NASA Astrophysics Data System (ADS)

    Chen, Liang; Qian, Yunsheng; Chang, Benkang; Chen, Xinlong; Yang, Rui

    2012-03-01

The properties of GaAs photocathodes mainly depend on the material design and the activation technique. Earlier studies showed that high-low temperature two-step activation yields higher quantum efficiency than high-temperature single-step activation. However, the variation of the surface barriers under the two activation techniques has not been well studied, so the optimal activation temperature, Cs-O ratio and activation time for the two-step technique have not been established. Because the surface photovoltage spectroscopy (SPS) measured before activation depends only on bulk parameters of the GaAs photocathode, such as the electron diffusion length, while the spectral response current (SRC) after activation depends on both bulk parameters and surface barriers, the surface escape probability (SEP) can be fitted by comparing the SPS before activation with the SRC after activation. By solving the Schrödinger equation for tunneling through the surface barriers, the widths and heights of surface barriers I and II can be fitted from the SEP curves. The fitting results were verified and analyzed by quantitative angle-dependent X-ray photoelectron spectroscopy (ADXPS), which can also probe the surface chemical composition, atomic concentration percentages and layer thicknesses of GaAs photocathodes. This comparative method of fitting surface-barrier parameters from the SPS before activation and the SRC after activation offers a real-time, in-system approach to the study of activation techniques.

  1. Evaluating Model Fit for Growth Curve Models: Integration of Fit Indices from SEM and MLM Frameworks

    ERIC Educational Resources Information Center

    Wu, Wei; West, Stephen G.; Taylor, Aaron B.

    2009-01-01

    Evaluating overall model fit for growth curve models involves 3 challenging issues. (a) Three types of longitudinal data with different implications for model fit may be distinguished: balanced on time with complete data, balanced on time with data missing at random, and unbalanced on time. (b) Traditional work on fit from the structural equation…

  2. Revisiting the Estimation of Dinosaur Growth Rates

    PubMed Central

    Myhrvold, Nathan P.

    2013-01-01

    Previous growth-rate studies covering 14 dinosaur taxa, as represented by 31 data sets, are critically examined and reanalyzed by using improved statistical techniques. The examination reveals that some previously reported results cannot be replicated by using the methods originally reported; results from new methods are in many cases different, in both the quantitative rates and the qualitative nature of the growth, from results in the prior literature. Asymptotic growth curves, which have been hypothesized to be ubiquitous, are shown to provide best fits for only four of the 14 taxa. Possible reasons for non-asymptotic growth patterns are discussed; they include systematic errors in the age-estimation process and, more likely, a bias toward younger ages among the specimens analyzed. Analysis of the data sets finds that only three taxa include specimens that could be considered skeletally mature (i.e., having attained 90% of maximum body size predicted by asymptotic curve fits), and eleven taxa are quite immature, with the largest specimen having attained less than 62% of predicted asymptotic size. The three taxa that include skeletally mature specimens are included in the four taxa that are best fit by asymptotic curves. The totality of results presented here suggests that previous estimates of both maximum dinosaur growth rates and maximum dinosaur sizes have little statistical support. Suggestions for future research are presented. PMID:24358133

  3. The Application of Curve Fitting on the Voltammograms of Various Isoforms of Metallothioneins–Metal Complexes

    PubMed Central

    Merlos Rodrigo, Miguel Angel; Molina-López, Jorge; Jimenez Jimenez, Ana Maria; Planells Del Pozo, Elena; Adam, Pavlina; Eckschlager, Tomas; Zitka, Ondrej; Richtera, Lukas; Adam, Vojtech

    2017-01-01

The translation of metallothioneins (MTs) is one of the defense strategies by which organisms protect themselves from metal-induced toxicity. MTs belong to a family of proteins comprising the MT-1, MT-2, MT-3, and MT-4 classes, with multiple isoforms within each class. The main aim of this study was to determine the behavior of MTs in dependence on various externally modelled environments, using electrochemistry. In our study, the mass distribution of MTs was characterized using MALDI-TOF. After that, the adsorptive transfer stripping technique with differential pulse voltammetry was selected for optimization of the electrochemical detection of MTs with regard to accumulation time and pH effects. Our results show that utilization of 0.5 M NaCl, pH 6.4, as the supporting electrolyte provides a highly complicated fingerprint, showing a number of non-resolved voltammograms. Hence, we further resolved the voltammograms exhibiting the broad and overlapping signals using curve fitting. The separated signals were assigned to the electrochemical responses of several MT complexes with zinc(II), cadmium(II), and copper(II), respectively. Our results show that electrochemistry could serve as a great tool for metalloproteomic applications to determine the ratio of metal ion bonds within the target protein structure; however, it provides highly complicated signals, which require further resolution using a proper statistical method, such as curve fitting. PMID:28287470
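
    A sketch of resolving overlapping voltammetric signals by curve fitting: the measured response is modeled as a sum of Gaussian peaks, one per MT-metal complex. The peak positions, widths and heights are illustrative only.

```python
# Multi-Gaussian deconvolution of a synthetic, overlapping voltammogram.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(E, a, mu, s):
    return a * np.exp(-0.5 * ((E - mu) / s) ** 2)

def three_peaks(E, a1, m1, s1, a2, m2, s2, a3, m3, s3):
    return (gaussian(E, a1, m1, s1) + gaussian(E, a2, m2, s2)
            + gaussian(E, a3, m3, s3))

E = np.linspace(-1.4, -0.4, 400)          # potential (V)
i_meas = three_peaks(E, 1.0, -1.15, 0.05, 0.7, -1.00, 0.06, 0.5, -0.82, 0.05)
i_meas += np.random.default_rng(4).normal(0.0, 0.01, E.size)

p0 = [1.0, -1.15, 0.05, 0.7, -1.0, 0.05, 0.5, -0.8, 0.05]
popt, _ = curve_fit(three_peaks, E, i_meas, p0=p0)
print(popt.reshape(3, 3))                 # (height, position, width) per peak
```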

  4. Estimating the Area Under ROC Curve When the Fitted Binormal Curves Demonstrate Improper Shape.

    PubMed

    Bandos, Andriy I; Guo, Ben; Gur, David

    2017-02-01

    The "binormal" model is the most frequently used tool for parametric receiver operating characteristic (ROC) analysis. The binormal ROC curves can have "improper" (non-concave) shapes that are unrealistic in many practical applications, and several tools (eg, PROPROC) have been developed to address this problem. However, due to the general robustness of binormal ROCs, the improperness of the fitted curves might carry little consequence for inferences about global summary indices, such as the area under the ROC curve (AUC). In this work, we investigate the effect of severe improperness of fitted binormal ROC curves on the reliability of AUC estimates when the data arise from an actually proper curve. We designed theoretically proper ROC scenarios that induce severely improper shape of fitted binormal curves in the presence of well-distributed empirical ROC points. The binormal curves were fitted using maximum likelihood approach. Using simulations, we estimated the frequency of severely improper fitted curves, bias of the estimated AUC, and coverage of 95% confidence intervals (CIs). In Appendix S1, we provide additional information on percentiles of the distribution of AUC estimates and bias when estimating partial AUCs. We also compared the results to a reference standard provided by empirical estimates obtained from continuous data. We observed up to 96% of severely improper curves depending on the scenario in question. The bias in the binormal AUC estimates was very small and the coverage of the CIs was close to nominal, whereas the estimates of partial AUC were biased upward in the high specificity range and downward in the low specificity range. Compared to a non-parametric approach, the binormal model led to slightly more variable AUC estimates, but at the same time to CIs with more appropriate coverage. The improper shape of the fitted binormal curve, by itself, ie, in the presence of a sufficient number of well-distributed points, does not imply unreliable AUC-based inferences. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  5. Clarifications regarding the use of model-fitting methods of kinetic analysis for determining the activation energy from a single non-isothermal curve.

    PubMed

    Sánchez-Jiménez, Pedro E; Pérez-Maqueda, Luis A; Perejón, Antonio; Criado, José M

    2013-02-05

This paper provides some clarifications regarding the use of model-fitting methods of kinetic analysis for estimating the activation energy of a process, in response to some results recently published in Chemistry Central Journal. The model-fitting methods of Arrhenius and Savata are used to determine the activation energy of a single simulated curve. It is shown that most kinetic models correctly fit the data, each providing a different value for the activation energy. Therefore it is not really possible to determine the correct activation energy from a single non-isothermal curve. On the other hand, when a set of curves recorded under different heating schedules is used, the correct kinetic parameters can be clearly discerned. Here, it is shown that the activation energy and the kinetic model cannot be unambiguously determined from a single experimental curve recorded under non-isothermal conditions. Thus, the use of a set of curves recorded under different heating schedules is mandatory if model-fitting methods are employed.

  6. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  7. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  8. Bayesian inference in an item response theory model with a generalized student t link function

    NASA Astrophysics Data System (ADS)

    Azevedo, Caio L. N.; Migon, Helio S.

    2012-10-01

In this paper we introduce a new item response theory (IRT) model with a generalized Student t-link function with unknown degrees of freedom (df), named the generalized t-link (GtL) IRT model. In this model we consider only the difficulty parameter in the item response function. GtL is an alternative to the two-parameter logit and probit models, since the degrees of freedom play a role similar to that of the discrimination parameter. However, the behavior of the GtL curves differs from that of the two-parameter models and the usual Student t link, since in GtL the curves obtained for different df can cross the probit curves at more than one latent trait level. The GtL model has properties similar to those of generalized linear mixed models, such as the existence of sufficient statistics and easy parameter interpretation. Also, many techniques of parameter estimation, model fit assessment and residual analysis developed for those models can be used for the GtL model. We develop fully Bayesian estimation and model fit assessment tools through a Metropolis-Hastings step within a Gibbs sampling algorithm. We consider prior sensitivity with respect to the choice of the degrees of freedom. The simulation study indicates that the algorithm recovers all parameters properly. In addition, some Bayesian model fit assessment tools are considered. Finally, a real data set is analyzed using our approach and other usual models. The results indicate that our model fits the data better than the two-parameter models.

  9. Determination of calibration parameters of a VRX CT system using an “Amoeba” algorithm

    PubMed Central

    Jordan, Lawrence M.; DiBianca, Frank A.; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M. Waleed

    2008-01-01

Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge "clouds" created by the detected x-ray photons, i.e., the "physics limit." This paper focuses on implementing a technique called "projective compression," which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm "variable-resolution x-ray" (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown. PMID:19430581

  10. Determination of calibration parameters of a VRX CT system using an "Amoeba" algorithm.

    PubMed

    Jordan, Lawrence M; Dibianca, Frank A; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M Waleed

    2004-01-01

Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge "clouds" created by the detected x-ray photons, i.e., the "physics limit." This paper focuses on implementing a technique called "projective compression," which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm "variable-resolution x-ray" (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown.
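
    A sketch of the simplex ("Amoeba"-style, i.e. Nelder-Mead) calibration step described above: the rotating pin traces a sinusoid across detector cells, and simplex minimization fits the geometric parameters. The three-parameter model is a simplification of the full VRX geometry, and all values are invented.

```python
# Nelder-Mead fit of a parametric sinusoid to a simulated pin sinogram.
import numpy as np
from scipy.optimize import minimize

theta = np.linspace(0.0, 2 * np.pi, 180, endpoint=False)  # projection angles
true = 120.0 * np.sin(theta + 0.3) + 256.0                # pin center (cells)
meas = true + np.random.default_rng(5).normal(0.0, 0.5, theta.size)

def sse(p):
    A, phi, u0 = p
    return np.sum((A * np.sin(theta + phi) + u0 - meas) ** 2)

fit = minimize(sse, x0=[100.0, 0.0, 250.0], method="Nelder-Mead")
print(fit.x, np.sqrt(fit.fun / theta.size))   # parameters, RMS deviation (cells)
```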

  11. Wing Shape Sensing from Measured Strain

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2015-01-01

A new two-step theory is investigated for predicting the deflection and slope of an entire structure using strain measurements at discrete locations. In the first step, the measured strain is fitted using a piecewise least squares curve fitting method together with the cubic spline technique. These fitted strains are integrated twice to obtain deflection data along the fibers. In the second step, the computed deflections along the fibers are combined with a finite element model of the structure in order to extrapolate the deflection and slope of the entire structure through the use of the System Equivalent Reduction and Expansion Process. The theory is first validated on a computational model, a cantilevered rectangular wing. It is then applied to test data from a cantilevered swept wing model.

  12. Curve Fitting via the Criterion of Least Squares. Applications of Algebra and Elementary Calculus to Curve Fitting. [and] Linear Programming in Two Dimensions: I. Applications of High School Algebra to Operations Research. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Units 321, 453.

    ERIC Educational Resources Information Center

    Alexander, John W., Jr.; Rosenberg, Nancy S.

    This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…

  13. On representing the prognostic value of continuous gene expression biomarkers with the restricted mean survival curve.

    PubMed

    Eng, Kevin H; Schiller, Emily; Morrell, Kayla

    2015-11-03

    Researchers developing biomarkers for cancer prognosis from quantitative gene expression data are often faced with an odd methodological discrepancy: while Cox's proportional hazards model, the appropriate and popular technique, produces a continuous and relative risk score, it is hard to cast the estimate in clear clinical terms like median months of survival and percent of patients affected. To produce a familiar Kaplan-Meier plot, researchers commonly make the decision to dichotomize a continuous (often unimodal and symmetric) score. It is well known in the statistical literature that this procedure induces significant bias. We illustrate the liabilities of common techniques for categorizing a risk score and discuss alternative approaches. We promote the use of the restricted mean survival (RMS) and the corresponding RMS curve that may be thought of as an analog to the best fit line from simple linear regression. Continuous biomarker workflows should be modified to include the more rigorous statistical techniques and descriptive plots described in this article. All statistics discussed can be computed via standard functions in the Survival package of the R statistical programming language. Example R language code for the RMS curve is presented in the appendix.

  14. A Review of Correlated Noise in Exoplanet Light Curves

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Harrington, J.; Blecic, J.; Hardy, R. A.; Hardin, M.

    2013-10-01

A number of the occultation light curves of exoplanets exhibit time-correlated residuals (a.k.a. correlated or red noise) in their model fits. The correlated noise might arise from inaccurate models or unaccounted-for astrophysical or telescope systematics. A correct assessment of the correlated noise is important to determine true signal-to-noise ratios of a planet's physical parameters. Yet, there are no in-depth statistical studies in the literature for some of the techniques currently used (RMS-vs-bin-size plot, prayer beads, and wavelet-based modeling). We subjected these correlated-noise assessment techniques to basic tests on synthetic data sets to characterize their features and limitations. Initial results indicate, for example, that the RMS-vs-bin-size plots sometimes present artifacts when the bin size is similar to the observation duration. Further, the prayer-bead method does not correctly increase the uncertainties to compensate for the loss of accuracy when there is correlated noise. We have applied these techniques to several Spitzer secondary-eclipse hot-Jupiter light curves and discuss their implications. This work was supported in part by NASA planetary atmospheres grant NNX13AF38G and Astrophysics Data Analysis Program NNX12AI69G.
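
    A sketch of the RMS-vs-bin-size diagnostic mentioned above: for pure white noise the binned RMS falls as 1/sqrt(N), and flattening above that expectation signals time-correlated (red) noise. The residuals here are synthetic.

```python
# Binned-RMS test on synthetic residuals containing injected red noise.
import numpy as np

rng = np.random.default_rng(6)
resid = rng.normal(0.0, 1.0, 4096)
red = np.convolve(rng.normal(0.0, 0.3, 4096), np.ones(64) / 64, mode="same")
resid += red                                   # inject correlated noise

for n in [1, 2, 4, 8, 16, 32, 64, 128]:        # bin sizes
    m = resid.size // n                        # number of bins
    binned = resid[: m * n].reshape(m, n).mean(axis=1)
    white = resid.std() / np.sqrt(n) * np.sqrt(m / (m - 1))
    print(n, round(binned.std(), 4), round(white, 4))  # measured vs expected
```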

  15. An Hybrid Glass/hemp Fibers Solution Frp Pipes: Technical and Economic Advantages of Hand Lay up VS Light Rtm

    NASA Astrophysics Data System (ADS)

    Cicala, G.; Cristaldi, G.; Recca, G.; Ziegmann, G.; ElSabbagh, A.; Dickert, M.

    2008-08-01

The aim of the present research was to investigate the replacement of glass fibers with hemp fibers for applications in the piping industry. The choice of hemp fibers was mainly related to the need, expressed by some companies operating in this sector, for cost reduction without adversely reducing the performance of the pipes. Two processing techniques, namely hand lay up and light RTM, were evaluated. The pipe selected for the study was a curved fitting (90°) flanged at both ends. The fitting must withstand an internal pressure of 10 bar and the presence of acid aqueous solutions. The original lay-up used to build the pipe is a sequence of C-glass, glass mats and glass fabric. A commercial epoxy vinyl ester resin was used as the thermoset matrix. Hemp fiber mats were selected as a potential substitute for glass fiber mats because of their low cost and ready availability from different commercial sources. The data obtained from the mechanical characterization were used to define a favorable design of the pipe using hemp mats as the internal layer. The proposed design for the fittings allowed for a cost reduction of about 24% and a weight saving of about 23% without any drawback in terms of the final performance. The light RTM technique was developed specifically for the manufacturing of the curved pipe. The comparison between hand lay up and light RTM showed a substantial cost reduction when light RTM was used.

  16. AN HYBRID GLASS/HEMP FIBERS SOLUTION FRP PIPES: TECHNICAL AND ECONOMIC ADVANTAGES OF HAND LAY UP VS LIGHT RTM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cicala, G.; Cristaldi, G.; Recca, G.

    2008-08-28

The aim of the present research was to investigate the replacement of glass fibers with hemp fibers for applications in the piping industry. The choice of hemp fibers was mainly related to the need, expressed by some companies operating in this sector, for cost reduction without adversely reducing the performance of the pipes. Two processing techniques, namely hand lay up and light RTM, were evaluated. The pipe selected for the study was a curved fitting (90 deg.) flanged at both ends. The fitting must withstand an internal pressure of 10 bar and the presence of acid aqueous solutions. The original lay-up used to build the pipe is a sequence of C-glass, glass mats and glass fabric. A commercial epoxy vinyl ester resin was used as the thermoset matrix. Hemp fiber mats were selected as a potential substitute for glass fiber mats because of their low cost and ready availability from different commercial sources. The data obtained from the mechanical characterization were used to define a favorable design of the pipe using hemp mats as the internal layer. The proposed design for the fittings allowed for a cost reduction of about 24% and a weight saving of about 23% without any drawback in terms of the final performance. The light RTM technique was developed specifically for the manufacturing of the curved pipe. The comparison between hand lay up and light RTM showed a substantial cost reduction when light RTM was used.

  17. Least-Squares Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1990-01-01

    Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes polynomial providing least-squares best fit to uniformly spaced data. Enables user to specify tolerable least-squares error in fit or degree of polynomial. AKLSQF returns polynomial and actual least-squares-fit error incurred in operation. Data supplied to routine either by direct keyboard entry or via file. Written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler.
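
    AKLSQF itself is a BASIC program; the block below is a rough Python analogue of the idea: raise the polynomial degree until the least-squares error meets a user-specified tolerance, then report the polynomial and the error actually incurred. The data are synthetic and uniformly spaced.

```python
# Degree-escalating least-squares polynomial fit with an error tolerance.
import numpy as np

x = np.linspace(0.0, 1.0, 50)                  # uniformly spaced abscissae
y = np.cos(3.0 * x) + np.random.default_rng(7).normal(0.0, 0.01, x.size)

tol, degree = 0.02, 0
while True:
    coef = np.polyfit(x, y, degree)
    err = np.sqrt(np.mean((np.polyval(coef, x) - y) ** 2))   # RMS fit error
    if err <= tol or degree >= 10:             # stop at tolerance or a cap
        break
    degree += 1

print(degree, err, coef)                       # degree, achieved error, polynomial
```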

  18. Analysis Test of Understanding of Vectors with the Three-Parameter Logistic Model of Item Response Theory and Item Response Curves Technique

    ERIC Educational Resources Information Center

    Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan

    2016-01-01

    This study investigated the multiple-choice test of understanding of vectors (TUV), by applying item response theory (IRT). The difficulty, discriminatory, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the parscale program. The TUV ability is an ability parameter, here estimated assuming…
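
    A sketch of the three-parameter logistic (3PL) item response function used in the TUV analysis: P(theta) = c + (1 - c) / (1 + exp(-a(theta - b))), with discrimination a, difficulty b and guessing c. The parameter values are illustrative.

```python
# Evaluating a 3PL item response curve on a grid of latent abilities.
import numpy as np

def p_3pl(theta, a, b, c):
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3.0, 3.0, 7)                  # latent ability grid
print(p_3pl(theta, a=1.2, b=0.5, c=0.2).round(3))  # item response curve values
```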

  19. Beta/alpha continuous air monitor

    DOEpatents

    Becker, G.K.; Martz, D.E.

    1988-06-27

A single deep layer silicon detector in combination with a microcomputer, recording both alpha and beta activity and the energy of each pulse, distinguishing energy peaks using a novel curve fitting technique to reduce the natural alpha counts in the energy region where plutonium and other transuranic alpha emitters are present, and using a novel algorithm to strip out radon daughter contribution to actual beta counts. 7 figs.

  20. Determination of time of death in forensic science via a 3-D whole body heat transfer model.

    PubMed

    Bartgis, Catherine; LeBrun, Alexander M; Ma, Ronghui; Zhu, Liang

    2016-12-01

This study is focused on developing a whole body heat transfer model to accurately simulate temperature decay in a body postmortem. The initial steady state temperature field is simulated first, and the calculated weighted average body temperature is used to determine the overall heat transfer coefficient at the skin surface, based on thermal equilibrium before death. The transient temperature field postmortem is then simulated using the same boundary condition, and temperature decay curves at several body locations are generated for a time frame of 24 h. For practical purposes, curve fitting techniques are used to replace the simulations with a proposed exponential formula with an initial time delay. The temperature field obtained for the human body agrees very well with that in the literature. The proposed exponential formula provides an excellent fit, with an R2 value larger than 0.998. For the brain and internal organ sites, the initial time delay varies from 1.6 to 2.9 h, during which the temperature at the measuring site does not change significantly from its original value. The curve-fitted time constant gives a measurement window after death of between 8 h and 31 h if the brain site is used, and the window is 60-95% larger at the internal organ site. The time constant is larger when the body is exposed to colder air, since a person usually wears more clothing when it is cold outside to keep the body warm and comfortable. We conclude that a one-size-fits-all approach would lead to incorrect estimation of time of death and that it is crucial to generate a database of cooling curves taking into consideration all the important factors, such as body size and shape, environmental conditions, etc., thereby leading to accurate determination of time of death. Copyright © 2016 Elsevier Ltd. All rights reserved.
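
    A sketch of the "exponential with an initial time delay" form described above: the site temperature stays near its initial value for a delay t_d, then decays exponentially toward ambient with time constant tau. The body and ambient temperatures and all fitted values are synthetic.

```python
# Fitting a delayed-exponential cooling curve to synthetic postmortem data.
import numpy as np
from scipy.optimize import curve_fit

def cooling(t, t_d, tau, T0=37.0, T_env=20.0):
    return np.where(t < t_d, T0, T_env + (T0 - T_env) * np.exp(-(t - t_d) / tau))

t = np.linspace(0.0, 24.0, 49)                   # hours postmortem
T = cooling(t, 2.2, 7.5) + np.random.default_rng(8).normal(0.0, 0.1, t.size)

(t_d, tau), _ = curve_fit(cooling, t, T, p0=[1.5, 6.0])
# Inverting for time since death from a measured temperature T_m (> ambient):
# t = t_d + tau * ln((T0 - T_env) / (T_m - T_env))
print(t_d, tau)
```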

  1. The Use of Statistically Based Rolling Supply Curves for Electricity Market Analysis: A Preliminary Look

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkin, Thomas J; Larson, Andrew; Ruth, Mark F

In light of the changing electricity resource mixes across the United States, an important question in electricity modeling is how additions and retirements of generation, including additions in variable renewable energy (VRE) generation, could impact markets by changing hourly wholesale energy prices. Instead of using resource-intensive production cost models (PCMs) or building and using simple generator supply curves, this analysis uses a 'top-down' approach based on regression analysis of hourly historical energy and load data to estimate the impact of supply changes on wholesale electricity prices, provided the changes are not so substantial that they fundamentally alter the market and the dispatch-order-driven behavior of non-retiring units. The rolling supply curve (RSC) method used in this report estimates the shape of the supply curve that fits historical hourly price and load data for given time intervals, such as two weeks, and then repeats this on a rolling basis through the year. These supply curves can then be modified on an hourly basis to reflect the impact of generation retirements or additions, including VRE, and then reapplied to the same load data to estimate the change in hourly electricity price. The choice of duration over which these RSCs are estimated has a significant impact on goodness of fit. For example, in PJM in 2015, moving from fitting one curve per year to 26 rolling two-week supply curves improves the standard error of the regression from $16/MWh to $6/MWh and the R-squared of the estimate from 0.48 to 0.76. We illustrate the potential use and value of the RSC method by estimating wholesale price effects under various generator retirement and addition scenarios, and we discuss potential limits of the technique, some of which are inherent. The ability to do this type of analysis is important to a wide range of market participants and other stakeholders, and it may have a role in complementing use of, or providing calibrating insights to, PCMs.

  2. Investigation of activation cross-sections of deuteron induced reactions on vanadium up to 40 MeV

    NASA Astrophysics Data System (ADS)

    Tárkányi, F.; Ditrói, F.; Takács, S.; Hermanne, A.; Baba, M.; Ignatyuk, A. V.

    2011-08-01

    Experimental excitation functions for deuteron induced reactions up to 40 MeV on natural vanadium were measured with the activation method using a stacked foil irradiation technique. From high resolution gamma spectrometry cross-section data for the production of 51Cr, 48V, 48,47,46Sc and 47Ca were determined. Comparisons with the earlier published data are presented and results for values predicted by different theoretical codes are included. Thick target yields were calculated from a fit to our experimental excitation curves and compared with the earlier experimental data. Depth distribution curves used for thin layer activation (TLA) are also presented.

  3. Analysis of the multigroup model for muon tomography based threat detection

    NASA Astrophysics Data System (ADS)

    Perry, J. O.; Bacon, J. D.; Borozdin, K. N.; Fabritius, J. M.; Morris, C. L.

    2014-02-01

    We compare different algorithms for detecting a 5 cm tungsten cube using cosmic ray muon technology. In each case, a simple tomographic technique was used for position reconstruction, but the scattering angles were used differently to obtain a density signal. Receiver operating characteristic curves were used to compare images made using average angle squared, median angle squared, average of the squared angle, and a multi-energy group fit of the angular distributions for scenes with and without a 5 cm tungsten cube. The receiver operating characteristic curves show that the multi-energy group treatment of the scattering angle distributions is the superior method for image reconstruction.

  4. Simplified estimation of age-specific reference intervals for skewed data.

    PubMed

    Wright, E M; Royston, P

    1997-12-30

Age-specific reference intervals are commonly used in medical screening and clinical practice, where interest lies in the detection of extreme values. Many different statistical approaches have been published on this topic. The advantages of a parametric method are that it necessarily produces smooth centile curves, the entire density is estimated, and an explicit formula is available for the centiles. The method proposed here is a simplified version of a recent approach proposed by Royston and Wright. Basic transformations of the data and multiple regression techniques are combined to model the mean, standard deviation and skewness. Using these simple tools, which are implemented in almost all statistical computer packages, age-specific reference intervals may be obtained. The scope of the method is illustrated by fitting models to several real data sets and assessing each model using goodness-of-fit techniques.

  5. Validation and application of single breath cardiac output determinations in man

    NASA Technical Reports Server (NTRS)

    Loeppky, J. A.; Fletcher, E. R.; Myhre, L. G.; Luft, U. C.

    1986-01-01

    The results of a procedure for estimating cardiac output by a single-breath technique (Qsb), obtained in healthy males during supine rest and during exercise on a bicycle ergometer, were compared with the results on cardiac output obtained by the direct Fick method (QF). The single breath maneuver consisted of a slow exhalation to near residual volume following an inspiration somewhat deeper than normal. The Qsb calculations incorporated an equation of the CO2 dissociation curve and a 'moving spline' sequential curve-fitting technique to calculate the instantaneous R from points on the original expirogram. The resulting linear regression equation indicated a 24-percent underestimation of QF by the Qsb technique. After applying a correction, the Qsb-QF relationship was improved. A subsequent study during upright rest and exercise to 80 percent of VO2(max) in 6 subjects indicated a close linear relationship between Qsb and VO2 for all 95 values obtained, with slope and intercept close to those in published studies in which invasive cardiac output measurements were used.

  6. A new interferential multispectral image compression algorithm based on adaptive classification and curve-fitting

    NASA Astrophysics Data System (ADS)

    Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke

    2008-08-01

    A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into major-interference region and minor-interference region. Different approximating functions are then constructed for two kinds of regions respectively. For the major interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by curve-fitting method. For the minor interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.

  7. Toward a Micro-Scale Acoustic Direction-Finding Sensor with Integrated Electronic Readout

    DTIC Science & Technology

    2013-06-01

    [Indexing fragments from the report's figure and table lists ("measurements with curve fits", "Failure testing", "Sensor parameters", "Curve fit parameters") and a text excerpt: "...elastic, the quantity of interest is the elastic stiffness. In a typical nanoindentation test, the loading curve is nonlinear due to combined plastic..."]

  8. Transonic Compressor: Program System TXCO for Data Acquisition and On-Line Reduction.

    DTIC Science & Technology

    1980-10-01

    [Garbled OCR of a Fortran data-acquisition listing; the recoverable fragments refer to linear curve fits, a variable SECON holding the intercept of a linear curve fit (as from subroutine CURVE), and a flow chart of subroutine CALIB.]

  9. Non-linear Multidimensional Optimization for use in Wire Scanner Fitting

    NASA Astrophysics Data System (ADS)

    Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; Center Advanced Studies of Accelerators Collaboration

    2014-03-01

    To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle-swarm. In this Python-implemented approach, augmenting the locally-convergent NCG with one of the globally-convergent methods ensures the quality, robustness, and automation of curve-fitting. After comparing the methods, we establish that given an initial data-derived guess, each finds a solution with the same chi-square- a measurement of the agreement of the fit to the data. NCG is the fastest method, so it is the first to attempt data-fitting. The curve-fitting procedure escalates to one of the globally-convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method allows for the most optimal signal fit and can be easily applied to similar problems.
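
    A minimal sketch of the escalation strategy described above, using SciPy rather than the authors' code: a fast, locally convergent fit is tried first (plain CG here standing in for Newton-CG, which would require an explicit gradient), and the fit escalates to globally convergent differential evolution only if the local attempt fails; the data, initial guess and acceptance test are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize, differential_evolution

    def gaussian(x, amp, mu, sigma, offset):
        return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

    def chi2(p, x, y):
        return np.sum((y - gaussian(x, *p)) ** 2)

    rng = np.random.default_rng(1)
    x = np.linspace(-5, 5, 200)
    y = gaussian(x, 3.0, 0.7, 0.9, 0.2) + 0.05 * rng.standard_normal(x.size)

    # Data-derived initial guess
    p0 = [y.max() - y.min(), x[np.argmax(y)], 1.0, y.min()]

    # Fast, locally convergent attempt first
    res = minimize(chi2, p0, args=(x, y), method="CG")

    # Escalate to a globally convergent method only if the local fit fails
    # (the chi-square acceptance threshold here is an invented stand-in)
    if not res.success or res.fun > 2 * 0.05**2 * x.size:
        bounds = [(0, 10), (-5, 5), (0.01, 5), (-1, 1)]
        res = differential_evolution(chi2, bounds, args=(x, y), seed=1)

    print(res.x, res.fun)
    ```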

  10. A new world survey expression for cosmic ray vertical intensity vs. depth in standard rock

    NASA Technical Reports Server (NTRS)

    Crouch, M.

    1985-01-01

    The cosmic ray data on vertical intensity versus depth below 10 to the 5th power g sq cm is fitted to a 5 parameter empirical formula to give an analytical expression for interpretation of muon fluxes in underground measurements. This expression updates earlier published results and complements the more precise curves obtained by numerical integration or Monte Carlo techniques in which the fit is made to an energy spectrum at the top of the atmosphere. The expression is valid in the transitional region where neutrino induced muons begin to be important, as well as at great depths where this component becomes dominant.
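
    The abstract does not reproduce the five-parameter formula; a common form for such depth-intensity relations, assumed here purely for illustration, is a sum of two exponentials in depth plus a constant neutrino-induced floor, fitted in log space to invented data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def log_intensity(X, a1, a2, a3, a4, a5):
        """log of vertical muon intensity vs depth X (10^5 g/cm^2, standard rock).

        Two exponential terms for atmospheric muons plus a constant floor,
        exp(a5), for the neutrino-induced component that dominates at depth.
        """
        return np.log(np.exp(a1 + a2 * X) + np.exp(a3 + a4 * X) + np.exp(a5))

    # Invented depth-intensity data spanning the transition region
    X = np.linspace(1, 45, 20)
    rng = np.random.default_rng(12)
    I_obs = np.exp(log_intensity(X, -10.0, -0.9, -13.0, -0.4, -29.2))
    I_obs *= 1 + 0.05 * rng.standard_normal(X.size)

    # Fit in log space so the ~8 decades of dynamic range are weighted evenly
    popt, _ = curve_fit(log_intensity, X, np.log(I_obs),
                        p0=[-9.0, -0.8, -12.0, -0.35, -28.0], maxfev=50000)
    print("fitted parameters:", np.round(popt, 2))
    ```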

  11. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    ERIC Educational Resources Information Center

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  12. Edge detection and mathematic fitting for corneal surface with Matlab software.

    PubMed

    Di, Yue; Li, Mei-Yan; Qiao, Tong; Lu, Na

    2017-01-01

    The aim was to select the optimal edge detection method to identify the corneal surface, and to compare three curve-fitting equations, using Matlab software. Fifteen subjects were recruited. The corneal images from optical coherence tomography (OCT) were imported into Matlab software. Five edge detection methods (Canny, Log, Prewitt, Roberts, Sobel) were used to identify the corneal surface. Then two manual identification methods (ginput and getpts) were applied to identify the edge coordinates respectively. The differences among these methods were compared. A binomial curve (y = Ax^2 + Bx + C), a polynomial curve [p(x) = p_1 x^n + p_2 x^(n-1) + ... + p_n x + p_(n+1)] and a conic section (Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0) were used to fit the corneal surface respectively. The relative merits of the three fitted curves were analysed. Finally, the eccentricity (e) obtained by corneal topography and by the conic section were compared with a paired t-test. All five edge detection algorithms yielded continuous coordinates indicating the edge of the corneal surface. The ordinates from manual identification were close to the inside of the actual edges. The binomial curve was greatly affected by tilt angle. The polynomial curve lacked geometrical properties and was unstable. The conic section could provide the tilted symmetry axis, eccentricity, circle centre, etc. There were no significant differences between the 'e' values from corneal topography and from the conic section (t = 0.9143, P = 0.3760 > 0.05). It is feasible to simulate the corneal surface with a mathematical curve in Matlab software. Edge detection has better repeatability and higher efficiency, and the manual identification approach is an indispensable complement to detection. Polynomial and conic section fits are both viable methods for corneal curve fitting; the conic curve was the optimal choice based on its specific geometrical properties.
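
    As a sketch of the conic-section option (not the paper's Matlab code), the general conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 can be fitted to edge coordinates by linear least squares, taking the right singular vector of the design matrix with the smallest singular value; the edge points below are synthetic.

    ```python
    import numpy as np

    def fit_conic(x, y):
        """Fit A x^2 + B xy + C y^2 + D x + E y + F = 0 by linear least squares.

        The coefficient vector is the right singular vector of the design
        matrix with the smallest singular value (unit-norm constraint).
        """
        M = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
        _, _, vt = np.linalg.svd(M)
        return vt[-1]

    # Synthetic "corneal edge": an ellipse arc with noise
    rng = np.random.default_rng(2)
    t = np.linspace(0.3, 2.8, 120)
    x = 7.8 * np.cos(t) + 0.02 * rng.standard_normal(t.size)
    y = 7.0 * np.sin(t) + 6.0 + 0.02 * rng.standard_normal(t.size)

    A, B, C, D, E, F = fit_conic(x, y)
    # For an ellipse the discriminant B^2 - 4AC is negative; axis tilt and
    # eccentricity follow from the fitted coefficients
    print("discriminant:", B**2 - 4 * A * C)
    ```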

  13. Multimodal determination of Rayleigh dispersion and attenuation curves using the circle fit method

    NASA Astrophysics Data System (ADS)

    Verachtert, R.; Lombaert, G.; Degrande, G.

    2018-03-01

    This paper introduces the circle fit method for the determination of multi-modal Rayleigh dispersion and attenuation curves as part of a Multichannel Analysis of Surface Waves (MASW) experiment. The wave field is transformed to the frequency-wavenumber (fk) domain using a discretized Hankel transform. In a Nyquist plot of the fk-spectrum, displaying the imaginary part against the real part, the Rayleigh wave modes correspond to circles. The experimental Rayleigh dispersion and attenuation curves are derived from the angular sweep of the central angle of these circles. The method can also be applied to the analytical fk-spectrum of the Green's function of a layered half-space in order to compute dispersion and attenuation curves, as an alternative to solving an eigenvalue problem. A MASW experiment is subsequently simulated for a site with a regular velocity profile and a site with a soft layer trapped between two stiffer layers. The performance of the circle fit method to determine the dispersion and attenuation curves is compared with the peak picking method and the half-power bandwidth method. The circle fit method is found to be the most accurate and robust method for the determination of the dispersion curves. When determining attenuation curves, the circle fit method and half-power bandwidth method are accurate if the mode exhibits a sharp peak in the fk-spectrum. Furthermore, simulated and theoretical attenuation curves determined with the circle fit method agree very well. A similar correspondence is not obtained when using the half-power bandwidth method. Finally, the circle fit method is applied to measurement data obtained for a MASW experiment at a site in Heverlee, Belgium. In order to validate the soil profile obtained from the inversion procedure, force-velocity transfer functions were computed and found in good correspondence with the experimental transfer functions, especially in the frequency range between 5 and 80 Hz.
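
    The circle-fitting step can be illustrated generically with a Kåsa-style algebraic fit of a circle to points in the complex (Nyquist) plane; this is a sketch of the geometric idea only, not the authors' implementation, and the spectrum samples are invented.

    ```python
    import numpy as np

    def kasa_circle_fit(z):
        """Algebraic circle fit to complex points z (Nyquist-plane samples)."""
        x, y = z.real, z.imag
        M = np.column_stack([x, y, np.ones_like(x)])
        c, *_ = np.linalg.lstsq(M, x**2 + y**2, rcond=None)
        a, b = c[0] / 2, c[1] / 2
        r = np.sqrt(c[2] + a**2 + b**2)
        return a, b, r

    # Invented fk-spectrum samples tracing part of a modal circle, plus noise
    rng = np.random.default_rng(3)
    theta = np.linspace(0.2, 4.0, 60)
    z = (1.5 - 2.0j) + 3.0 * np.exp(1j * theta) + 0.03 * (
        rng.standard_normal(60) + 1j * rng.standard_normal(60))

    a, b, r = kasa_circle_fit(z)
    print(f"centre = ({a:.3f}, {b:.3f}), radius = {r:.3f}")
    ```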

  14. New microscale constitutive model of human trabecular bone based on depth sensing indentation technique.

    PubMed

    Pawlikowski, Marek; Jankowski, Krzysztof; Skalski, Konstanty

    2018-05-30

    A new constitutive model for human trabecular bone is presented in this study. As the model is based on indentation tests performed on single trabeculae, it is formulated at the microscale. The constitutive law takes into account the non-linear viscoelasticity of the tissue. The elastic response is described by the hyperelastic Mooney-Rivlin model, while the viscoelastic effects are considered by means of the hereditary integral, in which stress depends on both time and strain. The material constants in the constitutive equation are identified on the basis of stress relaxation tests and indentation tests using a curve-fitting procedure. The constitutive model is implemented into the finite element package Abaqus® by means of a UMAT subroutine. The curve-fitting error is low, and the viscoelastic behaviour of the tissue predicted by the proposed constitutive model corresponds well to the realistic response of the trabecular bone. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. The development of an intraruminal nylon bag technique using non-fistulated animals to assess the rumen degradability of dietary plant materials.

    PubMed

    Pagella, J H; Mayes, R W; Pérez-Barbería, F J; Ørskov, E R

    2018-01-01

    Although the conventional in situ ruminal degradability method is a relevant tool to describe the nutritional value of ruminant feeds, its need for rumen-fistulated animals may impose a restriction on its use when considering animal welfare issues and cost. The aim of the present work was to develop a ruminal degradability technique which avoids using surgically prepared animals. The concept was to orally dose a series of porous bags containing the test feeds at different times before slaughter, when the bags would be removed from the rumen for degradation measurement. Bags, smaller than those used in the conventional nylon bag technique, were made from woven nylon fabric, following two shape designs (rectangular flat shape, tetrahedral shape) and were fitted with one of three types of device for preventing their regurgitation. These bags were used in two experiments with individually housed non-pregnant, non-lactating sheep, as host animals for the in situ ruminal incubation of forage substrates. The bags were closed at the top edge by machine stitching and wrapped in tissue paper before oral dosing. Standard times for ruminal incubation of substrates in all of the tests were 4, 8, 16, 24, 48, 72 and 96 h before slaughter. The purpose of the first experiment was to compare the effectiveness of the three anti-regurgitation device designs, constructed from nylon cable ties ('Z-shaped', ARD1; 'double Z-shaped', ARD2; 'umbrella-shaped', ARD3), and to observe whether viable degradation curves could be generated using grass hay as the substrate. In the second experiment, three other substrates (perennial ryegrass, red clover and barley straw) were compared using flat and tetrahedral bags fitted with type ARD1 anti-regurgitation devices. Non-linear mixed-effect regression models were used to fit asymptotic exponential curves of the percentage dry matter loss of the four substrates against time of incubation in the reticulorumen, and the effect of type of anti-regurgitation device and the shape of nylon bag. All three devices were highly successful at preventing regurgitation with 93% to 100% of dosed bags being recovered in the reticulorumen at slaughter. Ruminal degradation data obtained for tested forages were in accordance with those expected from the conventional degradability technique using fistulated animals, with no significant differences in the asymptotic values of degradation curves between bag shape or anti-regurgitation device. The results of this research demonstrate the potential for using a small bag technique with intact sheep to characterise the in situ ruminal degradability of roughages.
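
    The asymptotic exponential curves referred to are commonly of the Ørskov-McDonald form p(t) = a + b(1 - e^(-ct)); a minimal fit of that form to invented dry-matter-loss data (ignoring the paper's mixed-effect structure) looks like this:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def degradation(t, a, b, c):
        """Asymptotic exponential: a + b*(1 - exp(-c*t)).

        a = rapidly soluble fraction (%), b = degradable fraction (%),
        c = fractional degradation rate (1/h); a + b is the asymptote.
        """
        return a + b * (1.0 - np.exp(-c * t))

    # Standard incubation times (h) and invented dry-matter losses (%)
    t = np.array([4, 8, 16, 24, 48, 72, 96], dtype=float)
    dm_loss = np.array([22.0, 30.5, 41.0, 48.0, 58.5, 62.0, 63.5])

    popt, pcov = curve_fit(degradation, t, dm_loss, p0=[15, 50, 0.05])
    a, b, c = popt
    print(f"a={a:.1f}%, b={b:.1f}%, c={c:.3f}/h, asymptote={a + b:.1f}%")
    ```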

  16. Experimental characterization of wingtip vortices in the near field using smoke flow visualizations

    NASA Astrophysics Data System (ADS)

    Serrano-Aguilera, J. J.; García-Ortiz, J. Hermenegildo; Gallardo-Claros, A.; Parras, L.; del Pino, C.

    2016-08-01

    In order to predict the axial development of the wingtip vortex strength, an accurate theoretical model is required. Several experimental techniques have been used to that end, e.g. PIV or hot-wire anemometry, but they imply significant cost and effort. For this reason, we have performed experiments using the smoke-wire technique to visualize smoke streaks in six planes perpendicular to the main stream flow direction. Using this visualization technique, we obtained quantitative information regarding the vortex velocity field by means of Batchelor's model for two chord-based Reynolds numbers, Re_c = 3.33×10^4 and 10^5. This theoretical vortex model has been introduced into the integration of the ordinary differential equations which describe the temporal evolution of streak lines as a function of two parameters: the swirl number, S, and the virtual axial origin, z̄_0. We have applied two different procedures to minimize the distance between experimental and theoretical flow patterns: individual curve fitting at six different control planes in the streamwise direction, and global curve fitting over all the control planes simultaneously. Both sets of results have been compared with those provided by del Pino et al. (Phys Fluids 23(013):602, 2011b. doi: 10.1063/1.3537791), finding good agreement. Finally, we have observed a weak influence of the Reynolds number on the values of S and z̄_0 at low-to-moderate Re_c. This experimental technique is proposed as a low-cost alternative for characterizing wingtip vortices based on flow visualizations.

  17. Materials and Modulators for 3D Displays

    DTIC Science & Technology

    2002-08-01

    [Figure-caption fragments only: a cos^2(θ) fit to polarization data at 1243 nm, with 0, 180 and 360 deg corresponding to parallel polarization; curves for different dwell times (solid bold, dashed bold) and the static case (thin dashed); and schematics of a free-space setup, with a two-photon spectrum of rhodamine B whose two peaks can be fit by two Lorentzian curves.]

  18. A New Approach for Obtaining Cosmological Constraints from Type Ia Supernovae using Approximate Bayesian Computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jennings, Elise; Wolf, Rachel; Sako, Masao

    2016-11-09

    Cosmological parameter estimation techniques that robustly account for systematic measurement uncertainties will be crucial for the next generation of cosmological surveys. We present a new analysis method, superABC, for obtaining cosmological constraints from Type Ia supernova (SN Ia) light curves using Approximate Bayesian Computation (ABC) without any likelihood assumptions. The ABC method works by using a forward model simulation of the data where systematic uncertainties can be simulated and marginalized over. A key feature of the method presented here is the use of two distinct metrics, the 'Tripp' and 'Light Curve' metrics, which allow us to compare the simulated data to the observed data set. The Tripp metric takes as input the parameters of models fit to each light curve with the SALT-II method, whereas the Light Curve metric uses the measured fluxes directly without model fitting. We apply the superABC sampler to a simulated data set of ~1000 SNe corresponding to the first season of the Dark Energy Survey Supernova Program. Varying Ω_m, w_0, α, β and a magnitude offset parameter, with no systematics we obtain Δ(w_0) = w_0^true - w_0^best fit = -0.036 ± 0.109 (a ~11% 1σ uncertainty) using the Tripp metric and Δ(w_0) = -0.055 ± 0.068 (a ~7% 1σ uncertainty) using the Light Curve metric. Including 1% calibration uncertainties in four passbands, adding 4 more parameters, we obtain Δ(w_0) = -0.062 ± 0.132 (a ~14% 1σ uncertainty) using the Tripp metric. Overall we find a 17% increase in the uncertainty on w_0 with systematics compared to without. We contrast this with an MCMC approach where systematic effects are approximately included. We find that the MCMC method slightly underestimates the impact of calibration uncertainties for this simulated data set.
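
    The core ABC idea is compact enough to sketch: draw parameters from the prior, forward-simulate data, and keep draws whose distance metric falls within a tolerance of the observed summary. The toy model below (a single mean parameter) is far simpler than superABC but shows the likelihood-free logic.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # "Observed" data from a hidden true parameter
    theta_true = 0.7
    observed = rng.normal(theta_true, 1.0, 500)

    def metric(sim, obs):
        """Distance between simulated and observed data summaries."""
        return abs(sim.mean() - obs.mean())

    # ABC rejection sampling: no likelihood is ever evaluated
    accepted = []
    for _ in range(20000):
        theta = rng.uniform(-3, 3)             # draw from the prior
        sim = rng.normal(theta, 1.0, 500)      # forward-model simulation
        if metric(sim, observed) < 0.05:       # keep if close to the data
            accepted.append(theta)

    accepted = np.array(accepted)
    print(f"posterior mean ~ {accepted.mean():.3f} +/- {accepted.std():.3f}")
    ```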

  19. Dynamic Analysis of Sounding Rocket Pneumatic System Revision

    NASA Technical Reports Server (NTRS)

    Armen, Jerald

    2010-01-01

    The recent fusion of decades of advancements in mathematical models, numerical algorithms and curve fitting techniques marked the beginning of a new era in the science of simulation, which is becoming indispensable to the study of rockets and aerospace analysis. In the pneumatic system, which is the main focus of this paper, particular emphasis will be placed on the effects of compressible flow in the attitude control system of a sounding rocket.

  20. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    USGS Publications Warehouse

    Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magunsson, Arni; Martell, Steve; Nash, John; Nielson, Anders; Regentz, Jim; Skaug, Hans; Zipkin, Elise

    2013-01-01

    1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.

  1. Probing Cytoskeletal Structures by Coupling Optical Superresolution and AFM Techniques for a Correlative Approach

    PubMed Central

    Chacko, Jenu Varghese; Zanacchi, Francesca Cella; Diaspro, Alberto

    2013-01-01

    In this article, we describe and show the application of some of the most advanced fluorescence superresolution techniques, STED-AFM and STORM-AFM microscopy, toward imaging of cytoskeletal structures such as microtubule filaments. Mechanical and structural properties can play a relevant role in the investigation of cytoskeletal structures of interest, such as microtubules, that provide support to the cell structure. In fact, mechanical properties such as the local stiffness and elasticity can be investigated by AFM force spectroscopy with tens-of-nanometers resolution. Force curves can be analyzed to obtain the local elasticity (with the Young's modulus calculated by fitting the force curves from every pixel of interest), and the combination with STED/STORM microscopy integrates the measurement with high specificity and yields superresolution structural information. This hybrid superresolution-AFM modality is a clear example of correlative multimodal microscopy. PMID:24027190

  2. Accuracy and efficiency of published film dosimetry techniques using a flat-bed scanner and EBT3 film.

    PubMed

    Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T

    2018-03-01

    Gafchromic EBT3 film is widely used for patient specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between the dose and red channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated, to identify whether these methods produce better results than the commonly-used non-linear, netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: The Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results when irradiated to the standard treatment doses (< 400 cGy), however none could be recommended as an efficient or accurate replacement for a common film analysis technique which uses transmission scanning, red colour channel analysis, netOD and a non-linear calibration curve for measuring doses up to 4000 cGy when using EBT3 film.
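
    A common form of this kind of calibration (an illustration of the general approach, not any specific published protocol) maps red-channel net optical density to dose as dose = a·netOD + b·netOD^n; the pixel values below are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def net_od(pv_exposed, pv_unexposed):
        """Net optical density from red-channel transmission pixel values."""
        return np.log10(pv_unexposed / pv_exposed)

    def calibration(nod, a, b, n):
        """Common non-linear dose-netOD form: dose = a*netOD + b*netOD**n."""
        return a * nod + b * nod**n

    # Invented calibration points: dose (cGy) and red-channel pixel values
    dose = np.array([0, 50, 100, 200, 400, 800, 1600, 3200, 4000], dtype=float)
    pv = np.array([42000, 39500, 37500, 34500, 30500,
                   25500, 20500, 16000, 15000], dtype=float)

    nod = net_od(pv, pv[0])
    popt, _ = curve_fit(calibration, nod[1:], dose[1:],
                        p0=[1000, 5000, 2.5], maxfev=20000)
    print("fitted a, b, n:", popt)
    ```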

  3. An Empirical Fitting Method for Type Ia Supernova Light Curves: A Case Study of SN 2011fe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, WeiKang; Filippenko, Alexei V., E-mail: zwk@astro.berkeley.edu

    We present a new empirical fitting method for the optical light curves of Type Ia supernovae (SNe Ia). We find that a variant broken-power-law function provides a good fit, with the simple assumption that the optical emission is approximately the blackbody emission of the expanding fireball. This function is mathematically analytic and is derived directly from the photospheric velocity evolution. When deriving the function, we assume that both the blackbody temperature and photospheric velocity are constant, but the final function is able to accommodate these changes during the fitting procedure. Applying it to the case study of SN 2011fe gives a surprisingly good fit that can describe the light curves from the first-light time to a few weeks after peak brightness, as well as over a large range of fluxes (∼5 mag, and even ∼7 mag in the g band). Since SNe Ia share similar light-curve shapes, this fitting method has the potential to fit most other SNe Ia and characterize their properties in large statistical samples such as those already gathered and in the near future as new facilities become available.
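
    A generic smoothly broken power law of the kind described can be fitted as below; the parameterization is a stand-in, not the paper's exact function, and the light-curve data are invented (a rise exponent near 2 corresponds to the expanding-fireball approximation mentioned).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def broken_power_law(t, A, tb, a1, a2, s):
        """Smoothly broken power law in time since first light.

        Rises as (t/tb)**a1, rolls over near t = tb, and decays with
        asymptotic slope a2; s sets the sharpness of the break. A generic
        stand-in, not the paper's exact parameterization.
        """
        x = t / tb
        return A * x**a1 * (1.0 + x**s) ** ((a2 - a1) / s)

    rng = np.random.default_rng(5)
    t = np.linspace(1.0, 40.0, 120)                     # days since first light
    flux = broken_power_law(t, 5.0, 17.0, 2.0, -2.5, 4.0)
    flux *= 1.0 + 0.03 * rng.standard_normal(t.size)    # invented 3% noise

    popt, _ = curve_fit(broken_power_law, t, flux,
                        p0=[4.0, 15.0, 2.0, -2.0, 3.0], maxfev=20000)
    print("fitted A, tb, a1, a2, s:", np.round(popt, 2))
    ```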

  4. A Survey of Xenon Ion Sputter Yield Data and Fits Relevant to Electric Propulsion Spacecraft Integration

    NASA Technical Reports Server (NTRS)

    Yim, John T.

    2017-01-01

    A survey of low energy xenon ion impact sputter yields was conducted to provide a more coherent baseline set of sputter yield data and accompanying fits for electric propulsion integration. Data uncertainties are discussed and different available curve fit formulas are assessed for their general suitability. A Bayesian parameter fitting approach is used with a Markov chain Monte Carlo method to provide estimates for the fitting parameters while characterizing the uncertainties for the resulting yield curves.
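
    A bare-bones version of such a Bayesian fit, using a hand-rolled Metropolis sampler and an assumed threshold power-law yield form (not necessarily any of the fit formulas assessed in the survey), with invented data:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def yield_model(E, q, mu, E_th):
        """Illustrative threshold power-law sputter yield (atoms/ion)."""
        return np.where(E > E_th, q * np.clip(E - E_th, 0, None) ** mu, 0.0)

    # Invented xenon-ion yield data generated from the model plus noise
    E = np.array([50, 100, 150, 200, 300, 500, 750, 1000], dtype=float)  # eV
    Y = yield_model(E, 0.004, 1.0, 40.0) * (1 + 0.05 * rng.standard_normal(E.size))
    sigma = 0.1 * Y + 0.02

    def log_posterior(p):
        q, mu, E_th = p
        if not (0 < q < 1 and 0 < mu < 3 and 0 < E_th < 100):  # flat priors
            return -np.inf
        r = (Y - yield_model(E, q, mu, E_th)) / sigma
        return -0.5 * np.sum(r**2)

    # Random-walk Metropolis sampler
    chain = np.empty((20000, 3))
    p = np.array([0.002, 1.2, 20.0])
    lp = log_posterior(p)
    step = np.array([0.0005, 0.05, 2.0])
    for i in range(chain.shape[0]):
        prop = p + step * rng.standard_normal(3)
        lp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:               # accept/reject
            p, lp = prop, lp_prop
        chain[i] = p

    burned = chain[5000:]                                      # discard burn-in
    print("posterior medians:", np.median(burned, axis=0))
    print("68% intervals:\n", np.percentile(burned, [16, 84], axis=0))
    ```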

  5. Using statistical correlation to compare geomagnetic data sets

    NASA Astrophysics Data System (ADS)

    Stanton, T.

    2009-04-01

    The major features of data curves are often matched, to a first order, by bump and wiggle matching to arrive at an offset between data sets. This poster describes a simple statistical correlation program that has proved useful during this stage by determining the optimal correlation between geomagnetic curves using a variety of fixed and floating windows. Its utility is suggested by the fact that it is simple to run, yet generates meaningful data comparisons, often when data noise precludes the obvious matching of curve features. Data sets can be scaled, smoothed, normalised and standardised, before all possible correlations are carried out between selected overlapping portions of each curve. Best-fit offset curves can then be displayed graphically. The program was used to cross-correlate directional and palaeointensity data from Holocene lake sediments (Stanton et al., submitted) and Holocene lava flows. Some example curve matches are shown, including some that illustrate the potential of this technique when examining particularly sparse data sets. Stanton, T., Snowball, I., Zillén, L. and Wastegård, S., submitted. Detecting potential errors in varve chronology and 14C ages using palaeosecular variation curves, lead pollution history and statistical correlation. Quaternary Geochronology.
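
    The underlying operation can be sketched as a correlation of two standardized series over a range of trial offsets, keeping the offset with the highest Pearson r; the records here are synthetic, and the program's scaling, smoothing and windowing options are not reproduced.

    ```python
    import numpy as np

    def best_offset(a, b, max_lag):
        """Return the lag of b relative to a giving the highest Pearson r."""
        best = (None, -np.inf)
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                x, y = a[lag:], b[:len(b) - lag]
            else:
                x, y = a[:lag], b[-lag:]
            n = min(len(x), len(y))
            if n < 10:
                continue
            r = np.corrcoef(x[:n], y[:n])[0, 1]
            if r > best[1]:
                best = (lag, r)
        return best

    # Two synthetic palaeomagnetic-style records offset by 37 samples
    rng = np.random.default_rng(7)
    signal = np.cumsum(rng.standard_normal(600))     # smooth wandering curve
    rec_a = signal[37:537] + 0.3 * rng.standard_normal(500)
    rec_b = signal[0:500] + 0.3 * rng.standard_normal(500)

    lag, r = best_offset(rec_a, rec_b, max_lag=80)
    print(f"best offset = {lag} samples, r = {r:.3f}")
    ```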

  6. Data reduction using cubic rational B-splines

    NASA Technical Reports Server (NTRS)

    Chou, Jin J.; Piegl, Les A.

    1992-01-01

    A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves including intersection or silhouette lines. The algorithm is based on the convex hull and the variation diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set and if it is impossible it subdivides the data set and reconsiders the subset. After accepting the subset the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a Bezier cubic segment. The algorithm uses this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy even in cases with large tolerances.
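
    A simplified sketch of the idea (least-squares fitting of a cubic Bezier segment with fixed endpoints, subdividing when the tolerance is exceeded); the published algorithm's convex-hull and variation-diminishing machinery, and the rational weights, are omitted.

    ```python
    import numpy as np

    def fit_cubic_bezier(pts):
        """Least-squares cubic Bezier through fixed endpoints.

        Uses chord-length parameterization and solves only for the two
        inner control points, since the endpoints are interpolated.
        """
        d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
        t = d / d[-1]
        b0, b1, b2, b3 = [(1 - t)**3, 3*(1 - t)**2*t, 3*(1 - t)*t**2, t**3]
        # Move the known endpoint terms to the right-hand side
        rhs = pts - np.outer(b0, pts[0]) - np.outer(b3, pts[-1])
        A = np.column_stack([b1, b2])
        ctrl, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        q = np.vstack([pts[0], ctrl[0], ctrl[1], pts[-1]])
        fit = (np.outer(b0, q[0]) + np.outer(b1, q[1])
               + np.outer(b2, q[2]) + np.outer(b3, q[3]))
        return q, np.max(np.linalg.norm(fit - pts, axis=1))

    def reduce_data(pts, tol):
        """Recursively subdivide until each run fits one Bezier segment."""
        q, err = fit_cubic_bezier(pts)
        if err <= tol or len(pts) <= 4:
            return [q]
        mid = len(pts) // 2
        return reduce_data(pts[:mid + 1], tol) + reduce_data(pts[mid:], tol)

    # Synthetic smooth-curve data
    s = np.linspace(0, 2 * np.pi, 200)
    pts = np.column_stack([s, np.sin(s)])
    segments = reduce_data(pts, tol=1e-3)
    print(f"{len(pts)} points reduced to {len(segments)} cubic segments")
    ```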

  7. Improving the Depth-Time Fit of Holocene Climate Proxy Measures by Increasing Coherence with a Reference Time-Series

    NASA Astrophysics Data System (ADS)

    Rahim, K. J.; Cumming, B. F.; Hallett, D. J.; Thomson, D. J.

    2007-12-01

    An accurate assessment of historical local Holocene data is important in making future climate predictions. Holocene climate is often obtained through proxy measures such as diatoms or pollen using radiocarbon dating. Wiggle Match Dating (WMD) uses an iterative least squares approach to tune a core with a large amount of 14C dates to the 14C calibration curve. This poster will present a new method of tuning a time series with when only a modest number of 14C dates are available. The method presented uses the multitaper spectral estimation, and it specifically makes use of a multitaper spectral coherence tuning technique. Holocene climate reconstructions are often based on a simple depth-time fit such as a linear interpolation, splines, or low order polynomials. Many of these models make use of only a small number of 14C dates, each of which is a point estimate with a significant variance. This technique attempts to tune the 14C dates to a reference series, such as tree rings, varves, or the radiocarbon calibration curve. The amount of 14C in the atmosphere is not constant, and a significant source of variance is solar activity. A decrease in solar activity coincides with an increase in cosmogenic isotope production, and an increase in cosmogenic isotope production coincides with a decrease in temperature. The method presented uses multitaper coherence estimates and adjusts the phase of the time series to line up significant line components with that of the reference series in attempt to obtain a better depth-time fit then the original model. Given recent concerns and demonstrations of the variation in estimated dates from radiocarbon labs, methods to confirm and tune the depth-time fit can aid climate reconstructions by improving and serving to confirm the accuracy of the underlying depth-time fit. Climate reconstructions can then be made on the improved depth-time fit. This poster presents a run though of this process using Chauvin Lake in the Canadian prairies and Mt. Barr Cirque Lake located in British Columbia as examples.

  8. The behavioral economics of drug self-administration: A review and new analytical approach for within-session procedures

    PubMed Central

    Bentzley, Brandon S.; Fender, Kimberly M.; Aston-Jones, Gary

    2012-01-01

    Rationale: Behavioral-economic demand curve analysis offers several useful measures of drug self-administration. Although generation of demand curves previously required multiple days, recent within-session procedures allow curve construction from a single 110-min cocaine self-administration session, making behavioral-economic analyses available to a broad range of self-administration experiments. However, a mathematical curve-fitting approach has not been reported for the within-session threshold procedure. Objectives: We review demand curve analysis in drug self-administration experiments and provide a quantitative method for fitting curves to single-session data that incorporates relative stability of brain drug concentration. Methods: Sprague-Dawley rats were trained to self-administer cocaine, and then tested with the threshold procedure in which the cocaine dose was sequentially decreased on a fixed ratio-1 schedule. Price points (responses/mg cocaine) outside of relatively stable brain cocaine concentrations were removed before curves were fit. Curve-fit accuracy was determined by the degree of correlation between graphical and calculated parameters for cocaine consumption at low price (Q0) and the price at which maximal responding occurred (Pmax). Results: Removing price points that occurred at relatively unstable brain cocaine concentrations generated precise estimates of Q0 and resulted in Pmax values with significantly closer agreement with graphical Pmax than conventional methods. Conclusion: The exponential demand equation can be fit to single-session data using the threshold procedure for cocaine self-administration. Removing data points that occur during relatively unstable brain cocaine concentrations resulted in more accurate estimates of demand curve slope than graphical methods, permitting a more comprehensive analysis of drug self-administration via a behavioral-economic framework. PMID:23086021
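
    The exponential demand equation referred to is commonly written (after Hursh and Silberberg) as log10 Q = log10 Q0 + k(e^(-α·Q0·C) - 1); a sketch of fitting it to invented single-session data, with Pmax located numerically, follows.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    K = 3.0   # demand-curve span in log10 units, fixed a priori here

    def exponential_demand(C, Q0, alpha):
        """Hursh-Silberberg exponential demand: log10 consumption vs price C."""
        return np.log10(Q0) + K * (np.exp(-alpha * Q0 * C) - 1.0)

    # Invented unit prices (responses/mg) and consumption (mg), after removing
    # price points collected at unstable brain cocaine concentrations
    price = np.array([3, 6, 12, 25, 50, 100, 200, 400], dtype=float)
    Q = np.array([1.00, 0.98, 0.92, 0.85, 0.70, 0.45, 0.20, 0.05])

    popt, _ = curve_fit(exponential_demand, price, np.log10(Q), p0=[1.0, 1e-3])
    Q0, alpha = popt

    # Pmax: the price at which responding (price x consumption) is maximal
    grid = np.logspace(0, 3, 500)
    output = grid * 10 ** exponential_demand(grid, Q0, alpha)
    Pmax = grid[np.argmax(output)]
    print(f"Q0 = {Q0:.2f} mg, alpha = {alpha:.2e}, Pmax ~ {Pmax:.0f} responses/mg")
    ```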

  9. Investigation of Light-Emitting Diode (LED) Point Light Source Color Visibility against Complex Multicolored Backgrounds

    DTIC Science & Technology

    2017-11-01

    [Fragments only: light sent from light-emitting diodes (LEDs) of 5 colors (green, red, white, amber, and blue); Experiment 1 involved controlled laboratory measurements. Appendix figure and table fragments refer to red and green LED calibration curves with quadratic curve fits and R^2 values, and to red and green LED calibration measurements.]

  10. Methods for the Precise Locating and Forming of Arrays of Curved Features into a Workpiece

    DOEpatents

    Gill, David Dennis; Keeler, Gordon A.; Serkland, Darwin K.; Mukherjee, Sayan D.

    2008-10-14

    Methods for manufacturing high-precision arrays of curved features (e.g. lenses) in the surface of a workpiece are described, utilizing orthogonal sets of inter-fitting locating grooves to mate a workpiece to a workpiece holder mounted on the spindle face of a rotating machine tool. The matching inter-fitting groove sets in the workpiece and the chuck allow the workpiece to be precisely and non-kinematically indexed to locations defined in two orthogonal directions perpendicular to the turning axis of the machine tool. At each location on the workpiece a curved feature can then be machined on-center to create arrays of curved features on the workpiece. The averaging effect of the corresponding sets of inter-fitting grooves provides for precise repeatability in determining the relative locations of the centers of each of the curved features in an array of curved features.

  11. Robotic partial nephrectomy - Evaluation of the impact of case mix on the procedural learning curve.

    PubMed

    Roman, A; Ahmed, K; Challacombe, B

    2016-05-01

    Although robotic partial nephrectomy (RPN) is an emerging technique for the management of small renal masses, the approach is technically demanding, and to date there are limited data on the nature and progression of its learning curve. The aim was to analyse the impact of case mix on the RPN learning curve and to model that curve. The records of the first 100 RPNs carried out at our institution by a single surgeon (B.C.) were analysed (June 2010-December 2013). Cases were split based on their Preoperative Aspects and Dimensions Used for an Anatomical (PADUA) score into the following groups: 6-7, 8-9 and >10. Using a split-group (20 patients in each group) and incremental analysis, the mean, the curve of best fit and R^2 values were calculated for each group. Of 100 patients (F: 28, M: 72), the mean age was 56.4 ± 11.9 years. The numbers of patients in the PADUA score groups 6-7, 8-9 and >10 were 61, 32 and 7 respectively. An increase in the incidence of more complex cases throughout the cohort was evident within the 8-9 group (2010: 1 case; 2013: 16 cases). The learning process did not significantly affect the proxies used to assess surgical proficiency in this study (operative time and warm ischaemia time). Case difficulty is an important parameter that should be considered when evaluating procedural learning curves, and there is no single well-fitting model that can be used to model the learning curve. With increasing experience, clinicians tend to operate on more difficult cases. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  12. Non-linear Multidimensional Optimization for use in Wire Scanner Fitting

    NASA Astrophysics Data System (ADS)

    Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; CASA and Accelerator Ops Collaboration

    2013-10-01

    To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle-swarm. In this Python-implemented approach, augmenting the locally-convergent NCG with one of the globally-convergent methods ensures the quality, robustness, and automation of curve-fitting. After comparing the methods, we establish that given an initial data-derived guess, each finds a solution with the same chi-square- a measurement of the agreement of the fit to the data. NCG is the fastest method, so it is the first to attempt data-fitting. The curve-fitting procedure escalates to one of the globally-convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method allows for the most optimal signal fit and can be easily applied to similar problems. Financial support from DoE, NSF, ODU, DoD, and Jefferson Lab.

  13. NLINEAR - NONLINEAR CURVE FITTING PROGRAM

    NASA Technical Reports Server (NTRS)

    Everhart, J. L.

    1994-01-01

    A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of the distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived, which is solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
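
    The core of the described algorithm can be sketched as a standard Gauss-Newton iteration: expand chi-square to quadratic order, keep only first-derivative terms of the model, and solve the resulting n simultaneous linear equations for the parameter update (a generic reconstruction, not NLINEAR's Fortran).

    ```python
    import numpy as np

    def gauss_newton(f, jac, p, x, y, w, iters=20):
        """Minimize chi^2 = sum w*(y - f(x, p))**2 via its quadratic expansion.

        Each step solves (J^T W J) dp = J^T W r, the simultaneous linear
        equations obtained from the truncated Taylor expansion of chi^2.
        """
        for _ in range(iters):
            r = y - f(x, p)
            J = jac(x, p)
            A = J.T @ (w[:, None] * J)
            g = J.T @ (w * r)
            p = p + np.linalg.solve(A, g)
        return p

    # Example fitting function and its analytic Jacobian
    def model(x, p):
        return p[0] * np.exp(-p[1] * x)

    def model_jac(x, p):
        e = np.exp(-p[1] * x)
        return np.column_stack([e, -p[0] * x * e])

    rng = np.random.default_rng(8)
    x = np.linspace(0, 5, 50)
    y = model(x, [2.0, 0.8]) + 0.02 * rng.standard_normal(50)
    w = np.full(50, 1 / 0.02**2)             # statistical weights

    print(gauss_newton(model, model_jac, np.array([1.0, 0.3]), x, y, w))
    ```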

  14. On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions.

    PubMed

    López, S; France, J; Odongo, N E; McBride, R A; Kebreab, E; AlZahal, O; McBride, B W; Dijkstra, J

    2015-04-01

    Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records corresponding to 122 first, 99 second, and 92 third parity individual lactation curves. The functions were fitted using nonlinear regression procedures, and their performance was assessed using goodness-of-fit statistics (coefficient of determination, residual mean squares, Akaike information criterion, and the correlation and concordance coefficients between observed and adjusted milk yields at several days in milk). Overall, all the growth functions evaluated showed an acceptable fit to the cumulative milk production curves, with the Richards equation ranking first (smallest Akaike information criterion) followed by the Morgan equation. Differences among the functions in their goodness-of-fit were enlarged when fitted to average curves by parity, where the sigmoidal functions with a variable point of inflection (Richards and Morgan) outperformed the other 4 equations. All the functions provided satisfactory predictions of milk yield (calculated from the first derivative of the functions) at different lactation stages, from early to late lactation. The Richards and Morgan equations provided the most accurate estimates of peak yield and total milk production per 305-d lactation, whereas the least accurate estimates were obtained with the logistic equation. In conclusion, classical growth functions (especially sigmoidal functions with a variable point of inflection) proved to be feasible alternatives to fit cumulative milk production curves of dairy cows, resulting in suitable statistical performance and accurate estimates of lactation traits. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
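
    As an illustration with one of the six functions, here is a Gompertz fit to invented cumulative-yield data, with peak daily yield taken from the curve's first derivative at its inflection point:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, a, b, c):
        """Gompertz growth function for cumulative yield."""
        return a * np.exp(-b * np.exp(-c * t))

    def daily_yield(t, a, b, c):
        """Milk yield at day t: first derivative of the cumulative curve."""
        return a * b * c * np.exp(-c * t) * np.exp(-b * np.exp(-c * t))

    # Invented cumulative milk production (kg) over a 305-d lactation
    t = np.arange(5, 306, 10, dtype=float)
    rng = np.random.default_rng(9)
    cum = gompertz(t, 9500, 2.4, 0.012) * (1 + 0.01 * rng.standard_normal(t.size))

    popt, _ = curve_fit(gompertz, t, cum, p0=[9000, 2.0, 0.01])
    a, b, c = popt
    t_peak = np.log(b) / c          # Gompertz inflection = peak daily yield
    print(f"305-d total ~ {gompertz(305, *popt):.0f} kg, "
          f"peak yield ~ {daily_yield(t_peak, *popt):.1f} kg/d at day {t_peak:.0f}")
    ```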

  15. Neutron Multiplicity: LANL W Covariance Matrix for Curve Fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, James G.

    2016-12-08

    In neutron multiplicity counting one may fit a curve by minimizing an objective function, χ²_n. The objective function includes the inverse of an n by n matrix of covariances, W. The inverse of the W matrix has a closed-form solution. In addition, W⁻¹ is a tri-diagonal matrix. The closed form and tridiagonal nature allow for a simpler expression of the objective function χ²_n. Minimization of this simpler expression will provide the optimal parameters for the fitted curve.

  16. Student Support for Research in Hierarchical Control and Trajectory Planning

    NASA Technical Reports Server (NTRS)

    Martin, Clyde F.

    1999-01-01

    Generally, classical polynomial splines tend to exhibit unwanted undulations. In this work, we discuss a technique, based on control principles, for eliminating these undulations and increasing the smoothness properties of the spline interpolants. We give a generalization of the classical polynomial splines and show that this generalization is, in fact, a family of splines that covers the broad spectrum of polynomial, trigonometric and exponential splines. A particular element in this family is determined by the appropriate control data. It is shown that this technique is easy to implement. Several numerical and curve-fitting examples are given to illustrate the advantages of this technique over the classical approach. Finally, we discuss the convergence properties of the interpolant.

  17. Digital Model of Railway Electric Traction Lines

    NASA Astrophysics Data System (ADS)

    Garg, Rachana; Mahajan, Priya; Kumar, Parmod

    2017-08-01

    The characteristic impedance and propagation constant define the behavior of signal propagation over transmission lines. A digital model for railway traction lines, which includes the railway tracks, is developed using a curve-fitting technique in MATLAB. The sensitivity of this model with respect to frequency has been computed, and the digital sensitivity values are compared with the analog sensitivity values. The developed model is useful for digital protection, integrated operation, control and planning of the system.

  18. A New Catalog of Contact Binary Stars from ROTSE-I Sky Patrols

    NASA Astrophysics Data System (ADS)

    Gettel, S. J.; McKay, T. A.; Geske, M. T.

    2005-05-01

    Over 65,000 variable stars have been detected in the data from the ROTSE-I Sky Patrols. Using period-color and light curve selection techniques, about 5000 objects have been identified as contact binaries. This selection is tested for completeness against EW objects in the GCVS. By utilizing infrared color data from 2MASS, we fit a period-color-luminosity relation to these stars and estimate their distances.

  19. Pattern of Change in Prolonged Exposure and Cognitive-Processing Therapy for Female Rape Victims With Posttraumatic Stress Disorder

    PubMed Central

    Nishith, Pallavi; Resick, Patricia A.; Griffin, Michael G.

    2010-01-01

    Curve estimation techniques were used to identify the pattern of therapeutic change in female rape victims with posttraumatic stress disorder (PTSD). Within-session data on the Posttraumatic Stress Disorder Symptom Scale were obtained, in alternate therapy sessions, on 171 women. The final sample of treatment completers included 54 prolonged exposure (PE) and 54 cognitive-processing therapy (CPT) completers. For both PE and CPT, a quadratic function provided the best fit for the total PTSD, reexperiencing, and arousal scores. However, a difference in the line of best fit was observed for the avoidance symptoms. Although a quadratic function still provided a better fit for the PE avoidance, a linear function was more parsimonious in explaining the CPT avoidance variance. Implications of the findings are discussed. PMID:12182271

  20. Biological growth functions describe published site index curves for Lake States timber species.

    Treesearch

    Allen L. Lundgren; William A. Dolid

    1970-01-01

    Two biological growth functions, an exponential-monomolecular function and a simple monomolecular function, have been fit to published site index curves for 11 Lake States tree species: red, jack, and white pine, balsam fir, white and black spruce, tamarack, white-cedar, aspen, red oak, and paper birch. Both functions closely fit all published curves except those for...

  1. How exponential are FREDs?

    NASA Astrophysics Data System (ADS)

    Schaefer, Bradley E.; Dyson, Samuel E.

    1996-08-01

    A common gamma-ray burst light-curve shape is the "FRED," or "fast-rise exponential-decay." But how exponential is the tail? Are they merely decaying with some smoothly decreasing decline rate, or is the functional form an exponential to within the uncertainties? If the shape really is an exponential, then it would be reasonable to assign some physically significant time scale to the burst. That is, there would have to be some specific mechanism that produces the characteristic decay profile. So if an exponential is found, then we will know that the decay light-curve profile is governed by one mechanism (at least for simple FREDs) instead of by complex/multiple mechanisms. As such, a specific number amenable to theory can be derived for each FRED. We report on the fitting of exponentials (and two other shapes) to the tails of ten bright BATSE bursts. The BATSE trigger numbers are 105, 257, 451, 907, 1406, 1578, 1883, 1885, 1989, and 2193. Our technique was to perform a least-squares fit to the tail from some time after peak until the light curve approaches background. We find that most FREDs are not exponentials, although a few come close. But since the other candidate shapes come close just as often, we conclude that the FREDs are misnamed.

  2. Learning curves in highly skilled chess players: a test of the generality of the power law of practice.

    PubMed

    Howard, Robert W

    2014-09-01

    The power law of practice holds that a power function best interrelates skill performance and amount of practice. However, the law's validity and generality are moot. Some researchers argue that it is an artifact of averaging individual exponential curves while others question whether the law generalizes to complex skills and to performance measures other than response time. The present study tested the power law's generality to development over many years of a very complex cognitive skill, chess playing, with 387 skilled participants, most of whom were grandmasters. A power or logarithmic function best fit grouped data but individuals showed much variability. An exponential function usually was the worst fit to individual data. Groups differing in chess talent were compared and a power function best fit the group curve for the more talented players while a quadratic function best fit that for the less talented. After extreme amounts of practice, a logarithmic function best fit grouped data but a quadratic function best fit most individual curves. Individual variability is great and the power law or an exponential law are not the best descriptions of individual chess skill development. Copyright © 2014 Elsevier B.V. All rights reserved.
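
    The model comparison at the heart of such studies reduces to fitting the candidate shapes and comparing an information criterion; a sketch with invented practice data (an error-like performance measure, so all three candidate curves decline):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Candidate learning-curve shapes and plausible starting values
    models = {
        "power":       (lambda t, a, b, c: a * t**(-b) + c,        [800, 0.5, 1500]),
        "exponential": (lambda t, a, b, c: a * np.exp(-b * t) + c, [800, 0.3, 1500]),
        "quadratic":   (lambda t, a, b, c: a * t**2 + b * t + c,   [1.0, -40.0, 2300]),
    }

    # Invented skill data: an error-like measure shrinking with practice years
    t = np.arange(1, 21, dtype=float)
    rng = np.random.default_rng(10)
    y = 900 * t**-0.6 + 1500 + 15 * rng.standard_normal(t.size)

    n = t.size
    for name, (f, p0) in models.items():
        popt, _ = curve_fit(f, t, y, p0=p0, maxfev=20000)
        sse = np.sum((y - f(t, *popt))**2)
        aic = n * np.log(sse / n) + 2 * 3        # least-squares AIC, k = 3
        print(f"{name:12s} AIC = {aic:7.1f}")
    ```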

  3. Global search in photoelectron diffraction structure determination using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Viana, M. L.; Díez Muiño, R.; Soares, E. A.; Van Hove, M. A.; de Carvalho, V. E.

    2007-11-01

    Photoelectron diffraction (PED) is an experimental technique widely used to perform structural determinations of solid surfaces. Similarly to low-energy electron diffraction (LEED), structural determination by PED requires a fitting procedure between the experimental intensities and theoretical results obtained through simulations. Multiple scattering has been shown to be an effective approach for making such simulations. The quality of the fit can be quantified through the so-called R-factor. Therefore, the fitting procedure is, indeed, an R-factor minimization problem. However, the topography of the R-factor as a function of the structural and non-structural surface parameters to be determined is complex, and the task of finding the global minimum becomes tough, particularly for complex structures in which many parameters have to be adjusted. In this work we investigate the applicability of the genetic algorithm (GA) global optimization method to this problem. The GA is based on the evolution of species, and makes use of concepts such as crossover, elitism and mutation to perform the search. We show results of its application in the structural determination of three different systems: the Cu(111) surface through the use of energy-scanned experimental curves; the Ag(110)-c(2 × 2)-Sb system, in which a theory-theory fit was performed; and the Ag(111) surface for which angle-scanned experimental curves were used. We conclude that the GA is a highly efficient method to search for global minima in the optimization of the parameters that best fit the experimental photoelectron diffraction intensities to the theoretical ones.
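
    Below is a bare-bones real-coded GA with crossover, mutation and elitism, minimizing a stand-in "R-factor" between a reference curve and a two-parameter trial curve; the objective is invented, and real structure searches use far richer encodings and operators.

    ```python
    import numpy as np

    rng = np.random.default_rng(13)

    # Stand-in "experimental" curve and a two-parameter trial model
    E = np.linspace(0, 10, 200)
    I_exp = np.cos(1.7 * E + 0.6) ** 2 * np.exp(-0.1 * E)

    def r_factor(p):
        """Misfit between trial and reference intensity curves (0 = perfect)."""
        I_th = np.cos(p[0] * E + p[1]) ** 2 * np.exp(-0.1 * E)
        return np.sum((I_exp - I_th) ** 2) / np.sum(I_exp ** 2)

    lo, hi = np.array([0.5, 0.0]), np.array([3.0, 2 * np.pi])
    pop = rng.uniform(lo, hi, size=(60, 2))       # random initial population

    for gen in range(200):
        fitness = np.array([r_factor(p) for p in pop])
        order = np.argsort(fitness)
        elite = pop[order[:6]]                    # elitism: keep the best
        parents = pop[order[:30]]                 # breed from the top half
        idx = rng.integers(0, 30, size=(54, 2))
        w = rng.uniform(size=(54, 1))
        children = w * parents[idx[:, 0]] + (1 - w) * parents[idx[:, 1]]  # crossover
        mutate = rng.uniform(size=(54, 2)) < 0.1
        children = np.where(mutate, rng.uniform(lo, hi, (54, 2)), children)  # mutation
        pop = np.vstack([elite, children])

    best = pop[np.argmin([r_factor(p) for p in pop])]
    print("best parameters:", best, "R-factor:", r_factor(best))
    ```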

  4. Uranium, radium and thorium in soils with high-resolution gamma spectroscopy, MCNP-generated efficiencies, and VRF non-linear full-spectrum nuclide shape fitting

    NASA Astrophysics Data System (ADS)

    Metzger, Robert; Riper, Kenneth Van; Lasche, George

    2017-09-01

    A new method for analysis of uranium and radium in soils by gamma spectroscopy has been developed using VRF ("Visual RobFit") which, unlike traditional peak-search techniques, fits full-spectrum nuclide shapes with non-linear least-squares minimization of the chi-squared statistic. Gamma efficiency curves were developed for a 500 mL Marinelli beaker geometry as a function of soil density using MCNP. Collected spectra were then analyzed using the MCNP-generated efficiency curves and VRF to deconvolute the 90 keV peak complex of uranium and obtain 238U and 235U activities. 226Ra activity was determined either from the radon daughters if the equilibrium status is known, or directly from the deconvoluted 186 keV line. 228Ra values were determined from the 228Ac daughter activity. The method was validated by analysis of radium, thorium and uranium soil standards and by inter-comparison with other methods for radium in soils. The method allows for a rapid determination of whether a sample has been impacted by a man-made activity by comparison of the uranium and radium concentrations to those that would be expected from a natural equilibrium state.

  5. Direct Simulation of Magnetic Resonance Relaxation Rates and Line Shapes from Molecular Trajectories

    PubMed Central

    Rangel, David P.; Baveye, Philippe C.; Robinson, Bruce H.

    2012-01-01

    We simulate spin relaxation processes, which may be measured by either continuous wave or pulsed magnetic resonance techniques, using trajectory-based simulation methodologies. The spin–lattice relaxation rates are extracted numerically from the relaxation simulations. The rates obtained from the numerical fitting of the relaxation curves are compared to those obtained by direct simulation from the relaxation Bloch–Wangsness–Abragam–Redfield theory (BWART). We have restricted our study to anisotropic rigid-body rotational processes, and to the chemical shift anisotropy (CSA) and single spin–spin dipolar (END) coupling mechanisms. Examples using electron paramagnetic resonance (EPR) nitroxide and nuclear magnetic resonance (NMR) deuterium quadrupolar systems are provided. The objective is to compare those rates obtained by numerical simulations with the rates obtained by BWART. There is excellent agreement between the simulated and BWART rates for a Hamiltonian describing a single spin (an electron) interacting with the bath through the chemical shift anisotropy (CSA) mechanism undergoing anisotropic rotational diffusion. In contrast, when the Hamiltonian contains both the chemical shift anisotropy (CSA) and the spin–spin dipolar (END) mechanisms, the decay rate of a single exponential fit of the simulated spin–lattice relaxation rate is up to a factor of 0.2 smaller than that predicted by BWART. When the relaxation curves are fit to a double exponential, the slow and fast rates extracted from the decay curves bound the BWART prediction. An extended BWART theory, in the literature, includes the need for multiple relaxation rates and indicates that the multiexponential decay is due to the combined effects of direct and cross-relaxation mechanisms. PMID:22540276
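
    The single- versus double-exponential comparison the authors describe can be sketched generically; the component rates below are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def one_exp(t, a, r):
        return a * np.exp(-r * t)

    def two_exp(t, a1, r1, a2, r2):
        return a1 * np.exp(-r1 * t) + a2 * np.exp(-r2 * t)

    # Invented relaxation curve containing fast and slow components
    t = np.linspace(0, 10, 300)
    rng = np.random.default_rng(14)
    decay = 0.6 * np.exp(-1.8 * t) + 0.4 * np.exp(-0.4 * t)
    decay += 0.005 * rng.standard_normal(t.size)

    p1, _ = curve_fit(one_exp, t, decay, p0=[1.0, 1.0])
    p2, _ = curve_fit(two_exp, t, decay, p0=[0.5, 2.0, 0.5, 0.5])

    # The single-exponential rate falls between the two underlying rates,
    # while the double-exponential fit recovers rates that bound it
    print("single rate:", p1[1])
    print("double rates:", sorted([p2[1], p2[3]]))
    ```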

  6. The water retention curve and relative permeability for gas production from hydrate-bearing sediments: pore-network model simulation

    NASA Astrophysics Data System (ADS)

    Mahabadi, Nariman; Dai, Sheng; Seol, Yongkoo; Sup Yun, Tae; Jang, Jaewon

    2016-08-01

    The water retention curve and relative permeability are critical to predict gas and water production from hydrate-bearing sediments. However, values for key parameters that characterize gas and water flows during hydrate dissociation have not been identified due to experimental challenges. This study utilizes the combined techniques of micro-focus X-ray computed tomography (CT) and pore-network model simulation to identify proper values for those key parameters, such as gas entry pressure, residual water saturation, and curve fitting values. Hydrates with various saturation and morphology are realized in the pore-network that was extracted from micron-resolution CT images of sediments recovered from the hydrate deposit at the Mallik site, and then the processes of gas invasion, hydrate dissociation, gas expansion, and gas and water permeability are simulated. Results show that greater hydrate saturation in sediments leads to higher gas entry pressure, higher residual water saturation, and a steeper water retention curve. An increase in hydrate saturation decreases gas permeability but has marginal effects on water permeability in sediments with uniformly distributed hydrate. Hydrate morphology has more significant impacts than hydrate saturation on relative permeability. Sediments with heterogeneously distributed hydrate tend to result in lower residual water saturation and higher gas and water permeability. In this sense, the Brooks-Corey model, which uses two fitting parameters individually for gas and water permeability, properly captures the effect of hydrate saturation and morphology on gas and water flows in hydrate-bearing sediments.
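
    The Brooks-Corey relations mentioned can be written down directly; a small sketch evaluating the retention curve and Burdine-form relative permeabilities from an assumed gas entry pressure, residual water saturation and pore-size index:

    ```python
    import numpy as np

    def brooks_corey(Sw, Pe=10.0, Swr=0.25, lam=0.8):
        """Brooks-Corey retention and relative permeability (Burdine forms).

        Pe  : gas entry pressure (kPa)     -- assumed value
        Swr : residual water saturation    -- assumed value
        lam : pore-size distribution index -- assumed value
        """
        Se = np.clip((Sw - Swr) / (1.0 - Swr), 1e-9, 1.0)  # effective saturation
        Pc = Pe * Se ** (-1.0 / lam)                        # retention curve
        krw = Se ** ((2.0 + 3.0 * lam) / lam)               # water rel. perm.
        krg = (1.0 - Se) ** 2 * (1.0 - Se ** ((2.0 + lam) / lam))
        return Pc, krw, krg

    Sw = np.linspace(0.3, 1.0, 8)
    for s, (pc, kw, kg) in zip(Sw, zip(*brooks_corey(Sw))):
        print(f"Sw={s:.2f}  Pc={pc:7.1f} kPa  krw={kw:.3f}  krg={kg:.3f}")
    ```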

  7. Dust in the small Magellanic Cloud. 2: Dust models from interstellar polarization and extinction data

    NASA Technical Reports Server (NTRS)

    Rodrigues, C. V.; Magalhaes, A. M.; Coyne, G. V.

    1995-01-01

    We study the dust in the Small Magellanic Cloud using our polarization and extinction data (Paper 1) and existing dust models. The data suggest that the monotonic SMC extinction curve is related to values of λ_max, the wavelength of maximum polarization, which are on average smaller than the mean for the Galaxy. On the other hand, AZV 456, a star with an extinction curve similar to that for the Galaxy, shows a value of λ_max similar to the mean for the Galaxy. We discuss simultaneous dust-model fits to extinction and polarization. Fits to the wavelength-dependent polarization data are possible for stars with small λ_max. In general, they imply dust size distributions which are narrower and have smaller mean sizes compared to typical size distributions for the Galaxy. However, stars with λ_max close to the Galactic norm, which also have a narrower polarization curve, cannot be fit adequately. This holds true for all of the dust models considered. The best fits to the extinction curves are obtained with a power-law size distribution by assuming that the cylindrical and spherical silicate grains have a volume distribution which is continuous from the smaller spheres to the larger cylinders. The size distribution for the cylinders is taken from the fit to the polarization. The 'typical', monotonic SMC extinction curve can be fit well with graphite and silicate grains if a small fraction of the SMC carbon is locked up in the grains. However, amorphous carbon and silicate grains also fit the data well. AZV 456, which has an extinction curve similar to that for the Galaxy, has a UV bump which is too blue to be fit by spherical graphite grains.

  8. Research on Standard and Automatic Judgment of Press-fit Curve of Locomotive Wheel-set Based on AAR Standard

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Xiao, Jun; Gao, Dong Jun; Zong, Shu Yu; Li, Zhu

    2018-03-01

    In the production of Association of American Railroads (AAR) locomotive wheel-sets, the press-fit curve is the most important basis for assessing the reliability of wheel-set assembly. In the past, most production enterprises relied mainly on manual inspection to judge assembly quality, and misjudgments occurred. For this reason, research on the standard was carried out, and automatic judgment of the press-fit curve was analysed and designed, so as to provide guidance for locomotive wheel-set production based on the AAR standard.

  9. Machine-learned Identification of RR Lyrae Stars from Sparse, Multi-band Data: The PS1 Sample

    NASA Astrophysics Data System (ADS)

    Sesar, Branimir; Hernitschek, Nina; Mitrović, Sandra; Ivezić, Željko; Rix, Hans-Walter; Cohen, Judith G.; Bernard, Edouard J.; Grebel, Eva K.; Martin, Nicolas F.; Schlafly, Edward F.; Burgett, William S.; Draper, Peter W.; Flewelling, Heather; Kaiser, Nick; Kudritzki, Rolf P.; Magnier, Eugene A.; Metcalfe, Nigel; Tonry, John L.; Waters, Christopher

    2017-05-01

    RR Lyrae stars may be the best practical tracers of Galactic halo (sub-)structure and kinematics. The PanSTARRS1 (PS1) 3π survey offers multi-band, multi-epoch, precise photometry across much of the sky, but a robust identification of RR Lyrae stars in this data set poses a challenge, given PS1's sparse, asynchronous multi-band light curves (≲ 12 epochs in each of five bands, taken over a 4.5 year period). We present a novel template fitting technique that uses well-defined and physically motivated multi-band light curves of RR Lyrae stars, and demonstrate that we get accurate period estimates, precise to 2 s in > 80 % of cases. We augment these light-curve fits with other features from photometric time-series and provide them to progressively more detailed machine-learned classification models. From these models, we are able to select the widest (three-fourths of the sky) and deepest (reaching 120 kpc) sample of RR Lyrae stars to date. The PS1 sample of ˜45,000 RRab stars is pure (90%) and complete (80% at 80 kpc) at high galactic latitudes. It also provides distances that are precise to 3%, measured with newly derived period-luminosity relations for optical/near-infrared PS1 bands. With the addition of proper motions from Gaia and radial velocity measurements from multi-object spectroscopic surveys, we expect the PS1 sample of RR Lyrae stars to become the premier source for studying the structure, kinematics, and the gravitational potential of the Galactic halo. The techniques presented in this study should translate well to other sparse, multi-band data sets, such as those produced by the Dark Energy Survey and the upcoming Large Synoptic Survey Telescope Galactic plane sub-survey.
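
    As a toy illustration of period recovery from sparse photometry, the sketch below scans a period grid and scores a sinusoidal stand-in template by weighted linear least squares; the study's actual method fits empirical, band-specific RRab templates, so everything here (template, grid bounds, arrays) is an assumption.

      # Sketch: brute-force period grid search with a one-harmonic template.
      import numpy as np

      def chi2_at_period(t, mag, err, period):
          phase = (t / period) % 1.0
          # linear least squares for mean level plus one Fourier component
          A = np.column_stack([np.ones_like(phase),
                               np.sin(2 * np.pi * phase),
                               np.cos(2 * np.pi * phase)])
          w = 1.0 / err
          coef, *_ = np.linalg.lstsq(A * w[:, None], mag * w, rcond=None)
          return float(np.sum(((mag - A @ coef) * w) ** 2))

      def best_period(t, mag, err, pmin=0.2, pmax=1.0, n=20000):
          trials = np.linspace(pmin, pmax, n)
          chi2 = [chi2_at_period(t, mag, err, p) for p in trials]
          return trials[int(np.argmin(chi2))]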

  10. Statistical aspects of modeling the labor curve.

    PubMed

    Zhang, Jun; Troendle, James; Grantz, Katherine L; Reddy, Uma M

    2015-06-01

    In a recent review by Cohen and Friedman, several statistical questions on modeling labor curves were raised. This article illustrates that asking data to fit a preconceived model or letting a sufficiently flexible model fit observed data is the main difference in principles of statistical modeling between the original Friedman curve and our average labor curve. An evidence-based approach to construct a labor curve and establish normal values should allow the statistical model to fit observed data. In addition, the presence of the deceleration phase in the active phase of an average labor curve was questioned. Forcing a deceleration phase to be part of the labor curve may have artificially raised the speed of progression in the active phase with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor because of variations in individual labor pattern and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and contemporary obstetric population into account. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. On the reduction of occultation light curves. [stellar occultations by planets

    NASA Technical Reports Server (NTRS)

    Wasserman, L.; Veverka, J.

    1973-01-01

    The two basic methods of reducing occultation light curves - curve fitting and inversion - are reviewed and compared. It is shown that the curve fitting methods have severe problems of nonuniqueness. In addition, in the case of occultation curves dominated by spikes, it is not clear that such solutions are meaningful. The inversion method does not suffer from these drawbacks. Methods of deriving temperature profiles from refractivity profiles are then examined. It is shown that, although the temperature profiles are sensitive to small errors in the refractivity profile, accurate temperatures can be obtained, particularly at the deeper levels of the atmosphere. The ambiguities that arise when the occultation curve straddles the turbopause are briefly discussed.

  12. JMOSFET: A MOSFET parameter extractor with geometry-dependent terms

    NASA Technical Reports Server (NTRS)

    Buehler, M. G.; Moore, B. T.

    1985-01-01

    Parameters of the metal-oxide-silicon field-effect transistors (MOSFETs) included on the Combined Release and Radiation Effects Satellite (CRRES) test chips need to be extracted with a method that is simple but comprehensive enough for wafer acceptance, and sufficiently accurate for use in integrated circuit design. A set of MOSFET parameter extraction procedures is developed that is directly linked to the MOSFET model equations and that facilitates the use of simple, direct curve-fitting techniques. In addition, the major physical effects that govern MOSFET operation in the linear and saturation regions are included for devices fabricated in 1.2 to 3.0 μm CMOS technology. The fitting procedures were designed to establish single values for such parameters as threshold voltage and transconductance and to provide for slope matching between the linear and saturation regions of the MOSFET output current-voltage curves. Four different sizes of transistors that cover a rectangular region of the channel length-width plane are analyzed.
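
    One concrete instance of such direct curve fitting: deep in the linear region Id ≈ beta (Vgs - VT) Vds, so a straight-line fit of Id versus Vgs gives the threshold voltage and gain factor in closed form. This is a generic textbook extraction under that assumption, not the JMOSFET procedure itself; the bias points and currents are invented.

      # Sketch: linear-region threshold-voltage extraction by line fitting.
      import numpy as np

      vgs = np.array([1.0, 1.5, 2.0, 2.5, 3.0])      # gate sweep (V)
      id_ = np.array([2.1, 6.0, 10.2, 14.1, 18.0])   # drain current (uA)
      vds = 0.1                                      # small drain bias (V)

      slope, intercept = np.polyfit(vgs, id_, 1)
      vt = -intercept / slope                        # threshold voltage (V)
      beta = slope / vds                             # gain factor (uA/V^2)
      print(f"VT = {vt:.2f} V, beta = {beta:.1f} uA/V^2")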

  13. An Apparatus for Sizing Particulate Matter in Solid Rocket Motors.

    DTIC Science & Technology

    1984-06-01

    accurately measured. A curve for sizing polydispersions was presented which was used by Cramer and Hansen [Refs. 2, 12]. Two-phase flow losses are often... [The remainder of this record is table-of-contents residue listing figures: curve fits and two-angle method results for 5, 10, and 20 micron polystyrene calibrations.]

  14. Activation cross-sections of proton induced reactions on vanadium in the 37-65 MeV energy range

    NASA Astrophysics Data System (ADS)

    Ditrói, F.; Tárkányi, F.; Takács, S.; Hermanne, A.

    2016-08-01

    Experimental excitation functions for proton induced reactions on natural vanadium in the 37-65 MeV energy range were measured with the activation method using a stacked foil irradiation technique. By using high-resolution gamma spectrometry, cross-section data for the production of 51,48Cr, 48V, 48,47,46,44m,44g,43Sc and 43,42K were determined. Comparisons with the earlier published data are presented, and results predicted by different theoretical codes (EMPIRE and TALYS) are included. Thick target yields were calculated from a fit to our experimental excitation curves and compared with the earlier experimental yield data. Depth distribution curves to be used for thin layer activation (TLA) are also presented.

  15. Potential energy curve of the D(3)Π1u state in rubidium dimer from spectroscopic measurements

    NASA Astrophysics Data System (ADS)

    Jastrzebski, W.; Kowalczyk, P.

    2016-12-01

    The D(3)Π1u ← X(1)Σg+ band system in the 85Rb2 and 85Rb87Rb molecules has been investigated by the polarization labelling spectroscopy technique. A total of 2266 lines in this system were measured with an accuracy better than 0.1 cm-1. The resulting energies of the excited state levels (v = 0 - 50, J = 25 - 173) have been fitted to a Dunham polynomial expansion and directly to a numerical potential, providing the first experimental determination of the potential energy curve for the D(3)Π1u state. Good agreement is found between the experimental potential and those obtained from the most recent theoretical calculations.
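
    The Dunham fit mentioned above is linear in its coefficients, since E(v, J) = sum over (k, l) of Y(k,l) (v + 1/2)^k [J(J+1)]^l, so it reduces to one least-squares solve. A minimal sketch with an assumed (k, l) index set and synthetic term energies, not the measured line list:

      # Sketch: linear least-squares fit of a Dunham polynomial expansion.
      import numpy as np

      def dunham_fit(v, J, E, kmax=2, lmax=1):
          terms = [(k, l) for k in range(kmax + 1) for l in range(lmax + 1)]
          A = np.column_stack([(v + 0.5) ** k * (J * (J + 1.0)) ** l
                               for k, l in terms])
          Y, *_ = np.linalg.lstsq(A, E, rcond=None)
          return dict(zip(terms, Y))

      # synthetic demo: E = Te + we*(v + 1/2) + Bv*J(J+1)
      v = np.repeat(np.arange(6), 10).astype(float)
      J = np.tile(np.arange(25, 35), 6).astype(float)
      E = 14000.0 + 110.0 * (v + 0.5) + 0.02 * J * (J + 1)
      Y = dunham_fit(v, J, E)
      print(round(Y[(1, 0)], 3), round(Y[(0, 1)], 5))  # ~110.0, ~0.02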

  16. Applications of data compression techniques in modal analysis for on-orbit system identification

    NASA Technical Reports Server (NTRS)

    Carlin, Robert A.; Saggio, Frank; Garcia, Ephrahim

    1992-01-01

    Data compression techniques have been investigated for use with modal analysis applications. A redundancy-reduction algorithm was used to compress frequency response functions (FRFs) in order to reduce the amount of disk space necessary to store the data and/or save time in processing it. Tests were performed for both single- and multiple-degree-of-freedom (SDOF and MDOF, respectively) systems, with varying amounts of noise. Analysis was done on both the compressed and uncompressed FRFs using an SDOF Nyquist curve fit as well as the Eigensystem Realization Algorithm. Significant savings were realized with minimal errors incurred by the compression process.

  17. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction

    NASA Technical Reports Server (NTRS)

    Jarosch, H. S.

    1982-01-01

    A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.

  18. Use of radar QPE for the derivation of Intensity-Duration-Frequency curves in a range of climatic regimes

    NASA Astrophysics Data System (ADS)

    Marra, Francesco; Morin, Efrat

    2015-12-01

    Intensity-Duration-Frequency (IDF) curves are widely used in flood risk management because they provide an easy link between the characteristics of a rainfall event and the probability of its occurrence. Weather radars provide distributed rainfall estimates with high spatial and temporal resolutions and overcome the scarce representativeness of point-based rainfall for regions characterized by large gradients in rainfall climatology. This work explores the use of radar quantitative precipitation estimation (QPE) for the identification of IDF curves over a region with steep climatic transitions (Israel) using a unique radar data record (23 yr) and combined physical and empirical adjustment of the radar data. IDF relationships were derived by fitting a generalized extreme value distribution to the annual maximum series for durations of 20 min, 1 h and 4 h. Arid, semi-arid and Mediterranean climates were explored using 14 study cases. IDF curves derived from the study rain gauges were compared to those derived from radar and from nearby rain gauges characterized by similar climatology, taking into account the uncertainty linked with the fitting technique. Radar annual maxima and IDF curves were generally overestimated but in 70% of the cases (60% for a 100 yr return period), they lay within the rain gauge IDF confidence intervals. Overestimation tended to increase with return period, and this effect was enhanced in arid climates. This was mainly associated with radar estimation uncertainty, even if other effects, such as rain gauge temporal resolution, cannot be neglected. Climatological classification remained meaningful for the analysis of rainfall extremes and radar was able to discern climatology from rainfall frequency analysis.
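
    The central fitting step reduces to estimating a generalized extreme value distribution from an annual-maximum series and reading off return levels. The sketch below uses an invented 23-value series, not the study's radar or gauge data.

      # Sketch: GEV fit to annual maxima and return-level computation.
      import numpy as np
      from scipy.stats import genextreme

      # hypothetical annual-maximum 1 h intensities (mm/h), 23 years
      annual_max = np.array([12.0, 18.5, 9.3, 25.1, 14.7, 11.2, 30.4, 16.0,
                             21.3, 13.5, 19.8, 10.1, 27.6, 15.2, 12.9, 22.4,
                             17.1, 8.8, 24.0, 20.2, 13.0, 26.3, 18.9])

      shape, loc, scale = genextreme.fit(annual_max)
      for T in (2, 10, 25, 100):                    # return period (years)
          level = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
          print(f"{T:>3d}-yr 1 h intensity: {level:.1f} mm/h")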

  19. Space-Based Observation Technology

    DTIC Science & Technology

    2000-10-01

    Conan, V. Michau, and S. Salem. Regularized multiframe myopic deconvolution from wavefront sensing. In Propagation through the Atmosphere III...specified false alarm rate PFA. Proceeding with curve fitting, one obtains a best-fit curve "10.1y14.2 - 0.2" as the detector for the target

  20. Reference Curves for Field Tests of Musculoskeletal Fitness in U.S. Children and Adolescents: The 2012 NHANES National Youth Fitness Survey.

    PubMed

    Laurson, Kelly R; Saint-Maurice, Pedro F; Welk, Gregory J; Eisenmann, Joey C

    2017-08-01

    Laurson, KR, Saint-Maurice, PF, Welk, GJ, and Eisenmann, JC. Reference curves for field tests of musculoskeletal fitness in U.S. children and adolescents: The 2012 NHANES National Youth Fitness Survey. J Strength Cond Res 31(8): 2075-2082, 2017-The purpose of the study was to describe current levels of musculoskeletal fitness (MSF) in U.S. youth by creating nationally representative age-specific and sex-specific growth curves for handgrip strength (including relative and allometrically scaled handgrip), modified pull-ups, and the plank test. Participants in the National Youth Fitness Survey (n = 1,453) were tested on MSF, aerobic capacity (via submaximal treadmill test), and body composition (body mass index [BMI], waist circumference, and skinfolds). Using LMS regression, age-specific and sex-specific smoothed percentile curves of MSF were created and existing percentiles were used to assign age-specific and sex-specific z-scores for aerobic capacity and body composition. Correlation matrices were created to assess the relationships between z-scores on MSF, aerobic capacity, and body composition. At younger ages (3-10 years), boys scored higher than girls for handgrip strength and modified pull-ups, but not for the plank. By ages 13-15, differences between the boys and girls curves were more pronounced, with boys scoring higher on all tests. Correlations between tests of MSF and aerobic capacity were positive and low-to-moderate in strength. Correlations between tests of MSF and body composition were negative, excluding absolute handgrip strength, which was inversely related to other MSF tests and aerobic capacity but positively associated with body composition. The growth curves herein can be used as normative reference values or a starting point for creating health-related criterion reference standards for these tests. Comparisons with prior national surveys of physical fitness indicate that some components of MSF have likely decreased in the United States over time.
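
    The LMS machinery behind such reference curves is compact: with age- and sex-specific L (skewness), M (median) and S (coefficient of variation), the value at z-score z is M(1 + L*S*z)^(1/L), or M*exp(S*z) when L = 0. A sketch with invented L/M/S values rather than the survey's estimates:

      # Sketch: percentile values from LMS parameters at one age.
      import numpy as np
      from scipy.stats import norm

      def lms_value(z, L, M, S):
          # inverse LMS transform: measurement at a given z-score
          if L == 0:
              return M * np.exp(S * z)
          return M * (1.0 + L * S * z) ** (1.0 / L)

      L, M, S = -0.3, 22.5, 0.11    # e.g. handgrip strength (kg), one age/sex
      for p in (3, 10, 25, 50, 75, 90, 97):
          z = norm.ppf(p / 100.0)
          print(f"P{p:02d} = {lms_value(z, L, M, S):.1f} kg")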

  1. Mild angle early onset idiopathic scoliosis children avoid progression under FITS method (Functional Individual Therapy of Scoliosis).

    PubMed

    Białek, Marianna

    2015-05-01

    Physiotherapy for stabilization of the idiopathic scoliosis angle in growing children remains controversial. Notably, little data on the effectiveness of physiotherapy in children with Early Onset Idiopathic Scoliosis (EOIS) have been published. The aim of this study was to check the results of FITS physiotherapy in a group of children with EOIS. The charts of patients archived in a prospectively collected database were retrospectively reviewed. The inclusion criteria were: diagnosis of EOIS based on spine radiography, age below 10 years, both girls and boys, Cobb angle between 11° and 30°, Risser zero, FITS therapy, no other treatment (bracing), and a follow-up of at least 2 years from the initiation of treatment. The criteria were as follows: for curve progression, a Cobb angle increase of 6° or more; for curve stabilization, a Cobb angle within 5° of the initial radiograph; for curve correction, a Cobb angle decrease of 6° or more at the final follow-up radiograph. There were 41 children with EOIS, 36 girls and 5 boys, mean age 7.7±1.3 years (range 4 to 9 years), who started FITS therapy. The curve pattern was single thoracic (5 children), single thoracolumbar (22 children) or double thoracic/thoracolumbar (14 children), for a total of 55 structural curvatures. The minimum follow-up was 2 years after initiation of the FITS treatment, the maximum was 16 years (mean 4.8 years). At follow-up the mean age was 12.5±3.4 years. Out of 41 children, 10 had passed the pubertal growth spurt at the final follow-up and 31 were still immature and continued FITS therapy. Out of 41 children, 27 improved, 13 were stable, and one progressed. Out of 55 structural curves, 32 improved, 22 were stable and one progressed. For the 55 structural curves, the Cobb angle decreased significantly from 18.0°±5.4° at first assessment to 12.5°±6.3° at last evaluation (p<0.0001, paired t-test). The angle of trunk rotation decreased significantly from 4.7°±2.9° to 3.2°±2.5° at last evaluation (p<0.0001, paired t-test). FITS physiotherapy was effective in preventing curve progression in children with EOIS. Final postpubertal follow-up data are needed.

  2. [Growth standardized values and curves based on weight, length/height and head circumference for Chinese children under 7 years of age].

    PubMed

    Li, Hui

    2009-03-01

    To construct growth standardized data and curves based on weight, length/height and head circumference for Chinese children under 7 years of age. Random cluster sampling was used. The fourth national growth survey of children under 7 years in nine cities of China (Beijing, Harbin, Xi'an, Shanghai, Nanjing, Wuhan, Fuzhou, Guangzhou and Kunming) was performed in 2005, and from this survey, data of 69 760 healthy urban boys and girls were used to set up the database for weight-for-age, height-for-age (length was measured for children under 3 years) and head circumference-for-age. The quality of the anthropometric data was ensured by rigorous methods of data collection and standardized procedures across study sites. The LMS method, based on the Box-Cox normal transformation and a cubic splines smoothing technique, was chosen for fitting the raw data according to the study design and data features, and standardized values of any percentile and standard deviation were obtained from the estimated L, M and S parameters. Length-for-age and height-for-age standards were constructed by fitting the same model, but the final curves reflect the 0.7 cm average difference between these two measurements. A set of systematic diagnostic tools was used to detect possible biases in estimated percentiles or standard deviation curves, including the chi-square test, which was used as a reference to evaluate goodness of fit. The 3rd, 10th, 25th, 50th, 75th, 90th and 97th smoothed percentiles and the -3, -2, -1, 0, +1, +2, +3 SD values and curves of weight-for-age, length/height-for-age and head circumference-for-age for boys and girls aged 0-7 years were derived, respectively. The Chinese child growth charts were slightly higher than the WHO child growth standards. The newly established growth charts represent the growth level of healthy, well-nourished Chinese children. The sample size was very large and national, the data were of high quality, and the smoothing method is internationally accepted. The new Chinese growth charts are recommended as the Chinese child growth standards for use in China in the 21st century.

  3. A Fast Smoothing Algorithm for Post-Processing of Surface Reflectance Spectra Retrieved from Airborne Imaging Spectrometer Data

    PubMed Central

    Gao, Bo-Cai; Liu, Ming

    2013-01-01

    Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which contains minor artifacts in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain the spectrally smoothed surface reflectance spectra. Results from analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented. PMID:24129022
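
    A minimal sketch of the gain-curve idea, assuming a cubic smoothing spline as the smoother and a scene stored as a (pixels x bands) array; the paper's moving-filter implementation differs in detail.

      # Sketch: derive a common gain curve from one reference spectrum
      # and apply it multiplicatively to every spectrum in the scene.
      import numpy as np
      from scipy.interpolate import UnivariateSpline

      def derive_gain(wavelengths, reference_spectrum, s=1e-4):
          spline = UnivariateSpline(wavelengths, reference_spectrum, k=3, s=s)
          return spline(wavelengths) / reference_spectrum

      def apply_gain(scene, gain):
          # scene: (n_pixels, n_bands); one multiplicative gain per band
          return scene * gain[None, :]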

  4. Model-Free Estimation of Tuning Curves and Their Attentional Modulation, Based on Sparse and Noisy Data.

    PubMed

    Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian

    2016-01-01

    Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model, e.g. a Gaussian or another bell-shaped curve, to the measured responses to a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to determine reliably the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well-fitted, that the best model generally varies between neurons and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without the need of fitting any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features such as preferred stimulus direction or attentional gain modulations which are in agreement with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus-specific. Based on these proofs-of-concept, we conclude that our data-driven methods can reliably extract relevant tuning information from neuronal recordings, including cells whose seemingly haphazard response curves defy conventional fitting approaches.

  5. Model-Free Estimation of Tuning Curves and Their Attentional Modulation, Based on Sparse and Noisy Data

    PubMed Central

    Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian

    2016-01-01

    Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model, e.g. a Gaussian or another bell-shaped curve, to the measured responses to a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to determine reliably the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well-fitted, that the best model generally varies between neurons and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without the need of fitting any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features such as preferred stimulus direction or attentional gain modulations which are in agreement with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus-specific. Based on these proofs-of-concept, we conclude that our data-driven methods can reliably extract relevant tuning information from neuronal recordings, including cells whose seemingly haphazard response curves defy conventional fitting approaches. PMID:26785378

  6. Dust in the Small Magellanic Cloud

    NASA Technical Reports Server (NTRS)

    Rodrigues, C. V.; Coyne, G. V.; Magalhaes, A. M.

    1995-01-01

    We discuss simultaneous dust model fits to our extinction and polarization data for the Small Magellanic Cloud (SMC) using existing dust models. Dust model fits to the wavelength dependent polarization are possible for stars with small lambda(sub max). They generally imply size distributions which are narrower and have smaller average sizes compared to those in the Galaxy. The best fits for the extinction curves are obtained with a power law size distribution. The typical, monotonic SMC extinction curve can be well fit with graphite and silicate grains if a small fraction of the SMC carbon is locked up in the grains. Amorphous carbon and silicate grains also fit the data well.

  7. Patching C2n Time Series Data Holes using Principal Component Analysis

    DTIC Science & Technology

    2007-01-01

    characteristic local scale exponent, regardless of dilation of the length examined. THE HURST PARAMETER: There are a slew of methods [13] available to...fractal dimension D0, which characterises the roughness of the data, and the Hurst parameter, H, which is a measure of the long-range dependence (LRD)...estimate H. For simplicity, we have opted to use the well-known Hurst-Mandelbrot R/S technique, which is also the most elementary. The fitting curve

  8. Universal approach to analysis of cavitation and liquid-impingement erosion data

    NASA Technical Reports Server (NTRS)

    Rao, P. V.; Young, S. G.

    1982-01-01

    Cavitation erosion experimental data were analyzed using normalization and curve-fitting techniques. Data were taken from experiments on several materials tested in both a rotating disk device and a magnetostriction apparatus. Cumulative average volume loss rate and time data were normalized relative to the peak erosion rate and the time to peak erosion rate, respectively. From this process a universal approach was derived that can include data on specific materials from different test devices for liquid impingement and cavitation erosion studies.

  9. Phase analysis for three-dimensional surface reconstruction of apples using structured-illumination reflectance imaging

    NASA Astrophysics Data System (ADS)

    Lu, Yuzhen; Lu, Renfu

    2017-05-01

    Three-dimensional (3-D) shape information is valuable for fruit quality evaluation. This study was aimed at developing phase analysis techniques for reconstruction of the 3-D surface of fruit from the pattern images acquired by a structured-illumination reflectance imaging (SIRI) system. Phase-shifted sinusoidal patterns, distorted by the fruit geometry, were acquired and processed through phase demodulation, phase unwrapping and other post-processing procedures to obtain phase difference maps relative to the phase of a reference plane. The phase maps were then transformed into height profiles and 3-D shapes in a world coordinate system based on phase-to-height and in-plane calibrations. A reference plane-based approach, coupled with a curve fitting technique using polynomials of order 3 or higher, was utilized for phase-to-height calibrations, which achieved superior accuracies with root-mean-square errors (RMSEs) of 0.027-0.033 mm for a height measurement range of 0-91 mm. The 3rd-order polynomial curve fitting technique was further tested on two reference blocks with known heights, resulting in relative errors of 3.75% and 4.16%. In-plane calibrations were performed by solving a linear system formed by a number of control points in a calibration object, which yielded an RMSE of 0.311 mm. Tests of the calibrated system for reconstructing the surface of apple samples showed that surface concavities (i.e., stem/calyx regions) could be easily discriminated from bruises in the phase difference maps, reconstructed height profiles and the 3-D shape of apples. This study has laid a foundation for using SIRI for 3-D shape measurement, and thus expanded the capability of the technique for quality evaluation of horticultural products. Further research is needed to utilize the phase analysis techniques for stem/calyx detection of apples, and to optimize the phase demodulation and unwrapping algorithms for faster and more reliable detection.

  10. A bivariate contaminated binormal model for robust fitting of proper ROC curves to a pair of correlated, possibly degenerate, ROC datasets.

    PubMed

    Zhai, Xuetong; Chakraborty, Dev P

    2017-06-01

    The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets, possibly degenerate, with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago and utilizing a bivariate extension to the binormal model implemented in CORROC2 software, yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit-variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit-variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition (contributing two peaks), and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language; it yields parameter estimates, the covariance matrix of the parameters, and other statistics. A limited simulation validation of the method was performed. CORCBM and CORROC2 were applied to two datasets containing nine readers each contributing paired interpretations. CORCBM successfully fitted the data for all readers, whereas CORROC2 failed to fit a degenerate dataset. All fits were visually reasonable. All CORCBM fits were proper, whereas all CORROC2 fits were improper. CORCBM and CORROC2 were in agreement (a) in declaring only one of the nine readers as having significantly different performances in the two modalities; (b) in estimating higher correlations for diseased cases than for nondiseased ones; and (c) in finding that the intermodality correlation estimates for nondiseased cases were consistent between the two methods. All CORCBM fits yielded higher area under the curve (AUC) than the CORROC2 fits, consistent with the fact that a proper ROC model like CORCBM is based on a likelihood-ratio-equivalent decision variable and consequently yields higher performance than the binormal-model-based CORROC2. The method gave satisfactory fits to four simulated datasets. CORCBM is a robust method for fitting paired ROC datasets, always yielding proper ROC curves, and able to fit degenerate datasets. © 2017 American Association of Physicists in Medicine.
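
    The contaminated binormal building block can be written down compactly. The sketch below gives the log-likelihood of the unpaired, single-rating CBM only (N(0,1) for nondiseased cases; an α-weighted mixture of N(μ,1) and N(0,1) for diseased cases); the bivariate CORCBM adds the correlations and four-peak mixture described above.

      # Sketch: log-likelihood of the unpaired contaminated binormal model.
      import numpy as np
      from scipy.stats import norm

      def cbm_loglik(alpha, mu, x_nondiseased, x_diseased):
          ll_nd = norm.logpdf(x_nondiseased).sum()        # N(0,1) peak
          mix = (alpha * norm.pdf(x_diseased, loc=mu)     # disease visible
                 + (1.0 - alpha) * norm.pdf(x_diseased))  # disease not visible
          return ll_nd + np.log(mix).sum()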

  11. Milky Way Kinematics. II. A Uniform Inner Galaxy H I Terminal Velocity Curve

    NASA Astrophysics Data System (ADS)

    McClure-Griffiths, N. M.; Dickey, John M.

    2016-11-01

    Using atomic hydrogen (H I) data from the VLA Galactic Plane Survey, we measure the H I terminal velocity as a function of longitude for the first quadrant of the Milky Way. We use these data, together with our previous work on the fourth Galactic quadrant, to produce a densely sampled, uniformly measured rotation curve of the northern and southern Milky Way for 3 kpc < R < 8 kpc. We determine a new joint rotation curve fit for the first and fourth quadrants, which is consistent with the fit we published in McClure-Griffiths & Dickey and can be used for estimating kinematic distances interior to the solar circle. Structure in the rotation curves is now exquisitely well defined, showing significant velocity structure on lengths of ~200 pc, which is much greater than the spatial resolution of the rotation curve. Furthermore, the shape of the rotation curves for the first and fourth quadrants, even after subtraction of a circular rotation fit, shows a surprising degree of correlation with a roughly sinusoidal pattern between 4.2 kpc < R < 7 kpc.

  12. Performance of the score systems Acute Physiology and Chronic Health Evaluation II and III at an interdisciplinary intensive care unit, after customization

    PubMed Central

    Markgraf, Rainer; Deutschinoff, Gerd; Pientka, Ludger; Scholten, Theo; Lorenz, Cristoph

    2001-01-01

    Background: Mortality predictions calculated using scoring scales are often not accurate in populations other than those in which the scales were developed because of differences in case-mix. The present study investigates the effect of first-level customization, using a logistic regression technique, on discrimination and calibration of the Acute Physiology and Chronic Health Evaluation (APACHE) II and III scales. Method: Probabilities of hospital death for patients were estimated by applying APACHE II and III and comparing these with observed outcomes. Using the split sample technique, a customized model to predict outcome was developed by logistic regression. The overall goodness-of-fit of the original and the customized models was assessed. Results: Of 3383 consecutive intensive care unit (ICU) admissions over 3 years, 2795 patients could be analyzed, and were split randomly into development and validation samples. The discriminative powers of APACHE II and III were unchanged by customization (areas under the receiver operating characteristic [ROC] curve 0.82 and 0.85, respectively). Hosmer-Lemeshow goodness-of-fit tests showed good calibration for APACHE II, but insufficient calibration for APACHE III. Customization improved calibration for both models, with a good fit for APACHE III as well. However, fit was different for various subgroups. Conclusions: The overall goodness-of-fit of APACHE III mortality prediction was improved significantly by customization, but uniformity of fit in different subgroups was not achieved. Therefore, application of the customized model provides no advantage, because differences in case-mix still limit comparisons of quality of care. PMID:11178223

  13. Two Aspects of the Simplex Model: Goodness of Fit to Linear Growth Curve Structures and the Analysis of Mean Trends.

    ERIC Educational Resources Information Center

    Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.

    1994-01-01

    Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)

  14. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population.

    PubMed

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka

    2016-01-01

    Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.
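
    The model comparison described above amounts to fitting three candidate functions to each boundary curve and comparing coefficients of determination. A sketch on synthetic boundary-frequency data (the CES-D counts themselves are not reproduced here):

      # Sketch: compare linear, quadratic and exponential fits by R^2.
      import numpy as np
      from scipy.optimize import curve_fit

      def r2(y, yhat):
          return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

      rng = np.random.default_rng(1)
      x = np.arange(1, 21, dtype=float)                # total symptom score
      y = 2000.0 * np.exp(-0.35 * x) + rng.random(20)  # synthetic frequencies

      fits = {"linear": np.polyval(np.polyfit(x, y, 1), x),
              "quadratic": np.polyval(np.polyfit(x, y, 2), x)}
      (a, b), _ = curve_fit(lambda x, a, b: a * np.exp(b * x), x, y,
                            p0=(y[0], -0.3))
      fits["exponential"] = a * np.exp(b * x)
      for name, yhat in fits.items():
          print(f"{name:>11s}: R^2 = {r2(y, yhat):.4f}")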

  15. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population

    PubMed Central

    Kawasaki, Yohei; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka

    2016-01-01

    Background Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. Discussion The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern. PMID:27761346

  16. Fitting milk production curves through nonlinear mixed models.

    PubMed

    Piccardi, Monica; Macchiavelli, Raúl; Funes, Ariel Capitaine; Bó, Gabriel A; Balzarini, Mónica

    2017-05-01

    The aim of this work was to fit and compare three non-linear models (Wood, MilkBot and diphasic) for lactation curves using two approaches: with and without a cow random effect. Knowing the behaviour of lactation curves is critical for decision-making on a dairy farm. Knowledge of how milk production progresses along each lactation is necessary not only at the mean population level (dairy farm) but also at the individual level (cow-lactation). The fits were made on data from a group of dairy farms with high production and reproduction performance, for first and third lactations in cool seasons. A total of 2167 complete lactations were involved, of which 984 were first lactations and the remainder third lactations (19 382 milk yield tests). PROC NLMIXED in SAS was used to perform the fits and estimate the model parameters. The diphasic model proved to be computationally complex and barely practical. Regarding the classical Wood and MilkBot models, although the information criteria suggest selecting MilkBot, the differences in the estimated production indicators did not show a significant improvement. The Wood model was found to be a good option for fitting the expected value of lactation curves. Furthermore, all three models fitted better when the subject (cow) random effect, which is related to the magnitude of production, was considered. The random effect improved the predictive potential of the models, but it did not have a significant effect on the production indicators derived from the lactation curves, such as milk yield and days in milk to peak.
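
    For concreteness, the classical Wood curve y(t) = a t^b exp(-c t) can be fitted to a single lactation by nonlinear least squares, with peak yield at t = b/c days in milk. The sketch below uses invented test-day data and omits the cow-level random effect that the study fits with PROC NLMIXED.

      # Sketch: Wood lactation-curve fit for one cow-lactation.
      import numpy as np
      from scipy.optimize import curve_fit

      def wood(t, a, b, c):
          return a * t ** b * np.exp(-c * t)

      dim = np.array([10, 30, 60, 90, 120, 150, 180, 210, 240, 270], float)
      milk = np.array([24.0, 31.5, 33.8, 32.0, 30.1, 27.9, 25.6, 23.2,
                       21.0, 19.1])                   # test-day yields (kg)

      (a, b, c), _ = curve_fit(wood, dim, milk, p0=(15.0, 0.2, 0.005))
      print(f"a={a:.1f}, b={b:.3f}, c={c:.4f}, peak at {b / c:.0f} DIM")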

  17. Motion patterns in acupuncture needle manipulation.

    PubMed

    Seo, Yoonjeong; Lee, In-Seon; Jung, Won-Mo; Ryu, Ho-Sun; Lim, Jinwoong; Ryu, Yeon-Hee; Kang, Jung-Won; Chae, Younbyoung

    2014-10-01

    In clinical practice, acupuncture manipulation is highly individualised for each practitioner. Before we establish a standard for acupuncture manipulation, it is important to understand completely the manifestations of acupuncture manipulation in the actual clinic. To examine motion patterns during acupuncture manipulation, we generated a fitted model of practitioners' motion patterns and evaluated their consistencies in acupuncture manipulation. Using a motion sensor, we obtained real-time motion data from eight experienced practitioners while they conducted acupuncture manipulation using their own techniques. We calculated the average amplitude and duration of a sampled motion unit for each practitioner and, after normalisation, we generated a true regression curve of motion patterns for each practitioner using a generalised additive mixed modelling (GAMM). We observed significant differences in rotation amplitude and duration in motion samples among practitioners. GAMM showed marked variations in average regression curves of motion patterns among practitioners but there was strong consistency in motion parameters for individual practitioners. The fitted regression model showed that the true regression curve accounted for an average of 50.2% of variance in the motion pattern for each practitioner. Our findings suggest that there is great inter-individual variability between practitioners, but remarkable intra-individual consistency within each practitioner. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  18. Determining Tension-Compression Nonlinear Mechanical Properties of Articular Cartilage from Indentation Testing.

    PubMed

    Chen, Xingyu; Zhou, Yilu; Wang, Liyun; Santare, Michael H; Wan, Leo Q; Lu, X Lucas

    2016-04-01

    The indentation test is widely used to determine the in situ biomechanical properties of articular cartilage. The mechanical parameters estimated from the test depend on the constitutive model adopted to analyze the data. Similar to most connective tissues, the solid matrix of cartilage displays different mechanical properties under tension and compression, termed tension-compression nonlinearity (TCN). In this study, cartilage was modeled as a porous elastic material with either a conewise linear elastic matrix with cubic symmetry or a solid matrix reinforced by a continuous fiber distribution. Both models are commonly used to describe the TCN of cartilage. The roles of each mechanical property in determining the indentation response of cartilage were identified by finite element simulation. Under constant loading, the equilibrium deformation of cartilage is mainly dependent on the compressive modulus, while the initial transient creep behavior is largely regulated by the tensile stiffness. More importantly, altering the permeability does not change the shape of the indentation creep curves, but introduces a parallel shift along the horizontal direction on a logarithmic time scale. Based on these findings, a highly efficient curve-fitting algorithm was designed, which can uniquely determine the three major mechanical properties of cartilage (compressive modulus, tensile modulus, and permeability) from a single indentation test. The new technique was tested on adult bovine knee cartilage and compared with results from the classic biphasic linear elastic curve-fitting program.

  19. Evaluation of fiber Bragg grating sensor interrogation using InGaAs linear detector arrays and Gaussian approximation on embedded hardware.

    PubMed

    Kumar, Saurabh; Amrutur, Bharadwaj; Asokan, Sundarrajan

    2018-02-01

    Fiber Bragg Grating (FBG) sensors have become popular for applications related to structural health monitoring, biomedical engineering, and robotics. However, for successful large scale adoption, FBG interrogation systems are as important as sensor characteristics. Apart from accuracy, the required number of FBG sensors per fiber and the distance between the device in which the sensors are used and the interrogation system also influence the selection of the interrogation technique. For several measurement devices developed for applications in biomedical engineering and robotics, only a few sensors per fiber are required and the device is close to the interrogation system. For these applications, interrogation systems based on InGaAs linear detector arrays provide a good choice. However, their resolution is dependent on the algorithms used for curve fitting. In this work, a detailed analysis of the choice of algorithm using the Gaussian approximation for the FBG spectrum and the number of pixels used for curve fitting on the errors is provided. The points where the maximum errors occur have been identified. All comparisons for wavelength shift detection have been made against another interrogation system based on the tunable swept laser. It has been shown that maximum errors occur when the wavelength shift is such that one new pixel is included for curve fitting. It has also been shown that an algorithm with lower computation cost compared to the more popular methods using iterative non-linear least squares estimation can be used without leading to the loss of accuracy. The algorithm has been implemented on embedded hardware, and a speed-up of approximately six times has been observed.
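
    One common non-iterative route (shown here as an illustration; the paper's exact algorithm may differ) exploits the fact that the log of a Gaussian is a parabola: a quadratic fit to the log-intensities of a few pixels around the peak yields the Bragg wavelength in closed form.

      # Sketch: closed-form Gaussian peak location via a log-parabola fit.
      import numpy as np

      def bragg_wavelength(pixel_wl, intensity, n_pix=5):
          i0 = int(np.argmax(intensity))
          lo, hi = i0 - n_pix // 2, i0 + n_pix // 2 + 1
          x, y = pixel_wl[lo:hi], np.log(intensity[lo:hi])
          c2, c1, _ = np.polyfit(x, y, 2)
          return -c1 / (2.0 * c2)       # vertex of the fitted parabola

      wl = np.linspace(1549.0, 1551.0, 21)            # pixel wavelengths (nm)
      spec = np.exp(-(wl - 1550.123) ** 2 / (2 * 0.15 ** 2)) + 1e-6
      print(f"estimated Bragg wavelength: {bragg_wavelength(wl, spec):.3f} nm")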

  20. Evaluation of fiber Bragg grating sensor interrogation using InGaAs linear detector arrays and Gaussian approximation on embedded hardware

    NASA Astrophysics Data System (ADS)

    Kumar, Saurabh; Amrutur, Bharadwaj; Asokan, Sundarrajan

    2018-02-01

    Fiber Bragg Grating (FBG) sensors have become popular for applications related to structural health monitoring, biomedical engineering, and robotics. However, for successful large scale adoption, FBG interrogation systems are as important as sensor characteristics. Apart from accuracy, the required number of FBG sensors per fiber and the distance between the device in which the sensors are used and the interrogation system also influence the selection of the interrogation technique. For several measurement devices developed for applications in biomedical engineering and robotics, only a few sensors per fiber are required and the device is close to the interrogation system. For these applications, interrogation systems based on InGaAs linear detector arrays provide a good choice. However, their resolution is dependent on the algorithms used for curve fitting. In this work, a detailed analysis of the choice of algorithm using the Gaussian approximation for the FBG spectrum and the number of pixels used for curve fitting on the errors is provided. The points where the maximum errors occur have been identified. All comparisons for wavelength shift detection have been made against another interrogation system based on the tunable swept laser. It has been shown that maximum errors occur when the wavelength shift is such that one new pixel is included for curve fitting. It has also been shown that an algorithm with lower computation cost compared to the more popular methods using iterative non-linear least squares estimation can be used without leading to the loss of accuracy. The algorithm has been implemented on embedded hardware, and a speed-up of approximately six times has been observed.

  1. Constraints On the Emission Geometries and Spin Evolution Of Gamma-Ray Millisecond Pulsars

    NASA Technical Reports Server (NTRS)

    Johnson, T. J.; Venter, C.; Harding, A. K.; Guillemot, L.; Smith, D. A.; Kramer, M.; Celik, O.; den Hartog, P. R.; Ferrara, E. C.; Hou, X.

    2014-01-01

    Millisecond pulsars (MSPs) are a growing class of gamma-ray emitters. Pulsed gamma-ray signals have been detected from more than 40 MSPs with the Fermi Large Area Telescope (LAT). The wider radio beams and more compact magnetospheres of MSPs enable studies of emission geometries over a broader range of phase space than non-recycled radio-loud gamma-ray pulsars. We have modeled the gamma-ray light curves of 40 LAT-detected MSPs using geometric emission models assuming a vacuum retarded-dipole magnetic field. We modeled the radio profiles using a single-altitude hollow-cone beam, with a core component when indicated by polarimetry; however, for MSPs with gamma-ray and radio light curve peaks occurring at nearly the same rotational phase, we assume that the radio emission is co-located with the gamma rays and caustic in nature. The best-fit parameters and confidence intervals are determined using a maximum likelihood technique. We divide the light curves into three model classes, with gamma-ray peaks trailing (Class I), aligned (Class II), or leading (Class III) the radio peaks. Outer gap and slot gap (two-pole caustic) models best fit roughly equal numbers of Class I and II light curves, while Class III are exclusively fit with pair-starved polar cap models. Distinguishing between the model classes based on typical derived parameters is difficult. We explore the evolution of the magnetic inclination angle with period and spin-down power, finding possible correlations. While the presence of significant off-peak emission can often be used as a discriminator between outer gap and slot gap models, a hybrid model may be needed.

  2. The distribution of mass for spiral galaxies in clusters and in the field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forbes, D.A.; Whitmore, B.C.

    1989-04-01

    A comparison is made between the mass distributions of spiral galaxies in clusters and in the field using Burstein's mass-type methodology. Both the H-alpha emission-line rotation curves and more extended H I rotation curves are used. The fitting technique for determining mass types used by Burstein and coworkers has been replaced by an objective chi-squared method. Mass types are shown to be a function of both the Hubble type and luminosity, contrary to earlier results. The present data show a difference in the distribution of mass types for spiral galaxies in the field and in clusters, in the sense that mass type I galaxies, where the inner and outer velocity gradients are similar, are generally found in the field rather than in clusters. This can be understood in terms of the results of Whitmore, Forbes, and Rubin (1988), who find that the rotation curves of galaxies in the central region of clusters are generally falling, while the outer galaxies in a cluster and field galaxies tend to have flat or rising rotation curves. 15 refs.

  3. Classification of resistance to passive motion using minimum probability of error criterion.

    PubMed

    Chan, H C; Manry, M T; Kondraske, G V

    1987-01-01

    Neurologists diagnose many muscular and nerve disorders by classifying the resistance to passive motion of patients' limbs. Over the past several years, a computer-based instrument has been developed for automated measurement and parameterization of this resistance. In the device, a voluntarily relaxed lower extremity is moved at constant velocity by a motorized driver. The torque exerted on the extremity by the machine is sampled, along with the angle of the extremity. In this paper a computerized technique is described for classifying a patient's condition as 'Normal' or 'Parkinson disease' (rigidity) from the torque versus angle curve for the knee joint. A Legendre polynomial, fit to the curve, is used to calculate a set of eight normally distributed features of the curve. The minimum probability of error approach is used to classify the curve as being from a normal or Parkinson disease patient. Data collected from 44 different subjects were processed and the results were compared with an independent physician's subjective assessment of rigidity. There is agreement in better than 95% of the cases when all of the features are used.
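
    A sketch of the pipeline just described: project the torque-angle curve onto Legendre polynomials to obtain eight features, then apply a two-class Gaussian minimum-probability-of-error rule. The class means, covariances and priors are placeholders that would be estimated from labelled training curves.

      # Sketch: Legendre features plus a minimum-error Gaussian classifier.
      import numpy as np
      from numpy.polynomial import legendre
      from scipy.stats import multivariate_normal

      def features(angle, torque, degree=7):
          # map the angle range onto [-1, 1], keep 8 Legendre coefficients
          x = 2.0 * (angle - angle.min()) / np.ptp(angle) - 1.0
          return legendre.legfit(x, torque, degree)

      def classify(f, mean_n, cov_n, mean_p, cov_p, prior_n=0.5):
          ll_n = multivariate_normal.logpdf(f, mean_n, cov_n) + np.log(prior_n)
          ll_p = multivariate_normal.logpdf(f, mean_p, cov_p) + np.log(1.0 - prior_n)
          return "Normal" if ll_n > ll_p else "Parkinson disease"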

  4. CONFIRMATION OF HOT JUPITER KEPLER-41b VIA PHASE CURVE ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quintana, Elisa V.; Rowe, Jason F.; Caldwell, Douglas A.

    We present high precision photometry of Kepler-41, a giant planet in a 1.86 day orbit around a G6V star that was recently confirmed through radial velocity measurements. We have developed a new method to confirm giant planets solely from the photometric light curve, and we apply this method herein to Kepler-41 to establish the validity of this technique. We generate a full phase photometric model by including the primary and secondary transits, ellipsoidal variations, Doppler beaming, and reflected/emitted light from the planet. Third light contamination scenarios that can mimic a planetary transit signal are simulated by injecting a full range of dilution values into the model, and we re-fit each diluted light curve model to the light curve. The resulting constraints on the maximum occultation depth and stellar density, combined with stellar evolution models, rule out stellar blends and provide a measurement of the planet's mass, size, and temperature. We expect about two dozen Kepler giant planets can be confirmed via this method.

  5. Characteristic overpressure-impulse-distance curves for vapour cloud explosions using the TNO Multi-Energy model.

    PubMed

    Díaz Alonso, Fernando; González Ferradás, Enrique; Sánchez Pérez, Juan Francisco; Miñana Aznar, Agustín; Ruiz Gimeno, José; Martínez Alonso, Jesús

    2006-09-21

    A number of models have been proposed to calculate overpressure and impulse from accidental industrial explosions. When the blast is produced by ignition of a vapour cloud, the TNO Multi-Energy model is widely used. From the curves given by this model, data are fitted to obtain equations showing the relationship between overpressure, impulse and distance. These equations, referred to herein as characteristic curves, can be fitted by means of power equations, which depend on explosion energy and charge strength. Characteristic curves allow the determination of overpressure and impulse at each distance.
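
    The characteristic-curve idea reduces to fitting a power law P = a * d^b to points read off the Multi-Energy curves for a given explosion energy and charge strength. A minimal sketch, using illustrative values rather than actual TNO data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Illustrative (distance, overpressure) pairs such as might be read off a
    # Multi-Energy chart for one charge strength; not actual TNO values.
    distance = np.array([10.0, 20.0, 50.0, 100.0, 200.0])     # m
    overpressure = np.array([95.0, 40.0, 12.0, 4.5, 1.6])     # kPa

    def power_law(d, a, b):
        """Characteristic curve of the form P = a * d**b."""
        return a * d**b

    (a, b), _ = curve_fit(power_law, distance, overpressure, p0=(1000.0, -1.0))
    print(f"P(d) = {a:.1f} * d^({b:.2f}) kPa")
    print("overpressure at 75 m:", round(power_law(75.0, a, b), 2), "kPa")
    ```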

  6. A gamma variate model that includes stretched exponential is a better fit for gastric emptying data from mice

    PubMed Central

    Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.

    2015-01-01

    Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes stretched exponential and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and when appropriate a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
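
    The model comparison described above can be reproduced in outline: fit both the gamma variate and its stretched-exponential extension, then compare Akaike Information Criterion values. The functional forms and the synthetic data below are assumptions for illustration, not the study's data or code.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, a, b, c):
        return a * t**b * np.exp(-t / c)

    def gamma_stretched(t, a, b, c, d):
        # Gamma variate generalized with a stretched-exponential decay term.
        return a * t**b * np.exp(-(t / c)**d)

    def aic(y, yhat, n_params):
        n = y.size
        rss = np.sum((y - yhat)**2)
        return n * np.log(rss / n) + 2 * n_params

    rng = np.random.default_rng(1)
    t = np.linspace(1, 180, 90)                      # minutes
    y = gamma_stretched(t, 1.0, 1.8, 20.0, 0.7) + rng.normal(0, 2.0, t.size)

    p1, _ = curve_fit(gamma_variate, t, y, p0=(1, 1.5, 20), maxfev=20000)
    p2, _ = curve_fit(gamma_stretched, t, y, p0=(1, 1.5, 20, 1.0), maxfev=20000)

    print("AIC, gamma variate:    ", round(aic(y, gamma_variate(t, *p1), 3), 1))
    print("AIC, stretched variant:", round(aic(y, gamma_stretched(t, *p2), 4), 1))
    # The stretched model should win (lower AIC) on data with a stretched tail.
    ```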

  7. Right-Sizing Statistical Models for Longitudinal Data

    PubMed Central

    Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.

    2015-01-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507

  8. 2D Bayesian automated tilted-ring fitting of disc galaxies in large H I galaxy surveys: 2DBAT

    NASA Astrophysics Data System (ADS)

    Oh, Se-Heon; Staveley-Smith, Lister; Spekkens, Kristine; Kamphuis, Peter; Koribalski, Bärbel S.

    2018-01-01

    We present a novel algorithm based on a Bayesian method for 2D tilted-ring analysis of disc galaxy velocity fields. Compared to the conventional algorithms based on a chi-squared minimization procedure, this new Bayesian-based algorithm suffers less from local minima of the model parameters even with highly multimodal posterior distributions. Moreover, the Bayesian analysis, implemented via Markov Chain Monte Carlo sampling, only requires broad ranges of posterior distributions of the parameters, which makes the fitting procedure fully automated. This feature will be essential when performing kinematic analysis on the large number of resolved galaxies expected to be detected in neutral hydrogen (H I) surveys with the Square Kilometre Array and its pathfinders. The so-called 2D Bayesian Automated Tilted-ring fitter (2DBAT) implements Bayesian fits of 2D tilted-ring models in order to derive rotation curves of galaxies. We explore 2DBAT performance on (a) artificial H I data cubes built based on representative rotation curves of intermediate-mass and massive spiral galaxies, and (b) Australia Telescope Compact Array H I data from the Local Volume H I Survey. We find that 2DBAT works best for well-resolved galaxies with intermediate inclinations (20° < i < 70°), complementing 3D techniques better suited to modelling inclined galaxies.

  9. Right-sizing statistical models for longitudinal data.

    PubMed

    Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M

    2015-12-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study.

  10. Combining optimization methods with response spectra curve-fitting toward improved damping ratio estimation

    NASA Astrophysics Data System (ADS)

    Brewick, Patrick T.; Smyth, Andrew W.

    2016-12-01

    The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectrum of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.

  11. Non-linear Growth Models in Mplus and SAS

    PubMed Central

    Grimm, Kevin J.; Ram, Nilam

    2013-01-01

    Non-linear growth curves, or growth curves that follow a specified non-linear function in time, enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
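
    A rough equivalent of this fitting exercise, using scipy rather than Mplus or SAS NLMIXED, is sketched below. The Richards parameterization shown (with a shape parameter nu that nests the logistic at nu = 1) and the synthetic data are assumptions for the example.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, A, k, t0):
        return A / (1 + np.exp(-k * (t - t0)))

    def gompertz(t, A, k, t0):
        return A * np.exp(-np.exp(-k * (t - t0)))

    def richards(t, A, k, t0, nu):
        nu = abs(nu) + 1e-9   # keep the shape parameter positive for the solver
        return A / (1 + nu * np.exp(-k * (t - t0)))**(1 / nu)

    rng = np.random.default_rng(2)
    t = np.linspace(0, 10, 40)
    y = richards(t, 100, 1.2, 4.0, 0.4) + rng.normal(0, 1.5, t.size)

    for f, p0 in [(logistic, (100, 1, 5)), (gompertz, (100, 1, 5)),
                  (richards, (100, 1, 5, 1))]:
        p, _ = curve_fit(f, t, y, p0=p0, maxfev=20000)
        rss = np.sum((y - f(t, *p))**2)
        print(f"{f.__name__:9s} RSS = {rss:8.1f}  params = {np.round(p, 2)}")
    ```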

  12. Investigation of interpolation techniques for the reconstruction of the first dimension of comprehensive two-dimensional liquid chromatography-diode array detector data.

    PubMed

    Allen, Robert C; Rutan, Sarah C

    2011-10-31

    Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data.
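
    Several of the interpolation methods compared above (linear, PCHIP, cubic spline, and Gaussian fitting; Fourier zero-filling is omitted) can be contrasted on an undersampled peak, as in this sketch with synthetic data standing in for the sparsely sampled first dimension:

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator, CubicSpline
    from scipy.optimize import curve_fit

    t_coarse = np.linspace(0, 10, 8)          # sparse first-dimension samples
    peak = lambda t, a, mu, s: a * np.exp(-(t - mu)**2 / (2 * s**2))
    y_coarse = peak(t_coarse, 1.0, 5.2, 1.1)

    t_fine = np.linspace(0, 10, 200)

    linear = np.interp(t_fine, t_coarse, y_coarse)
    pchip = PchipInterpolator(t_coarse, y_coarse)(t_fine)
    spline = CubicSpline(t_coarse, y_coarse)(t_fine)
    # "Gaussian fitting" interpolation: fit a Gaussian, then evaluate it densely.
    p, _ = curve_fit(peak, t_coarse, y_coarse, p0=(1, 5, 1))
    gauss = peak(t_fine, *p)

    truth = peak(t_fine, 1.0, 5.2, 1.1)
    for name, est in [("linear", linear), ("pchip", pchip),
                      ("spline", spline), ("gaussian fit", gauss)]:
        print(f"{name:12s} RMSE = {np.sqrt(np.mean((est - truth)**2)):.4f}")
    ```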

  13. Measurement of Ωm, ΩΛ from a Blind Analysis of Type Ia Supernovae with CMAGIC: Using Color Information to Verify the Acceleration of the Universe

    NASA Astrophysics Data System (ADS)

    Conley, A.; Goldhaber, G.; Wang, L.; Aldering, G.; Amanullah, R.; Commins, E. D.; Fadeyev, V.; Folatelli, G.; Garavini, G.; Gibbons, R.; Goobar, A.; Groom, D. E.; Hook, I.; Howell, D. A.; Kim, A. G.; Knop, R. A.; Kowalski, M.; Kuznetsova, N.; Lidman, C.; Nobili, S.; Nugent, P. E.; Pain, R.; Perlmutter, S.; Smith, E.; Spadafora, A. L.; Stanishev, V.; Strovink, M.; Thomas, R. C.; Wood-Vasey, W. M.; Supernova Cosmology Project

    2006-06-01

    We present measurements of Ωm and ΩΛ from a blind analysis of 21 high-redshift supernovae using a new technique (CMAGIC) for fitting the multicolor light curves of Type Ia supernovae, first introduced by Wang and coworkers. CMAGIC takes advantage of the remarkably simple behavior of Type Ia supernovae on color-magnitude diagrams and has several advantages over current techniques based on maximum magnitudes. Among these are a reduced sensitivity to host galaxy dust extinction, a shallower luminosity-width relation, and the relative simplicity of the fitting procedure. This allows us to provide a cross-check of previous supernova cosmology results, despite the fact that current data sets were not observed in a manner optimized for CMAGIC. We describe the details of our novel blindness procedure, which is designed to prevent experimenter bias. The data are broadly consistent with the picture of an accelerating universe and agree with a flat universe within 1.7 σ, including systematics. We also compare the CMAGIC results directly with those of a maximum magnitude fit to the same supernovae, finding that CMAGIC favors more acceleration at the 1.6 σ level, including systematics and the correlation between the two measurements. A fit for w assuming a flat universe yields a value that is consistent with a cosmological constant within 1.2 σ.

  14. Measuring Systematic Error with Curve Fits

    ERIC Educational Resources Information Center

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

  15. Buckling behavior of Rene 41 tubular panels for a hypersonic aircraft wing

    NASA Technical Reports Server (NTRS)

    Ko, W. L.; Fields, R. A.; Shideler, J. L.

    1986-01-01

    The buckling characteristics of Rene 41 tubular panels for a hypersonic aircraft wing were investigated. The panels were repeatedly tested for buckling characteristics using a hypersonic wing test structure and a universal tension/compression testing machine. The nondestructive buckling tests were carried out under different combined load conditions and in different temperature environments. The force/stiffness technique was used to determine the buckling loads of the panels. In spite of some data scattering resulting from large extrapolations of the data-fitting curve (because of the termination of applied loads at relatively low percentages of the buckling loads), the overall test data correlate fairly well with theoretically predicted buckling interaction curves. Also, the structural efficiency of the tubular panels was found to be slightly higher than that of beaded panels.

  16. Buckling behavior of Rene 41 tubular panels for a hypersonic aircraft wing

    NASA Technical Reports Server (NTRS)

    Ko, W. L.; Shideler, J. L.; Fields, R. A.

    1986-01-01

    The buckling characteristics of Rene 41 tubular panels for a hypersonic aircraft wing were investigated. The panels were repeatedly tested for buckling characteristics using a hypersonic wing test structure and a universal tension/compression testing machine. The nondestructive buckling tests were carried out under different combined load conditions and in different temperature environments. The force/stiffness technique was used to determine the buckling loads of the panel. In spite of some data scattering, resulting from large extrapolations of the data fitting curve (because of the termination of applied loads at relatively low percentages of the buckling loads), the overall test data correlate fairly well with theoretically predicted buckling interaction curves. Also, the structural efficiency of the tubular panels was found to be slightly higher than that of beaded panels.

  17. Excess junction current of silicon solar cells

    NASA Technical Reports Server (NTRS)

    Wang, E. Y.; Legge, R. N.; Christidis, N.

    1973-01-01

    The current-voltage characteristics of n⁺-p silicon solar cells with 0.1, 1.0, 2.0, and 10 ohm-cm p-type base materials have been examined in detail. In addition to the usual I-V measurements, we have studied the temperature dependence of the slope of the I-V curve at the origin by the lock-in technique. The excess junction current coefficient (Iq) deduced from the slope at the origin depends on the square root of the intrinsic carrier concentration. The Iq obtained from the I-V curve fitting over the entire forward bias region at various temperatures shows the same temperature dependence. This result, in addition to the presence of an aging effect, suggests that the surface channel effect is the dominant cause of the excess junction current.

  18. Probability Density Functions of Observed Rainfall in Montana

    NASA Technical Reports Server (NTRS)

    Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.

    1995-01-01

    The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis on large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis easily allow development of radar reflectivity factor (and, by extension, rain rate) distributions. Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, one PDF would exist for all cases, or many PDFs would exist that have the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases would validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89% of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.

  19. High-k shallow traps observed by charge pumping with varying discharging times

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Szu-Han; Chen, Ching-En; Tseng, Tseung-Yuen

    2013-11-07

    In this paper, we investigate the influence of falling time and base level time on high-k bulk shallow traps measured by the charge pumping technique in n-channel metal-oxide-semiconductor field-effect transistors with HfO2/metal gate stacks. N_T-V_(high level) characteristic curves with different duty ratios indicate that the electron detrapping time dominates the value of N_T for the extra contribution of I_cp traps, where N_T is the number of traps and I_cp is the charge pumping current. By fitting a discharge formula at different temperatures, the results show that the extra contribution of I_cp traps at high voltage is in fact due to high-k bulk shallow traps. This is also verified through a comparison of different interlayer thicknesses and different Ti_xN_(1-x) metal gate concentrations. Next, N_T-V_(high level) characteristic curves with different falling times (t_(falling time)) and base level times (t_(base level)) show that the extra contribution of I_cp traps decreases with an increase in t_(falling time). By fitting the discharge formula for different t_(falling time), the results show that electrons trapped in high-k bulk shallow traps first discharge to the channel and then to the source and drain during t_(falling time). This current cannot be measured by the charge pumping technique. Subsequent measurements of N_T by the charge pumping technique at t_(base level) reveal a remainder of electrons trapped in high-k bulk shallow traps.

  20. Modeling Pathways of Character Development across the First Three Decades of Life: An Application of Integrative Data Analysis Techniques to Understanding the Development of Hopeful Future Expectations.

    PubMed

    Callina, Kristina Schmid; Johnson, Sara K; Tirrell, Jonathan M; Batanova, Milena; Weiner, Michelle B; Lerner, Richard M

    2017-06-01

    There were two purposes of the present research: first, to add to scholarship about a key character virtue, hopeful future expectations; and second, to demonstrate a recent innovation in longitudinal methodology that may be especially useful in enhancing the understanding of the developmental course of hopeful future expectations and other character virtues that have been the focus of recent scholarship in youth development. Burgeoning interest in character development has led to a proliferation of short-term, longitudinal studies on character. These data sets are sometimes limited in their ability to model character development trajectories due to low power or relatively brief time spans assessed. However, the integrative data analysis approach allows researchers to pool raw data across studies in order to fit one model to an aggregated data set. The purpose of this article is to demonstrate the promises and challenges of this new tool for modeling character development. We used data from four studies evaluating youth character strengths in different settings to fit latent growth curve models of hopeful future expectations from participants aged 7 through 26 years. We describe the analytic strategy for pooling the data and modeling the growth curves. Implications for future research are discussed in regard to the advantages of integrative data analysis. Finally, we discuss issues researchers should consider when applying these techniques in their own work.

  1. Simple and Reliable Determination of Intravoxel Incoherent Motion Parameters for the Differential Diagnosis of Head and Neck Tumors

    PubMed Central

    Sasaki, Miho; Sumi, Misa; Eida, Sato; Katayama, Ikuo; Hotokezaka, Yuka; Nakamura, Takashi

    2014-01-01

    Intravoxel incoherent motion (IVIM) imaging can characterize diffusion and perfusion of normal and diseased tissues, and IVIM parameters are conventionally determined using a cumbersome least-squares method. We evaluated a simple technique for the determination of IVIM parameters using geometric analysis of the multiexponential signal decay curve as an alternative to the least-squares method for the diagnosis of head and neck tumors. Pure diffusion coefficients (D), microvascular volume fraction (f), perfusion-related incoherent microcirculation (D*), and a perfusion parameter that is heavily weighted towards extravascular space (P) were determined geometrically (Geo D, Geo f, and Geo P) or by the least-squares method (Fit D, Fit f, and Fit D*) in normal structures and 105 head and neck tumors. The IVIM parameters were compared between the 2 techniques for their levels and diagnostic abilities. The IVIM parameters could not be determined in 14 tumors with the least-squares method alone and in 4 tumors with both the geometric and least-squares methods. The geometric IVIM values were significantly different (p<0.001) from Fit values (+2±4% and −7±24% for D and f values, respectively). Geo D and Fit D differentiated between lymphomas and SCCs with similar efficacy (78% and 80% accuracy, respectively). Stepwise approaches using combinations of Geo D and Geo P, Geo D and Geo f, or Fit D and Fit D* differentiated between pleomorphic adenomas, Warthin tumors, and malignant salivary gland tumors with the same efficacy (91% accuracy = 21/23). However, a stepwise differentiation using Fit D and Fit f was less effective (83% accuracy = 19/23). Considering the cumbersome procedures of the least-squares method compared with the geometric method, we conclude that the geometric determination of IVIM parameters can be an alternative to the least-squares method in the diagnosis of head and neck tumors. PMID:25402436
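
    For reference, the least-squares route that the geometric method is compared against is typically a biexponential fit of the IVIM model S(b)/S0 = (1 - f) exp(-bD) + f exp(-bD*). A minimal sketch with synthetic b-values and signal (not the authors' data or code):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def ivim(b, f, D, Dstar):
        """Normalized IVIM signal: S(b)/S0."""
        return (1 - f) * np.exp(-b * D) + f * np.exp(-b * Dstar)

    b = np.array([0, 10, 20, 50, 100, 200, 400, 800], dtype=float)  # s/mm^2
    rng = np.random.default_rng(3)
    signal = ivim(b, 0.10, 1.0e-3, 2.0e-2) * (1 + rng.normal(0, 0.01, b.size))

    # Bounds keep f, D, D* in physiologically plausible ranges (assumed here).
    p, _ = curve_fit(ivim, b, signal, p0=(0.1, 1e-3, 1e-2),
                     bounds=([0, 1e-5, 1e-3], [0.5, 3e-3, 1e-1]))
    f, D, Dstar = p
    print(f"f = {f:.3f}, D = {D:.2e} mm^2/s, D* = {Dstar:.2e} mm^2/s")
    ```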

  2. Parametric soil water retention models: a critical evaluation of expressions for the full moisture range

    NASA Astrophysics Data System (ADS)

    Madi, Raneem; Huibert de Rooij, Gerrit; Mielenz, Henrike; Mai, Juliane

    2018-02-01

    Few parametric expressions for the soil water retention curve are suitable for dry conditions. Furthermore, expressions for the soil hydraulic conductivity curves associated with parametric retention functions can behave unrealistically near saturation. We developed a general criterion for water retention parameterizations that ensures physically plausible conductivity curves. Only 3 of the 18 tested parameterizations met this criterion without restrictions on the parameters of a popular conductivity curve parameterization. A fourth required one parameter to be fixed. We estimated parameters by shuffled complex evolution (SCE) with the objective function tailored to various observation methods used to obtain retention curve data. We fitted the four parameterizations with physically plausible conductivities as well as the most widely used parameterization. The performance of the resulting 12 combinations of retention and conductivity curves was assessed in a numerical study with 751 days of semiarid atmospheric forcing applied to unvegetated, uniform, 1 m freely draining columns for four textures. Choosing different parameterizations had a minor effect on evaporation, but cumulative bottom fluxes varied by up to an order of magnitude between them. This highlights the need for a careful selection of the soil hydraulic parameterization that ideally does not only rely on goodness of fit to static soil water retention data but also on hydraulic conductivity measurements. Parameter fits for 21 soils showed that extrapolations into the dry range of the retention curve often became physically more realistic when the parameterization had a logarithmic dry branch, particularly in fine-textured soils where high residual water contents would otherwise be fitted.

  3. ASTEROID LIGHT CURVES FROM THE PALOMAR TRANSIENT FACTORY SURVEY: ROTATION PERIODS AND PHASE FUNCTIONS FROM SPARSE PHOTOMETRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waszczak, Adam; Chang, Chan-Kao; Cheng, Yu-Chi

    We fit 54,296 sparsely sampled asteroid light curves in the Palomar Transient Factory survey to a combined rotation plus phase-function model. Each light curve consists of 20 or more observations acquired in a single opposition. Using 805 asteroids in our sample that have reference periods in the literature, we find that the reliability of our fitted periods is a complicated function of the period, amplitude, apparent magnitude, and other light-curve attributes. Using the 805-asteroid ground-truth sample, we train an automated classifier to estimate (along with manual inspection) the validity of the remaining ∼53,000 fitted periods. By this method we find that 9033 of our light curves (of ∼8300 unique asteroids) have “reliable” periods. Subsequent consideration of asteroids with multiple light-curve fits indicates a 4% contamination in these “reliable” periods. For 3902 light curves with sufficient phase-angle coverage and either a reliable fit period or low amplitude, we examine the distribution of several phase-function parameters, none of which are bimodal though all correlate with the bond albedo and with visible-band colors. Comparing the theoretical maximal spin rate of a fluid body with our amplitude versus spin-rate distribution suggests that, if held together only by self-gravity, most asteroids are in general less dense than ∼2 g cm^−3, while C types have a lower limit of between 1 and 2 g cm^−3. These results are in agreement with previous density estimates. For 5–20 km diameters, S types rotate faster and have lower amplitudes than C types. If both populations share the same angular momentum, this may indicate the two types’ differing ability to deform under rotational stress. Lastly, we compare our absolute magnitudes (and apparent-magnitude residuals) to those of the Minor Planet Center’s nominal (G = 0.15, rotation-neglecting) model; our phase-function plus Fourier-series fitting reduces asteroid photometric rms scatter by a factor of ∼3.

  4. Modeling two strains of disease via aggregate-level infectivity curves.

    PubMed

    Romanescu, Razvan; Deardon, Rob

    2016-04-01

    Well-formulated models of disease spread, and efficient methods to fit them to observed data, are powerful tools for aiding the surveillance and control of infectious diseases. Our project considers the problem of the simultaneous spread of two related strains of disease in a context where spatial location is the key driver of disease spread. We start our modeling work with the individual level models (ILMs) of disease transmission, and extend these models to accommodate the competing spread of the pathogens in a two-tier hierarchical population (whose levels we refer to as 'farm' and 'animal'). The postulated interference mechanism between the two strains is a period of cross-immunity following infection. We also present a framework for speeding up the computationally intensive process of fitting the ILM to data, typically done using Markov chain Monte Carlo (MCMC) in a Bayesian framework, by turning the inference into a two-stage process. First, we approximate the number of animals infected on a farm over time by infectivity curves. These curves are fit to data sampled from farms using maximum likelihood estimation; then, conditional on the fitted curves, Bayesian MCMC inference proceeds for the remaining parameters. Finally, we use posterior predictive distributions of salient epidemic summary statistics in order to assess the fitted model.

  5. Fitting relationship between the beam quality β factor of high-energy laser and the wavefront aberration of laser beam

    NASA Astrophysics Data System (ADS)

    Ji, Zhong-Ye; Zhang, Xiao-Fang

    2018-01-01

    The mathematical relation between the beam quality β factor of a high-energy laser and the wavefront aberration of the laser beam is important in the beam quality control theory of high-energy laser weapon systems. Numerical simulation is used to obtain this relation. Firstly, Zernike representations of typically distorted atmospheric wavefront aberrations caused by Kolmogoroff turbulence are generated. Then, the corresponding beam quality β factors of the different distorted wavefronts are calculated numerically through fast Fourier transform, which establishes the statistical distribution between the beam quality β factor and the wavefront aberration of the beam. Finally, curve fitting is used to establish the mathematical fitting relationship between these two parameters. The result of the curve fitting shows a quadratic relation between the beam quality β factor of the high-energy laser and the wavefront aberration of the laser beam. In this paper, 3 fitting curves, in which the wavefront aberrations are composed of Zernike polynomials of orders 20, 36 and 60, respectively, are established to express the relationship between the beam quality β factor and atmospheric wavefront aberrations of different spatial frequency.

  6. Comparing dark matter models, modified Newtonian dynamics and modified gravity in accounting for galaxy rotation curves

    NASA Astrophysics Data System (ADS)

    Li, Xin; Tang, Li; Lin, Hai-Nan

    2017-05-01

    We compare six models (including the baryonic model, two dark matter models, two modified Newtonian dynamics models and one modified gravity model) in accounting for galaxy rotation curves. For the dark matter models, we assume NFW profile and core-modified profile for the dark halo, respectively. For the modified Newtonian dynamics models, we discuss Milgrom’s MOND theory with two different interpolation functions, the standard and the simple interpolation functions. For the modified gravity, we focus on Moffat’s MSTG theory. We fit these models to the observed rotation curves of 9 high-surface brightness and 9 low-surface brightness galaxies. We apply the Bayesian Information Criterion and the Akaike Information Criterion to test the goodness-of-fit of each model. It is found that none of the six models can fit all the galaxy rotation curves well. Two galaxies can be best fitted by the baryonic model without involving nonluminous dark matter. MOND can fit the largest number of galaxies, and only one galaxy can be best fitted by the MSTG model. Core-modified model fits about half the LSB galaxies well, but no HSB galaxies, while the NFW model fits only a small fraction of HSB galaxies but no LSB galaxies. This may imply that the oversimplified NFW and core-modified profiles cannot model the postulated dark matter haloes well. Supported by Fundamental Research Funds for the Central Universities (106112016CDJCR301206), National Natural Science Fund of China (11305181, 11547305 and 11603005), and Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y5KF181CJ1)

  7. Atmospheric particulate analysis using angular light scattering

    NASA Technical Reports Server (NTRS)

    Hansen, M. Z.

    1980-01-01

    Using the light scattering matrix elements measured by a polar nephelometer, a procedure for estimating the characteristics of atmospheric particulates was developed. A theoretical library data set of scattering matrices derived from Mie theory was tabulated for a range of values of the size parameter and refractive index typical of atmospheric particles. Integration over the size parameter yielded the scattering matrix elements for a variety of hypothesized particulate size distributions. A least squares curve fitting technique was used to find a best fit from the library data for the experimental measurements. This was used as a first guess for a nonlinear iterative inversion of the size distributions. A real index of 1.50 and an imaginary index of -0.005 are representative of the smoothed inversion results for the near ground level atmospheric aerosol in Tucson.

  8. AKLSQF - LEAST SQUARES CURVE FITTING

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

    The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fit of up to a 100-degree polynomial. All computations in the program are carried out under Double Precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
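
    The error-tolerance mode can be sketched in a few lines: raise the polynomial degree until the least-squares error meets the tolerance. The sketch below uses numpy's ordinary polynomial fit rather than AKLSQF's orthogonal-polynomial-plus-Stirling-numbers route, so it illustrates the control flow only.

    ```python
    import numpy as np

    def fit_to_tolerance(x, y, tol, max_degree=100):
        """Increase the polynomial degree until the rms fit error <= tol."""
        for degree in range(1, max_degree + 1):
            coeffs = np.polynomial.polynomial.polyfit(x, y, degree)
            fit = np.polynomial.polynomial.polyval(x, coeffs)
            err = np.sqrt(np.mean((fit - y)**2))
            print(f"degree {degree}: rms error = {err:.4g}")
            if err <= tol:
                return coeffs, err
        return coeffs, err   # best effort if the tolerance was never met

    x = np.linspace(0, 1, 50)          # uniformly spaced abscissae
    y = np.sin(2 * np.pi * x)
    coeffs, err = fit_to_tolerance(x, y, tol=1e-3)
    print("degree used:", len(coeffs) - 1)
    ```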

  9. Students' Models of Curve Fitting: A Models and Modeling Perspective

    ERIC Educational Resources Information Center

    Gupta, Shweta

    2010-01-01

    The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…

  10. Accelerated pharmacokinetic map determination for dynamic contrast enhanced MRI using frequency-domain based Tofts model.

    PubMed

    Vajuvalli, Nithin N; Nayak, Krupa N; Geethanath, Sairam

    2014-01-01

    Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is widely used in the diagnosis of cancer and is also a promising tool for monitoring tumor response to treatment. The Tofts model has become a standard for the analysis of DCE-MRI. The process of curve fitting employed in the Tofts equation to obtain the pharmacokinetic (PK) parameters is time-consuming for high resolution scans. The current work demonstrates a frequency-domain approach applied to the standard Tofts equation to speed up the process of curve fitting in order to obtain the pharmacokinetic parameters. The results obtained show that using the frequency-domain approach, the process of curve fitting is computationally more efficient compared to the time-domain approach.
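
    The speed-up rests on the fact that the standard Tofts model is a convolution of the arterial input function with an exponential kernel, which can be evaluated with FFTs instead of a per-point time-domain integral. A sketch with a toy input function (the function names and parameter values are assumptions, not the paper's implementation):

    ```python
    import numpy as np

    def tofts_time_domain(t, cp, ktrans, kep):
        dt = t[1] - t[0]
        kernel = np.exp(-kep * t)
        return ktrans * np.convolve(cp, kernel)[:t.size] * dt

    def tofts_fft(t, cp, ktrans, kep):
        dt = t[1] - t[0]
        n = 2 * t.size                  # zero-pad to avoid circular wrap-around
        kernel = np.exp(-kep * t)
        ct = np.fft.irfft(np.fft.rfft(cp, n) * np.fft.rfft(kernel, n), n)[:t.size]
        return ktrans * ct * dt

    t = np.linspace(0, 300, 600)                 # seconds
    cp = (t / 30) * np.exp(1 - t / 30)           # toy arterial input function
    ct1 = tofts_time_domain(t, cp, ktrans=0.25 / 60, kep=0.8 / 60)
    ct2 = tofts_fft(t, cp, ktrans=0.25 / 60, kep=0.8 / 60)
    print("max |difference|:", np.abs(ct1 - ct2).max())  # ~machine precision
    ```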

  11. Computer codes for the evaluation of thermodynamic and transport properties for equilibrium air to 30000 K

    NASA Technical Reports Server (NTRS)

    Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.

    1991-01-01

    The computer codes developed here provide self-consistent thermodynamic and transport properties for equilibrium air for temperatures from 500 to 30000 K over a pressure range of 10^-4 to 10^-2 atm. These properties are computed through the use of temperature-dependent curve fits for discrete values of pressure. Interpolation is employed for intermediate values of pressure. The curve fits are based on mixture values calculated from an 11-species air model. Individual species properties used in the mixture relations are obtained from a recent study by the present authors. A review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1260.

  12. Model-checking techniques based on cumulative residuals.

    PubMed

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.

  13. Rate Constants for Fine-Structure Excitations in O - H Collisions with Error Bars Obtained by Machine Learning

    NASA Astrophysics Data System (ADS)

    Vieira, Daniel; Krems, Roman

    2017-04-01

    Fine-structure transitions in collisions of O(3Pj) with atomic hydrogen are an important cooling mechanism in the interstellar medium; knowledge of the rate coefficients for these transitions has a wide range of astrophysical applications. The accuracy of the theoretical calculation is limited by inaccuracy in the ab initio interaction potentials used in the coupled-channel quantum scattering calculations from which the rate coefficients can be obtained. In this work we use the latest ab initio results for the O(3Pj) + H interaction potentials to improve on previous calculations of the rate coefficients. We further present a machine-learning technique based on Gaussian Process regression to determine the sensitivity of the rate coefficients to variations of the underlying adiabatic interaction potentials. To account for the inaccuracy inherent in the ab initio calculations we compute error bars for the rate coefficients corresponding to 20% variation in each of the interaction potentials. We obtain these error bars by fitting a Gaussian Process model to a data set of potential curves and rate constants. We use the fitted model to do sensitivity analysis, determining the relative importance of individual adiabatic potential curves to a given fine-structure transition. NSERC.
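
    In outline, the sensitivity analysis amounts to regressing computed rate constants on potential-curve variations with a Gaussian Process and reading off the predictive standard deviation. The toy one-dimensional version below, with a single scaling factor standing in for the full potential-curve parameterization, is an assumption-laden sketch rather than the authors' procedure:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Training data: rate constants computed at a few potential scalings
    # (synthetic values; a real study would run scattering calculations here).
    scale = np.linspace(0.8, 1.2, 9).reshape(-1, 1)   # +/- 20% potential variation
    rate = 1e-10 * (1 + 2.5 * (scale.ravel() - 1.0)
                    + 4.0 * (scale.ravel() - 1.0)**2)

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.1),
                                  normalize_y=True)
    gp.fit(scale, rate)

    # Predict across the plausible range; the standard deviation gives error bars.
    grid = np.linspace(0.8, 1.2, 101).reshape(-1, 1)
    mean, std = gp.predict(grid, return_std=True)
    print(f"rate at nominal potential: {mean[50]:.3e} +/- {std[50]:.1e}")
    ```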

  14. Sub-band denoising and spline curve fitting method for hemodynamic measurement in perfusion MRI

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Huang, Hsiao-Ling; Hsu, Yuan-Yu; Chen, Chi-Chen; Chen, Ing-Yi; Wu, Liang-Chi; Liu, Ren-Shyan; Lin, Kang-Ping

    2003-05-01

    In clinical research, non-invasive MR perfusion imaging is capable of investigating brain perfusion phenomena via various hemodynamic measurements, such as cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT). These hemodynamic parameters are useful in diagnosing brain disorders such as stroke, infarction and periinfarct ischemia by further semi-quantitative analysis. However, the accuracy of quantitative analysis is usually affected by poor signal-to-noise ratio image quality. In this paper, we propose a hemodynamic measurement method based upon sub-band denoising and spline curve fitting processes to improve image quality for better hemodynamic quantitative analysis results. Ten sets of perfusion MRI data and corresponding PET images were used to validate the performance. For quantitative comparison, we evaluate the gray/white matter CBF ratio. As a result, the mean gray to white matter CBF ratio from hemodynamic semi-quantitative analysis is 2.10 +/- 0.34. The ratio evaluated from perfusion MRI is comparable to that from the PET technique, with less than 1% difference on average. Furthermore, the method features excellent noise reduction and boundary preservation in image processing, and a short hemodynamic measurement time.

  15. ARPEFS as an analytic technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schach von Wittenau, A.E.

    1991-04-01

    Two modifications to the ARPEFS technique are introduced. These are studied using p(2 × 2)S/Cu(001) as a model system. The first modification is the obtaining of ARPEFS χ(k) curves at temperatures as low as our equipment will permit. While adding to the difficulty of the experiment, this modification is shown to almost double the signal-to-noise ratio of normal emission p(2 × 2)S/Cu(001) χ(k) curves. This is shown by visual comparison of the raw data and by the improved precision of the extracted structural parameters. The second change is the replacement of manual fitting of the Fourier-filtered χ(k) curves by the use of the simplex algorithm for parameter determination. Again using p(2 × 2)S/Cu(001) data, this is shown to result in better agreement between experimental χ(k) curves and curves calculated based on model structures. The improved ARPEFS is then applied to p(2 × 2)S/Ni(111) and (√3 × √3)R30°S/Ni(111). For p(2 × 2)S/Cu(001) we find a S-Cu bond length of 2.26 Å, with the S adatom 1.31 Å above the fourfold hollow site. The second Cu layer appears to be corrugated. Analysis of the p(2 × 2)S/Ni(111) data indicates that the S adatom adsorbs onto the FCC threefold hollow site 1.53 Å above the Ni surface. The S-Ni bond length is determined to be 2.13 Å, indicating an outwards shift of the first layer Ni atoms. We are unable to assign a unique structure to (√3 × √3)R30°S/Ni(111). An analysis of the strengths and weaknesses of ARPEFS as an experimental and analytic technique is presented, along with a summary of problems still to be addressed.

  16. Fitting C² Continuous Parametric Surfaces to Frontiers Delimiting Physiologic Structures

    PubMed Central

    Bayer, Jason D.

    2014-01-01

    We present a technique to fit C² continuous parametric surfaces to scattered geometric data points forming frontiers delimiting physiologic structures in segmented images. Such mathematical representation is interesting because it facilitates a large number of operations in modeling. While the fitting of C² continuous parametric curves to scattered geometric data points is quite trivial, the fitting of C² continuous parametric surfaces is not. The difficulty comes from the fact that each scattered data point should be assigned a unique parametric coordinate, and the fit is quite sensitive to their distribution on the parametric plane. We present a new approach where a polygonal (quadrilateral or triangular) surface is extracted from the segmented image. This surface is subsequently projected onto a parametric plane in a manner to ensure a one-to-one mapping. The resulting polygonal mesh is then regularized for area and edge length. Finally, from this point, surface fitting is relatively trivial. The novelty of our approach lies in the regularization of the polygonal mesh. Process performance is assessed with the reconstruction of a geometric model of mouse heart ventricles from a computerized tomography scan. Our results show an excellent reproduction of the geometric data with surfaces that are C² continuous. PMID:24782911

  17. Study on peak shape fitting method in radon progeny measurement.

    PubMed

    Yang, Jinmin; Zhang, Lei; Abdumomin, Kadir; Tang, Yushi; Guo, Qiuju

    2015-11-01

    Alpha spectrum measurement is one of the most important methods to measure radon progeny concentration in the environment. However, the accuracy of this method is affected by peak tailing due to the energy losses of alpha particles. This article presents a peak shape fitting method that can overcome the peak tailing problem in most situations. On a typical measured alpha spectrum curve, consecutive peaks overlap even when their energies are not close to each other, making it difficult to calculate the exact count of each peak. The peak shape fitting method uses a combination of Gaussian and exponential functions, which can depict the features of those peaks, to fit the measured curve. It provides the net counts of each peak explicitly, which are used in the Kerr calculation procedure for radon progeny concentration measurement. The results show that the fitting curve agrees well with the measured curve, and the influence of the peak tailing is reduced. The method was further validated by the agreement between radon equilibrium equivalent concentrations based on this method and the measured values of some commercial radon monitors, such as EQF3220 and WLx. In addition, this method improves the accuracy of individual radon progeny concentration measurement. Especially for the (218)Po peak, after eliminating the peak tailing influence, the calculated result of (218)Po concentration was reduced by 21%.
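
    One plausible realization of a Gaussian-plus-exponential peak shape is the exponentially modified Gaussian with its tail on the low-energy side. The sketch below fits two such overlapping peaks to synthetic counts; the peak energies, shape parameters, and shared-shape assumption are illustrative only, not the paper's exact model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    def alpha_peak(E, area, mu, sigma, tau):
        """Gaussian core with an exponential low-energy tail (an exponentially
        modified Gaussian); with 1 keV bins, `area` is the net peak count."""
        z = (E - mu) / sigma + sigma / tau
        return (area / (2 * tau)) * np.exp((E - mu) / tau
                                           + sigma**2 / (2 * tau**2)) \
            * erfc(z / np.sqrt(2))

    def two_peaks(E, a1, m1, a2, m2, sigma, tau):
        # Overlapping peaks share the detector's shape parameters sigma and tau.
        return alpha_peak(E, a1, m1, sigma, tau) + alpha_peak(E, a2, m2, sigma, tau)

    rng = np.random.default_rng(4)
    E = np.arange(5200.0, 6601.0)                # keV, 1 keV bins (illustrative)
    truth = two_peaks(E, 5000, 6003, 3000, 6295, 18, 60)
    counts = rng.poisson(truth)

    p, _ = curve_fit(two_peaks, E, counts.astype(float),
                     p0=(4000, 6000, 2000, 6300, 15, 50))
    print("fitted net counts:", round(p[0]), "and", round(p[2]))
    ```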

  18. [An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].

    PubMed

    Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu

    2016-04-01

    The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. Firstly, the first-order derivative of the original ECG signal is calculated, and the maximum and minimum points of each beat are obtained; these are treated as the positions of the fiducial points. Then the original ECG is fed into a high-pass filter with a 1.5 Hz cutoff frequency. The difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. Cubic spline interpolation is then fitted through the fiducial points, and the fitted curve is the baseline drift curve. For the two simulated test cases, the correlation coefficients between the fitting curve from the presented algorithm and the simulated curve were increased by 0.242 and 0.13 compared with those from the traditional cubic spline interpolation algorithm. For the case of clinical baseline drift data, the average correlation coefficient from the presented algorithm reached 0.972.
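
    A much-simplified version of the baseline correction: estimate the baseline at one quiet fiducial point per beat, fit a cubic spline through those points, and subtract. Real fiducial selection follows the derivative-based procedure described above; the toy signal and naive fiducial choice here are assumptions for the sketch.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    fs = 360                                           # Hz, assumed sampling rate
    t = np.arange(0, 10, 1 / fs)
    ecg = np.sin(2 * np.pi * 1.0 * t) ** 63            # crude spike train, one "beat" per second
    baseline_true = 0.3 * np.sin(2 * np.pi * 0.2 * t)  # slow baseline wander
    signal = ecg + baseline_true

    # Naive fiducial points: one sample per beat taken where the toy ECG is quiet.
    idx = np.arange(int(0.5 * fs), t.size, fs)
    baseline_est = CubicSpline(t[idx], signal[idx])(t)
    corrected = signal - baseline_est

    print("rms drift before:", round(np.sqrt(np.mean(baseline_true**2)), 4))
    print("rms drift after: ", round(np.sqrt(np.mean((corrected - ecg)**2)), 4))
    ```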

  19. A Synthesis of Solar Cycle Prediction Techniques

    NASA Technical Reports Server (NTRS)

    Hathaway, David H.; Wilson, Robert M.; Reichmann, Edwin J.

    1999-01-01

    A number of techniques currently in use for predicting solar activity on a solar cycle timescale are tested with historical data. Some techniques, e.g., regression and curve fitting, work well as solar activity approaches maximum and provide a month-by-month description of future activity, while others, e.g., geomagnetic precursors, work well near solar minimum but only provide an estimate of the amplitude of the cycle. A synthesis of different techniques is shown to provide a more accurate and useful forecast of solar cycle activity levels. A combination of two uncorrelated geomagnetic precursor techniques provides a more accurate prediction for the amplitude of a solar activity cycle at a time well before activity minimum. This combined precursor method gives a smoothed sunspot number maximum of 154 plus or minus 21 at the 95% level of confidence for the next cycle maximum. A mathematical function dependent on the time of cycle initiation and the cycle amplitude is used to describe the level of solar activity month by month for the next cycle. As the time of cycle maximum approaches a better estimate of the cycle activity is obtained by including the fit between previous activity levels and this function. This Combined Solar Cycle Activity Forecast gives, as of January 1999, a smoothed sunspot maximum of 146 plus or minus 20 at the 95% level of confidence for the next cycle maximum.

  20. Combined evaluation of grazing incidence X-ray fluorescence and X-ray reflectivity data for improved profiling of ultra-shallow depth distributions

    PubMed Central

    Ingerle, D.; Meirer, F.; Pepponi, G.; Demenev, E.; Giubertoni, D.; Wobrauschek, P.; Streli, C.

    2014-01-01

    The continuous downscaling of the process size for semiconductor devices pushes the junction depths and consequently the implantation depths to the top few nanometers of the Si substrate. This motivates the need for sensitive methods capable of analyzing dopant distribution, total dose and possible impurities. X-ray techniques utilizing the external reflection of X-rays are very surface sensitive, hence providing a non-destructive tool for process analysis and control. X-ray reflectometry (XRR) is an established technique for the characterization of single- and multi-layered thin film structures with layer thicknesses in the nanometer range. XRR spectra are acquired by varying the incident angle in the grazing incidence regime while measuring the specularly reflected X-ray beam. The shape of the resulting angle-dependent curve is correlated to changes of the electron density in the sample, but does not provide direct information on the presence or distribution of chemical elements in the sample. Grazing Incidence XRF (GIXRF) measures the X-ray fluorescence induced by an X-ray beam incident under grazing angles. The resulting angle dependent intensity curves are correlated to the depth distribution and mass density of the elements in the sample. GIXRF provides information on contaminations, total implanted dose and to some extent on the depth of the dopant distribution, but is ambiguous with regard to the exact distribution function. Both techniques use similar measurement procedures and data evaluation strategies, i.e. optimization of a sample model by fitting measured and calculated angle curves. Moreover, the applied sample models can be derived from the same physical properties, like atomic scattering/form factors and elemental concentrations; a simultaneous analysis is therefore a straightforward approach. This combined analysis in turn reduces the uncertainties of the individual techniques, allowing a determination of dose and depth profile of the implanted elements with a drastically increased confidence level. Silicon wafers implanted with Arsenic at different implantation energies were measured by XRR and GIXRF using a combined, simultaneous measurement and data evaluation procedure. The data were processed using a self-developed software package (JGIXA), designed for simultaneous fitting of GIXRF and XRR data. The results were compared with depth profiles obtained by Secondary Ion Mass Spectrometry (SIMS). PMID:25202165

  1. Methodologies for Development of Patient Specific Bone Models from Human Body CT Scans

    NASA Astrophysics Data System (ADS)

    Chougule, Vikas Narayan; Mulay, Arati Vinayak; Ahuja, Bharatkumar Bhagatraj

    2016-06-01

    This work deals with the development of an algorithm for physical replication of patient-specific human bone and construction of corresponding implant/insert RP models, using a Reverse Engineering approach on non-invasive medical images for surgical purposes. In the medical field, volumetric data, i.e. voxel and triangular facet based models, are primarily used for bio-modelling and visualization, which requires huge memory space. On the other side, recent advances in Computer Aided Design (CAD) technology provide additional facilities/functions for design, prototyping and manufacturing of any object having freeform surfaces based on boundary representation techniques. This work presents a process for physical replication of 3D rapid prototyping (RP) models of human bone using CAD modelling techniques applied to 3D point cloud data obtained from non-invasive CT/MRI scans in DICOM 3.0 format. This point cloud data is used for construction of a 3D CAD model by fitting B-spline curves through these points and then fitting surfaces between these curve networks by using swept blend techniques. This can also be achieved by generating the triangular mesh directly from the 3D point cloud data, without developing any surface model, using any commercial CAD software. The STL file generated from the 3D point cloud data is used as a basic input for the RP process. The Delaunay tetrahedralization approach is used to process the 3D point cloud data to obtain the STL file. CT scan data of a metacarpus (human bone) is used as the case study for the generation of the 3D RP model. A 3D physical model of the human bone is generated on a rapid prototyping machine and its virtual reality model is presented for visualization. The CAD models generated by the different techniques are compared for accuracy and reliability. The results of this research work are assessed for clinical reliability in replication of human bone in the medical field.

  2. The integrated Michaelis-Menten rate equation: déjà vu or vu jàdé?

    PubMed

    Goličnik, Marko

    2013-08-01

    A recent article by Johnson and Goody (Biochemistry, 2011;50:8264-8269) described the almost-100-year-old paper of Michaelis and Menten. Johnson and Goody translated this classic article and presented the historical perspective on this early example of enzyme-reaction data analysis, including a pioneering global fit of the integrated rate equation in its implicit form to the experimental time-course data. They reanalyzed these data, although only numerical techniques were used to solve the model equations. However, there is also a still little-known closed-form algebraic solution of the integrated rate equation that enables direct fitting of the data. Therefore, in this commentary, I briefly present the integral solution of the Michaelis-Menten rate equation, which has been largely overlooked for three decades. This solution is expressed in terms of the Lambert W function, and I demonstrate here its use for global nonlinear regression curve fitting, as carried out with the original time-course dataset of Michaelis and Menten.
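
    The closed-form solution referred to here expresses the substrate progress curve through the Lambert W function, S(t) = Km * W((S0/Km) * exp((S0 - Vmax*t)/Km)). A minimal SciPy sketch, with made-up data rather than the 1913 dataset:

      import numpy as np
      from scipy.special import lambertw
      from scipy.optimize import curve_fit

      def s_of_t(t, s0, vmax, km):
          # Closed-form integrated Michaelis-Menten equation:
          # S(t) = Km * W((S0/Km) * exp((S0 - Vmax*t)/Km))
          arg = (s0 / km) * np.exp((s0 - vmax * t) / km)
          return km * np.real(lambertw(arg))

      # Illustrative time-course data (not the original dataset).
      t = np.linspace(0, 60, 13)
      s_obs = s_of_t(t, 10.0, 0.5, 4.0) + 0.05 * np.random.randn(t.size)

      popt, pcov = curve_fit(s_of_t, t, s_obs, p0=[10.0, 1.0, 1.0])
      print("S0, Vmax, Km =", popt)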

  3. Comparison of software and human observers in reading images of the CDMAM test object to assess digital mammography systems

    NASA Astrophysics Data System (ADS)

    Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde

    2006-03-01

    European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However, this is time-consuming and subject to large inter-observer error. To overcome these problems, a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting the output is not defined. This study evaluates methods of determining threshold contrast from the program, and compares these to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding, (B) psychometric curve fitting, (C) smoothing and interpolation, and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility, with a standard error in threshold contrast of 18.1 +/- 0.7%. This was reduced to 8.4% by using a contrast-detail curve fitting procedure. Method (D) had the best reproducibility, with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of 3 human observers had an error of 4.4%, reduced to 2.9% by curve fitting. All automatic methods led to threshold contrasts that were lower than for humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 +/- 0.04 (SEM) at 0.1 mm and 1.82 +/- 0.06 at 0.25 mm for method (D). There were good correlations between the threshold contrast determined by humans and the automated methods.
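
    CDMAM scoring is a four-alternative forced-choice task, so a common form of the psychometric curve fitted in step (B) rises from a 0.25 guess rate toward 1, with the threshold read off at a chosen correct fraction. A generic sketch (illustrative data and parameterization, not the CDCOM algorithm):

      import numpy as np
      from scipy.optimize import curve_fit

      def psychometric(c, c_t, s):
          # 4-AFC psychometric curve: guess rate 0.25, asymptote 1.0;
          # c_t is the threshold contrast, s the slope parameter.
          return 0.25 + 0.75 / (1.0 + np.exp(-(np.log(c) - np.log(c_t)) / s))

      contrast = np.array([0.05, 0.08, 0.13, 0.20, 0.32, 0.50])
      frac_correct = np.array([0.27, 0.35, 0.55, 0.78, 0.92, 0.99])  # illustrative

      (c_t, s), _ = curve_fit(psychometric, contrast, frac_correct,
                              p0=[0.15, 0.3], bounds=([1e-3, 1e-3], [1.0, 5.0]))
      print(f"threshold contrast ~ {c_t:.3f}")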

  4. Modification of smoothing in 4253H[T]

    NASA Astrophysics Data System (ADS)

    Azmi, Nurul Nisa'Khairol; Adam, Mohd Bakri; Shitan, Mahendran; Ali, Norhaslinda Mohd

    2017-05-01

    Some modified non-linear smoothers, particularly 4253H[T], are explained in this paper. The modifications focus on estimating the middle point of the running median for an even span by applying the following types of means: geometric, harmonic, quadratic and contraharmonic. The performance of the techniques is assessed by applying them to the daily price index of a bank in Malaysia that issues sukuk for funding in Islamic banking and financial business. The results show that 4253H[T] with the geometric mean modification is better than the others at preserving variation and fitting the curve.
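
    The modification concerns the even-span median steps of the smoother (spans 4 and 2 in 4253H[T]), where the arithmetic mean of the two middle order statistics is replaced by another mean. A sketch of the span-4 step with a geometric mean; the harmonic, quadratic and contraharmonic variants swap in 2ab/(a+b), sqrt((a^2+b^2)/2) and (a^2+b^2)/(a+b) respectively. The data are made up:

      import numpy as np

      def running_median4_geometric(x):
          # Span-4 running median where the usual arithmetic mean of the two
          # middle order statistics is replaced by their geometric mean
          # (requires positive data, e.g. a price index).
          x = np.asarray(x, dtype=float)
          out = []
          for i in range(len(x) - 3):
              w = np.sort(x[i:i + 4])
              out.append(np.sqrt(w[1] * w[2]))   # geometric mean of middle pair
          return np.array(out)

      prices = np.array([4.1, 4.3, 4.2, 4.6, 4.5, 4.8, 4.7, 4.9])
      print(running_median4_geometric(prices))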

  5. Mathematical Modelling of Waveguiding Techniques and Electron Transport. Volume 1.

    DTIC Science & Technology

    1984-01-01

    (Abstract not machine-readable; recoverable OCR fragments:) ...at the end of each output time step. The difficulty here is that the last working time step is then simply what is required to hit the output time... Tabata curve fit algorithm. The comparison of the energy deposition profiles for the 1.0 MeV case is given in Table 4. More complete tables are

  6. Electron Heating and Quasiparticle Tunnelling in Superconducting Charge Qubits

    NASA Technical Reports Server (NTRS)

    Shaw, M. D.; Bueno, J.; Delsing, P.; Echternach, P. M.

    2008-01-01

    We have directly measured non-equilibrium quasiparticle tunnelling in the time domain as a function of temperature and RF carrier power for a pair of charge qubits based on the single Cooper-pair box, where the readout is performed with a multiplexed quantum capacitance technique. We have extracted an effective electron temperature for each applied RF power, using the data taken at the lowest power as a reference curve. These data have been fit to a standard T^5 electron heating model, with reasonable correspondence to established material parameters.

  7. Fatigue behavior of porous biomaterials manufactured using selective laser melting.

    PubMed

    Yavari, S Amin; Wauthle, R; van der Stok, J; Riemslag, A C; Janssen, M; Mulier, M; Kruth, J P; Schrooten, J; Weinans, H; Zadpoor, A A

    2013-12-01

    Porous titanium alloys are considered promising bone-mimicking biomaterials. Additive manufacturing techniques such as selective laser melting allow for manufacturing of porous titanium structures with a precise design of micro-architecture. The mechanical properties of selective laser melted porous titanium alloys with different designs of micro-architecture have already been studied and are shown to be in the range of mechanical properties of bone. However, the fatigue behavior of this biomaterial is not yet well understood. We studied the fatigue behavior of porous structures made of Ti6Al4V ELI powder using selective laser melting. Four different porous structures were manufactured with porosities between 68 and 84% and the fatigue S-N curves of these four porous structures were determined. The three-stage mechanism of fatigue failure of these porous structures is described and studied in detail. It was found that the absolute S-N curves of these four porous structures are very different. In general, given the same absolute stress level, the fatigue life is much shorter for more porous structures. However, the normalized fatigue S-N curves of these four structures were found to be very similar. A power law was fitted to all data points of the normalized S-N curves. It is shown that the measured data points conform to the fitted power law very well, R(2) = 0.94. This power law may therefore help in estimating the fatigue life of porous structures for which no fatigue test data are available. It is also observed that the normalized endurance limit of all tested porous structures (<0.2) is lower than that of the corresponding solid material (ca. 0.4). © 2013.
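
    A normalized S-N power law of the kind fitted here, N = a * s^b, becomes linear in log-log space, so ordinary least squares on the logs recovers the exponent and a goodness-of-fit directly; the data points below are invented for illustration:

      import numpy as np

      # Illustrative normalized S-N data: stress level (fraction of yield
      # strength) versus cycles to failure.
      s = np.array([0.8, 0.6, 0.45, 0.35, 0.28, 0.22])
      n = np.array([1.2e3, 8.5e3, 5.0e4, 2.1e5, 9.0e5, 3.5e6])

      # Fit N = a * s**b  <=>  log N = log a + b log s (linear least squares).
      b, log_a = np.polyfit(np.log(s), np.log(n), 1)
      n_pred = np.exp(log_a) * s ** b

      ss_res = np.sum((np.log(n) - np.log(n_pred)) ** 2)
      ss_tot = np.sum((np.log(n) - np.log(n).mean()) ** 2)
      print("b =", b, "R^2 =", 1 - ss_res / ss_tot)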

  8. On Correlated-noise Analyses Applied to Exoplanet Light Curves

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Harrington, Joseph; Loredo, Thomas J.; Lust, Nate B.; Blecic, Jasmina; Stemm, Madison

    2017-01-01

    Time-correlated noise is a significant source of uncertainty when modeling exoplanet light-curve data. A correct assessment of correlated noise is fundamental to determine the true statistical significance of our findings. Here, we review three of the most widely used correlated-noise estimators in the exoplanet field: the time-averaging, residual-permutation, and wavelet-likelihood methods. We argue that the residual-permutation method is unsound in estimating the uncertainty of parameter estimates. We thus recommend refraining from this method altogether. We characterize the behavior of the time-averaging method's rms-versus-bin-size curves at bin sizes similar to the total observation duration, which may lead to underestimated uncertainties. For the wavelet-likelihood method, we note errors in the published equations and provide a list of corrections. We further assess the performance of these techniques by injecting and retrieving eclipse signals into synthetic and real Spitzer light curves, analyzing the results in terms of the relative-accuracy and coverage-fraction statistics. Both the time-averaging and wavelet-likelihood methods significantly improve the estimate of the eclipse depth over a white-noise analysis (a Markov-chain Monte Carlo exploration assuming uncorrelated noise). However, the corrections are not perfect: when retrieving the eclipse depth from Spitzer data sets, these methods covered the true (injected) depth within the 68% credible region in only ~45%-65% of the trials. Lastly, we present our open-source model-fitting tool, Multi-Core Markov-Chain Monte Carlo (MC3). This package uses Bayesian statistics to estimate the best-fitting values and the credible regions for the parameters for a (user-provided) model. MC3 is a Python/C code, available at https://github.com/pcubillos/MCcubed.
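
    The time-averaging diagnostic mentioned above compares the rms of time-binned residuals against the 1/sqrt(M) decay expected for white noise; a measured curve flattening above that line signals correlated noise. A minimal sketch (white-noise residuals, so the two curves should track each other):

      import numpy as np

      def rms_vs_binsize(residuals, max_bin=None):
          # Time-averaging test: rms of binned residuals versus bin size M.
          # For pure white noise, rms(M) ~ rms(1) / sqrt(M).
          r = np.asarray(residuals, dtype=float)
          max_bin = max_bin or len(r) // 10
          sizes, rms = [], []
          for m in range(1, max_bin + 1):
              nbins = len(r) // m
              binned = r[:nbins * m].reshape(nbins, m).mean(axis=1)
              sizes.append(m)
              rms.append(np.sqrt(np.mean(binned ** 2)))
          return np.array(sizes), np.array(rms)

      res = np.random.randn(2000)          # illustrative white-noise residuals
      m, rms = rms_vs_binsize(res)
      expected = rms[0] / np.sqrt(m)       # white-noise scaling for comparison
      print(np.c_[m[:5], rms[:5], expected[:5]])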

  9. Dynamic rating curve assessment for hydrometric stations and computation of the associated uncertainties: Quality and station management indicators

    NASA Astrophysics Data System (ADS)

    Morlot, Thomas; Perret, Christian; Favre, Anne-Catherine; Jalbert, Jonathan

    2014-09-01

    A rating curve is used to indirectly estimate the discharge in rivers based on water level measurements. The discharge values obtained from a rating curve include uncertainties related to the direct stage-discharge measurements (gaugings) used to build the curves, the quality of fit of the curve to these measurements and the constant changes in the river bed morphology. Moreover, the uncertainty of discharges estimated from a rating curve increases with the “age” of the rating curve. The level of uncertainty at a given point in time is therefore particularly difficult to assess. A “dynamic” method has been developed to compute rating curves while calculating associated uncertainties, thus making it possible to regenerate streamflow data with uncertainty estimates. The method is based on historical gaugings at hydrometric stations. A rating curve is computed for each gauging and a model of the uncertainty is fitted for each of them. The model of uncertainty takes into account the uncertainties in the measurement of the water level, the quality of fit of the curve, the uncertainty of gaugings and the increase of the uncertainty of discharge estimates with the age of the rating curve, computed with a variographic analysis (Jalbert et al., 2011). The presented dynamic method can answer important questions in the field of hydrometry, such as “How many gaugings a year are required to produce streamflow data with an average uncertainty of X%?” and “When and in what range of water flow rates should these gaugings be carried out?”. The Rocherousse hydrometric station (France, Haute-Durance watershed, 946 km2) is used as an example throughout the paper. Other stations are used to illustrate certain points.
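
    Rating curves of this kind are classically parameterized as a power law Q = a (h - h0)^b and fitted to the gaugings by nonlinear least squares; the parameter covariance then gives a first-order uncertainty, although the paper's dynamic method goes well beyond this. A sketch with invented gaugings:

      import numpy as np
      from scipy.optimize import curve_fit

      def rating(h, a, h0, b):
          # Classic stage-discharge power law: Q = a * (h - h0)**b
          return a * np.clip(h - h0, 1e-9, None) ** b

      # Illustrative gaugings: stage h (m) and measured discharge Q (m3/s).
      h = np.array([0.42, 0.55, 0.71, 0.90, 1.15, 1.48, 1.90])
      q = np.array([1.1, 2.3, 4.6, 8.9, 16.8, 31.0, 58.0])

      (a, h0, b), pcov = curve_fit(rating, h, q, p0=[10.0, 0.2, 2.0])
      perr = np.sqrt(np.diag(pcov))     # 1-sigma parameter uncertainties
      print(f"Q = {a:.2f} (h - {h0:.2f})^{b:.2f}, 1-sigma: {perr}")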

  10. Consideration of Wear Rates at High Velocities

    DTIC Science & Technology

    2010-03-01

    (List-of-figures fragment; abstract not recoverable:) Strain vs. three-dimensional model; example single asperity wear rate integral; third stage slipper accumulated frictional heating; surface temperature of third stage slipper (ave = 0.5); melt depth example; A3S and B3S coefficients for frictional heat curve fit, third stage slipper.

  11. Analyser-based phase contrast image reconstruction using geometrical optics.

    PubMed

    Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A

    2007-07-21

    Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 microm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser.
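
    The rocking-curve fit described here uses a symmetric Pearson type VII profile, which interpolates between Lorentzian (m = 1) and Gaussian (m -> infinity) shapes. A sketch with synthetic rocking-curve samples rather than the ELETTRA data:

      import numpy as np
      from scipy.optimize import curve_fit

      def pearson_vii(x, i0, x0, w, m):
          # Symmetric Pearson type VII profile; w is the half-width at half
          # maximum and m controls the tails (m=1 Lorentzian, m->inf Gaussian).
          return i0 * (1.0 + ((x - x0) / w) ** 2 * (2.0 ** (1.0 / m) - 1.0)) ** (-m)

      # Illustrative rocking-curve samples (angle in arcsec, reflectivity).
      theta = np.linspace(-30, 30, 61)
      r_obs = pearson_vii(theta, 1.0, 0.0, 6.0, 1.8) + 0.01 * np.random.randn(61)

      popt, _ = curve_fit(pearson_vii, theta, r_obs, p0=[1.0, 0.0, 5.0, 1.5])
      print("I0, x0, HWHM, m =", popt)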

  12. Using quasars as standard clocks for measuring cosmological redshift.

    PubMed

    Dai, De-Chang; Starkman, Glenn D; Stojkovic, Branislav; Stojkovic, Dejan; Weltman, Amanda

    2012-06-08

    We report hitherto unnoticed patterns in quasar light curves. We characterize segments of the quasar's light curves with the slopes of the straight lines fit through them. These slopes appear to be directly related to the quasars' redshifts. Alternatively, using only global shifts in time and flux, we are able to find significant overlaps between the light curves of different pairs of quasars by fitting the ratio of their redshifts. We are then able to reliably determine the redshift of one quasar from another. This implies that one can use quasars as standard clocks, as we explicitly demonstrate by constructing two independent methods of finding the redshift of a quasar from its light curve.

  13. Subsurface water parameters: optimization approach to their determination from remotely sensed water color data.

    PubMed

    Jain, S C; Miller, J R

    1976-04-01

    A method, using an optimization scheme, has been developed for the interpretation of spectral albedo (or spectral reflectance) curves obtained from remotely sensed water color data. This method uses a two-flow model of the radiation flow and solves for the albedo. Optimization fitting of predicted to observed reflectance data is performed by a quadratic interpolation method for the variables chlorophyll concentration and scattering coefficient. The technique is applied to airborne water color data obtained from the Kawartha Lakes, the Sargasso Sea, and the Nova Scotia coast. The modeled spectral albedo curves are compared to those obtained experimentally, and the computed optimum water parameters are compared to ground truth values. It is shown that the backscattered spectral signal contains information that can be interpreted to give quantitative estimates of the chlorophyll concentration and turbidity in the waters studied.

  14. Graphical and PC-software analysis of volcano eruption precursors according to the Materials Failure Forecast Method (FFM)

    NASA Astrophysics Data System (ADS)

    Cornelius, Reinold R.; Voight, Barry

    1995-03-01

    The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. Time of eruption onset is derived from the time of "failure" implied by an accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship d²Ω/dt² = A (dΩ/dt)^α, where the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. Rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all simulating accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements and seismicity. The use of seismic coda, seismic amplitude-derived energy release and time-integrated amplitudes or coda lengths is examined. Using cumulative coda length directly has some practical advantages over more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques of applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is most robust and is expected to be the most practical numerical technique. This technique is based on an iterative linearization of the given rate-time series. The hindsight technique is disadvantaged by a bias favouring a too-early eruption time in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting. Separate software is required for graphing purposes. All three linearizing techniques facilitate an eruption window based on a data envelope according to the linear least-squares fit, at a specific level of confidence, and an estimated rate at time of failure.
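
    For the α = 2 case mentioned above, the graphical technique reduces to fitting a straight line to inverse rate versus time and reading off where it crosses zero. A sketch with invented precursor rates:

      import numpy as np

      # Illustrative accelerating precursor rates (e.g. events/day) at times t.
      t = np.array([0., 2., 4., 6., 8., 10.])
      rate = np.array([5., 6.3, 8.4, 12.1, 19.5, 43.0])

      # For alpha = 2 the inverse rate declines linearly with time,
      # 1/rate = c - k*t, and the "failure" (eruption) time is where the
      # fitted line reaches zero inverse rate.
      slope, intercept = np.polyfit(t, 1.0 / rate, 1)
      t_failure = -intercept / slope
      print(f"extrapolated eruption time ~ t = {t_failure:.1f}")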

  15. BGFit: management and automated fitting of biological growth curves.

    PubMed

    Veríssimo, André; Paixão, Laura; Neves, Ana Rute; Vinga, Susana

    2013-09-25

    Existing tools to model cell growth curves do not offer a flexible integrative approach to manage large datasets and automatically estimate parameters. Due to the increase of experimental time-series from microbiology and oncology, the need for software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters in an efficient way is crucial. BGFit provides a web-based unified platform, where a rich set of dynamic models can be fitted to experimental time-series data, further allowing users to efficiently manage the results in a structured and hierarchical way. The data managing system allows users to organize projects, experiments and measurement data and also to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic and Live Cell Fraction models, and the user can easily add new models, thus expanding the current set. BGFit allows users to easily manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software with existing tools and methods, allowing users to compare and evaluate different data modeling techniques. The application is described in the context of bacterial and tumor cell growth data fitting, but it is also applicable to any type of two-dimensional data, e.g. physical chemistry and macroeconomic time series, being fully scalable to a high number of projects, data and model complexity.
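
    Of the growth models listed, the modified Gompertz curve is a typical example; in the Zwietering parameterization it exposes the asymptote, maximum growth rate and lag time directly. A SciPy sketch with invented log-ratio growth data (not BGFit's implementation):

      import numpy as np
      from scipy.optimize import curve_fit

      def gompertz(t, a, mu, lam):
          # Modified Gompertz growth curve (Zwietering parameterization):
          # a = asymptote, mu = maximum growth rate, lam = lag time.
          e = np.e
          return a * np.exp(-np.exp(mu * e / a * (lam - t) + 1.0))

      t = np.array([0, 2, 4, 6, 8, 10, 14, 18, 24], dtype=float)       # h
      y = np.array([0.02, 0.05, 0.3, 1.1, 2.2, 3.0, 3.6, 3.8, 3.9])    # ln(N/N0)

      (a, mu, lam), _ = curve_fit(gompertz, t, y, p0=[4.0, 0.5, 2.0])
      print(f"asymptote = {a:.2f}, max rate = {mu:.2f}/h, lag = {lam:.1f} h")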

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ordoñez, Antonio J.; Sarajedini, Ata; Yang, Soung-Chul, E-mail: a.ordonez@ufl.edu, E-mail: ata@astro.ufl.edu, E-mail: sczoo@kasi.re.kr

    We present the first detailed study of the RR Lyrae variable population in the Local Group dSph/dIrr transition galaxy, Phoenix, using previously obtained HST/WFPC2 observations of the galaxy. We utilize template light curve fitting routines to obtain best-fit light curves for RR Lyrae variables in Phoenix. Our technique has identified 78 highly probable RR Lyrae stars (54 ab-type; 24 c-type) with about 40 additional candidates. We find mean periods for the two populations of (P_ab) = 0.60 ± 0.03 days and (P_c) = 0.353 ± 0.002 days. We use the properties of these light curves to extract, among other things, a metallicity distribution function for the ab-type RR Lyrae. Our analysis yields a mean metallicity of ([Fe/H]) = -1.68 ± 0.06 dex for the RRab stars. From the mean period and metallicity calculated from the ab-type RR Lyrae, we conclude that Phoenix is more likely of intermediate Oosterhoff type; however, the morphology of the Bailey diagram for Phoenix RR Lyrae stars appears similar to that of an Oosterhoff type I system. Using the RRab stars, we also study the chemical enrichment law for Phoenix. We find that our metallicity distribution is reasonably well fitted by a closed-box model. The parameters of this model are compatible with the findings of Hidalgo et al., further supporting the idea that Phoenix appears to have been chemically enriched as a closed-box-like system during the early stage of its formation and evolution.

  17. Deuterium and Oxygen Toward Feige 110: Results from the Far Ultraviolet Spectroscopic Explorer (FUSE) Mission

    NASA Technical Reports Server (NTRS)

    Friedman, S. D.; Howk, J. C.; Chayer, P.; Tripp, T. M.; Hebrard, G.; Andre, M.; Oliveira, C.; Jenkins, E. B.; Moos, H. W.; Oegerle, William R.

    2001-01-01

    We present measurements of the column densities of interstellar D I and O I made with the Far Ultraviolet Spectroscopic Explorer (FUSE), and of H I made with the International Ultraviolet Explorer (IUE), toward the sdOB star Feige 110 [(l,b) = (74.09 deg, -59.07 deg); d = 179 (+265/-67) pc; Z = -154 (+57/-227) pc]. Our determination of the D I column density made use of curve of growth fitting and profile fitting analyses, while our O I column density determination used only curve of growth techniques. The H I column density was estimated by fitting the damping wings of the interstellar Ly-alpha profile. We find log N(D I) = 15.47 +/- 0.06, log N(O I) = 16.73 +/- 0.10, and log N(H I) = 20.14 (+0.13/-0.20) (all errors 2 sigma). This implies D/H = (2.14 +/- 0.82) x 10^-5, D/O = (5.50 +1.64/-1.33) x 10^-2, and O/H = (3.89 +/- 1.67) x 10^-4. Taken with the FUSE results reported in companion papers and previous measurements of the local interstellar medium, this suggests the possibility of spatial variability in D/H for sight lines exceeding approx. 100 pc. This result may constrain models which characterize the mixing time and length scales of material in the local interstellar medium.

  18. Surface fitting three-dimensional bodies

    NASA Technical Reports Server (NTRS)

    Dejarnette, F. R.

    1974-01-01

    The geometry of general three-dimensional bodies is generated from coordinates of points in several cross sections. Since these points may not be smooth, they are divided into segments and general conic sections are curve fit in a least-squares sense to each segment of a cross section. The conic sections are then blended in the longitudinal direction by fitting parametric cubic-spline curves through coordinate points which define the conic sections in the cross-sectional planes. Both the cross-sectional and longitudinal curves may be modified by specifying particular segments as straight lines and slopes at selected points. Slopes may be continuous or discontinuous and finite or infinite. After a satisfactory surface fit has been obtained, cards may be punched with the data necessary to form a geometry subroutine package for use in other computer programs. At any position on the body, coordinates, slopes and second partial derivatives are calculated. The method is applied to a blunted 70 deg delta wing, and it was found to generate the geometry very well.

  19. Contribution to the benchmark for ternary mixtures: Transient analysis in microgravity conditions.

    PubMed

    Ahadi, Amirhossein; Ziad Saghir, M

    2015-04-01

    We present a transient experimental analysis of the DCMIX1 project conducted onboard the International Space Station for a ternary tetrahydronaphthalene, isobutylbenzene, n-dodecane mixture. Raw images taken in the microgravity environment using the SODI (Selectable Optical Diagnostic) apparatus, which is equipped with a two-wavelength diagnostic, were processed and the results were analyzed in this work. We measured the concentration profile of the mixture containing 80% THN, 10% IBB and 10% nC12 during the entire experiment using an advanced image processing technique, and accordingly we determined the Soret coefficients using an advanced curve-fitting and post-processing technique. The experiment was repeated five times to ensure its repeatability.

  20. A variable-gain output feedback control design approach

    NASA Technical Reports Server (NTRS)

    Haylo, Nesim

    1989-01-01

    A multi-model design technique to find a variable-gain control law defined over the whole operating range is proposed. The design is formulated as an optimal control problem which minimizes a cost function weighting the performance at many operating points. The solution is obtained by embedding the problem into the Multi-Configuration Control (MCC) problem, a multi-model robust control design technique. In contrast to conventional gain scheduling, which uses a curve fit of single-model designs, the optimal variable-gain control law stabilizes the plant at every operating point included in the design. An iterative algorithm to compute the optimal control gains is presented. The methodology has been successfully applied to reconfigurable aircraft flight control and to nonlinear flight control systems.

  1. Operant Conditioning in Honey Bees (Apis mellifera L.): The Cap Pushing Response.

    PubMed

    Abramson, Charles I; Dinges, Christopher W; Wells, Harrington

    2016-01-01

    The honey bee has been an important model organism for studying learning and memory. More recently, the honey bee has become a valuable model to understand perception and cognition. However, the techniques used to explore psychological phenomena in honey bees have been limited to only a few primary methodologies such as the proboscis extension reflex, sting extension reflex, and free-flying target discrimination tasks. Methods to explore operant conditioning in bees and other invertebrates are not as varied as with vertebrates. This may be due to the availability of a suitable response requirement. In this manuscript we offer a new method to explore operant conditioning in honey bees: the cap pushing response (CPR). We used the CPR to test for differences in learning curves between novel auto-shaping and more traditional explicit shaping. The CPR protocol requires bees to exhibit a novel behavior by pushing a cap to uncover a food source. Using the CPR protocol we tested the effects of both explicit-shaping and auto-shaping techniques on operant conditioning. The goodness of fit and lack of fit of these data to the Rescorla-Wagner learning-curve model, widely used in classical conditioning studies, were tested. The model fit well to both control and explicit-shaping results, but only for a limited number of trials. Learning ceased rather than continuing to approach asymptotically the most accurate performance physiologically possible. The rate of learning differed between shaped and control bee treatments. The learning rate was about 3 times faster for shaped bees, but for all measures of proficiency, control and shaped bees reached the same level. Auto-shaped bees showed one-trial learning rather than an asymptotic approach to maximal efficiency. However, in terms of return time, the auto-shaped bees' learning did not carry over to the covered-well test treatments.
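
    The Rescorla-Wagner model referenced here implies, for fixed parameters, the closed-form acquisition curve V_n = lambda * (1 - (1 - alpha*beta)^n), which can be fitted directly to trial-by-trial performance. A sketch with invented data, treating the combined rate alpha*beta as one parameter:

      import numpy as np
      from scipy.optimize import curve_fit

      def rescorla_wagner(n, lam, rate):
          # Closed-form Rescorla-Wagner acquisition curve:
          # V_n = lambda * (1 - (1 - rate)**n), with 0 < rate < 1.
          return lam * (1.0 - (1.0 - rate) ** n)

      trials = np.arange(1, 11)
      perf = np.array([0.15, 0.32, 0.41, 0.52, 0.58, 0.66, 0.68, 0.72, 0.74, 0.75])

      (lam, rate), _ = curve_fit(rescorla_wagner, trials, perf,
                                 p0=[0.8, 0.2], bounds=([0, 0], [1, 1]))
      print(f"asymptote = {lam:.2f}, learning rate = {rate:.2f}")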

  2. Operant Conditioning in Honey Bees (Apis mellifera L.): The Cap Pushing Response

    PubMed Central

    Abramson, Charles I.; Dinges, Christopher W.; Wells, Harrington

    2016-01-01

    The honey bee has been an important model organism for studying learning and memory. More recently, the honey bee has become a valuable model to understand perception and cognition. However, the techniques used to explore psychological phenomena in honey bees have been limited to only a few primary methodologies such as the proboscis extension reflex, sting extension reflex, and free-flying target discrimination tasks. Methods to explore operant conditioning in bees and other invertebrates are not as varied as with vertebrates. This may be due to the availability of a suitable response requirement. In this manuscript we offer a new method to explore operant conditioning in honey bees: the cap pushing response (CPR). We used the CPR to test for differences in learning curves between novel auto-shaping and more traditional explicit shaping. The CPR protocol requires bees to exhibit a novel behavior by pushing a cap to uncover a food source. Using the CPR protocol we tested the effects of both explicit-shaping and auto-shaping techniques on operant conditioning. The goodness of fit and lack of fit of these data to the Rescorla-Wagner learning-curve model, widely used in classical conditioning studies, were tested. The model fit well to both control and explicit-shaping results, but only for a limited number of trials. Learning ceased rather than continuing to approach asymptotically the most accurate performance physiologically possible. The rate of learning differed between shaped and control bee treatments. The learning rate was about 3 times faster for shaped bees, but for all measures of proficiency, control and shaped bees reached the same level. Auto-shaped bees showed one-trial learning rather than an asymptotic approach to maximal efficiency. However, in terms of return time, the auto-shaped bees' learning did not carry over to the covered-well test treatments. PMID:27626797

  3. Fitting Nonlinear Curves by use of Optimization Techniques

    NASA Technical Reports Server (NTRS)

    Hill, Scott A.

    2005-01-01

    MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. By use of the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
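
    The core loop is generic: choose model parameters that minimize the residual sum of squares using a BFGS-type variable-metric method. A Python sketch of the same idea (not the FORTRAN code; model and data invented):

      import numpy as np
      from scipy.optimize import minimize

      # Illustrative multivariable model: y = p0 * exp(p1 * x1) + p2 * x2.
      rng = np.random.default_rng(1)
      x1, x2 = rng.uniform(0, 1, 50), rng.uniform(0, 1, 50)
      y = 2.0 * np.exp(0.5 * x1) + 3.0 * x2 + 0.01 * rng.standard_normal(50)

      def sum_of_squares(p):
          resid = y - (p[0] * np.exp(p[1] * x1) + p[2] * x2)
          return np.sum(resid ** 2)

      # BFGS variable-metric minimization of the residual sum of squares.
      result = minimize(sum_of_squares, x0=[1.0, 0.0, 1.0], method="BFGS")
      print(result.x)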

  4. Experimentally testing the dependence of momentum transport on second derivatives using Gaussian process regression

    NASA Astrophysics Data System (ADS)

    Chilenski, M. A.; Greenwald, M. J.; Hubbard, A. E.; Hughes, J. W.; Lee, J. P.; Marzouk, Y. M.; Rice, J. E.; White, A. E.

    2017-12-01

    It remains an open question to explain the dramatic change in intrinsic rotation induced by slight changes in electron density (White et al 2013 Phys. Plasmas 20 056106). One proposed explanation is that momentum transport is sensitive to the second derivatives of the temperature and density profiles (Lee et al 2015 Plasma Phys. Control. Fusion 57 125006), but it is widely considered to be impossible to measure these higher derivatives. In this paper, we show that it is possible to estimate second derivatives of electron density and temperature using a nonparametric regression technique known as Gaussian process regression. This technique avoids over-constraining the fit by not assuming an explicit functional form for the fitted curve. The uncertainties, obtained rigorously using Markov chain Monte Carlo sampling, are small enough that it is reasonable to explore hypotheses which depend on second derivatives. It is found that the differences in the second derivatives of n_e and T_e between the peaked and hollow rotation cases are rather small, suggesting that changes in the second derivatives are not likely to explain the experimental results.
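
    With a squared-exponential (RBF) covariance, the Gaussian process posterior mean is a weighted sum of kernels, so its second derivative is available analytically by differentiating the cross-covariance twice. A minimal sketch with fixed, assumed hyperparameters (the paper instead samples them with MCMC):

      import numpy as np

      def rbf(a, b, sig, ell):
          return sig**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

      # Illustrative profile data (e.g. T_e versus normalized radius).
      x = np.linspace(0, 1, 25)
      y = np.exp(-4 * x**2) + 0.01 * np.random.randn(x.size)

      sig, ell, noise = 1.0, 0.2, 0.01        # assumed hyperparameters
      K = rbf(x, x, sig, ell) + noise**2 * np.eye(x.size)
      alpha = np.linalg.solve(K, y)

      xs = np.linspace(0, 1, 200)
      d = xs[:, None] - x[None, :]
      k = rbf(xs, x, sig, ell)
      # Second derivative of the RBF kernel w.r.t. the prediction point:
      d2k = k * (d**2 / ell**4 - 1.0 / ell**2)
      mean = k @ alpha           # posterior mean of the profile
      mean_dd = d2k @ alpha      # posterior mean of its second derivative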

  5. Transforaminal Lumbar Interbody Fusion with Rigid Interspinous Process Fixation: A Learning Curve Analysis of a Surgeon Team's First 74 Cases.

    PubMed

    Doherty, Patrick; Welch, Arthur; Tharpe, Jason; Moore, Camille; Ferry, Chris

    2017-05-30

    Studies have shown that a significant learning curve may be associated with adopting minimally invasive transforaminal lumbar interbody fusion (MIS TLIF) with bilateral pedicle screw fixation (BPSF). Accordingly, several hybrid TLIF techniques have been proposed as surrogates to the accepted BPSF technique, asserting that fewer fixations or less disruptive fixation may decrease the learning curve while still maintaining the minimally disruptive benefits. TLIF with interspinous process fixation (ISPF) is one such surrogate procedure. However, despite perceived ease of adaptability given the favorable proximity of the spinous processes, no evidence exists demonstrating whether or not the technique may possess its own inherent learning curve. The purpose of this study was to determine whether an intraoperative learning curve for one- and two-level TLIF + ISPF may exist for a single lead surgeon. Seventy-four consecutive patients who received one- or two-level TLIF with rigid ISPF by a single lead surgeon were retrospectively reviewed. It was the first TLIF + ISPF case series for the lead surgeon. Intraoperative blood loss (EBL), hospitalization length-of-stay (LOS), fluoroscopy time, and postoperative complications were collected. EBL, LOS, and fluoroscopy time were modeled as a function of case number using multiple linear regression methods. A change point was included in each model to allow the trajectory of the outcomes to change during the duration of the case series. These change points were determined using profile likelihood methods. Models were fit using the maximum likelihood estimates for the change points. Age, sex, body mass index (BMI), and the number of treated levels were included as covariates. EBL, LOS, and fluoroscopy time did not significantly differ by age, sex, or BMI (p ≥ 0.12). Only EBL differed significantly by the number of levels (p = 0.026). The case number was not a significant predictor of EBL, LOS, or fluoroscopy time (p ≥ 0.21). At the time of data collection (mean time from surgery: 13.3 months), six patients had undergone revision due to interbody migration. No ISPF device complications were observed. Study outcomes support the idea that TLIF + ISPF can be a readily adopted procedure without a significant intraoperative learning curve. However, the authors emphasize that further assessment of long-term healing outcomes is essential in fully characterizing both the efficacy and the indication learning curve for the TLIF + ISPF technique.

  6. Modeling Latent Growth Curves With Incomplete Data Using Different Types of Structural Equation Modeling and Multilevel Software

    ERIC Educational Resources Information Center

    Ferrer, Emilio; Hamagami, Fumiaki; McArdle, John J.

    2004-01-01

    This article offers different examples of how to fit latent growth curve (LGC) models to longitudinal data using a variety of different software programs (i.e., LISREL, Mx, Mplus, AMOS, SAS). The article shows how the same model can be fitted using both structural equation modeling and multilevel software, with nearly identical results, even in…

  7. A Monte Carlo Study of the Effect of Item Characteristic Curve Estimation on the Accuracy of Three Person-Fit Statistics

    ERIC Educational Resources Information Center

    St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane

    2009-01-01

    To date, there have been no studies comparing parametric and nonparametric Item Characteristic Curve (ICC) estimation methods on the effectiveness of Person-Fit Statistics (PFS). The primary aim of this study was to determine if the use of ICCs estimated by nonparametric methods would increase the accuracy of item response theory-based PFS for…

  8. Improvements in Spectrum's fit to program data tool.

    PubMed

    Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John

    2017-04-01

    The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived by either fitting to seroprevalence surveillance and survey data or generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections or people living with HIV and AIDS-related deaths. This article describes the development and application of the fit to program data (FPD) tool in the Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV or AIDS-related deaths. Inputs can be adjusted for proportions undiagnosed or misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best-fitting curve. Asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries, where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.

  9. THINGS about MOND

    NASA Astrophysics Data System (ADS)

    Gentile, G.; Famaey, B.; de Blok, W. J. G.

    2011-03-01

    We present an analysis of 12 high-resolution galactic rotation curves from The HI Nearby Galaxy Survey (THINGS) in the context of modified Newtonian dynamics (MOND). These rotation curves were selected to be the most reliable for mass modelling, and they are the highest quality rotation curves currently available for a sample of galaxies spanning a wide range of luminosities. We fit the rotation curves with the "simple" and "standard" interpolating functions of MOND, and we find that the "simple" function yields better results. We also redetermine the value of a0, and find a median value very close to the one determined in previous studies, a0 = (1.22 ± 0.33) × 10^-8 cm s^-2. Leaving the distance as a free parameter within the uncertainty of its best independently determined value leads to excellent quality fits for 75% of the sample. Among the three exceptions, two are also known to give relatively poor fits in Newtonian dynamics plus dark matter. The remaining case (NGC 3198) presents some tension between the observations and the MOND fit, which might, however, be explained by the presence of non-circular motions, by a small distance, or by a value of a0 at the lower end of our best-fit interval, 0.9 × 10^-8 cm s^-2. The best-fit stellar M/L ratios are generally in remarkable agreement with the predictions of stellar population synthesis models. We also show that the narrow range of gravitational accelerations found to be generated by dark matter in galaxies is consistent with the narrow range of additional gravity predicted by MOND.

  10. LIF and emission studies of copper and nitrogen

    NASA Technical Reports Server (NTRS)

    Akundi, Murty A.

    1990-01-01

    A technique is developed to determine the rotational temperature of the nitrogen molecular ion, N2(+), from the emission spectra of the B-X transition, when the P and R branches are not resolved. Its validity is tested on simulated spectra of the 0-1 band of N2(+) produced under low resolution. The method is applied to experimental spectra of N2(+) taken in the shock layer of a blunt body at distances of 1.91, 2.54, and 3.18 cm from the body. The laser-induced fluorescence (LIF) spectra of copper atoms are analyzed to obtain the free stream velocities and temperatures. The only broadening mechanism considered is Doppler broadening. The temperatures are obtained by manual curve fitting, and the results are compared with least-squares fits. The agreement on average is within 10 percent.

  11. Acoustic mode measurements in the inlet of a model turbofan using a continuously rotating rake: Data collection/analysis techniques

    NASA Technical Reports Server (NTRS)

    Hall, David G.; Heidelberg, Laurence; Konno, Kevin

    1993-01-01

    The rotating microphone measurement technique and data analysis procedures are documented which are used to determine circumferential and radial acoustic mode content in the inlet of the Advanced Ducted Propeller (ADP) model. Circumferential acoustic mode levels were measured at a series of radial locations using the Doppler frequency shift produced by a rotating inlet microphone probe. Radial mode content was then computed using a least squares curve fit with the measured radial distribution for each circumferential mode. The rotating microphone technique is superior to fixed-probe techniques because it results in minimal interference with the acoustic modes generated by rotor-stator interaction. This effort represents the first experimental implementation of a measuring technique developed by T. G. Sofrin. Testing was performed in the NASA Lewis Low Speed Anechoic Wind Tunnel at a simulated takeoff condition of Mach 0.2. The design is included of the data analysis software and the performance of the rotating rake apparatus. The effect of experiment errors is also discussed.

  12. Acoustic mode measurements in the inlet of a model turbofan using a continuously rotating rake - Data collection/analysis techniques

    NASA Technical Reports Server (NTRS)

    Hall, David G.; Heidelberg, Laurence; Konno, Kevin

    1993-01-01

    The rotating microphone measurement technique and data analysis procedures are documented which are used to determine circumferential and radial acoustic mode content in the inlet of the Advanced Ducted Propeller (ADP) model. Circumferential acoustic mode levels were measured at a series of radial locations using the Doppler frequency shift produced by a rotating inlet microphone probe. Radial mode content was then computed using a least squares curve fit with the measured radial distribution for each circumferential mode. The rotating microphone technique is superior to fixed-probe techniques because it results in minimal interference with the acoustic modes generated by rotor-stator interaction. This effort represents the first experimental implementation of a measuring technique developed by T. G. Sofrin. Testing was performed in the NASA Lewis Low Speed Anechoic Wind Tunnel at a simulated takeoff condition of Mach 0.2. The design is included of the data analysis software and the performance of the rotating rake apparatus. The effect of experiment errors is also discussed.

  13. Impact of nonrigid motion correction technique on pixel-wise pharmacokinetic analysis of free-breathing pulmonary dynamic contrast-enhanced MR imaging.

    PubMed

    Tokuda, Junichi; Mamata, Hatsuho; Gill, Ritu R; Hata, Nobuhiko; Kikinis, Ron; Padera, Robert F; Lenkinski, Robert E; Sugarbaker, David J; Hatabu, Hiroto

    2011-04-01

    To investigate the impact of nonrigid motion correction on pixel-wise pharmacokinetic analysis of free-breathing DCE-MRI in patients with solitary pulmonary nodules (SPNs). Misalignment of focal lesions due to respiratory motion in free-breathing dynamic contrast-enhanced MRI (DCE-MRI) precludes obtaining reliable time-intensity curves, which are crucial for pharmacokinetic analysis for tissue characterization. Single-slice 2D DCE-MRI was obtained in 15 patients. Misalignments of SPNs were corrected using nonrigid B-spline image registration. Pixel-wise pharmacokinetic parameters K(trans), v(e), and k(ep) were estimated from both the original and motion-corrected DCE-MRI by fitting the two-compartment pharmacokinetic model to the time-intensity curve obtained in each pixel. The "goodness of fit" was tested with a χ(2)-test on a pixel-by-pixel basis to evaluate the reliability of the parameters. The percentages of reliable pixels within the SPNs were compared between the original and motion-corrected DCE-MRI. In addition, the parameters obtained from benign and malignant SPNs were compared. The percentage of reliable pixels in the motion-corrected DCE-MRI was significantly larger than in the original DCE-MRI (P = 4 × 10^-7). Both K(trans) and k(ep) derived from the motion-corrected DCE-MRI showed significant differences between benign and malignant SPNs (P = 0.024, 0.015). The study demonstrated the impact of a nonrigid motion correction technique on pixel-wise pharmacokinetic analysis of free-breathing DCE-MRI in SPNs. Copyright © 2011 Wiley-Liss, Inc.
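
    The two-compartment model used in such analyses is commonly of the standard Tofts form, Ct(t) = Ktrans * conv(Cp, exp(-kep*t)), with ve = Ktrans/kep; fitting it per pixel is a small nonlinear least-squares problem. A sketch with an assumed analytic arterial input function (all numbers illustrative):

      import numpy as np
      from scipy.optimize import curve_fit

      t = np.linspace(0, 300, 151)                 # s, illustrative sampling
      dt = t[1] - t[0]
      cp = 5.0 * (t / 60.0) * np.exp(-t / 60.0)    # assumed AIF (mM), illustrative

      def tofts(t, ktrans, kep):
          # Standard Tofts model: Ct(t) = Ktrans * (Cp convolved with
          # exp(-kep*t)), evaluated by discrete convolution; ve = Ktrans/kep.
          kernel = np.exp(-kep * t)
          return ktrans * np.convolve(cp, kernel)[: t.size] * dt

      ct_obs = tofts(t, 0.12 / 60, 0.5 / 60) + 0.002 * np.random.randn(t.size)
      (ktrans, kep), _ = curve_fit(tofts, t, ct_obs, p0=[0.1 / 60, 0.4 / 60])
      print("Ktrans (1/s) =", ktrans, " ve =", ktrans / kep)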

  14. American Thyroid Association Statement on Remote-Access Thyroid Surgery

    PubMed Central

    Bernet, Victor; Fahey, Thomas J.; Kebebew, Electron; Shaha, Ashok; Stack, Brendan C.; Stang, Michael; Steward, David L.; Terris, David J.

    2016-01-01

    Background: Remote-access techniques have been described over the recent years as a method of removing the thyroid gland without an incision in the neck. However, there is confusion related to the number of techniques available and the ideal patient selection criteria for a given technique. The aims of this review were to develop a simple classification of these approaches, describe the optimal patient selection criteria, evaluate the outcomes objectively, and define the barriers to adoption. Methods: A review of the literature was performed to identify the described techniques. A simple classification was developed. Technical details, outcomes, and the learning curve were described. Expert opinion consensus was formulated regarding recommendations for patient selection and performance of remote-access thyroid surgery. Results: Remote-access thyroid procedures can be categorized into endoscopic or robotic breast, bilateral axillo-breast, axillary, and facelift approaches. The experience in the United States involves the latter two techniques. The limited data in the literature suggest long operative times, a steep learning curve, and higher costs with remote-access thyroid surgery compared with conventional thyroidectomy. Nevertheless, a consensus was reached that, in appropriate hands, it can be a viable option for patients with unilateral small nodules who wish to avoid a neck incision. Conclusions: Remote-access thyroidectomy has a role in a small group of patients who fit strict selection criteria. These approaches require an additional level of expertise, and therefore should be done by surgeons performing a high volume of thyroid and robotic surgery. PMID:26858014

  15. American Thyroid Association Statement on Remote-Access Thyroid Surgery.

    PubMed

    Berber, Eren; Bernet, Victor; Fahey, Thomas J; Kebebew, Electron; Shaha, Ashok; Stack, Brendan C; Stang, Michael; Steward, David L; Terris, David J

    2016-03-01

    Remote-access techniques have been described over the recent years as a method of removing the thyroid gland without an incision in the neck. However, there is confusion related to the number of techniques available and the ideal patient selection criteria for a given technique. The aims of this review were to develop a simple classification of these approaches, describe the optimal patient selection criteria, evaluate the outcomes objectively, and define the barriers to adoption. A review of the literature was performed to identify the described techniques. A simple classification was developed. Technical details, outcomes, and the learning curve were described. Expert opinion consensus was formulated regarding recommendations for patient selection and performance of remote-access thyroid surgery. Remote-access thyroid procedures can be categorized into endoscopic or robotic breast, bilateral axillo-breast, axillary, and facelift approaches. The experience in the United States involves the latter two techniques. The limited data in the literature suggest long operative times, a steep learning curve, and higher costs with remote-access thyroid surgery compared with conventional thyroidectomy. Nevertheless, a consensus was reached that, in appropriate hands, it can be a viable option for patients with unilateral small nodules who wish to avoid a neck incision. Remote-access thyroidectomy has a role in a small group of patients who fit strict selection criteria. These approaches require an additional level of expertise, and therefore should be done by surgeons performing a high volume of thyroid and robotic surgery.

  16. Hierarchical Bayesian analysis to incorporate age uncertainty in growth curve analysis and estimates of age from length: Florida manatee (Trichechus manatus) carcasses

    USGS Publications Warehouse

    Schwarz, L.K.; Runge, M.C.

    2009-01-01

    Age estimation of individuals is often an integral part of species management research, and a number of age-estimation techniques are commonly employed. Often, the error in these techniques is not quantified or accounted for in other analyses, particularly in growth curve models used to describe physiological responses to environment and human impacts. Also, noninvasive, quick, and inexpensive methods to estimate age are needed. This research aims to provide two Bayesian methods to (i) incorporate age uncertainty into an age-length Schnute growth model and (ii) produce a method from the growth model to estimate age from length. The methods are then employed for Florida manatee (Trichechus manatus) carcasses. After quantifying the uncertainty in the aging technique (counts of ear bone growth layers), we fit age-length data to the Schnute growth model separately by sex and season. Independent prior information about population age structure and the results of the Schnute model are then combined to estimate age from length. Results describing the age-length relationship agree with our understanding of manatee biology. The new methods allow us to estimate age, with quantified uncertainty, for 98% of collected carcasses: 36% from ear bones, 62% from length.

  17. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.

  18. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380

  19. On the analytical determination of relaxation modulus of viscoelastic materials by Prony's interpolation method

    NASA Technical Reports Server (NTRS)

    Rodriguez, Pedro I.

    1986-01-01

    A computer implementation of Prony's method of curve fitting by exponential functions is presented. The method, although more than one hundred years old, has not been utilized to its fullest capabilities due to the restriction that the time range must be given in equal increments in order to obtain the best curve fit for a given set of data. The procedure used in this paper utilizes the 3-dimensional capabilities of the Interactive Graphics Design System (I.G.D.S.) in order to obtain the equal time increments. The resultant information is then input into a computer program that solves directly for the exponential constants yielding the best curve fit. Once the exponential constants are known, a simple least-squares solution can be applied to obtain the final form of the equation.
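
    Prony's method turns the exponential fit y(t) ~ sum_k a_k * exp(b_k * t) on equally spaced samples into two linear problems plus one root-finding step: a linear-prediction solve for the characteristic polynomial, its roots for the exponents, then least squares for the amplitudes. A NumPy sketch with invented relaxation-modulus-like data:

      import numpy as np

      def prony(y, dt, p):
          # Classic Prony fit of y(t) ~ sum_k a_k * exp(b_k * t) to N equally
          # spaced samples (spacing dt), using p exponential terms.
          N = len(y)
          # 1) Linear prediction: each sample is a fixed combination of the
          #    p preceding ones; solve the resulting overdetermined system.
          A = np.column_stack([y[i : N - p + i] for i in range(p)])
          c = np.linalg.lstsq(A, -y[p:N], rcond=None)[0]
          # 2) Roots of z^p + c[p-1] z^(p-1) + ... + c[0] give the exponentials.
          roots = np.roots(np.concatenate(([1.0], c[::-1])))
          b = np.log(roots.astype(complex)) / dt
          # 3) Amplitudes by linear least squares on the exponential basis.
          t = dt * np.arange(N)
          V = np.exp(np.outer(t, b))
          a = np.linalg.lstsq(V, y.astype(complex), rcond=None)[0]
          return a, b

      t = 0.05 * np.arange(80)
      y = 3.0 * np.exp(-1.2 * t) + 1.5 * np.exp(-0.3 * t)   # illustrative data
      a, b = prony(y, 0.05, 2)
      print(np.real(a), np.real(b))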

  20. Determining the hydraulic and fracture properties of the Coal Seam Gas well by numerical modelling and GLUE analysis

    NASA Astrophysics Data System (ADS)

    Askarimarnani, Sara; Willgoose, Garry; Fityus, Stephen

    2017-04-01

    Coal seam gas (CSG) is a form of natural gas that occurs in some coal seams. Coal seams have natural fractures with dual-porosity systems and low permeability. In the CSG industry, hydraulic fracturing is applied to increase the permeability and extract the gas more efficiently from the coal seam. The industry claims that it can design fracking patterns. Whether this is true or not, the public (and regulators) requires assurance that, once a well has been fracked, the fracking has occurred according to plan and that the fracked well is safe. Thus defensible post-fracking testing methodologies for gas-generating wells are required. In 2009 a fracked well HB02, owned by AGL, near Broke, NSW, Australia was subjected to "traditional" water pump-testing as part of this assurance process. Interpretation with well type curves and a simple single-phase (i.e. only water, no gas) model highlighted deficiencies in traditional water well approaches, with a systematic deviation from the qualitative characteristics of well drawdown curves (e.g. concavity versus convexity of drawdown with time). Accordingly, a multiphase (i.e. water and methane) model of the well was developed and compared with the observed data. This paper will discuss the results of this multiphase testing using the TOUGH2 model and its EOS7C constitutive model. A key objective was to test a methodology, based on the GLUE Monte Carlo calibration technique, to calibrate the characteristics of the frack using the well test drawdown curve. GLUE involves a sensitivity analysis of how changes in the fracture properties change the well hydraulics, through an analysis of the drawdown curve and changes in the cone of depression. This was undertaken by changing the native coal, fracture, and gas parameters to see how changing those parameters changed the match between simulations and the observed well drawdown. Results from the GLUE analysis show how much information is contained in the well drawdown curve for estimating field-scale coal and gas generation properties, the fracture geometry, and the proppant characteristics. The results with the multiphase model show a better match to the drawdown than using a single-phase model, but the differences between the best-fit drawdowns were small, and smaller than the difference between the best fit and field data. However, the parameters derived to generate these best fits for each model were very different. We conclude that while satisfactory fits with single-phase groundwater models (e.g. MODFLOW, FEFLOW) can be achieved, the parameters derived will not be realistic, with potential implications for drawdowns and water yields for gas field modelling. Multiphase models are thus required, and we will discuss some of the limitations of TOUGH2 for the CSG problem.
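
    GLUE itself is simple to sketch: sample many parameter sets, score each simulated drawdown against the observations with an informal likelihood, keep the "behavioral" sets above a cutoff, and use their spread as the uncertainty. The model function below is a stand-in, not TOUGH2:

      import numpy as np

      def model(params, t):
          # Stand-in for the well-drawdown simulation: any function mapping
          # a parameter set to a simulated drawdown curve.
          k, s = params
          return s * np.log1p(t / k)

      t_obs = np.linspace(1, 100, 30)
      d_obs = model((12.0, 2.0), t_obs) + 0.05 * np.random.randn(30)

      rng = np.random.default_rng(0)
      n = 5000
      samples = np.column_stack([rng.uniform(1, 50, n),     # permeability-like k
                                 rng.uniform(0.5, 5, n)])   # storage-like s

      # Informal likelihood (inverse error variance) for each sample.
      sse = np.array([np.sum((model(p, t_obs) - d_obs) ** 2) for p in samples])
      like = 1.0 / sse

      # Keep the "behavioral" sets, e.g. the best 5%, weighted by likelihood.
      keep = like >= np.quantile(like, 0.95)
      weights = like[keep] / like[keep].sum()
      print("behavioral k range:", samples[keep, 0].min(), samples[keep, 0].max())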

  1. The use of kernel density estimators in breakthrough curve reconstruction and advantages in risk analysis

    NASA Astrophysics Data System (ADS)

    Siirila, E. R.; Fernandez-Garcia, D.; Sanchez-Vila, X.

    2014-12-01

    Particle tracking (PT) techniques, often considered favorable over Eulerian techniques due to artificial smoothing in breakthrough curves (BTCs), are evaluated in a risk-driven framework. Recent work has shown that, given a relatively small number of particles (np), PT methods can yield well-constructed BTCs with kernel density estimators (KDEs). This work compares KDE and non-KDE BTCs simulated as a function of np (10^2-10^8) and averaged as a function of the exposure duration, ED. Results show that regardless of BTC shape complexity, un-averaged PT BTCs show a large bias over several orders of magnitude in concentration (C) when compared to the KDE results, remarkably even when np is as low as 10^2. With the KDE, several orders of magnitude fewer np are required to obtain the same global error in BTC shape as the PT technique. PT and KDE BTCs are averaged as a function of the ED with standard and new methods incorporating the optimal kernel bandwidth h (ANA). The lowest-error curve is obtained through the ANA method, especially for smaller EDs. The percent error of the peak of averaged BTCs, important in a risk framework, is approximately zero for all scenarios and all methods for np ≥ 10^5, but varies between the ANA and PT methods when np is lower. For fewer np, the ANA solution provides a lower-error fit except when C oscillations are present during a short time frame. We show that obtaining a representative average exposure concentration is reliant on an accurate representation of the BTC, especially when data are scarce.
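
    The core idea, reconstructing a smooth BTC from sparse particle arrivals with a kernel density estimator, fits in a few lines (synthetic arrival times; the paper's optimal-bandwidth ANA scheme is not reproduced, SciPy's default Scott rule is used instead):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Arrival times of np particles at a control plane (invented data);
# the KDE turns the sparse arrival histogram into a smooth BTC.
arrivals = np.random.default_rng(1).lognormal(mean=1.0, sigma=0.5, size=100)
kde = gaussian_kde(arrivals)             # Gaussian kernel, Scott bandwidth
t = np.linspace(0.0, 15.0, 500)
btc = kde(t)                             # relative concentration vs. time
```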

  2. Development of a program to fit data to a new logistic model for microbial growth.

    PubMed

    Fujikawa, Hiroshi; Kano, Yoshihiro

    2009-06-01

    Recently we developed a mathematical model for microbial growth in food. The model successfully predicted microbial growth under various temperature patterns. In this study, we developed a program to fit data to the model with a spreadsheet program, Microsoft Excel. Users can instantly obtain curves fitted to the model by inputting growth data and choosing the slope portion of a curve. The program can also estimate growth parameters, including the rate constant of growth and the lag period. This program should be a useful tool for analyzing growth data and further predicting microbial growth.
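
    The authors' extended model is not reproduced in the abstract; as an illustration of the same workflow outside Excel, a standard logistic curve can be fitted to growth data in a few lines (data and starting values below are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, n0, nmax, r):
    """Standard logistic growth (a stand-in for the authors' extended model);
    r is the rate constant of growth."""
    return nmax / (1.0 + (nmax / n0 - 1.0) * np.exp(-r * t))

t = np.linspace(0.0, 12.0, 13)                    # time (h)
logN = logistic(t, 3.0, 7.5, 0.7) \
       + np.random.default_rng(5).normal(0.0, 0.05, t.size)
popt, _ = curve_fit(logistic, t, logN, p0=[2.5, 7.0, 0.5])
n0, nmax, r = popt
```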

  3. Comparing rainfall patterns between regions in Peninsular Malaysia via a functional data analysis technique

    NASA Astrophysics Data System (ADS)

    Suhaila, Jamaludin; Jemain, Abdul Aziz; Hamdan, Muhammad Fauzee; Wan Zin, Wan Zawiah

    2011-12-01

    Normally, rainfall data are collected on a daily, monthly or annual basis in the form of discrete observations. The aim of this study is to convert these rainfall values into a smooth curve or function which could be used to represent the continuous rainfall process at each region via a technique known as functional data analysis. Since rainfall data show a periodic pattern in each region, the Fourier basis is introduced to capture these variations. Eleven basis functions with five harmonics are used to describe the unimodal rainfall pattern for stations in the East, while five basis functions representing two harmonics are needed to describe the rainfall pattern in the West. Based on the fitted smooth curve, the wet and dry periods as well as the maximum and minimum rainfall values could be determined. Different rainfall patterns are observed among the studied regions based on the smooth curve. Using functional analysis of variance, the test results indicated that there exist significant differences in the functional means between each region. The largest differences in the functional means are found between the East and Northwest regions; these differences are probably due to the effects of topography and geographical location, and are mostly influenced by the monsoons. Therefore, the same inputs or approaches might not be useful in modeling the hydrological process for different regions.
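
    A Fourier basis smoother of the kind described reduces to ordinary least squares on a sine/cosine design matrix. A sketch with synthetic daily rainfall (five basis functions, i.e. two harmonics, as used for the West):

```python
import numpy as np

def fourier_basis(t, n_basis, period=365.0):
    """Fourier design matrix: 1, sin(k*w*t), cos(k*w*t) for k = 1..(n_basis-1)/2."""
    w = 2.0 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, (n_basis - 1) // 2 + 1):
        cols += [np.sin(k * w * t), np.cos(k * w * t)]
    return np.column_stack(cols)

t = np.arange(365.0)
rain = np.maximum(0.0, 5.0 + 4.0 * np.sin(2 * np.pi * t / 365.0)
                  + np.random.default_rng(2).normal(0.0, 1.5, t.size))
B = fourier_basis(t, n_basis=5)                   # two harmonics
coef = np.linalg.lstsq(B, rain, rcond=None)[0]
smooth = B @ coef                                 # the functional observation
```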

  4. Conduction and rectification in NbO x - and NiO-based metal-insulator-metal diodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osgood, Richard M.; Giardini, Stephen; Carlson, Joel

    2016-09-01

    Conduction and rectification in nanoantenna-coupled NbOx- and NiO-based metal-insulator-metal (MIM) diodes ('nanorectennas') are studied by comparing new theoretical predictions with the measured response of nanorectenna arrays. A new quantum mechanical model is reported and agrees with measurements of current-voltage (I-V) curves, over 10 orders of magnitude in current density, from [NbOx(native)-Nb2O5]- and NiO-based samples with oxide thicknesses in the range of 5-36 nm. The model, which introduces new physics and features, including temperature, electron effective mass, and image potential effects using the pseudobarrier technique, improves upon widely used earlier models, calculates the MIM diode's I-V curve, and quantitatively predicts the rectification responsivity of high-frequency voltages generated in a coupled nanoantenna array by visible/near-infrared light. The model applies both at higher frequencies, when high-energy photons are incident, and at lower frequencies, when the formula for classical rectification, involving derivatives of the I-V curve, may be used. The rectified low-frequency direct current is well predicted by this work's model, but not by fitting the experimentally measured I-V curve with a polynomial or by using the older Simmons model (as shown herein). By fitting the measured I-V curves with our model, the barrier heights in Nb-(NbOx(native)-Nb2O5)-Pt and Ni-NiO-Ti/Ag diodes are found to be 0.41/0.77 and 0.38/0.39 eV, respectively, similar to literature reports, but with effective mass much lower than the free-space value. The NbOx(native)-Nb2O5 dielectric properties improve, and the effective Pt-Nb2O5 barrier height increases, as the oxide thickness increases. An observation of a direct current of ~4 nA for normally incident, focused 514 nm continuous-wave laser beams is reported, similar in magnitude to recent reports. This measured direct current is compared to the prediction for rectified direct current, given by the rectification responsivity calculated from the I-V curve times the input power.
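
    The classical rectification formula mentioned above has a compact numerical form: the responsivity at each bias is half the ratio of the second to the first derivative of the I-V curve. A sketch with a toy asymmetric diode characteristic (invented values, not the paper's devices):

```python
import numpy as np

# Classical rectification responsivity beta = 0.5 * I''(V) / I'(V),
# estimated numerically from an I-V curve (synthetic data here).
V = np.linspace(-0.3, 0.3, 241)
I = 1e-9 * (np.exp(V / 0.05) - np.exp(-V / 0.07))   # toy asymmetric diode
dI = np.gradient(I, V)
d2I = np.gradient(dI, V)
beta = 0.5 * d2I / dI          # responsivity (1/V) at each bias point
```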

  5. A Multi-year Multi-passband CCD Photometric Study of the W UMa Binary EQ Tauri

    NASA Astrophysics Data System (ADS)

    Alton, K. B.

    2009-12-01

    A revised ephemeris and updated orbital period for EQ Tau have been determined from newly acquired (2007-2009) CCD-derived photometric data. A Roche-type model based on the Wilson-Devinney code produced simultaneous theoretical fits of light curve data in three passbands by invoking cold spots on the primary component. These new model fits, along with similar light curve data for EQ Tau collected during the previous six seasons (2000-2006), provided a rare opportunity to follow the seasonal appearance of star spots on a W UMa binary system over nine consecutive years. Fixed values for q, Ω1,2, T1, T2, and i, based upon the mean of eleven separately determined model fits produced for this system, are hereafter proposed for future light curve modeling of EQ Tau. With the exception of the 2001 season, all other light curves produced since then required a spotted solution to address the flux asymmetry exhibited by this binary system at Max I and Max II. At least one cold spot on the primary appears in seven out of twelve light curves for EQ Tau produced over the last nine years, whereas in six instances two cold spots on the primary star were invoked to improve the model fit. Solutions using a hot spot were less common and involved positioning a single spot on the primary constituent during the 2001-2002, 2002-2003, and 2005-2006 seasons.

  6. Method and apparatus for air-coupled transducer

    NASA Technical Reports Server (NTRS)

    Song, Junho (Inventor); Chimenti, Dale E. (Inventor)

    2010-01-01

    An air-coupled transducer includes an ultrasonic transducer body having a radiation end with a backing fixture at the radiation end. There is a flexible backplate conformingly fit to the backing fixture and a thin membrane (preferably a metallized polymer) conformingly fit to the flexible backplate. In one embodiment, the backing fixture is spherically curved and the flexible backplate is spherically curved. The flexible backplate is preferably patterned with pits or depressions.

  7. Fitting integrated enzyme rate equations to progress curves with the use of a weighting matrix.

    PubMed Central

    Franco, R; Aran, J M; Canela, E I

    1991-01-01

    A method is presented for fitting the (product formed, time) value pairs taken from progress curves to the integrated rate equation. The procedure is applied to the estimation of the kinetic parameters of the adenosine deaminase system. Simulation studies demonstrate the capabilities of this strategy. A copy of the FORTRAN77 program used can be obtained from the authors by request. PMID:2006914
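
    As an illustration of progress-curve fitting (without the paper's weighting matrix), the integrated Michaelis-Menten equation can be fitted directly to (product, time) pairs; the kinetic values below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def integrated_mm(P, Vmax, Km, S0=100.0):
    """Integrated Michaelis-Menten equation, solved for time:
    Vmax*t = P + Km*ln(S0/(S0-P)), so t(P) = (P + Km*ln(S0/(S0-P))) / Vmax."""
    return (P + Km * np.log(S0 / (S0 - P))) / Vmax

P = np.linspace(1.0, 90.0, 30)                        # product formed
t_obs = integrated_mm(P, Vmax=5.0, Km=20.0) \
        + np.random.default_rng(3).normal(0.0, 0.05, P.size)
(Vmax, Km), _ = curve_fit(integrated_mm, P, t_obs, p0=[4.0, 15.0])
```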

  8. Apparatus and method for qualitative and quantitative measurements of optical properties of turbid media using frequency-domain photon migration

    DOEpatents

    Tromberg, B.J.; Tsay, T.T.; Berns, M.W.; Svaasand, L.O.; Haskell, R.C.

    1995-06-13

    Optical measurements of turbid media, that is, media characterized by multiple light scattering, are provided through an apparatus and method for exposing a sample to a modulated laser beam. The light beam is modulated at a fundamental frequency and at a plurality of integer harmonics thereof. Modulated light is returned from the sample and preferentially detected at cross frequencies slightly higher than the fundamental frequency and its integer harmonics. The received radiance at the beat or cross frequencies is compared against a reference signal to provide a measure of the phase lag and modulation ratio of the radiance relative to a reference beam. The phase and modulation amplitude are then provided as a frequency spectrum by an array processor, to which a computer applies a complete curve fit in the case of highly scattering samples or a linear curve fit below a predetermined frequency in the case of highly absorptive samples. The curve fit in any case is determined by the absorption and scattering coefficients together with the concentration of the active substance in the sample. Therefore, the curve fit to the frequency spectrum can be used both for qualitative and quantitative analysis of substances in the sample even though the sample is highly turbid. 14 figs.
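
    The phase-lag and modulation-ratio measurement at the heart of this method can be illustrated with quadrature demodulation at a single modulation frequency (synthetic waveforms; one frequency rather than the patent's harmonic comb):

```python
import numpy as np

fs, f = 1e6, 1e3                        # sample rate, modulation frequency (Hz)
t = np.arange(0, 0.05, 1 / fs)          # integer number of periods
ref = 1.0 + 0.8 * np.sin(2 * np.pi * f * t)
sig = 0.3 + 0.1 * np.sin(2 * np.pi * f * t - 0.6)   # attenuated, phase-lagged

def demod(x):
    """Lock-in style quadrature demodulation at frequency f."""
    i = 2 * np.mean(x * np.sin(2 * np.pi * f * t))
    q = 2 * np.mean(x * np.cos(2 * np.pi * f * t))
    return np.hypot(i, q), np.arctan2(q, i), np.mean(x)

a_r, ph_r, dc_r = demod(ref)
a_s, ph_s, dc_s = demod(sig)
phase_lag = ph_r - ph_s                       # radians (0.6 here)
mod_ratio = (a_s / dc_s) / (a_r / dc_r)       # modulation relative to reference
```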

  9. Apparatus and method for qualitative and quantitative measurements of optical properties of turbid media using frequency-domain photon migration

    DOEpatents

    Tromberg, Bruce J.; Tsay, Tsong T.; Berns, Michael W.; Svaasand, Lara O.; Haskell, Richard C.

    1995-01-01

    Optical measurements of turbid media, that is, media characterized by multiple light scattering, are provided through an apparatus and method for exposing a sample to a modulated laser beam. The light beam is modulated at a fundamental frequency and at a plurality of integer harmonics thereof. Modulated light is returned from the sample and preferentially detected at cross frequencies slightly higher than the fundamental frequency and its integer harmonics. The received radiance at the beat or cross frequencies is compared against a reference signal to provide a measure of the phase lag and modulation ratio of the radiance relative to a reference beam. The phase and modulation amplitude are then provided as a frequency spectrum by an array processor, to which a computer applies a complete curve fit in the case of highly scattering samples or a linear curve fit below a predetermined frequency in the case of highly absorptive samples. The curve fit in any case is determined by the absorption and scattering coefficients together with the concentration of the active substance in the sample. Therefore, the curve fit to the frequency spectrum can be used both for qualitative and quantitative analysis of substances in the sample even though the sample is highly turbid.

  10. Status of Cycle 23 Forecasts

    NASA Technical Reports Server (NTRS)

    Hathaway, D. H.

    2000-01-01

    A number of techniques for predicting solar activity on a solar cycle time scale are identified, described, and tested with historical data. Some techniques, e.g., regression and curve-fitting, work well as solar activity approaches maximum and provide a month-by-month description of future activity, while others, e.g., geomagnetic precursors, work well near solar minimum but provide an estimate only of the amplitude of the cycle. A synthesis of different techniques is shown to provide a more accurate and useful forecast of solar cycle activity levels. A combination of two uncorrelated geomagnetic precursor techniques provides the most accurate prediction for the amplitude of a solar activity cycle at a time well before activity minimum. This precursor method gave a smoothed sunspot number maximum of 154 ± 21 for cycle 23. A mathematical function dependent upon the time of cycle initiation and the cycle amplitude then describes the level of solar activity for the complete cycle. As the time of cycle maximum approaches, a better estimate of the cycle activity is obtained by including the fit between recent activity levels and this function. This Combined Solar Cycle Activity Forecast now gives a smoothed sunspot maximum of 140 ± 20 for cycle 23. The success of the geomagnetic precursors in predicting future solar activity suggests that solar magnetic phenomena at latitudes above the sunspot activity belts are linked to solar activity which occurs many years later in the lower latitudes.
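
    The "mathematical function dependent upon the time of cycle initiation and the cycle amplitude" is commonly written in a Hathaway-style form; a hedged sketch of fitting such a shape to monthly sunspot numbers (synthetic data, nominal parameter values):

```python
import numpy as np
from scipy.optimize import curve_fit

def cycle_shape(t, A, t0, B, C=0.71):
    """One common parameterization of the solar-cycle shape:
    F(t) = A*(t-t0)^3 / (exp(((t-t0)/B)^2) - C), t in months past cycle start."""
    x = np.clip(t - t0, 1e-6, None)
    return A * x**3 / (np.exp((x / B) ** 2) - C)

months = np.arange(1.0, 132.0)
ssn = cycle_shape(months, A=1e-3, t0=0.0, B=60.0) \
      + np.random.default_rng(4).normal(0.0, 3.0, months.size)
popt, _ = curve_fit(cycle_shape, months, ssn, p0=[5e-4, 0.0, 55.0])
```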

  11. Fitting Photometry of Blended Microlensing Events

    NASA Astrophysics Data System (ADS)

    Thomas, Christian L.; Griest, Kim

    2006-03-01

    We reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g., Woźniak & Paczyński) that this is a difficult proposition because of the degeneracy of the blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (peak region and wings) of high-magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction, and we study the importance of non-Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate the microlensing optical depth.
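
    For reference, the blended model being fitted is the standard point-lens light curve with a fraction of unlensed flux in the aperture; a minimal sketch (parameter values invented):

```python
import numpy as np

def paczynski(t, t0, tE, u0):
    """Point-lens magnification A(u), u(t) = sqrt(u0^2 + ((t-t0)/tE)^2)."""
    u = np.hypot(u0, (t - t0) / tE)
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

def blended_flux(t, t0, tE, u0, f_blend, F_base=1.0):
    """Observed flux when a fraction f_blend of the baseline is unlensed blend."""
    return F_base * ((1 - f_blend) * paczynski(t, t0, tE, u0) + f_blend)

t = np.linspace(-40.0, 40.0, 200)                  # days
flux = blended_flux(t, t0=0.0, tE=20.0, u0=0.1, f_blend=0.4)
```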

  12. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    PubMed

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for the milk fat to protein ratio (FPR) in Iranian buffaloes. The data were 43 818 test-day FPR records from the first three lactations of Iranian buffaloes, collected in 523 dairy herds between 1996 and 2012 by the Animal Breeding Center of Iran. Each model was fitted to the monthly FPR records using the non-linear mixed model procedure (PROC NLMIXED) in SAS, and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and the log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of the lactation curve for FPR in the first and second lactations, respectively, while the Wood, Dhanoa and Sikka mixed models provided the best fit in third-parity buffaloes. Evaluation of the first-, second- and third-lactation features showed that all models, except the Dijkstra model in the third lactation, under-predicted the test time at which daily FPR was at a minimum, while the minimum FPR itself was over-predicted by all equations. Overall, the evaluation indicated that non-linear mixed models are sufficient for fitting the test-day FPR records of Iranian buffaloes.
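
    Of the models compared, the Wood curve has the simplest closed form; a sketch of fitting it to monthly test-day FPR values (fixed-effects only and invented data, whereas the study used SAS PROC NLMIXED):

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood lactation curve y(t) = a * t^b * exp(-c*t); with b, c < 0 the
    curve dips to a minimum, as test-day FPR does."""
    return a * t**b * np.exp(-c * t)

dim = np.arange(5.0, 305.0, 30.0)                        # days in milk
fpr = wood(dim, 1.3, -0.05, -0.0005) \
      + np.random.default_rng(5).normal(0.0, 0.02, dim.size)
(a, b, c), _ = curve_fit(wood, dim, fpr, p0=[1.2, -0.03, -0.001])
t_min = b / c                                            # DIM of minimum FPR
```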

  13. Interactive contour delineation and refinement in treatment planning of image‐guided radiation therapy

    PubMed Central

    Zhou, Wu

    2014-01-01

    The accurate contour delineation of the target and/or organs at risk (OAR) is essential in treatment planning for image-guided radiation therapy (IGRT). Although many automatic contour delineation approaches have been proposed, few of them fulfill the needs of applications in terms of accuracy and efficiency. Moreover, clinicians would like to analyze the characteristics of regions of interest (ROI) and adjust contours manually during IGRT, so an interactive tool for contour delineation is necessary in such cases. In this work, a novel curve-fitting approach to interactive contour delineation is proposed. It allows users to quickly improve contours with a simple mouse click. Initially, a region containing the object of interest is selected in the image; the program then automatically selects important control points from the region boundary, and Hermite cubic curves are used to fit the control points. The resulting curve can be revised by moving its control points interactively. Several curve-fitting methods are presented for comparison. Finally, to improve the accuracy of contour delineation, a curve refinement based on the maximum gradient magnitude is proposed: all points on the curve are automatically moved towards the positions of maximum gradient magnitude. Experimental results show that Hermite cubic curves and the maximum-gradient-magnitude refinement possess superior performance on the proposed platform in terms of accuracy, robustness, and computation time. Experimental results on real medical images demonstrate the efficiency, accuracy, and robustness of the proposed process in clinical applications. PACS number: 87.53.Tf PMID:24423846
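
    A cubic Hermite contour through control points is straightforward to evaluate; the sketch below closes the contour with Catmull-Rom style tangents (a common choice, not necessarily the paper's):

```python
import numpy as np

def hermite_segment(p0, p1, m0, m1, n=20):
    """Cubic Hermite segment between points p0, p1 with tangents m0, m1."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p0 + h10*m0 + h01*p1 + h11*m1

pts = np.array([[0, 0], [2, 1], [3, 3], [1, 4], [-1, 2]], dtype=float)
k = len(pts)
curve = []
for i in range(k):                                  # closed contour
    p0, p1 = pts[i], pts[(i + 1) % k]
    m0 = 0.5 * (pts[(i + 1) % k] - pts[i - 1])      # Catmull-Rom tangents
    m1 = 0.5 * (pts[(i + 2) % k] - pts[i])
    curve.append(hermite_segment(p0, p1, m0, m1))
contour = np.vstack(curve)
```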

  14. Dust Attenuation Curves in the Local Universe: Demographics and New Laws for Star-forming Galaxies and High-redshift Analogs

    NASA Astrophysics Data System (ADS)

    Salim, Samir; Boquien, Médéric; Lee, Janice C.

    2018-05-01

    We study the dust attenuation curves of 230,000 individual galaxies in the local universe, ranging from quiescent to intensely star-forming systems, using GALEX, SDSS, and WISE photometry calibrated on the Herschel ATLAS. We use a new method of constraining SED fits with infrared luminosity (SED+LIR fitting), and parameterized attenuation curves determined with the CIGALE SED-fitting code. Attenuation curve slopes and UV bump strengths are reasonably well constrained independently from one another. We find that Aλ/AV attenuation curves exhibit a very wide range of slopes that are on average as steep as the curve slope of the Small Magellanic Cloud (SMC). The slope is a strong function of optical opacity. Opaque galaxies have shallower curves—in agreement with recent radiative transfer models. The dependence of slopes on the opacity produces an apparent dependence on stellar mass: more massive galaxies have shallower slopes. Attenuation curves exhibit a wide range of UV bump amplitudes, from none to Milky Way (MW)-like, with an average strength one-third that of the MW bump. Notably, local analogs of high-redshift galaxies have an average curve that is somewhat steeper than the SMC curve, with a modest UV bump that can be, to first order, ignored, as its effect on the near-UV magnitude is 0.1 mag. Neither the slopes nor the strengths of the UV bump depend on gas-phase metallicity. Functional forms for attenuation laws are presented for normal star-forming galaxies, high-z analogs, and quiescent galaxies. We release the catalog of associated star formation rates and stellar masses (GALEX–SDSS–WISE Legacy Catalog 2).

  15. Characterizing the UV-to-NIR shape of the dust attenuation curve of IR luminous galaxies up to z ˜ 2

    NASA Astrophysics Data System (ADS)

    Lo Faro, B.; Buat, V.; Roehlly, Y.; Alvarez-Marquez, J.; Burgarella, D.; Silva, L.; Efstathiou, A.

    2017-12-01

    In this work, we investigate the far-ultraviolet (UV) to near-infrared (NIR) shape of the dust attenuation curve of a sample of IR-selected dust obscured (ultra)luminous IR galaxies at z ∼ 2. The spectral energy distributions (SEDs) are fitted with Code Investigating GALaxy Emission, a physically motivated spectral-synthesis model based on energy balance. Its flexibility allows us to test a wide range of different analytical prescriptions for the dust attenuation curve, including the well-known Calzetti and Charlot & Fall curves, and modified versions of them. The attenuation curves computed under the assumption of our reference double power-law model are in very good agreement with those derived, in previous works, with radiative transfer (RT) SED fitting. We investigate the position of our galaxies in the IRX-β diagram and find this to be consistent with greyer slopes, on average, in the UV. We also find evidence for a flattening of the attenuation curve in the NIR with respect to more classical Calzetti-like recipes. This larger NIR attenuation yields larger derived stellar masses from SED fitting, by a median factor of ∼1.4 and up to a factor ∼10 for the most extreme cases. The star formation rate appears instead to be more dependent on the total amount of attenuation in the galaxy. Our analysis highlights the need for a flexible attenuation curve when reproducing the physical properties of a large variety of objects.

  16. Identification of saline water intrusion in part of Cauvery deltaic region, Tamil Nadu, Southern India: using GIS and VES methods

    NASA Astrophysics Data System (ADS)

    Gnanachandrasamy, G.; Ramkumar, T.; Venkatramanan, S.; Chung, S. Y.; Vasudevan, S.

    2016-06-01

    We use electrical resistivity data arrayed in a 2715 km2 region with 30 locations to identify the saline water intrusion zone in part of the Cauvery deltaic region, on the eastern coast of India. From this dataset we are able to derive information on groundwater quality, thickness of the aquifer zone, structural and stratigraphic conditions relevant to groundwater, and permeability of the aquifer systems. A total of 30 vertical electrical soundings (VES) were carried out with a Schlumberger electrode arrangement to establish the complete lithology of this region using curve matching techniques. The electrical soundings showed that H and HK type curves were suitable for 16 shallow locations, while QH, KQ, K, KH, QQ, and HA curves fit the other locations. Low resistivity values suggested that saline water intrusion has occurred in this region. According to the final GIS map, most of the region is severely affected by seawater intrusion due to over-exploitation of groundwater. The deteriorated groundwater resources in this coastal region should raise environmental and health concerns.

  17. Drop shape visualization and contact angle measurement on curved surfaces.

    PubMed

    Guilizzoni, Manfredo

    2011-12-01

    The shapes and contact angles of drops on curved surfaces are experimentally investigated. Image processing, spline fitting and numerical integration are used to extract the drop contour in a number of cross-sections. The three-dimensional surfaces which describe the surface-air and drop-air interfaces can be visualized, and a simple procedure to determine the equilibrium contact angle starting from measurements on curved surfaces is proposed. Contact angles on flat surfaces serve as a reference term, and a procedure to measure them is proposed. Such a procedure is not as accurate as axisymmetric drop shape analysis algorithms, but it has the advantage of requiring only a side view of the drop-surface couple and no further information. It can therefore be used also for fluids with unknown surface tension, and there is no need to measure the drop volume. Examples of application of the proposed techniques to distilled water drops on gemstones confirm that they can be useful for drop shape analysis and contact angle measurement on three-dimensional sculptured surfaces.
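
    The spline-fitting step can be sketched with a parametric smoothing spline through digitized silhouette points; the tangent at the contact point then gives the contact angle (a synthetic semicircular "drop" below, for which the angle is 90 degrees):

```python
import numpy as np
from scipy.interpolate import splprep, splev

theta = np.linspace(0.0, np.pi, 50)
x, z = np.cos(theta), np.sin(theta)                  # digitized contour points
tck, u = splprep([x, z], s=1e-4)                     # parametric smoothing spline
dx, dz = splev([0.0], tck, der=1)                    # tangent at the contact point
contact_angle = np.degrees(np.arctan2(dz[0], dx[0])) # ~90 deg for a半circle
```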

  18. Recognition of 3-D symmetric objects from range images in automated assembly tasks

    NASA Technical Reports Server (NTRS)

    Alvertos, Nicolas; Dcunha, Ivan

    1990-01-01

    A new technique is presented for the three-dimensional recognition of symmetric objects from range images. Beginning from the implicit representation of quadrics, a set of ten coefficients is determined for symmetric objects such as spheres, cones, cylinders, ellipsoids, and parallelepipeds. Instead of fitting these ten coefficients to smooth surface patches in the traditional way of determining curvatures, a new approach based on two-dimensional geometry is used. For each symmetric object, a unique set of two-dimensional curves is obtained from the various angles at which the object is intersected with a plane. Using the same ten coefficients obtained earlier and based on the discriminant method, each of these curves is classified as a parabola, circle, ellipse, or hyperbola. Each symmetric object is found to possess a unique set of these two-dimensional curves whereby it can be differentiated from the others. It is shown that instead of using the three-dimensional discriminant, which involves evaluating the rank of its matrix, it is sufficient to use the two-dimensional discriminant, which only requires three arithmetic operations.
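
    The two-dimensional discriminant test is exactly the "three arithmetic operations" mentioned: for a conic A*x^2 + B*x*y + C*y^2 + D*x + E*y + F = 0, the sign of B^2 - 4AC settles the class. A minimal sketch:

```python
def classify_conic(A, B, C):
    """Classify a conic section by the discriminant B^2 - 4AC."""
    d = B * B - 4 * A * C
    if abs(d) < 1e-12:
        return "parabola"
    return "ellipse (circle if A == C and B == 0)" if d < 0 else "hyperbola"

print(classify_conic(1.0, 0.0, 1.0))   # circle/ellipse
print(classify_conic(1.0, 0.0, -1.0))  # hyperbola
```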

  19. Estimation of Pulse Transit Time as a Function of Blood Pressure Using a Nonlinear Arterial Tube-Load Model.

    PubMed

    Gao, Mingwu; Cheng, Hao-Min; Sung, Shih-Hsien; Chen, Chen-Huan; Olivier, Nicholas Bari; Mukkamala, Ramakrishna

    2017-07-01

    Pulse transit time (PTT) varies with blood pressure (BP) throughout the cardiac cycle; yet, because of wave reflection, only one PTT value, at the diastolic BP level, is conventionally estimated from proximal and distal BP waveforms. The objective was to establish a technique to estimate multiple PTT values at different BP levels in the cardiac cycle. A technique was developed for estimating PTT as a function of BP (to indicate the PTT value for every BP level) from proximal and distal BP waveforms. First, a mathematical transformation from one waveform to the other is defined in terms of the parameters of a nonlinear arterial tube-load model accounting for BP-dependent arterial compliance and wave reflection. Then, the parameters are estimated by optimally fitting the waveforms to each other via the model-based transformation. Finally, PTT as a function of BP is specified by the parameters. The technique was assessed in animals and patients in several ways, including the ability of its estimated PTT-BP function to serve as a subject-specific curve for calibrating PTT to BP. The calibration curve derived by the technique during a baseline period yielded bias and precision errors in mean BP of 5.1 ± 0.9 and 6.6 ± 1.0 mmHg, respectively, during hemodynamic interventions that varied mean BP widely. The new technique may permit, for the first time, estimation of PTT values throughout the cardiac cycle from proximal and distal waveforms, and could potentially be applied to improve arterial stiffness monitoring and help realize cuff-less BP monitoring.

  20. Viability estimation of pepper seeds using time-resolved photothermal signal characterization

    NASA Astrophysics Data System (ADS)

    Kim, Ghiseok; Kim, Geon-Hee; Lohumi, Santosh; Kang, Jum-Soon; Cho, Byoung-Kwan

    2014-11-01

    We used an infrared thermal signal measurement system together with photothermal signal and image reconstruction techniques to estimate the viability of pepper seeds. Photothermal signals from healthy and aged seeds were measured for seven aging periods (24, 48, 72, 96, 120, 144, and 168 h) using an infrared camera and analyzed by a regression method. The photothermal signals were regressed using a two-term exponential decay curve with two amplitudes and two time constants (lifetimes) as regression coefficients. The regression coefficients of the fitted curve showed significant differences between the seed groups, depending on the aging times. In addition, the viability of a single seed was estimated by imaging its regression coefficients, reconstructed from the measured photothermal signals. The time-resolved photothermal characteristics, along with the regression coefficient images, can be used to discriminate aged or dead pepper seeds from healthy seeds.
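
    The regression step is an ordinary nonlinear fit of a two-term exponential decay; a per-pixel sketch (time scale and coefficients invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_term_decay(t, a1, tau1, a2, tau2):
    """Two-term exponential decay: a1*exp(-t/tau1) + a2*exp(-t/tau2)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0.0, 2.0, 200)                        # time after excitation (s)
sig = two_term_decay(t, 0.6, 0.08, 0.4, 0.9) \
      + np.random.default_rng(6).normal(0.0, 0.005, t.size)
popt, _ = curve_fit(two_term_decay, t, sig, p0=[0.5, 0.1, 0.5, 1.0])
a1, tau1, a2, tau2 = popt          # amplitudes and lifetimes for this pixel
```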

  1. Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark

    2009-01-01

    High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
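
    IRDM itself reduces to a few lines: backward-integrate the squared (band-limited) impulse response to get the Schroeder decay curve, fit the early decay slope, and convert the decay rate to a loss factor. A sketch with a synthetic impulse response (the -5 to -25 dB fitting span and constants are illustrative choices):

```python
import numpy as np

def irdm_loss_factor(h, fs, f_band):
    """Impulse response decay method: Schroeder-integrate h(t), fit the decay
    slope in dB/s, convert with eta = DR / (27.3 * f_band)."""
    e = np.cumsum(h[::-1] ** 2)[::-1]                # Schroeder integration
    db = 10.0 * np.log10(e / e.max())
    t = np.arange(len(h)) / fs
    mask = (db < -5.0) & (db > -25.0)                # fit the -5..-25 dB span
    slope = np.polyfit(t[mask], db[mask], 1)[0]      # decay rate (dB/s)
    return -slope / (27.3 * f_band)

fs = 8192.0
t = np.arange(0.0, 2.0, 1 / fs)
h = np.random.default_rng(13).normal(0, 1, t.size) * np.exp(-8.0 * t)
eta = irdm_loss_factor(h, fs, f_band=500.0)          # ~5e-3 for this decay
```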

  2. Cuckoo Search with Lévy Flights for Weighted Bayesian Energy Functional Optimization in Global-Support Curve Data Fitting

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175

  3. Cuckoo search with Lévy flights for weighted Bayesian energy functional optimization in global-support curve data fitting.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way.
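
    A minimal cuckoo search loop, for illustration alongside these two records (generic objective function rather than the paper's weighted Bayesian energy functional; the step-size constant and nest count are conventional defaults):

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(7)

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm for Levy-flight step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, bounds, n_nests=25, pa=0.25, iters=300):
    """Minimal cuckoo search: Levy-flight moves plus abandonment of the worst
    fraction pa of nests -- the two parameters the paper highlights."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    nests = rng.uniform(lo, hi, (n_nests, len(lo)))
    fit = np.array([f(n) for n in nests])
    for _ in range(iters):
        best = nests[fit.argmin()]
        for i in range(n_nests):
            cand = np.clip(nests[i] + 0.01 * levy_step(len(lo)) * (nests[i] - best),
                           lo, hi)
            j = rng.integers(n_nests)                # random nest to challenge
            fc = f(cand)
            if fc < fit[j]:
                nests[j], fit[j] = cand, fc
        worst = fit.argsort()[-max(1, int(pa * n_nests)):]   # abandon worst nests
        nests[worst] = rng.uniform(lo, hi, (worst.size, len(lo)))
        fit[worst] = np.array([f(n) for n in nests[worst]])
    return nests[fit.argmin()], fit.min()

best, err = cuckoo_search(lambda v: np.sum((v - 1.5) ** 2),
                          np.array([[-5.0, 5.0]] * 4))
```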

  4. Investigation of skin structures based on infrared wave parameter indirect microscopic imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Liu, Xuefeng; Xiong, Jichuan; Zhou, Lijuan

    2017-02-01

    Detailed imaging and analysis of skin structures are becoming increasingly important in modern healthcare and clinical diagnosis. Nanometer-resolution imaging techniques such as SEM and AFM can damage the sample and cannot measure the whole skin structure from the surface through the epidermis and dermis to the subcutaneous tissue. Conventional optical microscopy has the highest imaging efficiency, flexibility in on-site applications and lowest cost in manufacturing and usage, but its image resolution is too low to be accepted for biomedical analysis. Infrared parameter indirect microscopic imaging (PIMI) uses an infrared laser as the light source due to its high transmission in skin. The polarization of the optical wave through the skin sample was modulated while the variation of the optical field was observed at the imaging plane. The intensity variation curve of each pixel was fitted to extract the near-field polarization parameters to form indirect images. During the through-skin light modulation and image retrieval process, the curve fitting removes the blurring scattering from neighboring pixels and keeps only the field variations related to local skin structures. By using infrared PIMI, we can break the diffraction limit and bring wide-field optical image resolution to sub-200 nm, while taking advantage of the high transmission of infrared waves in skin structures.

  5. Empirical mode decomposition apparatus, method and article of manufacture for analyzing biological signals and performing curve fitting

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2004-01-01

    A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
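
    The sifting step at the core of Empirical Mode Decomposition can be sketched compactly: cubic-spline envelopes through the extrema, subtract their mean, repeat (stopping criteria and end-effect handling are simplified away here):

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    """One sifting pass of EMD: subtract the mean of the upper and lower
    cubic-spline envelopes through the local extrema."""
    hi = argrelextrema(x, np.greater)[0]
    lo = argrelextrema(x, np.less)[0]
    if len(hi) < 2 or len(lo) < 2:
        return None                      # monotonic residue, stop
    upper = CubicSpline(t[hi], x[hi])(t)
    lower = CubicSpline(t[lo], x[lo])(t)
    return x - 0.5 * (upper + lower)

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
h = x.copy()
for _ in range(10):                      # a few passes toward the fastest IMF
    nxt = sift_once(t, h)
    if nxt is None:
        break
    h = nxt
```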

  6. Applications of Space-Filling-Curves to Cartesian Methods for CFD

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Berger, Marsha J.; Murman, Scott M.

    2003-01-01

    The proposed paper presents a variety of novel uses of Space-Filling-Curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, most are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single O(N log N) SFC-based reordering to produce single-pass (O(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 512 CPUs even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 10% of ideal even with only around 50,000 cells in each subdomain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with O(max(M,N)) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for finite-difference-based gradient design methods.
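
    The reordering idea can be illustrated with a Z-order (Morton) curve, one standard SFC: interleave the bits of the cell coordinates, sort by the resulting key, and contiguous key ranges become partitions (a 2-D toy mesh below; the paper's SFC variant may differ):

```python
import numpy as np

def morton_key(ix, iy, bits=16):
    """Interleave the bits of (ix, iy) to get a Z-order (Morton) key."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
    return key

cells = [(i, j) for i in range(8) for j in range(8)]
ordered = sorted(cells, key=lambda c: morton_key(*c))    # O(N log N) reorder
partitions = np.array_split(np.array(ordered), 4)        # contiguous blocks
```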

  7. FTIR Analysis of Functional Groups in Aerosol Particles

    NASA Astrophysics Data System (ADS)

    Shokri, S. M.; McKenzie, G.; Dransfield, T. J.

    2012-12-01

    Secondary organic aerosols (SOA) are suspensions of particulate matter composed of compounds formed from chemical reactions of organic species in the atmosphere. Atmospheric particulate matter can have impacts on climate, the environment and human health. Standardized techniques to analyze the characteristics and composition of complex secondary organic aerosols are necessary to further investigate the formation of SOA and provide a better understanding of the reaction pathways of organic species in the atmosphere. While Aerosol Mass Spectrometry (AMS) can provide detailed information about the elemental composition of a sample, it reveals little about the chemical moieties which make up the particles. This work probes aerosol particles deposited on Teflon filters using FTIR, based on the protocols of Russell, et al. (Journal of Geophysical Research - Atmospheres, 114, 2009) and the spectral fitting algorithm of Takahama, et al (submitted, 2012). To validate the necessary calibration curves for the analysis of complex samples, primary aerosols of key compounds (e.g., citric acid, ammonium sulfate, sodium benzoate) were generated, and the accumulated masses of the aerosol samples were related to their IR absorption intensity. These validated calibration curves were then used to classify and quantify functional groups in SOA samples generated in chamber studies by MIT's Kroll group. The fitting algorithm currently quantifies the following functionalities: alcohols, alkanes, alkenes, amines, aromatics, carbonyls and carboxylic acids.

  8. Empirical mode decomposition apparatus, method and article of manufacture for analyzing biological signals and performing curve fitting

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2002-01-01

    A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.

  9. Phytoplankton productivity in relation to light intensity: A simple equation

    USGS Publications Warehouse

    Peterson, D.H.; Perry, M.J.; Bencala, K.E.; Talbot, M.C.

    1987-01-01

    A simple exponential equation is used to describe photosynthetic rate as a function of light intensity for a variety of unicellular algae and higher plants, where photosynthesis is proportional to (1 − e^(−γI)). The parameter γ (= Ik^(−1)) is derived by a simultaneous curve-fitting method, where I is incident quantum-flux density. The exponential equation is tested against a wide range of data and is found to adequately describe P vs. I curves. The errors associated with the photosynthetic parameters are calculated. A simplified statistical model (Poisson) of photon capture provides a biophysical basis for the equation and for its ability to fit a range of light intensities. The exponential equation provides a non-subjective simultaneous curve-fitting estimate for photosynthetic efficiency (α) which is less ambiguous than subjective methods: subjective methods assume that a linear region of the P vs. I curve is readily identifiable. Photosynthetic parameters γ and α are used widely in aquatic studies to define photosynthesis at low quantum flux. These parameters are particularly important in estuarine environments where high suspended-material concentrations and high diffuse-light extinction coefficients are commonly encountered.
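
    Fitting the exponential P-I model is a one-liner with nonlinear least squares; the efficiency follows from the fitted parameters (data values invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def pi_curve(I, pmax, gamma):
    """Exponential light-saturation model P = Pmax * (1 - exp(-gamma * I));
    the initial slope (efficiency) is alpha = Pmax * gamma, and Ik = 1/gamma."""
    return pmax * (1.0 - np.exp(-gamma * I))

I = np.array([10, 25, 50, 100, 200, 400, 800], dtype=float)  # quantum flux
P = pi_curve(I, 12.0, 0.01) + np.random.default_rng(9).normal(0, 0.3, I.size)
(pmax, gamma), _ = curve_fit(pi_curve, I, P, p0=[10.0, 0.005])
alpha = pmax * gamma
```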

  10. Nuclear reactor descriptions for space power systems analysis

    NASA Technical Reports Server (NTRS)

    Mccauley, E. W.; Brown, N. J.

    1972-01-01

    For the small, high-performance reactors required for space electric applications, adequate neutronic analysis is of crucial importance, but in terms of computational time consumed, nuclear calculations probably yield the least amount of detail for a mission analysis study. It has been found possible, after generating only a few designs of a reactor family in elaborate thermomechanical and nuclear detail, to use simple curve-fitting techniques to assure the desired neutronic performance while still performing the thermomechanical analysis in explicit detail. The resulting speed-up in computation time permits a broad, detailed examination of constraints by the mission analyst.

  11. Nomarski differential interference contrast microscopy for surface slope measurements: an examination of techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartman, J.S.; Gordon, R.L.; Lessor, D.L.

    1981-08-01

    Alternate measurement and data analysis procedures are discussed and compared for the application of reflective Nomarski differential interference contrast microscopy for the determination of surface slopes. The discussion includes the interpretation of a previously reported iterative procedure using the results of a detailed optical model and the presentation of a new procedure based on measured image intensity extrema. Surface slope determinations from these procedures are presented and compared with results from a previously reported curve fit analysis of image intensity data. The accuracy and advantages of the different procedures are discussed.

  12. Limb-darkening and the structure of the Jovian atmosphere

    NASA Technical Reports Server (NTRS)

    Newman, W. I.; Sagan, C.

    1978-01-01

    By observing the transit of various cloud features across the Jovian disk, limb-darkening curves were constructed for three regions in the 4.6 to 5.1 μm band. Several models currently employed in describing the radiative or dynamical properties of planetary atmospheres are examined here to understand their implications for limb-darkening. The statistical problem of fitting these models to the observed data is reviewed and methods for applying multiple regression analysis are discussed. Analysis-of-variance techniques are introduced to test the viability of a given physical process as a cause of the observed limb-darkening.

  13. The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting

    NASA Astrophysics Data System (ADS)

    Tao, Zhang; Li, Zhang; Dingjun, Chen

    On the basis of the idea of second-order (quadratic) curve fitting, the number and scale of Chinese E-commerce sites are analyzed. A retarded-growth (logistic-type) prediction model is introduced in this paper, and the model parameters are solved with the Matlab software. The validity of the model is confirmed through a numerical experiment. The experimental results show that the precision of the prediction model is satisfactory.
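
    As a toy illustration of quadratic curve fitting for site-count prediction (all numbers invented):

```python
import numpy as np

# Quadratic least-squares fit of yearly site counts, then a one-year forecast.
year = np.arange(2000, 2009)
sites = np.array([2.1, 2.9, 4.2, 5.9, 8.3, 10.9, 14.2, 18.0, 22.5])  # 10^4 sites
coeffs = np.polyfit(year - 2000, sites, 2)       # c2*t^2 + c1*t + c0
pred_2009 = np.polyval(coeffs, 9)
```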

  14. Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting

    NASA Technical Reports Server (NTRS)

    Badavi, F. F.; Everhart, Joel L.

    1987-01-01

    This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine that is based on P. R. Bevington's description of the quadratic expansion of the chi-square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of chi-square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived, then solved by matrix algebra. A brief description of the code is presented, along with the limited number of changes that are required to customize the program to a particular task. To evaluate the performance of the method and the goodness of the nonlinear curve fitting, two typical engineering problems are examined and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.
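
    The Bevington-style iteration amounts to forming normal equations from the first-order expansion and solving for a parameter update; a sketch on an exponential-decay example (not the report's FORTRAN implementation):

```python
import numpy as np

def chisq_step(f, jac, a, x, y, sigma):
    """One quadratic-expansion step: solve alpha @ da = beta, where alpha is
    the curvature (J^T J) and beta the gradient term of chi-square."""
    r = (y - f(x, a)) / sigma
    J = jac(x, a) / sigma[:, None]
    alpha = J.T @ J
    beta = J.T @ r
    return a + np.linalg.solve(alpha, beta)

f = lambda x, a: a[0] * np.exp(-a[1] * x)
jac = lambda x, a: np.column_stack([np.exp(-a[1] * x),
                                    -a[0] * x * np.exp(-a[1] * x)])
x = np.linspace(0.0, 5.0, 50)
y = f(x, [2.0, 0.7]) + np.random.default_rng(10).normal(0.0, 0.02, x.size)
a = np.array([1.0, 1.0])
for _ in range(10):                      # iterate to convergence
    a = chisq_step(f, jac, a, x, y, np.full(x.size, 0.02))
```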

  15. Determination of uronic acids in isolated hemicelluloses from kenaf using diffuse reflectance infrared fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method.

    PubMed

    Batsoulis, A N; Nacos, M K; Pappas, C S; Tarantilis, P A; Mavromoustakos, T; Polissiou, M G

    2004-02-01

    Hemicellulose samples were isolated from kenaf (Hibiscus cannabinus L.). Hemicellulosic fractions usually contain a variable percentage of uronic acids. The uronic acid content (expressed in polygalacturonic acid) of the isolated hemicelluloses was determined by diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method. A linear relationship between uronic acids content and the sum of the peak areas at 1745, 1715, and 1600 cm(-1) was established with a high correlation coefficient (0.98). The deconvolution analysis using the curve-fitting method allowed the elimination of spectral interferences from other cell wall components. The above method was compared with an established spectrophotometric method and was found equivalent for accuracy and repeatability (t-test, F-test). This method is applicable in analysis of natural or synthetic mixtures and/or crude substances. The proposed method is simple, rapid, and nondestructive for the samples.
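
    Curve-fitting deconvolution of overlapping bands can be sketched as a sum-of-Gaussians fit whose per-band areas feed the calibration (band positions from the abstract; amplitudes and widths invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def three_bands(x, *p):
    """Sum of three Gaussian bands; p packs (amplitude, center, width) triplets."""
    y = np.zeros_like(x)
    for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-0.5 * ((x - c) / w) ** 2)
    return y

x = np.linspace(1500.0, 1850.0, 700)                 # wavenumber (cm^-1)
truth = [0.4, 1745, 12, 0.6, 1715, 15, 0.5, 1600, 18]
spec = three_bands(x, *truth) \
       + np.random.default_rng(11).normal(0.0, 0.01, x.size)
p, _ = curve_fit(three_bands, x, spec,
                 p0=[0.5, 1745, 10, 0.5, 1715, 10, 0.5, 1600, 10])
areas = [np.sqrt(2 * np.pi) * a * w for a, w in zip(p[0::3], p[2::3])]
uronic_signal = sum(areas)         # the sum correlated with uronic acid content
```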

  16. Global geometric torsion estimation in adolescent idiopathic scoliosis.

    PubMed

    Kadoury, Samuel; Shen, Jesse; Parent, Stefan

    2014-04-01

    Several attempts have been made to measure geometrical torsion in adolescent idiopathic scoliosis (AIS) and quantify the three-dimensional (3D) deformation of the spine. However, these approaches are sensitive to imprecisions in the 3D modeling of the anatomy and can only capture the effect locally at the vertebrae, ignoring the global effect at the regional level and thus have never been widely used to follow the progression of a deformity. The goal of this work was to evaluate the relevance of a novel geometric torsion descriptor based on a parametric modeling of the spinal curve as a 3D index of scoliosis. First, an image-based approach anchored on prior statistical distributions is used to reconstruct the spine in 3D from biplanar X-rays. Geometric torsion measuring the twisting effect of the spine is then estimated using a technique that approximates local arc-lengths with parametric curve fitting centered at the neutral vertebra in different spinal regions. We first evaluated the method with simulated experiments, demonstrating the method's robustness toward added noise and reconstruction inaccuracies. A pilot study involving 65 scoliotic patients exhibiting different types of deformities was also conducted. Results show the method is able to discriminate between different types of deformation based on this novel 3D index evaluated in the main thoracic and thoracolumbar/lumbar regions. This demonstrates that geometric torsion modeled by parametric spinal curve fitting is a robust tool that can be used to quantify the 3D deformation of AIS and possibly exploited as an index to classify the 3D shape.
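
    Geometric torsion of a parametric space curve has a standard closed form, tau = ((r' x r'') . r''') / |r' x r''|^2, which can be evaluated by finite differences along a fitted spinal curve; a sketch with a helix sanity check:

```python
import numpy as np

def torsion(r, t):
    """Torsion of a 3-D parametric curve r(t) (n x 3) by finite differences."""
    d1 = np.gradient(r, t, axis=0)
    d2 = np.gradient(d1, t, axis=0)
    d3 = np.gradient(d2, t, axis=0)
    cross = np.cross(d1, d2)
    denom = np.einsum('ij,ij->i', cross, cross)
    return np.einsum('ij,ij->i', cross, d3) / denom

# Helix check: torsion of (cos t, sin t, c*t) is c / (1 + c^2)
t = np.linspace(0.0, 4.0 * np.pi, 400)
helix = np.column_stack([np.cos(t), np.sin(t), 0.3 * t])
tau = torsion(helix, t)              # ~0.275 away from the ends
```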

  17. Observational evidence of dust evolution in galactic extinction curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cecchi-Pestellini, Cesare; Casu, Silvia; Mulas, Giacomo

    Although structural and optical properties of hydrogenated amorphous carbons are known to respond to varying physical conditions, most conventional extinction models are basically curve fits with modest predictive power. We compare an evolutionary model of the physical properties of carbonaceous grain mantles with their determination by homogeneously fitting observationally derived Galactic extinction curves with the same physically well-defined dust model. We find that a large sample of observed Galactic extinction curves are compatible with the evolutionary scenario underlying such a model, requiring physical conditions fully consistent with standard density, temperature, radiation field intensity, and average age of diffuse interstellar clouds. Hence, through the study of interstellar extinction we may, in principle, understand the evolutionary history of the diffuse interstellar clouds.

  18. UTM, a universal simulator for lightcurves of transiting systems

    NASA Astrophysics Data System (ADS)

    Deeg, Hans

    2009-02-01

    The Universal Transit Modeller (UTM) is a light-curve simulator for all kinds of transiting or eclipsing configurations between arbitrary numbers of several types of objects, which may be stars, planets, planetary moons, and planetary rings. Applications of UTM to date have been mainly in the generation of light-curves for the testing of detection algorithms. For the preparation of such tests for the CoRoT mission, a special version has been used to generate multicolour light-curves in CoRoT's passbands. A separate fitting program, UFIT (Universal Fitter), is part of the UTM distribution and may be used to derive best fits to light-curves for any set of continuously variable parameters. UTM/UFIT is written in IDL and its source is released in the public domain under the GNU General Public License.

  19. The effect of semirigid dressings on below-knee amputations.

    PubMed

    MacLean, N; Fick, G H

    1994-07-01

    The effect of using semirigid dressings (SRDs) on the residual limb of individuals who have had below-knee amputations as a consequence of peripheral vascular disease was investigated, with the primary question being: Does the time to readiness for prosthetic fitting for patients treated with the SRDs differ from that of patients treated with soft dressings? Forty patients entered the study and were alternately assigned to one of two groups. Nineteen patients were assigned to the SRD group, and 21 patients were assigned to the soft dressing group. The time from surgery to readiness for prosthetic fitting was recorded for each patient. Kaplan-Meier survival curves were generated for each group, and the results were analyzed with the log-rank test. There was a difference between the two curves, and an examination of the curves suggests that the expected time to readiness for prosthetic fitting for patients treated with the SRDs would be less than half that of patients treated with soft dressings. The results suggest that a patient may be ready for prosthetic fitting sooner if treated with SRDs instead of soft dressings.
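
    The Kaplan-Meier estimate used here is simple to compute directly; a sketch with invented times-to-readiness (event = 1 if the patient reached fitting, 0 if censored):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate; returns (time, S(time)) pairs."""
    order = np.argsort(times)
    t, e = np.asarray(times)[order], np.asarray(events)[order]
    s, surv = 1.0, []
    for u in np.unique(t[e == 1]):
        at_risk = np.sum(t >= u)
        d = np.sum((t == u) & (e == 1))
        s *= 1.0 - d / at_risk
        surv.append((u, s))
    return surv

srd_curve = kaplan_meier([20, 24, 25, 30, 31, 35, 40],
                         [1, 1, 0, 1, 1, 1, 0])
```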

  20. Modelling the distribution of hard seabed using calibrated multibeam acoustic backscatter data in a tropical, macrotidal embayment: Darwin Harbour, Australia

    NASA Astrophysics Data System (ADS)

    Siwabessy, P. Justy W.; Tran, Maggie; Picard, Kim; Brooke, Brendan P.; Huang, Zhi; Smit, Neil; Williams, David K.; Nicholas, William A.; Nichol, Scott L.; Atkinson, Ian

    2018-06-01

    Spatial information on the distribution of seabed substrate types in high use coastal areas is essential to support their effective management and environmental monitoring. For Darwin Harbour, a rapidly developing port in northern Australia, the distribution of hard substrate is poorly documented but known to influence the location and composition of important benthic biological communities (corals, sponges). In this study, we use angular backscatter response curves to model the distribution of hard seabed in the subtidal areas of Darwin Harbour. The angular backscatter response curve data were extracted from multibeam sonar data and analysed against backscatter intensity for sites observed from seabed video to be representative of "hard" seabed. Data from these sites were consolidated into an "average curve", which became a reference curve that was in turn compared to all other angular backscatter response curves using the Kolmogorov-Smirnov goodness-of-fit. The output was used to generate interpolated spatial predictions of the probability of hard seabed ( p-hard) and derived hard seabed parameters for the mapped area of Darwin Harbour. The results agree well with the ground truth data with an overall classification accuracy of 75% and an area under curve measure of 0.79, and with modelled bed shear stress for the Harbour. Limitations of this technique are discussed with attention to discrepancies between the video and acoustic results, such as in areas where sediment forms a veneer over hard substrate.
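
    The curve comparison step can be sketched as a Kolmogorov-Smirnov-style distance between a site's angular response curve and the "hard" reference curve, after normalizing both to cumulative form (synthetic curves; the paper's exact normalization may differ):

```python
import numpy as np

angles = np.arange(1, 61, dtype=float)               # incidence angle (deg)
reference = -15.0 - 0.20 * angles                    # mean hard-ground curve (dB)
site = -18.0 - 0.28 * angles                         # candidate site curve (dB)

def cumulative(c):
    """Shift to non-negative values and normalize to a cumulative curve."""
    c = c - c.min()
    return np.cumsum(c) / np.sum(c)

ks_stat = np.abs(cumulative(reference) - cumulative(site)).max()
p_hard = 1.0 - ks_stat           # crude similarity score in [0, 1]
```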

  1. Thermal Conductivity of a Nanoscale Yttrium Iron Garnet Thin-Film Prepared by the Sol-Gel Process

    PubMed Central

    2017-01-01

    The thermal conductivity of a nanoscale yttrium iron garnet (Y3Fe5O12, YIG) thin-film prepared by a sol-gel method was evaluated using the ultrafast pump-probe technique in the present study. The thermoreflectance change on the surface of a 250 nm thick YIG film, induced by the irradiation of femtosecond laser pulses, was measured, and curve fitting of a numerical solution of the transient heat conduction equation to the experimental data was performed using the finite difference method in order to extract the thermal property. Results show that the film’s thermal conductivity is 22–83% higher than that of bulk YIG materials prepared by different fabrication techniques, reflecting the microstructural characteristics and quality of the film. PMID:28858249
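
    The paper's finite-difference film model is not reproduced here; as a simplified stand-in, the thermoreflectance decay can be fitted to the semi-infinite-solid impulse response, whose amplitude carries the thermal effusivity e = sqrt(k*rho*c) (all values invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def surface_decay(t, q_over_e, offset):
    """Surface temperature after an impulsive heat pulse on a semi-infinite
    solid: T(t) = (Q/e) / sqrt(pi * t) + offset."""
    return q_over_e / np.sqrt(np.pi * t) + offset

t = np.linspace(0.05, 5.0, 100) * 1e-9                 # pump-probe delays (s)
dT = surface_decay(t, 3.0e-5, 0.0) \
     + np.random.default_rng(8).normal(0.0, 1e-2, t.size)
popt, _ = curve_fit(surface_decay, t, dT, p0=[1e-5, 0.0])
```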

  2. Open loop model for WDM links

    NASA Astrophysics Data System (ADS)

    D, Meena; Francis, Fredy; T, Sarath K.; E, Dipin; Srinivas, T.; K, Jayasree V.

    2014-10-01

    Wavelength Division Multiplexing (WDM) techniques over fibre links help to exploit the high bandwidth capacity of single-mode fibres. A typical WDM link consisting of a laser source, multiplexer/demultiplexer, amplifier and detector is considered for obtaining the open-loop gain model of the link. The methodology used here is to obtain individual component models using mathematical and different curve fitting techniques. These individual models are then combined to obtain the WDM link model. The objective is to deduce a single-variable model for the WDM link in terms of the input current to the system. Thus it provides a black-box solution for a link. The Root Mean Square Error (RMSE) associated with each of the approximated models is given for comparison. This will help the designer to select a suitable WDM link model during a complex link design.
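    A minimal sketch of that composition idea under assumed component models: a linear laser L-I curve above threshold and scalar multiplexer loss, amplifier gain, and detector responsivity. All names and numbers are invented for illustration, not the paper's models.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    I = np.linspace(10e-3, 60e-3, 50)              # laser drive current (A)

    # Synthetic L-I data: output power rises linearly above a ~15 mA threshold.
    P_laser = np.clip(0.3 * (I - 15e-3), 0, None) + rng.normal(0, 2e-5, I.size)
    above = I > 17e-3
    laser_fit = np.polyfit(I[above], P_laser[above], 1)   # fitted laser model

    mux_loss, amp_gain, resp = 0.5, 100.0, 0.9     # assumed scalar stage models

    def link_model(i):
        """Composed single-variable model: detector current vs. drive current."""
        return resp * amp_gain * mux_loss * np.polyval(laser_fit, i)

    measured = resp * amp_gain * mux_loss * P_laser       # stand-in end-to-end data
    rmse = np.sqrt(np.mean((link_model(I[above]) - measured[above]) ** 2))
    print(f"RMSE of the composed link model: {rmse:.3e} A")
    ```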

  3. Exploring the optimum step size for defocus curves.

    PubMed

    Wolffsohn, James S; Jinabhai, Amit N; Kingsnorth, Alec; Sheppard, Amy L; Naroo, Shehzad A; Shah, Sunil; Buckhurst, Phillip; Hall, Lee A; Young, Graeme

    2013-06-01

    To evaluate the effect of reducing the number of visual acuity measurements made in a defocus curve on the quality of data quantified. Midland Eye, Solihull, United Kingdom. Evaluation of a technique. Defocus curves were constructed by measuring visual acuity on a distance logMAR letter chart, randomizing the test letters between lens presentations. The lens powers evaluated ranged between +1.50 diopters (D) and -5.00 D in 0.50 D steps, which were also presented in a randomized order. Defocus curves were measured binocularly with the Tecnis diffractive, Rezoom refractive, Lentis rotationally asymmetric segmented (+3.00 D addition [add]), and Finevision trifocal multifocal intraocular lenses (IOLs) implanted bilaterally, and also for the diffractive IOL and refractive or rotationally asymmetric segmented (+3.00 D and +1.50 D adds) multifocal IOLs implanted contralaterally. Relative and absolute range of clear-focus metrics and area metrics were calculated for curves fitted using 0.50 D, 1.00 D, and 1.50 D steps and a near add-specific profile (ie, distance, half the near add, and the full near-add powers). A significant difference in simulated results was found in at least 1 of the relative or absolute range of clear-focus or area metrics for each of the multifocal designs examined when the defocus-curve step size was increased (P<.05). Faster methods of capturing defocus curves from multifocal IOL designs appear to distort the metric results and are therefore not valid. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2013 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  4. Derivation of error sources for experimentally derived heliostat shapes

    NASA Astrophysics Data System (ADS)

    Cumpston, Jeff; Coventry, Joe

    2017-06-01

    Data gathered using photogrammetry that represents the surface and structure of a heliostat mirror panel is investigated in detail. A curve-fitting approach that allows the retrieval of four distinct mirror error components, while prioritizing the best fit possible to paraboloidal terms in the curve fitting equation, is presented. The angular errors associated with each of the four surfaces are calculated, and the relative magnitude for each of them is given. It is found that in this case, the mirror had a significant structural twist, and an estimate of the improvement to the mirror surface quality in the case of no twist was made.

  5. Efficient and reliable characterization of the corticospinal system using transcranial magnetic stimulation.

    PubMed

    Kukke, Sahana N; Paine, Rainer W; Chao, Chi-Chao; de Campos, Ana C; Hallett, Mark

    2014-06-01

    The purpose of this study is to develop a method to reliably characterize multiple features of the corticospinal system in a more efficient manner than typically done in transcranial magnetic stimulation studies. Forty transcranial magnetic stimulation pulses of varying intensity were given over the first dorsal interosseous motor hot spot in 10 healthy adults. The first dorsal interosseous motor-evoked potential size was recorded during rest and activation to create recruitment curves. The Boltzmann sigmoidal function was fit to the data, and parameters relating to maximal motor-evoked potential size, curve slope, and stimulus intensity leading to half-maximal motor-evoked potential size were computed from the curve fit. Good to excellent test-retest reliability was found for all corticospinal parameters at rest and during activation with 40 transcranial magnetic stimulation pulses. Through the use of curve fitting, important features of the corticospinal system can be determined with fewer stimuli than typically used for the same information. Determining the recruitment curve provides a basis to understand the state of the corticospinal system and select subject-specific parameters for transcranial magnetic stimulation testing quickly and without unnecessary exposure to magnetic stimulation. This method can be useful in individuals who have difficulty in maintaining stillness, including children and patients with motor disorders.
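    The Boltzmann fit itself is straightforward to reproduce in outline. The sketch below fits synthetic recruitment-curve data with scipy; the intensities, amplitudes, and start values are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def boltzmann(s, mep_max, s50, k):
        """Sigmoid with plateau mep_max, half-max intensity s50, inverse slope k."""
        return mep_max / (1.0 + np.exp((s50 - s) / k))

    rng = np.random.default_rng(3)
    intensity = np.linspace(30, 90, 40)            # % maximum stimulator output
    mep = boltzmann(intensity, 2.5, 55.0, 4.0) * rng.lognormal(0, 0.2, intensity.size)

    popt, _ = curve_fit(boltzmann, intensity, mep, p0=[mep.max(), 55.0, 5.0])
    mep_max, s50, k = popt
    print(f"MEPmax = {mep_max:.2f} mV, S50 = {s50:.1f}%, "
          f"peak slope = {mep_max / (4 * k):.3f} mV per % output")
    ```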

  6. Assessment of the Radiation-Equivalent of Chemotherapy Contributions in 1-Phase Radio-chemotherapy Treatment of Muscle-Invasive Bladder Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plataniotis, George A., E-mail: george.plataniotis@nhs.net; Dale, Roger G.

    2014-03-15

    Purpose: To estimate the radiation equivalent of the chemotherapy contribution to observed complete response rates in published results of 1-phase radio-chemotherapy of muscle-invasive bladder cancer. Methods and Materials: A standard logistic dose–response curve was fitted to data from radiation therapy-alone trials and then used as the platform from which to quantify the chemotherapy contribution in 1-phase radio-chemotherapy trials. Two possible mechanisms of chemotherapy effect were assumed: (1) a fixed radiation-independent contribution to local control; or (2) a fixed degree of chemotherapy-induced radiosensitization. A combination of both mechanisms was also considered. Results: The respective best-fit values of the independent chemotherapy-induced complete response (CCR) and radiosensitization (s) coefficients were 0.40 (95% confidence interval −0.07 to 0.87) and 1.30 (95% confidence interval 0.86-1.70). Independent chemotherapy effect was slightly favored by the analysis, and the derived CCR value was consistent with reports of pathologic complete response rates seen in neoadjuvant chemotherapy-alone treatments of muscle-invasive bladder cancer. The radiation equivalent of the CCR was 36.3 Gy. Conclusion: Although the data points in the analyzed radio-chemotherapy studies are widely dispersed (largely on account of the diverse range of chemotherapy schedules used), it is nonetheless possible to fit plausible-looking response curves. The methodology used here is based on a standard technique for analyzing dose-response in radiation therapy-alone studies and is capable of application to other mixed-modality treatment combinations involving radiation therapy.

  7. Transient excitation and data processing techniques employing the fast fourier transform for aeroelastic testing

    NASA Technical Reports Server (NTRS)

    Jennings, W. P.; Olsen, N. L.; Walter, M. J.

    1976-01-01

    The development of testing techniques useful in airplane ground resonance testing, wind tunnel aeroelastic model testing, and airplane flight flutter testing is presented. Included is the consideration of impulsive excitation, steady-state sinusoidal excitation, and random and pseudorandom excitation. Reasons for the selection of fast sine sweeps for transient excitation are given. The use of the fast Fourier transform dynamic analyzer (HP-5451B) is presented, together with a curve fitting data process in the Laplace domain to experimentally evaluate values of generalized mass, modal frequencies, dampings, and mode shapes. The effects of poor signal-to-noise ratios due to turbulence creating data variance are discussed. Data manipulation techniques used to overcome variance problems are also included. The experience gained by using these techniques since the early stages of the SST program is described. Data measured during 747 flight flutter tests, and SST, YC-14, and 727 empennage flutter model tests are included.

  8. Data Analysis & Statistical Methods for Command File Errors

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained with these. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
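    As one hedged reading of the regression step, the sketch below fits a Poisson generalized linear model of error counts on two invented drivers by maximum likelihood; the variables and data are stand-ins, not the mission dataset.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    files = rng.integers(5, 50, 100).astype(float) # files radiated per period
    workload = rng.uniform(0, 1, 100)              # subjective workload estimate

    lam = np.exp(-3.0 + 0.02 * files + 1.0 * workload) * files
    errors = rng.poisson(lam)                      # synthetic error counts

    X = sm.add_constant(np.column_stack([files, workload]))
    fit = sm.GLM(errors, X, family=sm.families.Poisson()).fit()  # MLE
    print(fit.params)                              # effects on the log error rate
    ```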

  9. A method to characterize average cervical spine ligament response based on raw data sets for implementation into injury biomechanics models.

    PubMed

    Mattucci, Stephen F E; Cronin, Duane S

    2015-01-01

    Experimental testing on cervical spine ligaments provides important data for advanced numerical modeling and injury prediction; however, accurate characterization of individual ligament response and determination of average mechanical properties for specific ligaments has not been adequately addressed in the literature. Existing methods are limited by a number of arbitrary choices made during the curve fits that often misrepresent the characteristic shape response of the ligaments, which is important for incorporation into numerical models to produce a biofidelic response. A method was developed to represent the mechanical properties of individual ligaments using a piece-wise curve fit with first derivative continuity between adjacent regions. The method was applied to published data for cervical spine ligaments and preserved the shape response (toe, linear, and traumatic regions) up to failure, for strain rates of 0.5 s⁻¹, 20 s⁻¹, and 150-250 s⁻¹, to determine the average force-displacement curves. Individual ligament coefficients of determination were 0.989 to 1.000, demonstrating excellent fits. This study produced a novel method in which a set of experimental ligament material property data exhibiting scatter was fit using a characteristic curve approach with a toe, linear, and traumatic region, as often observed in ligaments and tendons, and could be applied to other biological material data with a similar characteristic shape. The resultant average cervical spine ligament curves provide an accurate representation of the raw test data and the expected material property effects corresponding to varying deformation rates. Copyright © 2014 Elsevier Ltd. All rights reserved.
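    A minimal sketch of a C1-continuous piecewise fit of that character: a quadratic toe region joined to a linear region with matching value and slope at the transition. The functional form, transition point, and data are illustrative, not the paper's calibrated model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def toe_linear(d, a, d0):
        """F = a*d^2 for d < d0, then the tangent line at d0 (C1 continuity)."""
        d = np.asarray(d, dtype=float)
        line = a * d0**2 + 2 * a * d0 * (d - d0)   # value and slope match at d0
        return np.where(d < d0, a * d**2, line)

    rng = np.random.default_rng(4)
    disp = np.linspace(0, 5, 60)                   # displacement (mm)
    force = toe_linear(disp, 8.0, 1.5) + rng.normal(0, 2.0, disp.size)

    popt, _ = curve_fit(toe_linear, disp, force, p0=[5.0, 1.0])
    print(f"toe coefficient a = {popt[0]:.2f} N/mm^2, transition d0 = {popt[1]:.2f} mm")
    ```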

  10. Forgetting Curves: Implications for Connectionist Models

    ERIC Educational Resources Information Center

    Sikstrom, Sverker

    2002-01-01

    Forgetting in long-term memory, as measured in a recall or a recognition test, is faster for items encoded more recently than for items encoded earlier. Data on forgetting curves fit a power function well. In contrast, many connectionist models predict either exponential decay or completely flat forgetting curves. This paper suggests a…

  11. Nonlinear Growth Models in M"plus" and SAS

    ERIC Educational Resources Information Center

    Grimm, Kevin J.; Ram, Nilam

    2009-01-01

    Nonlinear growth curves or growth curves that follow a specified nonlinear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this article we describe how a variety of sigmoid curves can be fit using the M"plus" structural modeling program and the nonlinear…

  12. On the Early-Time Excess Emission in Hydrogen-Poor Superluminous Supernovae

    NASA Technical Reports Server (NTRS)

    Vreeswijk, Paul M.; Leloudas, Giorgos; Gal-Yam, Avishay; De Cia, Annalisa; Perley, Daniel A.; Quimby, Robert M.; Waldman, Roni; Sullivan, Mark; Yan, Lin; Ofek, Eran O.

    2017-01-01

    We present the light curves of the hydrogen-poor super-luminous supernovae (SLSNe I) PTF 12dam and iPTF 13dcc, discovered by the (intermediate) Palomar Transient Factory. Both show excess emission at early times and a slowly declining light curve at late times. The early bump in PTF 12dam is very similar in duration (approximately 10 days) and brightness relative to the main peak (2-3 mag fainter) compared to that observed in other SLSNe I. In contrast, the long-duration (greater than 30 days) early excess emission in iPTF 13dcc, whose brightness competes with that of the main peak, appears to be of a different nature. We construct bolometric light curves for both targets, and fit a variety of light-curve models to both the early bump and main peak in an attempt to understand the nature of these explosions. Even though the slope of the late-time decline in the light curves of both SLSNe is suggestively close to that expected from the radioactive decay of 56Ni and 56Co, the amount of nickel required to power the full light curves is too large considering the estimated ejecta mass. The magnetar model including an increasing escape fraction provides a reasonable description of the PTF 12dam observations. However, neither the basic nor the double-peaked magnetar model is capable of reproducing the light curve of iPTF 13dcc. A model combining a shock breakout in an extended envelope with late-time magnetar energy injection provides a reasonable fit to the iPTF 13dcc observations. Finally, we find that the light curves of both PTF 12dam and iPTF 13dcc can be adequately fit with the model involving interaction with the circumstellar medium.

  13. On The Early-Time Excess Emission In Hydrogen-Poor Superluminous Supernovae

    DOE PAGES

    Vreeswijk, Paul M.; Leloudas, Giorgos; Gal-Yam, Avishay; ...

    2017-01-18

    Here, we present the light curves of the hydrogen-poor superluminous supernovae (SLSNe I) PTF 12dam and iPTF 13dcc, discovered by the (intermediate) Palomar Transient Factory. Both show excess emission at early times and a slowly declining light curve at late times. The early bump in PTF 12dam is very similar in duration (~10 days) and brightness relative to the main peak (2-3 mag fainter) compared to that observed in other SLSNe I. In contrast, the long-duration (>30 days) early excess emission in iPTF 13dcc, whose brightness competes with that of the main peak, appears to be of a different nature. We construct bolometric light curves for both targets, and fit a variety of light-curve models to both the early bump and main peak in an attempt to understand the nature of these explosions. Even though the slope of the late-time decline in the light curves of both SLSNe is suggestively close to that expected from the radioactive decay of 56Ni and 56Co, the amount of nickel required to power the full light curves is too large considering the estimated ejecta mass. The magnetar model including an increasing escape fraction provides a reasonable description of the PTF 12dam observations. However, neither the basic nor the double-peaked magnetar model is capable of reproducing the light curve of iPTF 13dcc. A model combining a shock breakout in an extended envelope with late-time magnetar energy injection provides a reasonable fit to the iPTF 13dcc observations. Finally, we find that the light curves of both PTF 12dam and iPTF 13dcc can be adequately fit with the model involving interaction with the circumstellar medium.

  14. ON THE EARLY-TIME EXCESS EMISSION IN HYDROGEN-POOR SUPERLUMINOUS SUPERNOVAE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vreeswijk, Paul M.; Leloudas, Giorgos; Gal-Yam, Avishay

    2017-01-20

    We present the light curves of the hydrogen-poor superluminous supernovae (SLSNe I) PTF 12dam and iPTF 13dcc, discovered by the (intermediate) Palomar Transient Factory. Both show excess emission at early times and a slowly declining light curve at late times. The early bump in PTF 12dam is very similar in duration (∼10 days) and brightness relative to the main peak (2–3 mag fainter) compared to that observed in other SLSNe I. In contrast, the long-duration (>30 days) early excess emission in iPTF 13dcc, whose brightness competes with that of the main peak, appears to be of a different nature. We construct bolometric light curves for both targets, and fit a variety of light-curve models to both the early bump and main peak in an attempt to understand the nature of these explosions. Even though the slope of the late-time decline in the light curves of both SLSNe is suggestively close to that expected from the radioactive decay of 56Ni and 56Co, the amount of nickel required to power the full light curves is too large considering the estimated ejecta mass. The magnetar model including an increasing escape fraction provides a reasonable description of the PTF 12dam observations. However, neither the basic nor the double-peaked magnetar model is capable of reproducing the light curve of iPTF 13dcc. A model combining a shock breakout in an extended envelope with late-time magnetar energy injection provides a reasonable fit to the iPTF 13dcc observations. Finally, we find that the light curves of both PTF 12dam and iPTF 13dcc can be adequately fit with the model involving interaction with the circumstellar medium.

  15. Experimental study of water desorption isotherms and thin-layer convective drying kinetics of bay laurel leaves

    NASA Astrophysics Data System (ADS)

    Ghnimi, Thouraya; Hassini, Lamine; Bagane, Mohamed

    2016-12-01

    The aim of this work is to determine the desorption isotherms and the drying kinetics of bay laurel leaves (Laurus nobilis L.). The desorption isotherms were measured at three temperature levels (50, 60 and 70 °C) and at water activities ranging from 0.057 to 0.88 using the static gravimetric method. Five sorption models were used to fit the experimental desorption isotherm data. It was found that the Kuhn model offers the best fit to the experimental moisture isotherms over the investigated ranges of temperature and water activity. The net isosteric heat of water desorption was evaluated using the Clausius-Clapeyron equation and was then best correlated to equilibrium moisture content by the empirical Tsami equation. Thin-layer convective drying curves of bay laurel leaves were obtained for temperatures of 45, 50, 60 and 70 °C, relative humidities of 5, 15, 30 and 45% and air velocities of 1, 1.5 and 2 m/s. A nonlinear Levenberg-Marquardt regression procedure was used to fit the drying curves with five semi-empirical mathematical models available in the literature; R2 and χ2 were used to evaluate the goodness of fit of the models to the data. Based on the experimental drying curves, the drying characteristic curve (DCC) was established and fitted with a third-degree polynomial function. It was found that the Midilli-Kucuk model was the best semi-empirical model describing the thin-layer drying kinetics of bay laurel leaves. The effective moisture diffusivity and activation energy of bay laurel leaves were also identified.
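    To make the model-fitting step concrete, here is a hedged sketch fitting the Midilli-Kucuk form MR = a·exp(-k·t^n) + b·t to a synthetic moisture-ratio curve with scipy's curve_fit (Levenberg-Marquardt by default, matching the procedure named above); the data and start values are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def midilli_kucuk(t, a, k, n, b):
        return a * np.exp(-k * t**n) + b * t

    rng = np.random.default_rng(5)
    t = np.linspace(0.01, 300.0, 80)               # drying time (min)
    mr = midilli_kucuk(t, 1.0, 0.02, 1.1, -1e-4) + rng.normal(0, 0.01, t.size)

    popt, _ = curve_fit(midilli_kucuk, t, mr, p0=[1.0, 0.01, 1.0, 0.0], maxfev=10000)
    res = mr - midilli_kucuk(t, *popt)
    r2 = 1 - np.sum(res**2) / np.sum((mr - mr.mean())**2)
    chi2 = np.sum(res**2) / (t.size - len(popt))   # reduced chi-square
    print(popt, r2, chi2)
    ```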

  16. The relationship between offspring size and fitness: integrating theory and empiricism.

    PubMed

    Rollinson, Njal; Hutchings, Jeffrey A

    2013-02-01

    How parents divide the energy available for reproduction between size and number of offspring has a profound effect on parental reproductive success. Theory indicates that the relationship between offspring size and offspring fitness is of fundamental importance to the evolution of parental reproductive strategies: this relationship predicts the optimal division of resources between size and number of offspring, it describes the fitness consequences for parents that deviate from optimality, and its shape can predict the most viable type of investment strategy in a given environment (e.g., conservative vs. diversified bet-hedging). Many previous attempts to estimate this relationship and the corresponding value of optimal offspring size have been frustrated by a lack of integration between theory and empiricism. In the present study, we draw from C. Smith and S. Fretwell's classic model to explain how a sound estimate of the offspring size-fitness relationship can be derived with empirical data. We evaluate what measures of fitness can be used to model the offspring size-fitness curve and optimal size, as well as which statistical models should and should not be used to estimate offspring size-fitness relationships. To construct the fitness curve, we recommend that offspring fitness be measured as survival up to the age at which the instantaneous rate of offspring mortality becomes random with respect to initial investment. Parental fitness is then expressed in ecologically meaningful, theoretically defensible, and broadly comparable units: the number of offspring surviving to independence. Although logistic and asymptotic regression have been widely used to estimate offspring size-fitness relationships, the former provides relatively unreliable estimates of optimal size when offspring survival and sample sizes are low, and the latter is unreliable under all conditions. We recommend that the Weibull-1 model be used to estimate this curve because it provides modest improvements in prediction accuracy under experimentally relevant conditions.

  17. HYDRORECESSION: A toolbox for streamflow recession analysis

    NASA Astrophysics Data System (ADS)

    Arciniega, S.

    2015-12-01

    Streamflow recession curves are hydrological signatures that allow study of the relationship between groundwater storage and baseflow and/or low flows at the catchment scale. Recent studies have shown that streamflow recession analysis can be quite sensitive to the combination of different models, extraction techniques and parameter estimation methods. In order to better characterize streamflow recession curves, new methodologies combining multiple approaches have been recommended. The HYDRORECESSION toolbox, presented here, is a Matlab graphical user interface developed to analyse streamflow recession time series, with tools for parameterizing linear and nonlinear storage-outflow relationships through four of the most useful recession models (Maillet, Boussinesq, Coutagne and Wittenberg). The toolbox includes four parameter-fitting techniques (linear regression, lower envelope, data binning and mean squared error) and three different methods to extract hydrograph recession segments (Vogel, Brutsaert and Aksoy). In addition, the toolbox has a module that separates the baseflow component from the observed hydrograph using the inverse reservoir algorithm. Potential applications of HYDRORECESSION include model parameter analysis, hydrological regionalization and classification, baseflow index estimates, and catchment-scale recharge and low-flow modelling, among others. HYDRORECESSION is freely available for non-commercial and academic purposes.
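    For a flavor of the parameter-fitting step, the sketch below estimates a nonlinear storage-outflow relation of the widely used form -dQ/dt = aQ^b by linear regression in log-log space. The synthetic recession limb and the choice of this particular form are assumptions for illustration.

    ```python
    import numpy as np

    t = np.arange(0.0, 30.0, 1.0)                  # days
    Q = 10.0 * (1 + 0.05 * t) ** (-2.0)            # synthetic nonlinear recession

    dQdt = -np.gradient(Q, t)                      # recession rate
    ok = dQdt > 0
    slope, intercept = np.polyfit(np.log(Q[ok]), np.log(dQdt[ok]), 1)
    a, b = np.exp(intercept), slope
    print(f"-dQ/dt = {a:.3g} * Q^{b:.2f}")         # b = 1 linear reservoir; here ~1.5
    ```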

  18. Feasibility of Rapid Multitracer PET Tumor Imaging

    NASA Astrophysics Data System (ADS)

    Kadrmas, D. J.; Rust, T. C.

    2005-10-01

    Positron emission tomography (PET) can characterize different aspects of tumor physiology using various tracers. PET scans are usually performed using only one tracer since there is no explicit signal for distinguishing multiple tracers. We tested the feasibility of rapidly imaging multiple PET tracers using dynamic imaging techniques, where the signals from each tracer are separated based upon differences in tracer half-life, kinetics, and distribution. Time-activity curve populations for FDG, acetate, ATSM, and PTSM were simulated using appropriate compartment models, and noisy dual-tracer curves were computed by shifting and adding the single-tracer curves. Single-tracer components were then estimated from dual-tracer data using two methods: principal component analysis (PCA)-based fits of single-tracer components to multitracer data, and parallel multitracer compartment models estimating single-tracer rate parameters from multitracer time-activity curves. The PCA analysis found that there is information content present for separating multitracer data, and that tracer separability depends upon tracer kinetics, injection order and timing. Multitracer compartment modeling recovered rate parameters for individual tracers with good accuracy but somewhat higher statistical uncertainty than single-tracer results when the injection delay was >10 min. These approaches to processing rapid multitracer PET data may potentially provide a new tool for characterizing multiple aspects of tumor physiology in vivo.

  19. Molecular dynamics simulations of the melting curve of NiAl alloy under pressure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Wenjin; Peng, Yufeng; Liu, Zhongli, E-mail: zhongliliu@yeah.net

    2014-05-15

    The melting curve of B2-NiAl alloy under pressure has been investigated using the molecular dynamics technique and an embedded atom method (EAM) potential. The melting temperatures were determined with two approaches, the one-phase and the two-phase methods. The first simulates homogeneous melting, while the second involves heterogeneous melting of the material. Both approaches reduce superheating effectively, and their results are close to each other at the applied pressures. By fitting the well-known Simon equation to our melting data, we obtained the melting curves for NiAl: 1783(1 + P/9.801)^0.298 (one-phase approach) and 1850(1 + P/12.806)^0.357 (two-phase approach). The good agreement of the resulting equations of state and the zero-pressure melting point (calc. 1850 ± 25 K, exp. 1911 K) with experiment supports the correctness of these results. These melting data fill the absence of experimental high-pressure melting data for NiAl. To check the transferability of this EAM potential, we have also predicted the melting curves of pure nickel and pure aluminum. Results show the calculated melting point of nickel agrees well with experiment at zero pressure, while the melting point of aluminum is slightly higher than experiment.
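    A hedged sketch of the Simon-equation fit: the code below recovers T0, a, and c from melting points seeded with the paper's one-phase curve plus noise, standing in for the actual MD melting data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def simon(P, T0, a, c):
        """Simon melting equation Tm(P) = T0 * (1 + P/a)^c."""
        return T0 * (1.0 + P / a) ** c

    rng = np.random.default_rng(6)
    P = np.linspace(0, 100, 12)                    # pressure (GPa)
    Tm = simon(P, 1783.0, 9.801, 0.298) + rng.normal(0, 20, P.size)

    popt, _ = curve_fit(simon, P, Tm, p0=[1800.0, 10.0, 0.3])
    print("T0 = {:.0f} K, a = {:.2f} GPa, c = {:.3f}".format(*popt))
    ```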

  20. A modified CoRoT detrend algorithm and the discovery of a new planetary companion

    NASA Astrophysics Data System (ADS)

    Boufleur, Rodrigo C.; Emilio, Marcelo; Janot-Pacheco, Eduardo; Andrade, Laerte; Ferraz-Mello, Sylvio; do Nascimento, José-Dias, Jr.; de La Reza, Ramiro

    2018-01-01

    We present MCDA, a modification of the COnvection ROtation and planetary Transits (CoRoT) detrend algorithm (CDA), suitable for detrending chromatic light curves. By means of robust statistics and better handling of short-term variability, the implementation decreases systematic light-curve variations and improves the detection of exoplanets when compared with the original algorithm. All CoRoT chromatic light curves (a total of 65,655) were analysed with our algorithm. Dozens of new transit candidates and all previously known CoRoT exoplanets were rediscovered in those light curves using a box-fitting algorithm. For three of the new cases, spectroscopic measurements of the candidates' host stars were retrieved from the ESO Science Archive Facility and used to calculate stellar parameters and, in the best cases, radial velocities. In addition to our improved detrend technique, we announce the discovery of a planet that orbits a 0.79 (+0.08/-0.09) R⊙ star with a period of 6.71837 ± 0.00001 d and has a radius of 0.57 (+0.06/-0.05) RJ and a mass of 0.15 ± 0.10 MJ. We also present the analysis of two cases in which the parameters found suggest the existence of possible planetary companions.
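    The box-fitting search is widely available off the shelf; as a hedged stand-in for the pipeline, this sketch recovers an injected transit with astropy's BoxLeastSquares. The cadence, noise, and depth are invented, with the period chosen to echo the one reported above.

    ```python
    import numpy as np
    from astropy.timeseries import BoxLeastSquares

    rng = np.random.default_rng(12)
    t = np.arange(0, 60, 0.007)                    # days, ~10-minute cadence
    flux = 1 + rng.normal(0, 5e-4, t.size)
    flux[(t % 6.71837) < 0.12] -= 1e-3             # injected box-shaped transit

    bls = BoxLeastSquares(t, flux)
    result = bls.autopower(0.12)                   # scan trial periods at this duration
    best = result.period[np.argmax(result.power)]
    print(f"recovered period: {best:.4f} d")
    ```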

  1. Plasma breakdown in a capacitively-coupled radiofrequency argon discharge

    NASA Astrophysics Data System (ADS)

    Smith, H. B.; Charles, C.; Boswell, R. W.

    1998-10-01

    Low pressure, capacitively-coupled rf discharges are widely used in research and commercial ventures. Understanding of the non-equilibrium processes which occur in these discharges during breakdown is of interest, both for industrial applications and for a deeper understanding of fundamental plasma behaviour. The voltage required to break down the discharge, V_brk, has long been known to be a strong function of the product of the neutral gas pressure and the electrode separation (pd). This paper investigates the dependence of V_brk on pd in rf systems using experimental, computational and analytic techniques. Experimental measurements of V_brk are made for pressures in the range 1-500 mTorr and electrode separations of 2-20 cm. A Paschen-style curve for breakdown in rf systems is developed which has the minimum breakdown voltage at a much smaller pd value, and breakdown voltages which are significantly lower overall, than for Paschen curves obtained from dc discharges. The differences between the two systems are explained using a simple analytic model. A Particle-in-Cell simulation is used to investigate a similar pd range and examine the effect of the secondary emission coefficient on the rf breakdown curve, particularly at low pd values. Analytic curves are fitted to both experimental and simulation results.

  2. On the Methodology of Studying Aging in Humans

    DTIC Science & Technology

    1961-01-01

    Prediction of death rates: the relation of death rate to age has been extensively studied for over 100 years. As an illustration, recent death rates for... log death rates appear to be linear, the simpler Gompertz curve fits closely. While on this subject of the Makeham-Gompertz function, it should be... Makeham-Gompertz curve to 5-year age-specific death rates. Each fitting provided estimates of the parameters a, β, and log c for each of the five-year
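    For reference, the Makeham-Gompertz hazard named in the fragment is conventionally written as below; this is a hedged reconstruction from standard actuarial usage (the original equation did not survive extraction), with the parameters matching the a, β, and log c estimated in each fit. Setting a = 0 recovers the plain Gompertz law, whose log hazard is linear in age.

    ```latex
    \mu(x) = a + \beta\, c^{x},
    \qquad
    \log\bigl(\mu(x) - a\bigr) = \log\beta + x\,\log c .
    ```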

  3. Statistically generated weighted curve fit of residual functions for modal analysis of structures

    NASA Technical Reports Server (NTRS)

    Bookout, P. S.

    1995-01-01

    A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to the present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second order polynomial curve fit is required to obtain the residual flexibility term. A weighting function of the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
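    A minimal sketch of a variance-weighted second-order polynomial fit in this spirit: weights are generated from the scatter between neighboring data points and passed to numpy's polyfit. The weighting recipe, data, and names are illustrative assumptions, not the report's exact statistical procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    f = np.linspace(1, 50, 100)                        # frequency (Hz)
    # Synthetic residual function: flat at low frequency, slight upward curvature,
    # with noise ("raggedness") that grows with frequency.
    resid = 1e-4 + 2e-8 * f**2 + rng.normal(0, 1e-6 * (1 + f / 10), f.size)

    # Local scatter from neighboring-point differences -> weights ~ 1/sigma.
    local_sd = np.convolve(np.abs(np.diff(resid, prepend=resid[0])),
                           np.ones(5) / 5, mode="same")
    w = 1.0 / np.maximum(local_sd, 1e-12)

    coeffs = np.polyfit(f, resid, 2, w=w)              # weighted 2nd-order fit
    residual_flexibility = np.polyval(coeffs, 0.0)     # low-frequency asymptote
    print(coeffs, residual_flexibility)
    ```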

  4. Waveform fitting and geometry analysis for full-waveform lidar feature extraction

    NASA Astrophysics Data System (ADS)

    Tsai, Fuan; Lai, Jhe-Syuan; Cheng, Yi-Hsiu

    2016-10-01

    This paper presents a systematic approach that integrates spline curve fitting and geometry analysis to extract full-waveform LiDAR features for land-cover classification. The cubic smoothing spline algorithm is used to fit the waveform curve of the received LiDAR signals. After that, the local peak locations of the waveform curve are detected using a second derivative method. According to the detected local peak locations, commonly used full-waveform features such as full width at half maximum (FWHM) and amplitude can then be obtained. In addition, the number of peaks, time difference between the first and last peaks, and the average amplitude are also considered as features of LiDAR waveforms with multiple returns. Based on the waveform geometry, dynamic time-warping (DTW) is applied to measure the waveform similarity. The sum of the absolute amplitude differences that remain after time-warping can be used as a similarity feature in a classification procedure. An airborne full-waveform LiDAR data set was used to test the performance of the developed feature extraction method for land-cover classification. Experimental results indicate that the developed spline curve-fitting algorithm and geometry analysis can extract helpful full-waveform LiDAR features to produce better land-cover classification than conventional LiDAR data and feature extraction methods. In particular, the multiple-return features and the dynamic time-warping index can improve the classification results significantly.
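    The fit-then-differentiate steps can be sketched with scipy's smoothing spline. The synthetic two-return waveform, smoothing factor, and peak logic below are assumptions for illustration, not the paper's tuned pipeline.

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(8)
    t = np.linspace(0, 100, 400)                       # time bins (ns)
    wave = (np.exp(-((t - 40) / 4) ** 2)               # first return
            + 0.6 * np.exp(-((t - 62) / 5) ** 2)       # second, weaker return
            + rng.normal(0, 0.02, t.size))

    spline = UnivariateSpline(t, wave, k=3, s=0.2)     # cubic smoothing spline
    d1, d2 = spline.derivative(1), spline.derivative(2)

    # Peaks: downward zero-crossings of the first derivative, negative curvature.
    down = np.diff(np.sign(d1(t))) < 0
    peaks = t[1:][down & (d2(t[1:]) < 0)]
    print("peak locations (ns):", peaks)               # ~40 and ~62
    print("amplitudes:", spline(peaks))                # basis for FWHM/amplitude features
    ```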

  5. Graphical approach to assess the soil fertility evaluation model validity for rice (case study: southern area of Merapi Mountain, Indonesia)

    NASA Astrophysics Data System (ADS)

    Julianto, E. A.; Suntoro, W. A.; Dewi, W. S.; Partoyo

    2018-03-01

    Climate change has been reported to exacerbate the degradation of land resources, including soil fertility decline. Appropriate validation of soil fertility evaluation models could reduce the risk of climate change effects on plant cultivation. This study aims to assess the validity of soil fertility evaluation models using a graphical approach. The models evaluated were the Indonesian Soil Research Center (PPT) version, the FAO Unesco version, and the Kyuma version. Each model was then correlated with rice production (dry grain weight/GKP). The goodness of fit of each model, together with the regression coefficient (R2), can be tested to evaluate the quality and validity of the model. This research used the Eviews 9 programme with a graphical approach. The results comprise three curves: actual, fitted, and residual. If the actual and fitted curves are widely apart or irregular, the quality of the model is poor, or there are many other factors still not included in the model (large residual), and vice versa. Conversely, if the actual and fitted curves show exactly the same shape, all factors have already been included in the model. Modification of the standard soil fertility evaluation models can improve the quality and validity of a model.

  6. Methods for Performing Survival Curve Quality-of-Life Assessments.

    PubMed

    Sumner, Walton; Ding, Eric; Fischer, Irene D; Hagen, Michael D

    2014-08-01

    Many medical decisions involve an implied choice between alternative survival curves, typically with differing quality of life. Common preference assessment methods neglect this structure, creating some risk of distortions. Survival curve quality-of-life assessments (SQLA) were developed from Gompertz survival curves fitting the general population's survival. An algorithm was developed to generate relative discount rate-utility (DRU) functions from a standard survival curve and health state and an equally attractive alternative curve and state. A least-mean-squared-distance algorithm was developed to describe how nearly 3 or more DRU functions intersect. These techniques were implemented in a program called X-Trade and tested. SQLA scenarios can portray realistic treatment choices. A side effect scenario portrays one prototypical choice, to extend life while experiencing some loss, such as an amputation. A risky treatment scenario portrays procedures with an initial mortality risk. A time trade scenario mimics conventional time tradeoffs. Each SQLA scenario yields DRU functions with distinctive shapes, such as sigmoid curves or vertical lines. One SQLA can imply a discount rate or utility if the other value is known and both values are temporally stable. Two SQLA exercises imply a unique discount rate and utility if the inferred DRU functions intersect. Three or more SQLA results can quantify uncertainty or inconsistency in discount rate and utility estimates. Pilot studies suggested that many subjects could learn to interpret survival curves and do SQLA. SQLA confuse some people. Compared with SQLA, standard gambles quantify very low utilities more easily, and time tradeoffs are simpler for high utilities. When discount rates approach zero, time tradeoffs are as informative and easier to do than SQLA. SQLA may complement conventional utility assessment methods. © The Author(s) 2014.

  7. Physical fitness reference standards in fibromyalgia: The al-Ándalus project.

    PubMed

    Álvarez-Gallardo, I C; Carbonell-Baeza, A; Segura-Jiménez, V; Soriano-Maldonado, A; Intemann, T; Aparicio, V A; Estévez-López, F; Camiletti-Moirón, D; Herrador-Colmenero, M; Ruiz, J R; Delgado-Fernández, M; Ortega, F B

    2017-11-01

    We aimed (1) to report age-specific physical fitness levels in a representative sample of people with fibromyalgia from Andalusia; and (2) to compare the fitness levels of people with fibromyalgia with non-fibromyalgia controls. This cross-sectional study included 468 (21 men) patients with fibromyalgia and 360 (55 men) controls. The fibromyalgia sample was geographically representative of southern Spain. Physical fitness was assessed with the Senior Fitness Test battery plus the handgrip test. We applied the Generalized Additive Model for Location, Scale and Shape to calculate percentile curves for women and fitted mean curves using linear regression for men. Our results show that people with fibromyalgia performed worse in all fitness tests than controls (P < 0.001) across all age ranges (P < 0.001). This study provides a comprehensive description of age-specific physical fitness levels among patients with fibromyalgia and controls in a large sample from southern Spain. Physical fitness levels of people with fibromyalgia from Andalusia are very low in comparison with age-matched healthy controls. This information could be useful for correctly interpreting physical fitness assessments and for helping health care providers to identify individuals at risk of losing physical independence. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. Corrections of arterial input function for dynamic H215O PET to assess perfusion of pelvic tumours: arterial blood sampling versus image extraction

    NASA Astrophysics Data System (ADS)

    Lüdemann, L.; Sreenivasa, G.; Michel, R.; Rosner, C.; Plotkin, M.; Felix, R.; Wust, P.; Amthauer, H.

    2006-06-01

    Assessment of perfusion with 15O-labelled water (H215O) requires measurement of the arterial input function (AIF). The arterial time activity curve (TAC) measured using the peripheral sampling scheme requires corrections for delay and dispersion. In this study, parametrizations with and without arterial spillover correction for fitting of the tissue curve are evaluated. Additionally, a completely noninvasive method for generation of the AIF from a dynamic positron emission tomography (PET) acquisition is applied to assess perfusion of pelvic tumours. This method uses a volume of interest (VOI) to extract the TAC from the femoral artery. The VOI TAC is corrected for spillover using a separate tissue TAC and for recovery by determining the recovery coefficient on a coregistered CT data set. The techniques were applied in five patients with pelvic tumours who underwent a total of 11 examinations. Delay and dispersion correction of the blood TAC without arterial spillover correction yielded solutions inconsistent with physiology in seven examinations. Correction of arterial spillover increased the fitting accuracy and yielded consistent results in all patients. Generation of an AIF from PET image data was investigated as an alternative to arterial blood sampling and was shown to have an intrinsic potential to determine the AIF noninvasively and reproducibly. The AIF extracted from a VOI in a dynamic PET scan was similar in shape to the blood AIF but yielded significantly higher tissue perfusion values (mean of 104.0 ± 52.0%) and lower partition coefficients (-31.6 ± 24.2%). The perfusion values and partition coefficients determined with the VOI technique have to be corrected in order to compare the results with those of studies using a blood AIF.

  9. Radiative Heating Methodology for the Huygens Probe

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Hollis, Brian R.; Sutton, Kenneth

    2007-01-01

    The radiative heating environment for the Huygens probe near peak heating conditions for Titan entry is investigated in this paper. The task of calculating the radiation-coupled flowfield, accounting for non-Boltzmann and non-optically thin radiation, is simplified to a rapid yet accurate calculation. This is achieved by using the viscous-shock layer (VSL) technique for the stagnation-line flowfield calculation and a modified smeared rotational band (SRB) model for the radiation calculation. These two methods provide a computationally efficient alternative to a Navier-Stokes flowfield and line-by-line radiation calculation. The results of the VSL technique are shown to provide an excellent comparison with the Navier-Stokes results of previous studies. It is shown that a conventional SRB approach is inadequate for the partially optically-thick conditions present in the Huygens shock-layer around the peak heating trajectory points. A simple modification is proposed to the SRB model that improves its accuracy in these partially optically-thick conditions. This modified approach, labeled herein as SRBC, is compared throughout this study with a detailed line-by-line (LBL) calculation and is shown to compare within 5% in all cases. The SRBC method requires many orders-of-magnitude less computational time than the LBL method, which makes it ideal for coupling to the flowfield. The application of a collisional-radiative (CR) model for determining the population of the CN electronic states, which govern the radiation for Huygens entry, is discussed and applied. The non-local absorption term in the CR model is formulated in terms of an escape factor, which is then curve-fit with temperature. Although the curve-fit is an approximation, it is shown to compare well with the exact escape factor calculation, which requires a computationally intensive iteration procedure.

  10. Nongaussian distribution curve of heterophorias among children.

    PubMed

    Letourneau, J E; Giroux, R

    1991-02-01

    The purpose of this study was to measure the distribution curve of horizontal and vertical phorias among children. Kolmogorov-Smirnov goodness-of-fit tests showed that these distribution curves were not Gaussian among 2048 children aged 6 to 13 years. The distribution curves of horizontal phoria at far and of vertical phorias at far and at near were leptokurtic; the distribution curve of horizontal phoria at near was platykurtic. No variation of the distribution curve of heterophorias with age was observed. Comparisons of any individual findings with the general distribution curve should take the non-Gaussian distribution curve of heterophorias into account.

  11. Value of the cumulative sum test for the assessment of a learning curve: Application to the introduction of patient-specific instrumentation for total knee arthroplasty in an academic department.

    PubMed

    De Gori, Marco; Adamczewski, Benjamin; Jenny, Jean-Yves

    2017-06-01

    The purpose of the study was to use the cumulative summation (CUSUM) test to assess the learning curve during the introduction of a new surgical technique (patient-specific instrumentation) for total knee arthroplasty (TKA) in an academic department. The first 50 TKAs operated on at an academic department using patient-specific templates (PSTs) were scheduled to enter the study. All patients had a preoperative computed tomography scan evaluation to plan bone resections. The PSTs were positioned intraoperatively according to the best-fit technique and their three-dimensional orientation was recorded by a navigation system. The position of the femur and tibia PST was compared to the planned position for four items for each component: coronal and sagittal orientation, and medial and lateral height of resection. Items were summarized to obtain knee, femur and tibia PST scores, respectively. These scores were plotted in chronological order and included in a CUSUM analysis. The tested hypothesis was that the PST process for TKA was immediately under control after its introduction. The CUSUM test showed that positioning of the PST significantly differed from the target throughout the study. There was a significant difference between all scores and the maximal score. No case obtained the maximal score of eight points. The study was interrupted after 20 cases because of this negative evaluation. The CUSUM test is effective in monitoring the learning curve when introducing a new surgical procedure. Introducing PST for TKA in an academic department may be associated with a long-lasting learning curve. The study was registered on the clinical.gov website (Identifier NCT02429245). Copyright © 2017 Elsevier B.V. All rights reserved.
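    A minimal CUSUM sketch: accumulate the deviation of each case's score from the target and flag when a decision limit is crossed. The scores, the 8-point target, and the limit below are invented stand-ins, not the trial's actual values.

    ```python
    import numpy as np

    scores = np.array([6, 7, 5, 6, 8, 5, 6, 7, 5, 6, 5, 7, 6, 5, 6, 6, 5, 7, 6, 5])
    target = 8.0                                   # maximal PST positioning score

    cusum = np.cumsum(target - scores)             # grows while performance lags target
    h = 10.0                                       # assumed decision limit
    alarms = np.nonzero(cusum > h)[0]
    print(cusum)
    print("out of control from case:", alarms[0] + 1 if alarms.size else None)
    ```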

  12. Phase-Based Adaptive Estimation of Magnitude-Squared Coherence Between Turbofan Internal Sensors and Far-Field Microphone Signals

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2015-01-01

    A cross-power-spectrum-phase-based adaptive technique is discussed that iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function that fits the measured cross-power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data were obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude-squared coherence data. The procedure also provides an estimate of the cross-spectrum phase offset.
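    The core estimate can be sketched directly: for two coherent signals offset by tau, the cross-power-spectrum phase is phi(f) = -2*pi*f*tau, so a line fitted to the measured phase yields the delay. This sketch uses a plain periodogram rather than the paper's adaptive cost function, and all numbers are illustrative.

    ```python
    import numpy as np

    fs, n, tau = 1000.0, 4096, 0.004               # sample rate (Hz), length, 4 ms delay
    rng = np.random.default_rng(9)
    x = rng.normal(0, 1, n)
    y = np.roll(x, int(tau * fs)) + rng.normal(0, 0.1, n)   # delayed, noisy copy

    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    f = np.fft.rfftfreq(n, 1 / fs)
    phase = np.unwrap(np.angle(Y * np.conj(X)))    # cross-power spectrum phase

    band = (f > 5) & (f < 100)                     # fit where coherence is high
    slope = np.polyfit(f[band], phase[band], 1)[0]
    print(f"estimated delay: {-slope / (2 * np.pi) * 1e3:.2f} ms")
    ```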

  13. Ratio of sequential chromatograms for quantitative analysis and peak deconvolution: Application to standard addition method and process monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Synovec, R.E.; Johnson, E.L.; Bahowick, T.J.

    1990-08-01

    This paper describes a new technique for data analysis in chromatography, based on taking the point-by-point ratio of sequential chromatograms that have been base line corrected. This ratio chromatogram provides a robust means for the identification and the quantitation of analytes. In addition, the appearance of an interferent is made highly visible, even when it coelutes with desired analytes. For quantitative analysis, the region of the ratio chromatogram corresponding to the pure elution of an analyte is identified and is used to calculate a ratio value equal to the ratio of concentrations of the analyte in sequential injections. For the ratio value calculation, a variance-weighted average is used, which compensates for the varying signal-to-noise ratio. This ratio value, or equivalently the percent change in concentration, is the basis of a chromatographic standard addition method and an algorithm to monitor analyte concentration in a process stream. In the case of overlapped peaks, a spiking procedure is used to calculate both the original concentration of an analyte and its signal contribution to the original chromatogram. Thus, quantitation and curve resolution may be performed simultaneously, without peak modeling or curve fitting. These concepts are demonstrated by using data from ion chromatography, but the technique should be applicable to all chromatographic techniques.

  14. Large-scale subject-specific cerebral arterial tree modeling using automated parametric mesh generation for blood flow simulation.

    PubMed

    Ghaffari, Mahsa; Tangen, Kevin; Alaraj, Ali; Du, Xinjian; Charbel, Fady T; Linninger, Andreas A

    2017-12-01

    In this paper, we present a novel technique for automatic parametric mesh generation of subject-specific cerebral arterial trees. This technique generates high-quality and anatomically accurate computational meshes for fast blood flow simulations, extending the scope of 3D vascular modeling to a large portion of cerebral arterial trees. For this purpose, a parametric meshing procedure was developed to automatically decompose the vascular skeleton, extract geometric features and generate hexahedral meshes using a body-fitted coordinate system that optimally follows the vascular network topology. To validate the anatomical accuracy of the reconstructed vasculature, we performed statistical analysis to quantify the alignment between parametric meshes and raw vascular images using receiver operating characteristic (ROC) curves. Geometric accuracy evaluation showed agreement between the constructed mesh and raw MRA data sets, with an area under the curve of 0.87. Parametric meshing yielded, on average, 36.6% and 21.7% improvements in orthogonal and equiangular skew quality over unstructured tetrahedral meshes. The parametric meshing and processing pipeline constitutes an automated technique to reconstruct and simulate blood flow throughout a large portion of the cerebral arterial tree down to the level of pial vessels. This study is the first step towards fast large-scale subject-specific hemodynamic analysis for clinical applications. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. UTM: Universal Transit Modeller

    NASA Astrophysics Data System (ADS)

    Deeg, Hans J.

    2014-12-01

    The Universal Transit Modeller (UTM) is a light-curve simulator for all kinds of transiting or eclipsing configurations between arbitrary numbers of several types of objects, which may be stars, planets, planetary moons, and planetary rings. A separate fitting program, UFIT (Universal Fitter) is part of the UTM distribution and may be used to derive best fits to light-curves for any set of continuously variable parameters. UTM/UFIT is written in IDL code and its source is released in the public domain under the GNU General Public License.

  16. SU-D-BRCD-06: Measurement of Elekta Electron Energy Spectra Using a Small Magnetic Spectrometer.

    PubMed

    Hogstrom, K; McLaughlin, D; Gibbons, J; Shikhaliev, P; Clarke, T; Henderson, A; Taylor, D; Shagin, P; Liang, E

    2012-06-01

    To demonstrate how a small magnetic spectrometer can measure the energy spectra of seven electron beams on an Elekta Infinity tuned to match beams on a previously commissioned machine. Energy spectra were determined from measurements of intensity profiles on 6″-long computed radiographic (CR) strips after deflecting a narrow incident beam using a small (28 lbs.) permanent magnetic spectrometer. CR plate exposures (<1 cGy) required special beam reduction techniques and bremsstrahlung shielding. Curves of CR intensity (corrected for non-linearity and background) versus position were transformed into energy spectra using the transformation from position (x) on the CR plate to energy (E) based on the Lorentz force law. The effective magnetic field and its effective edge, parameters in the transformation, were obtained by fitting a plot of most probable incident energy (determined from practical range) to the peak position. The calibration curve (E vs. x) fit gave 0.423 Tesla for the effective magnetic field. Most resulting energy spectra were characterized by a single, asymmetric peak with peak position and FWHM increasing monotonically with beam energy. Only the 9-MeV spectrum was atypical, possibly indicating suboptimal beam tuning. These results compared well with energy spectra independently determined by adjusting each spectrum until the EGSnrc Monte Carlo calculated percent depth-dose curve agreed well with the corresponding measured curve. Results indicate that this spectrometer and methodology could be useful for measuring energy spectra of clinical electron beams at isocenter. Future work will (1) remove the small effect of the detector response function (due to pinhole size and incident angular spread) from the energy spectra, (2) extract the energy spectra exiting the accelerator from current results, (3) use the spectrometer to compare energy spectra of matched beams among our clinical sites, and (4) modify the spectrometer to utilize radiochromic film. © 2012 American Association of Physicists in Medicine.
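    The position-to-energy transformation rests on the Lorentz force relation p = eBr. As a hedged sketch with an idealized geometry (the bend radius is taken as known here, rather than reconstructed from the plate position and the effective field edge), the electron kinetic energy follows relativistically from the fitted effective field:

    ```python
    import numpy as np

    E0 = 0.511                                     # electron rest energy (MeV)

    def kinetic_energy_MeV(r_m, B_T=0.423):
        """Electron kinetic energy for bend radius r (m) in effective field B (T).

        Uses p*c [MeV] = 299.792458 * B [T] * r [m] (Lorentz force law), with B
        defaulting to the effective field value reported above.
        """
        pc = 299.792458 * B_T * r_m
        return np.sqrt(pc**2 + E0**2) - E0

    for r in (0.05, 0.10, 0.15):                   # illustrative bend radii
        print(f"r = {r:.2f} m -> E = {kinetic_energy_MeV(r):.2f} MeV")
    ```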

  17. Transformation Model Choice in Nonlinear Regression Analysis of Fluorescence-based Serial Dilution Assays

    PubMed Central

    Fong, Youyi; Yu, Xuesong

    2016-01-01

    Many modern serial dilution assays are based on fluorescence intensity (FI) readouts. We study optimal transformation model choice for fitting five parameter logistic curves (5PL) to FI-based serial dilution assay data. We first develop a generalized least squares-pseudolikelihood type algorithm for fitting heteroscedastic logistic models. Next we show that the 5PL and log 5PL functions can approximate each other well. We then compare four 5PL models with different choices of log transformation and variance modeling through a Monte Carlo study and real data. Our findings are that the optimal choice depends on the intended use of the fitted curves. PMID:27642502
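    A hedged sketch of a 5PL fit on the raw FI scale (one of the transformation choices the paper compares; replacing the response with log(FI) gives a log-transformed variant). The parameterization, dilution series, and start values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def fpl(x, a, d, c, b, g):
        """5PL: asymptote a at low dose, d at high dose, scale c, slope b, asymmetry g."""
        return d + (a - d) / (1.0 + (x / c) ** b) ** g

    conc = 2.0 ** np.arange(-5, 6)                 # two-fold serial dilutions
    rng = np.random.default_rng(10)
    fi = fpl(conc, 50, 30000, 1.0, 1.2, 0.8) * rng.lognormal(0, 0.05, conc.size)

    popt, _ = curve_fit(fpl, conc, fi, p0=[fi.min(), fi.max(), 1.0, 1.0, 1.0],
                        maxfev=20000)
    print(dict(zip("a d c b g".split(), np.round(popt, 3))))
    ```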

  18. Infrared spectroscopy as a screening technique for colitis

    NASA Astrophysics Data System (ADS)

    Titus, Jitto; Ghimire, Hemendra; Viennois, Emilie; Merlin, Didier; Perera, A. G. Unil

    2017-05-01

    There remains a great need for diagnosis of inflammatory bowel disease (IBD), for which the current technique, colonoscopy, is not cost-effective and presents a non-negligible risk of complications. Attenuated Total Reflectance Fourier Transform Infrared (ATR-FTIR) spectroscopy is a new screening technique for evaluating colitis. Comparing infrared spectra of sera to study the differences between them can prove challenging because the complexity of their biological constituents gives rise to a plethora of vibrational modes. Overcoming these inherent difficulties of infrared spectral analysis, which involve highly overlapping absorbance peaks, by curve fitting the data to improve the resolution is discussed. The proposed technique uses dried serum from colitic and normal wild-type mice to obtain ATR-FTIR spectra that effectively differentiate colitic mice from normal mice. Using this method, the Amide I group frequency (specifically, the alpha-helix to beta-sheet ratio of the protein secondary structure) was identified as a disease-associated spectral signature, in addition to the previously reported glucose and mannose signatures in sera of chronic and acute mouse models of colitis. Hence, this technique will be able to identify changes in sera due to various diseases.
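    As a concrete instance of curve fitting overlapping bands, the sketch below resolves two synthetic Amide I sub-bands with a sum of Gaussians; the centers near 1655 and 1630 cm⁻¹ are typical literature assignments for alpha helix and beta sheet, and the spectrum itself is invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def two_gauss(x, a1, c1, w1, a2, c2, w2):
        return (a1 * np.exp(-((x - c1) / w1) ** 2)
                + a2 * np.exp(-((x - c2) / w2) ** 2))

    wn = np.linspace(1600, 1700, 200)              # wavenumber (cm^-1)
    rng = np.random.default_rng(11)
    absorb = two_gauss(wn, 0.8, 1655, 10, 0.5, 1630, 9) + rng.normal(0, 0.01, wn.size)

    popt, _ = curve_fit(two_gauss, wn, absorb, p0=[1, 1650, 12, 0.4, 1625, 12])
    a1, _, w1, a2, _, w2 = popt
    print(f"alpha-helix / beta-sheet area ratio: {(a1 * w1) / (a2 * w2):.2f}")
    ```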

  19. A Numerical Method for Calculating Stellar Occultation Light Curves from an Arbitrary Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Chamberlain, D. M.; Elliot, J. L.

    1997-01-01

    We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10^(-4) of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10^(-4), with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.

  20. A case study demonstration of the soil temperature extrema recovery rates after precipitation cooling at 10-cm soil depth

    NASA Technical Reports Server (NTRS)

    Welker, Jean Edward

    1991-01-01

    Since the invention of maximum and minimum thermometers in the 18th century, diurnal air temperature extrema have been recorded worldwide. At some stations, these extrema were also collected at various soil depths, and the behavior of these temperatures at a 10-cm depth at the Tifton Experimental Station in Georgia is presented. After a precipitation cooling event, the diurnal temperature maxima drop to a minimum value and then start a recovery to higher values (similar to thermal inertia). This recovery represents a measure of response to heating as a function of soil moisture and soil properties. Eight different curves were fitted to a wide variety of data sets for different stations and years; both power and exponential curve fits were consistently found to be statistically accurate least-squares representations of the raw recovery values. The predictive procedures used here were multivariate regression analyses, which are applicable to soils at a variety of depths besides the 10-cm depth presented.
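
    A minimal sketch of the two fits the abstract singles out; everything numeric below is illustrative:

    ```python
    # Sketch: least-squares fits of power and exponential recovery curves.
    import numpy as np
    from scipy.optimize import curve_fit

    days = np.arange(1, 11, dtype=float)   # days since the precipitation event
    tmax = np.array([21.0, 23.5, 25.2, 26.1, 27.0, 27.4, 27.9, 28.1, 28.4, 28.5])

    power = lambda t, a, b: a * t ** b
    expo = lambda t, a, b, c: a - b * np.exp(-c * t)

    p_pow, _ = curve_fit(power, days, tmax, p0=[20.0, 0.1])
    p_exp, _ = curve_fit(expo, days, tmax, p0=[29.0, 10.0, 0.3])
    for name, f, p in [("power", power, p_pow), ("exponential", expo, p_exp)]:
        rss = float(((tmax - f(days, *p)) ** 2).sum())
        print(name, p.round(3), "RSS =", round(rss, 3))
    ```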

  1. An interactive graphics program to retrieve, display, compare, manipulate, curve fit, difference and cross plot wind tunnel data

    NASA Technical Reports Server (NTRS)

    Elliott, R. D.; Werner, N. M.; Baker, W. M.

    1975-01-01

    The Aerodynamic Data Analysis and Integration System (ADAIS) is described: a highly interactive computer graphics program capable of manipulating large quantities of data such that addressable elements of a data base can be called up for graphic display, compared, curve fit, stored, retrieved, differenced, etc. The general nature of the system is evidenced by the fact that limited usage has already occurred with data bases consisting of thermodynamic, basic loads, and flight dynamics data. Productivity five times that of conventional manual methods of wind tunnel data analysis is routinely achieved using ADAIS. In wind tunnel data analysis, data from one or more runs of a particular test may be called up and displayed along with data from one or more runs of a different test. Curves may be faired through the data points by any of four methods, including cubic spline and least-squares polynomial fit up to seventh order.
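
    The two fairing options named at the end are easy to sketch with standard tools; the angle-of-attack data here are invented stand-ins for wind tunnel runs:

    ```python
    # Sketch: fair a curve through data points two ways, as the abstract
    # describes: an interpolating cubic spline and a least-squares polynomial
    # of up to seventh order.
    import numpy as np
    from scipy.interpolate import CubicSpline

    alpha = np.linspace(-4.0, 12.0, 17)       # angle of attack (deg)
    cl = 0.1 * alpha + 0.4 + 0.02 * np.random.default_rng(0).normal(size=17)

    spline = CubicSpline(alpha, cl)                   # passes through every point
    poly = np.poly1d(np.polyfit(alpha, cl, deg=7))    # least-squares fairing

    print(float(spline(6.0)), float(poly(6.0)))       # compare the two fairings
    ```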

  2. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Tao; Li, Cheng; Huang, Can

    Here, in order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve the global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.
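
    To make the curve-fitting step concrete, a toy sketch: evaluate the slave (distribution) optimum at sampled boundary voltages and fit a quadratic cost for the master model. The quadratic slave response here is a hypothetical stand-in for a real reactive power optimization:

    ```python
    # Sketch: summarize one distribution feeder as a fitted cost function.
    import numpy as np

    def slave_optimal_loss(v_boundary):
        """Stand-in: optimal feeder loss (MW) at a given boundary voltage (p.u.)."""
        return 0.8 * (v_boundary - 1.02) ** 2 + 0.05

    v = np.linspace(0.95, 1.05, 11)                    # sampled setpoints
    loss = np.array([slave_optimal_loss(vi) for vi in v])

    a, b, c = np.polyfit(v, loss, 2)                   # fitted quadratic cost
    cost = lambda vb: a * vb**2 + b * vb + c           # handed to the master model
    print(round(cost(1.00), 4))
    ```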

  3. Research on Al-alloy sheet forming formability during warm/hot sheet hydroforming based on elliptical warm bulging test

    NASA Astrophysics Data System (ADS)

    Cai, Gaoshen; Wu, Chuanyu; Gao, Zepu; Lang, Lihui; Alexandrov, Sergei

    2018-05-01

    An elliptical warm/hot sheet bulging test under different temperatures and pressure rates was carried out to predict the Al-alloy sheet forming limit during warm/hot sheet hydroforming. Using relevant ultimate-strain formulas to process the experimental data, forming limit curves (FLCs) in the tension-tension state of strain (TTSS) region are obtained. Combined with basic experimental data obtained by uniaxial tensile tests under conditions equivalent to the bulging test, complete forming limit diagrams (FLDs) of the Al-alloy are established. Using a quadratic polynomial curve fitting method, the material constants of the fitting function are calculated and a prediction model equation for the sheet metal forming limit is established, from which the corresponding forming limit curves in the TTSS region can be obtained. The bulging test and fitting results indicate that the sheet metal FLCs obtained are very accurate, and the model equation can be used to guide warm/hot sheet bulging tests.

  5. Fluorometric titration approach for calibration of quantity of binding site of purified monoclonal antibody recognizing epitope/hapten nonfluorescent at 340 nm.

    PubMed

    Yang, Xiaolan; Hu, Xiaolei; Xu, Bangtian; Wang, Xin; Qin, Jialin; He, Chenxiong; Xie, Yanling; Li, Yuanli; Liu, Lin; Liao, Fei

    2014-06-17

    A fluorometric titration approach was proposed for the calibration of the quantity of monoclonal antibody (mcAb) via the quenching of fluorescence of tryptophan residues. It applies to purified mcAbs recognizing tryptophan-deficient epitopes, haptens nonfluorescent at 340 nm under excitation at 280 nm, or fluorescent haptens bearing excitation valleys near 280 nm and excitation peaks near 340 nm that serve as Förster-resonance-energy-transfer (FRET) acceptors of tryptophan. Titration probes were the epitopes/haptens themselves or conjugates of nonfluorescent haptens or tryptophan-deficient epitopes with FRET acceptors of tryptophan. Under excitation at 280 nm, titration curves were recorded as fluorescence specific to the FRET acceptors or to mcAbs at 340 nm. To quantify the binding site of a mcAb, a universal model considering both static and dynamic quenching by either type of probe was proposed for fitting to the titration curve. Fitting was straightforward for fluorescence specific to the FRET acceptors but suffered nonconvergence for fluorescence of mcAbs at 340 nm. As a solution, (a) the maximum of the absolute values of the first-order derivatives of a titration curve as fluorescence at 340 nm was estimated from the best-fit model for a probe level of zero, and (b) the molar quantity of the binding site of the mcAb was estimated via consecutive fitting to the same titration curve, utilizing this maximum as an approximation of the slope for linear response of fluorescence at 340 nm to quantities of the mcAb. This fluorometric titration approach was shown to be effective with one mcAb against six-histidine and another against penicillin G.

  6. Incorporating Nonstationarity into IDF Curves across CONUS from Station Records and Implications

    NASA Astrophysics Data System (ADS)

    Wang, K.; Lettenmaier, D. P.

    2017-12-01

    Intensity-duration-frequency (IDF) curves are widely used for engineering design of storm-affected structures. Current practice is that IDF curves are based on observed precipitation extremes fit to a stationary probability distribution (e.g., the extreme value family). However, there is increasing evidence of nonstationarity in station records. We apply the Mann-Kendall trend test to over 1000 stations across the CONUS at a 0.05 significance level, and find that about 30% of stations tested have significant nonstationarity for at least one duration (1-, 2-, 3-, 6-, 12-, 24-, and 48-hour). We fit the stations to a GEV distribution with time-varying location and scale parameters using a Bayesian methodology and compare the fit of stationary versus nonstationary GEV distributions to observed precipitation extremes. Within our fitted nonstationary GEV distributions, we compare distributions with a time-varying location parameter versus distributions with both time-varying location and scale parameters. For distributions with two time-varying parameters, we pay particular attention to instances where location and scale trends have opposing directions. Finally, we use the mathematical framework based on the work of Koutsoyiannis to generate IDF curves based on the fitted GEV distributions and discuss the implications that using time-varying parameters may have on simple scaling relationships. We apply the above methods to evaluate how frequency statistics based on a stationary assumption compare to those that incorporate nonstationarity for both short- and long-term projects. Overall, we find that neglecting nonstationarity can lead to under- or over-estimates (depending on the trend for the given duration and region) of important statistics such as the design storm.
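
    A minimal sketch of fitting a GEV with a time-varying location parameter; the abstract's analysis is Bayesian, so the maximum-likelihood version below is only an approximation of the approach, on synthetic data:

    ```python
    # Sketch: MLE fit of a GEV with mu(t) = mu0 + mu1*t. Note scipy's shape
    # parameter c is the negative of the usual GEV xi.
    import numpy as np
    from scipy.stats import genextreme
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    years = np.arange(50, dtype=float)
    maxima = genextreme.rvs(c=-0.1, loc=30 + 0.08 * years, scale=5.0,
                            random_state=rng)          # synthetic trended record

    def nll(theta):
        mu0, mu1, log_sig, c = theta
        return -genextreme.logpdf(maxima, c=c, loc=mu0 + mu1 * years,
                                  scale=np.exp(log_sig)).sum()

    fit = minimize(nll, x0=[maxima.mean(), 0.0, np.log(maxima.std()), -0.1],
                   method="Nelder-Mead")
    print(fit.x.round(3))   # [mu0, mu1, log sigma, c]; mu1 is the trend per year
    ```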

  7. Detecting outliers when fitting data with nonlinear regression – a new method based on robust nonlinear regression and the false discovery rate

    PubMed Central

    Motulsky, Harvey J; Brown, Ronald E

    2006-01-01

    Background Nonlinear regression, like linear regression, assumes that the scatter of data around the ideal curve follows a Gaussian or normal distribution. This assumption leads to the familiar goal of regression: to minimize the sum of the squares of the vertical or Y-value distances between the points and the curve. Outliers can dominate the sum-of-the-squares calculation and lead to misleading results. However, we know of no practical method for routinely identifying outliers when fitting curves with nonlinear regression. Results We describe a new method for identifying outliers when fitting data with nonlinear regression. We first fit the data using a robust form of nonlinear regression, based on the assumption that scatter follows a Lorentzian distribution. We devised a new adaptive method that gradually becomes more robust as the method proceeds. To define outliers, we adapted the false discovery rate approach to handling multiple comparisons. We then remove the outliers, and analyze the data using ordinary least-squares regression. Because the method combines robust regression and outlier removal, we call it the ROUT method. When analyzing simulated data, where all scatter is Gaussian, our method detects (falsely) one or more outliers in only about 1–3% of experiments. When analyzing data contaminated with one or several outliers, the ROUT method performs well at outlier identification, with an average False Discovery Rate of less than 1%. Conclusion Our method, which combines a new method of robust nonlinear regression with a new method of outlier identification, identifies outliers from nonlinear curve fits with reasonable power and few false positives. PMID:16526949
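
    A rough sketch of the ROUT recipe follows; scipy's Lorentzian-tailed loss and a simple MAD cutoff stand in for the paper's adaptive robust regression and FDR test:

    ```python
    # Sketch in the spirit of ROUT: robust fit, flag outliers, refit by OLS.
    import numpy as np
    from scipy.optimize import least_squares, curve_fit

    def model(x, top, ec50, slope):
        return top / (1.0 + (ec50 / x) ** slope)

    x = np.logspace(-1, 2, 20)
    y = model(x, 100.0, 5.0, 1.2) + np.random.default_rng(1).normal(0, 3, 20)
    y[7] += 40.0                                       # plant one gross outlier

    resid = lambda p: model(x, *p) - y
    robust = least_squares(resid, x0=[90.0, 3.0, 1.0], loss="cauchy", f_scale=5.0)

    r = resid(robust.x)
    keep = np.abs(r) < 3.0 * 1.4826 * np.median(np.abs(r))   # robust sigma via MAD
    final, _ = curve_fit(model, x[keep], y[keep], p0=robust.x)
    print("flagged:", np.where(~keep)[0], "refit:", final.round(2))
    ```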

  8. Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods.

    PubMed

    Rad, Kamiar Rahnama; Paninski, Liam

    2010-01-01

    Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural errorbars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.
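
    A minimal sketch of a GP-based rate-map estimate with marginal-likelihood hyperparameter fitting; the Gaussian likelihood on square-root counts is a simplification of the paper's point-process treatment, and all data are synthetic:

    ```python
    # Sketch: 2-D firing-rate surface via Gaussian process regression.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(2)
    pos = rng.uniform(0, 1, size=(400, 2))                      # visited positions
    rate = 20 * np.exp(-((pos - 0.5) ** 2).sum(axis=1) / 0.02)  # true place field
    counts = rng.poisson(rate * 0.1)                            # spikes per bin

    gp = GaussianProcessRegressor(kernel=1.0 * RBF(0.2) + WhiteKernel(0.5),
                                  normalize_y=True)   # hyperparams by ML-II on fit
    gp.fit(pos, np.sqrt(counts))                      # variance-stabilized counts
    mu, sd = gp.predict(pos[:5], return_std=True)     # estimate + natural errorbars
    print(mu.round(2), sd.round(2))
    ```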

  9. Graphic tracings of condylar paths and measurements of condylar angles.

    PubMed

    el-Gheriani, A S; Winstanley, R B

    1989-01-01

    A study was carried out to determine the accuracy of different methods of measuring condylar inclination from graphical recordings of condylar paths. Thirty subjects made protrusive mandibular movements while condylar inclination was recorded on a graph paper card. A mandibular facebow and intraoral central bearing plate facilitated the procedure. The first method proved to be too variable to be of value in measuring condylar angles. The spline curve fitting technique was shown to be accurate, but its use clinically may prove complex. The mathematical method was more practical and overcame the variability of the tangent method. Other conclusions regarding condylar inclination are outlined.

  10. High level continuity for coordinate generation with precise controls

    NASA Technical Reports Server (NTRS)

    Eiseman, P. R.

    1982-01-01

    Coordinate generation techniques with precise local controls have been derived and analyzed for continuity requirements up to both the first and second derivatives, and have been projected to higher level continuity requirements from the established pattern. The desired local control precision was obtained when a family of coordinate surfaces could be uniformly distributed without a consequent creation of flat spots on the coordinate curves transverse to the family. Relative to the uniform distribution, the family could be redistributed from an a priori distribution function or from a solution adaptive approach, both without distortion from the underlying transformation which may be independently chosen to fit a nontrivial geometry and topology.

  11. Assessing item fit for unidimensional item response theory models using residuals from estimated item response functions.

    PubMed

    Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee

    2013-07-01

    Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.

  12. Precise Time Delays from Strongly Gravitationally Lensed Type Ia Supernovae with Chromatically Microlensed Images

    NASA Astrophysics Data System (ADS)

    Goldstein, Daniel A.; Nugent, Peter E.; Kasen, Daniel N.; Collett, Thomas E.

    2018-03-01

    Time delays between the multiple images of strongly gravitationally lensed Type Ia supernovae (glSNe Ia) have the potential to deliver precise cosmological constraints, but the effects of microlensing on time delay extraction have not been studied in detail. Here we quantify the effect of microlensing on the glSN Ia yield of the Large Synoptic Survey Telescope (LSST) and the effect of microlensing on the precision and accuracy of time delays that can be extracted from LSST glSNe Ia. Microlensing has a negligible effect on the LSST glSN Ia yield, but it can be increased by a factor of ∼2 over previous predictions to 930 systems using a novel photometric identification technique based on spectral template fitting. Crucially, the microlensing of glSNe Ia is achromatic until three rest-frame weeks after the explosion, making the early-time color curves microlensing-insensitive time delay indicators. By fitting simulated flux and color observations of microlensed glSNe Ia with their underlying, unlensed spectral templates, we forecast the distribution of absolute time delay error due to microlensing for LSST, which is unbiased at the sub-percent level and peaked at 1% for color curve observations in the achromatic phase, while for light-curve observations it is comparable to state-of-the-art mass modeling uncertainties (4%). About 70% of LSST glSN Ia images should be discovered during the achromatic phase, indicating that microlensing time delay uncertainties can be minimized if prompt multicolor follow-up observations are obtained. Accounting for microlensing, the 1–2 day time delay on the recently discovered glSN Ia iPTF16geu can be measured to 40% precision, limiting its cosmological utility.

  13. Electrical property heterogeneity at transparent conductive oxide/organic semiconductor interfaces: mapping contact ohmicity using conducting-tip atomic force microscopy.

    PubMed

    MacDonald, Gordon A; Veneman, P Alexander; Placencia, Diogenes; Armstrong, Neal R

    2012-11-27

    We demonstrate mapping of electrical properties of heterojunctions of a molecular semiconductor (copper phthalocyanine, CuPc) and a transparent conducting oxide (indium-tin oxide, ITO), on 20-500 nm length scales, using a conductive-probe atomic force microscopy technique, scanning current spectroscopy (SCS). SCS maps are generated for CuPc/ITO heterojunctions as a function of ITO activation procedures and modification with variable chain length alkyl-phosphonic acids (PAs). We correlate differences in small length scale electrical properties with the performance of organic photovoltaic cells (OPVs) based on CuPc/C(60) heterojunctions, built on these same ITO substrates. SCS maps the "ohmicity" of ITO/CuPc heterojunctions, creating arrays of spatially resolved current-voltage (J-V) curves. Each J-V curve is fit with modified Mott-Gurney expressions, mapping a fitted exponent (γ), where deviations from γ = 2.0 suggest nonohmic behavior. ITO/CuPc/C(60)/BCP/Al OPVs built on nonactivated ITO show mainly nonohmic SCS maps and dark J-V curves with increased series resistance (R(S)), lowered fill-factors (FF), and diminished device performance, especially near the open-circuit voltage. Nearly optimal behavior is seen for OPVs built on oxygen-plasma-treated ITO contacts, which showed SCS maps comparable to heterojunctions of CuPc on clean Au. For ITO electrodes modified with PAs there is a strong correlation between PA chain length and the degree of ohmicity and uniformity of electrical response in ITO/CuPc heterojunctions. ITO electrodes modified with 6-8 carbon alkyl-PAs show uniform and nearly ohmic SCS maps, coupled with acceptable CuPc/C(60)OPV performance. ITO modified with C14 and C18 alkyl-PAs shows dramatic decreases in FF, increases in R(S), and greatly enhanced recombination losses.

  14. Precise Time Delays from Strongly Gravitationally Lensed Type Ia Supernovae with Chromatically Microlensed Images

    DOE PAGES

    Goldstein, Daniel A.; Nugent, Peter E.; Kasen, Daniel N.; ...

    2018-03-01

    Time delays between the multiple images of strongly gravitationally lensed Type Ia supernovae (glSNe Ia) have the potential to deliver precise cosmological constraints, but the effects of microlensing on time delay extraction have not been studied in detail. Here we quantify the effect of microlensing on the glSN Ia yield of the Large Synoptic Survey Telescope (LSST) and the effect of microlensing on the precision and accuracy of time delays that can be extracted from LSST glSNe Ia. Microlensing has a negligible effect on the LSST glSN Ia yield, but it can be increased by a factor of ~2 over previous predictions to 930 systems using a novel photometric identification technique based on spectral template fitting. Crucially, the microlensing of glSNe Ia is achromatic until three rest-frame weeks after the explosion, making the early-time color curves microlensing-insensitive time delay indicators. By fitting simulated flux and color observations of microlensed glSNe Ia with their underlying, unlensed spectral templates, we forecast the distribution of absolute time delay error due to microlensing for LSST, which is unbiased at the sub-percent level and peaked at 1% for color curve observations in the achromatic phase, while for light-curve observations it is comparable to state-of-the-art mass modeling uncertainties (4%). About 70% of LSST glSN Ia images should be discovered during the achromatic phase, indicating that microlensing time delay uncertainties can be minimized if prompt multicolor follow-up observations are obtained. Lastly, accounting for microlensing, the 1-2 day time delay on the recently discovered glSN Ia iPTF16geu can be measured to 40% precision, limiting its cosmological utility.

  16. A procedure for removing the effect of response bias errors from waterfowl hunter questionnaire responses

    USGS Publications Warehouse

    Atwood, E.L.

    1958-01-01

    Response bias errors are studied by comparing questionnaire responses from waterfowl hunters using four large public hunting areas with actual hunting data from these areas during two hunting seasons. To the extent that the data permit, the sources of the error in the responses were studied and the contribution of each type to the total error was measured. Response bias errors, including both prestige and memory bias, were found to be very large as compared to non-response and sampling errors. Good fits were obtained with the seasonal kill distribution of the actual hunting data and the negative binomial distribution and a good fit was obtained with the distribution of total season hunting activity and the semi-logarithmic curve. A comparison of the actual seasonal distributions with the questionnaire response distributions revealed that the prestige and memory bias errors are both positive. The comparisons also revealed the tendency for memory bias errors to occur at digit frequencies divisible by five and for prestige bias errors to occur at frequencies which are multiples of the legal daily bag limit. A graphical adjustment of the response distributions was carried out by developing a smooth curve from those frequency classes not included in the predictable biased frequency classes referred to above. Group averages were used in constructing the curve, as suggested by Ezekiel [1950]. The efficiency of the technique described for reducing response bias errors in hunter questionnaire responses on seasonal waterfowl kill is high in large samples. The graphical method is not as efficient in removing response bias errors in hunter questionnaire responses on seasonal hunting activity where an average of 60 percent was removed.

  17. Temperature-dependent inotropic and lusitropic indices based on half-logistic time constants for four segmental phases in isovolumic left ventricular pressure-time curve in excised, cross-circulated canine heart.

    PubMed

    Mizuno, Ju; Mohri, Satoshi; Yokoyama, Takeshi; Otsuji, Mikiya; Arita, Hideko; Hanaoka, Kazuo

    2017-02-01

    Varying temperature affects cardiac systolic and diastolic function and the left ventricular (LV) pressure-time curve (PTC) waveform, which includes information about LV inotropism and lusitropism. Our proposed half-logistic (h-L) time constants, obtained by fitting h-L functions to four segmental phases (Phases I-IV) of the isovolumic LV PTC, are more useful indices for estimating LV inotropism and lusitropism during contraction and relaxation periods than the mono-exponential (m-E) time constants at normal temperature. In this study, we investigated whether the superior goodness of fit of the h-L functions remained at hypothermia and hyperthermia. Phases I-IV in the isovolumic LV PTCs of eight excised, cross-circulated canine hearts at 33, 36, and 38 °C were analyzed using h-L and m-E functions and the least-squares method. The h-L and m-E time constants for Phases I-IV shortened significantly with increasing temperature. Curve fitting using h-L functions was significantly better than that using m-E functions for Phases I-IV at all temperatures. Therefore, the superior goodness of the h-L fit vs. the m-E fit remained at all temperatures. As LV inotropic and lusitropic indices, temperature-dependent h-L time constants could be more useful than m-E time constants for Phases I-IV.
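
    A sketch of the comparison, assuming simple representative forms for the two function families (the paper's exact parameterizations and four-phase segmentation may differ):

    ```python
    # Sketch: mono-exponential vs. half-logistic fits on one decay segment.
    # Assumed forms: m-E is A*exp(-t/tau)+B, h-L is A/(1+exp(t/tau))+B.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0.0, 0.12, 60)                 # s, synthetic decay segment
    rng = np.random.default_rng(3)
    p_obs = 80 / (1 + np.exp(t / 0.025)) + 8 + rng.normal(0, 0.3, t.size)

    m_e = lambda t, a, tau, b: a * np.exp(-t / tau) + b
    h_l = lambda t, a, tau, b: a / (1 + np.exp(t / tau)) + b

    for name, f, p0 in [("m-E", m_e, (70, 0.03, 10)), ("h-L", h_l, (80, 0.03, 10))]:
        p, _ = curve_fit(f, t, p_obs, p0=p0)
        rss = float(((p_obs - f(t, *p)) ** 2).sum())
        print(name, "tau =", round(p[1] * 1e3, 2), "ms, RSS =", round(rss, 2))
    ```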

  18. The Biasing Effects of Unmodeled ARMA Time Series Processes on Latent Growth Curve Model Estimates

    ERIC Educational Resources Information Center

    Sivo, Stephen; Fan, Xitao; Witta, Lea

    2005-01-01

    The purpose of this study was to evaluate the robustness of estimated growth curve models when there is stationary autocorrelation among manifest variable errors. The results suggest that when, in practice, growth curve models are fitted to longitudinal data, alternative rival hypotheses to consider would include growth models that also specify…

  19. Dumb-bell-shaped equilibrium figures for fiducial contact-binary asteroids and EKBOs

    NASA Astrophysics Data System (ADS)

    Descamps, Pascal

    2015-01-01

    In this work, we investigate the equilibrium figures of a dumb-bell-shaped sequence with which we are still not well acquainted. Studies have shown that these elongated and nonconvex figures may realistically replace the classic “Roche binary approximation” for modeling putative peanut-shaped or contact binary asteroids. The best-fit dumb-bell shapes, combined with the known rotational period of the objects, provide estimates of the bulk density of these objects. This new class of mathematical figures has been successfully tested on the observed light curves of three noteworthy small bodies: main-belt Asteroid 216 Kleopatra, Trojan Asteroid 624 Hektor and Edgeworth-Kuiper-belt object 2001 QG298. Using the direct observations of Kleopatra and Hektor obtained with high spatial resolution techniques and fitting the size of the dumb-bell-shaped solutions, we derived new physical characteristics in terms of equivalent radius, 62.5 ± 5 km and 92 ± 5 km, respectively, and bulk density, 4.4 ± 0.4 g cm-3 and 2.43 ± 0.35 g cm-3, respectively. In particular, the growing inadequacy of the radar shape model for interpreting any type of observations of Kleopatra (light curves, AO images, stellar occultations) in a satisfactory manner suggests that Kleopatra is more likely to be a dumb-bell-shaped object than a “dog-bone.”

  20. Applications of Space-Filling-Curves to Cartesian Methods for CFD

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Murman, S. M.; Berger, M. J.

    2003-01-01

    This paper presents a variety of novel uses of space-filling-curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, many are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single Θ(N log N) SFC-based reordering to produce single-pass (Θ(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The intermesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 640 CPUs even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 15% of ideal even with only around 50,000 cells in each sub-domain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with Θ(M + N) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for control surface deflection or finite-difference-based gradient design methods.
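
    A minimal sketch of the SFC reordering and partitioning idea, using a Morton (Z-order) key as a stand-in for whichever curve the paper adopts:

    ```python
    # Sketch: sort 2-D cells along a Morton curve (the Theta(N log N) step),
    # then slice the ordering into contiguous, compact partitions.
    def morton_key(i, j, bits=16):
        """Interleave the bits of integer cell coordinates (i, j)."""
        key = 0
        for b in range(bits):
            key |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
        return key

    cells = [(i, j) for i in range(4) for j in range(4)]
    order = sorted(cells, key=lambda c: morton_key(*c))

    n_parts = 4
    chunk = len(order) // n_parts
    parts = [order[k * chunk:(k + 1) * chunk] for k in range(n_parts)]
    print(parts[0])   # one Z-order quadrant: [(0, 0), (1, 0), (0, 1), (1, 1)]
    ```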

  1. Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.

    PubMed

    Kervrann, C; Legland, D; Pardini, L

    2004-06-01

    Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method. Namely, layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section and a new compensated section series is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
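
    The parametric baseline described above is easy to sketch (the paper's robust estimator, which down-weights deviating pixels, is the part not shown):

    ```python
    # Sketch: fit a decaying exponential to per-section mean intensities of a
    # synthetic CLSM stack and rescale each section.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(4)
    depth_profile = 100 * np.exp(-0.04 * np.arange(40))
    stack = rng.poisson(depth_profile[:, None, None] * np.ones((40, 64, 64)))

    z = np.arange(stack.shape[0], dtype=float)
    means = stack.mean(axis=(1, 2))                   # section mean intensities

    decay = lambda z, i0, k: i0 * np.exp(-k * z)
    (i0, k), _ = curve_fit(decay, z, means, p0=[means[0], 0.01])

    corrected = stack / decay(z, i0, k)[:, None, None] * means[0]
    print(round(k, 4))                                # estimated decay per section
    ```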

  2. Conduction and rectification in NbOx- and NiO-based metal-insulator-metal diodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osgood, Richard M., E-mail: richard.m.osgood.civ@mail.mil; Giardini, Stephen; Carlson, Joel

    2016-09-15

    Conduction and rectification in nanoantenna-coupled NbOx- and NiO-based metal-insulator-metal (MIM) diodes (“nanorectennas”) are studied by comparing new theoretical predictions with the measured response of nanorectenna arrays. A new quantum mechanical model is reported and agrees with measurements of current–voltage (I–V) curves, over 10 orders of magnitude in current density, from [NbOx(native)-Nb2O5]- and NiO-based samples with oxide thicknesses in the range of 5–36 nm. The model, which introduces new physics and features, including temperature, electron effective mass, and image potential effects using the pseudobarrier technique, improves upon widely used earlier models, calculates the MIM diode's I–V curve, and predicts quantitatively the rectification responsivity of high frequency voltages generated in a coupled nanoantenna array by visible/near-infrared light. The model applies both at the higher frequencies, when high-energy photons are incident, and at lower frequencies, when the formula for classical rectification, involving derivatives of the I–V curve, may be used. The rectified low-frequency direct current is well predicted by this work's model, but not by fitting the experimentally measured I–V curve with a polynomial or by using the older Simmons model (as shown herein). By fitting the measured I–V curves with our model, the barrier heights in Nb-(NbOx(native)-Nb2O5)-Pt and Ni-NiO-Ti/Ag diodes are found to be 0.41/0.77 and 0.38/0.39 eV, respectively, similar to literature reports, but with effective mass much lower than the free-space value. The NbOx(native)-Nb2O5 dielectric properties improve, and the effective Pt-Nb2O5 barrier height increases, as the oxide thickness increases. An observation of direct current of ∼4 nA for normally incident, focused 514 nm continuous wave laser beams is reported, similar in magnitude to recent reports. This measured direct current is compared to the prediction for rectified direct current, given by the rectification responsivity calculated from the I–V curve times input power.

  3. Effective Clipart Image Vectorization through Direct Optimization of Bezigons.

    PubMed

    Yang, Ming; Chao, Hongyang; Zhang, Chi; Guo, Jun; Yuan, Lu; Sun, Jian

    2016-02-01

    Bezigons, i.e., closed paths composed of Bézier curves, have been widely employed to describe shapes in image vectorization results. However, most existing vectorization techniques infer the bezigons by simply approximating an intermediate vector representation (such as polygons). Consequently, the resultant bezigons are sometimes imperfect due to accumulated errors, fitting ambiguities, and a lack of curve priors, especially for low-resolution images. In this paper, we describe a novel method for vectorizing clipart images. In contrast to previous methods, we directly optimize the bezigons rather than using other intermediate representations; therefore, the resultant bezigons are not only of higher fidelity compared with the original raster image but also more plausible, as if they had been traced by a proficient expert. To enable such optimization, we have overcome several challenges and have devised a differentiable data energy as well as several curve-based prior terms. To improve the efficiency of the optimization, we also take advantage of the local control property of bezigons and adopt an overlapped piecewise optimization strategy. The experimental results show that our method outperforms both the current state-of-the-art method and commonly used commercial software in terms of bezigon quality.

  4. Techniques to improve the accuracy of noise power spectrum measurements in digital x-ray imaging based on background trends removal.

    PubMed

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2011-03-01

    Noise characterization through estimation of the noise power spectrum (NPS) is a central component of the evaluation of digital x-ray systems. Extensive work has been conducted to achieve accurate and precise measurement of the NPS. One approach to improving the accuracy of the NPS measurement is to reduce the statistical variance of the NPS results by involving more data samples. However, this method is based on the assumption that the noise in a radiographic image arises from stochastic processes. In practical data, artifacts superimpose on the stochastic noise as low-frequency background trends and prevent accurate NPS estimation. The purpose of this study was to investigate an appropriate background detrending technique to improve the accuracy of NPS estimation for digital x-ray systems. To identify the optimal background detrending technique, four methods for artifact removal were quantitatively studied and compared: (1) subtraction of a low-pass-filtered version of the image, (2) subtraction of a 2-D first-order fit to the image, (3) subtraction of a 2-D second-order polynomial fit to the image, and (4) subtraction of two uniform exposure images. In addition, background trend removal was applied separately within the original region of interest or its partitioned sub-blocks for all four methods. The performance of the background detrending techniques was compared according to the statistical variance of the NPS results and low-frequency systematic rise suppression. Among the four methods, subtraction of a 2-D second-order polynomial fit to the image was most effective in low-frequency systematic rise suppression and variance reduction for the NPS estimate on the authors' digital x-ray system. Subtraction of a low-pass-filtered version of the image led to an NPS variance increment above low-frequency components because of the side-lobe effects of the frequency response of the boxcar filtering function. Subtracting two uniform exposure images gave the worst result for the smoothness of the NPS curve, although it was effective in low-frequency systematic rise suppression. Subtraction of a 2-D first-order fit to the image was also effective for background detrending, but worse than subtraction of a 2-D second-order polynomial fit on the authors' digital x-ray system. As a result of this study, the authors verified that it is necessary and feasible to get a better NPS estimate by appropriate background trend removal; subtraction of a 2-D second-order polynomial fit to the image was the most appropriate technique for background detrending without consideration of processing time.
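
    A sketch of method (3), the best performer reported here, on a synthetic flat-field ROI:

    ```python
    # Sketch: subtract a 2-D second-order polynomial fit before computing the NPS.
    import numpy as np

    rng = np.random.default_rng(5)
    ny, nx = 128, 128
    y, x = np.mgrid[0:ny, 0:nx]
    roi = 1000 + 0.5 * x + 0.002 * x * y + rng.normal(0, 10, (ny, nx))

    # Design matrix for the full 2nd-order 2-D polynomial: 1, x, y, x^2, xy, y^2
    A = np.column_stack([np.ones(roi.size), x.ravel(), y.ravel(),
                         (x ** 2).ravel(), (x * y).ravel(), (y ** 2).ravel()])
    coef, *_ = np.linalg.lstsq(A, roi.ravel(), rcond=None)
    detrended = roi - (A @ coef).reshape(ny, nx)

    nps = np.abs(np.fft.fftshift(np.fft.fft2(detrended))) ** 2 / roi.size
    print(round(float(detrended.mean()), 3))   # ~0: background trend removed
    ```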

  5. Parametric analysis of ATM solar array.

    NASA Technical Reports Server (NTRS)

    Singh, B. K.; Adkisson, W. B.

    1973-01-01

    The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.

  6. Enhancements of Bayesian Blocks; Application to Large Light Curve Databases

    NASA Technical Reports Server (NTRS)

    Scargle, Jeff

    2015-01-01

    Bayesian Blocks are optimal piecewise constant representations (step function fits) of light curves. The simple algorithm implementing this idea, using dynamic programming, has been extended to include more data modes and fitness metrics, multivariate analysis, and data on the circle (Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations, Scargle, Norris, Jackson and Chiang 2013, ApJ, 764, 167), as well as new results on background subtraction and refinement of the procedure for precise timing of transient events in sparse data. Example demonstrations will include exploratory analysis of the Kepler light curve archive in a search for "star-tickling" signals from extraterrestrial civilizations (The Cepheid Galactic Internet, Learned, Kudritzki, Pakvasa, and Zee, 2008, arXiv:0809.0339; Walkowicz et al., in progress).
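
    For readers who want to try this, astropy provides an implementation of the cited algorithm; a minimal sketch on synthetic event data:

    ```python
    # Sketch: segment photon arrival times with Bayesian Blocks; the flare
    # injection and all values are invented.
    import numpy as np
    from astropy.stats import bayesian_blocks

    rng = np.random.default_rng(6)
    t = rng.uniform(0, 100, 500)                                # background events
    t = np.sort(np.concatenate([t, rng.uniform(40, 45, 200)]))  # add a flare

    edges = bayesian_blocks(t, fitness="events", p0=0.01)
    print(edges.round(1))   # change points should bracket the flare near t=40-45
    ```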

  7. Structural, optical and magnetic studies of CuFe2O4, MgFe2O4 and ZnFe2O4 nanoparticles prepared by hydrothermal/solvothermal method

    NASA Astrophysics Data System (ADS)

    Kurian, Jessyamma; Mathew, M. Jacob

    2018-04-01

    In this paper we report the structural, optical and magnetic studies of three spinel ferrites, namely CuFe2O4, MgFe2O4 and ZnFe2O4, prepared in an autoclave under the same physical conditions but with two different liquid media and surfactants. We use water as the medium and trisodium citrate as the surfactant for one method (hydrothermal) and ethylene glycol as the medium and polyethylene glycol as the surfactant for the second (solvothermal). Phase identification and structural characterization are done using XRD, and morphological studies are carried out by TEM. Cubical and porous spherical morphologies are obtained for the hydrothermal and solvothermal processes, respectively, without any impurity phase. The optical studies are carried out using FTIR and UV-Vis reflectance spectra. To elucidate the nonlinear optical behaviour of the prepared nanomaterials, the open-aperture z-scan technique is used. From the fitted z-scan curves, the nonlinear absorption coefficient and the saturation intensity are determined. The magnetic characterization of the samples is performed at room temperature using vibrating sample magnetometer measurements. The M-H curves obtained are fitted using a theoretical equation, and the different components of magnetization are determined. Nanoparticles with high saturation magnetization are obtained for MgFe2O4 and ZnFe2O4 prepared under the solvothermal reaction. The magnetic hyperfine parameters and the cation distribution of the prepared materials are determined using room-temperature Mössbauer spectroscopy. The fitted spectra reveal differences in the magnetic hyperfine parameters owing to the change in size and morphology.

  8. The VMC survey - XXIII. Model fitting of light and radial velocity curves of Small Magellanic Cloud classical Cepheids

    NASA Astrophysics Data System (ADS)

    Marconi, M.; Molinaro, R.; Ripepi, V.; Cioni, M.-R. L.; Clementini, G.; Moretti, M. I.; Ragosta, F.; de Grijs, R.; Groenewegen, M. A. T.; Ivanov, V. D.

    2017-04-01

    We present the results of the χ2 minimization model fitting technique applied to optical and near-infrared photometric and radial velocity data for a sample of nine fundamental and three first overtone classical Cepheids in the Small Magellanic Cloud (SMC). The near-infrared photometry (JK filters) was obtained by the European Southern Observatory (ESO) public survey 'VISTA near-infrared Y, J, Ks survey of the Magellanic Clouds system' (VMC). For each pulsator, isoperiodic model sequences have been computed by adopting a non-linear convective hydrodynamical code in order to reproduce the multifilter light and (when available) radial velocity curve amplitudes and morphological details. The inferred individual distances provide an intrinsic mean value for the SMC distance modulus of 19.01 mag and a standard deviation of 0.08 mag, in agreement with the literature. Moreover, the intrinsic masses and luminosities of the best-fitting model show that all these pulsators are brighter than the canonical evolutionary mass-luminosity relation (MLR), suggesting a significant efficiency of core overshooting and/or mass-loss. Assuming that the inferred deviation from the canonical MLR is only due to mass-loss, we derive the expected distribution of percentage mass-loss as a function of both the pulsation period and the canonical stellar mass. Finally, a good agreement is found between the predicted mean radii and current period-radius (PR) relations in the SMC available in the literature. The results of this investigation support the predictive capabilities of the adopted theoretical scenario and pave the way for the application to other extensive data bases at various chemical compositions, including the VMC Large Magellanic Cloud pulsators and Galactic Cepheids with Gaia parallaxes.

  9. Function approximation and documentation of sampling data using artificial neural networks.

    PubMed

    Zhang, Wenjun; Barrion, Albert

    2006-11-01

    Biodiversity studies in ecology often begin with the fitting and documentation of sampling data. This study was conducted to approximate functions from sampling data and to document the sampling information using artificial neural network algorithms, based on invertebrate data sampled in an irrigated rice field. Three types of sampling data, i.e., species richness vs. sample size, the rarefaction curve, and mean abundance of newly sampled species vs. sample size, are fitted and documented using a BP (backpropagation) network and an RBF (radial basis function) network. For comparison, the Arrhenius model, the rarefaction model, and a power function are tested for their ability to fit these data. The results show that the BP and RBF networks fit the data better than these models, with smaller errors. BP and RBF networks can fit non-linear functions (sampling data) to a specified accuracy and do not require mathematical assumptions. In addition to interpolation, the BP network is used to extrapolate the functions, and the asymptote of the sampling data can be drawn. The BP network takes longer to train and its results are always less stable compared to the RBF network. The RBF network requires more neurons to fit functions and generally may not be used to extrapolate functions. The mathematical function for sampling data can be exactly fitted using artificial neural network algorithms by adjusting the desired accuracy and maximum iterations. The total number of functional species of invertebrates in the tropical irrigated rice field is extrapolated as 140 to 149 using the trained BP network, which is similar to the observed richness.
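
    In the same spirit, a small backpropagation-trained network can fit and extrapolate an accumulation curve; this sketch substitutes sklearn's MLPRegressor for the authors' own BP and RBF implementations, and the data are synthetic:

    ```python
    # Sketch: fit a species-accumulation curve with a small BP-trained network.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(7)
    n = np.arange(1, 101, dtype=float)                 # sample size
    richness = 150 * n / (30 + n) + rng.normal(0, 2, 100)

    net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                       solver="lbfgs", max_iter=5000, random_state=0)
    net.fit(n.reshape(-1, 1) / 100.0, richness)        # scale inputs to ~[0, 1]
    print(net.predict([[1.5]]).round(1))               # extrapolate to n = 150
    ```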

  10. [Fitting Monocurve and Bicurve (Soper-McGuire design) rigid gas-permeable contact lenses in keratoconus patients: a prospective randomized comparative clinical trial].

    PubMed

    Coral-Ghanem, Cleusa; Alves, Milton Ruiz

    2008-01-01

    To evaluate the clinical performance of Monocurve and Bicurve (Soper-McGuire design) rigid gas-permeable contact lens fitting in patients with keratoconus. A prospective and randomized comparative clinical trial was conducted with a minimum follow-up of six months in two groups of 63 patients. One group was fitted with Monocurve contact lenses and the other with Bicurve Soper-McGuire design. Study variables included fluoresceinic pattern of lens-to-cornea fitting relationship, location and morphology of the cone, presence and degree of punctate keratitis and other corneal surface alterations, topographic changes, visual acuity for distance corrected with contact lenses and survival analysis for remaining with the same contact lens design during the study. During the follow-up there was a decrease in the number of eyes with advanced and central cones fitted with Monocurve lenses, and an increase in those fitted with Soper-McGuire design. In the Monocurve group, a flattening of both the steepest and the flattest keratometric curve was observed. In the Soper-McGuire group, a steepening of the flattest keratometric curve and a flattening of the steepest keratometric curve were observed. There was a decrease in best-corrected visual acuity with contact lens in the Monocurve group. Survival analysis for the Monocurve lens was 60.32% and for the Soper-McGuire was 71.43% at a mean follow-up of six months. This study showed that due to the changes observed in corneal topography, the same contact lens design did not provide an ideal fitting for all patients during the follow-up period. The Soper-McGuire lenses had a better performance than the Monocurve lenses in advanced and central keratoconus.

  11. Design data for radars based on 13.9 GHz Skylab scattering coefficient measurements

    NASA Technical Reports Server (NTRS)

    Moore, R. K. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. Measurements made at 13.9 GHz with the radar scatterometer on Skylab have been combined to produce median curves of the variation of scattering coefficient with angle of incidence out to 45 deg. Because of the large number of observations, and the large area averaged for each measured data point, these curves may be used as a new design base for radars. A reasonably good fit at larger angles is obtained using the theoretical expression based on an exponential height correlation function and also using Lambert's law. For angles under 10 deg, a different fit based on the exponential correlation function, and a fit based on geometric optics expressions are both reasonably valid.

  12. Interior Temperature Measurement Using Curved Mercury Capillary Sensor Based on X-ray Radiography

    NASA Astrophysics Data System (ADS)

    Chen, Shuyue; Jiang, Xing; Lu, Guirong

    2017-07-01

    A method was presented for measuring the interior temperature of objects using a curved mercury capillary sensor based on X-ray radiography. The sensor is composed of a mercury bubble, a capillary and a fixed support. X-ray digital radiography was employed to capture images of the mercury column in the capillary, and a temperature control system was designed for the sensor calibration. We adopted livewire algorithms and mathematical morphology to calculate the mercury length. A measurement model relating mercury length to temperature was established, and the measurement uncertainty associated with the mercury column length and the linear model fitted by the least-squares method was analyzed. To verify the system, interior temperature measurements of a fully closed autoclave were taken from 29.53°C to 67.34°C. The experimental results show that the response of the system is approximately linear, with a maximum uncertainty of 0.79°C. This technique provides a new approach to measuring the interior temperature of objects.

  13. Magneto-transport Properties Using Top-Gated Hall Bars of Epitaxial Heterostructures on Single-Crystal SiGe Nanomembranes

    NASA Astrophysics Data System (ADS)

    Jacobson, R. B.; Li, Yize; Foote, Ryan; Cui, Xiaorui; Savage, Donald; Sookchoo, Pornsatit; Eriksson, Mark; Lagally, Max

    2014-03-01

    A high-quality 2-dimensional electron gas (2DEG) is crucial for quantum electronics and spintronics. Grown heterostructures on SiGe nanomembranes (NMs) show promise to create these 2DEG structures because they have reduced strain inhomogeneities and mosaic tilt. We investigate charge transport properties of these SiGe NMs/heterostructures over a range of temperatures and compare them with results from heterostructures grown on compositionally graded SiGe substrates. Measurements are done by creating Hall bars with top gates on the samples. From the magneto-transport data, low-carrier-density mobility values are calculated. Initial results on the grown heterostructures give a typical curve for mobility versus carrier density, but extraction of the zero-carrier-density mobility is dependent on the curve-fitting technique. Sponsored by United States Department of Defense. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressly or implied, of the U.S. Government.

  14. Three-dimensional body scanning system for apparel mass-customization

    NASA Astrophysics Data System (ADS)

    Xu, Bugao; Huang, Yaxiong; Yu, Weiping; Chen, Tong

    2002-07-01

    Mass customization is a new manufacturing trend in which mass-market products (e.g., apparel) are quickly modified one at a time based on customers' needs. It is an effective competing strategy for maximizing customers' satisfaction and minimizing inventory costs. An automatic body measurement system is essential for apparel mass customization. This paper introduces the development of a body scanning system, body size extraction methods, and body modeling algorithms. The scanning system utilizes the multiline triangulation technique to rapidly acquire surface data on a body, and provides accurate body measurements, many of which are not available with conventional methods. Cubic B-spline curves are used to connect and smooth body curves. From the scanned data, a body form can be constructed using linear Coons surfaces. The body form can be used as a digital model of the body for 3-D garment design and for virtual try-on of a designed garment. This scanning system and its application software enable apparel manufacturers to provide custom design services to consumers seeking personal-fit garments.
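
    A sketch of the B-spline smoothing step on a simulated cross-section (the closed cubic B-spline is the curve type the abstract uses for body curves):

    ```python
    # Sketch: smooth a noisy scanned cross-section with a periodic cubic B-spline.
    import numpy as np
    from scipy.interpolate import splprep, splev

    rng = np.random.default_rng(8)
    theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
    r = 1.0 + 0.05 * rng.normal(size=60)               # noisy waist circumference
    x, y = r * np.cos(theta), r * np.sin(theta)

    (tck, u) = splprep([x, y], s=0.05, per=True)       # closed cubic B-spline fit
    xs, ys = splev(np.linspace(0, 1, 400), tck)        # resampled smooth curve
    print(len(xs), round(float(np.hypot(xs, ys).mean()), 3))   # ~unit radius
    ```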

  15. The magnetic field at the core-mantle boundary

    NASA Technical Reports Server (NTRS)

    Bloxham, J.; Gubbins, D.

    1985-01-01

    Models of the geomagnetic field are, in general, produced from a least-squares fit of the coefficients in a truncated spherical harmonic expansion to the available data. Downward continuation of such models to the core-mantle boundary (CMB) is an unstable process: the results are found to be critically dependent on the choice of truncation level. Modern techniques allow this fundamental difficulty to be circumvented. The method of stochastic inversion is applied to modeling the geomagnetic field. Prior information is introduced by requiring the spectrum of spherical harmonic coefficients to fall off in a particular manner, consistent with the Ohmic heating in the core having a finite lower bound. This results in models with finite errors in the radial field at the CMB. Curves of zero radial field can then be determined, and integrals of the radial field over patches on the CMB bounded by these null-flux curves can be calculated. Under the assumption of negligible magnetic diffusion in the core (the frozen-flux hypothesis), these integrals are time-invariant.

  16. Evaluation of protective shielding thickness for diagnostic radiology rooms: theory and computer simulation.

    PubMed

    Costa, Paulo R; Caldas, Linda V E

    2002-01-01

    This work presents the development and evaluation of modern techniques for calculating radiation protection barriers in clinical radiographic facilities. Our methodology uses realistic primary and scattered spectra. The primary spectra were computer simulated using a waveform generalization and a semiempirical model (the Tucker-Barnes-Chakraborty model). The scattered spectra were obtained from published data. An analytical function was used to produce attenuation curves for polychromatic radiation at specified kVp, waveform, and filtration. The results of this analytical function are given in ambient dose equivalent units. The attenuation curves were obtained by applying Archer's model to the computer simulation data. The parameters for the best fit to the model, using primary and secondary radiation data from different radiographic procedures, were determined. They resulted in an optimized model for shielding calculation for any radiographic room. The shielding costs were about 50% lower than those calculated using the traditional method based on Report No. 49 of the National Council on Radiation Protection and Measurements.
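    Archer's model gives broad-beam transmission the closed form B(x) = [(1 + β/α)·exp(αγx) − β/α]^(−1/γ); a sketch of fitting its three parameters with SciPy (the thickness/transmission values below are invented):

    ```python
    # Sketch: fitting Archer's three-parameter broad-beam transmission model
    # B(x) = [(1 + b/a) * exp(a*g*x) - b/a]**(-1/g) to transmission data.
    # The thickness/transmission values are invented placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    def archer(x, a, b, g):
        return ((1 + b / a) * np.exp(a * g * x) - b / a) ** (-1.0 / g)

    x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0])           # barrier thickness
    B = np.array([1.0, 0.51, 0.28, 0.16, 0.094, 0.034])    # transmission

    popt, pcov = curve_fit(archer, x, B, p0=[1.0, 1.0, 1.0])
    alpha, beta, gamma = popt
    print(f"alpha={alpha:.3f}, beta={beta:.3f}, gamma={gamma:.3f}")
    ```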

  17. Complex Interaction Mechanisms between Dislocations and Point Defects Studied in Pure Aluminium by a Two-Wave Acoustic Coupling Technique

    NASA Astrophysics Data System (ADS)

    Bremnes, O.; Progin, O.; Gremaud, G.; Benoit, W.

    1997-04-01

    Ultrasonic experiments using a two-wave coupling technique were performed on 99.999% pure Al in order to study the interaction mechanisms occurring between dislocations and point defects. The coupling technique consists of measuring the attenuation of ultrasonic waves during low-frequency stress cycles σ(t). One obtains closed curves α(σ), called signatures, whose shape and evolution are characteristic of the interaction mechanism controlling the low-frequency dislocation motion. The signatures observed were attributed to the interaction of the dislocations with extrinsic point defects. A new interpretation of the evolution of the signatures measured below 200 K with respect to temperature and stress frequency had to be established: they are linked to depinning from immobile point defects, whereas a thermally activated depinning mechanism does not fit the observations. The signatures measured between 200 and 370 K were interpreted as dragging and depinning of extrinsic point defects which are increasingly mobile with temperature.

  18. Analyzing Snowpack Metrics Over Large Spatial Extents Using Calibrated, Enhanced-Resolution Brightness Temperature Data and Long Short Term Memory Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Norris, W.; J Q Farmer, C.

    2017-12-01

    Snow water equivalence (SWE) is a difficult metric to measure accurately over large spatial extents; SNOTEL sites are too localized, and traditional remotely sensed brightness temperature data are at too coarse a resolution to capture variation. The new Calibrated Enhanced-Resolution Brightness Temperature (CETB) data from the National Snow and Ice Data Center (NSIDC) offer remotely sensed brightness temperature data at an enhanced resolution of 3.125 km versus the original 25 km, which allows large spatial extents to be analyzed with reduced uncertainty compared to the 25 km product. While the 25 km brightness temperature data have proved useful in past research — one group found decreasing trends in SWE outweighed increasing trends three to one in North America; other researchers used the data to incorporate winter conditions, like snow cover, into ecological zoning criteria — with the new 3.125 km data it is possible to derive more accurate metrics for SWE, since we have far more spatial variability in measurements. Even with higher resolution data, using the 37 − 19 GHz frequency difference to estimate SWE distorts the data during times of melt onset and accumulation onset. Past researchers employed statistical splines, while other successful attempts utilized non-parametric curve fitting to smooth out the spikes distorting the metrics. In this work, rather than using legacy curve-fitting techniques, a Long Short-Term Memory (LSTM) Artificial Neural Network (ANN) was trained to perform curve fitting on the data. LSTM ANNs have shown great promise in modeling time series data, and with almost 40 years of data available — 14,235 days — there is plenty of training data for the ANN. LSTMs are ideal for this type of time series analysis because they allow important trends to persist for long periods of time but ignore short-term fluctuations; since LSTMs have limited mid- to short-term memory, they are well suited to smoothing out the large spikes generated in the melt and accumulation onset seasons while still capturing the overall trends in the data.
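    The abstract gives no architecture details, so the following is only a minimal PyTorch sketch of an LSTM smoother trained on a synthetic spiky series standing in for a brightness-temperature difference; a clean target is fabricated here purely to keep the example self-contained:

    ```python
    # Sketch: LSTM-based smoothing of a spiky time series (synthetic data).
    # Network size, training setup, and the supervised target are assumptions.
    import torch
    import torch.nn as nn

    class LSTMSmoother(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):           # x: (batch, time, 1)
            out, _ = self.lstm(x)
            return self.head(out)       # one smoothed value per time step

    # Synthetic series with sparse large spikes mimicking melt-onset distortion.
    t = torch.linspace(0, 8 * torch.pi, 512)
    clean = torch.sin(t) + 0.5 * torch.sin(0.1 * t)
    spikes = 2.0 * (torch.rand_like(t) < 0.02).float() * torch.randn_like(t)
    noisy = clean + spikes

    x, y = noisy.view(1, -1, 1), clean.view(1, -1, 1)
    model = LSTMSmoother()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    smoothed = model(x).detach().squeeze()
    ```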

  19. Comparative testing of dark matter models with 15 HSB and 15 LSB galaxies

    NASA Astrophysics Data System (ADS)

    Kun, E.; Keresztes, Z.; Simkó, A.; Szűcs, G.; Gergely, L. Á.

    2017-12-01

    Context. We assemble a database of 15 high surface brightness (HSB) and 15 low surface brightness (LSB) galaxies, for which surface brightness density and spectroscopic rotation curve data are both available and representative of various morphologies. We use this dataset to test the Navarro-Frenk-White, the Einasto, and the pseudo-isothermal sphere dark matter models. Aims: We investigate the compatibility of the pure baryonic model, and of baryonic plus one of the three dark matter models, with observations of the assembled galaxy database. When a dark matter component improves the fit to the spectroscopic rotation curve, we rank the models according to the goodness of fit to the datasets. Methods: We constructed the spatial luminosity density of the baryonic component based on the surface brightness profile of the galaxies. We estimated the mass-to-light (M/L) ratio of the stellar component through a previously proposed color-mass-to-light-ratio relation (CMLR), which yields stellar masses independent of the photometric band. We assumed an axisymmetric baryonic mass model with variable axis ratios, together with one of the three dark matter models, to provide the theoretical rotational velocity curves, and we compared them with the dataset. In a second attempt, we addressed the question of whether the dark component could be replaced by a pure baryonic model with fitted M/L ratios, varied over ranges consistent with CMLR relations derived from the available stellar population models. We employed the Akaike information criterion to establish the performance of the best-fit models. Results: For 7 galaxies (2 HSB and 5 LSB), neither model fits the dataset within the 1σ confidence level. For the other 23 cases, one of the models with dark matter explains the rotation curve data best. According to the Akaike information criterion, the pseudo-isothermal sphere emerges as most favored in 14 cases, followed by the Navarro-Frenk-White (6 cases) and the Einasto (3 cases) dark matter models. We find that the pure baryonic model with fitted M/L ratios falls within the 1σ confidence level for 10 HSB and 2 LSB galaxies, at the price of increasing the M/L ratios on average by a factor of two, but the fits are inferior compared to the best-fitting dark matter model.
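    Of the three halo models, the pseudo-isothermal sphere has the simplest closed-form rotation curve, V(r) = V_inf·sqrt(1 − (rc/r)·arctan(r/rc)) with V_inf² = 4πGρ0·rc²; a sketch of fitting it to an invented rotation curve (a real analysis would also add the baryonic contribution in quadrature):

    ```python
    # Sketch: fitting a pseudo-isothermal-sphere rotation curve.
    # Radii/velocities are invented; dark-matter-only toy fit.
    import numpy as np
    from scipy.optimize import curve_fit

    def v_iso(r, v_inf, rc):
        return v_inf * np.sqrt(1.0 - (rc / r) * np.arctan(r / rc))

    r = np.array([0.5, 1, 2, 4, 6, 8, 10, 12])            # kpc
    v = np.array([30, 51, 73, 90, 96, 100, 102, 103.0])   # km/s

    (v_inf, rc), pcov = curve_fit(v_iso, r, v, p0=[105.0, 1.5])
    G = 4.301e-6                                  # kpc (km/s)^2 / Msun
    rho0 = v_inf**2 / (4 * np.pi * G * rc**2)     # central density
    print(f"V_inf={v_inf:.1f} km/s, rc={rc:.2f} kpc, rho0={rho0:.2e} Msun/kpc^3")
    ```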

  20. Hyperspectral image classification by a variable interval spectral average and spectral curve matching combined algorithm

    NASA Astrophysics Data System (ADS)

    Senthil Kumar, A.; Keerthi, V.; Manjunath, A. S.; Werff, Harald van der; Meer, Freek van der

    2010-08-01

    Classification of hyperspectral images has been receiving considerable attention, with many new applications reported from the commercial and military sectors. Hyperspectral images are composed of a large number of spectral channels and have the potential to deliver a great deal of information about a remotely sensed scene. However, in addition to high dimensionality, hyperspectral image classification is compounded by the coarse ground pixel size of the sensor, a consequence of the need for adequate signal-to-noise ratio within a fine spectral passband. This causes multiple ground features to jointly occupy a single pixel. Spectral mixture analysis typically begins with pixel classification using spectral matching techniques, followed by the use of spectral unmixing algorithms to estimate endmember abundance values in the pixel. Spectral matching techniques are analogous to supervised pattern recognition approaches and try to estimate some similarity between the spectral signatures of the pixel and a reference target. In this paper, we propose a spectral matching approach that combines two schemes—the variable interval spectral average (VISA) method and the spectral curve matching (SCM) method. The VISA method helps to detect transient spectral features at different scales of spectral windows, while the SCM method finds a match between these features of the pixel and one of the library spectra by least-squares fitting. We also compare the performance of the combined algorithm with other spectral matching techniques using simulated and AVIRIS hyperspectral data sets. Our results indicate that the proposed combination technique exhibits stronger performance than the other methods in the classification of both pure and mixed class pixels simultaneously.

  1. Time-Frequency Analysis of the Dispersion of Lamb Modes

    NASA Technical Reports Server (NTRS)

    Prosser, W. H.; Seale, Michael D.; Smith, Barry T.

    1999-01-01

    Accurate knowledge of the velocity dispersion of Lamb modes is important for ultrasonic nondestructive evaluation methods used in detecting and locating flaws in thin plates and in determining their elastic stiffness coefficients. Lamb mode dispersion is also important in the acoustic emission technique for accurately triangulating the location of emissions in thin plates. In this research, the ability to characterize Lamb mode dispersion through a time-frequency analysis (the pseudo-Wigner-Ville distribution) was demonstrated. A major advantage of time-frequency methods is the ability to analyze acoustic signals containing multiple propagation modes, which overlap and superimpose in the time domain signal. By combining time-frequency analysis with a broadband acoustic excitation source, the dispersion of multiple Lamb modes over a wide frequency range can be determined from as little as a single measurement. In addition, the technique provides a direct measurement of the group velocity dispersion. The technique was first demonstrated in the analysis of a simulated waveform in an aluminum plate in which the Lamb mode dispersion was well known. Portions of the dispersion curves of the A0, A1, S0, and S2 Lamb modes were obtained from this one waveform. The technique was also applied to the analysis of experimental waveforms from a unidirectional graphite/epoxy composite plate. Measurements were made both along and perpendicular to the fiber direction. In this case, the signals contained only the lowest order symmetric and antisymmetric modes. A least-squares fit of the results from several source-to-detector distances was used. Theoretical dispersion curves were calculated and are shown to be in good agreement with the experimental results.

  2. AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves

    NASA Astrophysics Data System (ADS)

    Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.

    2017-02-01

    ImageJ is a graphical user interface (GUI) driven, public domain, Java-based software package for general image processing, traditionally used mainly in the life sciences. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy-specific image display environment and tools for astronomy-specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research-grade image calibration and analysis tools with a GUI-driven approach and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate students, high school students, or amateur astronomers, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.

  3. Multi-Filter Photometric Analysis of Three β Lyrae-type Eclipsing Binary Stars

    NASA Astrophysics Data System (ADS)

    Gardner, T.; Hahs, G.; Gokhale, V.

    2015-12-01

    We present light curve analysis of three variable stars, ASAS J105855+1722.2, NSVS 5066754, and NSVS 9091101. These objects are selected from a list of β Lyrae candidates published by Hoffman et al. (2008). Light curves are generated using data collected at the 31-inch NURO telescope at the Lowell Observatory in Flagstaff, Arizona in three filters: Bessell B, V, and R. Additional observations were made using the 14-inch Meade telescope at the Truman State Observatory in Kirksville, Missouri using Baader R, G, and B filters. In this paper, we present the light curves for these three objects and generate a truncated eight-term Fourier fit to these light curves. We use the Fourier coefficients from this fit to confirm ASAS J105855+1722.2 and NSVS 5066754 as β Lyrae type systems, and to suggest that NSVS 9091101 may be an RR Lyrae-type system. We measure the O'Connell effect observed in two of these systems (ASAS J105855+1722.2 and NSVS 5066754), and quantify this effect by calculating the "Light Curve Asymmetry" (LCA) and the "O'Connell Effect Ratio" (OER).
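    A truncated Fourier fit of this kind is linear in the coefficients, so it reduces to ordinary least squares on a design matrix; a sketch with synthetic phased data (the eight-term form follows the description above):

    ```python
    # Sketch: truncated eight-term Fourier fit of a phased light curve by
    # linear least squares. Phases/magnitudes are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    phase = rng.uniform(0, 1, 200)
    mag = (12.0 + 0.4 * np.cos(2 * np.pi * phase)
                + 0.1 * np.cos(4 * np.pi * phase)
                + rng.normal(0, 0.01, phase.size))

    n_terms = 8
    cols = [np.ones_like(phase)]
    for k in range(1, n_terms + 1):
        cols.append(np.cos(2 * np.pi * k * phase))
        cols.append(np.sin(2 * np.pi * k * phase))
    A = np.column_stack(cols)

    coeffs, *_ = np.linalg.lstsq(A, mag, rcond=None)
    model = A @ coeffs
    # cosine terms a_k sit in coeffs[1::2], sine terms b_k in coeffs[2::2]
    ```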

  4. Are conventional statistical techniques exhaustive for defining metal background concentrations in harbour sediments? A case study: The Coastal Area of Bari (Southeast Italy).

    PubMed

    Mali, Matilda; Dell'Anna, Maria Michela; Mastrorilli, Piero; Damiani, Leonardo; Ungaro, Nicola; Belviso, Claudia; Fiore, Saverio

    2015-11-01

    Sediment contamination by metals poses significant risks to coastal ecosystems and is considered to be problematic for dredging operations. The determination of background values of metal and metalloid distributions based on site-specific variability is fundamental in assessing pollution levels in harbour sediments. The novelty of the present work consists in addressing the scope and limitations of analysing port sediments through conventional statistical techniques (such as linear regression analysis, construction of cumulative frequency curves, and the iterative 2σ technique) that are commonly employed for assessing Regional Geochemical Background (RGB) values in coastal sediments. This study ascertained that although the tout court use of such techniques in determining the RGB values in harbour sediments seems appropriate (the chemical-physical parameters of port sediments fit well with the statistical equations), it should nevertheless be avoided, because it may be misleading and can mask key aspects of the study area that can only be revealed by further investigations, such as mineralogical and multivariate statistical analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
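    Of the techniques named above, the iterative 2σ technique is easily sketched: repeatedly discard values outside mean ± 2σ until no more points are removed. The concentrations below are invented:

    ```python
    # Sketch of the iterative 2-sigma technique for estimating a geochemical
    # background: iterate mean/std and discard outliers beyond 2 sigma until
    # the retained set stabilizes. Concentration values are invented.
    import numpy as np

    def iterative_2sigma(values, max_iter=50):
        v = np.asarray(values, dtype=float)
        for _ in range(max_iter):
            m, s = v.mean(), v.std(ddof=1)
            kept = v[np.abs(v - m) <= 2 * s]
            if kept.size == v.size:     # converged: nothing more removed
                break
            v = kept
        return v.mean(), v.std(ddof=1)

    conc = np.array([12, 14, 13, 15, 16, 14, 90, 13, 12, 17, 55, 14.5])
    bg_mean, bg_sd = iterative_2sigma(conc)
    print(f"background = {bg_mean:.1f} +/- {2 * bg_sd:.1f} (mean +/- 2 sd)")
    ```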

  5. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2014-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information, based on steady-state information extracted from available nominal engine measurement data, is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.

  6. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan Walker

    2015-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information, based on steady-state information extracted from available nominal engine measurement data, is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.

  7. Toward Continuous GPS Carrier-Phase Time Transfer: Eliminating the Time Discontinuity at an Anomaly

    PubMed Central

    Yao, Jian; Levine, Judah; Weiss, Marc

    2015-01-01

    The wide application of Global Positioning System (GPS) carrier-phase (CP) time transfer is limited by the problem of boundary discontinuity (BD). The discontinuity has two categories. One is "day boundary discontinuity," which has been studied extensively and can be solved by multiple methods [1–8]. The other category of discontinuity, called "anomaly boundary discontinuity (anomaly-BD)," comes from a GPS data anomaly. The anomaly can be a data gap (i.e., missing data), a GPS measurement error (i.e., bad data), or a cycle slip. An initial study of the anomaly-BD showed that we can fix the discontinuity if the anomaly lasts no more than 20 min, using a polynomial curve-fitting strategy to repair the anomaly [9]. However, the data anomaly sometimes lasts longer than 20 min, so a better curve-fitting strategy is needed. In addition, a cycle slip, another type of data anomaly, can occur and lead to an anomaly-BD. To solve these problems, this paper proposes a new strategy: satellite-clock-aided curve fitting with cycle slip detection. This strategy first applies the satellite clock correction to the GPS data and then performs polynomial curve fitting for the code and phase data, as before. Our study shows that the phase-data residual is only ~3 mm for all GPS satellites. The new strategy also detects and counts cycle slips by searching for the minimum curve-fitting residual. Extensive examples show that this new strategy enables us to repair up to a 40-min GPS data anomaly, regardless of whether the anomaly is due to a data gap, a cycle slip, or a combination of the two. We also find that interference with the GPS signal, known as "jamming," can lead to a time-transfer error, and that the new strategy can compensate for jamming outages, eliminating the impact of jamming on time transfer. As a whole, we greatly improve the robustness of GPS CP time transfer. PMID:26958451
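    The repair strategy can be caricatured in a few lines: fit a polynomial to the good data on both sides of the anomaly, search integer cycle-slip candidates for the smallest fit residual, then bridge the gap with the polynomial. This sketch uses a synthetic phase series and omits the satellite clock correction step:

    ```python
    # Sketch: polynomial curve fitting across a carrier-phase data gap,
    # with a brute-force search for an integer cycle slip that minimizes
    # the fit residual. The series is synthetic, not real GPS data.
    import numpy as np

    t = np.arange(0, 120.0)                     # minutes
    phase = 0.02 * t + 1e-4 * t**2              # smooth "true" phase (cycles)
    gap = (t >= 40) & (t < 80)                  # 40-min anomaly
    observed = phase.copy()
    observed[t >= 80] += 3                      # inject a 3-cycle slip

    best = None
    for slip in range(-10, 11):                 # candidate integer slips
        trial = observed.copy()
        trial[t >= 80] -= slip
        coef = np.polyfit(t[~gap], trial[~gap], deg=3)
        resid = np.std(trial[~gap] - np.polyval(coef, t[~gap]))
        if best is None or resid < best[0]:
            best = (resid, slip, coef)

    resid, slip, coef = best
    repaired = observed.copy()
    repaired[t >= 80] -= slip                   # undo the detected slip
    repaired[gap] = np.polyval(coef, t[gap])    # bridge the gap
    print(f"detected slip = {slip} cycles, residual = {resid:.2e}")
    ```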

  8. Investigation on phase transitions of 1-decylammonium hydrochloride as the potential thermal energy storage material

    NASA Astrophysics Data System (ADS)

    Dan, Wen-Yan; Di, You-Ying; He, Dong-Hua; Liu, Yu-Pu

    2011-02-01

    1-Decylammonium hydrochloride was synthesized by the method of liquid phase synthesis. Chemical analysis, elemental analysis, and X-ray single-crystal diffraction techniques were applied to characterize its composition and structure. Low-temperature heat capacities of the compound were measured with a precision automated adiabatic calorimeter over the temperature range from 78 to 380 K. Three solid-solid phase transitions were observed at the peak temperatures of 307.52 ± 0.13, 325.02 ± 0.19, and 327.26 ± 0.07 K. The molar enthalpies and entropies of the three phase transitions were determined based on the analysis of the heat capacity curves. Experimental molar heat capacities were fitted to two polynomial equations in temperature by the least-squares method. Smoothed heat capacities and thermodynamic functions of the compound relative to the standard reference temperature 298.15 K were calculated and tabulated at intervals of 5 K based on the fitted polynomials.

  9. Possible Transit Timing Variations of the TrES-3 Planetary System

    NASA Astrophysics Data System (ADS)

    Jiang, Ing-Guey; Yeh, Li-Chin; Thakur, Parijat; Wu, Yu-Ting; Chien, Ping; Lin, Yi-Ling; Chen, Hong-Yu; Hu, Juei-Hwa; Sun, Zhao; Ji, Jianghui

    2013-03-01

    Five newly observed transit light curves of the TrES-3 planetary system are presented. Together with other light-curve data from the literature, 23 transit light curves in total, covering an overall timescale of 911 epochs, have been analyzed through a standard procedure. From these observational data, the system's orbital parameters are determined and possible transit timing variations (TTVs) are investigated. Given that a null TTV produces a fit with reduced χ² = 1.52, our results agree with previous work that TTVs might not exist in these data. However, a one-frequency oscillating TTV model, giving a fit with reduced χ² = 0.93, does possess a statistically higher probability. It is thus concluded that future observations and dynamical simulations for this planetary system will be very important.

  10. Evaluation of the swelling behaviour of iota-carrageenan in monolithic matrix tablets.

    PubMed

    Kelemen, András; Buchholcz, Gyula; Sovány, Tamás; Pintye-Hódi, Klára

    2015-08-10

    The swelling properties of monolithic matrix tablets containing iota-carrageenan were studied at different pH values, with measurements of the swelling force and characterization of the profile of the swelling curve. The swelling force meter was linked to a PC by an RS232 cable, and the measured data were evaluated with self-developed software. The monitor displayed the swelling force vs. time curve with the important parameters, which could be fitted via an Analysis menu. In the case of the iota-carrageenan matrix tablets, it was concluded that the pH and the pressure did not influence the swelling process, and that the first section of the swelling curve could be fitted by the Korsmeyer-Peppas equation. Copyright © 2015 Elsevier B.V. All rights reserved.
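    The Korsmeyer-Peppas equation is the power law F(t) = k·tⁿ; a sketch of fitting it to the first section of a swelling curve (time/force values are invented placeholders):

    ```python
    # Sketch: fitting the first section of a swelling curve with the
    # Korsmeyer-Peppas power law F(t) = k * t**n. Data are invented.
    import numpy as np
    from scipy.optimize import curve_fit

    def korsmeyer_peppas(t, k, n):
        return k * t**n

    t = np.array([1, 2, 4, 8, 15, 30, 60.0])                # min
    f = np.array([0.9, 1.44, 2.31, 3.7, 5.67, 9.1, 14.6])   # force (a.u.)

    (k, n), pcov = curve_fit(korsmeyer_peppas, t, f, p0=[1.0, 0.5])
    print(f"k = {k:.3f}, n = {n:.3f}")  # n characterizes the transport regime
    ```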

  11. Application of the method of temporal moments to interpret solute transport with sorption and degradation

    NASA Astrophysics Data System (ADS)

    Pang, Liping; Goltz, Mark; Close, Murray

    2003-01-01

    In this note, we applied the temporal moment solutions of Das and Kluitenberg [1996. Soil Sci. Soc. Am. J. 60, 1724] for one-dimensional advective-dispersive solute transport with linear equilibrium sorption and first-order degradation for time pulse sources to analyse soil column experimental data. Unlike most other moment solutions, these solutions consider the interplay of degradation and sorption. This permits estimation of a first-order degradation rate constant using the zeroth moment of column breakthrough data, as well as estimation of the retardation factor or sorption distribution coefficient of a degrading solute using the first moment. The method of temporal moments (MOM) was applied to analyse breakthrough data from a laboratory column study of atrazine, hexazinone and rhodamine WT transport in volcanic pumice sand, as well as experimental data from the literature. Transport and degradation parameters obtained using the MOM were compared to parameters obtained by fitting the breakthrough data with an advective-dispersive transport model with equilibrium sorption and first-order degradation, using the nonlinear least-squares curve-fitting program CXTFIT. The results derived from the literature data were also compared with estimates reported in the literature using different equilibrium models. The good agreement suggests that the MOM could provide an additional useful means of parameter estimation for transport involving equilibrium sorption and first-order degradation. We found that the MOM fitted breakthrough curves with tailing better than curve fitting did. However, the MOM analysis requires complete breakthrough curves and relatively frequent data collection to ensure the accuracy of the moments obtained from the breakthrough data.
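    The moments themselves are simple integrals of the breakthrough curve, m0 = ∫C dt and μ1 = ∫tC dt / m0; a sketch of computing them by trapezoidal integration on a synthetic curve (the mapping from moments to the degradation rate and retardation factor follows Das and Kluitenberg):

    ```python
    # Sketch: zeroth and first temporal moments of a breakthrough curve by
    # trapezoidal integration. The curve below is synthetic.
    import numpy as np

    t = np.linspace(0, 48, 200)                  # h
    c = np.exp(-0.5 * ((t - 20) / 4.0)**2)       # concentration (mg/L)

    m0 = np.trapz(c, t)                          # zeroth moment (mass recovery)
    mu1 = np.trapz(t * c, t) / m0                # normalized first moment

    print(f"m0 = {m0:.2f} mg h/L, mean arrival = {mu1:.2f} h")
    # m0 relative to the injected mass constrains the first-order decay rate;
    # mu1 relative to a conservative tracer's constrains the retardation factor.
    ```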

  12. Fitting the post-keratoplasty cornea with hydrogel lenses.

    PubMed

    Katsoulos, Costas; Nick, Vasileiou; Lefteris, Karageorgiadis; Theodore, Mousafeiropoulos

    2009-02-01

    We report two cases of patients who had undergone penetrating keratoplasty (three eyes in total) and who were fitted with hydrogel lenses. In the first case, a 28-year-old male presented with an interest in contact lens fitting. He had undergone corneal transplantation in both eyes about 5 years earlier. After topographies and trial fitting were performed, it was decided to fit him with reverse geometry hydrogel lenses, owing to the globular geometry of the cornea, the resultant instability of RGPs, and personal preference. In the second case, a 26-year-old female who had also undergone penetrating keratoplasty was fitted with a hydrogel toric lens of high cylinder in the right eye. The final hydrogel lenses for the first subject incorporated a custom tricurve design, in which the second curve was steeper than the base curve and the third curve flatter than the second but still steeper than the first. Visual acuity was 6/7.5 RE and a mediocre 6/15 LE (OU 6/7.5). The second subject achieved 6/4.5 acuity RE with the high-cylinder hydrogel toric lens. In corneas exhibiting extreme protrusion, such as in keratoglobus and some cases after penetrating keratoplasty, the curvatures are so extreme and the cornea so globular that specific fitting options are required to improve lens and optical stability: sclerals, small-diameter RGPs, and reverse geometry hydrogel lenses. In selected cases such as the above, a large-diameter inverse geometry RGP may be fitted only if the eyelid shape and tension permit it. The first case demonstrates that the option of hydrogel lenses is viable when the patient has no interest in RGPs and in certain cases can improve vision to satisfactory levels. In other cases, graft toricity might be so high that the practitioner will need to employ hydrogel torics with large amounts of cylinder in order to correct vision. In such cases, the patient should be closely monitored in order to avoid complications from hypoxia.

  13. Testing feedback-modified dark matter haloes with galaxy rotation curves: estimation of halo parameters and consistency with ΛCDM scaling relations

    NASA Astrophysics Data System (ADS)

    Katz, Harley; Lelli, Federico; McGaugh, Stacy S.; Di Cintio, Arianna; Brook, Chris B.; Schombert, James M.

    2017-04-01

    Cosmological N-body simulations predict dark matter (DM) haloes with steep central cusps (e.g. NFW). This contradicts observations of gas kinematics in low-mass galaxies that imply the existence of shallow DM cores. Baryonic processes such as adiabatic contraction and gas outflows can, in principle, alter the initial DM density profile, yet their relative contributions to the halo transformation remain uncertain. Recent high-resolution, cosmological hydrodynamic simulations by Di Cintio et al. (DC14) predict that inner density profiles depend systematically on the ratio of stellar-to-DM mass (M*/Mhalo). Using a Markov Chain Monte Carlo approach, we test the NFW and the M*/Mhalo-dependent DC14 halo models against a sample of 147 galaxy rotation curves from the new Spitzer Photometry and Accurate Rotation Curves data set. These galaxies all have extended H I rotation curves from radio interferometry as well as accurate stellar-mass-density profiles from near-infrared photometry. The DC14 halo profile provides markedly better fits to the data than the NFW profile. Unlike NFW, the DC14 halo parameters found in our rotation-curve fits naturally fall within two standard deviations of the mass-concentration relation predicted by Λ cold dark matter (ΛCDM) and the stellar mass-halo mass relation inferred from abundance matching, with few outliers. Halo profiles modified by baryonic processes are therefore more consistent with expectations from ΛCDM cosmology and provide better fits to galaxy rotation curves across a wide range of galaxy properties than do halo models that neglect baryonic physics. Our results offer a solution to the decades-long cusp-core discrepancy.

  14. The Type Ia Supernova Color-Magnitude Relation and Host Galaxy Dust: A Simple Hierarchical Bayesian Model

    NASA Astrophysics Data System (ADS)

    Mandel, Kaisey S.; Scolnic, Daniel M.; Shariff, Hikmatali; Foley, Ryan J.; Kirshner, Robert P.

    2017-06-01

    Conventional Type Ia supernova (SN Ia) cosmology analyses currently use a simplistic linear regression of magnitude versus color and light curve shape, which does not model intrinsic SN Ia variations and host galaxy dust as physically distinct effects, resulting in low color-magnitude slopes. We construct a probabilistic generative model for the dusty distribution of extinguished absolute magnitudes and apparent colors as the convolution of an intrinsic SN Ia color-magnitude distribution and a host galaxy dust reddening-extinction distribution. If the intrinsic color-magnitude (M_B versus B − V) slope β_int differs from the host galaxy dust law R_B, this convolution results in a specific curve of mean extinguished absolute magnitude versus apparent color. The derivative of this curve smoothly transitions from β_int in the blue tail to R_B in the red tail of the apparent color distribution. The conventional linear fit approximates this effective curve near the average apparent color, resulting in an apparent slope β_app between β_int and R_B. We incorporate these effects into a hierarchical Bayesian statistical model for SN Ia light curve measurements, and analyze a data set of SALT2 optical light curve fits of 248 nearby SNe Ia at z < 0.10. The conventional linear fit gives β_app ≈ 3. Our model finds β_int = 2.3 ± 0.3 and a distinct dust law of R_B = 3.8 ± 0.3, consistent with the average for Milky Way dust, while correcting a systematic distance bias of ~0.10 mag in the tails of the apparent color distribution. Finally, we extend our model to examine the SN Ia luminosity-host mass dependence in terms of intrinsic and dust components.

  15. Dose rate prediction methodology for remote handled transuranic waste workers at the waste isolation pilot plant.

    PubMed

    Hayes, Robert

    2002-10-01

    An approach is described for estimating future dose rates to Waste Isolation Pilot Plant workers processing remote handled transuranic waste. The waste streams will come from the entire U.S. Department of Energy complex and can take virtually any form arising from the processing sequences for defense-related production, radiochemistry, activation, and related work. For this reason, the average waste matrix from all generator sites is used to estimate the average radiation fields over the facility lifetime. Innovative techniques were applied to estimate the expected radiation fields: nonlinear curve-fitting was used to predict exposure rate profiles from cylindrical sources using closed-form equations for line and disk sources. This information becomes the basis for Safety Analysis Report dose rate estimates and for present and future ALARA design reviews when attempts are made to reduce worker doses.

  16. Using flowmeter pulse tests to define hydraulic connections in the subsurface: A fractured shale example

    USGS Publications Warehouse

    Williams, J.H.; Paillet, Frederick L.

    2002-01-01

    Cross-borehole flowmeter pulse tests define subsurface connections between discrete fractures using short stress periods to monitor the propagation of the pulse through the flow system. This technique is an improvement over other cross-borehole techniques because measurements can be made in open boreholes without packers or prior identification of water-producing intervals. The method is based on the concept of monitoring the propagation of pulses rather than steady flow through the fracture network. In this method, a hydraulic stress is applied to a borehole connected to a single permeable fracture, and the distribution of flow induced by that stress is monitored in adjacent boreholes. The transient flow responses are compared to type curves computed for several different types of fracture connections. The shape of the transient flow response indicates the type of fracture connection, and the fit of the data to the type curve yields an estimate of its transmissivity and storage coefficient. The flowmeter pulse test technique was applied in fractured shale at a volatile-organic contaminant plume in Watervliet, New York. Flowmeter and other geophysical logs were used to identify permeable fractures in eight boreholes in and near the contaminant plume using single-borehole flow measurements. Flowmeter cross-hole pulse tests were then used to identify connections between the fractures detected in the boreholes. The results indicated a permeable fracture network connecting many of the individual boreholes, and demonstrated the presence of an ambient upward hydraulic-head gradient throughout the site.

  17. Peripheral absolute threshold spectral sensitivity in retinitis pigmentosa.

    PubMed Central

    Massof, R W; Johnson, M A; Finkelstein, D

    1981-01-01

    Dark-adapted spectral sensitivities were measured in the peripheral retinas of 38 patients diagnosed as having typical retinitis pigmentosa (RP) and in 3 normal volunteers. The patients included those having autosomal dominant and autosomal recessive inheritance patterns. Results were analysed by comparisons with the CIE standard scotopic spectral visibility function and with Judd's modification of the photopic spectral visibility function, with consideration of contributions from changes in spectral transmission of preretinal media. The data show 3 general patterns. One group of patients had absolute threshold spectral sensitivities that were fit by Judd's photopic visibility curve. Absolute threshold spectral sensitivities for a second group of patients were fit by a normal scotopic spectral visibility curve. The third group of patients had absolute threshold spectral sensitivities that were fit by a combination of scotopic and photopic spectral visibility curves. The autosomal dominant and autosomal recessive modes of inheritance were represented in each group of patients. These data indicate that RP patients have normal rod and/or cone spectral sensitivities, and support the subclassification of patients described previously by Massof and Finkelstein. PMID:7459312

  18. Guidelines for application of learning/cost improvement curves

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1975-01-01

    The differences between the terms learning curve and improvement curve are noted, as well as the differences between the Wright system and the Crawford system. Learning curve computational techniques are reviewed, along with a method for arriving at a composite learning curve for a system given detail curves classified either by function or simply by subsystem. Techniques are discussed for determining the theoretical first unit (TFU) cost using several of the currently accepted methods; the TFU cost is sometimes referred to simply as the number one cost. A tabular presentation of various learning curve slope values is given, together with a discussion of trends in the application of learning/improvement curves and an outlook for the future.
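    The unit-cost form of such curves is Y(n) = TFU·n^b with b = log(slope)/log 2, so an 85% curve cuts unit cost to 85% at each doubling of cumulative output; a sketch with example numbers:

    ```python
    # Sketch: a Crawford-style (unit) learning curve, Y(n) = TFU * n**b with
    # b = log(slope)/log(2). TFU and slope here are example numbers only.
    import math

    def unit_cost(n, tfu, slope):
        b = math.log(slope) / math.log(2)
        return tfu * n**b

    tfu = 1000.0      # theoretical first unit cost
    slope = 0.85      # 85% learning curve
    for n in (1, 2, 4, 8, 16):
        print(f"unit {n:2d}: {unit_cost(n, tfu, slope):7.1f}")

    # cumulative cost of the first N units is the sum of the unit costs
    total_16 = sum(unit_cost(i, tfu, slope) for i in range(1, 17))
    print(f"first 16 units: {total_16:.1f}")
    ```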

  19. Aleatory Uncertainty and Scale Effects in Computational Damage Models for Failure and Fragmentation

    DTIC Science & Technology

    2014-09-01

    larger specimens, small specimens have, on average, higher strengths. Equivalently, because curves for small specimens fall below those of larger...the material strength associated with each realization parameter R in Equation (7), and strength distribution curves associated with multiple...effects in brittle media [58], which applies micromorphological dimensional analysis to obtain a universal curve which closely fits rate-dependent

  20. Proceedings of the Annual Symposium on Frequency Control (39th) Held in Philadelphia, Pennsylvania on 29-31 May 1985

    DTIC Science & Technology

    1985-05-01

    distribution, was evaluation of phase shift through best fit of assumed to be the beam response to the microwave theoretical curves and experimental...vibration sidebands o Acceleration as shown in the lower calculated curve . o High-Temperature Exposure o Thermal Vacuum Two of the curves show actual phase ...conclude that the method to measure the phase noise with spectrum estimation is workable, and it has no principle limitation. From the curve it has been

  1. The structure of binding curves and practical identifiability of equilibrium ligand-binding parameters

    PubMed Central

    Middendorf, Thomas R.

    2017-01-01

    A critical but often overlooked question in the study of ligands binding to proteins is whether the parameters obtained from analyzing binding data are practically identifiable (PI), i.e., whether the estimates obtained from fitting models to noisy data are accurate and unique. Here we report a general approach to assess and understand binding parameter identifiability, which provides a toolkit to assist experimentalists in the design of binding studies and in the analysis of binding data. The partial fraction (PF) expansion technique is used to decompose binding curves for proteins with n ligand-binding sites exactly and uniquely into n components, each of which has the form of a one-site binding curve. The association constants of the PF component curves, being the roots of an n-th order polynomial, may be real or complex. We demonstrate a fundamental connection between binding parameter identifiability and the nature of these one-site association constants: all binding parameters are identifiable if the constants are all real and distinct; otherwise, at least some of the parameters are not identifiable. The theory is used to construct identifiability maps from which the practical identifiability of binding parameters for any two-, three-, or four-site binding curve can be assessed. Instructions for extending the method to generate identifiability maps for proteins with more than four binding sites are also given. Further analysis of the identifiability maps leads to the simple rule that the maximum number of structurally identifiable binding parameters (shown in the previous paper to be equal to n) will also be PI only if the binding curve line shape contains n resolved components. PMID:27993951
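    The identifiability test can be sketched for a two-site curve: the one-site association constants come from the roots of the binding polynomial, and identifiability requires them to be real and distinct. The Adair constants below are example values:

    ```python
    # Sketch: decomposing a two-site binding curve into one-site components
    # via the roots of the binding polynomial P(x) = 1 + b1*x + b2*x**2.
    # The roots r_i give one-site association constants K_i = -1/r_i; per the
    # abstract, all-real, distinct K_i imply identifiable parameters.
    import numpy as np

    b1, b2 = 5.0, 4.0                        # example Adair constants
    roots = np.roots([b2, b1, 1.0])          # roots of b2*x^2 + b1*x + 1
    K = -1.0 / roots                         # one-site association constants

    print("component association constants:", np.sort(K))
    distinct = np.unique(np.round(K.real, 9)).size == K.size
    if np.all(np.isreal(K)) and distinct:
        print("real and distinct -> binding parameters identifiable")
    ```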

  2. Predictive modeling of transient storage and nutrient uptake: Implications for stream restoration

    USGS Publications Warehouse

    O'Connor, Ben L.; Hondzo, Miki; Harvey, Judson W.

    2010-01-01

    This study examined two key aspects of reactive transport modeling for stream restoration purposes: the accuracy of the nutrient spiraling and transient storage models for quantifying reach-scale nutrient uptake, and the ability to quantify transport parameters using measurements and scaling techniques in order to improve upon traditional conservative tracer fitting methods. Nitrate (NO3–) uptake rates inferred using the nutrient spiraling model underestimated the total NO3– mass loss by 82%, which was attributed to the exclusion of dispersion and transient storage. The transient storage model was more accurate with respect to the NO3– mass loss (±20%) and also demonstrated that uptake in the main channel was more significant than in storage zones. Conservative tracer fitting was unable to produce transport parameter estimates for a riffle-pool transition of the study reach, while forward modeling of solute transport using measured/scaled transport parameters matched conservative tracer breakthrough curves for all reaches. Additionally, solute exchange between the main channel and embayment surface storage zones was quantified using first-order theory. These results demonstrate that it is vital to account for transient storage in quantifying nutrient uptake, and the continued development of measurement/scaling techniques is needed for reactive transport modeling of streams with complex hydraulic and geomorphic conditions.

  3. Estimating zero strain states of very soft tissue under gravity loading using digital image correlation

    PubMed Central

    Gao, Zhan; Desai, Jaydev P.

    2009-01-01

    This paper presents several experimental techniques and concepts in the process of measuring mechanical properties of very soft tissue in an ex vivo tensile test. Gravitational body force on very soft tissue causes pre-compression and results in a non-uniform initial deformation. The global Digital Image Correlation technique is used to measure the full field deformation behavior of liver tissue in uniaxial tension testing. A maximum stretching band is observed in the incremental strain field when a region of tissue passes from compression and enters a state of tension. A new method for estimating the zero strain state is proposed: the zero strain position is close to, but ahead of the position of the maximum stretching band, or in other words, the tangent of a nominal stress-stretch curve reaches minimum at λ ≳ 1. The approach, to identify zero strain by using maximum incremental strain, can be implemented in other types of image-based soft tissue analysis. The experimental results of ten samples from seven porcine livers are presented and material parameters for the Ogden model fit are obtained. The finite element simulation based on the fitted model confirms the effect of gravity on the deformation of very soft tissue and validates our approach. PMID:20015676

  4. Least Squares Procedures.

    ERIC Educational Resources Information Center

    Hester, Yvette

    Least squares methods are sophisticated mathematical curve fitting procedures used in all classical parametric methods. The linear least squares approximation is most often associated with finding the "line of best fit" or the regression line. Since all statistical analyses are correlational and all classical parametric methods are least…
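    As a minimal illustration, the "line of best fit" reduces to solving the normal equations (XᵀX)β = Xᵀy; a sketch with invented data:

    ```python
    # Sketch: the classical "line of best fit" by linear least squares,
    # solved via the normal equations (X^T X) beta = X^T y. Data invented.
    import numpy as np

    x = np.array([1.0, 2, 3, 4, 5, 6])
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

    X = np.column_stack([np.ones_like(x), x])     # design matrix [1, x]
    beta = np.linalg.solve(X.T @ X, X.T @ y)      # [intercept, slope]
    print(f"y = {beta[1]:.3f} x + {beta[0]:.3f}")
    ```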

  5. Mathcad in the Chemistry Curriculum Symbolic Software in the Chemistry Curriculum

    NASA Astrophysics Data System (ADS)

    Zielinski, Theresa Julia

    2000-05-01

    Physical chemistry is such a broad discipline that the topics we expect average students to complete in two semesters usually exceed their ability for meaningful learning. Consequently, the number and kind of topics and the efficiency with which students can learn them are important concerns. What topics are essential and what can we do to provide efficient and effective access to those topics? How do we accommodate the fact that students come to upper-division chemistry courses with a variety of nonuniformly distributed skills, a bit of calculus, and some physics studied one or more years before physical chemistry? The critical balance between depth and breadth of learning in courses and curricula may be achieved through appropriate use of technology and especially through the use of symbolic mathematics software.

    Software programs such as Mathcad, Mathematica, and Maple, however, have learning curves that diminish their effectiveness for novices. There are several ways to address the learning curve conundrum. First, basic instruction in the software provided during laboratory sessions should be followed by requiring laboratory reports that use the software. Second, one should assign weekly homework that requires the software and builds student skills within the discipline and with the software. Third, a complementary method, supported by this column, is to provide students with Mathcad worksheets or templates that focus on one set of related concepts and incorporate a variety of features of the software that they are to use to learn chemistry.

    In this column we focus on two significant topics for young chemists. The first is curve-fitting and the statistical analysis of the fitting parameters. The second is the analysis of the rotation/vibration spectrum of a diatomic molecule, HCl. A broad spectrum of Mathcad documents exists for teaching chemistry. One collection of 50 documents can be found at http://www.monmouth.edu/~tzielins/mathcad/Lists/index.htm. Another collection of peer-reviewed documents is developing through this column at the JCE Internet Web site, http://jchemed.chem.wisc.edu/JCEWWW/Features/McadInChem/index.html.

    With this column we add three peer-reviewed and tested Mathcad documents to the JCE site. In Linear Least-Squares Regression, Sidney H. Young and Andrzej Wierzbicki demonstrate various implicit and explicit methods for determining the slope and intercept of the regression line for experimental data. The document shows how to determine the standard deviation for the slope, the intercept, and the standard deviation of the overall fit. Students are next given the opportunity to examine the confidence level for the fit through the Student's t-test. Examination of the residuals of the fit leads students to explore the possibility of rejecting points in a set of data. The document concludes with a discussion of and practice with adding a quadratic term to create a polynomial fit to a set of data and how to determine if the quadratic term is statistically significant. There is full documentation of the various steps used throughout the exposition of the statistical concepts. Although the statistical methods presented in this worksheet are generally accessible to average physical chemistry students, an instructor would be needed to explain the finer points of the matrix methods used in some sections of the worksheet. The worksheet is accompanied by a set of data for students to use to practice the techniques presented.
    It would be worthwhile for students to spend one or two laboratory periods learning to use the concepts presented and then to apply them to experimental data they have collected for themselves. Any linear or linearizable data set would be appropriate for use with this Mathcad worksheet. Alternatively, instructors may select sections of the document suited to the skill level of their students and the laboratory tasks at hand.

    In a second Mathcad document, Non-Linear Least-Squares Regression, Young and Wierzbicki introduce the basic concepts of nonlinear curve-fitting and develop the techniques needed to fit a variety of mathematical functions to experimental data. This approach is especially important when mathematical models for chemical processes cannot be linearized. In Mathcad the Levenberg-Marquardt algorithm is used to determine the best fitting parameters for a particular mathematical model. As in linear least-squares, the goal of the fitting process is to find the values for the fitting parameters that minimize the sum of the squares of the deviations between the data and the mathematical model. Students are asked to determine the fitting parameters, use the Hessian matrix to compute the standard deviation of the fitting parameters, test for the significance of the parameters using Student's t-test, use residual analysis to test for data points to remove, and repeat the calculations for another set of data. The nonlinear least-squares procedure follows closely on the pattern set up for linear least-squares by the same authors (see above). If students master the linear least-squares worksheet content they will be able to master the nonlinear least-squares technique (see also refs 1, 2).

    In the third document, The Analysis of the Vibrational Spectrum of a Linear Molecule by Richard Schwenz, William Polik, and Sidney Young, the authors build on the concepts presented in the curve fitting worksheets described above. This vibrational analysis document, which supports a classic experiment performed in the physical chemistry laboratory, shows how a Mathcad worksheet can increase the efficiency by which a set of complicated manipulations for data reduction can be made more accessible for students. The increase in efficiency frees up time for students to develop a fuller understanding of the physical chemistry concepts important to the interpretation of spectra and understanding of bond vibrations in general. The analysis of the vibration/rotation spectrum for a linear molecule worksheet builds on the rich literature for this topic (3). Before analyzing their own spectral data, students practice and learn the concepts and methods of the HCl spectral analysis by using the fundamental and first harmonic vibrational frequencies provided by the authors. This approach has a fundamental pedagogical advantage. Most explanations in laboratory texts are very concise and lack mathematical details required by average students. This Mathcad worksheet acts as a tutor; it guides students through the essential concepts for data reduction and lets them focus on learning important spectroscopic concepts. The Mathcad worksheet is amply annotated. Students who have moderate skill with the software and have learned about regression analysis from the curve-fitting worksheets described in this column will be able to complete and understand their analysis of the IR spectrum of HCl.
    The three Mathcad worksheets described here stretch the physical chemistry curriculum by presenting important topics in forms that students can use with only moderate Mathcad skills. The documents facilitate learning by giving students opportunities to interact with the material in meaningful ways in addition to using the documents as sources of techniques for building their own data-reduction worksheets. However, working through these Mathcad worksheets is not a trivial task for the average student. Support needs to be provided by the instructor to ease students through more advanced mathematical and Mathcad processes. These worksheets raise the question of how much we can ask diligent students to do in one course and how much time they need to spend to master the essential concepts of that course.

    The Mathcad documents and associated PDF versions are available at the JCE Internet WWW site. The Mathcad documents require Mathcad version 6.0 or higher and the PDF files require Adobe Acrobat. Every effort has been made to make the documents fully compatible across the various Mathcad versions. Users may need to refer to Mathcad manuals for functions that vary with the Mathcad version number.

    Literature Cited
    1. Bevington, P. R. Data Reduction and Error Analysis for the Physical Sciences; McGraw-Hill: New York, 1969.
    2. Zielinski, T. J.; Allendoerfer, R. D. J. Chem. Educ. 1997, 74, 1001.
    3. Schwenz, R. W.; Polik, W. F. J. Chem. Educ. 1999, 76, 1302.
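    The worksheet workflow — a Levenberg-Marquardt fit, parameter standard deviations from the covariance matrix, and a t-test on each parameter — can be sketched outside Mathcad as well; the exponential model and data below are example stand-ins:

    ```python
    # Sketch: Levenberg-Marquardt nonlinear fit, parameter standard
    # deviations from the covariance matrix, and per-parameter t-tests.
    # Model and data are example stand-ins, not the worksheet's.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import t as t_dist

    def model(x, a, k, c):
        return a * np.exp(-k * x) + c

    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 25)
    y = model(x, 2.0, 0.7, 0.5) + rng.normal(0, 0.03, x.size)

    popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0, 0.0], method="lm")
    perr = np.sqrt(np.diag(pcov))          # parameter standard deviations
    dof = x.size - len(popt)
    t_stat = popt / perr                   # H0: parameter = 0
    p_val = 2 * t_dist.sf(np.abs(t_stat), dof)
    for name, p, e, pv in zip("akc", popt, perr, p_val):
        print(f"{name} = {p:.4f} +/- {e:.4f} (p = {pv:.2g})")
    ```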

  6. Modeling Phase-Aligned Gamma-Ray and Radio Millisecond Pulsar Light Curves

    NASA Technical Reports Server (NTRS)

    Venter, C.; Johnson, T.; Harding, A.

    2012-01-01

    Since the discovery of the first eight gamma-ray millisecond pulsars (MSPs) by the Fermi Large Area Telescope, this population has been steadily expanding. Four of the more recent detections, PSR J0034-0534, PSR J1939+2134 (B1937+21; the first MSP ever discovered), PSR J1959+2048 (B1957+20; the first discovery of a black widow system), and PSR J2214+3000, exhibit a phenomenon not present in the original discoveries: nearly phase-aligned radio and gamma-ray light curves (LCs). To account for the phase alignment, we explore models where both the radio and gamma-ray emission originate either in the outer magnetosphere near the light cylinder or near the polar caps. Using a Markov Chain Monte Carlo technique to search for best-fit model parameters, we obtain reasonable LC fits for the first three of these MSPs in the context of altitude-limited outer gap (alOG) and two-pole caustic (alTPC) geometries (for both gamma-ray and radio emission). These models differ from the standard outer gap (OG)/two-pole caustic (TPC) models in two respects: the radio emission originates in caustics at relatively high altitudes compared to the usual conal radio beams, and we allow both the minimum and maximum altitudes of the gamma-ray and radio emission regions to vary within a limited range (excluding the minimum gamma-ray altitude of the alTPC model, which is kept constant at the stellar radius, and that of the alOG model, which is set to the position-dependent null charge surface altitude). Alternatively, phase-aligned solutions also exist for emission originating near the stellar surface in a slot gap scenario (low-altitude slot gap (laSG) models). We find that the alTPC models provide slightly better LC fits than the alOG models, and both of these give better fits than the laSG models (for the limited range of parameters considered in the case of the laSG models). Thus, our fits imply that the phase-aligned LCs are likely of caustic origin, produced in the outer magnetosphere, and that the radio emission for these pulsars may come from close to the light cylinder. In addition, we were able to constrain the minimum and maximum emission altitudes with typical uncertainties of 30% of the light cylinder radius. Our results therefore describe a third gamma-ray MSP subclass, in addition to the two previously found by Venter et al.: those with LCs fit by standard OG/TPC models and those with LCs fit by pair-starved polar cap models.

  7. Analysis of variation in calibration curves for Kodak XV radiographic film using model-based parameters.

    PubMed

    Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L

    2010-08-05

    Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1 σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes. It increases with increasing depths above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
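
    A minimal sketch of fitting the single-target single-hit response model described above, assuming the net optical density rises exponentially to saturation with dose; the dose-OD values below are hypothetical, and the three parameters mirror the abstract's background, saturation and slope.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def single_hit(dose, background, saturation, slope):
        """Single-target single-hit response: OD rises exponentially to saturation."""
        return background + saturation * (1.0 - np.exp(-slope * dose))

    dose = np.array([16, 32, 48, 64, 96, 128], float)        # cGy
    od = np.array([0.24, 0.37, 0.48, 0.58, 0.74, 0.87])      # hypothetical optical densities
    popt, pcov = curve_fit(single_hit, dose, od, p0=(0.1, 1.5, 0.01))
    perr = np.sqrt(np.diag(pcov))                            # 1-sigma parameter errors
    print(dict(zip(("background", "saturation", "slope"), popt)), perr)
    ```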

  8. An enhanced sine dwell method as applied to the Galileo core structure modal survey

    NASA Technical Reports Server (NTRS)

    Smith, Kenneth S.; Trubert, Marc

    1990-01-01

    An incremental modal survey performed in 1988 on the core structure of the Galileo spacecraft with its adapters with the purpose of assessing the dynamics of the new portions of the structure is considered. Emphasis is placed on the enhancements of the sine dwell method employed in the test. For each mode, response data is acquired at 32 frequencies in a narrow band enclosing the resonance, utilizing the SWIFT technique. It is pointed out that due to the simplicity of the data processing involved, the diagnostic and modal-parameter data is available within several minutes after data acquisition; however, compared with straight curve-fitting approaches, the method requires more time for data acquisition.

  9. Flow curve analysis of a Pickering emulsion-polymerized PEDOT:PSS/PS-based electrorheological fluid

    NASA Astrophysics Data System (ADS)

    Kim, So Hee; Choi, Hyoung Jin; Leong, Yee-Kwong

    2017-11-01

    The steady shear electrorheological (ER) response of poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate)/polystyrene (PEDOT:PSS/PS) composite particles, which were initially fabricated by Pickering emulsion polymerization, was tested with a 10 vol% ER fluid dispersed in silicone oil. The model-independent shear rate and yield stress obtained from the raw torque-rotational speed data, acquired using a Couette-type rotational rheometer under an applied electric field strength, were then analyzed by Tikhonov regularization, a technique well suited to solving this ill-posed inverse problem. The shear stress-shear rate data were also fitted well by the Bingham fluid model.
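
    Tikhonov regularization itself is generic; the sketch below shows the basic regularized least-squares step on a synthetic ill-posed deconvolution problem (not the actual Couette torque-to-shear-rate inversion), assuming the simplest identity-matrix penalty.

    ```python
    import numpy as np

    def tikhonov_solve(A, b, lam):
        """Solve min ||A x - b||^2 + lam^2 ||x||^2 via regularized normal equations."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

    # Ill-posed toy problem: a smoothing kernel blurs a "true" distribution.
    n = 50
    x_true = np.exp(-0.5 * ((np.linspace(0, 1, n) - 0.4) / 0.1) ** 2)
    i = np.arange(n)
    A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 4.0) ** 2)  # near-singular operator
    b = A @ x_true + 1e-3 * np.random.default_rng(0).standard_normal(n)

    x_naive = np.linalg.solve(A, b)          # direct inversion amplifies the noise wildly
    x_reg = tikhonov_solve(A, b, lam=0.1)    # regularized solution stays close to x_true
    print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
    ```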

  10. SEEK: A FORTRAN optimization program using a feasible directions gradient search

    NASA Technical Reports Server (NTRS)

    Savage, M.

    1995-01-01

    This report describes the use of computer program 'SEEK' which works in conjunction with two user-written subroutines and an input data file to perform an optimization procedure on a user's problem. The optimization method uses a modified feasible directions gradient technique. SEEK is written in ANSI standard Fortran 77, has an object size of about 46K bytes, and can be used on a personal computer running DOS. This report describes the use of the program and discusses the optimizing method. The program use is illustrated with four example problems: a bushing design, a helical coil spring design, a gear mesh design, and a two-parameter Weibull life-reliability curve fit.
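
    As a sketch of the last example problem (a two-parameter Weibull life-reliability fit, in Python rather than SEEK's Fortran), median-rank regression linearizes the Weibull CDF so that ordinary least squares recovers the shape and scale parameters; the failure times are hypothetical.

    ```python
    import numpy as np

    def weibull_fit(failure_times):
        """Two-parameter Weibull fit by median-rank regression.
        Linearize F(t) = 1 - exp(-(t/eta)^beta) as
        ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta)."""
        t = np.sort(np.asarray(failure_times, float))
        n = t.size
        ranks = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Bernard's median ranks
        y = np.log(-np.log(1.0 - ranks))
        beta, intercept = np.polyfit(np.log(t), y, 1)
        eta = np.exp(-intercept / beta)
        return beta, eta

    hours = [410, 730, 950, 1100, 1300, 1650, 2000]       # hypothetical component lives
    beta, eta = weibull_fit(hours)
    print(f"shape beta = {beta:.2f}, scale eta = {eta:.0f} h")
    ```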

  11. Correlation Study of PVDF Membrane Morphology with Protein Adsorption: Quantitative Analysis by FTIR/ATR Technique

    NASA Astrophysics Data System (ADS)

    Ideris, N.; Ahmad, A. L.; Ooi, B. S.; Low, S. C.

    2018-05-01

    Microporous PVDF membranes were used as protein-capture matrices in immunoassays. Because the most common labels in immunoassays are detected based on a colour change, an understanding of how protein concentration varies on different PVDF surfaces was needed. Herein, the correlation between membrane pore size and protein adsorption was systematically investigated. Five different PVDF membrane morphologies were prepared, and FTIR/ATR was employed to accurately quantify the surface protein concentration on membranes with small pore sizes. SigmaPlot® was used to find a suitable curve fit relating protein adsorption to membrane pore size, with a high correlation coefficient, R², of 0.9971.

  12. Anthropometric data error detecting and correction with a computer

    NASA Technical Reports Server (NTRS)

    Chesak, D. D.

    1981-01-01

    Data obtained with automated anthropometric data acquisition equipment were examined for short-term errors. The least-squares curve fitting technique was used to ascertain which data values were erroneous and to replace them, if possible, with corrected values. Errors were due to random reflections of light, masking of the light rays, and other types of optical and electrical interference. It was found that these error signals were impossible to eliminate from the initial data produced by the television cameras, and that removing them was primarily a software problem requiring a digital computer to refine the data off line. The specific data of interest were related to the arm reach envelope of a human being.
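
    A minimal sketch of the least-squares screening idea described above: fit a smooth curve, flag samples whose residuals are implausibly large, and substitute the fitted values. The polynomial degree, 3-sigma threshold, and data are assumptions, not the original software.

    ```python
    import numpy as np

    def clean_series(t, y, degree=3, nsigma=3.0):
        """Flag points whose residual from a least-squares polynomial fit exceeds
        nsigma standard deviations, and replace them with the fitted values."""
        coeffs = np.polyfit(t, y, degree)
        fit = np.polyval(coeffs, t)
        resid = y - fit
        bad = np.abs(resid) > nsigma * resid.std()
        return np.where(bad, fit, y), bad

    t = np.linspace(0, 10, 100)
    y = 0.5 * t ** 2 - t + np.random.default_rng(2).normal(0, 0.5, t.size)
    y[[20, 55]] += 25.0                       # simulated optical glitches
    y_clean, bad = clean_series(t, y)
    print("corrected samples:", np.flatnonzero(bad))
    ```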

  13. Estimation of parameters of dose volume models and their confidence limits

    NASA Astrophysics Data System (ADS)

    van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.

    2003-07-01

    Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work, frequently used methods and techniques for fitting NTCP models to dose-response data to establish dose-volume effects are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters a primary dataset was generated, serving as the reference for this study and describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spreading in the data is obtained and has been compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using three methods: employing the covariance matrix, the jackknife method, and directly from the likelihood landscape. These results were compared with the spread of the parameters obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the meaning of the width of a bundle of curves that resulted from parameters that were within the one standard deviation region in the likelihood space was investigated. Thirdly, many parameter sets and their likelihood were used to create a likelihood-weighted probability distribution of the NTCP. It is concluded that for the type of dose response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the usage of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
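
    The Monte Carlo resampling idea in this study can be sketched generically as a parametric bootstrap: refit many synthetic datasets generated from the best fit and read confidence intervals off the spread of the refitted parameters. The sigmoid below is a stand-in for an NTCP model and all values are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(dose, d50, k):
        """Simple dose-response (NTCP-like) sigmoid."""
        return 1.0 / (1.0 + np.exp(-k * (dose - d50)))

    rng = np.random.default_rng(3)
    dose = np.linspace(20, 80, 15)
    resp = sigmoid(dose, 50.0, 0.2) + rng.normal(0, 0.05, dose.size)

    # Parametric bootstrap: refit resampled datasets, take percentiles of the spread.
    popt, _ = curve_fit(sigmoid, dose, resp, p0=(50.0, 0.2))
    resid = resp - sigmoid(dose, *popt)
    fits = []
    for _ in range(1000):
        fake = sigmoid(dose, *popt) + rng.choice(resid, resid.size, replace=True)
        fits.append(curve_fit(sigmoid, dose, fake, p0=popt)[0])
    lo, hi = np.percentile(fits, [2.5, 97.5], axis=0)
    print("95% CI d50:", lo[0], hi[0], " k:", lo[1], hi[1])
    ```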

  14. The effect of dimethylsulfoxide on the water transport response of rat hepatocytes during freezing.

    PubMed

    Smith, D J; Schulte, M; Bischof, J C

    1998-10-01

    Successful improvement of cryopreservation protocols for cells in suspension requires knowledge of how such cells respond to the biophysical stresses of freezing (intracellular ice formation, water transport) while in the presence of a cryoprotective agent (CPA). This work investigates the biophysical water transport response in a clinically important cell type--isolated hepatocytes--during freezing in the presence of dimethylsulfoxide (DMSO). Sprague-Dawley rat liver hepatocytes were frozen in Williams E media supplemented with 0, 1, and 2 M DMSO, at rates of 5, 10, and 50 °C/min. The water transport was measured by cell volumetric changes as assessed by cryomicroscopy and image analysis. Assuming that water is the only species transported under these conditions, a water transport model of the form dV/dT = f(Lpg([CPA]), ELp([CPA]), T(t)) was curve-fit to the experimental data to obtain the biophysical parameters of water transport--the reference hydraulic permeability (Lpg) and activation energy of water transport (ELp)--for each DMSO concentration. These parameters were estimated two ways: (1) by curve-fitting the model to the average volume of the pooled cell data, and (2) by curve-fitting individual cell volume data and averaging the resulting parameters. The experimental data showed that less dehydration occurs during freezing at a given rate in the presence of DMSO at temperatures between 0 and -10 °C. However, dehydration was able to continue at lower temperatures (< -10 °C) in the presence of DMSO. The values of Lpg and ELp obtained using the individual cell volume data both decreased from their non-CPA values--4.33 × 10⁻¹³ m³/N-s (2.69 µm/min-atm) and 317 kJ/mol (75.9 kcal/mol), respectively--to 0.873 × 10⁻¹³ m³/N-s (0.542 µm/min-atm) and 137 kJ/mol (32.8 kcal/mol), respectively, in 1 M DMSO, and 0.715 × 10⁻¹³ m³/N-s (0.444 µm/min-atm) and 107 kJ/mol (25.7 kcal/mol), respectively, in 2 M DMSO. The trends in the pooled volume values for Lpg and ELp were very similar, but the overall fit was considered worse than for the individual volume parameters. A unique way of presenting the curve-fitting results supports a clear trend of reduction of both biophysical parameters in the presence of DMSO, and no clear trend in cooling rate dependence of the biophysical parameters. In addition, these results suggest that close proximity of the experimental cell volume data to the equilibrium volume curve may significantly reduce the efficiency of the curve-fitting process.

  15. Estimating the concrete compressive strength using hard clustering and fuzzy clustering based regression techniques.

    PubMed

    Nagwani, Naresh Kumar; Deo, Shirish V

    2014-01-01

    Understanding the compressive strength of concrete is important for activities such as construction arrangement, prestressing operations, proportioning new mixtures, and quality assurance. Regression techniques are the most widely used methods for prediction tasks, where the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of regression techniques for prediction can be improved if clustering is used along with regression, since clustering ensures more accurate curve fitting between the dependent and independent variables. In this work a cluster-regression technique is applied to estimating the compressive strength of concrete, and a novel state-of-the-art approach is proposed for predicting the concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression yields smaller prediction errors when estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics, and then in the second stage regression techniques are applied over these clusters (groups) to predict the compressive strength from individual clusters. It is found from experiments that clustering along with regression techniques gives minimum errors for predicting the compressive strength of concrete; also, the fuzzy clustering algorithm C-means performs better than the K-means algorithm.
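
    A minimal sketch of the two-stage cluster-then-regress technique, using K-means and linear regression from scikit-learn (the paper also uses fuzzy C-means, which is not shown); the mix features and strengths below are synthetic.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    def fit_cluster_regression(X, y, n_clusters=3, seed=0):
        """Stage 1: group similar mixes; stage 2: one regression per cluster."""
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
        models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
                  for c in range(n_clusters)}
        return km, models

    def predict_strength(km, models, X):
        """Route each sample to its cluster's regression model."""
        labels = km.predict(X)
        return np.array([models[c].predict(x.reshape(1, -1))[0]
                         for c, x in zip(labels, X)])

    # Hypothetical mix features (cement, water, age) vs. compressive strength.
    rng = np.random.default_rng(4)
    X = rng.uniform([200, 140, 7], [450, 220, 90], size=(120, 3))
    y = 0.12 * X[:, 0] - 0.2 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 2, 120)
    km, models = fit_cluster_regression(X, y)
    print(predict_strength(km, models, X[:5]))
    ```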

  17. Comparison between two scalar field models using rotation curves of spiral galaxies

    NASA Astrophysics Data System (ADS)

    Fernández-Hernández, Lizbeth M.; Rodríguez-Meza, Mario A.; Matos, Tonatiuh

    2018-04-01

    Scalar fields have been used as candidates for dark matter in the universe, from axions with masses ∼ 10⁻⁵ eV to ultra-light scalar fields with masses ∼ 10⁻²² eV. Axions behave as cold dark matter, while in the ultra-light scalar field model galaxies are Bose-Einstein condensate drops. The ultra-light scalar field is also called the scalar field dark matter model. In this work we study rotation curves of low surface brightness spiral galaxies using two scalar field models: the Gross-Pitaevskii Bose-Einstein condensate in the Thomas-Fermi approximation, and a scalar field solution of the Klein-Gordon equation. We also used the zero-disk approximation galaxy model, where photometric data are not considered and only the scalar field dark matter contribution to the rotation curve is taken into account. From the best-fitting analysis of the galaxy catalog we use, we found the range of values of the fitting parameters: the length scale and the central density. The worst fitting results (values of the reduced χ² much greater than 1, on average) were for the Thomas-Fermi models, i.e., the scalar field dark matter model fits the rotation curves of the analysed galaxies better than the Thomas-Fermi approximation model. To complete our analysis we compute from the fitting parameters the mass of the scalar field models and two astrophysical quantities of interest: the dynamical dark matter mass within 300 pc and the characteristic central surface density of the dark matter models. We found that the central mass within 300 pc is in agreement with previously reported results, this mass being ≈ 10⁷ M⊙ independent of the dark matter model; on the contrary, the value of the characteristic central surface density does depend on the dark matter model.

  18. Protofit: A program for determining surface protonation constants from titration data

    NASA Astrophysics Data System (ADS)

    Turner, Benjamin F.; Fein, Jeremy B.

    2006-11-01

    Determining the surface protonation behavior of natural adsorbents is essential to understand how they interact with their environments. ProtoFit is a tool for analysis of acid-base titration data and optimization of surface protonation models. The program offers a number of useful features including: (1) enables visualization of adsorbent buffering behavior; (2) uses an optimization approach independent of starting titration conditions or initial surface charge; (3) does not require an initial surface charge to be defined or to be treated as an optimizable parameter; (4) includes an error analysis intrinsically as part of the computational methods; and (5) generates simulated titration curves for comparison with observation. ProtoFit will typically be run through ProtoFit-GUI, a graphical user interface providing user-friendly control of model optimization, simulation, and data visualization. ProtoFit calculates an adsorbent proton buffering value as a function of pH from raw titration data (including pH and volume of acid or base added). The data is reduced to a form where the protons required to change the pH of the solution are subtracted out, leaving protons exchanged between solution and surface per unit mass of adsorbent as a function of pH. The buffering intensity function Qads* is calculated as the instantaneous slope of this reduced titration curve. Parameters for a surface complexation model are obtained by minimizing the sum of squares between the modeled (i.e. simulated) buffering intensity curve and the experimental data. The variance in the slope estimate, intrinsically produced as part of the Qads* calculation, can be used to weight the sum of squares calculation between the measured buffering intensity and a simulated curve. Effects of analytical error on data visualization and model optimization are discussed. Examples are provided of using ProtoFit for data visualization, model optimization, and model evaluation.
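
    The buffering intensity Qads* is described above as the instantaneous slope of the reduced titration curve; a minimal sketch, assuming the reduced data (protons exchanged per unit mass vs. pH) are already computed, is a numerical derivative. The curve shape below is hypothetical.

    ```python
    import numpy as np

    def buffering_intensity(pH, q_ads):
        """Instantaneous slope of the reduced titration curve
        (protons exchanged per unit mass vs. pH), via central differences."""
        return np.gradient(q_ads, pH)

    # Hypothetical reduced titration data for an adsorbent.
    pH = np.linspace(3, 10, 40)
    q_ads = -0.8 * np.tanh(pH - 6.5)          # mol/kg, synthetic proton-exchange curve
    Q_star = buffering_intensity(pH, q_ads)
    print("peak buffering near pH", pH[np.argmin(Q_star)])
    ```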

  19. Maximum safe speed estimation using planar quintic Bezier curve with C2 continuity

    NASA Astrophysics Data System (ADS)

    Ibrahim, Mohamad Fakharuddin; Misro, Md Yushalify; Ramli, Ahmad; Ali, Jamaludin Md

    2017-08-01

    This paper describes an alternative way of estimating the design speed, i.e. the maximum speed at which a vehicle can be driven safely on a road, using curvature information from Bezier curve fitting on a map. We tested the method on a route along Tun Sardon Road, Balik Pulau, Penang, Malaysia. We propose using piecewise planar quintic Bezier curves that satisfy curvature continuity between joined curves in the process of mapping the road. By finding the derivatives of the quintic Bezier curve, the curvature was calculated and the design speed was derived. In this paper, a higher-order Bezier curve is used: a higher-degree curve gives users more freedom to control the shape of the curve compared with a lower-degree curve.
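
    A sketch of the curvature-to-speed step (the C2 joining of piecewise segments is omitted): evaluate a quintic Bezier curve, estimate κ = |x'y'' - y'x''| / (x'² + y'²)^(3/2) by finite differences, and bound the speed by v = sqrt(a_lat/κ). The control points and lateral-acceleration limit are hypothetical.

    ```python
    import numpy as np
    from math import comb

    def bezier(ctrl, t):
        """Evaluate a degree-n Bezier curve at parameters t (Bernstein form)."""
        n = len(ctrl) - 1
        t = np.asarray(t)[:, None]
        basis = [comb(n, i) * t ** i * (1 - t) ** (n - i) for i in range(n + 1)]
        return sum(b * p for b, p in zip(basis, np.asarray(ctrl, float)))

    def curvature(ctrl, t, h=1e-4):
        """Planar curvature from first and second finite-difference derivatives."""
        p_m, p0, p_p = bezier(ctrl, t - h), bezier(ctrl, t), bezier(ctrl, t + h)
        d1 = (p_p - p_m) / (2 * h)
        d2 = (p_p - 2 * p0 + p_m) / h ** 2
        num = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
        den = (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5
        return num / den

    # Hypothetical quintic (degree-5) control polygon for one road segment, in metres.
    ctrl = [(0, 0), (40, 5), (80, 30), (120, 60), (160, 70), (200, 72)]
    t = np.linspace(0.01, 0.99, 99)
    kappa = curvature(ctrl, t)
    a_lat = 3.0                                    # assumed lateral acceleration, m/s^2
    v_max = np.sqrt(a_lat / kappa.max()) * 3.6     # km/h at the tightest point
    print(f"max curvature {kappa.max():.4f} 1/m -> design speed {v_max:.0f} km/h")
    ```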

  20. Craniofacial Reconstruction Using Rational Cubic Ball Curves

    PubMed Central

    Majeed, Abdul; Mt Piah, Abd Rahni; Gobithaasan, R. U.; Yahya, Zainor Ridzuan

    2015-01-01

    This paper proposes the reconstruction of craniofacial fractures using rational cubic Ball curves. The Ball curve is chosen for its computational efficiency relative to the Bezier curve. The main steps are conversion of Digital Imaging and Communications in Medicine (Dicom) images to binary images, boundary extraction and corner point detection, Ball curve fitting with a genetic algorithm, and conversion of the final solution back to Dicom format. The last section illustrates a real case of craniofacial reconstruction using the proposed method, which clearly indicates its applicability. A Graphical User Interface (GUI) has also been developed for practical application. PMID:25880632

  1. Effect of boundary representation on viscous, separated flows in a discontinuous-Galerkin Navier-Stokes solver

    NASA Astrophysics Data System (ADS)

    Nelson, Daniel A.; Jacobs, Gustaaf B.; Kopriva, David A.

    2016-08-01

    The effect of curved-boundary representation on the physics of the separated flow over a NACA 65(1)-412 airfoil is thoroughly investigated. A method is presented to approximate curved boundaries with a high-order discontinuous-Galerkin spectral element method for the solution of the Navier-Stokes equations. Multiblock quadrilateral element meshes are constructed with the grid generation software GridPro. The boundary of a NACA 65(1)-412 airfoil, defined by a cubic natural spline, is piecewise-approximated by isoparametric polynomial interpolants that represent the edges of boundary-fitted elements. Direct numerical simulation of the airfoil is performed on a coarse mesh and fine mesh with polynomial orders ranging from four to twelve. The accuracy of the curve fitting is investigated by comparing the flows computed on curved-sided meshes with those given by straight-sided meshes. Straight-sided meshes yield irregular wakes, whereas curved-sided meshes produce a regular Karman street wake. Straight-sided meshes also produce lower lift and higher viscous drag as compared with curved-sided meshes. When the mesh is refined by reducing the sizes of the elements, the lift decrease and viscous drag increase are less pronounced. The differences in the aerodynamic performance between the straight-sided meshes and the curved-sided meshes are concluded to be the result of artificial surface roughness introduced by the piecewise-linear boundary approximation provided by the straight-sided meshes.

  2. Early-Time Observations of the GRB 050319 Optical Transient

    NASA Astrophysics Data System (ADS)

    Quimby, R. M.; Rykoff, E. S.; Yost, S. A.; Aharonian, F.; Akerlof, C. W.; Alatalo, K.; Ashley, M. C. B.; Göǧüş, E.; Güver, T.; Horns, D.; Kehoe, R. L.; Kızıloǧlu, Ü.; Mckay, T. A.; Özel, M.; Phillips, A.; Schaefer, B. E.; Smith, D. A.; Swan, H. F.; Vestrand, W. T.; Wheeler, J. C.; Wren, J.

    2006-03-01

    We present the unfiltered ROTSE-III light curve of the optical transient associated with GRB 050319, beginning 4 s after the cessation of γ-ray activity. We fit a power-law function to the data using the revised trigger time given by Chincarini and coworkers, and a smoothly broken power law to the data using the original trigger disseminated through the GCN notices. Including the RAPTOR data from Woźniak and coworkers, the best-fit power-law indices are α = -0.854 ± 0.014 for the single power law and α1 = -0.364 (+0.020/-0.019), α2 = -0.881 (+0.030/-0.031), with a break at tb = 418 (+31/-30) s for the smoothly broken fit. We discuss the fit results, with emphasis placed on the importance of knowing the true start time of the optical transient for this multipeaked burst. As Swift continues to provide prompt GRB locations, it becomes more important to answer the question, "when does the afterglow begin?", in order to correctly interpret the light curves.
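
    The smoothly broken power law can be written in the usual Beuermann-type form; the sketch below fits hypothetical afterglow fluxes with the smoothness parameter fixed at s = 2 (the paper's exact parameterization may differ).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def broken_pl(t, norm, a1, a2, tb, s=2.0):
        """Smoothly broken power law: temporal index a1 before the break tb,
        a2 after it, with s controlling the sharpness of the transition."""
        return norm * ((t / tb) ** (-s * a1) + (t / tb) ** (-s * a2)) ** (-1.0 / s)

    # Hypothetical afterglow fluxes vs. seconds since trigger.
    rng = np.random.default_rng(5)
    t = np.geomspace(10, 5000, 40)
    flux = broken_pl(t, 1.0, -0.36, -0.88, 400.0) * rng.lognormal(0, 0.05, t.size)
    popt, _ = curve_fit(broken_pl, t, flux, p0=(1.0, -0.3, -1.0, 300.0))
    print("alpha1=%.3f alpha2=%.3f t_break=%.0f s" % (popt[1], popt[2], popt[3]))
    ```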

  3. Investigation of the Failure Modes in a Metal Matrix Composite under Thermal Cycling

    DTIC Science & Technology

    1989-12-01

    Report excerpt (figure and section listing): Material Characteristics; Sectioning and SEM Photographs; Residual Stress Analysis; Specimen Fitted with Strain Gages; Modulus and Poisson's Ratio versus Thermal Cycles; Stress/Strain Curve for Uncycled Specimen; Stress/Strain Curve for Specimen 8 (5250 Cycles); Comparison of Uncycled to Cycled Stress/Strain Curves.

  4. SU-F-BRD-16: Relative Biological Effectiveness of Double-Strand Break Induction for Modeling Cell Survival in Pristine Proton Beams of Different Dose-Averaged Linear Energy Transfers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeler, C; Bronk, L; UT Graduate School of Biomedical Sciences at Houston, Houston, TX

    2015-06-15

    Purpose: High throughput in vitro experiments assessing cell survival following proton radiation indicate that both the alpha and the beta parameters of the linear quadratic model increase with increasing proton linear energy transfer (LET). We investigated the relative biological effectiveness (RBE) of double-strand break (DSB) induction as a means of explaining the experimental results. Methods: Experiments were performed with two lung cancer cell lines and a range of proton LET values (0.94 - 19.4 keV/µm) using an experimental apparatus designed to irradiate cells in a 96 well plate such that each column encounters protons of different dose-averaged LET (LETd). Traditional linear quadratic survival curve fitting was performed, and alpha, beta, and RBE values obtained. Survival curves were also fit with a model incorporating RBE of DSB induction as the sole fit parameter. Fitted values of the RBE of DSB induction were then compared to values obtained using Monte Carlo Damage Simulation (MCDS) software and energy spectra calculated with Geant4. Other parameters including alpha, beta, and number of DSBs were compared to those obtained from traditional fitting. Results: Survival curve fitting with RBE of DSB induction yielded alpha and beta parameters that increase with proton LETd, which follows from the standard method of fitting; however, relying on a single fit parameter provided more consistent trends. The fitted values of RBE of DSB induction increased beyond what is predicted from MCDS data above proton LETd of approximately 10 keV/µm. Conclusion: In order to accurately model in vitro proton irradiation experiments performed with high throughput methods, the RBE of DSB induction must increase more rapidly than predicted by MCDS above LETd of 10 keV/µm. This can be explained by considering the increased complexity of DSBs or the nature of intra-track pairwise DSB interactions in this range of LETd values. NIH Grant 2U19CA021239-35.
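
    The traditional linear-quadratic survival curve fitting mentioned above can be sketched as a linear least-squares problem, since ln S = -(αD + βD²) is linear in α and β; the dose-survival values below are hypothetical.

    ```python
    import numpy as np

    def fit_lq(dose, surviving_fraction):
        """Linear-quadratic fit: ln S = -(alpha*D + beta*D^2).
        Linear least squares in the two columns D and D^2 (no intercept)."""
        D = np.asarray(dose, float)
        y = -np.log(surviving_fraction)
        A = np.column_stack([D, D ** 2])
        (alpha, beta), *_ = np.linalg.lstsq(A, y, rcond=None)
        return alpha, beta

    dose = np.array([0.5, 1, 2, 4, 6, 8])             # Gy
    sf = np.exp(-(0.18 * dose + 0.045 * dose ** 2))   # hypothetical clonogenic data
    sf *= np.random.default_rng(6).lognormal(0, 0.03, dose.size)
    alpha, beta = fit_lq(dose, sf)
    print(f"alpha={alpha:.3f} /Gy, beta={beta:.4f} /Gy^2, a/b={alpha/beta:.1f} Gy")
    ```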

  5. Rotation curve for the Milky Way galaxy in conformal gravity

    NASA Astrophysics Data System (ADS)

    O'Brien, James G.; Moss, Robert J.

    2015-05-01

    Galactic rotation curves have proven to be the testing ground for dark matter bounds in galaxies, and our own Milky Way is one of many large spiral galaxies that must follow the same models. Over the last decade, the rotation of the Milky Way galaxy has been studied and extended by many authors. Since conformal gravity has now successfully fit the rotation curves of almost 140 galaxies, we present here the fit to our own Milky Way. However, the Milky Way is not just an ordinary galaxy to append to our list; instead, it provides a robust test of a fundamental difference between conformal gravity rotation curves and standard cold dark matter models. It was shown by Mannheim and O'Brien that in conformal gravity, the presence of a quadratic potential causes the rotation curve to eventually fall off after its flat portion. This effect can currently be seen in only a select few galaxies whose rotation curves are studied well beyond a few multiples of the optical galactic scale length. Thanks to the recent work of Sofue et al. and Kundu et al., the rotation curve of the Milky Way has now been studied to a degree where we can test the predicted falloff in the conformal gravity rotation curve. We find that, like the other galaxies already studied in conformal gravity, the fit agrees closely with the rotational data, and the prediction includes the eventual falloff at large distances from the galactic center.

  6. Radio emission of SN1993J: the complete picture. II. Simultaneous fit of expansion and radio light curves

    NASA Astrophysics Data System (ADS)

    Martí-Vidal, I.; Marcaide, J. M.; Alberdi, A.; Guirado, J. C.; Pérez-Torres, M. A.; Ros, E.

    2011-02-01

    We report on a simultaneous modelling of the expansion and radio light curves of the supernova SN1993J. We developed a simulation code capable of generating synthetic expansion and radio light curves of supernovae by taking into consideration the evolution of the expanding shock, magnetic fields, and relativistic electrons, as well as the finite sensitivity of the interferometric arrays used in the observations. Our software successfully fits all the available radio data of SN 1993J with a standard emission model for supernovae, which is extended with some physical considerations, such as an evolution in the opacity of the ejecta material, a radial decline in the magnetic fields within the radiating region, and a changing radial density profile for the circumstellar medium starting from day 3100 after the explosion.

  7. POSSIBLE TRANSIT TIMING VARIATIONS OF THE TrES-3 PLANETARY SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Ing-Guey; Wu, Yu-Ting; Chien, Ping

    2013-03-15

    Five newly observed transit light curves of the TrES-3 planetary system are presented. Together with other light-curve data from the literature, 23 transit light curves in total, covering an overall timescale of 911 epochs, have been analyzed through a standard procedure. From these observational data, the system's orbital parameters are determined and possible transit timing variations (TTVs) are investigated. Given that a null TTV produces a fit with reduced χ² = 1.52, our results agree with previous work that TTVs might not exist in these data. However, a one-frequency oscillating TTV model, giving a fit with reduced χ² = 0.93, does possess a statistically higher probability. It is thus concluded that future observations and dynamical simulations for this planetary system will be very important.

  8. Validity of the Male Depression Risk Scale in a representative Canadian sample: sensitivity and specificity in identifying men with recent suicide attempt.

    PubMed

    Rice, Simon M; Ogrodniczuk, John S; Kealy, David; Seidler, Zac E; Dhillon, Haryana M; Oliffe, John L

    2017-12-22

    Clinical practice and literature have supported the existence of a phenotypic sub-type of depression in men. While a number of self-report rating scales have been developed in order to empirically test the male depression construct, psychometric validation of these scales is limited. To confirm the psychometric properties of the multidimensional Male Depression Risk Scale (MDRS-22) and to develop clinical cut-off scores for the MDRS-22. Data were obtained from an online sample of 1000 Canadian men (mean age (M) = 49.63, standard deviation (SD) = 14.60). Confirmatory factor analysis (CFA) was used to replicate the established six-factor model of the MDRS-22. Psychometric values of the MDRS subscales were comparable to those of the widely used Patient Health Questionnaire-9. CFA model fit indices indicated adequate model fit for the six-factor MDRS-22 model. ROC curve analysis indicated the MDRS-22 was effective for identifying those with a recent (previous four weeks) suicide attempt (area under the curve (AUC) = 0.837). The MDRS-22 cut-off identified proportionally more (84.62%) cases of recent suicide attempt relative to the PHQ-9 moderate range (53.85%). The MDRS-22 is the first male-sensitive depression scale to be psychometrically validated using CFA techniques in independent and cross-nation samples. Additional studies should identify differential item functioning and evaluate cross-cultural effects.
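
    The ROC analysis used to derive such a cut-off can be sketched with scikit-learn; the scores below are synthetic, and Youden's J is one common cut-off criterion (the paper's exact criterion is not stated in the abstract).

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(7)
    # Hypothetical scale scores: 60 non-cases vs. 13 recent-attempt cases.
    scores = np.concatenate([rng.normal(14, 6, 60), rng.normal(30, 8, 13)])
    status = np.concatenate([np.zeros(60), np.ones(13)])

    auc = roc_auc_score(status, scores)
    fpr, tpr, thresholds = roc_curve(status, scores)
    j = np.argmax(tpr - fpr)              # Youden's J picks the operating point
    print(f"AUC={auc:.3f}, cut-off={thresholds[j]:.1f}, "
          f"sensitivity={tpr[j]:.2f}, specificity={1 - fpr[j]:.2f}")
    ```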

  9. Radial dependence of the dark matter distribution in M33

    NASA Astrophysics Data System (ADS)

    López Fune, E.; Salucci, P.; Corbelli, E.

    2017-06-01

    The stellar and gaseous mass distributions, as well as the extended rotation curve, in the nearby galaxy M33 are used to derive the radial distribution of dark matter density in the halo and to test cosmological models of galaxy formation and evolution. Two methods are examined to constrain the dark mass density profiles. The first method deals directly with fitting the rotation curve data in the range of galactocentric distances 0.24 ≤ r ≤ 22.72 kpc. Using the results of collisionless Λ cold dark matter numerical simulations, we confirm that the Navarro-Frenk-White (NFW) dark matter profile provides a better fit to the rotation curve data than the cored Burkert (BRK) profile. The second method relies on the local equation of centrifugal equilibrium and on the rotation curve slope. In the aforementioned range of distances, we fit the observed velocity profile, using a function that has a rational dependence on the radius, and we derive the slope of the rotation curve. Then, we infer the effective matter densities. In the radial range 9.53 ≤ r ≤ 22.72 kpc, the uncertainties induced by the luminous matter (stars and gas) become negligible, because the dark matter density dominates, and we can determine locally the radial distribution of dark matter. With this second method, we tested the NFW and BRK dark matter profiles and we can confirm that both profiles are compatible with the data, even though in this case the cored BRK density profile provides a more reasonable value for the baryonic-to-dark matter ratio.

  10. Data Validation in the Kepler Science Operations Center Pipeline

    NASA Technical Reports Server (NTRS)

    Wu, Hayley; Twicken, Joseph D.; Tenenbaum, Peter; Clarke, Bruce D.; Li, Jie; Quintana, Elisa V.; Allen, Christopher; Chandrasekaran, Hema; Jenkins, Jon M.; Caldwell, Douglas A.

    2010-01-01

    We present an overview of the Data Validation (DV) software component and its context within the Kepler Science Operations Center (SOC) pipeline and overall Kepler Science mission. The SOC pipeline performs a transiting planet search on the corrected light curves for over 150,000 targets across the focal plane array. We discuss the DV strategy for automated validation of Threshold Crossing Events (TCEs) generated in the transiting planet search. For each TCE, a transiting planet model is fitted to the target light curve. A multiple planet search is conducted by repeating the transiting planet search on the residual light curve after the model flux has been removed; if an additional detection occurs, a planet model is fitted to the new TCE. A suite of automated tests is performed after all planet candidates have been identified. We describe a centroid motion test to determine the significance of the motion of the target photocenter during transit and to estimate the coordinates of the transit source within the photometric aperture; a series of eclipsing binary discrimination tests on the parameters of the planet model fits to all transits and the sequences of odd and even transits; and a statistical bootstrap to assess the likelihood that the TCE would have been generated purely by chance given the target light curve with all transits removed. Keywords: photometry, data validation, Kepler, Earth-size planets

  11. The mass of the black hole in 1A 0620-00, revisiting the ellipsoidal light curve modelling

    NASA Astrophysics Data System (ADS)

    van Grunsven, Theo F. J.; Jonker, Peter G.; Verbunt, Frank W. M.; Robinson, Edward L.

    2017-12-01

    The mass distribution of stellar-mass black holes can provide important clues to supernova modelling, but observationally it is still ill constrained. Therefore, it is of importance to make black hole mass measurements as accurate as possible. The X-ray transient 1A 0620-00 is well studied, with a published black hole mass of 6.61 ± 0.25 M⊙, based on an orbital inclination i of 51.0° ± 0.9°. This was obtained by Cantrell et al. (2010) as an average of independent fits to V-, I- and H-band light curves. In this work, we perform an independent check on the value of i by re-analysing existing YALO/SMARTS V-, I- and H-band photometry, using different modelling software and fitting strategy. Performing a fit to the three light curves simultaneously, we obtain a value for i of 54.1° ± 1.1°, resulting in a black hole mass of 5.86 ± 0.24 M⊙. Applying the same model to the light curves individually, we obtain 58.2° ± 1.9°, 53.6° ± 1.6° and 50.5° ± 2.2° for V-, I- and H-band, respectively, where the differences in best-fitting i are caused by the contribution of the residual accretion disc light in the three different bands. We conclude that the mass determination of this black hole may still be subject to systematic effects exceeding the statistical uncertainty. Obtaining more accurate masses would be greatly helped by continuous phase-resolved spectroscopic observations simultaneous with photometry.

  12. Using a constrained formulation based on probability summation to fit receiver operating characteristic (ROC) curves

    NASA Astrophysics Data System (ADS)

    Swensson, Richard G.; King, Jill L.; Good, Walter F.; Gur, David

    2000-04-01

    A constrained ROC formulation from probability summation is proposed for measuring observer performance in detecting abnormal findings on medical images. This assumes the observer's detection or rating decision on each image is determined by a latent variable that characterizes the specific finding (type and location) considered most likely to be a target abnormality. For positive cases, this 'maximum-suspicion' variable is assumed to be either the value for the actual target or for the most suspicious non-target finding, whichever is the greater (more suspicious). Unlike the usual ROC formulation, this constrained formulation guarantees a 'well-behaved' ROC curve that always equals or exceeds chance-level decisions and cannot exhibit an upward 'hook.' Its estimated parameters specify the accuracy for separating positive from negative cases, and they also predict accuracy in locating or identifying the actual abnormal findings. The present maximum-likelihood procedure (runs on a PC with Windows 95 or NT) fits this constrained formulation to rating-ROC data using normal distributions with two free parameters. Fits of the conventional and constrained ROC formulations are compared for continuous and discrete-scale ratings of chest films in a variety of detection problems, both for localized lesions (nodules, rib fractures) and for diffuse abnormalities (interstitial disease, infiltrates or pneumothorax). The two fitted ROC curves are nearly identical unless the conventional ROC has an ill-behaved 'hook,' below the constrained ROC.

  13. GURU v2.0: An interactive Graphical User interface to fit rheometer curves in Han's model for rubber vulcanization

    NASA Astrophysics Data System (ADS)

    Milani, G.; Milani, F.

    A GUI software package (GURU) for experimental data fitting of rheometer curves of natural rubber (NR) vulcanized with sulphur at different curing temperatures is presented. Experimental data are automatically loaded into GURU from an Excel spreadsheet generated by the output of the experimental machine (a moving die rheometer). To fit the experimental data, the general reaction scheme proposed by Han and co-workers for NR vulcanized with sulphur is considered. From the simplified kinetic scheme adopted, a closed-form solution can be found for the crosslink density, with the only limitation that the induction period is excluded from the computations. Three kinetic constants must be determined in such a way as to minimize the absolute error between the normalized experimental data and the numerical prediction. Usually, this result is achieved by means of standard least-squares data fitting. By contrast, GURU works interactively by means of a Graphical User Interface (GUI) to minimize the error and allows an interactive calibration of the kinetic constants by means of sliders. A simple mouse click on the sliders assigns a value to each kinetic constant and gives a visual comparison between the numerical and experimental curves. Users thus find optimal values of the constants by means of a classic trial-and-error strategy. An experimental case of technical relevance is shown as a benchmark.

  14. Applying a Hypoxia-Incorporating TCP Model to Experimental Data on Rat Sarcoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruggieri, Ruggero, E-mail: ruggieri.ruggero@gmail.com; Stavreva, Nadejda; Naccarato, Stefania

    2012-08-01

    Purpose: To verify whether a tumor control probability (TCP) model which mechanistically incorporates acute and chronic hypoxia is able to describe animal in vivo dose-response data exhibiting tumor reoxygenation. Methods and Materials: The investigated TCP model accounts for tumor repopulation, reoxygenation of chronic hypoxia, and fluctuating oxygenation of acute hypoxia. Using the maximum likelihood method, the model is fitted to Fischer-Moulder data on Wag/Rij rats, inoculated with rat rhabdomyosarcoma BA1112, and irradiated in vivo using different fractionation schemes. This data set is chosen because two of the experimental dose-response curves exhibit an inverse dose behavior, which is interpreted as due to reoxygenation. The tested TCP model is complex, and therefore, in vivo cell survival data on the same BA1112 cell line from Reinhold were added to the Fischer-Moulder data and fitted simultaneously with a corresponding cell survival function. Results: The obtained fit to the combined Fischer-Moulder-Reinhold data was statistically acceptable. The best-fit values of the model parameters for which information exists were in the range of published values. The cell survival curves of well-oxygenated and hypoxic cells, computed using the best-fit values of the radiosensitivities and the initial number of clonogens, were in good agreement with the corresponding in vitro and in situ experiments of Reinhold. The best-fit values of most of the hypoxia-related parameters were used to recompute the TCP for non-small cell lung cancer patients as a function of the number of fractions, TCP(n). Conclusions: The investigated TCP model adequately describes animal in vivo data exhibiting tumor reoxygenation. The TCP(n) curve computed for non-small cell lung cancer patients with the best-fit values of most of the hypoxia-related parameters confirms the previously obtained abrupt reduction in TCP for n < 10, thus warning against the adoption of severely hypofractionated schedules.

  15. Mathematical and Statistical Software Index.

    DTIC Science & Technology

    1986-08-01

    Software index excerpt: HMEAN - harmonic mean; MEDIAN - median; MODE - mode; QUANT - quantiles; OGIVE - distribution curve; IQRNG - interpercentile range; RANGE - range. Keywords include multiphase pivoting algorithm, cross-classification, multiple discriminant analysis, cross-tabulation, multiple-objective model, and curve fitting. Listed programs include RANGEX (Correct Correlations for Curtailment of Range) and RUMMAGE II (Analysis ...).

  16. A Software Tool for the Rapid Analysis of the Sintering Behavior of Particulate Bodies

    DTIC Science & Technology

    2017-11-01

    ...bounded by a region that the user selects via cross hairs. Future plot-analysis features, such as more complicated curve-fitting and modeling functions, are planned. Cited reference: German RM. Grain growth behavior of tungsten heavy alloys based on the master sintering curve concept. Metallurgical and Materials Transactions A.

  17. COMPARING BEHAVIORAL DOSE-EFFECT CURVES FOR HUMANS AND LABORATORY ANIMALS ACUTELY EXPOSED TO TOLUENE.

    EPA Science Inventory

    The utility of laboratory animal data in toxicology depends upon the ability to generalize the results quantitatively to humans. To compare the acute behavioral effects of inhaled toluene in humans to those in animals, dose-effect curves were fitted by meta-analysis of published...

  18. Annual variation in the atmospheric radon concentration in Japan.

    PubMed

    Kobayashi, Yuka; Yasuoka, Yumi; Omori, Yasutaka; Nagahama, Hiroyuki; Sanada, Tetsuya; Muto, Jun; Suzuki, Toshiyuki; Homma, Yoshimi; Ihara, Hayato; Kubota, Kazuhito; Mukai, Takahiro

    2015-08-01

    Anomalous atmospheric variations in radon related to earthquakes have been observed in hourly exhaust-monitoring data from radioisotope institutes in Japan. The extraction of seismic anomalous radon variations would be greatly aided by understanding the normal pattern of variation in radon concentrations. Using atmospheric daily minimum radon concentration data from five sampling sites, we show that a sinusoidal regression curve can be fitted to the data. In addition, we identify areas where the atmospheric radon variation is significantly affected by the variation in atmospheric turbulence and the onshore-offshore pattern of Asian monsoons. Furthermore, by comparing the sinusoidal regression curve for the normal annual (seasonal) variations at the five sites to the sinusoidal regression curve for a previously published dataset of radon values at the five Japanese prefectures, we can estimate the normal annual variation pattern. By fitting sinusoidal regression curves to the previously published dataset containing sites in all Japanese prefectures, we find that 72% of the Japanese prefectures satisfy the requirements of the sinusoidal regression curve pattern. Using the normal annual variation pattern of atmospheric daily minimum radon concentration data, these prefectures are suitable areas for obtaining anomalous radon variations related to earthquakes.
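
    A minimal sketch of fitting such a sinusoidal regression curve with a fixed one-year period, and of flagging residual outliers as candidate anomalies; the radon values below are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def annual_wave(day, mean, amp, phase):
        """Sinusoidal regression with a fixed one-year period."""
        return mean + amp * np.sin(2 * np.pi * day / 365.25 + phase)

    # Hypothetical daily-minimum radon concentrations (Bq/m^3) vs. day number.
    rng = np.random.default_rng(8)
    day = np.arange(0, 730, 5)
    radon = annual_wave(day, 6.0, 2.0, 1.1) + rng.normal(0, 0.4, day.size)
    popt, _ = curve_fit(annual_wave, day, radon, p0=(5.0, 1.0, 0.0))
    resid = radon - annual_wave(day, *popt)
    anomalous = np.abs(resid) > 3 * resid.std()    # candidate anomalous days
    print(popt, "anomalous days:", day[anomalous])
    ```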

  19. Assessment of Shape Changes of Mistletoe Berries: A New Software Approach to Automatize the Parameterization of Path Curve Shaped Contours

    PubMed Central

    Derbidge, Renatus; Feiten, Linus; Conradt, Oliver; Heusser, Peter; Baumgartner, Stephan

    2013-01-01

    Photographs of mistletoe (Viscum album L.) berries taken by a permanently fixed camera during their development in autumn were subjected to an outline shape analysis by fitting path curves using a mathematical algorithm from projective geometry. During growth and maturation processes the shape of mistletoe berries can be described by a set of such path curves, making it possible to extract changes of shape using one parameter called Lambda. Lambda describes the outline shape of a path curve. Here we present methods and software to capture and measure these changes of form over time. The present paper describes the software used to automatize a number of tasks including contour recognition, optimization of fitting the contour via hill-climbing, derivation of the path curves, computation of Lambda and blinding the pictures for the operator. The validity of the program is demonstrated by results from three independent measurements showing circadian rhythm in mistletoe berries. The program is available as open source and will be applied in a project to analyze the chronobiology of shape in mistletoe berries and the buds of their host trees. PMID:23565255

  20. A mathematical function for the description of nutrient-response curve

    PubMed Central

    Ahmadi, Hamed

    2017-01-01

    Several mathematical equations have been proposed for modeling nutrient-response curves for animals and humans, justified by goodness of fit and/or by biological mechanism. In this paper, a functional form of a generalized quantitative model based on the Rayleigh distribution principle for describing nutrient-response phenomena is derived. The three parameters governing the curve (a) have biological interpretations, (b) may be used to calculate reliable estimates of nutrient-response relationships, and (c) provide the basis for deriving relationships between nutrient and physiological responses. The new function was successfully applied to fit nutritional data obtained from 6 experiments covering a wide range of nutrients and responses. An evaluation and comparison were also done on simulated data sets to check the suitability of the new model and the four-parameter logistic model for describing nutrient responses. This study indicates the usefulness and wide applicability of the newly introduced, simple and flexible model when applied as a quantitative approach to characterizing nutrient-response curves. This new mathematical way to describe nutritional-response data, with some useful biological interpretations, has the potential to be used as an alternative approach in modeling nutritional response curves to estimate nutrient efficiency and requirements. PMID:29161271

  1. Comparative Evaluation of Conventional and Accelerated Castings on Marginal Fit and Surface Roughness.

    PubMed

    Jadhav, Vivek Dattatray; Motwani, Bhagwan K; Shinde, Jitendra; Adhapure, Prasad

    2017-01-01

    The aim of this study was to evaluate the marginal fit and surface roughness of complete cast crowns made by a conventional and an accelerated casting technique. The study was divided into three parts: in Part I, the marginal fit of full metal crowns made by both casting techniques was checked in the vertical direction; in Part II, the fit of sectional metal crowns made by both casting techniques was checked in the horizontal direction; and in Part III, the surface roughness of disc-shaped metal plate specimens made by both casting techniques was checked. In each part, the conventional technique was compared with the accelerated technique. The results of the t-test and independent-samples test do not indicate statistically significant differences in the marginal discrepancy detected between the two casting techniques. For marginal discrepancy and surface roughness, however, crowns fabricated with the accelerated technique differed significantly from those fabricated with the conventional technique. The accelerated casting technique showed quite satisfactory results, but the conventional technique was superior in terms of marginal fit and surface roughness.

  2. Time-Frequency Analysis of the Dispersion of Lamb Modes

    NASA Technical Reports Server (NTRS)

    Prosser, W. H.; Seale, Michael D.; Smith, Barry T.

    1999-01-01

    Accurate knowledge of the velocity dispersion of Lamb modes is important for ultrasonic nondestructive evaluation methods used in detecting and locating flaws in thin plates and in determining their elastic stiffness coefficients. Lamb mode dispersion is also important in the acoustic emission technique for accurately triangulating the location of emissions in thin plates. In this research, the ability to characterize Lamb mode dispersion through a time-frequency analysis (the pseudo Wigner-Ville distribution) was demonstrated. A major advantage of time-frequency methods is the ability to analyze acoustic signals containing multiple propagation modes, which overlap and superimpose in the time domain signal. By combining time-frequency analysis with a broadband acoustic excitation source, the dispersion of multiple Lamb modes over a wide frequency range can be determined from as little as a single measurement. In addition, the technique provides a direct measurement of the group velocity dispersion. The technique was first demonstrated in the analysis of a simulated waveform in an aluminum plate in which the Lamb mode dispersion was well known. Portions of the dispersion curves of the A0, A1, S0, and S2 Lamb modes were obtained from this one waveform. The technique was also applied for the analysis of experimental waveforms from a unidirectional graphite/epoxy composite plate. Measurements were made both along, and perpendicular to, the fiber direction. In this case, the signals contained only the lowest order symmetric and antisymmetric modes. A least squares fit of the results from several source-to-detector distances was used. Theoretical dispersion curves were calculated and are shown to be in good agreement with experimental results.

  3. High pressure melting curve of platinum up to 35 GPa

    NASA Astrophysics Data System (ADS)

    Patel, Nishant N.; Sunder, Meenakshi

    2018-04-01

    The melting curve of platinum (Pt) has been measured up to 35 GPa using our laboratory-based laser-heated diamond anvil cell (LHDAC) facility. The laser speckle method was employed to detect the onset of melting. The high-pressure melting curve of Pt obtained in the present study has been compared with previously reported experimental and theoretical results; it agrees well, within experimental error, with the results of Kavner et al. The experimental data fitted with the Simon equation give (∂Tm/∂P) ≈ 25 K/GPa at P ≈ 1 MPa.
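
    A sketch of fitting the Simon (Simon-Glatzel) melting equation Tm(P) = T0·(1 + P/a)^(1/c), with T0 fixed at platinum's ambient melting point (about 2041 K); the pressure-temperature points below are hypothetical, not the paper's data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    T0 = 2041.0  # ambient-pressure melting point of Pt, K (assumed fixed)

    def simon(P, a, c):
        """Simon-Glatzel melting curve: Tm(P) = T0 * (1 + P/a)^(1/c)."""
        return T0 * (1.0 + P / a) ** (1.0 / c)

    # Hypothetical LHDAC melting points: pressure (GPa) vs. temperature (K).
    P = np.array([5, 10, 15, 20, 25, 30, 35], float)
    Tm = np.array([2150, 2260, 2350, 2440, 2520, 2590, 2660], float)
    (a, c), _ = curve_fit(simon, P, Tm, p0=(20.0, 4.0))
    dTdP0 = T0 / (a * c)  # initial slope dTm/dP as P -> 0, in K/GPa
    print(f"a = {a:.1f} GPa, c = {c:.2f}, dTm/dP(0) ≈ {dTdP0:.1f} K/GPa")
    ```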

  4. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given "training" set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of "training" settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σt,max and, to a lesser extent, the maximum tensile strength σn,max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), Et, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  5. Fast Decomposition of Three-Component Spectra of Fluorescence Quenching by White and Grey Methods of Data Modeling.

    PubMed

    Kałka, Andrzej J; Turek, Andrzej M

    2018-04-03

    'White' and 'grey' methods of data modeling have been employed to resolve the heterogeneous fluorescence from a fluorophore mixture of 9-cyanoanthracene (CNA), 10-chloro-9-cyanoanthracene (ClCNA) and 9,10-dicyanoanthracene (DCNA) into the individual fluorescence spectra of the components. The three-component spectra of fluorescence quenching in methanol were recorded for increasing amounts of lithium bromide used as a quencher. The associated intensity decay profiles of the differentially quenched fluorescence of the single components were modeled on the basis of a linear Stern-Volmer plot. These profiles are necessary to initiate the fitting procedure in both 'white' and 'grey' modeling of the original data matrices. 'White' methods of data modeling, also called 'hard' methods, are based on chemical/physical laws expressed in terms of well-known or generally accepted mathematical equations; the parameters of these models are not known and are estimated by least squares curve fitting. 'Grey' approaches to data modeling, also known as hard-soft modeling techniques, make use of both hard-model and soft-model parts. In practice, the difference between 'white' and 'grey' methods lies in the way the 'crude' fluorescence intensity decays of the mixture components are estimated: in the former case they are given in functional form, while in the latter they are given as digitized curves which, in general, can only be obtained using dedicated techniques of factor analysis. In this paper, the initial values of the Stern-Volmer constants of the pure components were evaluated by both 'point-by-point' and 'matrix' versions of the method making use of the concept of wavelength-dependent intensity fractions, as well as by rank annihilation factor analysis applied to data matrices of difference fluorescence spectra constructed in two ways: from spectra recorded for a few excitation lines at the same quencher concentration, or classically from a series of spectra measured for one selected excitation line at increasing quencher concentrations. The results of multiple curve resolution obtained by all of the applied methods have been scrutinized and compared. In addition, the effects of inadequate sample preparation and increasing instrumental noise on the shape of the resolved spectral profiles have been studied on several datasets mimicking the measured data matrices.
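
    The linear Stern-Volmer relation used to initialize the fits, F0/F = 1 + K_SV[Q], takes only a few lines to fit. The sketch below estimates K_SV by least squares with the intercept pinned at its theoretical value of 1; the concentrations and intensities are invented, not the paper's data.

    ```python
    import numpy as np

    # Illustrative quencher concentrations (mol/L) and relative intensities
    conc = np.array([0.00, 0.02, 0.05, 0.10, 0.20])
    F = np.array([1.000, 0.862, 0.725, 0.571, 0.400])

    # Stern-Volmer: F0/F = 1 + Ksv*[Q]  ->  (F0/F - 1) = Ksv*[Q]
    y = F[0] / F - 1.0

    # Least squares slope through the origin (intercept fixed at 1)
    Ksv = np.sum(conc * y) / np.sum(conc ** 2)
    print(f"Ksv = {Ksv:.2f} L/mol")
    ```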

  6. [Keratoconus special soft contact lens fitting].

    PubMed

    Yamazaki, Ester Sakae; da Silva, Vanessa Cristina Batista; Morimitsu, Vagner; Sobrinho, Marcelo; Fukushima, Nelson; Lipener, César

    2006-01-01

    To evaluate the fitting and use of a soft contact lens in keratoconic patients. Retrospective study of 80 eyes of 66 patients fitted with a special soft contact lens for keratoconus at the Contact Lens Section of UNIFESP and in private clinics. Keratoconus was classified according to degree of disease severity by keratometric pattern. Age, gender, diagnosis, keratometry, visual acuity, spherical equivalent (SE), base curve, and clinical indication were recorded. The mean age of the 66 patients (80 eyes) was 29 years; 51.5% were men and 48.5% women. By severity group, 15.0% were incipient, 53.7% moderate, 26.3% advanced, and 5.0% severe. The majority of the eyes of patients using contact lenses (91.25%) achieved visual acuity better than 20/40. Of the eyes, 58% were fitted with lenses of spherical power (mean -5.45 diopters) and 41% with spherocylindrical power (from -0.50 to -5.00 cylindrical diopters). The most frequent base curve was 7.6 mm, used in 61% of the eyes. The main reasons for fitting this special lens were reduced tolerance of, and the poor fitting patterns achieved with, other lenses. The special soft contact lens is useful in fitting difficult keratoconic patients, offering comfort and improving visual rehabilitation, and may allow more patients to postpone the need for a corneal transplant.

  7. Why "suboptimal" is optimal: Jensen's inequality and ectotherm thermal preferences.

    PubMed

    Martin, Tara Laine; Huey, Raymond B

    2008-03-01

    Body temperature (T(b)) profoundly affects the fitness of ectotherms. Many ectotherms use behavior to hold T(b) within narrow limits. These preferred temperatures are assumed to be optimal and therefore to match the body temperature (T(rmax)) that maximizes fitness (r). We develop an optimality model and find that the optimal body temperature (T(o)) should not be centered at T(rmax) but shifted to a lower temperature. This finding seems paradoxical but follows from two considerations related to Jensen's inequality, which describes how variance and skew influence integrals of nonlinear functions. First, ectotherms are not perfect thermoregulators and so experience a range of T(b). Second, temperature-fitness curves are asymmetric, such that a T(b) higher than T(rmax) depresses fitness more than a T(b) displaced an equivalent amount below T(rmax). Our model makes several predictions. The magnitude of the optimal shift (T(rmax) - T(o)) should increase with the degree of asymmetry of the temperature-fitness curve and with the variance of T(b). Deviations should be relatively large for thermal specialists but insensitive to whether fitness increases with T(rmax) ("hotter is better"). Asymmetric (left-skewed) T(b) distributions reduce the magnitude of the optimal shift but do not eliminate it. Comparative data (insects, lizards) support key predictions. Thus, "suboptimal" is optimal.
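
    The core argument can be reproduced numerically: averaging an asymmetric fitness curve over a spread of body temperatures pushes the best preferred temperature below T(rmax). The sketch below uses an invented asymmetric curve and a normal T(b) distribution, not the authors' model.

    ```python
    import numpy as np

    T_RMAX = 35.0

    def fitness(T):
        # Asymmetric curve: gentle rise below the optimum, steep fall above
        # (widths are illustrative)
        width = np.where(T <= T_RMAX, 8.0, 2.0)
        return np.exp(-((T - T_RMAX) / width) ** 2)

    def expected_fitness(T_pref, sd=2.0):
        """Average fitness for an imperfect thermoregulator with normal T_b."""
        T = np.linspace(T_pref - 5.0 * sd, T_pref + 5.0 * sd, 2001)
        w = np.exp(-0.5 * ((T - T_pref) / sd) ** 2)
        return np.sum(w * fitness(T)) / np.sum(w)

    # Scan preferred temperatures: the maximizer sits below T_RMAX
    T_scan = np.linspace(25.0, 40.0, 601)
    Ef = [expected_fitness(Tp) for Tp in T_scan]
    T_o = T_scan[int(np.argmax(Ef))]
    print(f"T_o = {T_o:.2f}, shift below T_rmax = {T_RMAX - T_o:.2f}")
    ```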

  8. Recognizing Physisorption and Chemisorption in Carbon Nanotubes Gas Sensors by Double Exponential Fitting of the Response.

    PubMed

    Calvi, Andrea; Ferrari, Alberto; Sbuelz, Luca; Goldoni, Andrea; Modesti, Silvio

    2016-05-19

    Multi-walled carbon nanotubes (CNTs) have been grown in situ on a SiO2 substrate and used as gas sensors. For this purpose, the voltage response of the CNTs as a function of time has been used to detect H2 and CO2 at various concentrations by supplying a constant current to the system. The analysis of both adsorption and desorption curves has revealed two different exponential behaviours for each curve. The study of the characteristic times, obtained from fits to the data, has allowed us to identify chemisorption and physisorption processes on the CNTs separately.
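
    A double-exponential decomposition of this kind is a standard nonlinear least squares exercise. The sketch below fits a sum of two exponentials plus a baseline to a synthetic transient; all values are invented and this is not the authors' analysis code.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def double_exp(t, a1, tau1, a2, tau2, c):
        """Two exponential processes plus a baseline offset."""
        return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + c

    # Synthetic stand-in for a measured voltage transient during gas exposure
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 300.0, 600)                     # seconds
    v = double_exp(t, 0.8, 12.0, 0.4, 90.0, 0.1)
    v += rng.normal(0.0, 0.01, t.size)

    # Starting values deliberately separate a fast and a slow time constant
    p0 = [1.0, 10.0, 0.5, 100.0, 0.0]
    (a1, tau1, a2, tau2, c), _ = curve_fit(double_exp, t, v, p0=p0)
    print(f"tau_fast = {tau1:.1f} s, tau_slow = {tau2:.1f} s")
    ```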

  9. Micromagnetic measurement for characterization of ferromagnetic materials' microstructural properties

    NASA Astrophysics Data System (ADS)

    Zhang, Shuo; Shi, Xiaodong; Udpa, Lalita; Deng, Yiming

    2018-05-01

    Magnetic Barkhausen noise (MBN) was measured in low-carbon steels, and the relationship between carbon content and a parameter extracted from the MBN signal has been investigated. The parameter is extracted experimentally by fitting the original profiles with two Gaussian curves. The gap between the two peaks (ΔG) of the fitted Gaussian curves shows a good linear relationship with the carbon content of the samples in the experiment. The result has been validated by Monte Carlo simulation. To ensure the sensitivity of the measurement, an advanced multi-objective optimization algorithm, the non-dominated sorting genetic algorithm III (NSGA-III), has been used to optimize the magnetic core of the sensor.
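
    The two-Gaussian decomposition and the peak-gap parameter ΔG can be sketched as follows; the double-peaked profile below is an invented stand-in for a measured MBN envelope.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
        return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
                + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

    # Synthetic double-peaked profile
    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 400)
    y = two_gaussians(x, 1.0, -0.2, 0.15, 0.7, 0.3, 0.20)
    y += rng.normal(0.0, 0.02, x.size)

    p0 = [1.0, -0.1, 0.2, 0.5, 0.2, 0.2]
    popt, _ = curve_fit(two_gaussians, x, y, p0=p0)
    delta_g = abs(popt[4] - popt[1])  # gap between the fitted peak centres
    print(f"peak gap dG = {delta_g:.3f}")
    ```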

  10. The light curve of SN 1987A revisited: constraining production masses of radioactive nuclides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seitenzahl, Ivo R.; Timmes, F. X.; Magkotsios, Georgios, E-mail: ivo.seitenzahl@anu.edu.au

    2014-09-01

    We revisit the evidence for the contribution of the long-lived radioactive nuclides 44Ti, 55Fe, 56Co, 57Co, and 60Co to the UVOIR light curve of SN 1987A. We show that the V-band luminosity constitutes a roughly constant fraction of the bolometric luminosity between 900 and 1900 days, and we obtain an approximate bolometric light curve out to 4334 days by scaling the late-time V-band data by a constant factor where no bolometric light curve data are available. Considering the five most relevant decay chains starting at 44Ti, 55Co, 56Ni, 57Ni, and 60Co, we perform a least squares fit to the constructed composite bolometric light curve. For the nickel isotopes, we obtain best fit values of M(56Ni) = (7.1 ± 0.3) × 10^-2 M☉ and M(57Ni) = (4.1 ± 1.8) × 10^-3 M☉. Our best fit 44Ti mass is M(44Ti) = (0.55 ± 0.17) × 10^-4 M☉, which is in disagreement with the much higher (3.1 ± 0.8) × 10^-4 M☉ recently derived from INTEGRAL observations. The associated uncertainties far exceed the best fit values for 55Co and 60Co and, as a result, we only give upper limits on the production masses of M(55Co) < 7.2 × 10^-3 M☉ and M(60Co) < 1.7 × 10^-4 M☉. Furthermore, we find that the leptonic channels in the decay of 57Co (internal conversion and Auger electrons) are a significant contribution and constitute up to 15.5% of the total luminosity. Consideration of the kinetic energy of these electrons is essential in lowering our best fit nickel isotope production ratio to [57Ni/56Ni] = 2.5 ± 1.1, which is still somewhat high but is in agreement with gamma-ray observations and model predictions.
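
    Because the late-time luminosity is linear in the production masses, the fit reduces to linear least squares over decay-chain basis curves. The sketch below shows only that structure: the amplitudes are invented, and the parent-daughter chains and leptonic deposition terms of the actual analysis are omitted.

    ```python
    import numpy as np

    # Half-lives in days (standard values); energy-per-decay factors are
    # folded into the coefficients, so the fitted values are proportional
    # to, not equal to, production masses
    half_life = {"56Co": 77.2, "57Co": 271.8, "44Ti": 20366.0}
    lam = {k: np.log(2.0) / h for k, h in half_life.items()}

    t = np.linspace(200.0, 4000.0, 300)  # days since explosion
    basis = np.column_stack([lam[k] * np.exp(-lam[k] * t) for k in lam])

    # Synthetic "bolometric light curve" from assumed coefficients
    rng = np.random.default_rng(0)
    true_c = np.array([1.0, 0.08, 0.003])
    L_obs = basis @ true_c * (1.0 + rng.normal(0.0, 0.02, t.size))

    # Linear least squares recovers the chain coefficients directly
    c_fit, *_ = np.linalg.lstsq(basis, L_obs, rcond=None)
    print(dict(zip(lam, c_fit)))
    ```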

  11. Bayesian investigation of isochrone consistency using the old open cluster NGC 188

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hills, Shane; Courteau, Stéphane; Von Hippel, Ted

    2015-03-01

    This paper provides a detailed comparison of the differences in parameters derived for a star cluster from its color–magnitude diagrams (CMDs) depending on the filters and models used. We examine the consistency and reliability of fitting three widely used stellar evolution models to 15 combinations of optical and near-IR photometry for the old open cluster NGC 188. The optical filter response curves match those of theoretical systems and are thus not the source of fit inconsistencies. NGC 188 is ideally suited to this study thanks to a wide variety of high-quality photometry and available proper motions and radial velocities that enable us to remove non-cluster members and many binaries. Our Bayesian fitting technique yields inferred values of age, metallicity, distance modulus, and absorption as a function of the photometric band combinations and stellar models. We show that the historically favored three-band combinations of UBV and VRI can be meaningfully inconsistent with each other and with longer baseline data sets such as UBVRIJHK_S. Differences among model sets can also be substantial. For instance, fitting Yi et al. (2001) and Dotter et al. (2008) models to UBVRIJHK_S photometry for NGC 188 yields the following cluster parameters: age = (5.78 ± 0.03, 6.45 ± 0.04) Gyr, [Fe/H] = (+0.125 ± 0.003, −0.077 ± 0.003) dex, (m−M)_V = (11.441 ± 0.007, 11.525 ± 0.005) mag, and A_V = (0.162 ± 0.003, 0.236 ± 0.003) mag, respectively. Within the formal fitting errors, these two fits are substantially and statistically different. Such differences among fits using different filters and models are a cautionary tale regarding our current ability to fit star cluster CMDs. Additional modeling of this kind, with more models and star clusters, and future Gaia parallaxes are critical for isolating and quantifying the most relevant uncertainties in stellar evolutionary models.

  12. The Carnegie Supernova Project I. Analysis of stripped-envelope supernova light curves

    NASA Astrophysics Data System (ADS)

    Taddia, F.; Stritzinger, M. D.; Bersten, M.; Baron, E.; Burns, C.; Contreras, C.; Holmbo, S.; Hsiao, E. Y.; Morrell, N.; Phillips, M. M.; Sollerman, J.; Suntzeff, N. B.

    2018-02-01

    Stripped-envelope (SE) supernovae (SNe) include H-poor (Type IIb), H-free (Type Ib), and He-free (Type Ic) events thought to be associated with the deaths of massive stars. The exact nature of their progenitors is a matter of debate, with several lines of evidence pointing towards intermediate-mass (M_init < 20 M⊙) stars in binary systems, while in other cases they may be linked to single massive Wolf-Rayet stars. Here we present the analysis of the light curves of 34 SE SNe published by the Carnegie Supernova Project (CSP-I) that are unparalleled in terms of photometric accuracy and wavelength range. Light-curve parameters are estimated by fitting an analytical function, and trends are searched for among the resulting fit parameters. Detailed inspection of the dataset suggests a tentative correlation between the peak absolute B-band magnitude and Δm15(B), while the post-maximum light curves reveal a correlation between the late-time linear slope and Δm15. Making use of the full set of optical and near-IR photometry, combined with robust host-galaxy extinction corrections, comprehensive bolometric light curves are constructed and compared to both analytic and hydrodynamical models. This analysis finds consistent results between the two modeling techniques, and from the hydrodynamical models we obtain ejecta masses of 1.1-6.2 M⊙, 56Ni masses of 0.03-0.35 M⊙, and explosion energies (excluding two SNe Ic-BL) of 0.25-3.0 × 10^51 erg. Our analysis indicates that adopting κ = 0.07 cm^2 g^-1 as the mean opacity is a suitable assumption when comparing Arnett-model results to those obtained from hydrodynamical calculations. We also find that adopting He I and O I line velocities to infer the expansion velocity in He-rich and He-poor SNe, respectively, provides ejecta masses relatively similar to those obtained by using the Fe II line velocities, although the use of Fe II as a diagnostic does imply higher explosion energies. The inferred range of ejecta masses is compatible with intermediate-mass (M_ZAMS ≤ 20 M⊙) progenitor stars in binary systems for the majority of SE SNe. Furthermore, our hydrodynamical modeling of the bolometric light curves suggests that a significant fraction of the sample may have experienced substantial mixing of 56Ni, particularly in the case of SNe Ic. Based on observations collected at Las Campanas Observatory. Bolometric light curve tables are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/609/A136

  13. Multivariate curve resolution-alternating least squares and kinetic modeling applied to near-infrared data from curing reactions of epoxy resins: mechanistic approach and estimation of kinetic rate constants.

    PubMed

    Garrido, M; Larrechi, M S; Rius, F X

    2006-02-01

    This study describes the combination of multivariate curve resolution-alternating least squares with a kinetic modeling strategy for obtaining the kinetic rate constants of a curing reaction of epoxy resins. The reaction between phenyl glycidyl ether and aniline is monitored by near-infrared spectroscopy under isothermal conditions for several initial molar ratios of the reagents. The data for all experiments, arranged in a column-wise augmented data matrix, are analyzed using multivariate curve resolution-alternating least squares. The concentration profiles recovered are fitted to a chemical model proposed for the reaction. The selection of the kinetic model is assisted by the information contained in the recovered concentration profiles. The nonlinear fitting provides the kinetic rate constants. The optimized rate constants are in agreement with values reported in the literature.
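
    The final step, fitting recovered concentration profiles to a kinetic model, can be illustrated with a simple integrated rate law. Real epoxy-amine curing is autocatalytic, so the plain second-order profile and all constants below are placeholders, not the paper's mechanism.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def second_order(t, k, c0):
        """Integrated second-order law for equal initial concentrations."""
        return c0 / (1.0 + k * c0 * t)

    # Synthetic "recovered concentration profile" of the epoxide
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 120.0, 25)  # minutes
    c = second_order(t, 0.015, 1.2) + rng.normal(0.0, 0.01, t.size)

    (k, c0), _ = curve_fit(second_order, t, c, p0=[0.01, 1.0])
    print(f"k = {k:.4f} L mol^-1 min^-1, c0 = {c0:.3f} mol/L")
    ```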

  14. Nonlinear model identification and spectral submanifolds for multi-degree-of-freedom mechanical vibrations

    NASA Astrophysics Data System (ADS)

    Szalai, Robert; Ehrhardt, David; Haller, George

    2017-06-01

    In a nonlinear oscillatory system, spectral submanifolds (SSMs) are the smoothest invariant manifolds tangent to linear modal subspaces of an equilibrium. Amplitude-frequency plots of the dynamics on SSMs provide the classic backbone curves sought in experimental nonlinear model identification. We develop here a methodology to compute analytically both the shape of SSMs and their corresponding backbone curves from a data-assimilating model fitted to experimental vibration signals. This model identification utilizes Takens's delay-embedding theorem, as well as a least-squares fit to the Taylor expansion of the sampling map associated with that embedding. The SSMs are then constructed for the sampling map using the parametrization method for invariant manifolds, which assumes that the manifold is an embedding of, rather than a graph over, a spectral subspace. Using examples of both synthetic and real experimental data, we demonstrate that this approach reproduces backbone curves with high accuracy.

  15. Study on efficiency droop in InGaN/GaN light-emitting diodes based on differential carrier lifetime analysis

    NASA Astrophysics Data System (ADS)

    Meng, Xiao; Wang, Lai; Hao, Zhibiao; Luo, Yi; Sun, Changzheng; Han, Yanjun; Xiong, Bing; Wang, Jian; Li, Hongtao

    2016-01-01

    Efficiency droop is currently one of the most actively studied problems for GaN-based light-emitting diodes (LEDs). In this work, a differential carrier lifetime measurement system is optimized to accurately determine the carrier lifetimes (τ) of blue and green LEDs under different injection currents (I). By fitting the τ-I curves and the efficiency droop curves of the LEDs according to the ABC carrier rate equation model, the impact of Auger recombination and carrier leakage on efficiency droop can be characterized simultaneously. For the samples used in this work, it is found that the experimental τ-I curves cannot be described by Auger recombination alone. Instead, satisfactory fitting results are obtained by taking both carrier leakage and carrier delocalization into account, which implies that carrier leakage plays a more significant role in efficiency droop at high injection levels.
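
    Under the ABC model the inverse differential lifetime is quadratic in carrier density, 1/τ = A + 2Bn + 3Cn², so a polynomial least-squares fit separates the three recombination channels. The coefficients below are invented, and the sketch assumes the carrier density is already known, which in practice must itself be extracted from the measurement.

    ```python
    import numpy as np

    A, B, C = 1.0e7, 2.0e-11, 1.0e-29  # s^-1, cm^3 s^-1, cm^6 s^-1 (invented)

    rng = np.random.default_rng(0)
    n = np.logspace(17.0, 19.0, 40)    # carrier density, cm^-3
    tau = 1.0 / (A + 2.0 * B * n + 3.0 * C * n ** 2)
    tau *= 1.0 + rng.normal(0.0, 0.01, n.size)  # measurement scatter

    n18 = n / 1.0e18  # rescale for a well-conditioned polynomial fit
    c2, c1, c0 = np.polyfit(n18, 1.0 / tau, 2)
    A_fit, B_fit, C_fit = c0, c1 / 2.0 / 1.0e18, c2 / 3.0 / 1.0e36
    print(f"A ~ {A_fit:.2e}, B ~ {B_fit:.2e}, C ~ {C_fit:.2e}")
    ```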

  16. ROC analysis of diagnostic performance in liver scintigraphy.

    PubMed

    Fritz, S L; Preston, D F; Gallagher, J H

    1981-02-01

    Studies on the accuracy of liver scintigraphy for the detection of metastases were assembled from 38 sources in the medical literature. An ROC curve was fitted to the observed values of sensitivity and specificity using an algorithm developed by Ogilvie and Creelman. This ROC curve fitted the data better than average sensitivity and specificity values in each of four subsets of the data. For the subset dealing with Tc-99m sulfur colloid scintigraphy, performed for detection of suspected metastases and containing data on 2800 scans from 17 independent series, it was not possible to reject the hypothesis that interobserver variation was entirely due to the use of different decision thresholds by the reporting clinicians. Thus the ROC curve obtained is a reasonable baseline estimate of the performance potentially achievable in today's clinical setting. Comparison of new reports with these data is possible, but is limited by the small sample sizes in most reported series.
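
    A binormal ROC fit of this kind becomes a straight-line fit in probit coordinates. The ordinary least-squares version below is only a rough stand-in for the Ogilvie-Creelman maximum-likelihood algorithm, and the operating points are invented.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Reported (sensitivity, specificity) pairs from different series
    sens = np.array([0.82, 0.75, 0.90, 0.68, 0.86])
    spec = np.array([0.78, 0.88, 0.70, 0.92, 0.75])

    # Binormal model: z_TPR = a + b * z_FPR in probit coordinates
    z_fpr = norm.ppf(1.0 - spec)
    z_tpr = norm.ppf(sens)
    b, a = np.polyfit(z_fpr, z_tpr, 1)

    # Smooth fitted ROC curve and its area
    fpr = np.linspace(0.001, 0.999, 200)
    tpr = norm.cdf(a + b * norm.ppf(fpr))
    auc = np.trapz(tpr, fpr)
    print(f"a = {a:.2f}, b = {b:.2f}, AUC ~ {auc:.3f}")
    ```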

  17. A quantitative analysis of the effects of 2,3-diphosphoglycerate, adenosine triphosphate and inositol hexaphosphate on the oxygen dissociation curve of human haemoglobin.

    PubMed Central

    Goodford, P J; St-Louis, J; Wootton, R

    1978-01-01

    1. Oxygen dissociation curves have been measured for human haemoglobin solutions with different concentrations of the allosteric effectors 2,3-diphosphoglycerate, adenosine triphosphate and inositol hexaphosphate. 2. Each effector produces a concentration dependent right shift of the oxygen dissociation curve, but a point is reached where the shift is maximal and increasing the effector concentration has no further effect. 3. Mathematical models based on the Monod, Wyman & Changeux (1965) treatment of allosteric proteins have been fitted to the data. For each compound the simple two-state model and its extension to take account of subunit inequivalence were shown to be inadequate, and a better fit was obtained by allowing the effector to lower the oxygen affinity of the deoxy conformational state as well as binding preferentially to this conformation. PMID:722582

  18. A systematic evaluation of contemporary impurity correction methods in ITS-90 aluminium fixed point cells

    NASA Astrophysics Data System (ADS)

    da Silva, Rodrigo; Pearce, Jonathan V.; Machin, Graham

    2017-06-01

    The fixed points of the International Temperature Scale of 1990 (ITS-90) are the basis of the calibration of standard platinum resistance thermometers (SPRTs). Impurities in the fixed point material at the level of parts per million can give rise to an elevation or depression of the fixed point temperature of the order of millikelvins, which often represents the most significant contribution to the uncertainty of SPRT calibrations. A number of methods for correcting for the effect of impurities have been advocated, but it is becoming increasingly evident that no single method can be used in isolation. In this investigation, a suite of five aluminium fixed point cells (defined ITS-90 freezing temperature 660.323 °C) has been constructed, each cell using metal sourced from a different supplier. The five cells have very different levels and types of impurities. For each cell, chemical assays based on the glow discharge mass spectrometry (GDMS) technique have been obtained from three separate laboratories. In addition, a series of high-quality, long-duration freezing curves has been obtained for each cell, using three different high-quality SPRTs, all measured under nominally identical conditions. The set of GDMS analyses and freezing curves was then used to compare the different proposed impurity correction methods. It was found that the most consistent corrections were obtained with a hybrid correction method based on the sum of individual estimates (SIE) and overall maximum estimate (OME), namely the SIE/Modified-OME method. Also highly consistent was the correction technique based on fitting a Scheil solidification model to the measured freezing curves, provided certain well-defined constraints are applied. Importantly, the most consistent methods are those which do not depend significantly on the chemical assay.
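
    Fitting a Scheil-type solidification model to a freezing curve can be sketched with a single effective impurity: as the solid fraction F grows, impurities concentrate in the shrinking liquid and pull the plateau temperature down. The functional form and every constant below are illustrative, not the investigation's constrained multi-impurity treatment.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def scheil(F, T_liq, m_c0, k):
        """Plateau temperature vs. solid fraction for one effective impurity;
        m_c0 is the liquidus slope times the overall concentration."""
        return T_liq + m_c0 * ((1.0 - F) ** (k - 1.0) - 1.0)

    # Synthetic freezing curve around the aluminium point (values invented)
    rng = np.random.default_rng(0)
    F = np.linspace(0.05, 0.9, 40)
    T = scheil(F, 660.323, -0.0008, 0.2) + rng.normal(0.0, 0.0001, F.size)

    popt, _ = curve_fit(scheil, F, T, p0=[660.3, -0.001, 0.3])
    print("fitted T_liq, m*C0, k:", popt)
    ```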

  19. An advanced approach for computer modeling and prototyping of the human tooth.

    PubMed

    Chang, Kuang-Hua; Magdum, Sheetalkumar; Khera, Satish C; Goel, Vijay K

    2003-05-01

    This paper presents a systematic and practical method for constructing accurate computer and physical models that can be employed for the study of human tooth mechanics. The proposed method starts with a histological section preparation of a human tooth. By tracing outlines of the tooth on the sections, discrete points are obtained and are employed to construct B-spline curves that represent the exterior contours and dentino-enamel junction (DEJ) of the tooth, using a least-squares curve-fitting technique. The surface skinning technique is then employed to quilt the B-spline curves to create a smooth boundary and DEJ of the tooth using B-spline surfaces. These surfaces are imported into SolidWorks via its application programming interface to create solid models. The solid models are then imported into Pro/MECHANICA Structure for finite element analysis (FEA). The major advantage of the proposed method is that it first generates smooth solid models, instead of finite element models in discretized form. As a result, a more advanced p-FEA can be employed for structural analysis, which usually provides superior results to traditional h-FEA. In addition, the solid model constructed is smooth and can be fabricated at various scales using solid freeform fabrication technology. This method is especially useful in supporting bioengineering applications, where the shape of the object is usually complicated. A human maxillary second molar is presented to illustrate and demonstrate the proposed method. Both the solid and p-FEA models of the molar are presented; however, comparison between p- and h-FEA models is outside the scope of this paper.
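
    The least-squares B-spline fit to traced outline points can be approximated with scipy's smoothing splines; the closed synthetic contour below stands in for digitized section outlines, and the smoothing factor is an arbitrary choice.

    ```python
    import numpy as np
    from scipy.interpolate import splev, splprep

    rng = np.random.default_rng(0)

    # Synthetic closed contour standing in for traced outline points
    theta = np.linspace(0.0, 2.0 * np.pi, 80, endpoint=False)
    r = 1.0 + 0.15 * np.cos(3.0 * theta)
    x = r * np.cos(theta) + rng.normal(0.0, 0.01, theta.size)
    y = r * np.sin(theta) + rng.normal(0.0, 0.01, theta.size)

    # Least-squares smoothing B-spline (per=True closes the curve)
    tck, u = splprep([x, y], s=0.01, per=True)
    xs, ys = splev(np.linspace(0.0, 1.0, 400), tck)
    print(f"evaluated {len(xs)} points on the fitted outline")
    ```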

  1. Comparative Evaluation of Conventional and Accelerated Castings on Marginal Fit and Surface Roughness

    PubMed Central

    Jadhav, Vivek Dattatray; Motwani, Bhagwan K.; Shinde, Jitendra; Adhapure, Prasad

    2017-01-01

    Aims: The aim of this study was to evaluate the marginal fit and surface roughness of complete cast crowns made by a conventional and an accelerated casting technique. Settings and Design: The study was divided into three parts. In Part I, the marginal fit of full metal crowns made by both casting techniques was checked in the vertical direction; in Part II, the fit of sectional metal crowns made by both casting techniques was checked in the horizontal direction; and in Part III, the surface roughness of disc-shaped metal plate specimens made by both casting techniques was checked. Materials and Methods: A conventional casting technique was compared with an accelerated technique. The marginal fit of the full metal crowns (Part I) and the horizontal fit of the sectional metal crowns (Part II) made by both casting techniques were determined, and the surface roughness of castings made with the same techniques was compared (Part III). Statistical Analysis Used: The results of the t-test and independent sample test do not indicate statistically significant differences in the marginal discrepancy detected between the two casting techniques. Results: For marginal discrepancy and surface roughness, crowns fabricated with the accelerated technique were significantly different from those fabricated with the conventional technique. Conclusions: The accelerated casting technique showed quite satisfactory results, but the conventional technique was superior in terms of marginal fit and surface roughness. PMID:29042726

  2. Early loosening of a press-fit cup with ceramic-on-ceramic articulation: our early results.

    PubMed

    Haverkamp, Daniël; Westerbos, Stijn; Campo, Martin M; Boonstra, Ritsert H; Rob Albers, G H; van der Vis, Harm M

    2013-12-01

    In this study, we present the short-term results of the Selexys TH+ cup with the Ceramys inlay, a press-fit cup with ceramic-on-ceramic articulation (Mathys, Bettlach, Switzerland). We compared the results with a retrospectively matched control group that received the Delta PF cup (Lima, Udine, Italy), also a press-fit cup with ceramic-on-ceramic articulation. 257 elective hip arthroplasties with the Selexys TH+ cup, placed in 250 patients in 2009 and 2010, were analyzed and compared with a retrospective control group of 208 patients (222 hips) who received the uncemented Delta PF cup (Lima, Udine, Italy) in 2007 and 2008. Surgical technique and surgeons were identical in both groups. During a follow-up period of 3-21 months, 19 aseptic loosenings (7.4%) were found for the Selexys TH+ cup. Survival plotted by a Kaplan-Meier curve shows a 1-year survival of 87.4%; the Lima Delta PF cup showed a 1-year survival of 99.5%. Failure analysis showed no clear explanation for this early loosening. The Selexys TH+ cup combined with the Ceramys ceramic-on-ceramic inlay shows an unacceptably high early revision rate. Therefore, we advise against using this combination.

  3. A parameter estimation technique for stochastic self-assembly systems and its application to human papillomavirus self-assembly.

    PubMed

    Kumar, M Senthil; Schwartz, Russell

    2010-12-09

    Virus capsid assembly has been a key model system for studies of complex self-assembly but it does pose some significant challenges for modeling studies. One important limitation is the difficulty of determining accurate rate parameters. The large size and rapid assembly of typical viruses make it infeasible to directly measure coat protein binding rates or deduce them from the relatively indirect experimental measures available. In this work, we develop a computational strategy to deduce coat-coat binding rate parameters for viral capsid assembly systems by fitting stochastic simulation trajectories to experimental measures of assembly progress. Our method combines quadratic response surface and quasi-gradient descent approximations to deal with the high computational cost of simulations, stochastic noise in simulation trajectories and limitations of the available experimental data. The approach is demonstrated on a light scattering trajectory for a human papillomavirus (HPV) in vitro assembly system, showing that the method can provide rate parameters that produce accurate curve fits and are in good concordance with prior analysis of the data. These fits provide an insight into potential assembly mechanisms of the in vitro system and give a basis for exploring how these mechanisms might vary between in vitro and in vivo assembly conditions.
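
    One iteration of the response-surface/quasi-gradient idea can be sketched as follows: fit a quadratic surface to noisy objective evaluations, then step along its gradient. The bowl-shaped objective and all constants are invented stand-ins for the simulation-versus-light-scattering misfit.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Noisy black-box misfit as a function of two rate parameters (invented)
    def objective(p):
        x, y = p[..., 0], p[..., 1]
        return (x - 0.3) ** 2 + 2.0 * (y + 0.2) ** 2 + rng.normal(0.0, 0.02, x.shape)

    # Sample around the current iterate and fit the quadratic surface
    # z ~ c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2 by least squares
    pts = rng.uniform(-1.0, 1.0, size=(40, 2))
    z = objective(pts)
    x, y = pts[:, 0], pts[:, 1]
    X = np.column_stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2])
    c, *_ = np.linalg.lstsq(X, z, rcond=None)

    # Quasi-gradient step: descend along the fitted surface's gradient
    p = np.array([0.0, 0.0])
    grad = np.array([c[1] + 2.0 * c[3] * p[0] + c[4] * p[1],
                     c[2] + c[4] * p[0] + 2.0 * c[5] * p[1]])
    print("next iterate:", p - 0.5 * grad)
    ```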

  4. Single isotope evaluation of pulmonary capillary protein leak (ARDS model) using computerized gamma scintigraphy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tatum, J.L.; Strash, A.M.; Sugerman, H.J.

    Using a canine oleic acid model, a computerized gamma scintigraphic technique was evaluated to determine 1) the ability to detect pulmonary capillary protein leak in a model temporally consistent with clinical adult respiratory distress syndrome (ARDS), 2) the possibility of providing a quantitative index of leak, and 3) the feasibility of closely spaced repeat evaluations. Study animals received oleic acid (controls, n = 10; 0.05 ml/kg, n = 10; 0.10 ml/kg, n = 12; 0.15 ml/kg, n = 6) 3 hours prior to a tracer dose of technetium-99m (99mTc) HSA. One animal in each dose group also received two repeat tracer injections spaced a minimum of 45 minutes apart. Digital images were obtained with a conventional gamma camera interfaced to a dedicated medical computer. Lung:heart ratio versus time curves were generated, and a slope index was calculated for each curve. Slope index values for all doses were significantly greater than control values (P(t) < 0.0001). Each incremental dose increase was also significantly greater than the previous dose level. Oleic acid dose versus slope index fitted a linear regression model with r = 0.94. Repeat dosing produced index values with standard deviations less than the group sample standard deviations. We feel this technique may have application in the clinical study of pulmonary permeability edema.

  5. Diagnostic Techniques to Elucidate the Aerodynamic Performance of Acoustic Liners

    NASA Technical Reports Server (NTRS)

    June, Jason; Bertolucci, Brandon; Ukeiley, Lawrence; Cattafesta, Louis N., III; Sheplak, Mark

    2017-01-01

    In support of Topic A.2.8 of NASA NRA NNH10ZEA001N, the University of Florida (UF) has investigated the use of flow field optical diagnostic and micromachined sensor-based techniques for assessing the wall shear stress on an acoustic liner. Stereoscopic particle image velocimetry (sPIV) was used to study the velocity field over a liner in the Grazing Flow Impedance Duct (GFID). The results indicate that the use of a control volume based method to determine the wall shear stress is prone to significant error. The skin friction over the liner as measured using velocity curve fitting techniques was shown to be locally reduced behind an orifice, relative to the hard wall case in a streamwise plane centered on the orifice. The capacitive wall shear stress sensor exhibited a linear response for a range of shear stresses over a hard wall. PIV over the liner is consistent with lifting of the near wall turbulent structure as it passes over an orifice, followed by a region of low wall shear stress.

  6. Strain gage selection in loads equations using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Traditionally, structural loads are measured using strain gages. A loads calibration test must be done before loads can be accurately measured. In one measurement method, a series of point loads is applied to the structure, and loads equations are derived via the least squares curve fitting algorithm using the strain gage responses to the applied point loads. However, many research structures are highly instrumented with strain gages, and the number and selection of gages used in a loads equation can be problematic. This paper presents an improved technique that uses a genetic algorithm to choose the strain gages used in the loads equations. Also presented is a comparison of the genetic algorithm's performance with the current T-value technique and a variant known as the Best Step-down technique. Examples are shown using aerospace vehicle wings of high and low aspect ratio. In addition, a significant limitation in the current methods is revealed. The genetic algorithm arrived at a comparable or superior set of gages with significantly less human effort, and could be applied in instances where the current methods could not.

  7. General circular velocity relation of a test particle in a 3D gravitational potential: application to the rotation curves analysis and total mass determination of UGC 8490 and UGC 9753

    NASA Astrophysics Data System (ADS)

    Repetto, P.; Martínez-García, E. E.; Rosado, M.; Gabbasov, R.

    2018-06-01

    In this paper, we derive a novel circular velocity relation for a test particle in a 3D gravitational potential, applicable to every system of curvilinear coordinates suitable to be reduced to orthogonal form. To illustrate the potential of the derived circular velocity expression, we perform a rotation curve analysis of UGC 8490 and UGC 9753 and estimate the total and dark matter mass of these two galaxies under the assumption that their respective dark matter haloes have spherical, prolate, or oblate spheroidal mass distributions. We employ stellar population synthesis models and the total H I density map to obtain the stellar and H I+He+metals rotation curves of both galaxies. The subtraction of the stellar plus gas rotation curves from the observed rotation curves of UGC 8490 and UGC 9753 yields the dark matter circular velocity curves of both galaxies. We fit the dark matter rotation curves of UGC 8490 and UGC 9753 with the newly established circular velocity formula specialized to spherical, prolate, and oblate spheroidal mass distributions, considering the Navarro, Frenk, and White, Burkert, Di Cintio, Einasto, and Stadel dark matter haloes. Our principal findings are the following: globally, the cored dark matter profiles (Burkert and Einasto) prevail over the cuspy ones (Navarro, Frenk, and White, and Di Cintio), and spherical/oblate dark matter models fit the dark matter rotation curves of both galaxies better than prolate dark matter haloes.
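
    Fitting a parametric halo profile to a dark matter rotation curve is a single curve_fit call once the profile's circular velocity is written down. The sketch below does this for a spherical NFW halo with invented data points; the paper's prolate and oblate generalizations replace the enclosed-mass expression.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

    def v_nfw(r, rho0, rs):
        """Circular velocity of a spherical NFW halo at radius r (kpc)."""
        x = r / rs
        m_enc = 4.0 * np.pi * rho0 * rs ** 3 * (np.log(1.0 + x) - x / (1.0 + x))
        return np.sqrt(G * m_enc / r)

    # Illustrative dark matter rotation curve (observed minus stars and gas)
    r = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])          # kpc
    v_dm = np.array([40.0, 60.0, 75.0, 82.0, 86.0, 88.0, 89.0])  # km/s

    (rho0, rs), _ = curve_fit(v_nfw, r, v_dm, p0=[5e7, 5.0])
    print(f"rho0 = {rho0:.2e} Msun/kpc^3, rs = {rs:.1f} kpc")
    ```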

  8. Modelling gamma-ray light curves of phase-aligned millisecond pulsars

    NASA Astrophysics Data System (ADS)

    Chang, Shan; Zhang, Li; Li, Xiang; Jiang, Zejun

    2018-04-01

    Three gamma-ray millisecond pulsars (MSPs), PSR J1939+2134, PSR J1959+2048, and PSR J0034-0534, have been confirmed to share the common feature of phase-aligned pulses in the radio and gamma-ray bands. The observed features of these MSPs are studied with a geometric (two-pole caustic) model and a physical outer gap model (a revised 3D outer gap model) in a three-dimensional (3D) retarded magnetic dipole field with a perturbation magnetic field. In order to obtain the best-fitting model parameters, the Markov chain Monte Carlo technique is used, and reasonable GeV-band light curves for the three MSPs are obtained. Our calculations indicate that these MSPs emit high-energy photons with small inclination angles (α ≈ 10°-50°), large viewing angles (ζ ≈ 65°-100°), and a small perturbation factor (ɛ ≈ -0.15-0.1). Note that the factor ɛ, which describes the strength of the perturbed magnetic field, is less than zero in both models, so the current-induced magnetic field plays a leading role in setting the pulse locations of these MSPs.
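
    The MCMC fitting step can be illustrated with a plain Metropolis sampler over two parameters; the Gaussian pulse below is an invented stand-in for the two-pole caustic / outer gap light-curve computation, and the priors are simple hard bounds.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def model(phase, alpha, zeta):
        """Stand-in pulse profile; a real run evaluates the gap geometry."""
        return np.exp(-0.5 * ((phase - alpha) / zeta) ** 2)

    phase = np.linspace(0.0, 1.0, 50)
    y = model(phase, 0.4, 0.1) + rng.normal(0.0, 0.05, phase.size)

    def log_post(p):
        a, z = p
        if not (0.0 < a < 1.0 and 0.01 < z < 0.5):
            return -np.inf  # flat prior with hard bounds
        return -0.5 * np.sum((y - model(phase, a, z)) ** 2) / 0.05 ** 2

    # Plain Metropolis sampler over the two parameters
    p = np.array([0.5, 0.2])
    lp = log_post(p)
    chain = []
    for _ in range(5000):
        q = p + rng.normal(0.0, 0.02, 2)
        lq = log_post(q)
        if np.log(rng.uniform()) < lq - lp:
            p, lp = q, lq
        chain.append(p.copy())
    print("posterior mean:", np.mean(chain[1000:], axis=0))
    ```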

  9. An independent software system for the analysis of dynamic MR images.

    PubMed

    Torheim, G; Lombardi, M; Rinck, P A

    1997-01-01

    A computer system for the manual, semi-automatic, and automatic analysis of dynamic MR images was to be developed on UNIX and personal computer platforms. The system was to offer an integrated and standardized way of performing both image processing and analysis that was independent of the MR unit used. The system consists of modules that are easily adaptable to special needs. Data from MR units or other diagnostic imaging equipment in techniques such as CT, ultrasonography, or nuclear medicine can be processed through the ACR-NEMA/DICOM standard file formats. A full set of functions is available, among them cine-loop visual analysis and generation of time-intensity curves. Parameters such as cross-correlation coefficients, area under the curve, peak/maximum intensity, wash-in and wash-out slopes, time to peak, and relative signal intensity/contrast enhancement can be calculated. Other parameters can be extracted by fitting functions like the gamma-variate function. Region-of-interest data and parametric values can easily be exported. The system has been successfully tested in animal and patient examinations.

  10. Analysis test of understanding of vectors with the three-parameter logistic model of item response theory and item response curves technique

    NASA Astrophysics Data System (ADS)

    Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan

    2016-12-01

    This study investigated the multiple-choice test of understanding of vectors (TUV) by applying item response theory (IRT). The difficulty, discrimination, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the parscale program. The TUV ability is an ability parameter, here estimated assuming unidimensionality and local independence. Moreover, all distractors of the TUV were analyzed from item response curves (IRC), which represent a simplified form of IRT. Data were gathered on 2392 science and engineering freshmen from three universities in Thailand. The results revealed IRT analysis to be useful in assessing the test, since its item parameters are independent of the ability parameters. The IRT framework reveals item-level information and indicates appropriate ability ranges for the test. Moreover, the IRC analysis can be used to assess the effectiveness of the test's distractors. Both IRT and IRC approaches reveal test characteristics beyond those revealed by classical test analysis methods. Test developers can apply these methods to diagnose and evaluate the features of items at various ability levels of test takers.

  11. Robust Spatial Approximation of Laser Scanner Point Clouds by Means of Free-form Curve Approaches in Deformation Analysis

    NASA Astrophysics Data System (ADS)

    Bureick, Johannes; Alkhatib, Hamza; Neumann, Ingo

    2016-03-01

    In many geodetic engineering applications it is necessary to describe a point cloud, measured e.g. by laser scanner, by means of free-form curves or surfaces, e.g. with B-splines as basis functions. State-of-the-art approaches to determining B-splines yield results that are seriously affected by the occurrence of data gaps and outliers. Optimal and robust B-spline fitting depends, however, on optimal selection of the knot vector. Hence, in our approach we combine Monte Carlo methods with the location and curvature of the measured data in order to determine the knot vector of the B-spline in such a way that no oscillating effects occur at the edges of data gaps. We introduce an optimized approach based on weights computed by means of resampling techniques. In order to minimize the effect of outliers, we apply robust M-estimators for the estimation of the control points. The above-mentioned approach is applied to a multi-sensor system based on kinematic terrestrial laser scanning in the field of rail track inspection.

  12. COBRA ATD multispectral camera response model

    NASA Astrophysics Data System (ADS)

    Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.

    2000-08-01

    A new multispectral camera response model has been developed in support of the US Marine Corps (USMC) Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) Program. This analytical model accurately estimates the response of five Xybion intensified IMC 201 multispectral cameras used for COBRA ATD airborne minefield detection. The camera model design is based on a series of camera response curves generated through optical laboratory tests performed by the Naval Surface Warfare Center, Dahlgren Division, Coastal Systems Station (CSS). Data-fitting techniques were applied to these measured response curves to obtain nonlinear expressions that estimate digitized camera output as a function of irradiance, intensifier gain, and exposure. This COBRA camera response model proved to be very accurate, stable over a wide range of parameters, analytically invertible, and relatively simple. The practical camera model was subsequently incorporated into the COBRA sensor performance evaluation and research analysis modeling toolbox in order to enhance COBRA modeling and simulation capabilities. Details of the camera model design and comparisons of modeled response to measured experimental data are presented.

  13. Interpolation and Polynomial Curve Fitting

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2014-01-01

    Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…

  14. Bayesian Analysis of Longitudinal Data Using Growth Curve Models

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamagami, Fumiaki; Wang, Lijuan; Nesselroade, John R.; Grimm, Kevin J.

    2007-01-01

    Bayesian methods for analyzing longitudinal data in social and behavioral research are recommended for their ability to incorporate prior information in estimating simple and complex models. We first summarize the basics of Bayesian methods before presenting an empirical example in which we fit a latent basis growth curve model to achievement data…

  15. Item Response Theory with Estimation of the Latent Density Using Davidian Curves

    ERIC Educational Resources Information Center

    Woods, Carol M.; Lin, Nan

    2009-01-01

    Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…

  16. Catmull-Rom Curve Fitting and Interpolation Equations

    ERIC Educational Resources Information Center

    Jerome, Lawrence

    2010-01-01

    Computer graphics and animation experts have been using the Catmull-Rom smooth curve interpolation equations since 1974, but the vector and matrix equations can be derived and simplified using basic algebra, resulting in a simple set of linear equations with constant coefficients. A variety of uses of Catmull-Rom interpolation are demonstrated,…

  17. Educating about Sustainability while Enhancing Calculus

    ERIC Educational Resources Information Center

    Pfaff, Thomas J.

    2011-01-01

    We give an overview of why it is important to include sustainability in mathematics classes and provide specific examples of how to do this for a calculus class. We illustrate that when students use "Excel" to fit curves to real data, fundamentally important questions about sustainability become calculus questions about those curves. (Contains 5…

  18. On the mass of the compact object in the black hole binary A0620-00

    NASA Technical Reports Server (NTRS)

    Haswell, Carole A.; Robinson, Edward L.; Horne, Keith; Stiening, Rae F.; Abbott, Timothy M. C.

    1993-01-01

    Multicolor orbital light curves of the black hole candidate binary A0620-00 are presented. The light curves exhibit ellipsoidal variations and a grazing eclipse of the mass-donor companion star by the accretion disk. Synthetic light curves were generated using realistic mass-donor star fluxes and an isothermal blackbody disk. For mass ratios of q = M1/M2 = 5.0, 10.6, and 15.0, systematic searches were executed in parameter space for synthetic light curves that fit the observations. For each mass ratio, acceptable fits were found only for a small range of orbital inclinations. It is argued that the mass ratio is unlikely to exceed q = 10.6, and an upper limit of 0.8 solar masses is placed on the mass of the companion star. These constraints imply M1 = 4.16 ± 0.1 to 5.55 ± 0.15 solar masses. The lower limit on M1 is more than 4σ above the mass of a maximally rotating neutron star, and constitutes further strong evidence in favor of a black hole primary in this system.

  19. Comparison of three methods for wind turbine capacity factor estimation.

    PubMed

    Ditkovich, Y; Kuperman, A

    2014-01-01

    Three approaches to calculating the capacity factor of fixed-speed wind turbines are reviewed and compared using a case study. The first, "quasi-exact" approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. The second, "analytic" approach employs a continuous probability distribution function fitted to the wind data, as well as a continuous turbine power curve resulting from double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while an approximation, can be solved analytically, thus providing valuable insight into the factors affecting the capacity factor. Moreover, several other merits of wind turbine performance may be derived from the analytical approach. The third, "approximate" approach, valid for Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained by the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation.
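
    The "quasi-exact" approach reduces to a weighted sum of the discrete power curve over the wind-speed histogram. The sketch below uses an invented cubic power curve and Rayleigh bin probabilities as the stand-in histogram.

    ```python
    import numpy as np

    # Discrete manufacturer-style power curve (kW) on integer wind speeds
    # (m/s); cut-in 3, rated 12 -- all values are illustrative
    v = np.arange(0, 26)
    rated = 2000.0
    P = np.clip(rated * ((v - 3) / (12 - 3)) ** 3, 0.0, rated)

    # Wind-speed "histogram": Rayleigh bin probabilities for a 7 m/s mean,
    # standing in for binned raw measurements
    mean_v = 7.0
    sigma = mean_v / np.sqrt(np.pi / 2.0)
    pdf = (v / sigma ** 2) * np.exp(-(v ** 2) / (2.0 * sigma ** 2))
    pdf /= pdf.sum()

    # Capacity factor: expected power over rated power
    cf = np.sum(pdf * P) / rated
    print(f"capacity factor ~ {cf:.2f}")
    ```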
