NASA Technical Reports Server (NTRS)
Cooper, D. B.; Yalabik, N.
1975-01-01
Approximation of noisy data in the plane by straight lines or by elliptic or single-branch hyperbolic curve segments arises in pattern recognition, data compaction, and other problems. The efficient search for and approximation of data by such curves were examined. Recursive least-squares linear curve fitting was used, and ellipses and hyperbolas were parameterized as quadratic functions in x and y. The error minimized by the algorithm is interpreted, and central processing unit (CPU) times for estimating parameters for fitting straight lines and quadratic curves were determined and compared. CPU time for data search was also determined for the case of straight-line fitting. Quadratic curve fitting is shown to require about six times as much CPU time as straight-line fitting, and curves relating CPU time and fitting error were determined for straight-line fitting. Results are derived on early sequential determination of whether or not the underlying curve is a straight line.
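The recursive least-squares update at the heart of such sequential line fitting can be sketched as follows (a minimal pure-Python illustration, not the authors' implementation; the sample points and the initial covariance p0 are arbitrary choices):

```python
def rls_line_fit(points, p0=1e6):
    # Recursive least-squares fit of y = m*x + b, one point at a time.
    # State: parameter vector theta = [m, b] and 2x2 covariance matrix P.
    theta = [0.0, 0.0]
    P = [[p0, 0.0], [0.0, p0]]
    for x, y in points:
        phi = [x, 1.0]                      # regressor for this sample
        Pphi = [P[0][0]*phi[0] + P[0][1]*phi[1],
                P[1][0]*phi[0] + P[1][1]*phi[1]]
        denom = 1.0 + phi[0]*Pphi[0] + phi[1]*Pphi[1]
        K = [Pphi[0]/denom, Pphi[1]/denom]  # gain vector
        err = y - (phi[0]*theta[0] + phi[1]*theta[1])
        theta = [theta[0] + K[0]*err, theta[1] + K[1]*err]
        # covariance downdate: P <- P - K * (phi^T P)
        P = [[P[0][0] - K[0]*Pphi[0], P[0][1] - K[0]*Pphi[1]],
             [P[1][0] - K[1]*Pphi[0], P[1][1] - K[1]*Pphi[1]]]
    return theta  # [slope, intercept]

# example: points on y = 2x + 1 (hypothetical data)
pts = [(float(i), 2.0*i + 1.0) for i in range(10)]
slope, intercept = rls_line_fit(pts)
```

Each new point costs a constant amount of work, which is what makes sequential "is this still a straight line?" decisions cheap.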
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for curve fitting data to an exponential equation, y = A·e^(B·t) + C, has been developed. This improvement is in four areas: speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship between exponential curves and the Mean Value Theorem for Derivatives, and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = A·x^B + C and the general geometric growth equation y = A·k^(B·t) + C.
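One way to see how an iteration-free fit of y = A·e^(B·t) + C can work (a sketch in the same spirit as the discrete-calculus approach, not the paper's exact derivation) is to note that for uniformly spaced samples y_{k+1} = a·y_k + b, with a = e^(B·Δt) and b = C·(1 − a), so two ordinary linear fits recover all three parameters:

```python
import math

def linfit(xs, ys):
    # ordinary least-squares slope and intercept
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x*x for x in xs)
    sxy = sum(x*y for x, y in zip(xs, ys))
    m = (n*sxy - sx*sy) / (n*sxx - sx*sx)
    return m, (sy - m*sx) / n

def exp_fit(t, y):
    # Fit y = A*exp(B*t) + C without iteration (uniform spacing assumed,
    # and a != 1, i.e. B != 0).
    dt = t[1] - t[0]
    a, b = linfit(y[:-1], y[1:])        # y[k+1] = a*y[k] + b
    B = math.log(a) / dt
    C = b / (1.0 - a)
    e = [math.exp(B*tk) for tk in t]
    A = sum((yk - C)*ek for yk, ek in zip(y, e)) / sum(ek*ek for ek in e)
    return A, B, C

# example: samples from y = 3*exp(0.5*t) + 2 (synthetic)
ts = [0.1*k for k in range(50)]
ys = [3.0*math.exp(0.5*tk) + 2.0 for tk in ts]
A, B, C = exp_fit(ts, ys)
```

On noise-free data this recovers the parameters exactly; with noise, the successive-sample regression still avoids the divergence risks of iterative solvers.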
Vajuvalli, Nithin N; Nayak, Krupa N; Geethanath, Sairam
2014-01-01
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is widely used in the diagnosis of cancer and is also a promising tool for monitoring tumor response to treatment. The Tofts model has become a standard for the analysis of DCE-MRI. The process of curve fitting employed in the Tofts equation to obtain the pharmacokinetic (PK) parameters is time-consuming for high resolution scans. Current work demonstrates a frequency-domain approach applied to the standard Tofts equation to speed-up the process of curve-fitting in order to obtain the pharmacokinetic parameters. The results obtained show that using the frequency domain approach, the process of curve fitting is computationally more efficient compared to the time-domain approach.
NASA Astrophysics Data System (ADS)
Ramasahayam, Veda Krishna Vyas; Diwakar, Anant; Bodi, Kowsik
2017-11-01
To study the flow of high temperature air in vibrational and chemical equilibrium, accurate models for thermodynamic state and transport phenomena are required. In the present work, the performance of a state equation model and two mixing rules for determining equilibrium air thermodynamic and transport properties is compared with that of curve fits. The thermodynamic state model considers 11 species and computes the flow chemistry by an iterative process; the mixing rules considered for viscosity are those of Wilke and Armaly-Sutton. The curve fits of Srinivasan, which are based on Grabau-type transition functions, are chosen for comparison. A two-dimensional Navier-Stokes solver is developed to simulate high-enthalpy flows with numerical fluxes computed by AUSM+-up. The accuracy of the state equation model and curve fits for thermodynamic properties is determined using hypersonic inviscid flow over a circular cylinder. The performance of the mixing rules and curve fits for viscosity is compared using hypersonic laminar boundary layer prediction on a flat plate. It is observed that steady-state solutions from the state equation model and curve fits match each other. Though the curve fits are significantly faster, the state equation model is more general and can be adapted to any flow composition.
Sánchez-Jiménez, Pedro E; Pérez-Maqueda, Luis A; Perejón, Antonio; Criado, José M
2013-02-05
This paper provides some clarifications regarding the use of model-fitting methods of kinetic analysis for estimating the activation energy of a process, in response to some results recently published in Chemistry Central Journal. The model-fitting methods of Arrhenius and Šatava are used to determine the activation energy of a single simulated curve. It is shown that most kinetic models correctly fit the data, each providing a different value for the activation energy; therefore it is not really possible to determine the correct activation energy from a single non-isothermal curve. On the other hand, when a set of curves recorded under different heating schedules is used, the correct kinetic parameters can be clearly discerned. In short, the activation energy and the kinetic model cannot be unambiguously determined from a single experimental curve recorded under non-isothermal conditions, and the use of a set of curves recorded under different heating schedules is mandatory if model-fitting methods are employed.
On the convexity of ROC curves estimated from radiological test results.
Pesce, Lorenzo L; Metz, Charles E; Berbaum, Kevin S
2010-08-01
Although an ideal observer's receiver operating characteristic (ROC) curve must be convex (i.e., its slope must decrease monotonically), published fits to empirical data often display "hooks." Such fits sometimes are accepted on the basis of an argument that experiments are done with real, rather than ideal, observers. However, the fact that ideal observers must produce convex curves does not imply that convex curves describe only ideal observers. This article aims to identify the practical implications of nonconvex ROC curves and the conditions that can lead to empirical or fitted ROC curves that are not convex. This article views nonconvex ROC curves from historical, theoretical, and statistical perspectives, which we describe briefly. We then consider population ROC curves with various shapes and analyze the types of medical decisions that they imply. Finally, we describe how sampling variability and curve-fitting algorithms can produce ROC curve estimates that include hooks. We show that hooks in population ROC curves imply the use of an irrational decision strategy, even when the curve does not cross the chance line, and therefore usually are untenable in medical settings. Moreover, we sketch a simple approach to improve any nonconvex ROC curve by adding statistical variation to the decision process. Finally, we sketch how to test whether hooks present in ROC data are likely to have been caused by chance alone and how some hooked ROCs found in the literature can be easily explained as fitting artifacts or modeling issues. In general, ROC curve fits that show hooks should be looked on with suspicion unless other arguments justify their presence.
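The convexity condition is easy to state operationally: along an ROC curve sorted by false-positive fraction, the segment slopes (the implied likelihood ratios) must be non-increasing, and any increase is a "hook." A small illustrative check, not the authors' software:

```python
def has_hook(roc_points, tol=1e-12):
    # roc_points: (FPF, TPF) pairs, including (0, 0) and (1, 1).
    pts = sorted(roc_points)
    slopes = []
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x1 == x0:
            slopes.append(float('inf'))   # vertical segment
        else:
            slopes.append((y1 - y0) / (x1 - x0))
    # a proper (convex) ROC has non-increasing slopes; a hook violates this
    return any(s1 > s0 + tol for s0, s1 in zip(slopes, slopes[1:]))
```

For example, the curve through (0.5, 0.4) has slopes 0.8 then 1.2, so it hooks even though it stays above the chance line only at one end.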
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-09-01
The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower-residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectra of five different concentrations of CuSO4·5H2O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. A calibration curve between the intensity and concentration of Cu was then established. The error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and can be applied to the decomposition and correction of overlapping peaks in LIBS spectra.
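The residual-feedback idea can be sketched for the simplest case, in which the peak centres and widths are taken as known (say, from a line list) and only the amplitudes of the overlapping Gaussians are refined: each pass subtracts the other peaks' current fit, feeds the residual back to one peak, and refits it. The wavelengths and widths below are hypothetical, not the paper's values:

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5*((x - mu)/sigma)**2)

def model(x, amps, peaks):
    return sum(a*gauss(x, mu, s) for a, (mu, s) in zip(amps, peaks))

def decompose(xs, ys, peaks, n_pass=30):
    # Alternating refit with residual feedback: each peak is refitted
    # against the data minus the current estimate of all other peaks.
    amps = [0.0]*len(peaks)
    for _ in range(n_pass):
        for i, (mu, s) in enumerate(peaks):
            resid = [y - (model(x, amps, peaks) - amps[i]*gauss(x, mu, s))
                     for x, y in zip(xs, ys)]
            g = [gauss(x, mu, s) for x in xs]
            amps[i] = sum(r*gi for r, gi in zip(resid, g)) / sum(gi*gi for gi in g)
    return amps

# example: two overlapping peaks with hypothetical centres/widths (nm)
xs = [321.0 + 0.02*k for k in range(301)]
peaks = [(324.7, 0.3), (325.2, 0.3)]
ys = [5.0*gauss(x, 324.7, 0.3) + 3.0*gauss(x, 325.2, 0.3) for x in xs]
a1, a2 = decompose(xs, ys, peaks)
```

The paper additionally refits the peak shapes themselves; the loop above only shows how repeated residual feedback drives the decomposition toward the joint least-squares solution.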
NASA Astrophysics Data System (ADS)
Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke
2008-08-01
A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into major-interference region and minor-interference region. Different approximating functions are then constructed for two kinds of regions respectively. For the major interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by curve-fitting method. For the minor interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.
Modeling two strains of disease via aggregate-level infectivity curves.
Romanescu, Razvan; Deardon, Rob
2016-04-01
Well formulated models of disease spread, and efficient methods to fit them to observed data, are powerful tools for aiding the surveillance and control of infectious diseases. Our project considers the problem of the simultaneous spread of two related strains of disease in a context where spatial location is the key driver of disease spread. We start our modeling work with the individual level models (ILMs) of disease transmission, and extend these models to accommodate the competing spread of the pathogens in a two-tier hierarchical population (whose levels we refer to as 'farm' and 'animal'). The postulated interference mechanism between the two strains is a period of cross-immunity following infection. We also present a framework for speeding up the computationally intensive process of fitting the ILM to data, typically done using Markov chain Monte Carlo (MCMC) in a Bayesian framework, by turning the inference into a two-stage process. First, we approximate the number of animals infected on a farm over time by infectivity curves. These curves are fit to data sampled from farms using maximum likelihood estimation; then, conditional on the fitted curves, Bayesian MCMC inference proceeds for the remaining parameters. Finally, we use posterior predictive distributions of salient epidemic summary statistics to assess the fitted model.
AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves
NASA Astrophysics Data System (ADS)
Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.
2017-02-01
ImageJ is a graphical user interface (GUI) driven, public-domain, Java-based software package for general image processing, traditionally used mainly in the life sciences. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy-specific image display environment and tools for astronomy-specific image calibration and data reduction. Although AIJ maintains the general-purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research-grade image calibration and analysis tools with a GUI-driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate students, high school students, or amateur astronomers, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
Zhou, Wu
2014-01-01
The accurate contour delineation of the target and/or organs at risk (OAR) is essential in treatment planning for image-guided radiation therapy (IGRT). Although many automatic contour delineation approaches have been proposed, few of them fulfill the needs of clinical applications in terms of accuracy and efficiency. Moreover, clinicians would like to analyze the characteristics of regions of interest (ROI) and adjust contours manually during IGRT, so an interactive tool for contour delineation is necessary in such cases. In this work, a novel approach of curve fitting for interactive contour delineation is proposed. It allows users to quickly improve contours with a simple mouse click. Initially, a region containing the object of interest is selected in the image; the program then automatically selects important control points from the region boundary, and the method of Hermite cubic curves is used to fit the control points. The optimized curve can then be revised by moving its control points interactively. Several curve fitting methods are presented for comparison. Finally, in order to improve the accuracy of contour delineation, a curve refinement process based on the maximum gradient magnitude is proposed: all the points on the curve are revised automatically towards the positions of maximum gradient magnitude. Experimental results show that Hermite cubic curves and the gradient-based curve refinement possess superior performance on the proposed platform in terms of accuracy, robustness, and computation time. Experimental results on real medical images demonstrate the efficiency, accuracy, and robustness of the proposed process in clinical applications.
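A Hermite cubic fit through selected control points can be sketched as follows; the tangents here are taken as Catmull-Rom-style central differences, which is one common choice rather than necessarily the paper's:

```python
def hermite(p0, p1, m0, m1, t):
    # cubic Hermite basis functions on t in [0, 1]
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return tuple(h00*a + h01*b + h10*c + h11*d
                 for a, b, c, d in zip(p0, p1, m0, m1))

def fit_curve(ctrl, samples=8):
    # tangents from central differences (one-sided at the ends)
    n = len(ctrl)
    tans = []
    for i in range(n):
        a = ctrl[max(i - 1, 0)]
        b = ctrl[min(i + 1, n - 1)]
        tans.append(tuple(0.5*(bb - aa) for aa, bb in zip(a, b)))
    pts = []
    for i in range(n - 1):
        for k in range(samples):
            pts.append(hermite(ctrl[i], ctrl[i+1], tans[i], tans[i+1],
                               k/samples))
    pts.append(ctrl[-1])
    return pts

# example: four hypothetical control points on a contour
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (4.0, 0.0)]
curve = fit_curve(ctrl, samples=8)
```

Because the curve passes exactly through every control point, dragging a control point moves the contour locally, which is what makes the interaction predictable for clinicians.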
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1987-01-01
New, improved curve fits for the thermodynamic properties of equilibrium air have been developed. The curve fits are for pressure, speed of sound, temperature, entropy, enthalpy, density, and internal energy. These curve fits can be readily incorporated into new or existing computational fluid dynamics codes if real gas effects are desired. The curve fits are constructed from Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits. These improvements are due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25 000 K and densities from 10^-7 to 10^3 amagats.
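A Grabau-type transition function joins two polynomial branches smoothly through an exponential switch, so the piecewise surfaces remain continuous. A one-dimensional illustration of the idea (toy polynomials and constants, not the report's actual coefficients or variables):

```python
import math

def grabau_blend(x, poly_a, poly_b, x0, k):
    # Smoothly transition from branch A (x << x0) to branch B (x >> x0).
    # poly_a, poly_b: polynomial coefficients in ascending order.
    pa = sum(c*x**i for i, c in enumerate(poly_a))
    pb = sum(c*x**i for i, c in enumerate(poly_b))
    w = 1.0 / (1.0 + math.exp(-k*(x - x0)))   # weight: 0 -> 1 across x0
    return pa + (pb - pa)*w

# example: branch A(x) = 1 + 0.5x, branch B(x) = 2x, transition near x0 = 5
lo = grabau_blend(0.0, [1.0, 0.5], [0.0, 2.0], 5.0, 4.0)
hi = grabau_blend(10.0, [1.0, 0.5], [0.0, 2.0], 5.0, 4.0)
```

Far from the transition point each branch dominates, while near x0 the blend is infinitely differentiable, avoiding the slope discontinuities of hard piecewise fits.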
The Biasing Effects of Unmodeled ARMA Time Series Processes on Latent Growth Curve Model Estimates
ERIC Educational Resources Information Center
Sivo, Stephen; Fan, Xitao; Witta, Lea
2005-01-01
The purpose of this study was to evaluate the robustness of estimated growth curve models when there is stationary autocorrelation among manifest variable errors. The results suggest that when, in practice, growth curve models are fitted to longitudinal data, alternative rival hypotheses to consider would include growth models that also specify…
NASA Astrophysics Data System (ADS)
McCraig, Michael A.; Osinski, Gordon R.; Cloutis, Edward A.; Flemming, Roberta L.; Izawa, Matthew R. M.; Reddy, Vishnu; Fieber-Beyer, Sherry K.; Pompilio, Loredana; van der Meer, Freek; Berger, Jeffrey A.; Bramble, Michael S.; Applin, Daniel M.
2017-03-01
Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical make up of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to working with spectra may find inadequate help or documentation in the scientific literature or in the software packages available for curve fitting. This problem also extends to the parameterization of spectra and the dissemination of derived metrics. Often, when derived metrics are reported, such as band centres, the discussion of exactly how the metrics were derived, or if there was any systematic curve fitting performed, is not included. Herein we provide both recommendations and methods for curve fitting and explanations of the terms and methods used. Techniques to curve fit spectral data of various types are demonstrated using simple-to-understand mathematics and equations written to be used in Microsoft Excel® software, free of macros, in a cut-and-paste fashion that allows one to curve fit spectra in a reasonably user-friendly manner. The procedures use empirical curve fitting, include visualizations, and ameliorate many of the unknowns one may encounter when using black-box commercial software. The provided framework is a comprehensive record of the curve fitting parameters used and the derived metrics, and is intended to be an example of a format for dissemination when curve fitting data.
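One of the simplest systematic ways to derive a band centre, of the kind the authors advocate documenting explicitly, is a three-point parabola fit through the logarithm of the samples bracketing the extremum; it is exact for a Gaussian band and easy to reproduce in a spreadsheet. A hedged sketch with synthetic data:

```python
import math

def band_centre(x, y):
    # x: equally spaced wavelengths; y: positive band intensities.
    # Fit a parabola to ln(y) at the sample maximum and its two
    # neighbours, and return the vertex position.
    i = max(range(1, len(y) - 1), key=lambda j: y[j])
    h = x[1] - x[0]
    lm, l0, lp = math.log(y[i-1]), math.log(y[i]), math.log(y[i+1])
    return x[i] + 0.5*h*(lm - lp)/(lm - 2*l0 + lp)

# example: Gaussian band with a centre (2.9371 um, hypothetical) that
# falls between grid samples
xw = [2.0 + 0.01*k for k in range(200)]
yb = [math.exp(-(v - 2.9371)**2 / (2*0.05**2)) for v in xw]
centre = band_centre(xw, yb)
```

Reporting the estimator itself, as above, makes the derived band centre reproducible, which is the paper's central recommendation.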
Nonlinear Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Everhart, Joel L.; Badavi, Forooz F.
1989-01-01
Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, interactive curve-fitting routine based on description of quadratic expansion of X(sup 2) statistic. Utilizes nonlinear optimization algorithm calculating best statistically weighted values of parameters of fitting function and X(sup 2) minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.
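The quadratic-expansion idea behind such routines can be sketched as a damped Gauss-Newton loop on the weighted chi-square (a generic Python illustration rather than the original FORTRAN 77; the two-parameter model and data are made up):

```python
import math

def gauss_newton(f, xs, ys, ws, p, iters=50):
    # Minimise chi2 = sum w*(y - f(x, p))**2 for a 2-parameter model f.
    def chi2(q):
        return sum(w*(y - f(x, q))**2 for x, y, w in zip(xs, ys, ws))
    eps = 1e-7
    for _ in range(iters):
        r = [y - f(x, p) for x, y in zip(xs, ys)]
        J = []                               # numerical Jacobian of f
        for x in xs:
            row = []
            for j in range(2):
                q = list(p); q[j] += eps
                row.append((f(x, q) - f(x, p)) / eps)
            J.append(row)
        # normal equations (J^T W J) dp = J^T W r, solved in closed form
        a00 = sum(w*row[0]*row[0] for w, row in zip(ws, J))
        a01 = sum(w*row[0]*row[1] for w, row in zip(ws, J))
        a11 = sum(w*row[1]*row[1] for w, row in zip(ws, J))
        b0 = sum(w*row[0]*ri for w, row, ri in zip(ws, J, r))
        b1 = sum(w*row[1]*ri for w, row, ri in zip(ws, J, r))
        det = a00*a11 - a01*a01
        dp = [(a11*b0 - a01*b1)/det, (a00*b1 - a01*b0)/det]
        step = 1.0                           # damp if chi2 would increase
        while chi2([p[0] + step*dp[0], p[1] + step*dp[1]]) > chi2(p) \
                and step > 1e-6:
            step *= 0.5
        p = [p[0] + step*dp[0], p[1] + step*dp[1]]
    return p

# example: fit y = A*exp(B*x) to synthetic data generated with A=2, B=0.3
xs = [0.25*k for k in range(17)]
ys = [2.0*math.exp(0.3*x) for x in xs]
A_fit, B_fit = gauss_newton(lambda x, q: q[0]*math.exp(q[1]*x),
                            xs, ys, [1.0]*len(xs), [1.0, 0.1])
```

The inverse of the normal-equation matrix at the solution also yields the parameter covariance, which is where the goodness-of-fit statistics in such programs come from.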
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1986-01-01
New improved curve fits for the thermodynamic properties of equilibrium air were developed. The curve fits are for p = p(e,rho), a = a(e,rho), T = T(e,rho), s = s(e,rho), T = T(p,rho), h = h(p,rho), rho = rho(p,s), e = e(p,s) and a = a(p,s). These curve fits can be readily incorporated into new or existing Computational Fluid Dynamics (CFD) codes if real-gas effects are desired. The curve fits were constructed using Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits appearing in NASA CR-2470. These improvements were due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10^-7 to 100 amagats (rho/rho_0).
Interpolation and Polynomial Curve Fitting
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2014-01-01
Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…
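The n + 1 point statement translates directly into Lagrange's interpolation formula, which evaluates the unique degree-n polynomial through n + 1 points without ever solving for its coefficients. A short sketch with made-up points:

```python
def lagrange(points, x):
    # Evaluate the unique degree-n polynomial through the n+1 given
    # points at x, using the Lagrange basis.
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# example: four points sampled from the cubic f(x) = x^3 - 2x + 1
pts = [(0.0, 1.0), (1.0, 0.0), (2.0, 5.0), (3.0, 22.0)]
mid = lagrange(pts, 1.5)     # should reproduce f(1.5)
```

Since the four points come from a cubic, the interpolant is that cubic, which is exactly the uniqueness claim in the passage.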
NASA Astrophysics Data System (ADS)
Katz, Harley; Lelli, Federico; McGaugh, Stacy S.; Di Cintio, Arianna; Brook, Chris B.; Schombert, James M.
2017-04-01
Cosmological N-body simulations predict dark matter (DM) haloes with steep central cusps (e.g. NFW). This contradicts observations of gas kinematics in low-mass galaxies that imply the existence of shallow DM cores. Baryonic processes such as adiabatic contraction and gas outflows can, in principle, alter the initial DM density profile, yet their relative contributions to the halo transformation remain uncertain. Recent high-resolution, cosmological hydrodynamic simulations by Di Cintio et al. (DC14) predict that inner density profiles depend systematically on the ratio of stellar-to-DM mass (M*/Mhalo). Using a Markov Chain Monte Carlo approach, we test the NFW and the M*/Mhalo-dependent DC14 halo models against a sample of 147 galaxy rotation curves from the new Spitzer Photometry and Accurate Rotation Curves data set. These galaxies all have extended H I rotation curves from radio interferometry as well as accurate stellar-mass-density profiles from near-infrared photometry. The DC14 halo profile provides markedly better fits to the data compared to the NFW profile. Unlike NFW, the DC14 halo parameters found in our rotation-curve fits naturally fall within two standard deviations of the mass-concentration relation predicted by Λ cold dark matter (ΛCDM) and the stellar mass-halo mass relation inferred from abundance matching with few outliers. Halo profiles modified by baryonic processes are therefore more consistent with expectations from ΛCDM cosmology and provide better fits to galaxy rotation curves across a wide range of galaxy properties than do halo models that neglect baryonic physics. Our results offer a solution to the decade long cusp-core discrepancy.
NASA Astrophysics Data System (ADS)
Graur, Or; Zurek, David R.; Rest, Armin; Seitenzahl, Ivo R.; Shappee, Benjamin J.; Fisher, Robert; Guillochon, James; Shara, Michael M.; Riess, Adam G.
2018-06-01
The late-time light curves of Type Ia supernovae (SNe Ia), observed >900 days after explosion, present the possibility of a new diagnostic for SN Ia progenitor and explosion models. First, however, we must discover what physical process (or processes) leads to the slow-down of the light curve relative to a pure 56Co decay, as observed in SNe 2011fe, 2012cg, and 2014J. We present Hubble Space Telescope observations of SN 2015F, taken ≈600–1040 days past maximum light. Unlike those of the three other SNe Ia, the light curve of SN 2015F remains consistent with being powered solely by the radioactive decay of 56Co. We fit the light curves of these four SNe Ia in a consistent manner and measure possible correlations between the light-curve stretch—a proxy for the intrinsic luminosity of the SN—and the parameters of the physical model used in the fit. We propose a new, late-time Phillips-like correlation between the stretch of the SNe and the shape of their late-time light curves, which we parameterize as the difference between their pseudo-bolometric luminosities at 600 and 900 days: ΔL 900 = log(L 600/L 900). Our analysis is based on only four SNe, so a larger sample is required to test the validity of this correlation. If true, this model-independent correlation provides a new way to test which physical process lies behind the slow-down of SN Ia light curves >900 days after explosion, and, ultimately, fresh constraints on the various SN Ia progenitor and explosion models.
Fitting Richards' curve to data of diverse origins
Johnson, D.H.; Sargeant, A.B.; Allen, S.H.
1975-01-01
Published techniques for fitting data to nonlinear growth curves are briefly reviewed; most techniques require knowledge of the shape of the curve. A flexible growth curve developed by Richards (1959) is discussed as an alternative when the shape is unknown. The shape of this curve is governed by a specific parameter which can be estimated from the data. We describe in detail the fitting of a diverse set of longitudinal and cross-sectional data to Richards' growth curve for the purpose of determining the age of red fox (Vulpes vulpes) pups on the basis of right hind foot length. The fitted curve is found suitable for pups less than approximately 80 days old. The curve is extrapolated to prenatal growth and shown to be appropriate only for about 10 days prior to birth.
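Once a Richards curve has been fitted, aging an animal amounts to inverting the curve: solve the fitted function for age given a measured foot length. A sketch using the common four-parameter form y(t) = A / (1 + Q·e^(−k·t))^(1/ν), where ν is the shape parameter; the parameter values below are hypothetical, not the fox-pup fit:

```python
import math

def richards(t, A, Q, k, v):
    # Richards growth curve; v governs the shape (v = 1 gives a logistic).
    return A / (1.0 + Q*math.exp(-k*t))**(1.0/v)

def age_from_size(y, A, Q, k, v):
    # closed-form inverse, valid for 0 < y < A
    return -math.log(((A/y)**v - 1.0)/Q) / k

# example with hypothetical parameters (A in mm, t in days)
y40 = richards(40.0, 190.0, 5.0, 0.05, 0.7)
t_back = age_from_size(y40, 190.0, 5.0, 0.05, 0.7)
```

The closed-form inverse is what makes a fitted Richards curve practical as an aging tool in the field.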
Evaluation of the swelling behaviour of iota-carrageenan in monolithic matrix tablets.
Kelemen, András; Buchholcz, Gyula; Sovány, Tamás; Pintye-Hódi, Klára
2015-08-10
The swelling properties of monolithic matrix tablets containing iota-carrageenan were studied at different pH values, with measurements of the swelling force and characterization of the profile of the swelling curve. The swelling force meter was linked to a PC by an RS232 cable and the measured data were evaluated with self-developed software. The monitor displayed the swelling force vs. time curve with the important parameters, which could be fitted via an Analysis menu. In the case of the iota-carrageenan matrix tablets, it was concluded that the pH and the pressure did not influence the swelling process, and the first section of the swelling curve could be fitted by the Korsmeyer-Peppas equation.
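The Korsmeyer-Peppas fit mentioned here, F(t) = k·t^n, is usually performed as a straight-line fit after taking logarithms of both sides. A minimal sketch on synthetic data, not the paper's measurements:

```python
import math

def korsmeyer_peppas_fit(t, F):
    # Fit F = k * t^n by linear regression of ln(F) on ln(t);
    # requires t > 0 and F > 0.
    lx = [math.log(v) for v in t]
    ly = [math.log(v) for v in F]
    m = len(lx)
    sx, sy = sum(lx), sum(ly)
    sxx = sum(v*v for v in lx)
    sxy = sum(a*b for a, b in zip(lx, ly))
    n = (m*sxy - sx*sy) / (m*sxx - sx*sx)   # release exponent
    k = math.exp((sy - n*sx) / m)           # rate constant
    return k, n

# example: synthetic swelling data with k = 2.0, n = 0.45
ts = [float(v) for v in range(1, 11)]
Fs = [2.0*v**0.45 for v in ts]
k_est, n_est = korsmeyer_peppas_fit(ts, Fs)
```

In release modelling the exponent n is the diagnostic quantity (e.g. n near 0.45 suggests Fickian diffusion for cylindrical tablets), so it is worth fitting only the early section of the curve, as the authors do.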
A microcomputer program for analysis of nucleic acid hybridization data
Green, S.; Field, J.K.; Green, C.D.; Beynon, R.J.
1982-01-01
The study of nucleic acid hybridization is facilitated by computer mediated fitting of theoretical models to experimental data. This paper describes a non-linear curve fitting program, using the `Patternsearch' algorithm, written in BASIC for the Apple II microcomputer. The advantages and disadvantages of using a microcomputer for local data processing are discussed. Images PMID:7071017
Vaas, Lea A I; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter
2012-01-01
The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed '-omics' techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. Our results form the basis for a freely available R package for the analysis of PM data.
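The curve parameters such routines report (lag time, maximum slope, asymptote, area under the curve) can be extracted from a smoothed respiration curve by simple finite differences. A rough pure-Python analogue of the spline-based extraction, applied to a synthetic logistic curve (the parameter names and the zero baseline are assumptions):

```python
import math

def curve_params(t, y):
    # mu: maximum slope (central differences); A: plateau level;
    # lambda: lag time, where the tangent at the steepest point crosses
    # the (assumed zero) baseline; auc: trapezoidal area under the curve.
    slopes = [(y[i+1] - y[i-1]) / (t[i+1] - t[i-1])
              for i in range(1, len(t) - 1)]
    i = max(range(len(slopes)), key=lambda j: slopes[j]) + 1
    mu = slopes[i - 1]
    lam = t[i] - y[i]/mu
    auc = sum(0.5*(y[j] + y[j+1])*(t[j+1] - t[j])
              for j in range(len(t) - 1))
    return {'mu': mu, 'A': max(y), 'lambda': lam, 'auc': auc}

# example: logistic curve with A = 1, rate 0.5/h, midpoint 24 h
t = [0.1*j for j in range(481)]          # 0 .. 48 h
y = [1.0/(1.0 + math.exp(-0.5*(tj - 24.0))) for tj in t]
params = curve_params(t, y)
```

For this logistic the true values are mu = 0.125 and lambda = 20, so the finite-difference estimates can be checked directly; on noisy PM data one would smooth first (the spline's job) before differencing.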
Vaas, Lea A. I.; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter
2012-01-01
Background The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed ‘-omics’ techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. Methodology The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. Conclusions We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. 
Our results form the basis for a freely available R package for the analysis of PM data. PMID:22536335
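The spline approach described above, fitting a smoothing spline and then reading off curve parameters, can be sketched in Python; the R package in the abstract is not used here, and the logistic respiration curve, noise level, and smoothing factor below are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic respiration/growth curve: logistic shape plus noise (illustrative).
t = np.linspace(0, 48, 97)                        # hours
rng = np.random.default_rng(9)
y = 200.0 / (1.0 + np.exp(-(t - 20.0) / 4.0))     # plateau 200, midpoint 20 h
y_noisy = y + rng.normal(0, 2.0, t.size)

# Smoothing spline; s controls the residual budget (assumed value).
spline = UnivariateSpline(t, y_noisy, s=t.size * 4.0)

# Curve parameters, as in conventional growth-curve analysis:
slope_max = spline.derivative()(t).max()          # maximum slope (growth rate)
plateau = spline(t).max()                         # asymptote estimate
auc = spline.integral(t[0], t[-1])                # area under the curve
```

Confidence intervals, as in the paper, would come from repeating this over bootstrap resamples or experimental replicates.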
[Comparison among various software for LMS growth curve fitting methods].
Han, Lin; Wu, Wenhong; Wei, Qiuxia
2015-03-01
To explore how the LMS (skewness-median-coefficient of variation) growth curve fitting method can be implemented in different software packages, and to identify the best statistical tool for grass-roots child and adolescent health workers. Regular physical examination data of head circumference for normal infants aged 3, 6, 9 and 12 months in Baotou City were analyzed. The statistical packages SAS, R, STATA and SPSS were used to fit the LMS growth curves, and the results were evaluated for user convenience, learning effort, user interface, display of results, and software updating and maintenance. All packages produced the same fitting results, and each had its own advantages and disadvantages. Taking all evaluation criteria into account, R outperformed the other packages for LMS growth curve fitting and is the most suitable choice for grass-roots child and adolescent health workers.
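Whichever package produces the L, M, S coefficients, the z-score transformation they feed is the same. A minimal Python sketch (the L, M, S values below are illustrative, not fitted reference data):

```python
import numpy as np

def lms_zscore(x, L, M, S):
    """Convert a measurement x to a z-score via the LMS method.

    L: Box-Cox power (skewness), M: median, S: coefficient of variation.
    For L = 0 the Box-Cox transform reduces to the log form.
    """
    if abs(L) > 1e-8:
        return ((x / M) ** L - 1.0) / (L * S)
    return np.log(x / M) / S

# Example: head circumference 46 cm against hypothetical L, M, S for one age group
z = lms_zscore(46.0, L=0.5, M=45.0, S=0.03)
```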
The training and learning process of transseptal puncture using a modified technique.
Yao, Yan; Ding, Ligang; Chen, Wensheng; Guo, Jun; Bao, Jingru; Shi, Rui; Huang, Wen; Zhang, Shu; Wong, Tom
2013-12-01
As transseptal (TS) puncture has become an integral part of many types of cardiac interventional procedures, the technique, first reported for measurement of left atrial pressure in the 1950s, continues to evolve. Our laboratory adopted a modified technique that uses only the coronary sinus catheter as a landmark to accomplish TS puncture under fluoroscopy. The aim of this study was to prospectively evaluate the training and learning process for TS puncture guided by this modified technique. Following the training protocol, TS puncture was performed in 120 consecutive patients by three trainees without previous personal experience in TS catheterization, with one experienced trainer supervising. We analysed the following parameters: first-puncture success rate, total procedure time, fluoroscopic time, and radiation dose. The learning curve was analysed using curve-fitting methodology. The first attempt at TS crossing was successful in 74 patients (82%), a second attempt was successful in 11 (12%), and in 5 patients the interatrial septum could not be punctured. The average starting process time was 4.1 ± 0.8 min, and the estimated mean learning plateau was 1.2 ± 0.2 min. The estimated mean learning rate for process time was 25 ± 3 cases. Important aspects of the learning curve can be estimated by fitting inverse curves for TS puncture. The study demonstrated that this technique is a simple, safe, economical, and effective approach for learning TS puncture. Based on the statistical analysis, approximately 29 TS punctures are needed for a trainee to pass the steepest part of the learning curve.
Curve fitting methods for solar radiation data modeling
NASA Astrophysics Data System (ADS)
Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder
2014-10-01
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The fitting error was quantified using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
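The Gaussian-fit-plus-goodness-of-fit workflow can be sketched with `scipy.optimize.curve_fit`; a single-term Gaussian stands in for the paper's two-term fits, and the synthetic daily radiation profile below is an assumption, not UTP data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical hourly global solar radiation: bell-shaped daily profile plus noise.
rng = np.random.default_rng(0)
t = np.linspace(6, 18, 25)                         # daylight hours
true = 900.0 * np.exp(-((t - 12.0) / 2.5) ** 2)    # W/m^2
y = true + rng.normal(0, 20, t.size)

def gauss(t, a, b, c):
    return a * np.exp(-((t - b) / c) ** 2)

p, _ = curve_fit(gauss, t, y, p0=[800, 12, 3])

# Goodness-of-fit statistics, as in the paper: RMSE and R^2
resid = y - gauss(t, *p)
rmse = np.sqrt(np.mean(resid ** 2))
r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
```

A two-term fit would add a second `a*exp(...)` term with its own parameters; model selection then compares RMSE and R² across candidate forms.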
Maximum safe speed estimation using planar quintic Bezier curve with C2 continuity
NASA Astrophysics Data System (ADS)
Ibrahim, Mohamad Fakharuddin; Misro, Md Yushalify; Ramli, Ahmad; Ali, Jamaludin Md
2017-08-01
This paper describes an alternative way of estimating the design speed, the maximum speed at which a vehicle can safely drive on a road, using curvature information from Bezier curve fitting on a map. We tested the method on a route along Tun Sardon Road, Balik Pulau, Penang, Malaysia. We propose mapping the road with piecewise planar quintic Bezier curves that satisfy curvature continuity between joined segments. From the derivatives of the quintic Bezier curve, the curvature was calculated and the design speed derived. A higher-order Bezier curve was used because a higher-degree curve gives the user more freedom to control its shape than a lower-degree one.
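The curvature-to-speed step can be sketched as follows. The control points, the side-friction factor f = 0.15, and the simplified relation v = sqrt(f·g/κ) are all illustrative assumptions, not the paper's road data or design formula:

```python
import numpy as np
from math import comb

def bezier(ctrl, t):
    """Evaluate a Bezier curve with control points ctrl at parameter t."""
    n = len(ctrl) - 1
    return sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * ctrl[i]
               for i in range(n + 1))

def bezier_derivative(ctrl):
    """Control points of the derivative (hodograph) of a Bezier curve."""
    n = len(ctrl) - 1
    return [n * (ctrl[i + 1] - ctrl[i]) for i in range(n)]

# Hypothetical quintic control points (metres) tracing a gentle bend
P = [np.array(p, float)
     for p in [(0, 0), (40, 5), (80, 25), (120, 55), (160, 95), (200, 140)]]
d1 = bezier_derivative(P)
d2 = bezier_derivative(d1)

# Curvature kappa(t) = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)
kappa = []
for t in np.linspace(0, 1, 201):
    x1, y1 = bezier(d1, t)
    x2, y2 = bezier(d2, t)
    kappa.append(abs(x1 * y2 - y1 * x2) / (x1 ** 2 + y1 ** 2) ** 1.5)
kappa_max = max(kappa)

# Simplified design speed bound from the tightest curvature (f = 0.15 assumed)
v_max = np.sqrt(0.15 * 9.81 / kappa_max)   # m/s
```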
Calvi, Andrea; Ferrari, Alberto; Sbuelz, Luca; Goldoni, Andrea; Modesti, Silvio
2016-05-19
Multi-walled carbon nanotubes (CNTs) have been grown in situ on a SiO2 substrate and used as gas sensors. For this purpose, the voltage response of the CNTs as a function of time has been used to detect H2 and CO2 at various concentrations by supplying a constant current to the system. The analysis of both adsorption and desorption curves has revealed two distinct exponential behaviours for each curve. The study of the characteristic times, obtained from fitting the data, has allowed us to identify chemisorption and physisorption processes on the CNTs separately.
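Extracting two characteristic times from a response curve amounts to fitting a sum of two exponentials; the time constants and noise level below are illustrative, not the measured CNT values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic sensor response: a fast and a slow exponential process
# (stand-ins for chemisorption and physisorption; values are assumed).
def two_exp(t, a1, tau1, a2, tau2):
    return a1 * (1 - np.exp(-t / tau1)) + a2 * (1 - np.exp(-t / tau2))

t = np.linspace(0, 300, 150)                      # seconds
rng = np.random.default_rng(1)
y = two_exp(t, 1.0, 5.0, 0.5, 80.0) + rng.normal(0, 0.005, t.size)

p, _ = curve_fit(two_exp, t, y, p0=[1, 10, 1, 100])
taus = sorted([p[1], p[3]])                       # fast, slow characteristic times
```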
Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.
Cobbs, Gary
2012-08-16
Numerous models for interpreting quantitative PCR (qPCR) data appear in the recent literature. The most commonly used models assume that amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a selected part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant amplification efficiency. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes that annealing of complementary target strands and annealing of target and primers are both reversible reactions that reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single- and double-stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except possibly for different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for the initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fits to observed qPCR data than other kinetic models in the literature.
They also give better estimates of initial target concentration. Model 1 was found to be slightly more robust than model 2 giving better estimates of initial target concentration when estimation of parameters was done for qPCR curves with very different initial target concentration. Both models may be used to estimate the initial absolute concentration of target sequence when a standard curve is not available. It is argued that the kinetic approach to modeling and interpreting quantitative PCR data has the potential to give more precise estimates of the true initial target concentrations than other methods currently used for analysis of qPCR data. The two models presented here give a unified model of the qPCR process in that they explain the shape of the qPCR curve for a wide variety of initial target concentrations.
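The simultaneous-fitting scheme, shared rate parameters across all curves with one curve-specific initial-amount parameter each, can be sketched as below. A logistic form is used as a stand-in for the paper's kinetic equilibrium models, and all parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

cycles = np.arange(1, 41)

def qpcr(c, fmax, k, c_half):
    # Stand-in sigmoid for a qPCR amplification curve (not the paper's model)
    return fmax / (1 + np.exp(-k * (c - c_half)))

# Three synthetic dilutions: shared (fmax, k), different c_half
# (a later half-rise cycle corresponds to less starting template).
rng = np.random.default_rng(2)
curves = [qpcr(cycles, 1.0, 0.6, ch) + rng.normal(0, 0.01, cycles.size)
          for ch in (20.0, 24.0, 28.0)]

def resid(p):
    fmax, k, *chs = p                    # shared parameters + one per curve
    return np.concatenate([qpcr(cycles, fmax, k, ch) - y
                           for ch, y in zip(chs, curves)])

fit = least_squares(resid, x0=[1.2, 0.5, 18.0, 25.0, 30.0])
fmax, k, ch1, ch2, ch3 = fit.x
```

The key point mirrored from the paper is the residual structure: one stacked residual vector so the shared parameters are constrained by all curves at once.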
1980-12-01
distributions of Figs. 3 and 4 may be fitted quite accurately by broken straight lines. If we had plotted the differential distributions directly...collection process. These fluctuations are smoothed by replacing the actual differential distribution by the derivative of the fitted broken-line lognormal...for each interval T. The constants in the distribution for each broken section of the lognormal approximations are found by fitting lines to the curve
Curve fitting air sample filter decay curves to estimate transuranic content.
Hayes, Robert B; Chiou, Hung Cheng
2004-01-01
By testing industry-standard techniques for radon progeny evaluation on air sample filters, a new technique is developed to evaluate transuranic activity on air filters by curve fitting the decay curves. The industry method modified here uses filter activity measurements at different times to estimate the air concentrations of radon progeny; the primary modification is to estimate not specific radon progeny values but transuranic activity. By using a method that provides reasonably conservative estimates of the transuranic activity present on a filter, some credit for the decay curve shape can be taken. Rigorous statistical analysis of the curve fits to over 65 samples with no transuranic activity, taken over a 10-mo period, allowed the fitting function and associated quality tests to be optimized.
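The underlying idea, separating short-lived radon-progeny decay from a long-lived (e.g. transuranic) component on the filter, can be sketched as a linear decay-curve fit. The half-lives of Pb-214 (26.8 min) and Bi-214 (19.9 min) are physical constants; the amplitudes, counting times, and noise model are illustrative assumptions, not the paper's fitting function:

```python
import numpy as np
from scipy.optimize import nnls   # non-negative LS keeps activities physical

# Filter count rate vs. time after sampling: two radon-progeny exponentials
# plus a constant term for long-lived alpha activity (e.g. transuranics).
t = np.linspace(10, 600, 60)                   # minutes
lam = np.log(2) / np.array([26.8, 19.9])       # Pb-214, Bi-214 decay constants
A = np.column_stack([np.exp(-lam[0] * t),
                     np.exp(-lam[1] * t),
                     np.ones_like(t)])

true = A @ np.array([500.0, 300.0, 2.0])       # 2 cpm long-lived component
rng = np.random.default_rng(3)
counts = true + rng.normal(0, 1.0, t.size)

coef, _ = nnls(A, counts)
transuranic_est = coef[2]                      # constant (non-decaying) component
```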
NASA Astrophysics Data System (ADS)
Féry, C.; Racine, B.; Vaufrey, D.; Doyeux, H.; Cinà, S.
2005-11-01
The main process responsible for the luminance degradation in organic light-emitting diodes (OLEDs) driven under constant current has not yet been identified. In this paper, we propose an approach to describe the intrinsic mechanisms involved in OLED aging. We first show that a stretched exponential decay can be used to fit almost all of the luminance-versus-time curves obtained under different driving conditions. In this way, we are able to prove that they can all be described by a single free-parameter model. Using an approach based on local relaxation events, we demonstrate that a single mechanism is responsible for the dominant aging process, and that the main relaxation event is the annihilation of one emissive center. We then use our model to fit all the experimental data measured under different driving conditions, and show that by carefully fitting the accelerated luminance lifetime curves, we can extrapolate the low-luminance lifetime needed for real display applications with a high degree of accuracy.
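A stretched-exponential luminance decay has the form L(t) = L0·exp(-(t/τ)^β) and is straightforward to fit; the parameter values and noise below are illustrative, not the paper's measured OLED data:

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, L0, tau, beta):
    # L(t) = L0 * exp(-(t/tau)**beta); beta < 1 gives the "stretched" decay
    return L0 * np.exp(-(t / tau) ** beta)

t = np.linspace(0, 2000, 100)    # hours
rng = np.random.default_rng(4)
y = stretched_exp(t, 100.0, 800.0, 0.7) + rng.normal(0, 0.5, t.size)

# Bounds keep tau and beta positive during optimization
p, _ = curve_fit(stretched_exp, t, y, p0=[90, 500, 1.0],
                 bounds=([1, 1, 0.1], [1000, 10000, 2.0]))
L0, tau, beta = p
```

Extrapolating an accelerated-lifetime fit to display luminance, as the paper does, would then mean evaluating the fitted curve at the lower stress level under an assumed acceleration law.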
The curvature of sensitometric curves for Kodak XV-2 film irradiated with photon and electron beams.
van Battum, L J; Huizenga, H
2006-07-01
Sensitometric curves of Kodak XV-2 film, obtained over a period of ten years with various types of equipment, have been analyzed for both photon and electron beams. The sensitometric slope in the dataset varies by more than a factor of 2, which is attributed mainly to variations in developer conditions. In the literature, the single-hit equation has been proposed as a model for the sensitometric curve, with sensitivity and maximum optical density as its parameters. In this work, the single-hit equation has been translated into a polynomial-like function with the sensitometric slope and curvature as parameters. The model has been applied to fit the sensitometric data. If each sensitometric curve in the dataset is fitted separately, a large variation is observed in both fit parameters. When the sensitometric curves are fitted simultaneously, it appears that all curves can be fitted adequately with a sensitometric curvature that is related to the sensitometric slope; when fitting each curve separately, measurement uncertainty apparently hides this relation. The relation appears to depend only on the type of densitometer used; no significant differences between beam energies or beam modalities are observed. Using the intrinsic relation between slope and curvature in fitting sensitometric data, e.g., for pretreatment verification of intensity-modulated radiotherapy, will increase the accuracy of the sensitometric curve. A calibration at a single dose point, together with a predetermined densitometer-dependent parameter ODmax, will be adequate to find the actual relation between optical density and dose.
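The single-hit equation referred to above is OD(D) = ODmax·(1 − exp(−aD)), whose low-dose slope is ODmax·a and whose curvature is tied to that slope, which is what makes the slope-curvature relation exploitable. A sketch with illustrative parameter values (not the paper's film data):

```python
import numpy as np
from scipy.optimize import curve_fit

def single_hit(D, ODmax, a):
    # Single-hit model: optical density saturating at ODmax
    return ODmax * (1.0 - np.exp(-a * D))

D = np.linspace(0, 400, 40)        # dose (illustrative units, e.g. cGy)
rng = np.random.default_rng(5)
od = single_hit(D, 3.5, 0.004) + rng.normal(0, 0.01, D.size)

p, _ = curve_fit(single_hit, D, od, p0=[3.0, 0.003])
ODmax, a = p
slope0 = ODmax * a                  # sensitometric slope at zero dose
```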
Testing Modified Newtonian Dynamics with Low Surface Brightness Galaxies: Rotation Curve FITS
NASA Astrophysics Data System (ADS)
de Blok, W. J. G.; McGaugh, S. S.
1998-11-01
We present modified Newtonian dynamics (MOND) fits to 15 rotation curves of low surface brightness (LSB) galaxies. Good fits are readily found, although for a few galaxies minor adjustments to the inclination are needed. Reasonable values for the stellar mass-to-light ratios are found, as well as an approximately constant value for the total (gas and stars) mass-to-light ratio. We show that the LSB galaxies investigated here lie on the one, unique Tully-Fisher relation, as predicted by MOND. The scatter in the Tully-Fisher relation can be completely explained by the observed scatter in the total mass-to-light ratio. We address the question of whether MOND can fit any arbitrary rotation curve by constructing a plausible fake model galaxy. While MOND is unable to fit this hypothetical galaxy, a conventional dark-halo fit is readily found, showing that dark matter models are much less selective in producing fits. The good fits to rotation curves of LSB galaxies support MOND, especially because these are galaxies with large mass discrepancies deep in the MOND regime.
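A MOND rotation curve for a toy point-mass "galaxy" can be computed in closed form with the standard interpolation function μ(x) = x/√(1+x²), for which g_N = g·μ(g/a0) inverts analytically. The mass and radial range are illustrative; a0 is the usual MOND acceleration scale:

```python
import numpy as np

G = 6.674e-11            # m^3 kg^-1 s^-2
a0 = 1.2e-10             # MOND acceleration scale, m s^-2
M = 1e40                 # toy baryonic mass, kg (~5e9 solar masses)

r = np.linspace(1, 30, 50) * 3.086e19    # 1-30 kpc in metres
gN = G * M / r ** 2                       # Newtonian acceleration

# Invert gN = g * mu(g/a0) for mu(x) = x / sqrt(1 + x^2)
g = gN * np.sqrt(0.5 * (1.0 + np.sqrt(1.0 + (2.0 * a0 / gN) ** 2)))
v = np.sqrt(g * r)                        # circular speed, m/s
```

In the deep-MOND limit (g_N ≪ a0) the curve flattens at v⁴ = G·M·a0, which is the Tully-Fisher scaling the abstract invokes.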
Dynamic Analysis of Recalescence Process and Interface Growth of Eutectic Fe82B17Si1 Alloy
NASA Astrophysics Data System (ADS)
Fan, Y.; Liu, A. M.; Chen, Z.; Li, P. Z.; Zhang, C. H.
2018-03-01
By employing the glass fluxing technique in combination with cyclical superheating, the microstructural evolution of the undercooled Fe82B17Si1 alloy was studied over the attainable undercooling range. With increasing undercooling, the cooling curves showed a transition from one recalescence event to two, and then back to one. The two types of cooling curves were fitted by the break equation and the Johnson-Mehl-Avrami-Kolmogorov model. Based on the cooling curves at different undercoolings, the recalescence rate was calculated with the multi-logistic growth model and the Boettinger-Coriell-Trivedi model. Both the recalescence features and the interface growth kinetics of the eutectic Fe82B17Si1 alloy were explored. The fitting results were consistent with the microstructural evolution observed by TEM (SAED), SEM and XRD. Finally, the relationship between microstructure and hardness was also investigated.
Evaluating Model Fit for Growth Curve Models: Integration of Fit Indices from SEM and MLM Frameworks
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.; Taylor, Aaron B.
2009-01-01
Evaluating overall model fit for growth curve models involves 3 challenging issues. (a) Three types of longitudinal data with different implications for model fit may be distinguished: balanced on time with complete data, balanced on time with data missing at random, and unbalanced on time. (b) Traditional work on fit from the structural equation…
Estimating the Area Under ROC Curve When the Fitted Binormal Curves Demonstrate Improper Shape.
Bandos, Andriy I; Guo, Ben; Gur, David
2017-02-01
The "binormal" model is the most frequently used tool for parametric receiver operating characteristic (ROC) analysis. The binormal ROC curves can have "improper" (non-concave) shapes that are unrealistic in many practical applications, and several tools (e.g., PROPROC) have been developed to address this problem. However, due to the general robustness of binormal ROCs, the improperness of the fitted curves might carry little consequence for inferences about global summary indices, such as the area under the ROC curve (AUC). In this work, we investigate the effect of severe improperness of fitted binormal ROC curves on the reliability of AUC estimates when the data arise from an actually proper curve. We designed theoretically proper ROC scenarios that induce severely improper shape of fitted binormal curves in the presence of well-distributed empirical ROC points. The binormal curves were fitted using a maximum likelihood approach. Using simulations, we estimated the frequency of severely improper fitted curves, the bias of the estimated AUC, and the coverage of 95% confidence intervals (CIs). In Appendix S1, we provide additional information on percentiles of the distribution of AUC estimates and bias when estimating partial AUCs. We also compared the results to a reference standard provided by empirical estimates obtained from continuous data. We observed up to 96% severely improper curves depending on the scenario in question. The bias in the binormal AUC estimates was very small and the coverage of the CIs was close to nominal, whereas the estimates of partial AUC were biased upward in the high specificity range and downward in the low specificity range. Compared to a non-parametric approach, the binormal model led to slightly more variable AUC estimates, but at the same time to CIs with more appropriate coverage.
The improper shape of the fitted binormal curve, by itself, i.e., in the presence of a sufficient number of well-distributed points, does not imply unreliable AUC-based inferences. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
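The binormal ROC is TPF = Φ(a + b·Φ⁻¹(FPF)) with closed-form AUC = Φ(a/√(1+b²)). The a, b values below are illustrative; a b far from 1 produces exactly the "improper" hook the abstract discusses, where the curve dips below the chance line:

```python
import numpy as np
from scipy.stats import norm

a, b = 1.0, 0.3                         # illustrative binormal parameters

fpf = np.linspace(1e-6, 1 - 1e-6, 1001)
tpf = norm.cdf(a + b * norm.ppf(fpf))   # binormal ROC curve

# Closed-form AUC vs. numerical (trapezoidal) area under the sampled curve
auc = norm.cdf(a / np.sqrt(1.0 + b ** 2))
auc_trapz = ((tpf[1:] + tpf[:-1]) / 2.0 * np.diff(fpf)).sum()
```

Even though this curve is visibly improper near FPF = 1, the global AUC is perfectly well defined, mirroring the paper's point that improper shape need not corrupt AUC inference.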
NASA Astrophysics Data System (ADS)
Khondok, Piyoros; Sakulkalavek, Aparporn; Suwansukho, Kajpanya
2018-03-01
Simple and powerful image processing procedures to separate the paddy of KHAW DOK MALI 105 (Thai jasmine rice) from the paddy of the sticky rice variety RD6 were proposed. The procedures consist of image thresholding, image chain coding, and curve fitting using a polynomial function. From the fitting, three parameters of each variety were calculated: perimeter, area, and eccentricity. Finally, the overall parameters were combined using principal component analysis. The results show that these procedures can significantly separate the two varieties.
Analysis of mixed model in gear transmission based on ADAMS
NASA Astrophysics Data System (ADS)
Li, Xiufeng; Wang, Yabin
2012-09-01
Traditional methods of mechanical gear drive simulation include the gear-pair method and the solid-to-solid contact method. The former has higher solving efficiency but lower accuracy; the latter usually obtains higher precision, but the calculation process is complex and does not converge easily. Most current research focuses on the description of geometric models and the definition of boundary conditions, but neither approach solves the problems fundamentally. To improve simulation efficiency while ensuring accurate results, a mixed-model method that uses gear tooth profiles in place of solid gears to simulate gear movement is presented. In the modeling process, the solid models of the mechanism are first built in SolidWorks; the point coordinates of the gear outline curves are then collected using the SolidWorks API, and fitted curves are created in Adams from these coordinates; next, the positions of the fitted curves are adjusted according to the location of the contact area; finally, the loading conditions, boundary conditions, and simulation parameters are defined. The method provides gear shape information through tooth profile curves, simulates the meshing process through curve-to-curve contact, and obtains mass and inertia data from the solid gear models. The simulation combines the two models to complete the gear drive analysis. To verify the validity of the method, both theoretical derivation and numerical simulation of a runaway escapement were conducted. The results show that the computational efficiency of the mixed-model method is 1.4 times that of the traditional solid-to-solid contact method, while the simulation results are closer to theoretical calculations. Consequently, the mixed-model method has high application value for studying the dynamics of gear mechanisms.
Quantifying and Reducing Curve-Fitting Uncertainty in Isc
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-06-14
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
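The basic step the standards specify, a straight-line fit of I-V points near short circuit with a regression uncertainty on the intercept, can be sketched with ordinary least squares. The synthetic I-V points and noise level are assumptions; the Bayesian window-selection method of the paper is not reproduced here:

```python
import numpy as np

# Synthetic I-V points near short circuit (V ~ 0): I = Isc + slope * V + noise
rng = np.random.default_rng(6)
v = np.linspace(0.0, 0.05, 12)                        # volts
i = 8.0 - 0.5 * v + rng.normal(0, 0.001, v.size)      # amps

# OLS straight-line fit; Isc is the intercept at V = 0
X = np.column_stack([np.ones_like(v), v])
beta, *_ = np.linalg.lstsq(X, i, rcond=None)
isc = beta[0]

# Standard uncertainty of the intercept from the residual variance
dof = v.size - 2
s2 = np.sum((i - X @ beta) ** 2) / dof
cov = s2 * np.linalg.inv(X.T @ X)
isc_se = np.sqrt(cov[0, 0])
```

The paper's caveat applies directly: adding points shrinks `isc_se` without bound even when the straight-line model stops being a valid local description, which is the model-discrepancy problem the evidence-based window selection addresses.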
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-01-01
A sophisticated non-linear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program treats the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the Chi-Squared Matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 sec.
Chang, Cheng-Nan; Cheng, Hong-Bang; Chao, Allen C
2004-03-15
In this paper, various forms of the Nernst equation are developed based on the real stoichiometric relationships of biological nitrification and denitrification reactions. Instead of using the Nernst equation based on a one-to-one stoichiometric relation between the oxidizing and reducing species, the basic Nernst equation is modified into slightly different forms, each suitable for simulating the oxidation-reduction potential (ORP) variation of a specific biological nitrification or denitrification process. Using data published in the literature, the validity of these Nernst equations is verified by close fits of the measured ORP data to the calculated ORP curves. The simulation results also indicate that if a biological process is simulated using an incorrect form of the Nernst equation, the calculated ORP curve will not fit the measured data. Using these Nernst equations, the ORP value corresponding to a predetermined degree of completion of the biochemical reaction can be calculated, enabling more efficient on-line control of the biological process.
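The base relation being modified is the Nernst equation E = E0 − (RT/nF)·ln(Q). A minimal numerical sketch (E0, n, and the sweep of the reaction quotient Q are illustrative, not the paper's fitted nitrification stoichiometry):

```python
import numpy as np

R, T, F = 8.314, 298.15, 96485.0      # gas constant, temperature (K), Faraday

def nernst(E0, n, Q):
    """ORP (V) from the Nernst equation for reaction quotient Q."""
    return E0 - (R * T) / (n * F) * np.log(Q)

# As the reduced/oxidised ratio in a batch reactor changes, ORP shifts by
# ~59 mV per decade of Q for n = 1 at 25 degrees C:
E1 = nernst(0.30, 1, 10.0)
E2 = nernst(0.30, 1, 100.0)
delta_per_decade = (E1 - E2) * 1000.0   # mV
```

The paper's modified forms amount to replacing the simple Q above with quotients built from the actual nitrification or denitrification stoichiometry.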
Derbidge, Renatus; Feiten, Linus; Conradt, Oliver; Heusser, Peter; Baumgartner, Stephan
2013-01-01
Photographs of mistletoe (Viscum album L.) berries taken by a permanently fixed camera during their development in autumn were subjected to an outline shape analysis by fitting path curves using a mathematical algorithm from projective geometry. During growth and maturation processes the shape of mistletoe berries can be described by a set of such path curves, making it possible to extract changes of shape using one parameter called Lambda. Lambda describes the outline shape of a path curve. Here we present methods and software to capture and measure these changes of form over time. The present paper describes the software used to automatize a number of tasks including contour recognition, optimization of fitting the contour via hill-climbing, derivation of the path curves, computation of Lambda and blinding the pictures for the operator. The validity of the program is demonstrated by results from three independent measurements showing circadian rhythm in mistletoe berries. The program is available as open source and will be applied in a project to analyze the chronobiology of shape in mistletoe berries and the buds of their host trees. PMID:23565255
ERIC Educational Resources Information Center
Alexander, John W., Jr.; Rosenberg, Nancy S.
This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…
Modal analysis using a Fourier analyzer, curve-fitting, and modal tuning
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.; Chung, Y. T.
1981-01-01
The proposed modal test program differs from single-input methods in that preliminary data may be acquired using multiple inputs, and modal tuning procedures may be employed to define closely spaced frequency modes more accurately or to make use of frequency response functions (FRFs) based on several input locations. In some respects the proposed modal test program resembles earlier sine-sweep and sine-dwell testing in that broadband FRFs are acquired using several input locations, and tuning is employed to refine the modal parameter estimates. The major tasks performed in the proposed modal test program are outlined. The data acquisition and FFT processing, curve fitting, and modal tuning phases are described, and examples are given to illustrate and evaluate them.
Least-Squares Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Kantak, Anil V.
1990-01-01
The Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes the polynomial providing the least-squares best fit to uniformly spaced data. It enables the user to specify either the tolerable least-squares error of the fit or the degree of the polynomial. AKLSQF returns the polynomial and the actual least-squares-fit error incurred in the operation. Data are supplied to the routine either by direct keyboard entry or via a file. The program was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler.
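The behaviour described, increase the polynomial degree until the least-squares error drops below a user tolerance, can be sketched in a few lines of Python (this is a sketch of the general technique, not the AKLSQF source; the data and tolerance are illustrative):

```python
import numpy as np

# Uniformly spaced data from an exact quadratic (illustrative)
x = np.linspace(0, 1, 21)
y = 1.0 + 2.0 * x - 3.0 * x ** 2

tol = 1e-8                      # user-specified tolerable RMS error
for degree in range(6):
    coeffs = np.polynomial.polynomial.polyfit(x, y, degree)
    fit = np.polynomial.polynomial.polyval(x, coeffs)
    err = np.sqrt(np.mean((fit - y) ** 2))
    if err <= tol:              # stop at the lowest degree meeting the tolerance
        break
```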
A high throughput MATLAB program for automated force-curve processing using the AdG polymer model.
O'Connor, Samantha; Gaddis, Rebecca; Anderson, Evan; Camesano, Terri A; Burnham, Nancy A
2015-02-01
Research in understanding biofilm formation depends on accurate and representative measurements of the steric forces associated with the polymer brush on bacterial surfaces. A MATLAB program to analyze force curves from an atomic force microscope (AFM) efficiently, accurately, and with minimal user bias has been developed. The analysis is based on a modified version of the Alexander and de Gennes (AdG) polymer model, which is a function of equilibrium polymer brush length, probe radius, temperature, separation distance, and a density variable. Automating the analysis reduces the time required to process 100 force curves from several days to less than 2 min. Using this program to crop and fit force curves to the AdG model allows researchers to ensure proper processing of large amounts of experimental data and reduces the time required for analysis and comparison, thereby enabling higher quality results in a shorter period of time. Copyright © 2014 Elsevier B.V. All rights reserved.
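A commonly quoted approximate form of the AdG steric force for an AFM probe of radius a against a brush of length L0 and grafting density Γ is F(D) ≈ 50·kT·a·L0·Γ^(3/2)·exp(−2πD/L0). The sketch below fits that form to synthetic data; it is not the paper's modified model, and all parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

kT = 4.11e-21                    # J at ~298 K
a = 20e-9                        # probe radius, m (assumed)

def adg(D_nm, L0_nm, gamma16):
    # Work in nm and units of 1e16 m^-2 so both parameters are O(10-100)
    L0 = L0_nm * 1e-9            # brush length, m
    gamma = gamma16 * 1e16       # grafting density, m^-2
    return 50.0 * kT * a * L0 * gamma ** 1.5 * np.exp(-2.0 * np.pi * D_nm / L0_nm)

D_nm = np.linspace(5, 100, 60)   # tip-surface separation, nm
rng = np.random.default_rng(7)
true = adg(D_nm, 60.0, 10.0)     # L0 = 60 nm, Gamma = 1e17 m^-2 (assumed)
F = true + rng.normal(0, 0.01 * true.max(), D_nm.size)

(L0_fit, gamma_fit), _ = curve_fit(adg, D_nm, F, p0=[50.0, 5.0])
```

The decay rate fixes L0 and the amplitude fixes Γ, which is what makes the two parameters separately identifiable from a single force curve.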
NASA Astrophysics Data System (ADS)
Yao, X. F.; Xiong, T. C.; Xu, H. M.; Wan, J. P.; Long, G. R.
2008-11-01
The residual stresses of PMMA (polymethyl methacrylate) specimens after being drilled, reamed, and polished are investigated using the digital speckle correlation experimental method. From the displacement fields around the correlated calculation region, polynomial curve fitting is used to obtain continuous displacement fields, and the strain fields are then obtained by differentiating the displacement fields. With the constitutive equation of the material, an expression for the residual stress can be derived. During data processing, the calculation region of the correlated speckles and the degree of the fitted polynomial are chosen according to the quality of the fit. The results show that the maximum stress occurs at the hole wall of the drilled specimen, and that the residual stress resulting from hole drilling increases with the diameter of the drilled hole, whereas reaming and polishing the hole reduce the residual stress. The relatively large scatter in the residual stress is attributed to the chip-removal ability of the drill bit, the cutting feed of the drill, and other factors.
Toward a Micro-Scale Acoustic Direction-Finding Sensor with Integrated Electronic Readout
2013-06-01
[List-of-figures/tables excerpt] ...measurements with curve fits; Figure 2.10: Failure testing; Table 2.1: Sensor parameters; Table 2.2: Curve fit parameters. [Text excerpt] ...elastic, the quantity of interest is the elastic stiffness. In a typical nanoindentation test, the loading curve is nonlinear due to combined plastic
Transonic Compressor: Program System TXCO for Data Acquisition and On-Line Reduction.
1980-10-01
[OCR excerpt of Fortran source listing] ...(MON/DAY/YEAR/HOUR/MIN/SEC)... Get date and time and convert the... linear curve fits; SECON: real intercept of linear curve fit (as from CURVE). [Flow chart: SUBROUTINE CALIB]
Non-linear Multidimensional Optimization for use in Wire Scanner Fitting
NASA Astrophysics Data System (ADS)
Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; Center Advanced Studies of Accelerators Collaboration
2014-03-01
To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle swarm. In this Python-implemented approach, augmenting the locally-convergent NCG with one of the globally-convergent methods ensures the quality, robustness, and automation of curve-fitting. After comparing the methods, we establish that given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement between the fit and the data. NCG is the fastest method, so it is the first to attempt data-fitting. The curve-fitting procedure escalates to one of the globally-convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method yields an optimal signal fit and can easily be applied to similar problems.
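The local-then-global strategy can be sketched in Python with SciPy. This is an illustrative reconstruction from the abstract, not the Jefferson Lab code: the escalation threshold, bounds, and the finite-difference gradient supplied to Newton-CG are all assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution, approx_fprime

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_wire_scan(x, y):
    """Hybrid Gaussian fit: locally convergent Newton-CG first,
    escalating to globally convergent differential evolution only if
    the local stage fails (threshold below is an assumed criterion)."""
    chi2 = lambda p: np.sum((y - gaussian(x, *p)) ** 2)
    # Data-derived initial guess, as described in the abstract.
    p0 = [y.max(), x[np.argmax(y)], (x.max() - x.min()) / 6.0]
    res = minimize(chi2, p0, method="Newton-CG",
                   jac=lambda p: approx_fprime(p, chi2, 1e-8))
    if (not res.success) or chi2(res.x) > 1e-6:
        bounds = [(0.0, 2.0 * y.max()), (x.min(), x.max()),
                  (1e-6, x.max() - x.min())]
        res = differential_evolution(chi2, bounds, seed=0)
    return res.x

x = np.linspace(-5.0, 5.0, 201)
y = gaussian(x, 2.0, 0.5, 1.2)           # noiseless synthetic wire scan
amp, mu, sigma = fit_wire_scan(x, y)
sigma = abs(sigma)                        # sign of sigma is degenerate
```

Either path returns the same minimizer here; the fallback simply guarantees a fit when the local stage stalls.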
Sensitivity of Fit Indices to Misspecification in Growth Curve Models
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.
2010-01-01
This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…
Edge detection and mathematic fitting for corneal surface with Matlab software.
Di, Yue; Li, Mei-Yan; Qiao, Tong; Lu, Na
2017-01-01
To select the optimal edge detection method for identifying the corneal surface, and to compare three curve-fitting equations, with Matlab software. Fifteen subjects were recruited. Corneal images from optical coherence tomography (OCT) were imported into Matlab. Five edge detection methods (Canny, Log, Prewitt, Roberts, Sobel) were used to identify the corneal surface. Two manual identification methods (ginput and getpts) were then applied to identify the edge coordinates, and the differences among these methods were compared. A binomial curve (y = Ax^2 + Bx + C), a polynomial curve [p(x) = p1x^n + p2x^(n-1) + ... + pnx + p(n+1)], and a conic section (Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0) were each used to fit the corneal surface, and their relative merits were analyzed. Finally, the eccentricity (e) obtained from corneal topography and from the conic section fit were compared with a paired t-test. All five edge detection algorithms produced continuous coordinates indicating the edge of the corneal surface. The ordinates from manual identification were close to the inside of the actual edges. The binomial curve was greatly affected by tilt angle. The polynomial curve lacked geometrical properties and was unstable. The conic section could yield the tilted symmetry axis, eccentricity, circle center, etc. There was no significant difference between the e values from corneal topography and the conic section fit (t=0.9143, P=0.3760>0.05). It is feasible to model the corneal surface with a mathematical curve in Matlab. Edge detection has better repeatability and higher efficiency, and the manual identification approach is an indispensable complement to it. Polynomial and conic section fits are both viable methods for corneal curve fitting; the conic curve was the optimal choice because of its specific geometrical properties.
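The conic-section fit favored by the study can be posed as an algebraic least-squares problem: the coefficient vector of Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 is the null-space direction of a design matrix, obtained from its smallest singular vector. The sketch below is a generic formulation (the authors' Matlab implementation is not shown):

```python
import numpy as np

def fit_conic(x, y):
    """Algebraic least-squares fit of a general conic
    Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0.  The coefficient vector is
    the right singular vector of the design matrix associated with the
    smallest singular value (defined up to an overall scale)."""
    D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D)
    return vt[-1]  # [A, B, C, D, E, F], up to scale

# Points on the unit circle: expect A = C, B = 0, D = E = 0, F = -A.
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
A, B, C, Dc, E, F = fit_conic(np.cos(t), np.sin(t))
```

From the fitted coefficients, geometric quantities such as the eccentricity and the tilted symmetry axis follow by standard conic classification formulas.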
Multimodal determination of Rayleigh dispersion and attenuation curves using the circle fit method
NASA Astrophysics Data System (ADS)
Verachtert, R.; Lombaert, G.; Degrande, G.
2018-03-01
This paper introduces the circle fit method for the determination of multi-modal Rayleigh dispersion and attenuation curves as part of a Multichannel Analysis of Surface Waves (MASW) experiment. The wave field is transformed to the frequency-wavenumber (fk) domain using a discretized Hankel transform. In a Nyquist plot of the fk-spectrum, displaying the imaginary part against the real part, the Rayleigh wave modes correspond to circles. The experimental Rayleigh dispersion and attenuation curves are derived from the angular sweep of the central angle of these circles. The method can also be applied to the analytical fk-spectrum of the Green's function of a layered half-space in order to compute dispersion and attenuation curves, as an alternative to solving an eigenvalue problem. A MASW experiment is subsequently simulated for a site with a regular velocity profile and a site with a soft layer trapped between two stiffer layers. The performance of the circle fit method in determining the dispersion and attenuation curves is compared with the peak picking method and the half-power bandwidth method. The circle fit method is found to be the most accurate and robust method for the determination of the dispersion curves. When determining attenuation curves, the circle fit method and half-power bandwidth method are accurate if the mode exhibits a sharp peak in the fk-spectrum. Furthermore, simulated and theoretical attenuation curves determined with the circle fit method agree very well. A similar correspondence is not obtained when using the half-power bandwidth method. Finally, the circle fit method is applied to measurement data obtained for a MASW experiment at a site in Heverlee, Belgium. In order to validate the soil profile obtained from the inversion procedure, force-velocity transfer functions were computed and found to be in good correspondence with the experimental transfer functions, especially in the frequency range between 5 and 80 Hz.
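The core geometric step, fitting a circle to fk-spectrum samples in the Nyquist plane, can be sketched with an algebraic (Kasa) least-squares circle fit. This is a standard estimator used for illustration; the paper's own estimator, based on the angular sweep of the central angle, may differ in detail.

```python
import numpy as np

def circle_fit(z):
    """Algebraic (Kasa) least-squares circle fit to complex samples.

    Solves x*a + y*b + c = x^2 + y^2 in the least-squares sense, which
    corresponds to the circle (x - a/2)^2 + (y - b/2)^2 = c + a^2/4 + b^2/4,
    giving the center (a/2, b/2) and radius sqrt(c + a^2/4 + b^2/4)."""
    x, y = z.real, z.imag
    A = np.column_stack([x, y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    cx, cy = a / 2.0, b / 2.0
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# Noise-free arc from a circle centered at 1+2j with radius 3,
# mimicking a modal circle traced out in the Nyquist plot.
t = np.linspace(0.2, 2.5, 40)
z = (1 + 2j) + 3.0 * np.exp(1j * t)
cx, cy, r = circle_fit(z)
```

Note the fit works from a partial arc, which matters in practice since a mode only traces part of its circle over the sampled wavenumber range.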
Nonlinear Growth Curves in Developmental Research
Grimm, Kevin J.; Ram, Nilam; Hamagami, Fumiaki
2011-01-01
Developmentalists are often interested in understanding change processes and growth models are the most common analytic tool for examining such processes. Nonlinear growth curves are especially valuable to developmentalists because the defining characteristics of the growth process such as initial levels, rates of change during growth spurts, and asymptotic levels can be estimated. A variety of growth models are described beginning with the linear growth model and moving to nonlinear models of varying complexity. A detailed discussion of nonlinear models is provided, highlighting the added insights into complex developmental processes associated with their use. A collection of growth models are fit to repeated measures of height from participants of the Berkeley Growth and Guidance Studies from early childhood through adulthood. PMID:21824131
Constraining the inclination of the Low-Mass X-ray Binary Cen X-4
NASA Astrophysics Data System (ADS)
Hammerstein, Erica K.; Cackett, Edward M.; Reynolds, Mark T.; Miller, Jon M.
2018-05-01
We present the results of ellipsoidal light curve modeling of the low-mass X-ray binary Cen X-4 in order to constrain the inclination of the system and the mass of the neutron star. Near-IR photometric monitoring was performed in May 2008 over a period of three nights at Magellan using PANIC. We obtain J, H, and K light curves of Cen X-4 using differential photometry. An ellipsoidal modeling code was used to fit the phase-folded light curves. The light curve fit which makes the fewest assumptions about the properties of the binary system yields an inclination of 34.9^{+4.9}_{-3.6} degrees (1σ), which is consistent with previous determinations of the system's inclination but with improved statistical uncertainties. When combined with the mass function and mass ratio, this inclination yields a neutron star mass of 1.51^{+0.40}_{-0.55} M⊙. This model allows accretion disk parameters to be free in the fitting process. Fits that do not allow for an accretion disk component in the near-IR flux give a systematically lower inclination between approximately 33 and 34 degrees, leading to a higher-mass neutron star between approximately 1.7 M⊙ and 1.8 M⊙. We discuss the implications of other assumptions made during the modeling process as well as numerous free parameters and their effects on the resulting inclination.
Materials and Modulators for 3D Displays
2002-08-01
[Figure caption excerpts] ...1243 nm; 0, 180, and 360 deg in this plot correspond to parallel polarization; the dashed curve is a cos^2(θ) fit to the data with a constant value... ...dwell time (solid bold curve), 10 µs dwell time (dashed bold curve), and static case (thin dashed curve). Figure 20: Schematics of free-space... [Text excerpt] ...photon. The two peaks in the two-photon spectrum can be fit by two Lorentzian curves. These spectra indicate that in the rhodamine B molecule the...
NASA Astrophysics Data System (ADS)
Amarnath, N. S.; Pound, M. W.; Wolfire, M. G.
The Dust InfraRed ToolBox (DIRT - a part of the Web Infrared ToolShed, or WITS, located at http://dustem.astro.umd.edu) is a Java applet for modeling astrophysical processes in circumstellar shells around young and evolved stars. DIRT has been used by the astrophysics community for about 4 years. DIRT uses results from a number of numerical models of astrophysical processes, and has an AWT-based user interface. DIRT has been refactored to decouple data representation from plotting and curve fitting. This makes it easier to add new kinds of astrophysical models, use the plotter in other applications, migrate the user interface to Swing components, and modify the user interface to add functionality (for example, SIRTF tools). DIRT is now an extension of two generic libraries, one of which manages data representation and caching, and the other of which manages plotting and curve fitting. This project is an example of refactoring with no impact on user interface, so the existing user community was not affected.
An Empirical Fitting Method for Type Ia Supernova Light Curves: A Case Study of SN 2011fe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, WeiKang; Filippenko, Alexei V., E-mail: zwk@astro.berkeley.edu
We present a new empirical fitting method for the optical light curves of Type Ia supernovae (SNe Ia). We find that a variant broken-power-law function provides a good fit, with the simple assumption that the optical emission is approximately the blackbody emission of the expanding fireball. This function is mathematically analytic and is derived directly from the photospheric velocity evolution. When deriving the function, we assume that both the blackbody temperature and photospheric velocity are constant, but the final function is able to accommodate these changes during the fitting procedure. Applying it to the case study of SN 2011fe gives a surprisingly good fit that can describe the light curves from the first-light time to a few weeks after peak brightness, as well as over a large range of fluxes (∼5 mag, and even ∼7 mag in the g band). Since SNe Ia share similar light-curve shapes, this fitting method has the potential to fit most other SNe Ia and characterize their properties in large statistical samples such as those already gathered and in the near future as new facilities become available.
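A generic smoothly broken power law of the kind described can be fit as follows. This is an illustrative variant, with a rising early-time index and a declining late-time index; the exact functional form derived by Zheng & Filippenko is given in their paper, and the parameter values below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(t, A, tb, a1, a2, s=4.0):
    """Smoothly broken power law: behaves as ~t^a1 for t << tb (early
    fireball rise) and ~t^a2 for t >> tb (post-peak decline); tb sets
    the break time and s (held fixed here) the break smoothness."""
    u = t / tb
    return A * u**a1 * (1.0 + u**(s * (a1 - a2)))**(-1.0 / s)

t = np.linspace(1.0, 40.0, 200)            # days since first light
flux = broken_power_law(t, 1.0, 18.0, 2.0, -2.0)   # synthetic light curve
p, _ = curve_fit(broken_power_law, t, flux, p0=[1.2, 15.0, 1.8, -1.5],
                 bounds=([0.1, 5.0, 0.0, -5.0], [10.0, 40.0, 5.0, 0.0]))
```

With `p0` of length 4, `curve_fit` fits only (A, tb, a1, a2) and leaves the smoothness `s` at its default, which avoids the usual degeneracy between break sharpness and the break time.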
NASA Technical Reports Server (NTRS)
Yim, John T.
2017-01-01
A survey of low energy xenon ion impact sputter yields was conducted to provide a more coherent baseline set of sputter yield data and accompanying fits for electric propulsion integration. Data uncertainties are discussed and different available curve fit formulas are assessed for their general suitability. A Bayesian parameter fitting approach is used with a Markov chain Monte Carlo method to provide estimates for the fitting parameters while characterizing the uncertainties for the resulting yield curves.
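A minimal illustration of Bayesian curve fitting with a Markov chain Monte Carlo sampler, reduced to a single amplitude parameter of a toy power-law yield curve. The survey's actual yield model, priors, and sampler are not specified in the abstract; everything below is a hypothetical one-parameter example of the technique.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "yield vs. energy" data from y = a * E^b, with the exponent
# b fixed and known so that only the amplitude a is sampled.
E = np.linspace(50.0, 500.0, 30)
a_true, b = 1.5, 0.7
sigma = 1.0                                   # known measurement noise
y = a_true * E**b + rng.normal(0.0, sigma, E.size)

def log_post(a):
    # Flat prior on a > 0; Gaussian likelihood with known noise sigma.
    if a <= 0:
        return -np.inf
    return -0.5 * np.sum((y - a * E**b) ** 2) / sigma**2

samples = []
a, lp = 1.0, log_post(1.0)
for _ in range(20000):
    prop = a + rng.normal(0.0, 0.01)          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis acceptance
        a, lp = prop, lp_prop
    samples.append(a)
post = np.array(samples[5000:])               # discard burn-in
```

The posterior samples give both a point estimate (`post.mean()`) and an uncertainty band (`post.std()`) for the yield curve, which is the appeal of the MCMC approach over a plain least-squares fit.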
Data reduction using cubic rational B-splines
NASA Technical Reports Server (NTRS)
Chou, Jin J.; Piegl, Les A.
1992-01-01
A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves, including intersection or silhouette lines. The algorithm is based on the convex hull and the variation diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set, and if this is impossible, it subdivides the data set and reconsiders the subset. After accepting the subset, the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a Bezier cubic segment. The algorithm applies this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy even in cases with large tolerances.
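A generic smoothing-spline fit shows the flavor of spline-based data reduction. SciPy's `splprep` is a least-squares B-spline fitter with a smoothing tolerance, not the authors' Bezier subdivision algorithm, so the sketch below only illustrates the goal: replacing many samples with far fewer spline control points while staying within a tolerance.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Dense, slightly noisy samples of a closed planar curve.
t = np.linspace(0.0, 2.0 * np.pi, 400)
x = np.cos(t) + 1e-3 * np.sin(17 * t)
y = np.sin(t) + 1e-3 * np.cos(17 * t)

# Cubic B-spline fit; s bounds the sum of squared residuals, so it
# plays the role of the fitting tolerance in the scheme above.
tck, u = splprep([x, y], s=len(t) * 1e-5, k=3)
xf, yf = splev(u, tck)

max_dev = np.max(np.hypot(xf - x, yf - y))   # worst-case deviation
n_ctrl = len(tck[1][0])                      # control points retained
```

The data-reduction payoff is the ratio `n_ctrl / len(x)`: a few dozen spline coefficients reproduce 400 samples to within the tolerance.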
Bentzley, Brandon S.; Fender, Kimberly M.; Aston-Jones, Gary
2012-01-01
Rationale: Behavioral-economic demand curve analysis offers several useful measures of drug self-administration. Although generation of demand curves previously required multiple days, recent within-session procedures allow curve construction from a single 110-min cocaine self-administration session, making behavioral-economic analyses available to a broad range of self-administration experiments. However, a mathematical approach of curve fitting has not been reported for the within-session threshold procedure. Objectives: We review demand curve analysis in drug self-administration experiments and provide a quantitative method for fitting curves to single-session data that incorporates relative stability of brain drug concentration. Methods: Sprague-Dawley rats were trained to self-administer cocaine, and then tested with the threshold procedure in which the cocaine dose was sequentially decreased on a fixed ratio-1 schedule. Price points (responses/mg cocaine) outside of relatively stable brain cocaine concentrations were removed before curves were fit. Curve-fit accuracy was determined by the degree of correlation between graphical and calculated parameters for cocaine consumption at low price (Q0) and the price at which maximal responding occurred (Pmax). Results: Removing price points that occurred at relatively unstable brain cocaine concentrations generated precise estimates of Q0 and resulted in Pmax values with significantly closer agreement with graphical Pmax than conventional methods. Conclusion: The exponential demand equation can be fit to single-session data using the threshold procedure for cocaine self-administration. Removing data points that occur during relatively unstable brain cocaine concentrations resulted in more accurate estimates of demand curve slope than graphical methods, permitting a more comprehensive analysis of drug self-administration via a behavioral-economic framework. PMID:23086021
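The exponential demand equation referred to here is commonly written in the Hursh-Silberberg form log10(Q) = log10(Q0) + k(e^(-α·Q0·C) - 1). The sketch below assumes that form with k held fixed, as is common practice; the paper's data-cleaning step (removing price points at unstable brain concentrations) is not reproduced, and all values are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential_demand(C, Q0, alpha, k=2.0):
    """Hursh-Silberberg exponential demand: log10 consumption as a
    function of unit price C.  Q0 is consumption at zero price and
    alpha the rate of decline; k (the log10 range of consumption) is
    fixed here, an assumption of this sketch."""
    return np.log10(Q0) + k * (np.exp(-alpha * Q0 * C) - 1.0)

price = np.logspace(-1, 2, 25)                 # unit price (responses/mg)
logQ = exponential_demand(price, 1.2, 0.003)   # synthetic demand data
(Q0, alpha), _ = curve_fit(exponential_demand, price, logQ,
                           p0=[1.0, 0.01], bounds=([0.01, 1e-5], [10.0, 1.0]))
```

From the fitted parameters, the abstract's graphical quantities follow analytically; for example Pmax, the price of maximal responding, is a closed-form function of Q0, alpha, and k.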
2017-11-01
[Report excerpt] ...sent from light-emitting diodes (LEDs) of 5 colors (green, red, white, amber, and blue). Experiment 1 involved controlled laboratory measurements of... [Front-matter excerpt] Fig. A-4: Red LED calibration curves and quadratic curve fits with R^2 values; Fig. A-5: Green LED calibration curves and quadratic curve fits with R^2 values; Table A-4: Red LED calibration measurements; Table A-5: Green LED...
Methods for the Precise Locating and Forming of Arrays of Curved Features into a Workpiece
Gill, David Dennis; Keeler, Gordon A.; Serkland, Darwin K.; Mukherjee, Sayan D.
2008-10-14
Methods for manufacturing high precision arrays of curved features (e.g. lenses) in the surface of a workpiece are described, utilizing orthogonal sets of inter-fitting locating grooves to mate a workpiece to a workpiece holder mounted to the spindle face of a rotating machine tool. The matching inter-fitting groove sets in the workpiece and the chuck allow precise, non-kinematic indexing of the workpiece to locations defined in two orthogonal directions perpendicular to the turning axis of the machine tool. At each location on the workpiece a curved feature can then be on-center machined to create arrays of curved features on the workpiece. The averaging effect of the corresponding sets of inter-fitting grooves provides for precise repeatability in determining the relative locations of the centers of each of the curved features in an array of curved features.
Non-linear Multidimensional Optimization for use in Wire Scanner Fitting
NASA Astrophysics Data System (ADS)
Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; CASA and Accelerator Ops Collaboration
2013-10-01
To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle swarm. In this Python-implemented approach, augmenting the locally-convergent NCG with one of the globally-convergent methods ensures the quality, robustness, and automation of curve-fitting. After comparing the methods, we establish that given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement between the fit and the data. NCG is the fastest method, so it is the first to attempt data-fitting. The curve-fitting procedure escalates to one of the globally-convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method yields an optimal signal fit and can easily be applied to similar problems. Financial support from DoE, NSF, ODU, DoD, and Jefferson Lab.
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived, which is solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60 bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
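The quadratic-expansion idea behind NLINEAR can be illustrated with a generic Gauss-Newton iteration (a textbook sketch, not the NLINEAR Fortran source): truncating the Taylor expansion of chi-square yields, at each step, the weighted normal equations (J^T W J) dp = J^T W r.

```python
import numpy as np

def gauss_newton(f, jac, p0, x, y, w=None, iters=20):
    """Gauss-Newton minimization of weighted chi-square.

    Each step solves the normal equations (J^T W J) dp = J^T W r,
    obtained by keeping only the leading term of the Taylor expansion
    of chi-square about the current parameter estimate."""
    p = np.asarray(p0, dtype=float)
    W = np.eye(len(x)) if w is None else np.diag(w)
    for _ in range(iters):
        r = y - f(x, p)                 # residuals at current parameters
        J = jac(x, p)                   # Jacobian of the model
        dp = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        p = p + dp
        if np.linalg.norm(dp) < 1e-12:  # converged
            break
    return p

# Example: fit y = p0 * exp(p1 * x) from a meaningful initial estimate,
# echoing NLINEAR's requirement of good starting values for convergence.
f = lambda x, p: p[0] * np.exp(p[1] * x)
jac = lambda x, p: np.column_stack([np.exp(p[1] * x),
                                    p[0] * x * np.exp(p[1] * x)])
x = np.linspace(0.0, 2.0, 30)
y = f(x, [2.0, -1.3])
p = gauss_newton(f, jac, [1.5, -1.0], x, y)
```

As the abstract notes, the method converges quickly from a reasonable initial estimate but can diverge from a poor one, which is why practical codes add damping (Levenberg-Marquardt) on top of this step.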
Automated Estimation of the Orbital Parameters of Jupiter's Moons
NASA Astrophysics Data System (ADS)
Western, Emma; Ruch, Gerald T.
2016-01-01
Every semester the Physics Department at the University of St. Thomas has the Physics 104 class complete a Jupiter lab. This involves taking around twenty images of Jupiter and its moons with the telescope at the University of St. Thomas Observatory over the course of a few nights. The students then take each image, find the distance from each moon to Jupiter, and plot the distances versus the elapsed time for the corresponding image. Students use the plot to fit four sinusoidal curves, one for each of Jupiter's moons. I created a script that automates this process for the professor. It takes the list of images and creates a region file used by the students to measure the distance from the moons to Jupiter, a PNG image containing the graph of all the data points and the fitted curves of the four moons, and a CSV file that contains the list of images, the date and time each image was taken, the elapsed time since the first image, and the distances to Jupiter for Io, Europa, Ganymede, and Callisto. This is important because it lets the professor spend more time working with the students and answering questions, as opposed to spending time fitting the curves of the moons on the graph, which can be time consuming.
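The per-moon sinusoid fit the script automates can be sketched as below. The separation data and starting guesses are illustrative (an Io-like period is used), not observatory measurements, and the actual script's fitting routine is not shown in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def moon_orbit(t, A, P, phi):
    """Apparent moon-Jupiter separation vs. time: a sinusoid whose
    amplitude A is the projected orbital radius (in image units) and
    P the orbital period in days."""
    return A * np.sin(2.0 * np.pi * t / P + phi)

t = np.linspace(0.0, 3.0, 20)          # elapsed time over a few nights
sep = moon_orbit(t, 5.9, 1.77, 0.4)    # synthetic Io-like separations
(A, P, phi), _ = curve_fit(moon_orbit, t, sep, p0=[5.0, 1.8, 0.0])
```

In the lab, one such fit per moon recovers the four periods, which students can then compare against Kepler's third law.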
Calibration and accuracy analysis of a focused plenoptic camera
NASA Astrophysics Data System (ADS)
Zeller, N.; Quint, F.; Stilla, U.
2014-08-01
In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated by using a method which is already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve fitting approach, which is based on Taylor-series approximation. Both model-based methods show significant advantages compared to the curve fitting method. They need fewer reference points for calibration than the curve fitting method and, moreover, supply a function which is valid beyond the range of calibration. In addition, the depth-map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and is compared to the analytical evaluation.
Wei, Qiuning; Wei, Yuan; Liu, Fangfang; Ding, Yalei
2015-10-01
To investigate the method for uncertainty evaluation of the determination of tin and its compounds in the air of the workplace by flame atomic absorption spectrometry. The national standards GBZ/T 160.28-2004 and JJF 1059-1999 were used to build a mathematical model of the determination of tin and its compounds in workplace air and to calculate the components of uncertainty. In the determination of tin and its compounds in workplace air by flame atomic absorption spectrometry, the uncertainty for the concentration of the standard solution, the atomic absorption spectrophotometer, sample digestion, parallel determination, least-squares fitting of the calibration curve, and sample collection was 0.436%, 0.13%, 1.07%, 1.65%, 3.05%, and 2.89%, respectively. The combined uncertainty was 9.3%. The concentration of tin in the test sample was 0.132 mg/m³, and the expanded uncertainty for the measurement was 0.012 mg/m³ (k=2). The dominant uncertainty in the determination of tin and its compounds in workplace air comes from least-squares fitting of the calibration curve and from sample collection. Quality control should be improved in the processes of calibration curve fitting and sample collection.
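The quoted figures follow from root-sum-square combination of the six components, then expansion with coverage factor k = 2. The sketch below reproduces the arithmetic, under the assumption (consistent with the numbers) that the reported 9.3% is the k = 2 expanded relative uncertainty:

```python
import numpy as np

# Relative uncertainty components listed in the abstract, in percent:
# standard solution, spectrophotometer, digestion, parallel
# determination, calibration-curve fitting, sample collection.
components = np.array([0.436, 0.13, 1.07, 1.65, 3.05, 2.89])

u_c = np.sqrt(np.sum(components ** 2))   # combined standard uncertainty, %
U = 2.0 * u_c                            # expanded uncertainty, k = 2, %

concentration = 0.132                    # measured tin concentration, mg/m^3
U_abs = concentration * U / 100.0        # expanded uncertainty, mg/m^3
```

The two largest terms under the square root (calibration fitting and sample collection) dominate the sum, which is exactly the abstract's conclusion about where quality control matters most.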
NASA Astrophysics Data System (ADS)
Horvath, Sarah; Myers, Sam; Ahlers, Johnathon; Barnes, Jason W.
2017-10-01
Stellar seismic activity produces variations in brightness that introduce oscillations into transit light curves, which can create challenges for traditional fitting models. These oscillations disrupt baseline stellar flux values and potentially mask transits. We develop a model that removes these oscillations from transit light curves by minimizing the significance of each oscillation in frequency space. By removing stellar variability, we prepare each light curve for traditional fitting techniques. We apply our model to the δ Scuti star KOI-976 and demonstrate that our variability subtraction routine successfully allows for measuring bulk system characteristics using traditional light curve fitting. These results open a new window for characterizing bulk system parameters of planets orbiting seismically active stars.
On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions.
López, S; France, J; Odongo, N E; McBride, R A; Kebreab, E; AlZahal, O; McBride, B W; Dijkstra, J
2015-04-01
Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records corresponding to 122 first, 99 second, and 92 third parity individual lactation curves. The functions were fitted using nonlinear regression procedures, and their performance was assessed using goodness-of-fit statistics (coefficient of determination, residual mean squares, Akaike information criterion, and the correlation and concordance coefficients between observed and adjusted milk yields at several days in milk). Overall, all the growth functions evaluated showed an acceptable fit to the cumulative milk production curves, with the Richards equation ranking first (smallest Akaike information criterion) followed by the Morgan equation. Differences among the functions in their goodness-of-fit were enlarged when fitted to average curves by parity, where the sigmoidal functions with a variable point of inflection (Richards and Morgan) outperformed the other 4 equations. All the functions provided satisfactory predictions of milk yield (calculated from the first derivative of the functions) at different lactation stages, from early to late lactation. The Richards and Morgan equations provided the most accurate estimates of peak yield and total milk production per 305-d lactation, whereas the least accurate estimates were obtained with the logistic equation. In conclusion, classical growth functions (especially sigmoidal functions with a variable point of inflection) proved to be feasible alternatives to fit cumulative milk production curves of dairy cows, resulting in suitable statistical performance and accurate estimates of lactation traits. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
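As a concrete example of fitting one of the classical growth functions to a cumulative lactation curve, the sketch below fits a Gompertz curve with SciPy; the daily milk yield is then its first derivative, as used in the study. The parameter values are illustrative synthetic data, not taken from the Canadian Holstein records.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    """Gompertz growth function for cumulative milk yield: a is the
    asymptotic total yield, b and c shape the curve."""
    return a * np.exp(-b * np.exp(-c * t))

def daily_yield(t, a, b, c):
    # First derivative of the Gompertz curve = instantaneous milk yield,
    # from which peak yield and its timing can be located.
    return a * b * c * np.exp(-c * t) * np.exp(-b * np.exp(-c * t))

t = np.arange(1.0, 306.0)                  # days in milk, 305-d lactation
cum = gompertz(t, 9000.0, 4.0, 0.02)       # kg, synthetic cumulative curve
(a, b, c), _ = curve_fit(gompertz, t, cum, p0=[8000.0, 3.0, 0.03])
```

Fitting the cumulative curve rather than the daily yields is the study's approach; lactation traits such as peak yield then come from evaluating `daily_yield` at the fitted parameters.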
Neutron Multiplicity: LANL W Covariance Matrix for Curve Fitting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
2016-12-08
In neutron multiplicity counting one may fit a curve by minimizing an objective function, χ^2_n. The objective function includes the inverse of an n×n matrix of covariances, W. The inverse of the W matrix has a closed-form solution. In addition, W^-1 is a tridiagonal matrix. The closed form and tridiagonal nature allow for a simpler expression of the objective function χ^2_n. Minimization of this simpler expression will provide the optimal parameters for the fitted curve.
Direct Simulation of Magnetic Resonance Relaxation Rates and Line Shapes from Molecular Trajectories
Rangel, David P.; Baveye, Philippe C.; Robinson, Bruce H.
2012-01-01
We simulate spin relaxation processes, which may be measured by either continuous wave or pulsed magnetic resonance techniques, using trajectory-based simulation methodologies. The spin–lattice relaxation rates are extracted numerically from the relaxation simulations. The rates obtained from the numerical fitting of the relaxation curves are compared to those obtained by direct simulation from the relaxation Bloch–Wangsness–Abragam–Redfield theory (BWART). We have restricted our study to anisotropic rigid-body rotational processes, and to the chemical shift anisotropy (CSA) and a single spin–spin dipolar (END) coupling mechanisms. Examples using electron paramagnetic resonance (EPR) nitroxide and nuclear magnetic resonance (NMR) deuterium quadrupolar systems are provided. The objective is to compare those rates obtained by numerical simulations with the rates obtained by BWART. There is excellent agreement between the simulated and BWART rates for a Hamiltonian describing a single spin (an electron) interacting with the bath through the chemical shift anisotropy (CSA) mechanism undergoing anisotropic rotational diffusion. In contrast, when the Hamiltonian contains both the chemical shift anisotropy (CSA) and the spin–spin dipolar (END) mechanisms, the decay rate of a single exponential fit of the simulated spin–lattice relaxation rate is up to a factor of 0.2 smaller than that predicted by BWART. When the relaxation curves are fit to a double exponential, the slow and fast rates extracted from the decay curves bound the BWART prediction. An extended BWART theory, in the literature, includes the need for multiple relaxation rates and indicates that the multiexponential decay is due to the combined effects of direct and cross-relaxation mechanisms. PMID:22540276
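The single- versus double-exponential fitting comparison described above can be reproduced on synthetic data. The rates and weights below are arbitrary demonstration values, not the simulated EPR/NMR trajectories; the point is only that a single-exponential rate fit to a genuinely bi-exponential decay falls between the two true rates.

```python
import numpy as np
from scipy.optimize import curve_fit

single = lambda t, A, R: A * np.exp(-R * t)
double = lambda t, A1, R1, A2, R2: (A1 * np.exp(-R1 * t) +
                                    A2 * np.exp(-R2 * t))

t = np.linspace(0.0, 10.0, 200)
M = double(t, 0.6, 1.0, 0.4, 0.2)        # "true" bi-exponential decay

# Fit both models to the same relaxation curve.
(pA, pR), _ = curve_fit(single, t, M, p0=[1.0, 0.5])
p2, _ = curve_fit(double, t, M, p0=[0.5, 1.5, 0.5, 0.1])
rates = sorted([p2[1], p2[3]])           # slow and fast extracted rates
```

As in the abstract, the slow and fast rates from the double-exponential fit bracket the single-exponential rate, which is why a single-rate summary of such a decay is systematically biased.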
Pope, Noah G.; Veirs, Douglas K.; Claytor, Thomas N.
1994-01-01
The specific gravity or solute concentration of a process fluid solution located in a selected structure is determined by obtaining a resonance response spectrum of the fluid/structure over a range of frequencies that are outside the response of the structure itself. A fast Fourier transform (FFT) of the resonance response spectrum is performed to form a set of FFT values. A peak value for the FFT values is determined, e.g., by curve fitting, to output a process parameter that is functionally related to the specific gravity and solute concentration of the process fluid solution. Calibration curves are required to correlate the peak FFT value over the range of expected specific gravities and solute concentrations in the selected structure.
Biological growth functions describe published site index curves for Lake States timber species.
Allen L. Lundgren; William A. Dolid
1970-01-01
Two biological growth functions, an exponential-monomolecular function and a simple monomolecular function, have been fit to published site index curves for 11 Lake States tree species: red, jack, and white pine, balsam fir, white and black spruce, tamarack, white-cedar, aspen, red oak, and paper birch. Both functions closely fit all published curves except those for...
Computer Programs in Marine Science: Key to Oceanographic Records Documentation No. 5.
ERIC Educational Resources Information Center
Firestone, Mary A.
Presented are abstracts of 700 computer programs in marine science. The programs listed are categorized under a wide range of headings which include physical oceanography, chemistry, coastal and estuarine processes, biology, pollution, air-sea interaction and heat budget, navigation and charting, curve fitting, and applied mathematics. The…
Howard, Robert W
2014-09-01
The power law of practice holds that a power function best interrelates skill performance and amount of practice. However, the law's validity and generality remain debated. Some researchers argue that it is an artifact of averaging individual exponential curves, while others question whether the law generalizes to complex skills and to performance measures other than response time. The present study tested the power law's generality to development, over many years, of a very complex cognitive skill, chess playing, with 387 skilled participants, most of whom were grandmasters. A power or logarithmic function best fit the grouped data, but individuals showed much variability. An exponential function usually was the worst fit to individual data. Groups differing in chess talent were compared: a power function best fit the group curve for the more talented players, while a quadratic function best fit that for the less talented. After extreme amounts of practice, a logarithmic function best fit the grouped data, but a quadratic function best fit most individual curves. Individual variability is great, and neither a power law nor an exponential law is the best description of individual chess skill development. Copyright © 2014 Elsevier B.V. All rights reserved.
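The model-comparison step described above can be sketched by fitting each candidate law through linearization and comparing sums of squared errors in the original space; this is a simplified stand-in for the study's actual fitting procedure:

```python
import math

def ols(xs, ys):
    # ordinary least squares intercept and slope
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def best_practice_law(trials, times):
    # fit power, exponential, and logarithmic laws by linearization,
    # then compare sums of squared errors in the untransformed space
    fits = {}
    a, b = ols([math.log(n) for n in trials], [math.log(t) for t in times])
    fits["power"] = [math.exp(a) * n ** b for n in trials]          # t = a*n^b
    a, b = ols(list(trials), [math.log(t) for t in times])
    fits["exponential"] = [math.exp(a + b * n) for n in trials]     # t = a*e^(bn)
    a, b = ols([math.log(n) for n in trials], list(times))
    fits["logarithmic"] = [a + b * math.log(n) for n in trials]     # t = a+b*ln n
    sse = {name: sum((p - t) ** 2 for p, t in zip(pred, times))
           for name, pred in fits.items()}
    return min(sse, key=sse.get)
```

With real, noisy individual data the winner can change from subject to subject, which is exactly the variability the abstract reports.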
Miao, Zewei; Xu, Ming; Lathrop, Richard G; Wang, Yufei
2009-02-01
A review of the literature revealed that a variety of methods are currently used for fitting net CO2 assimilation versus chloroplastic CO2 concentration (A-Cc) curves, resulting in considerable differences in the estimated A-Cc parameters [including the maximum ribulose 1,5-bisphosphate carboxylase/oxygenase (Rubisco) carboxylation rate (Vcmax), potential light-saturated electron transport rate (Jmax), leaf dark respiration in the light (Rd), mesophyll conductance (gm) and triose-phosphate utilization (TPU)]. In this paper, we examined the impacts of fitting methods on the estimates of Vcmax, Jmax, TPU, Rd and gm using grid search and non-linear fitting techniques. Our results suggest that the fitting methods significantly affected the predictions of Rubisco-limited (Ac), ribulose 1,5-bisphosphate-limited (Aj) and TPU-limited (Ap) curves and leaf photosynthesis velocities because of inconsistent estimates of Vcmax, Jmax, TPU, Rd and gm, but they barely influenced the Jmax : Vcmax, Vcmax : Rd and Jmax : TPU ratios. In terms of fitting accuracy, simplicity of fitting procedures and sample size requirements, we recommend combining grid search and non-linear techniques to directly and simultaneously fit Vcmax, Jmax, TPU, Rd and gm to the whole A-Cc curve, in contrast to the conventional method, which fits Vcmax, Rd or gm first and then solves for Vcmax, Jmax and/or TPU with Vcmax, Rd and/or gm held as constants.
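The recommended combination of a grid search (for a robust starting point) and a non-linear refinement can be illustrated on a two-parameter saturating curve; the model A = Vmax·C/(K + C) below is a simplified stand-in for the full photosynthesis equations, not the model used in the paper:

```python
def fit_grid_then_refine(cs, a_obs, v_grid, k_grid, steps=60):
    # sum of squared errors for the saturating model A = Vmax*C/(K + C)
    def sse(v, k):
        return sum((v * c / (k + c) - a) ** 2 for c, a in zip(cs, a_obs))

    # stage 1: coarse grid search supplies the starting point
    v, k = min(((vv, kk) for vv in v_grid for kk in k_grid),
               key=lambda p: sse(*p))

    # stage 2: coordinate-wise local refinement with a shrinking step
    step = 0.5
    for _ in range(steps):
        moved = False
        for dv, dk in ((step, 0), (-step, 0), (0, step), (0, -step)):
            if sse(v + dv, k + dk) < sse(v, k):
                v, k = v + dv, k + dk
                moved = True
        if not moved:
            step *= 0.5
    return v, k
```

The grid stage guards against the bad local minima that plague purely local non-linear fits of saturating curves.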
NASA Technical Reports Server (NTRS)
Rodrigues, C. V.; Magalhaes, A. M.; Coyne, G. V.
1995-01-01
We study the dust in the Small Magellanic Cloud using our polarization and extinction data (Paper 1) and existing dust models. The data suggest that the monotonic SMC extinction curve is related to values of lambda(sub max), the wavelength of maximum polarization, which are on the average smaller than the mean for the Galaxy. On the other hand, AZV 456, a star with an extinction similar to that for the Galaxy, shows a value of lambda(sub max) similar to the mean for the Galaxy. We discuss simultaneous dust model fits to extinction and polarization. Fits to the wavelength dependent polarization data are possible for stars with small lambda(sub max). In general, they imply dust size distributions which are narrower and have smaller mean sizes compared to typical size distributions for the Galaxy. However, stars with lambda(sub max) close to the Galactic norm, which also have a narrower polarization curve, cannot be fit adequately. This holds true for all of the dust models considered. The best fits to the extinction curves are obtained with a power law size distribution by assuming that the cylindrical and spherical silicate grains have a volume distribution which is continuous from the smaller spheres to the larger cylinders. The size distribution for the cylinders is taken from the fit to the polarization. The 'typical', monotonic SMC extinction curve can be fit well with graphite and silicate grains if a small fraction of the SMC carbon is locked up in the grain. However, amorphous carbon and silicate grains also fit the data well. AZV456, which has an extinction curve similar to that for the Galaxy, has a UV bump which is too blue to be fit by spherical graphite grains.
NASA Astrophysics Data System (ADS)
Lu, Jun; Xiao, Jun; Gao, Dong Jun; Zong, Shu Yu; Li, Zhu
2018-03-01
In the production of Association of American Railroads (AAR) locomotive wheel-sets, the press-fit curve is the most important basis for assessing the reliability of wheel-set assembly. In the past, most production enterprises relied mainly on manual inspection to judge assembly quality, which led to cases of misjudgment. For this reason, research on the standard was carried out, and the automatic judgment of the press-fit curve was analysed and designed, so as to provide guidance for locomotive wheel-set production based on the AAR standard.
Nishith, Pallavi; Resick, Patricia A.; Griffin, Michael G.
2010-01-01
Curve estimation techniques were used to identify the pattern of therapeutic change in female rape victims with posttraumatic stress disorder (PTSD). Within-session data on the Posttraumatic Stress Disorder Symptom Scale were obtained, in alternate therapy sessions, on 171 women. The final sample of treatment completers included 54 prolonged exposure (PE) and 54 cognitive-processing therapy (CPT) completers. For both PE and CPT, a quadratic function provided the best fit for the total PTSD, reexperiencing, and arousal scores. However, a difference in the line of best fit was observed for the avoidance symptoms. Although a quadratic function still provided a better fit for the PE avoidance, a linear function was more parsimonious in explaining the CPT avoidance variance. Implications of the findings are discussed. PMID:12182271
ERIC Educational Resources Information Center
Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey
2009-01-01
The purpose of this study was to assess the model fit of a 2PL through comparison with the nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that three nonparametric procedures implemented produced ICCs that are similar to that of the 2PL for items simulated to fit the 2PL. However for misfitting items,…
The effect of dimethylsulfoxide on the water transport response of rat hepatocytes during freezing.
Smith, D J; Schulte, M; Bischof, J C
1998-10-01
Successful improvement of cryopreservation protocols for cells in suspension requires knowledge of how such cells respond to the biophysical stresses of freezing (intracellular ice formation, water transport) while in the presence of a cryoprotective agent (CPA). This work investigates the biophysical water transport response in a clinically important cell type--isolated hepatocytes--during freezing in the presence of dimethylsulfoxide (DMSO). Sprague-Dawley rat liver hepatocytes were frozen in Williams E media supplemented with 0, 1, and 2 M DMSO, at rates of 5, 10, and 50 degrees C/min. The water transport was measured by cell volumetric changes as assessed by cryomicroscopy and image analysis. Assuming that water is the only species transported under these conditions, a water transport model of the form dV/dT = f(Lpg([CPA]), ELp([CPA]), T(t)) was curve-fit to the experimental data to obtain the biophysical parameters of water transport--the reference hydraulic permeability (Lpg) and activation energy of water transport (ELp)--for each DMSO concentration. These parameters were estimated two ways: (1) by curve-fitting the model to the average volume of the pooled cell data, and (2) by curve-fitting individual cell volume data and averaging the resulting parameters. The experimental data showed that less dehydration occurs during freezing at a given rate in the presence of DMSO at temperatures between 0 and -10 degrees C. However, dehydration was able to continue at lower temperatures (< -10 degrees C) in the presence of DMSO. The values of Lpg and ELp obtained using the individual cell volume data both decreased from their non-CPA values--4.33 x 10(-13) m3/N-s (2.69 microns/min-atm) and 317 kJ/mol (75.9 kcal/mol), respectively--to 0.873 x 10(-13) m3/N-s (0.542 micron/min-atm) and 137 kJ/mol (32.8 kcal/mol), respectively, in 1 M DMSO and 0.715 x 10(-13) m3/N-s (0.444 micron/min-atm) and 107 kJ/mol (25.7 kcal/mol), respectively, in 2 M DMSO. 
The trends in the pooled volume values for Lpg and ELp were very similar, but the overall fit was considered worse than for the individual volume parameters. A unique way of presenting the curve-fitting results supports a clear trend of reduction of both biophysical parameters in the presence of DMSO, and no clear trend in cooling rate dependence of the biophysical parameters. In addition, these results suggest that close proximity of the experimental cell volume data to the equilibrium volume curve may significantly reduce the efficiency of the curve-fitting process.
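The two estimation routes compared in the abstract — averaging per-cell parameters versus fitting the pooled average curve — can disagree even in a toy model. A sketch with a hypothetical single-exponential dehydration curve (not the paper's actual water transport equation):

```python
import math

def fit_decay_rate(ts, vs):
    # log-linear least squares for k in V(t) = V0 * exp(-k*t)
    n = len(ts)
    mt = sum(ts) / n
    ys = [math.log(v) for v in vs]
    my = sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mt) ** 2 for t in ts))
    return -slope

ts = [0, 1, 2, 3, 4]
# three hypothetical cells dehydrating at different rates
cells = [[math.exp(-k * t) for t in ts] for k in (0.2, 0.4, 0.8)]

# route 1: fit each cell individually, then average the parameters
k_individual = sum(fit_decay_rate(ts, c) for c in cells) / len(cells)

# route 2: fit the pooled (averaged) volume curve
pooled = [sum(c[i] for c in cells) / len(cells) for i in range(len(ts))]
k_pooled = fit_decay_rate(ts, pooled)
# k_pooled underestimates the mean rate: an average of exponentials
# decays more slowly than an exponential with the average rate
```

This systematic gap between the two routes is one reason the abstract reports the pooled-curve fit as worse than the individual-cell fits.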
Statistical aspects of modeling the labor curve.
Zhang, Jun; Troendle, James; Grantz, Katherine L; Reddy, Uma M
2015-06-01
In a recent review by Cohen and Friedman, several statistical questions on modeling labor curves were raised. This article illustrates that asking data to fit a preconceived model or letting a sufficiently flexible model fit observed data is the main difference in principles of statistical modeling between the original Friedman curve and our average labor curve. An evidence-based approach to construct a labor curve and establish normal values should allow the statistical model to fit observed data. In addition, the presence of the deceleration phase in the active phase of an average labor curve was questioned. Forcing a deceleration phase to be part of the labor curve may have artificially raised the speed of progression in the active phase with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor because of variations in individual labor pattern and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and contemporary obstetric population into account. Copyright © 2015 Elsevier Inc. All rights reserved.
The potential of artificial aging for modelling of natural aging processes of ballpoint ink.
Weyermann, Céline; Spengler, Bernhard
2008-08-25
Artificial aging has been used to reproduce natural aging processes at an accelerated pace. Questioned documents were exposed to light or high temperature in a well-defined manner in order to simulate an increased age. This may be used to study the aging processes or to date documents by reproducing their aging curve. Ink was studied especially because it is deposited on the paper when a document, such as a contract, is produced. Once on the paper, aging processes start through degradation of dyes, drying of solvents and polymerisation of resins. Modelling of dye and solvent aging was attempted. These processes, however, follow complex pathways, influenced by many factors which can be classified into three major groups: ink composition, paper type and storage conditions. The influence of these factors is such that different aging states can be obtained for an identical point in time. Storage conditions in particular are difficult to simulate, as they are dependent on environmental conditions (e.g. intensity and dose of light, temperature, air flow, humidity) and cannot be controlled in the natural aging of questioned documents. The problem therefore lies more in the variety of different conditions a questioned document might be exposed to during its natural aging than in the simulation of such conditions in the laboratory. Nevertheless, a precise modelling of natural aging curves based on artificial aging curves is obtained when performed on the same paper and ink. A standard model for aging processes of ink on paper is therefore presented that is based on a fit of aging curves to a power law of solvent concentrations as a function of time. A mathematical transformation of artificial aging curves into modelled natural aging curves results in excellent overlap with data from real natural aging processes.
On the reduction of occultation light curves. [stellar occultations by planets]
NASA Technical Reports Server (NTRS)
Wasserman, L.; Veverka, J.
1973-01-01
The two basic methods of reducing occultation light curves - curve fitting and inversion - are reviewed and compared. It is shown that the curve fitting methods have severe problems of nonuniqueness. In addition, in the case of occultation curves dominated by spikes, it is not clear that such solutions are meaningful. The inversion method does not suffer from these drawbacks. Methods of deriving temperature profiles from refractivity profiles are then examined. It is shown that, although the temperature profiles are sensitive to small errors in the refractivity profile, accurate temperatures can be obtained, particularly at the deeper levels of the atmosphere. The ambiguities that arise when the occultation curve straddles the turbopause are briefly discussed.
NASA Astrophysics Data System (ADS)
Zahir, N.; Ali, A.
2015-12-01
Lake Urmia has undergone a drastic shrinkage in size over the past few decades. The initial intention of this paper is to present an approach for determining the so-called "salient times" during which the shrinkage trend accelerates or decelerates. To find these salient times, a quasi-continuous curve was optimally fitted to the Topex altimetry data within the period 1998 to 2006. The inflection points of the fitted curve were then computed using a second-derivative approach. The water volume was also computed using 16 cloud-free Landsat images of the lake within the period 1998 to 2006. In the first stage of the water volume calculation, the pixels of the lake were segmented using the Automated Water Extraction Index (AWEI), and the shorelines of the lake were extracted by a boundary-detecting operator applied to the generated binary image of the lake surface. The water volume fluctuation rate was then computed under the assumption that two successive lake surfaces and their corresponding water level difference approximately form a truncated pyramid. The analysis of the water level fluctuation rates was further extended by a sinusoidal curve fitted to the Topex altimetry data; this curve was intended to model the seasonal fluctuations of the water level. In the final stage of this article, the correlations between the fluctuation rates and the precipitation and temperature variations were also numerically determined. This paper reports these stages in some detail.
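The salient-time criterion — zero crossings of the second derivative of the fitted curve — can be sketched on a sampled series using discrete second differences (the cubic test series below is illustrative, not altimetry data):

```python
def inflection_indices(ys):
    # discrete second derivative at each interior sample
    d2 = [ys[i - 1] - 2 * ys[i] + ys[i + 1] for i in range(1, len(ys) - 1)]
    # a sign change in d2 flags an inflection: the trend switches
    # between acceleration and deceleration ("salient time");
    # the returned index is the sample where the change begins
    return [i + 1 for i in range(len(d2) - 1) if d2[i] * d2[i + 1] < 0]

# a cubic has exactly one inflection, at its centre of symmetry
print(inflection_indices([(t - 4.5) ** 3 for t in range(10)]))  # -> [4]
```

On a fitted analytic curve the same test would be applied to the exact second derivative rather than finite differences.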
An accurate surface topography restoration algorithm for white light interferometry
NASA Astrophysics Data System (ADS)
Yuan, He; Zhang, Xiangchao; Xu, Min
2017-10-01
As an important measuring technique, white light interferometry enables fast, non-contact measurement and is therefore widely used in the field of ultra-precision engineering. However, the traditional algorithms for recovering surface topographies have limitations. In this paper, we propose a new algorithm to solve these problems: a combination of the Fourier transform and an improved polynomial fitting method. Because the white light interference signal is usually expressed as a cosine signal whose amplitude is modulated by a Gaussian function, its fringe visibility is not constant and varies with the scanning position. The interference signal is first processed by Fourier transform; the positive-frequency part is then selected and moved back to the center of the amplitude-frequency curve. In order to restore the surface topography, a polynomial fitting method is used to fit the amplitude curve after the inverse Fourier transform and obtain the corresponding topography information. The new method is then compared to the traditional algorithms, and it is shown that the aforementioned drawbacks can be effectively overcome. The relative error is less than 0.8%.
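A sketch of the pipeline, using the analytic-signal envelope (a common variant of the positive-frequency selection described above) and a parabolic peak interpolation in place of the paper's polynomial fit; the Gaussian-modulated test signal is synthetic:

```python
import cmath, math

def dft(xs, inverse=False):
    # O(n^2) discrete Fourier transform, adequate for a short fringe scan
    n, s = len(xs), (1 if inverse else -1)
    out = [sum(x * cmath.exp(s * 2j * math.pi * j * k / n)
               for k, x in enumerate(xs)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def envelope_peak(signal):
    n = len(signal)
    spec = dft([complex(v) for v in signal])
    # keep only the positive-frequency half (analytic signal)
    analytic = ([spec[0]] + [2 * spec[k] for k in range(1, n // 2)]
                + [spec[n // 2]] + [0j] * (n - n // 2 - 1))
    env = [abs(v) for v in dft(analytic, inverse=True)]  # fringe envelope
    i = max(range(1, n - 1), key=lambda k: env[k])
    a, b, c = env[i - 1], env[i], env[i + 1]
    return i + 0.5 * (a - c) / (a - 2 * b + c)  # parabolic sub-sample peak
```

The envelope maximum marks the zero optical path difference position, i.e. the surface height at that pixel.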
An Apparatus for Sizing Particulate Matter in Solid Rocket Motors.
1984-06-01
accurately measured. A curve for sizing polydispersions was presented which was used by Cramer and Hansen [Refs. 2, 12]. Two-phase flow losses are often... (Figures: concentration; 5, 10, and 20 micron polystyrene, curve fit and two-angle method.)
Space-Based Observation Technology
2000-10-01
Conan, V. Michau, and S. Salem. Regularized multiframe myopic deconvolution from wavefront sensing. In Propagation through the Atmosphere III...specified false alarm rate PFA. Proceeding with curve fitting, one obtains a best-fit curve "10.1y14.2 - 0.2" as the detector for the target
Revisiting the Estimation of Dinosaur Growth Rates
Myhrvold, Nathan P.
2013-01-01
Previous growth-rate studies covering 14 dinosaur taxa, as represented by 31 data sets, are critically examined and reanalyzed by using improved statistical techniques. The examination reveals that some previously reported results cannot be replicated by using the methods originally reported; results from new methods are in many cases different, in both the quantitative rates and the qualitative nature of the growth, from results in the prior literature. Asymptotic growth curves, which have been hypothesized to be ubiquitous, are shown to provide best fits for only four of the 14 taxa. Possible reasons for non-asymptotic growth patterns are discussed; they include systematic errors in the age-estimation process and, more likely, a bias toward younger ages among the specimens analyzed. Analysis of the data sets finds that only three taxa include specimens that could be considered skeletally mature (i.e., having attained 90% of maximum body size predicted by asymptotic curve fits), and eleven taxa are quite immature, with the largest specimen having attained less than 62% of predicted asymptotic size. The three taxa that include skeletally mature specimens are included in the four taxa that are best fit by asymptotic curves. The totality of results presented here suggests that previous estimates of both maximum dinosaur growth rates and maximum dinosaur sizes have little statistical support. Suggestions for future research are presented. PMID:24358133
Laurson, Kelly R; Saint-Maurice, Pedro F; Welk, Gregory J; Eisenmann, Joey C
2017-08-01
Laurson, KR, Saint-Maurice, PF, Welk, GJ, and Eisenmann, JC. Reference curves for field tests of musculoskeletal fitness in U.S. children and adolescents: The 2012 NHANES National Youth Fitness Survey. J Strength Cond Res 31(8): 2075-2082, 2017-The purpose of the study was to describe current levels of musculoskeletal fitness (MSF) in U.S. youth by creating nationally representative age-specific and sex-specific growth curves for handgrip strength (including relative and allometrically scaled handgrip), modified pull-ups, and the plank test. Participants in the National Youth Fitness Survey (n = 1,453) were tested on MSF, aerobic capacity (via submaximal treadmill test), and body composition (body mass index [BMI], waist circumference, and skinfolds). Using LMS regression, age-specific and sex-specific smoothed percentile curves of MSF were created and existing percentiles were used to assign age-specific and sex-specific z-scores for aerobic capacity and body composition. Correlation matrices were created to assess the relationships between z-scores on MSF, aerobic capacity, and body composition. At younger ages (3-10 years), boys scored higher than girls for handgrip strength and modified pull-ups, but not for the plank. By ages 13-15, differences between the boys and girls curves were more pronounced, with boys scoring higher on all tests. Correlations between tests of MSF and aerobic capacity were positive and low-to-moderate in strength. Correlations between tests of MSF and body composition were negative, excluding absolute handgrip strength, which was inversely related to other MSF tests and aerobic capacity but positively associated with body composition. The growth curves herein can be used as normative reference values or a starting point for creating health-related criterion reference standards for these tests. 
Comparisons with prior national surveys of physical fitness indicate that some components of MSF have likely decreased in the United States over time.
NASA Technical Reports Server (NTRS)
Alston, D. W.
1981-01-01
The objective of this research was to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to remove the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
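The order-raising criterion — increase the degree of the least-squares fit until the residuals no longer carry a quadratic effect — can be sketched as follows (the tolerance is a placeholder; the study used analysis-of-variance tests instead):

```python
def polyfit(xs, ys, deg):
    # least-squares polynomial coefficients (lowest order first) via
    # normal equations and Gaussian elimination with partial pivoting
    m = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * p for a, p in zip(A[r], A[col])]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for i in reversed(range(m)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, m))) / A[i][i]
    return coef

def choose_order(xs, ys, max_deg=6, tol=1e-6):
    # raise the fit order until the residuals no longer carry a
    # significant quadratic component
    for deg in range(1, max_deg + 1):
        c = polyfit(xs, ys, deg)
        resid = [y - sum(ck * x ** k for k, ck in enumerate(c))
                 for x, y in zip(xs, ys)]
        q = polyfit(xs, resid, 2)[2]  # quadratic coefficient of residuals
        if abs(q) < tol:
            return deg
    return max_deg
```

Normal equations are adequate for the low orders used here; for high-degree fits an orthogonal-polynomial basis would be better conditioned.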
Białek, Marianna
2015-05-01
Physiotherapy for stabilization of the idiopathic scoliosis angle in growing children remains controversial. Notably, little data on the effectiveness of physiotherapy in children with Early Onset Idiopathic Scoliosis (EOIS) have been published. The aim of this study was to assess the results of FITS physiotherapy in a group of children with EOIS. The charts of the patients archived in a prospectively collected database were retrospectively reviewed. The inclusion criteria were: diagnosis of EOIS based on spine radiography, age below 10 years, both girls and boys, Cobb angle between 11° and 30°, Risser zero, FITS therapy, no other treatment (bracing), and a follow-up of at least 2 years from the initiation of treatment. The outcome criteria were as follows: for curve progression, a Cobb angle increase of 6° or more; for curve stabilization, a Cobb angle within 5° of the initial radiograph; for curve correction, a Cobb angle decrease of 6° or more at the final follow-up radiograph. There were 41 children with EOIS, 36 girls and 5 boys, mean age 7.7±1.3 years (range 4 to 9 years), who started FITS therapy. The curve pattern was single thoracic (5 children), single thoracolumbar (22 children) or double thoracic/thoracolumbar (14 children), for a total of 55 structural curvatures. The minimum follow-up was 2 years after initiation of the FITS treatment (maximum 16 years, mean 4.8 years). At follow-up the mean age was 12.5±3.4 years. Of the 41 children, 10 had passed the pubertal growth spurt at the final follow-up and 31 were still immature and continued FITS therapy. Of the 41 children, 27 improved, 13 were stable, and one progressed. Of the 55 structural curves, 32 improved, 22 were stable and one progressed. For the 55 structural curves, the Cobb angle decreased significantly from 18.0°±5.4° at first assessment to 12.5°±6.3° at the last evaluation, p<0.0001, paired t-test.
The angle of trunk rotation decreased significantly from 4.7°±2.9° to 3.2°±2.5° at the last evaluation, p<0.0001, paired t-test. FITS physiotherapy was effective in preventing curve progression in children with EOIS. Final postpubertal follow-up data are needed.
A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object
NASA Astrophysics Data System (ADS)
Winkler, A. W.; Zagar, B. G.
2013-08-01
An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. To this end, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquired image. The employed model allows for strictly separating the intrinsic and the extrinsic parameters, so the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized to measure other solids which cannot be characterized by the identification of simple geometric primitives.
A Global Optimization Method to Calculate Water Retention Curves
NASA Astrophysics Data System (ADS)
Maggi, S.; Caputo, M. C.; Turturro, A. C.
2013-12-01
Water retention curves (WRC) play a key role in the hydraulic characterization of soils and rocks. The behaviour of the medium is defined by relating the unsaturated water content to the matric potential. The experimental determination of WRCs requires an accurate and detailed measurement of the dependence of matric potential on water content, a time-consuming and error-prone process, in particular for rocky media. A complete experimental WRC needs at least a few tens of data points, distributed more or less uniformly from full saturation to oven dryness. Since each measurement requires waiting for steady-state conditions (from a few tens of minutes for soils up to several hours or days for rocks or clays), the whole process can take a few months. The experimental data are fitted to the most appropriate parametric model, such as the widely used van Genuchten, Brooks-Corey and Rossi-Nimmo models, to obtain the analytic WRC. We present here a new method for determining the parameters that best fit the models to the available experimental data. The method is based on differential evolution, an evolutionary computation algorithm particularly useful for multidimensional real-valued global optimization problems. With this method it is possible to strongly reduce the number of measurements necessary to optimize the model parameters that accurately describe the WRC of the samples, decreasing the time needed to adequately characterize the medium. In the present work, we have applied our method to calculate the WRCs of sedimentary carbonatic rocks of marine origin, belonging to the 'Calcarenite di Gravina' Formation (Middle Pliocene - Early Pleistocene) and coming from two different quarry districts in Southern Italy.
Figure: WRC curves calculated using the van Genuchten model by simulated annealing (dashed curve) and differential evolution (solid curve); the curves are calculated using 10 experimental data points randomly extracted from the full experimental dataset. Simulated annealing is not able to find the optimal solution with this reduced data set.
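A minimal rand/1/bin differential evolution fit of the van Genuchten model illustrates the approach; the residual water content θr and saturated content θs are fixed at hypothetical values, and the control parameters (population size, mutation factor) are ordinary defaults, not the authors' settings:

```python
import random

def van_genuchten(h, alpha, n, theta_r=0.05, theta_s=0.40):
    # van Genuchten retention model; theta_r/theta_s are placeholders
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

def differential_evolution(cost, bounds, pop=20, gens=150, f=0.7, cr=0.9, seed=1):
    # minimal rand/1/bin differential evolution with greedy selection
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    fs = [cost(x) for x in xs]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            trial = []
            for d in range(dim):
                if rng.random() < cr:  # mutate this coordinate
                    v = xs[a][d] + f * (xs[b][d] - xs[c][d])
                    v = min(max(v, bounds[d][0]), bounds[d][1])
                else:                  # inherit from the target vector
                    v = xs[i][d]
                trial.append(v)
            ft = cost(trial)
            if ft <= fs[i]:
                xs[i], fs[i] = trial, ft
    best = min(range(pop), key=fs.__getitem__)
    return xs[best], fs[best]
```

Because DE needs no gradients and searches the whole bounded box, it tolerates the sparse, noisy data sets that motivate the abstract.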
Determination of heat transfer coefficients in plastic French straws plunged in liquid nitrogen.
Santos, M Victoria; Sansinena, M; Chirife, J; Zaritzky, N
2014-12-01
The knowledge of the thermodynamic process during the cooling of reproductive biological systems is important to assess and optimize the cryopreservation procedures. The time-temperature curve of a sample immersed in liquid nitrogen enables the calculation of cooling rates and helps to determine whether it is vitrified or undergoes phase change transition. When dealing with cryogenic liquids, the temperature difference between the solid and the sample is high enough to cause boiling of the liquid, and the sample can undergo different regimes such as film and/or nucleate pool boiling. In the present work, the surface heat transfer coefficients (h) for plastic French straws plunged in liquid nitrogen were determined using the measurement of time-temperature curves. When straws filled with ice were used the cooling curve showed an abrupt slope change which was attributed to the transition of film into nucleate pool boiling regime. The h value that fitted each stage of the cooling process was calculated using a numerical finite element program that solves the heat transfer partial differential equation under transient conditions. In the cooling process corresponding to film boiling regime, the h that best fitted experimental results was h=148.12±5.4 W/m(2) K and for nucleate-boiling h=1355±51 W/m(2) K. These values were further validated by predicting the time-temperature curve for French straws filled with a biological fluid system (bovine semen-extender) which undergoes freezing. Good agreement was obtained between the experimental and predicted temperature profiles, further confirming the accuracy of the h values previously determined for the ice-filled straw. These coefficients were corroborated using literature correlations. The determination of the boiling regimes that govern the cooling process when plunging straws in liquid nitrogen constitutes an important issue when trying to optimize cryopreservation procedures. 
Furthermore, this information can lead to improvements in the design of cooling devices in the cryobiology field. Copyright © 2014 Elsevier Inc. All rights reserved.
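As a rough illustration of how a single h value can be recovered from a measured time-temperature curve, the sketch below fits a lumped-capacitance (Newton cooling) model to synthetic data. This is a deliberate simplification of the finite element approach used in the study; the straw geometry and material properties are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.optimize import curve_fit

T_inf = -196.0           # liquid nitrogen temperature, deg C
rho, cp = 917.0, 2100.0  # assumed density (kg/m^3) and heat capacity (J/kg K) of ice
V_over_A = 6.25e-4       # assumed volume-to-surface ratio of the straw, m

def lumped_cooling(t, h, T0):
    """Lumped-capacitance solution T(t) for a constant heat transfer coefficient h."""
    tau = rho * cp * V_over_A / h
    return T_inf + (T0 - T_inf) * np.exp(-t / tau)

# Synthetic "measured" curve generated with h = 150 W/m^2 K (film-boiling-like value)
t = np.linspace(0.0, 40.0, 200)
T_meas = lumped_cooling(t, 150.0, 20.0) + np.random.default_rng(0).normal(0, 0.5, t.size)

(h_fit, T0_fit), _ = curve_fit(lumped_cooling, t, T_meas, p0=[100.0, 15.0])
print(f"fitted h = {h_fit:.1f} W/m^2 K")
```

In the actual experiment a lumped model would be too crude (hence the finite element solver), but the fitting logic, adjusting h until the predicted curve matches the measured one, is the same.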
Transport of sulfadiazine in soil columns — Experiments and modelling approaches
NASA Astrophysics Data System (ADS)
Wehrhan, Anne; Kasteel, Roy; Simunek, Jirka; Groeneweg, Joost; Vereecken, Harry
2007-01-01
Antibiotics, such as sulfadiazine, reach agricultural soils directly through the manure of grazing livestock or indirectly through the spreading of manure or sewage sludge on fields. Knowledge about the fate of antibiotics in soils is crucial for assessing the environmental risk of these compounds, including possible transport to the groundwater. Transport of 14C-labelled sulfadiazine was investigated in disturbed soil columns at a constant flow rate of 0.26 cm h^-1 near saturation. Sulfadiazine was applied at different concentrations for either a short or a long pulse duration. Breakthrough curves of sulfadiazine and the non-reactive tracer chloride were measured. At the end of the leaching period the soil concentration profiles were determined. The peak maxima of the breakthrough curves were delayed by a factor of 2 to 5 compared to chloride, and the decreasing limbs were characterized by an extended tailing. The maximum relative concentrations differed, as did the eluted mass fractions, which ranged from 18 to 83% after 500 h of leaching. To identify the relevant sorption processes, breakthrough curves of sulfadiazine were fitted with a convective-dispersive transport model using different sorption concepts with one, two and three sorption sites. The breakthrough curves were fitted best by a three-site sorption model comprising two reversible kinetic sites and one irreversible sorption site. However, the simulated soil concentration profiles did not match the observations for any of the models used. Despite this incomplete process description, the results have implications for the transport behavior of sulfadiazine in the field: its leaching may be enhanced if it is frequently applied at higher concentrations.
Hu, Jiandong; Ma, Liuzheng; Wang, Shun; Yang, Jianming; Chang, Keke; Hu, Xinran; Sun, Xiaohui; Chen, Ruipeng; Jiang, Min; Zhu, Juanhua; Zhao, Yuanyuan
2015-01-01
Kinetic analysis of biomolecular interactions is widely used to quantify binding kinetic constants, determining how much of a complex is formed or dissociated within a given time span. Surface plasmon resonance (SPR) biosensors provide an essential approach to the analysis of biomolecular interactions, including antigen-antibody and receptor-ligand interaction processes. The binding affinity of the antibody for the antigen (or of the receptor for the ligand) reflects the biological activity of the control antibodies (or receptors) and the corresponding immune signal responses in the pathologic process. Moreover, both the association rate and the dissociation rate of the receptor for the ligand are substantial parameters for the study of signal transmission between cells. Experimental data may yield complicated real-time curves that do not fit the kinetic model well. This paper presents an analysis approach for biomolecular interactions based on the Marquardt algorithm. The algorithm was implemented in a homemade bioanalyzer to perform nonlinear curve fitting of the association and dissociation processes of the receptor and ligand. Compared with the results from the Newton iteration algorithm, the Marquardt algorithm not only reduces the dependence on the initial values, avoiding divergence, but also greatly reduces the number of iterative regressions. The association and dissociation rate constants ka and kd and the affinity parameters KA and KD for the biomolecular interaction were experimentally obtained as 6.969×10^5 mL·g^-1·s^-1, 0.00073 s^-1, 9.5466×10^8 mL·g^-1 and 1.0475×10^-9 g·mL^-1, respectively, from the injection of HBsAg solution at a concentration of 16 ng·mL^-1. The kinetic constants were evaluated distinctly using the data obtained from the curve-fitting results. PMID:26147997
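To make the Marquardt-style fitting concrete, here is a minimal sketch using SciPy's Levenberg-Marquardt implementation (not the authors' bioanalyzer code) to fit a 1:1 association-dissociation sensorgram. The concentration and rate constants below are illustrative, not the HBsAg values reported above.

```python
import numpy as np
from scipy.optimize import least_squares

C = 16e-9        # illustrative analyte concentration (M)
t_on = 300.0     # end of the association phase (s)

def sensorgram(t, Rmax, ka, kd):
    """1:1 binding: exponential association up to t_on, exponential decay after."""
    kobs = ka * C + kd
    Req = Rmax * ka * C / kobs                       # equilibrium response
    R_end = Req * (1.0 - np.exp(-kobs * t_on))       # response when flow switches
    return np.where(t <= t_on,
                    Req * (1.0 - np.exp(-kobs * t)),
                    R_end * np.exp(-kd * (t - t_on)))

t = np.linspace(0.0, 900.0, 450)
R_true = sensorgram(t, 120.0, 7e5, 7e-4)             # ka in M^-1 s^-1, kd in s^-1
R_meas = R_true + np.random.default_rng(1).normal(0, 0.5, t.size)

# Levenberg-Marquardt fit; x_scale handles the very different parameter magnitudes
fit = least_squares(lambda p: sensorgram(t, *p) - R_meas,
                    x0=[100.0, 1e5, 1e-3],
                    x_scale=[100.0, 1e5, 1e-3],
                    method='lm')
Rmax, ka, kd = fit.x
print(f"ka = {ka:.3g} M^-1 s^-1, kd = {kd:.3g} s^-1, KD = {kd/ka:.3g} M")
```

The damping behaviour the abstract highlights (reduced sensitivity to starting values compared with a pure Newton iteration) is exactly what the Levenberg-Marquardt trust-region interpolation between gradient descent and Gauss-Newton provides.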
Zhang, Gang-Chun; Lin, Hong-Liang; Lin, Shan-Yang
2012-07-01
The cocrystal formation of indomethacin (IMC) and saccharin (SAC) by mechanical cogrinding or thermal treatment was investigated, and the formation mechanism and stability of IMC-SAC cocrystals prepared by cogrinding were explored. A typical IMC-SAC cocrystal was also prepared by the solvent evaporation method. All samples were identified and characterized using differential scanning calorimetry (DSC) and Fourier transform infrared (FTIR) microspectroscopy with curve-fitting analysis. The physical stability of the different IMC-SAC ground mixtures before and after storage for 7 months was examined. Stepwise measurements carried out at specific intervals over a continuous cogrinding process showed a continuous growth of cocrystal formation between IMC and SAC. The main IR spectral shifts from 3371 to 3347 cm(-1) and 1693 to 1682 cm(-1) for IMC, as well as from 3094 to 3136 cm(-1) and 1718 to 1735 cm(-1) for SAC, suggested that the OH and NH groups in both chemical structures took part in hydrogen bonding, leading to the formation of the IMC-SAC cocrystal. The melting point of 184 °C for the 30-min IMC-SAC ground mixture was almost the same as that of the solvent-evaporated IMC-SAC cocrystal. Curve-fitting analysis of the IR spectra also confirmed that the 30-min ground mixture had components and contents similar to those of the solvent-evaporated cocrystal. Thermally induced IMC-SAC cocrystal formation was found to depend on the treatment temperature. The different IMC-SAC ground mixtures showed an increased tendency toward IMC-SAC cocrystallization after storage at 25 °C/40% RH for 7 months. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Suttles, J. T.; Sullivan, E. M.; Margolis, S. B.
1974-01-01
Curve-fit formulas are presented for the stagnation-point radiative heating rate, cooling factor, and shock standoff distance for inviscid flow over blunt bodies at conditions corresponding to high-speed earth entry. The data which were curve fitted were calculated using a technique that combines a one-strip integral method with a detailed nongray radiation model to generate a radiatively coupled flow-field solution for air in chemical and local thermodynamic equilibrium. The free-stream parameters considered ranged over altitudes from about 55 to 70 km and velocities from about 11 to 16 km/sec. Spherical bodies with nose radii from 30 to 450 cm and elliptical bodies with major-to-minor axis ratios of 2, 4, and 6 were treated. Power-law formulas are proposed, and a least-squares logarithmic fit is used to evaluate the constants. It is shown that the data can be described in this manner with an average deviation of about 3 percent or less and a maximum deviation of about 10 percent or less. The curve-fit formulas provide an effective and economical means for making preliminary design studies for situations involving high-speed earth entry.
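The least-squares logarithmic evaluation of power-law constants described above amounts to a linear fit in log-log space. A minimal sketch with synthetic data (the coefficient and exponent are invented for illustration, not the paper's fitted constants):

```python
import numpy as np

# Fit q = a * V**b by linear least squares on log q = log a + b * log V
V = np.array([11.0, 12.0, 13.0, 14.0, 15.0, 16.0])   # entry velocity, km/sec
q = 3.0e-4 * V**8.5 * (1 + np.random.default_rng(2).normal(0, 0.02, V.size))

b, log_a = np.polyfit(np.log(V), np.log(q), 1)       # slope = exponent, intercept = log a
a = np.exp(log_a)
print(f"q ≈ {a:.3g} * V^{b:.2f}")
```

Because the fit is linear in the logarithms, it minimizes relative rather than absolute deviations, which is why the paper can quote average and maximum deviations in percent.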
Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian
2016-01-01
Tuning curves are the functions that relate the responses of sensory neurons to values along one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model, e.g., a Gaussian or another bell-shaped curve, to the responses measured at a small set of discrete stimuli along the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to reliably determine the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well fitted, that the best model generally varies between neurons, and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without fitting any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features, such as preferred stimulus direction or attentional gain modulations, which agree with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus-specific.
Based on these proofs-of-concept, we conclude that our data-driven methods can reliably extract relevant tuning information from neuronal recordings, including cells whose seemingly haphazard response curves defy conventional fitting approaches.
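As one illustration of such model-free feature extraction, the sketch below estimates a preferred direction as the circular (vector-average) mean of the responses, with no tuning model fitted. The direction grid and firing rates are synthetic, not the MT recordings analyzed in the paper.

```python
import numpy as np

directions = np.deg2rad(np.arange(0, 360, 30))                    # 12 tested directions
# Synthetic rates with an irregular bump peaking near 90 degrees plus a baseline
rates = 5 + 20 * np.exp(np.cos(directions - np.deg2rad(90)) - 1)

# Resultant of rate-weighted unit vectors; its angle is the preferred direction.
# The baseline cancels because the 12 unit vectors are evenly spaced on the circle.
z = np.sum(rates * np.exp(1j * directions))
preferred = np.rad2deg(np.angle(z)) % 360
print(f"model-free preferred direction ≈ {preferred:.1f} deg")
```

No assumption about the shape of the tuning curve enters this estimate, which is the point of the data-driven approach: it remains meaningful even for cells whose responses defy any of the stereotyped models.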
Dust in the Small Magellanic Cloud
NASA Technical Reports Server (NTRS)
Rodrigues, C. V.; Coyne, G. V.; Magalhaes, A. M.
1995-01-01
We discuss simultaneous dust model fits to our extinction and polarization data for the Small Magellanic Cloud (SMC) using existing dust models. Dust model fits to the wavelength-dependent polarization are possible for stars with small λ_max. They generally imply size distributions which are narrower and have smaller average sizes compared to those in the Galaxy. The best fits for the extinction curves are obtained with a power-law size distribution. The typical, monotonic SMC extinction curve can be well fit with graphite and silicate grains if a small fraction of the SMC carbon is locked up in the grains. Amorphous carbon and silicate grains also fit the data well.
Diagnostic efficiency of an ability-focused battery.
Miller, Justin B; Fichtenberg, Norman L; Millis, Scott R
2010-05-01
An ability-focused battery (AFB) is a selected group of well-validated neuropsychological measures that assess the conventional range of cognitive domains. This study examined the diagnostic efficiency of an AFB for use in clinical decision making with a mixed sample composed of individuals with neurological brain dysfunction and individuals referred for cognitive assessment without evidence of neurological disorders. Using logistic regression analyses and ROC curve analysis, a five-domain model composed of attention, processing speed, visual-spatial reasoning, language/verbal reasoning, and memory domain scores was fitted, with an AUC of .89 (95% CI = .84-.95). A more parsimonious two-domain model using processing speed and memory was also fitted, with an AUC of .90 (95% CI = .84-.95). A model composed of a global ability score, calculated as the mean of the individual domain scores, was also fitted, with an AUC of .88 (95% CI = .82-.94).
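For reference, an AUC like those reported above can be computed without any parametric model via the Mann-Whitney (rank) formulation. A small sketch with invented domain scores, purely to show the computation:

```python
import numpy as np

def auc(neg, pos):
    """Mann-Whitney AUC: probability a 'pos' case scores above a 'neg' case (ties half)."""
    neg, pos = np.asarray(neg), np.asarray(pos)
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical composite domain scores (higher = better cognitive performance)
patients = [40, 44, 38, 51, 42, 36]
controls = [52, 48, 55, 60, 47, 50]
print(f"AUC = {auc(patients, controls):.2f}")   # prints AUC = 0.92
```

This nonparametric estimate equals the area under the empirical ROC curve, so it is a useful cross-check on model-based AUCs such as those from logistic regression.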
Zhai, Xuetong; Chakraborty, Dev P
2017-06-01
The objective was to design and implement a bivariate extension of the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets, possibly degenerate ones, with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension of the binormal model, implemented in the CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit-variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit-variance peaks: one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition (contributing two peaks), and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language; it yields parameter estimates, the covariance matrix of the parameters, and other statistics.
A limited simulation validation of the method was performed. CORCBM and CORROC2 were applied to two datasets containing nine readers each contributing paired interpretations. CORCBM successfully fitted the data for all readers, whereas CORROC2 failed to fit a degenerate dataset. All fits were visually reasonable. All CORCBM fits were proper, whereas all CORROC2 fits were improper. CORCBM and CORROC2 were in agreement (a) in declaring only one of the nine readers as having significantly different performances in the two modalities; (b) in estimating higher correlations for diseased cases than for nondiseased ones; and (c) in finding that the intermodality correlation estimates for nondiseased cases were consistent between the two methods. All CORCBM fits yielded higher area under curve (AUC) than the CORROC2 fits, consistent with the fact that a proper ROC model like CORCBM is based on a likelihood-ratio-equivalent decision variable, and consequently yields higher performance than the binormal model-based CORROC2. The method gave satisfactory fits to four simulated datasets. CORCBM is a robust method for fitting paired ROC datasets, always yielding proper ROC curves, and able to fit degenerate datasets. © 2017 American Association of Physicists in Medicine.
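From the single-condition CBM description above (a unit-variance peak at zero for nondiseased cases; a two-peak mixture for diseased cases), the ROC operating point at any threshold follows in closed form, and properness is easy to verify numerically. A short sketch with illustrative μ and α values:

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def cbm_point(mu, alpha, zeta):
    """CBM operating point at decision threshold zeta."""
    fpf = Phi(-zeta)                                         # nondiseased: N(0, 1)
    tpf = (1 - alpha) * Phi(-zeta) + alpha * Phi(mu - zeta)  # diseased mixture
    return fpf, tpf

points = [cbm_point(2.0, 0.7, z) for z in np.linspace(-5, 8, 200)]
# Proper ROC: the curve never dips below the chance diagonal, since
# TPF - FPF = alpha * (Phi(mu - zeta) - Phi(-zeta)) >= 0 for mu >= 0.
print(all(tpf >= fpf for fpf, tpf in points))   # True
```

The bivariate (paired-rating) extension fitted by CORCBM adds the correlation structure on top of these same marginals.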
Milky Way Kinematics. II. A Uniform Inner Galaxy H I Terminal Velocity Curve
NASA Astrophysics Data System (ADS)
McClure-Griffiths, N. M.; Dickey, John M.
2016-11-01
Using atomic hydrogen (H I) data from the VLA Galactic Plane Survey, we measure the H I terminal velocity as a function of longitude for the first quadrant of the Milky Way. We use these data, together with our previous work on the fourth Galactic quadrant, to produce a densely sampled, uniformly measured rotation curve of the northern and southern Milky Way between 3 kpc < R < 8 kpc. We determine a new joint rotation curve fit for the first and fourth quadrants, which is consistent with the fit we published in McClure-Griffiths & Dickey and can be used for estimating kinematic distances interior to the solar circle. Structure in the rotation curves is now exquisitely well defined, showing significant velocity structure on lengths of ~200 pc, which is much greater than the spatial resolution of the rotation curve. Furthermore, the shape of the rotation curves for the first and fourth quadrants, even after subtraction of a circular rotation fit, shows a surprising degree of correlation, with a roughly sinusoidal pattern between 4.2 kpc < R < 7 kpc.
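For readers unfamiliar with the tangent-point method underlying such terminal velocity rotation curves, a minimal sketch of the conversion from a terminal velocity at longitude l to a rotation curve point. The IAU values R0 = 8.5 kpc and V0 = 220 km/s are assumed here for illustration and are not necessarily those adopted in the paper.

```python
import numpy as np

R0, V0 = 8.5, 220.0   # assumed Galactocentric radius (kpc) and circular speed (km/s) of the Sun

def rotation_point(l_deg, v_terminal):
    """Tangent-point relations (first quadrant): R = R0 sin l, V(R) = v_t + V0 sin l."""
    s = np.sin(np.deg2rad(l_deg))
    return R0 * s, v_terminal + V0 * s

R, V = rotation_point(30.0, 100.0)   # a terminal velocity of 100 km/s at l = 30 deg
print(f"R = {R:.2f} kpc, V = {V:.1f} km/s")
```

Sampling many longitudes in this way is what yields the densely sampled curve interior to the solar circle described above.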
ERIC Educational Resources Information Center
Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.
1994-01-01
Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)
Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka
2016-01-01
Previously, we proposed a model for ordinal scale scoring in which the individual thresholds for each item constitute a distribution. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, expressed as the product of the frequency of the total depressive symptom scores and the probability given by the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and the boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.
Fitting milk production curves through nonlinear mixed models.
Piccardi, Monica; Macchiavelli, Raúl; Funes, Ariel Capitaine; Bó, Gabriel A; Balzarini, Mónica
2017-05-01
The aim of this work was to fit and compare three non-linear models (Wood, MilkBot and diphasic) for lactation curves using two approaches: with and without a cow random effect. Knowing the behaviour of lactation curves is critical for decision-making on a dairy farm. Knowledge of how milk production progresses along each lactation is necessary not only at the mean population level (dairy farm) but also at the individual level (cow-lactation). The fits were made for a group of dairy farms with high production and reproduction performance, for first and third lactations in cool seasons. A total of 2167 complete lactations were involved, of which 984 were first lactations and the remainder third lactations (19,382 milk yield tests). PROC NLMIXED in SAS was used to perform the fits and estimate the model parameters. The diphasic model proved to be computationally complex and barely practical. Regarding the classical Wood and MilkBot models, although the information criteria suggested selecting MilkBot, the differences in the estimation of production indicators did not show a significant improvement. The Wood model was found to be a good option for fitting the expected value of lactation curves. Furthermore, all three models fitted better when the subject (cow) random effect, which is related to the magnitude of production, was considered. The random effect improved the predictive potential of the models, but it did not have a significant effect on the production indicators derived from the lactation curves, such as milk yield and days in milk to peak.
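At the mean-population level (ignoring the cow random effect, which the paper handles with SAS PROC NLMIXED), the Wood model y(t) = a·t^b·exp(-c·t) can be fitted by ordinary nonlinear least squares. A sketch with synthetic lactation data; the parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood lactation model: rise as t^b, exponential decline as exp(-c t)."""
    return a * t**b * np.exp(-c * t)

dim = np.arange(5, 305, 10).astype(float)   # days in milk, test-day schedule
y = wood(dim, 20.0, 0.25, 0.004) + np.random.default_rng(3).normal(0, 0.8, dim.size)

(a, b, c), _ = curve_fit(wood, dim, y, p0=[15.0, 0.2, 0.003])
peak_day = b / c    # the Wood curve peaks analytically at t = b/c
print(f"a = {a:.2f}, b = {b:.3f}, c = {c:.4f}, peak at day {peak_day:.0f}")
```

Production indicators such as days in milk to peak fall out analytically from the fitted parameters, which is one reason the Wood form remains attractive despite newer alternatives like MilkBot.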
Modelling Schumann resonances from ELF measurements using non-linear optimization methods
NASA Astrophysics Data System (ADS)
Castro, Francisco; Toledo-Redondo, Sergio; Fornieles, Jesús; Salinas, Alfonso; Portí, Jorge; Navarro, Enrique; Sierra, Pablo
2017-04-01
Schumann resonances (SR) can be found in planetary atmospheres, inside the cavity formed by the conducting surface of the planet and the lower ionosphere. They are a powerful tool to investigate both the electric processes that occur in the atmosphere and the characteristics of the surface and the lower ionosphere. In this study, the measurements were obtained at the ELF (Extremely Low Frequency) Juan Antonio Morente station located in the Sierra Nevada national park. The first three modes, contained in the frequency band from 6 to 25 Hz, are considered. For each time series recorded by the station, the amplitude spectrum was estimated using Bartlett averaging. Then, the central frequencies and amplitudes of the SR were obtained by fitting the spectrum with non-linear functions. In the poster, a study of nonlinear unconstrained optimization methods applied to the estimation of the Schumann resonances is presented. Non-linear fitting, also known as an optimization process, is the procedure followed to obtain the Schumann resonances from the natural electromagnetic noise. The optimization methods analysed are: Levenberg-Marquardt, Conjugate Gradient, Gradient, Newton and Quasi-Newton. The function that the different methods fit to the data consists of three Lorentzian curves plus a straight line. Gaussian curves have also been considered.
The conclusions of this study are as follows: i) natural electromagnetic noise is better fitted using Lorentzian functions; ii) the measurement bandwidth can accelerate the convergence of the optimization method; iii) the Gradient method converges most slowly and has the highest mean squared error (MSE) between the measurement and the fitted function, whereas the Levenberg-Marquardt, Conjugate Gradient and Quasi-Newton methods give similar results (the Newton method presents a higher MSE); iv) there are differences in the MSE between the parameters that define the fit function, and an interval from 1% to 5% has been found.
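A minimal sketch of the fit function described above, three Lorentzians plus a straight line, applied to a synthetic spectrum. The amplitudes, widths and noise level are invented; only the central frequencies are placed near the first three Schumann modes (~7.8, 14 and 20 Hz).

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, A, f0, w):
    """Single Lorentzian peak of amplitude A, center f0 and half-width w."""
    return A * w**2 / ((f - f0)**2 + w**2)

def model(f, A1, f1, w1, A2, f2, w2, A3, f3, w3, m, b):
    return (lorentzian(f, A1, f1, w1) + lorentzian(f, A2, f2, w2)
            + lorentzian(f, A3, f3, w3) + m * f + b)

f = np.linspace(6.0, 25.0, 400)
true = (1.0, 7.8, 1.2, 0.6, 14.1, 1.5, 0.4, 20.3, 1.8, -0.002, 0.1)
spec = model(f, *true) + np.random.default_rng(4).normal(0, 0.01, f.size)

p0 = (0.8, 8.0, 1.0, 0.5, 14.0, 1.0, 0.3, 20.0, 2.0, 0.0, 0.1)
popt, _ = curve_fit(model, f, spec, p0=p0)
print("central frequencies:", popt[1], popt[4], popt[7])
```

SciPy's `curve_fit` uses Levenberg-Marquardt here, the method the study found among the most reliable; substituting the other optimizers compared in the poster would only require swapping the solver.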
The Chaotic Light Curves of Accreting Black Holes
NASA Technical Reports Server (NTRS)
Kazanas, Demosthenes
2007-01-01
We present model light curves for accreting Black Hole Candidates (BHC) based on a recently developed model of these sources. According to this model, the observed light curves and aperiodic variability of BHC are due to a series of soft photon injections at random (Poisson) intervals and to the stochastic nature of the Comptonization process that converts these soft photons into the observed high energy radiation. The additional assumption of our model is that the Comptonization process takes place in an extended but non-uniform hot plasma corona surrounding the compact object. We compute the corresponding power spectral densities (PSD), autocorrelation functions, time skewness of the light curves, and time lags between the light curves of the sources at different photon energies, and compare our results to observation. Our model reproduces the observed light curves well, in that it provides good fits to their overall morphology (as manifested by the autocorrelation and time skewness) and also to their PSDs and time lags, producing most of the variability power at time scales of a few seconds or longer while at the same time allowing for shots of a few msec in duration, in accordance with observation. We suggest that refinement of this type of model, along with spectral and phase lag information, can be used to probe the structure of this class of high energy sources.
Volunteerism and socioemotional selectivity in later life.
Hendricks, Jon; Cutler, Stephen J
2004-09-01
The goal of this work was to assess the applicability of socioemotional selectivity theory to the realm of volunteerism by analyzing data drawn from the September 2002 Current Population Survey Volunteer Supplement. Total number of organizations volunteered for and total number of hours engaged in volunteer activities were utilized to obtain measures of volunteer hours per organization and volunteer hours in the main organization to determine whether a selective process could be observed. Descriptive statistics on age patterns were followed by a series of curve estimations to identify the best-fitting curves. Logistic age patterns of slowly increasing then relatively stable volunteer activity suggest that socioemotional selectivity processes are operative in the realm of voluntary activities. Socioemotional selectivity theory is applicable to voluntary activities.
Díaz Alonso, Fernando; González Ferradás, Enrique; Sánchez Pérez, Juan Francisco; Miñana Aznar, Agustín; Ruiz Gimeno, José; Martínez Alonso, Jesús
2006-09-21
A number of models have been proposed to calculate the overpressure and impulse from accidental industrial explosions. When the blast is produced by ignition of a vapour cloud, the TNO Multi-Energy model is widely used. From the curves given by this model, data are fitted to obtain equations relating overpressure, impulse and distance. These equations, referred to herein as characteristic curves, can be fitted by power equations that depend on the explosion energy and charge strength. Characteristic curves allow the overpressure and impulse to be determined at each distance.
Behavior and sensitivity of an optimal tree diameter growth model under data uncertainty
Don C. Bragg
2005-01-01
Using loblolly pine, shortleaf pine, white oak, and northern red oak as examples, this paper considers the behavior of potential relative increment (PRI) models of optimal tree diameter growth under data uncertainty. Recommendations on initial sample size and the PRI iterative curve fitting process are provided. Combining different state inventories prior to PRI model...
Rapid Inversion of Angular Deflection Data for Certain Axisymmetric Refractive Index Distributions
NASA Technical Reports Server (NTRS)
Rubinstein, R.; Greenberg, P. S.
1994-01-01
Certain functions useful for representing axisymmetric refractive-index distributions are shown to have exact solutions for the Abel transformation of the resulting angular deflection data. An advantage of this procedure over direct numerical Abel inversion is that least-squares curve fitting is a smoothing process that reduces the noise sensitivity of the computation.
System and process for ultrasonic characterization of deformed structures
Panetta, Paul D [Williamsburg, VA; Morra, Marino [Richland, WA; Johnson, Kenneth I [Richland, WA
2011-11-22
Generally speaking, the method of the present invention is performed by making ultrasonic scans at preselected orientations along the length of the material being tested. Data from the scans are then plotted together with various parameters calculated from these data. Lines or curves are then fitted to the respective plotted points. Review of the fitted curves allows the location and severity of defects within these sections to be determined and quantified. With this information, decisions can be made about how, when, or whether to repair or replace a particular portion of a structure.
Non-linear Growth Models in Mplus and SAS
Grimm, Kevin J.; Ram, Nilam
2013-01-01
Non-linear growth curves or growth curves that follow a specified non-linear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
Myocardial serotonin exchange: negligible uptake by capillary endothelium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moffett, T.C.; Chan, I.S.; Bassingthwaighte, J.B.
1988-03-01
The extraction of serotonin from the blood during transorgan passage through the heart was studied using Langendorff-perfused rabbit hearts. Outflow dilution curves of (131)I- or (125)I-labeled albumin, (14C)sucrose, and (3H)serotonin injected simultaneously into the inflow were fitted with an axially distributed blood-tissue exchange model to examine the extraction process. The model fits of the albumin and sucrose outflow dilution curves were used to define flow heterogeneity, intravascular dispersion, capillary permeability, and the volume of the interstitial space, which reduced the degrees of freedom in fitting the model to the serotonin curves. Serotonin extractions, measured against albumin during single transcapillary passage, ranged from 24 to 64%. The ratio of the capillary permeability-surface area products for serotonin and sucrose, based on the maximum instantaneous extraction, was 1.37 +/- 0.2 (n = 18), very close to the predicted value of 1.39, the ratio of free diffusion coefficients calculated from the molecular weights. This result shows that the observed uptake of serotonin can be accounted for solely on the basis of diffusion between endothelial cells into the interstitial space. Thus it appears that the permeability of the luminal surface of the endothelial cell is negligible in comparison to diffusion through the clefts between endothelial cells. In 18 sets of dilution curves, with and without receptor and transport blockers or competitors (ketanserin, desipramine, imipramine, serotonin), the extractions and estimates of the capillary permeability-surface area product were not reduced, nor were the volumes of distribution. The apparent absence of transporters and receptors in rabbit myocardial capillary endothelium contrasts with their known abundance in the pulmonary vasculature.
Three-dimensional simulation of human teeth and its application in dental education and research.
Koopaie, Maryam; Kolahdouz, Sajad
2016-01-01
Background: A comprehensive database, comprising the geometry and properties of human teeth, is needed for dentistry education and dental research. The aim of this study was to create a three-dimensional model of human teeth to improve dental E-learning and dental research. Methods: In this study, cross-section pictures were used to build the three-dimensional model of the teeth. CT-Scan images were used in the first method. The space between the cross-sectional images was about 200 to 500 micrometers. The hard tissue margin was detected in each image by Matlab (R2009b), used as image processing software. The images were transferred to Solidworks 2015 software. The tooth border curve was fitted with B-spline curves, using the least-squares curve fitting algorithm. After transferring all curves for each tooth to Solidworks, the surface was created based on the surface fitting technique. This surface was meshed in Meshlab-v132 software, and the optimization of the surface was done based on the remeshing technique. The mechanical properties of the teeth were applied to the dental model. Results: This study presented a methodology for communication between CT-Scan images and finite element and training software, through which modeling and simulation of the teeth were performed. In this study, cross-sectional images were used for modeling. According to the findings, the cost and time were reduced compared to other studies. Conclusion: The three-dimensional model method presented in this study facilitates the learning of dental students and dentists. Based on the three-dimensional model proposed in this study, designing and manufacturing implants and dental prostheses are possible.
Measuring Systematic Error with Curve Fits
ERIC Educational Resources Information Center
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
2011-01-01
Background Conservative scoliosis therapy according to the FITS Concept is applied as a unique treatment or in combination with corrective bracing. The aim of the study was to present the authors' method of diagnosis and therapy for idiopathic scoliosis, FITS (Functional Individual Therapy of Scoliosis), and to analyze the early results of FITS therapy in a series of consecutive patients. Methods The analysis comprised separately: (1) single structural thoracic, thoracolumbar or lumbar curves and (2) double structural scoliosis (thoracic and thoracolumbar or lumbar curves). The Cobb angle and Risser sign were analyzed at the initial stage and at the 2.8-year follow-up. The percentage of patients improved (defined as a decrease of the Cobb angle of more than 5 degrees), stable (+/- 5 degrees), and progressed (an increase of the Cobb angle of more than 5 degrees) was calculated. The clinical assessment comprised: the Angle of Trunk Rotation (ATR) initial and follow-up value, the plumb line imbalance, the scapulae level and the distance from the apical spinous process of the primary curve to the plumb line. Results In Group A: (1) in single structural scoliosis 50.0% of patients improved, 46.2% were stable and 3.8% progressed, while (2) in double scoliosis 50.0% of patients improved, 30.8% were stable and 19.2% progressed. In Group B: (1) in single scoliosis 20.0% of patients improved, 80.0% were stable, and no patient progressed, while (2) in double scoliosis 28.1% of patients improved, 46.9% were stable and 25.0% progressed. Conclusion The best results were obtained in scoliosis of 10-25 degrees, which supports starting therapy before more structural changes become established within the spine. PMID:22122964
A curve fitting method for solving the flutter equation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Cooper, J. L.
1972-01-01
A curve fitting approach was developed to solve the flutter equation for the critical flutter velocity. The psi versus nu curves are approximated by cubic and quadratic equations. The curve fitting technique utilized the first and second derivatives of psi with respect to nu. The method was tested for two structures, one structure being six times the total mass of the other structure. The algorithm never showed any tendency to diverge from the solution. The average time for the computation of a flutter velocity was 3.91 seconds on an IBM Model 50 computer for an accuracy of five per cent. For values of nu close to the critical root of the flutter equation the algorithm converged on the first attempt. The maximum number of iterations for convergence to the critical flutter velocity was five with an assumed value of nu relatively distant from the actual crossover.
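The underlying idea, approximating sampled psi-versus-nu data by a low-order polynomial and solving for the zero crossing, can be sketched as follows (a simplified illustration on synthetic data, not the thesis algorithm, which also exploits first and second derivatives of psi):

```python
import numpy as np

# Synthetic damping-versus-velocity-parameter samples; the underlying curve
# crosses zero at nu = 1.05 (the "critical" value we want to recover).
nu = np.array([0.8, 0.9, 1.0, 1.1, 1.2])
psi = 0.5 * (nu - 1.05) * (nu + 2.0)

# Quadratic least-squares fit of psi(nu), then solve for the zero crossing
c2, c1, c0 = np.polyfit(nu, psi, 2)
roots = np.roots([c2, c1, c0])
nu_crit = roots[np.isclose(roots.imag, 0) & (roots.real > 0)].real.min()
```

For values of nu sampled near the crossover, a quadratic (or cubic) fit of this kind pins down the critical value in one step, which is consistent with the rapid convergence reported above.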
NASA Astrophysics Data System (ADS)
Gupta, A.; Singh, P. J.; Gaikwad, D. Y.; Udupa, D. V.; Topkar, A.; Sahoo, N. K.
2018-02-01
An experimental setup is developed for the trace-level detection of heavy water (HDO) using the off-axis integrated cavity output spectroscopy technique. The absorption spectrum of water samples is recorded in the spectral range of 7190.7 cm^-1 to 7191.5 cm^-1 with a diode laser as the light source. From the recorded water vapor absorption spectrum, the heavy water concentration is determined from the HDO and water lines. The effect of cavity gain nonlinearity with per-pass absorption is studied. The signal processing and data fitting procedure is devised to obtain linear calibration curves by including nonlinear cavity gain effects in the calculation. Initial calibration of mirror reflectivity is performed by measurements on a natural water sample. The signal processing and data fitting method has been validated by measurement of the HDO concentration in water samples over a wide range from 20 ppm to 2280 ppm, showing a linear calibration curve. The average measurement time is about 30 s. The experimental technique presented in this paper could be applied to the development of a portable instrument for the fast measurement of water isotopic composition in heavy water plants and for the detection of heavy water leaks in pressurized heavy water reactors.
Wing Shape Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2015-01-01
A new two-step theory is investigated for predicting the deflection and slope of an entire structure using strain measurements at discrete locations. In the first step, the measured strain is fitted using a piecewise least-squares curve fitting method together with the cubic spline technique. These fitted strains are integrated twice to obtain deflection data along the fibers. In the second step, the computed deflections along the fibers are combined with a finite element model of the structure in order to extrapolate the deflection and slope of the entire structure through the use of the System Equivalent Reduction and Expansion Process. The theory is first validated on a computational model, a cantilevered rectangular wing. It is then applied to test data from a cantilevered swept wing model.
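The first step can be illustrated with a minimal sketch: smooth discrete strain with a cubic spline, then integrate twice along the fiber. The strain profile below is synthetic, and strain is treated as equal to curvature for simplicity:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import cumulative_trapezoid

# Synthetic bending-strain profile at 11 sensor stations along a unit-length
# fiber; strain is treated as equal to curvature here for simplicity.
x = np.linspace(0.0, 1.0, 11)
strain = 6.0 * (1.0 - x)

spline = CubicSpline(x, strain)                    # smooth the discrete strain
xf = np.linspace(0.0, 1.0, 201)
slope = cumulative_trapezoid(spline(xf), xf, initial=0.0)      # first integral
deflection = cumulative_trapezoid(slope, xf, initial=0.0)      # second integral
```

For this linear strain profile the analytic deflection is 3x^2 - x^3, so the computed tip value approaches 2.0, which makes the sketch easy to check against hand calculation.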
Probability Density Functions of Observed Rainfall in Montana
NASA Technical Reports Server (NTRS)
Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.
1995-01-01
The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis on large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis readily allow development of radar reflectivity factor (and by extension rain rate) distributions. Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, a single PDF would exist for all cases, or many PDFs would share the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases would validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89 percent of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.
NASA Astrophysics Data System (ADS)
Madi, Raneem; Huibert de Rooij, Gerrit; Mielenz, Henrike; Mai, Juliane
2018-02-01
Few parametric expressions for the soil water retention curve are suitable for dry conditions. Furthermore, expressions for the soil hydraulic conductivity curves associated with parametric retention functions can behave unrealistically near saturation. We developed a general criterion for water retention parameterizations that ensures physically plausible conductivity curves. Only 3 of the 18 tested parameterizations met this criterion without restrictions on the parameters of a popular conductivity curve parameterization. A fourth required one parameter to be fixed. We estimated parameters by shuffled complex evolution (SCE) with the objective function tailored to various observation methods used to obtain retention curve data. We fitted the four parameterizations with physically plausible conductivities as well as the most widely used parameterization. The performance of the resulting 12 combinations of retention and conductivity curves was assessed in a numerical study with 751 days of semiarid atmospheric forcing applied to unvegetated, uniform, 1 m freely draining columns for four textures. Choosing different parameterizations had a minor effect on evaporation, but cumulative bottom fluxes varied by up to an order of magnitude between them. This highlights the need for a careful selection of the soil hydraulic parameterization that ideally does not only rely on goodness of fit to static soil water retention data but also on hydraulic conductivity measurements. Parameter fits for 21 soils showed that extrapolations into the dry range of the retention curve often became physically more realistic when the parameterization had a logarithmic dry branch, particularly in fine-textured soils where high residual water contents would otherwise be fitted.
A novel approach for calculating shelf life of minimally processed vegetables.
Corbo, Maria Rosaria; Del Nobile, Matteo Alessandro; Sinigaglia, Milena
2006-01-15
Shelf life of minimally processed vegetables is often calculated by using the kinetic parameters of the Gompertz equation as modified by Zwietering et al. [Zwietering, M.H., Jongenburger, F.M., Roumbouts, M., van't Riet, K., 1990. Modelling of the bacterial growth curve. Applied and Environmental Microbiology 56, 1875-1881.], taking 5×10^7 CFU/g as the maximum acceptable contamination value consistent with acceptable quality of these products. As this method does not allow estimation of the standard errors of the shelf life, in this paper the modified Gompertz equation was re-parameterized to include the shelf life directly as a fitting parameter among the Gompertz parameters. With the shelf life as a fitting parameter, its confidence interval can be determined by fitting the proposed equation to the experimental data. The goodness-of-fit of this new equation was tested by using mesophilic bacteria cell loads from different minimally processed vegetables (packaged fresh-cut lettuce, fennel and shredded carrots) that differed in some process operations or in package atmosphere. The new equation was able to describe the data well and to estimate the shelf life. The results obtained emphasize the importance of using the standard errors of the shelf life value to show significant differences among the samples.
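The re-parameterization idea can be sketched as follows: the lag parameter of Zwietering's modified Gompertz equation is rewritten in terms of the shelf life SL (the time at which the curve reaches the log10 threshold), so SL is estimated directly and its standard error comes from the fit covariance. The data, parameter values, and bounds below are illustrative assumptions, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

E = np.e
Y_STAR = np.log10(5e7)        # acceptability threshold, log10 CFU/g

def gompertz_sl(t, A, C, mu, SL):
    """Modified Gompertz with the lag expressed through the shelf life SL."""
    lam = SL + (C / (mu * E)) * (np.log(-np.log((Y_STAR - A) / C)) - 1.0)
    return A + C * np.exp(-np.exp((mu * E / C) * (lam - t) + 1.0))

# Synthetic growth data with a true shelf life of 6 (arbitrary time units)
t = np.linspace(0.0, 12.0, 25)
rng = np.random.default_rng(1)
y = gompertz_sl(t, 3.0, 6.0, 1.2, 6.0) + 0.05 * rng.standard_normal(t.size)

popt, pcov = curve_fit(gompertz_sl, t, y, p0=(3.0, 6.0, 1.0, 5.0),
                       bounds=([2.8, 5.5, 0.5, 2.0], [3.2, 6.5, 3.0, 10.0]))
sl_fit, sl_err = popt[3], np.sqrt(pcov[3, 3])      # shelf life and its std error
```

Because SL is a fitted parameter, its standard error (and hence a confidence interval) falls out of the covariance matrix directly, which is the practical point the abstract makes.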
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waszczak, Adam; Chang, Chan-Kao; Cheng, Yu-Chi
We fit 54,296 sparsely sampled asteroid light curves in the Palomar Transient Factory survey to a combined rotation plus phase-function model. Each light curve consists of 20 or more observations acquired in a single opposition. Using 805 asteroids in our sample that have reference periods in the literature, we find that the reliability of our fitted periods is a complicated function of the period, amplitude, apparent magnitude, and other light-curve attributes. Using the 805-asteroid ground-truth sample, we train an automated classifier to estimate (along with manual inspection) the validity of the remaining ∼53,000 fitted periods. By this method we find that 9033 of our light curves (of ∼8300 unique asteroids) have “reliable” periods. Subsequent consideration of asteroids with multiple light-curve fits indicates a 4% contamination in these “reliable” periods. For 3902 light curves with sufficient phase-angle coverage and either a reliable fit period or low amplitude, we examine the distribution of several phase-function parameters, none of which are bimodal though all correlate with the bond albedo and with visible-band colors. Comparing the theoretical maximal spin rate of a fluid body with our amplitude versus spin-rate distribution suggests that, if held together only by self-gravity, most asteroids are in general less dense than ∼2 g cm^-3, while C types have a lower limit of between 1 and 2 g cm^-3. These results are in agreement with previous density estimates. For 5–20 km diameters, S types rotate faster and have lower amplitudes than C types. If both populations share the same angular momentum, this may indicate the two types' differing ability to deform under rotational stress.
Lastly, we compare our absolute magnitudes (and apparent-magnitude residuals) to those of the Minor Planet Center's nominal (G = 0.15, rotation-neglecting) model; our phase-function plus Fourier-series fitting reduces asteroid photometric rms scatter by a factor of ∼3.
NASA Astrophysics Data System (ADS)
Ji, Zhong-Ye; Zhang, Xiao-Fang
2018-01-01
The mathematical relation between the beam quality β factor of a high-energy laser and the wavefront aberration of the laser beam is important in beam quality control theory for high-energy laser weapon systems. In order to obtain this mathematical relation, numerical simulation is used in the research. First, Zernike representations of typically distorted atmospheric wavefront aberrations caused by Kolmogoroff turbulence are generated. Then, the corresponding beam quality β factors of the different distorted wavefronts are calculated numerically through the fast Fourier transform. Thus, the statistical distribution rule relating the beam quality β factors of the high-energy laser to the wavefront aberrations of the beam can be established from the calculated results. Finally, curve fitting is used to establish the mathematical relationship between these two parameters. The result of the curve fitting shows that there is a quadratic relation between the beam quality β factor of a high-energy laser and the wavefront aberration of the laser beam. In this paper, three fitting curves, in which the wavefront aberrations are composed of Zernike polynomials of orders 20, 36, and 60, respectively, are established to express the relationship between the beam quality β factor and atmospheric wavefront aberrations with different spatial frequencies.
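The reported quadratic relation can be illustrated with a toy fit; the coefficients below are invented for the sketch, not the paper's fitted values:

```python
import numpy as np

# Synthetic (w, beta) pairs following an assumed quadratic trend between RMS
# wavefront aberration w and beam quality factor beta.
w = np.linspace(0.0, 2.0, 30)         # RMS wavefront aberration, waves
beta = 1.0 + 0.4 * w + 1.8 * w**2     # assumed quadratic trend

a, b, c = np.polyfit(w, beta, 2)      # recover the quadratic coefficients
```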
NASA Astrophysics Data System (ADS)
Li, Xin; Tang, Li; Lin, Hai-Nan
2017-05-01
We compare six models (including the baryonic model, two dark matter models, two modified Newtonian dynamics models and one modified gravity model) in accounting for galaxy rotation curves. For the dark matter models, we assume NFW profile and core-modified profile for the dark halo, respectively. For the modified Newtonian dynamics models, we discuss Milgrom’s MOND theory with two different interpolation functions, the standard and the simple interpolation functions. For the modified gravity, we focus on Moffat’s MSTG theory. We fit these models to the observed rotation curves of 9 high-surface brightness and 9 low-surface brightness galaxies. We apply the Bayesian Information Criterion and the Akaike Information Criterion to test the goodness-of-fit of each model. It is found that none of the six models can fit all the galaxy rotation curves well. Two galaxies can be best fitted by the baryonic model without involving nonluminous dark matter. MOND can fit the largest number of galaxies, and only one galaxy can be best fitted by the MSTG model. Core-modified model fits about half the LSB galaxies well, but no HSB galaxies, while the NFW model fits only a small fraction of HSB galaxies but no LSB galaxies. This may imply that the oversimplified NFW and core-modified profiles cannot model the postulated dark matter haloes well. Supported by Fundamental Research Funds for the Central Universities (106112016CDJCR301206), National Natural Science Fund of China (11305181, 11547305 and 11603005), and Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y5KF181CJ1)
Doona, Christopher J; Feeherry, Florence E; Ross, Edward W
2005-04-15
Predictive microbial models generally rely on the growth of bacteria in laboratory broth to approximate the microbial growth kinetics expected to take place in actual foods under identical environmental conditions. Sigmoidal functions such as the Gompertz or logistic equation accurately model the typical microbial growth curve from the lag to the stationary phase and provide the mathematical basis for estimating parameters such as the maximum growth rate (MGR). Stationary-phase data can begin to show a decline and make it difficult to discern which data to include in the analysis of the growth curve, a factor that influences the calculated values of the growth parameters. In contradistinction, the quasi-chemical kinetics model provides additional capabilities in microbial modelling and fits growth-death kinetics (all four phases of the microbial lifecycle, continuously) for a general set of microorganisms in a variety of actual food substrates. The quasi-chemical model is a set of ordinary differential equations (ODEs) derived from a hypothetical four-step chemical mechanism involving an antagonistic metabolite (quorum sensing), and it successfully fits the kinetics of pathogens (Staphylococcus aureus, Escherichia coli and Listeria monocytogenes) in various foods (bread, turkey meat, ham and cheese) as functions of different hurdles (a(w), pH, temperature and anti-microbial lactate). The calculated value of the MGR depends on whether growth-death data or only growth data are used in the fitting procedure. The quasi-chemical kinetics model is also exploited for use with the novel food processing technology of high-pressure processing. The high-pressure inactivation kinetics of E. coli are explored in a model food system over the pressure (P) range of 207-345 MPa (30,000-50,000 psi) and the temperature (T) range of 30-50 degrees C.
At relatively low combinations of P and T, the inactivation curves are non-linear and exhibit a shoulder prior to a more rapid rate of microbial destruction. In the higher P, T regime, the inactivation plots tend to be linear. In all cases, the quasi-chemical model successfully fit the linear and curvilinear inactivation plots for E. coli in model food systems. The experimental data and the quasi-chemical mathematical model described herein are candidates for inclusion in ComBase, the developing database that combines data and models from the USDA Pathogen Modeling Program and the UK Food MicroModel.
AKLSQF - LEAST SQUARES CURVE FITTING
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fitting up to a 100 degree polynomial. All computations in the program are carried out in double precision format for real numbers and in long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's QuickBASIC compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
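The degree-escalation loop can be re-created in a few lines (a NumPy sketch of the idea, not the original orthogonal-factorial-polynomial implementation):

```python
import numpy as np

def fit_to_tolerance(x, y, tol, max_degree=100):
    """Raise the polynomial degree until the RMS least-squares error <= tol."""
    for degree in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, degree)
        err = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
        if err <= tol:
            return degree, coeffs, err
    raise RuntimeError("tolerance not met within max_degree")

x = np.linspace(-1.0, 1.0, 50)        # uniformly spaced data, as AKLSQF expects
y = x**3 - 0.5 * x                    # exactly a cubic
degree, coeffs, err = fit_to_tolerance(x, y, tol=1e-8)
```

For the cubic test data the loop stops at degree 3, mirroring AKLSQF's behavior of returning both the polynomial and the least-squares error actually achieved.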
Students' Models of Curve Fitting: A Models and Modeling Perspective
ERIC Educational Resources Information Center
Gupta, Shweta
2010-01-01
The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…
NASA Technical Reports Server (NTRS)
Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.
1991-01-01
The computer codes developed here provide self-consistent thermodynamic and transport properties for equilibrium air for temperatures from 500 to 30,000 K over a pressure range of 10^-4 to 10^-2 atm. These properties are computed through the use of temperature-dependent curve fits for discrete values of pressure. Interpolation is employed for intermediate values of pressure. The curve fits are based on mixture values calculated from an 11-species air model. Individual species properties used in the mixture relations are obtained from a recent study by the present authors. A review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1260.
NASA Astrophysics Data System (ADS)
Vieira, Daniel; Krems, Roman
2017-04-01
Fine-structure transitions in collisions of O(3Pj) with atomic hydrogen are an important cooling mechanism in the interstellar medium; knowledge of the rate coefficients for these transitions has a wide range of astrophysical applications. The accuracy of the theoretical calculation is limited by inaccuracy in the ab initio interaction potentials used in the coupled-channel quantum scattering calculations from which the rate coefficients can be obtained. In this work we use the latest ab initio results for the O(3Pj) + H interaction potentials to improve on previous calculations of the rate coefficients. We further present a machine-learning technique based on Gaussian Process regression to determine the sensitivity of the rate coefficients to variations of the underlying adiabatic interaction potentials. To account for the inaccuracy inherent in the ab initio calculations we compute error bars for the rate coefficients corresponding to 20% variation in each of the interaction potentials. We obtain these error bars by fitting a Gaussian Process model to a data set of potential curves and rate constants. We use the fitted model to do sensitivity analysis, determining the relative importance of individual adiabatic potential curves to a given fine-structure transition. NSERC.
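A bare-bones version of the Gaussian Process regression step, fitting a model that maps a potential variation to a rate coefficient and predicting with uncertainty, can be sketched in plain NumPy; the kernel, length scale, and data are assumptions for illustration, not the authors' actual setup:

```python
import numpy as np

def rbf(a, b, ell):
    """Squared-exponential (RBF) kernel between two 1-D input sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

x = np.linspace(0.8, 1.2, 15)          # potential scaling factors (inputs)
y = np.exp(-3.0 * (x - 1.0) ** 2)      # synthetic rate coefficients (outputs)

ell, jitter = 0.1, 1e-8
K = rbf(x, x, ell) + jitter * np.eye(x.size)
alpha = np.linalg.solve(K, y)

x_star = np.array([1.0])               # query point
k_star = rbf(x_star, x, ell)
mean = k_star @ alpha                                   # posterior mean
var = rbf(x_star, x_star, ell) - k_star @ np.linalg.solve(K, k_star.T)
std = np.sqrt(np.maximum(var.diagonal(), 0.0))          # posterior std
```

The posterior standard deviation is what makes this approach useful for sensitivity analysis: re-fitting with each potential curve perturbed (e.g. by 20%) shows how strongly each adiabatic curve moves a given rate coefficient.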
A New Model Based on Adaptation of the External Loop to Compensate the Hysteresis of Tactile Sensors
Sánchez-Durán, José A.; Vidal-Verdú, Fernando; Oballe-Peinado, Óscar; Castellanos-Ramos, Julián; Hidalgo-López, José A.
2015-01-01
This paper presents a novel method to compensate for hysteresis nonlinearities observed in the response of a tactile sensor. The External Loop Adaptation Method (ELAM) performs a piecewise linear mapping of the experimentally measured external curves of the hysteresis loop to obtain all possible internal cycles. The optimal division of the input interval where the curve is approximated is provided by an error minimization algorithm. This process is carried out offline and provides parameters to compute the split point in real time. A different linear transformation is then performed to the left and right of this point, and a more precise fitting is achieved. The models obtained with the ELAM method are compared with those obtained from three other approaches. The results show that the ELAM method achieves a more accurate fitting. Moreover, the mathematical operations involved are simpler and therefore easier to implement in devices such as Field Programmable Gate Arrays (FPGAs) for real-time applications. Furthermore, the method needs to identify fewer parameters and requires no previous selection process of operators or functions. Finally, the method can be applied to other sensors or actuators with complex hysteresis loop shapes. PMID:26501279
Sub-band denoising and spline curve fitting method for hemodynamic measurement in perfusion MRI
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Huang, Hsiao-Ling; Hsu, Yuan-Yu; Chen, Chi-Chen; Chen, Ing-Yi; Wu, Liang-Chi; Liu, Ren-Shyan; Lin, Kang-Ping
2003-05-01
In clinical research, non-invasive MR perfusion imaging is capable of investigating brain perfusion phenomena via various hemodynamic measurements, such as cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT). These hemodynamic parameters are useful in diagnosing brain disorders such as stroke, infarction and peri-infarct ischemia by further semi-quantitative analysis. However, the accuracy of quantitative analysis is usually affected by poor signal-to-noise-ratio image quality. In this paper, we propose a hemodynamic measurement method based upon sub-band denoising and spline curve fitting processes to improve image quality for better hemodynamic quantitative analysis results. Ten sets of perfusion MRI data and corresponding PET images were used to validate the performance. For quantitative comparison, we evaluated the gray/white matter CBF ratio. As a result, the mean gray-to-white matter CBF ratio from the hemodynamic semi-quantitative analysis is 2.10 +/- 0.34. The ratio evaluated for brain tissues in perfusion MRI is comparable to that from the PET technique, with less than 1% difference on average. Furthermore, the method features excellent noise reduction and boundary preservation in image processing, and a short hemodynamic measurement time.
Study on peak shape fitting method in radon progeny measurement.
Yang, Jinmin; Zhang, Lei; Abdumomin, Kadir; Tang, Yushi; Guo, Qiuju
2015-11-01
Alpha spectrum measurement is one of the most important methods of measuring radon progeny concentration in the environment. However, the accuracy of this method is affected by peak tailing due to the energy losses of alpha particles. This article presents a peak shape fitting method that can overcome the peak tailing problem in most situations. On a typical measured alpha spectrum curve, consecutive peaks overlap even when their energies are not close to each other, and it is difficult to calculate the exact count of each peak. The peak shape fitting method uses a combination of Gaussian and exponential functions, which can depict the features of those peaks, to fit the measured curve. It provides the net counts of each peak explicitly, which are used in the Kerr calculation procedure for radon progeny concentration measurement. The results show that the fitting curve agrees well with the measured curve, and the influence of peak tailing is reduced. The method was further validated by the agreement between radon equilibrium equivalent concentrations based on this method and the measured values of some commercial radon monitors, such as the EQF3220 and WLx. In addition, this method improves the accuracy of individual radon progeny concentration measurement. In particular, for the (218)Po peak, after eliminating the peak tailing influence, the calculated (218)Po concentration was reduced by 21%.
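The peak model can be sketched as a Gaussian joined to an exponential low-energy tail and fitted by nonlinear least squares. This is a hedged illustration: the paper's exact functional form, energies, and parameters may differ, and the data below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def alpha_peak(E, A, mu, sigma, tau):
    """Gaussian peak (above the centroid mu) joined to an exponential
    low-energy tail (below mu), a common shape for alpha peaks
    distorted by energy loss in the source and detector window."""
    E = np.asarray(E, dtype=float)
    gauss = np.exp(-0.5 * ((E - mu) / sigma) ** 2)
    tail = np.exp((E - mu) / tau)
    return A * np.where(E < mu, tail, gauss)

E = np.linspace(4.8, 6.2, 141)                     # MeV, illustrative grid
counts = alpha_peak(E, 100.0, 5.5, 0.04, 0.08)     # synthetic, noise-free
popt, _ = curve_fit(alpha_peak, E, counts, p0=(80.0, 5.45, 0.03, 0.05))
net_counts = float(np.sum(alpha_peak(E, *popt)))   # explicit net peak area
```

With overlapping peaks one would fit a sum of such terms and read off each peak's net counts for use in the Kerr procedure.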
[An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].
Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu
2016-04-01
The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. First, the first-order derivative of the original ECG signal is calculated, and the maximum and minimum points of each beat are obtained; these are treated as the positions of the fiducial points. The original ECG is then fed into a high-pass filter with a 1.5 Hz cutoff frequency. The difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. Cubic spline interpolation is then applied to the fiducial points, and the fitted curve is taken as the baseline drift curve. For the two simulated test cases, the correlation coefficients between the fitted curve from the presented algorithm and the simulated curve were increased by 0.242 and 0.13 compared with those from the traditional cubic spline interpolation algorithm. For the clinical baseline drift data, the average correlation coefficient from the presented algorithm reached 0.972.
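The baseline-removal pipeline can be sketched with scipy's `CubicSpline`, assuming one fiducial point per beat. For brevity the "original minus high-pass filtered" amplitude at each fiducial is replaced here by the known synthetic drift value; the signal, drift frequency, and fiducial spacing are all illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

fs = 250.0                                   # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
drift = 0.4 * np.sin(2 * np.pi * 0.3 * t)    # synthetic baseline wander
ecg_like = 0.05 * np.sin(2 * np.pi * 8 * t)  # stand-in for ECG detail
x = ecg_like + drift

fid_t = np.arange(0.25, 10, 0.5)             # one fiducial per "beat"
# In the real algorithm the fiducial amplitude is the difference between
# the original and the 1.5 Hz high-pass filtered ECG; we use the known
# drift samples here to keep the sketch short.
fid_v = drift[np.searchsorted(t, fid_t)]
baseline = CubicSpline(fid_t, fid_v)(t)      # fitted baseline drift curve
corrected = x - baseline
```

Subtracting the spline leaves the high-frequency ECG-like component essentially intact inside the fiducial range.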
Robotic partial nephrectomy - Evaluation of the impact of case mix on the procedural learning curve.
Roman, A; Ahmed, K; Challacombe, B
2016-05-01
Although robotic partial nephrectomy (RPN) is an emerging technique for the management of small renal masses, this approach is technically demanding. To date, there are limited data on the nature and progression of the learning curve in RPN. We aimed to analyse the impact of case mix on the RPN learning curve and to model the learning curve. The records of the first 100 RPNs performed at our institution by a single surgeon (B.C.) (June 2010-December 2013) were analysed. Cases were split based on their Preoperative Aspects and Dimensions Used for an Anatomical classification (PADUA) score into the following groups: 6-7, 8-9 and >10. Using a split-group (20 patients in each group) and incremental analysis, the mean, the curve of best fit and R(2) values were calculated for each group. Of 100 patients (F: 28, M: 72), the mean age was 56.4 ± 11.9 years. The numbers of patients in the PADUA score groups 6-7, 8-9 and >10 were 61, 32 and 7, respectively. An increase in the incidence of more complex cases throughout the cohort was evident within the 8-9 group (2010: 1 case; 2013: 16 cases). The learning process did not significantly affect the proxies used to assess surgical proficiency in this study (operative time and warm ischaemia time). Case difficulty is an important parameter that should be considered when evaluating procedural learning curves. There is no single well-fitting model that can be used to model the learning curve. With increasing experience, clinicians tend to operate on more difficult cases.
NASA Astrophysics Data System (ADS)
Griscom, David L.
2001-11-01
Formalisms have been developed to express the time evolution of bimolecular processes taking place in fractal spaces. These ``stretched-second-order'' solutions are specifically applicable to radiation-induced electron-hole pairs and/or vacancy-interstitial pairs in insulating glasses. Like the analogous Kohlrausch-type (stretched-first-order) expressions, the present solutions are functions of (kt)^β, where 0<β<1, k is an effective rate coefficient, and t is time. Both the new second-order formalism and the familiar Kohlrausch approach have been used to fit experimental data (induced optical absorptions in silica-based glasses monitored at selected wavelengths) that serve as proxies for the numbers of color centers created by γ irradiation and/or destroyed by processes involving thermal, optical, or γ-ray activation. Two material systems were investigated: (1) optical fibers with Ge-doped-silica cores and (2) fibers with low-OH/low-chloride pure-silica cores. Successful fits of the growth curves for the Ge-doped-silica-core fibers at four widely separated dose rates were accomplished using solutions for color-center concentrations, N[(kt)^β], which approach steady-state values, Nsat, as t → ∞. The parametrization of these fits reveals some unexpected, and potentially useful, empirical rules regarding the dose-rate dependences of β, k, and Nsat in the fractal regime (0<β<1). Similar, though possibly not identical, rules evidently apply to color centers in the pure-silica-core fibers as well. In both material systems, there appear to be fractal ↔ classical phase transitions at certain threshold values of dose rate, below which the dose-rate dependences of k and Nsat revert to those specified by classical (β=1) first- or second-order kinetics. For kt<<1, both the first- and second-order fractal kinetic growth curves become identical, i.e., N((kt)^β) ~ At^β, where the coefficient A depends on dose rate but not kinetic order.
It is found empirically that A depends on the 3β/2 power of dose rate in both first- and second-order kinetics, thus ``accidentally'' becoming linearly proportional to dose rate in cases where β~2/3 (characteristic of random fractals and many disordered materials). If interfering dose-rate-independent components are absent, it is possible to distinguish the order of the kinetics from the shapes of the growth and decay curves in both fractal and classical regimes. However, for reasons that are discussed, the parameters that successfully fit the experimental growth curves could not be used as bases for closed-form predictions of the shapes of the decay curves recorded when the irradiation is interrupted.
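The first-order Kohlrausch growth law referenced above can be sketched and fitted as follows, on synthetic noise-free data. The paper's second-order fractal solutions share the same (kt)^β small-t behavior but are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_growth(t, n_sat, k, beta):
    """Stretched-first-order (Kohlrausch-type) growth toward a steady
    state: N(t) = Nsat * (1 - exp(-(k t)**beta)), 0 < beta <= 1.
    For k*t << 1 this reduces to N ~ Nsat * (k t)**beta = A * t**beta."""
    return n_sat * (1.0 - np.exp(-(k * t) ** beta))

t = np.linspace(0.01, 100.0, 200)                 # arbitrary time units
data = stretched_growth(t, 50.0, 0.1, 0.66)       # synthetic growth curve
popt, _ = curve_fit(stretched_growth, t, data,
                    p0=(40.0, 0.2, 0.5),
                    bounds=([1.0, 1e-3, 0.1], [1e3, 10.0, 1.0]))
```

The bounds keep k and β in their physical ranges during the fit; β near 2/3 corresponds to the random-fractal case discussed above.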
NASA Astrophysics Data System (ADS)
Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde
2006-03-01
European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However, this is time-consuming and has large inter-observer error. To overcome these problems a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting its output is not defined. This study evaluates methods of determining threshold contrast from the program, and compares these to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding, (B) psychometric curve fitting, (C) smoothing and interpolation, and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility, with a standard error in threshold contrast of 18.1 +/- 0.7%. This was reduced to 8.4% by using a contrast-detail curve fitting procedure. Method (D) had the best reproducibility, with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of 3 human observers had an error of 4.4%, reduced to 2.9% by curve fitting. All automatic methods led to threshold contrasts that were lower than those for humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 +/- 0.04 (sem) at 0.1 mm and 1.82 +/- 0.06 at 0.25 mm for method (D). There were good correlations between the threshold contrasts determined by humans and the automated methods.
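Option (B), psychometric curve fitting, can be sketched as fitting a logistic psychometric function to proportion-correct readings versus contrast. The guess rate, functional form, and data below are illustrative assumptions, not the CDCOM output format.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(c, c_thresh, slope):
    """Psychometric curve rising from a 0.25 guess rate to 1.0,
    logistic in log-contrast; at c == c_thresh the proportion
    correct is 0.625 (halfway between guess rate and 1.0)."""
    return 0.25 + 0.75 / (1.0 + np.exp(-slope * (np.log(c) - np.log(c_thresh))))

contrast = np.array([0.5, 0.7, 1.0, 1.4, 2.0, 2.8, 4.0])      # illustrative
p_correct = np.array([0.27, 0.33, 0.47, 0.65, 0.83, 0.94, 0.98])
popt, _ = curve_fit(psychometric, contrast, p_correct,
                    p0=(1.5, 2.0), bounds=([0.1, 0.1], [10.0, 10.0]))
threshold = popt[0]     # contrast at the curve's midpoint
```

The threshold contrast is then read off as the fitted midpoint parameter rather than by simple thresholding of noisy per-contrast scores.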
Liu, Yongliang; Thibodeaux, Devron; Gamble, Gary; Bauer, Philip; VanDerveer, Don
2012-08-01
Despite considerable efforts in developing curve-fitting protocols to evaluate the crystallinity index (CI) from X-ray diffraction (XRD) measurements, in its present state XRD can only provide a qualitative or semi-quantitative assessment of the amounts of crystalline or amorphous fraction in a sample. The greatest barrier to establishing quantitative XRD is the lack of appropriate cellulose standards, which are needed to calibrate the XRD measurements. In practice, samples with known CI are very difficult to prepare or determine. In a previous study, we reported the development of a simple algorithm for determining fiber crystallinity information from Fourier transform infrared (FT-IR) spectroscopy. Hence, in this study we not only compared the fiber crystallinity information between FT-IR and XRD measurements, by developing a simple XRD algorithm in place of a time-consuming and subjective curve-fitting process, but we also suggested a direct way of determining cotton cellulose CI by calibrating XRD with the use of CI(IR) as references.
Kepler Uniform Modeling of KOIs: MCMC Notes for Data Release 25
NASA Technical Reports Server (NTRS)
Hoffman, Kelsey L.; Rowe, Jason F.
2017-01-01
This document describes data products related to the reported planetary parameters and uncertainties for the Kepler Objects of Interest (KOIs) based on a Markov chain Monte Carlo (MCMC) analysis. Reported parameters, uncertainties, and data products can be found at the NASA Exoplanet Archive. The codes used for this data analysis are available on the Github website (Rowe 2016). The relevant paper for details of the calculations is Rowe et al. (2015). The main differences between the model fits discussed here and those in the DR24 catalogue are that the DR25 light curves were used in the analysis, our processing of the MAST light curves took into account different data flags, the number of chains calculated was doubled to 200 000, and the reported parameters are based on a damped least-squares fit, instead of the median value from the Markov chain or the chain with the lowest χ2 as reported in the past.
NASA Astrophysics Data System (ADS)
Morlot, Thomas; Perret, Christian; Favre, Anne-Catherine; Jalbert, Jonathan
2014-09-01
A rating curve is used to indirectly estimate the discharge in rivers based on water level measurements. The discharge values obtained from a rating curve include uncertainties related to the direct stage-discharge measurements (gaugings) used to build the curves, the quality of fit of the curve to these measurements, and the constant changes in river bed morphology. Moreover, the uncertainty of discharges estimated from a rating curve increases with the “age” of the rating curve. The level of uncertainty at a given point in time is therefore particularly difficult to assess. A “dynamic” method has been developed to compute rating curves while calculating the associated uncertainties, thus making it possible to regenerate streamflow data with uncertainty estimates. The method is based on historical gaugings at hydrometric stations. A rating curve is computed for each gauging and a model of the uncertainty is fitted for each of them. The model of uncertainty takes into account the uncertainties in the measurement of the water level, the quality of fit of the curve, the uncertainty of the gaugings, and the increase in the uncertainty of discharge estimates with the age of the rating curve, computed with a variographic analysis (Jalbert et al., 2011). The presented dynamic method can answer important questions in the field of hydrometry, such as “How many gaugings a year are required to produce streamflow data with an average uncertainty of X%?” and “When, and in what range of water flow rates, should these gaugings be carried out?”. The Rocherousse hydrometric station (France, Haute-Durance watershed, 946 km2) is used as an example throughout the paper. Other stations are used to illustrate certain points.
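A single gauging-based rating-curve fit of the kind described can be sketched with the classic power law Q = a (h − h0)^b. The gaugings below are invented, and the paper's uncertainty model and variographic aging analysis are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def rating_curve(h, a, h0, b):
    """Power-law rating curve Q = a * (h - h0)**b relating stage h to
    discharge Q; the clip guards against h - h0 going negative while
    the optimizer explores h0."""
    return a * np.clip(h - h0, 1e-9, None) ** b

stage = np.array([0.8, 1.0, 1.3, 1.7, 2.2, 2.8])        # m, invented gaugings
discharge = 12.0 * (stage - 0.5) ** 1.6                 # m^3/s, synthetic
popt, pcov = curve_fit(rating_curve, stage, discharge, p0=(10.0, 0.4, 1.5))
perr = np.sqrt(np.diag(pcov))    # 1-sigma parameter uncertainties
```

In the dynamic method, a fit like this would be recomputed at every new gauging, with the uncertainty model layered on top of `perr` and grown with the curve's age.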
Hybrid Micro-Electro-Mechanical Tunable Filter
2007-09-01
(Figure 2.10) One can see the developers have used surface micromachining techniques to build the micromirror structure over the CMOS addressing ... DBRs, microcavity composition, initial air gap, contact layers, substrate ... curve-fit the dispersion data or generate a continuous, wavelength-dependent representation of material dispersion ... manually design the ...
Consideration of Wear Rates at High Velocities
2010-03-01
Strain vs. three-dimensional model ... example single-asperity wear-rate integral ... third-stage slipper accumulated frictional heating ... surface temperature, third-stage slipper (ave = 0.5) ... melt depth example ... A3S, coefficient for frictional-heat curve fit, third-stage slipper ... B3S, coefficient for frictional-heat curve fit, third-stage slipper
Analyser-based phase contrast image reconstruction using geometrical optics.
Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A
2007-07-21
Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 µm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser.
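The Pearson type VII fit to a rocking curve can be sketched as below, using one common parameterization of the profile; the angular scale and the synthetic curve are illustrative, not ELETTRA data.

```python
import numpy as np
from scipy.optimize import curve_fit

def pearson_vii(x, amp, x0, w, m):
    """Symmetric Pearson type VII profile: m = 1 gives a Lorentzian and
    m -> infinity approaches a Gaussian, which is why it can track
    rocking-curve wings better than either shape alone."""
    return amp / (1.0 + ((x - x0) / w) ** 2 / m) ** m

theta = np.linspace(-20.0, 20.0, 201)             # angular offset, illustrative
rocking = pearson_vii(theta, 1.0, 0.0, 5.0, 2.5)  # synthetic rocking curve
popt, _ = curve_fit(pearson_vii, theta, rocking,
                    p0=(0.8, 1.0, 4.0, 2.0),
                    bounds=([0.0, -5.0, 0.1, 0.5], [10.0, 5.0, 50.0, 20.0]))
```

In the phase-retrieval pipeline, the fitted analytic profile (rather than the raw measured curve) supplies the slope and intensity values used for the two-image reconstruction.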
Free-form surface measuring method based on optical theodolite measuring system
NASA Astrophysics Data System (ADS)
Yu, Caili
2012-10-01
In industrial metrology, single-point coordinates, lengths, and large-dimension curved surfaces can be measured through forward intersection by a theodolite measuring system composed of several optical theodolites and one computer. This paper introduces the measuring principle, composition, and functions of a flexible large-dimension three-coordinate measuring system made up of two or more optical theodolites. For curved-surface measurement in particular, 3D data of a spatial free-form surface are acquired through the theodolite measuring system, and a CAD model is formed through surface fitting to directly generate CAM processing data.
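The forward-intersection principle at the heart of such a system can be sketched in 2D: each theodolite defines a ray toward the target, and the target is the ray intersection. This is a simplified planar version; real systems also resolve elevation angles and solve the full 3D geometry.

```python
import numpy as np

def forward_intersection(p1, az1, p2, az2):
    """Planar forward intersection: each instrument at p_i sights the
    target along azimuth az_i (radians, measured from the y axis);
    the target is where the two rays cross."""
    d1 = np.array([np.sin(az1), np.cos(az1)])   # ray directions
    d2 = np.array([np.sin(az2), np.cos(az2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# Stations at (0,0) and (10,0) sighting at +45 and -45 degrees.
target = forward_intersection((0.0, 0.0), np.pi / 4, (10.0, 0.0), -np.pi / 4)
```

Scanning many surface points this way yields the 3D point cloud that is then surface-fitted into the CAD model.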
Using quasars as standard clocks for measuring cosmological redshift.
Dai, De-Chang; Starkman, Glenn D; Stojkovic, Branislav; Stojkovic, Dejan; Weltman, Amanda
2012-06-08
We report hitherto unnoticed patterns in quasar light curves. We characterize segments of the quasar's light curves with the slopes of the straight lines fit through them. These slopes appear to be directly related to the quasars' redshifts. Alternatively, using only global shifts in time and flux, we are able to find significant overlaps between the light curves of different pairs of quasars by fitting the ratio of their redshifts. We are then able to reliably determine the redshift of one quasar from another. This implies that one can use quasars as standard clocks, as we explicitly demonstrate by constructing two independent methods of finding the redshift of a quasar from its light curve.
J. Chris Toney; Karen G. Schleeweis; Jennifer Dungan; Andrew Michaelis; Todd Schroeder; Gretchen G. Moisen
2015-01-01
The North American Forest Dynamics (NAFD) project's Attribution Team is completing nationwide processing of historic Landsat data to provide a comprehensive annual, wall-to-wall analysis of US disturbance history, with attribution, over the last 25+ years. Per-pixel time series analysis based on a new nonparametric curve fitting algorithm yields several metrics useful...
Uncertainty Analysis Principles and Methods
2007-09-01
... error source. The Data Processor converts binary-coded numbers to values, performs D/A curve fitting, and applies any correction factors that may be ... describes the stages or modules involved in the measurement process. We now need to identify all relevant error sources and develop the mathematical ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Brandon D.; Boyd, Iain D.
The sputtering of hexagonal boron nitride (h-BN) by impacts of energetic xenon ions is investigated using a molecular dynamics (MD) model. The model is implemented within an open-source MD framework that utilizes graphics processing units to accelerate its calculations, allowing the sputtering process to be studied in much greater detail than has been feasible in the past. Integrated sputter yields are computed over a range of ion energies from 20 eV to 300 eV, and incidence angles from 0° to 75°. Sputtering of boron is shown to occur at energies as low as 40 eV at normal incidence, and sputtering of nitrogen at energies as low as 30 eV at normal incidence, suggesting a threshold energy between 20 eV and 40 eV. The sputter yields at 0° incidence are compared to existing experimental data and are shown to agree well over the range of ion energies investigated. The semi-empirical Bohdansky curve and an empirical exponential function are fit to the data at normal incidence, and the threshold energy for sputtering is calculated from the Bohdansky curve fit as 35 ± 2 eV. These results are shown to compare well with experimental observations that the threshold energy lies between 20 eV and 40 eV. It is demonstrated that h-BN sputters predominantly as atomic boron and diatomic nitrogen, and the velocity distribution function (VDF) of sputtered boron atoms is investigated. The calculated VDFs are found to reproduce the Sigmund-Thompson distribution predicted by Sigmund's linear cascade theory of sputtering. The average surface binding energy computed from Sigmund-Thompson curve fits is found to be 4.5 eV for ion energies of 100 eV and greater. This compares well to the value of 4.8 eV determined from independent experiments.
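The threshold-energy extraction can be sketched with a simplified Bohdansky-style yield curve. The nuclear stopping power is folded into the prefactor here and the yields are synthetic, so this is a shape illustration rather than the paper's exact fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def bohdansky(E, q, e_th):
    """Simplified Bohdansky-style sputter yield with a sharp threshold
    e_th; the full formula carries the nuclear stopping power, absorbed
    into the prefactor q for this sketch."""
    x = np.clip(e_th / E, 0.0, 1.0)
    return q * (1.0 - x ** (2.0 / 3.0)) * (1.0 - x) ** 2

ion_energy = np.array([40.0, 60.0, 100.0, 150.0, 200.0, 300.0])  # eV
yields = bohdansky(ion_energy, 0.05, 35.0)        # synthetic, threshold 35 eV
popt, _ = curve_fit(bohdansky, ion_energy, yields, p0=(0.03, 25.0))
threshold_ev = float(popt[1])                     # recovered threshold energy
```

Fitting a curve with an explicit threshold parameter is what lets the threshold be quoted with an uncertainty (35 ± 2 eV in the paper) instead of being bracketed by the lowest energies at which sputtering was observed.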
Surface fitting three-dimensional bodies
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.
1974-01-01
The geometry of general three-dimensional bodies is generated from coordinates of points in several cross sections. Since these points may not be smooth, they are divided into segments and general conic sections are curve fit in a least-squares sense to each segment of a cross section. The conic sections are then blended in the longitudinal direction by fitting parametric cubic-spline curves through coordinate points which define the conic sections in the cross-sectional planes. Both the cross-sectional and longitudinal curves may be modified by specifying particular segments as straight lines and slopes at selected points. Slopes may be continuous or discontinuous and finite or infinite. After a satisfactory surface fit has been obtained, cards may be punched with the data necessary to form a geometry subroutine package for use in other computer programs. At any position on the body, coordinates, slopes and second partial derivatives are calculated. The method is applied to a blunted 70 deg delta wing, and it was found to generate the geometry very well.
Focusing of light through turbid media by curve fitting optimization
NASA Astrophysics Data System (ADS)
Gong, Changmei; Wu, Tengfei; Liu, Jietao; Li, Huijuan; Shao, Xiaopeng; Zhang, Jianqi
2016-12-01
The construction of wavefront phase plays a critical role in focusing light through turbid media. We introduce the curve fitting algorithm (CFA) into the feedback control procedure for wavefront optimization. Unlike the existing continuous sequential algorithm (CSA), the CFA locates the optimal phase by fitting a curve to the measured signals. Simulation results show that, similar to the genetic algorithm (GA), the proposed CFA technique is far less susceptible to the experimental noise than the CSA. Furthermore, only three measurements of feedback signals are enough for CFA to fit the optimal phase while obtaining a higher focal intensity than the CSA and the GA, dramatically shortening the optimization time by a factor of 3 compared with the CSA and the GA. The proposed CFA approach can be applied to enhance the focus intensity and boost the focusing speed in the fields of biological imaging, particle trapping, laser therapy, and so on, and might help to focus light through dynamic turbid media.
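The three-measurement idea works because, with all other segments fixed, the focal intensity varies sinusoidally with one segment's phase, so three samples determine the sinusoid exactly. A minimal sketch in our own notation; the published CFA details may differ.

```python
import numpy as np

def optimal_phase(phis, intensities):
    """Recover the phase maximizing I(phi) = c0 + a*cos(phi) + b*sin(phi)
    from three (phase, intensity) measurements by solving a 3x3 linear
    system -- the curve-fitting idea behind CFA-style optimization."""
    M = np.column_stack([np.ones(3), np.cos(phis), np.sin(phis)])
    c0, a, b = np.linalg.solve(M, intensities)
    return np.arctan2(b, a) % (2 * np.pi)   # phase of maximum intensity

true_phi = 1.2                               # "unknown" optimal phase
phis = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
meas = 5.0 + 2.0 * np.cos(phis - true_phi)   # noise-free simulated feedback
phi_best = optimal_phase(phis, meas)
```

With noisy feedback one would fit the same sinusoid to the three (or more) samples in a least-squares sense, which is why the approach is far less noise-sensitive than stepping the phase exhaustively as in the CSA.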
ERIC Educational Resources Information Center
Ferrer, Emilio; Hamagami, Fumiaki; McArdle, John J.
2004-01-01
This article offers different examples of how to fit latent growth curve (LGC) models to longitudinal data using a variety of different software programs (i.e., LISREL, Mx, Mplus, AMOS, SAS). The article shows how the same model can be fitted using both structural equation modeling and multilevel software, with nearly identical results, even in…
ERIC Educational Resources Information Center
St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane
2009-01-01
To date, there have been no studies comparing parametric and nonparametric Item Characteristic Curve (ICC) estimation methods on the effectiveness of Person-Fit Statistics (PFS). The primary aim of this study was to determine if the use of ICCs estimated by nonparametric methods would increase the accuracy of item response theory-based PFS for…
Improvements in Spectrum's fit to program data tool.
Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John
2017-04-01
The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived by either fitting to seroprevalence surveillance and survey data or generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections or people living with HIV and AIDS related deaths. This article describes development and application of the fit to program data (FPD) tool in Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV or AIDS-related deaths. Inputs can be adjusted for proportions undiagnosed or misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best fitting curve. Asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.
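The double-logistic incidence trend can be sketched as the product of a rising and a falling logistic, fitted by least squares. The parameter names and the synthetic epidemic below are ours; the FPD tool also supports maximum likelihood fitting and adjustments for undiagnosed cases.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, a, alpha, t0, beta, t1):
    """Double-logistic trend: a rise and a later decline, each with its
    own rate (alpha, beta) and midpoint year (t0, t1), scaled by a."""
    rise = 1.0 / (1.0 + np.exp(-alpha * (t - t0)))
    fall = 1.0 / (1.0 + np.exp(beta * (t - t1)))
    return a * rise * fall

years = np.arange(1985.0, 2016.0)
observed = double_logistic(years, 0.02, 0.6, 1993.0, 0.3, 2005.0)  # synthetic
popt, _ = curve_fit(double_logistic, years, observed,
                    p0=(0.03, 0.4, 1995.0, 0.2, 2003.0), maxfev=20000)
```

A simple logistic is the special case with no declining factor, matching the four countries where the simpler form performed better.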
NASA Astrophysics Data System (ADS)
Gentile, G.; Famaey, B.; de Blok, W. J. G.
2011-03-01
We present an analysis of 12 high-resolution galactic rotation curves from The HI Nearby Galaxy Survey (THINGS) in the context of modified Newtonian dynamics (MOND). These rotation curves were selected to be the most reliable for mass modelling, and they are the highest quality rotation curves currently available for a sample of galaxies spanning a wide range of luminosities. We fit the rotation curves with the "simple" and "standard" interpolating functions of MOND, and we find that the "simple" function yields better results. We also redetermine the value of a0, and find a median value very close to the one determined in previous studies, a0 = (1.22 ± 0.33) × 10⁻⁸ cm s⁻². Leaving the distance as a free parameter within the uncertainty of its best independently determined value leads to excellent quality fits for 75% of the sample. Among the three exceptions, two are also known to give relatively poor fits in Newtonian dynamics plus dark matter. The remaining case (NGC 3198) presents some tension between the observations and the MOND fit, which might, however, be explained by the presence of non-circular motions, by a small distance, or by a value of a0 at the lower end of our best-fit interval, 0.9 × 10⁻⁸ cm s⁻². The best-fit stellar M/L ratios are generally in remarkable agreement with the predictions of stellar population synthesis models. We also show that the narrow range of gravitational accelerations found to be generated by dark matter in galaxies is consistent with the narrow range of additional gravity predicted by MOND.
NASA Technical Reports Server (NTRS)
Everhart, Joel L.
1996-01-01
Orifice-to-orifice inconsistencies in data acquired with an electronically scanned pressure system at the beginning of a wind tunnel experiment forced modifications to the standard instrument calibration procedures. These modifications included a large increase in the number of calibration points, which allowed a critical examination of the calibration curve-fit process and a subsequent post-test reduction of the pressure data. Evaluation of these data has resulted in an improved functional representation of the pressure-voltage signature of electronically scanned pressure sensors, which can reduce the errors due to calibration curve fit to under 0.10 percent of reading, compared with the manufacturer-specified 0.10 percent of full scale. Application of the improved calibration function allows a more rational selection of the calibration set-point pressures. These pressures should be adjusted to achieve a voltage output which matches the physical shape of the pressure-voltage signature of the sensor. This process is conducted in lieu of the more traditional approach, where a calibration pressure is specified and the resulting sensor voltage is recorded. The fifteen calibrations acquired over the two-week duration of the wind tunnel test were further used to perform a preliminary statistical assessment of the variation in the calibration process. The results allowed the estimation of the bias uncertainty for a single instrument calibration, and they form the precursor for more extensive and more controlled studies in the laboratory.
Chong, Bin; Yu, Dongliang; Jin, Rong; Wang, Yang; Li, Dongdong; Song, Ye; Gao, Mingqi; Zhu, Xufei
2015-04-10
Anodic TiO2 nanotubes have been studied extensively for many years; however, their growth kinetics remain unclear. A systematic study of the current transient under constant anodizing voltage has not appeared in the original literature. Here, a derivation and its corresponding theoretical formula are proposed to overcome this challenge. In this paper, theoretical expressions for the time-dependent ionic current and electronic current are derived to explore the anodizing process of Ti. The anodizing current-time curves under different anodizing voltages and different temperatures are experimentally investigated in the anodization of Ti. Furthermore, the quantitative relationship between the thickness of the barrier layer and anodizing time, and the relationships between the ionic/electronic currents and temperature, are proposed in this paper. All of the current-transient plots can be fitted consistently by the proposed theoretical expressions. Additionally, this is the first time that the coefficient A of the exponential relationship (ionic current j(ion) = A exp(BE)) has been determined under various temperatures and voltages. The results indicate that as temperature and voltage increase, the ionic and electronic currents both increase, and that temperature has a larger effect on the electronic current than on the ionic current. These results can advance the study of growth kinetics from a qualitative to a quantitative level.
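Determining A and B from measured ionic currents reduces to a straight-line fit after taking logarithms, since ln j = ln A + B·E. The field values and coefficients below are illustrative stand-ins, not measurements from the paper.

```python
import numpy as np

# High-field ionic-current law: j_ion = A * exp(B * E). Taking logs
# gives ln j = ln A + B * E, so A and B follow from a linear fit.
E = np.linspace(5e8, 9e8, 9)          # electric field strength, V/m (assumed)
A_true, B_true = 1e-8, 2.0e-8         # assumed coefficients for the sketch
j_ion = A_true * np.exp(B_true * E)   # synthetic ionic-current data

B_fit, lnA_fit = np.polyfit(E, np.log(j_ion), 1)   # slope, intercept
A_fit = float(np.exp(lnA_fit))
```

Repeating this fit at each temperature and voltage is how the temperature and voltage dependence of A can be mapped out, as the abstract describes.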
A Statistical Approach to Identify Superluminous Supernovae and Probe Their Diversity
NASA Astrophysics Data System (ADS)
Inserra, C.; Prajs, S.; Gutierrez, C. P.; Angus, C.; Smith, M.; Sullivan, M.
2018-02-01
We investigate the identification of hydrogen-poor superluminous supernovae (SLSNe I) using a photometric analysis, without including an arbitrary magnitude threshold. We assemble a homogeneous sample of previously classified SLSNe I from the literature, and fit their light curves using Gaussian processes. From the fits, we identify four photometric parameters that have a high statistical significance when correlated, and combine them in a parameter space that conveys information on their luminosity and color evolution. This parameter space presents a new definition for SLSNe I, which can be used to analyze existing and future transient data sets. We find that 90% of previously classified SLSNe I meet our new definition. We also examine the evidence for two subclasses of SLSNe I, combining their photometric evolution with spectroscopic information, namely the photospheric velocity and its gradient. A cluster analysis reveals the presence of two distinct groups. “Fast” SLSNe show fast light curves and color evolution, large velocities, and a large velocity gradient. “Slow” SLSNe show slow light curve and color evolution, small expansion velocities, and an almost non-existent velocity gradient. Finally, we discuss the impact of our analyses in the understanding of the powering engine of SLSNe, and their implementation as cosmological probes in current and future surveys.
NASA Astrophysics Data System (ADS)
Chong, Bin; Yu, Dongliang; Jin, Rong; Wang, Yang; Li, Dongdong; Song, Ye; Gao, Mingqi; Zhu, Xufei
2015-04-01
Anodic TiO2 nanotubes have been studied extensively for many years. However, their growth kinetics remains unclear, and a systematic study of the current transient under constant anodizing voltage has been missing from the literature. Here, a derivation and a corresponding theoretical formula are proposed to overcome this challenge. Theoretical expressions for the time-dependent ionic current and electronic current are derived to explore the anodizing process of Ti, and the anodizing current-time curves under different anodizing voltages and different temperatures are investigated experimentally. Furthermore, the quantitative relationship between the thickness of the barrier layer and anodizing time, and the relationships between the ionic/electronic currents and temperature, are proposed in this paper. All of the current-transient plots can be fitted consistently by the proposed theoretical expressions. Additionally, this is the first time that the coefficient A of the exponential relationship (ionic current j_ion = A exp(BE)) has been determined under various temperatures and voltages. The results indicate that as temperature and voltage increase, the ionic and electronic currents both increase, and that temperature has a larger effect on the electronic current than on the ionic current. These results can help advance the study of growth kinetics from a qualitative to a quantitative level.
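An exponential relation of the form j_ion = A·exp(B·E) can be estimated from current-field data by ordinary least squares on the logarithm, since ln j is linear in E. A minimal sketch in Python, with synthetic data standing in for measurements (all values illustrative):

```python
import math

def fit_exponential(E, j):
    """Fit j = A*exp(B*E) via least squares on ln(j) = ln(A) + B*E."""
    n = len(E)
    y = [math.log(v) for v in j]
    x_mean = sum(E) / n
    y_mean = sum(y) / n
    sxx = sum((x - x_mean) ** 2 for x in E)
    sxy = sum((x - x_mean) * (yy - y_mean) for x, yy in zip(E, y))
    B = sxy / sxx
    A = math.exp(y_mean - B * x_mean)
    return A, B

# synthetic check with known coefficients A = 2.0, B = 0.5
E = [1.0, 2.0, 3.0, 4.0, 5.0]
j = [2.0 * math.exp(0.5 * e) for e in E]
A, B = fit_exponential(E, j)
```

Note that log-linear fitting implicitly weights relative rather than absolute errors; for noisy currents spanning decades this is often exactly what is wanted.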
From Experiment to Theory: What Can We Learn from Growth Curves?
Kareva, Irina; Karev, Georgy
2018-01-01
Finding an appropriate functional form to describe population growth based on key properties of a described system allows making justified predictions about future population development. This information can be of vital importance in all areas of research, ranging from cell growth to global demography. Here, we use this connection between theory and observation to pose the following question: what can we infer about intrinsic properties of a population (i.e., degree of heterogeneity, or dependence on external resources) based on which growth function best fits its growth dynamics? We investigate several nonstandard classes of multi-phase growth curves that capture different stages of population growth; these models include hyperbolic-exponential, exponential-linear, and exponential-linear-saturation growth patterns. The constructed models account explicitly for the process of natural selection within inhomogeneous populations. Based on the underlying hypotheses for each of the models, we identify whether a population best fit by a particular curve is more likely to be homogeneous or heterogeneous, whether it grows in a density-dependent or frequency-dependent manner, and whether it depends on external resources during any or all stages of its development. We apply these predictions to cancer cell growth and demographic data obtained from the literature. Our theory, if confirmed, can provide an additional biomarker and a predictive tool to complement experimental research.
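One of the multi-phase forms named above, exponential-linear growth, can be written down and fitted directly; the sketch below uses a coarse grid search over the growth rate and switch time against synthetic data (the functional form and parameter names are illustrative, not the paper's exact equations):

```python
import math

def exp_linear(t, n0, r, tau):
    """Exponential growth that switches to linear growth at t = tau,
    with the linear slope matched to the exponential at the switch."""
    if t < tau:
        return n0 * math.exp(r * t)
    return n0 * math.exp(r * tau) * (1.0 + r * (t - tau))

def sse(params, data):
    """Sum of squared errors of the model against (t, y) pairs."""
    n0, r, tau = params
    return sum((exp_linear(t, n0, r, tau) - y) ** 2 for t, y in data)

# synthetic data from known parameters, then a coarse grid search
data = [(float(t), exp_linear(float(t), 1.0, 0.8, 3.0)) for t in range(8)]
best = min(((1.0, r / 100.0, tau / 10.0)
            for r in range(50, 110) for tau in range(20, 41)),
           key=lambda p: sse(p, data))
```

A grid search is crude but transparent; in practice one would refine the best grid point with a local optimiser.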
NASA Astrophysics Data System (ADS)
Mahabadi, Nariman; Dai, Sheng; Seol, Yongkoo; Sup Yun, Tae; Jang, Jaewon
2016-08-01
The water retention curve and relative permeability are critical to predicting gas and water production from hydrate-bearing sediments. However, values for key parameters that characterize gas and water flows during hydrate dissociation have not been identified due to experimental challenges. This study utilizes the combined techniques of micro-focus X-ray computed tomography (CT) and pore-network model simulation to identify proper values for those key parameters, such as gas entry pressure, residual water saturation, and curve fitting values. Hydrates with various saturations and morphologies are realized in the pore network that was extracted from micron-resolution CT images of sediments recovered from the hydrate deposit at the Mallik site, and then the processes of gas invasion, hydrate dissociation, gas expansion, and gas and water permeability are simulated. Results show that greater hydrate saturation in sediments leads to higher gas entry pressure, higher residual water saturation, and a steeper water retention curve. An increase in hydrate saturation decreases gas permeability but has marginal effects on water permeability in sediments with uniformly distributed hydrate. Hydrate morphology has more significant impacts than hydrate saturation on relative permeability. Sediments with heterogeneously distributed hydrate tend to result in lower residual water saturation and higher gas and water permeability. In this sense, the Brooks-Corey model that uses two fitting parameters individually for gas and water permeability properly captures the effect of hydrate saturation and morphology on gas and water flows in hydrate-bearing sediments.
NASA Technical Reports Server (NTRS)
Rodriguez, Pedro I.
1986-01-01
A computer implementation of Prony's method of curve fitting by exponential functions is presented. The method, although more than one hundred years old, has not been utilized to its fullest capabilities due to the restriction that the time range must be given in equal increments in order to obtain the best curve fit for a given set of data. The procedure used in this paper utilizes the 3-dimensional capabilities of the Interactive Graphics Design System (I.G.D.S.) in order to obtain the equal time increments. The resultant information is then input into a computer program that solves directly for the exponential constants yielding the best curve fit. Once the exponential constants are known, a simple least squares solution can be applied to obtain the final form of the equation.
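The core of Prony's method is short enough to sketch: from equally spaced samples one solves a linear-prediction system, takes roots of the characteristic polynomial to get the exponential constants, and then recovers the amplitudes linearly. Below is the exact two-exponential case from four samples (an illustration, not the I.G.D.S. program):

```python
import math

def prony_two_term(y, dt):
    """Exact 2-exponential Prony fit from 4 equally spaced samples y[0..3].
    Returns (c1, b1, c2, b2) such that y(t) = c1*exp(b1*t) + c2*exp(b2*t)."""
    # linear prediction: y[k+2] = a1*y[k+1] + a2*y[k], solved from k = 0, 1
    det = y[1] * y[1] - y[0] * y[2]
    a1 = (y[2] * y[1] - y[3] * y[0]) / det
    a2 = (y[3] * y[1] - y[2] * y[2]) / det
    # characteristic roots mu of mu^2 - a1*mu - a2 = 0
    disc = math.sqrt(a1 * a1 + 4.0 * a2)
    mu1, mu2 = (a1 + disc) / 2.0, (a1 - disc) / 2.0
    # amplitudes from y[0] = c1 + c2 and y[1] = c1*mu1 + c2*mu2
    c2 = (y[1] - mu1 * y[0]) / (mu2 - mu1)
    c1 = y[0] - c2
    return c1, math.log(mu1) / dt, c2, math.log(mu2) / dt
```

With noisy or overdetermined data the same structure carries over, but the prediction coefficients and amplitudes are found by least squares instead of exact solves, which is the step the abstract's final sentence alludes to.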
Development of a program to fit data to a new logistic model for microbial growth.
Fujikawa, Hiroshi; Kano, Yoshihiro
2009-06-01
Recently, we developed a mathematical model for microbial growth in food. The model successfully predicted microbial growth under various temperature patterns. In this study, we developed a program to fit data to the model with a spreadsheet program, Microsoft Excel. Users can instantly get curves fitted to the model by inputting growth data and choosing the slope portion of a curve. The program can also estimate growth parameters, including the rate constant of growth and the lag period. This program would be a useful tool for analyzing growth data and further predicting microbial growth.
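Fujikawa's model is an extended logistic equation; as a stand-in, the classical logistic curve can be linearised and fitted with ordinary least squares when the maximum population K is known. This is only an illustrative sketch, not the Excel program's algorithm:

```python
import math

def fit_logistic_rate(times, counts, K):
    """Recover r and t0 of N(t) = K / (1 + exp(-r*(t - t0))) from the
    linearisation ln(K/N - 1) = -r*t + r*t0, fitted by least squares."""
    y = [math.log(K / n - 1.0) for n in counts]
    n = len(times)
    xm, ym = sum(times) / n, sum(y) / n
    slope = (sum((a - xm) * (b - ym) for a, b in zip(times, y))
             / sum((a - xm) ** 2 for a in times))
    r = -slope
    t0 = (ym - slope * xm) / r   # intercept equals r*t0
    return r, t0
```

The lag period and maximum specific growth rate reported by growth-fitting programs are typically derived from exactly these two quantities plus K.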
Zhu, Mingping; Chen, Aiqing
2017-01-01
This study aimed to compare within-subject blood pressure (BP) variabilities from different measurement techniques. Cuff pressures from three repeated BP measurements were obtained from 30 normotensive and 30 hypertensive subjects. Automatic BPs were determined from the pulses with normalised peak amplitude larger than a threshold (0.5 for SBP, 0.7 for DBP, and 1.0 for MAP). They were also determined from the cuff pressures associated with the above thresholds on a polynomial curve fitted to the oscillometric pulse peaks. Finally, the standard deviation (SD) of three repeats and its coefficient of variability (CV) were compared between the two automatic techniques. For the normotensive group, polynomial curve fitting significantly reduced SD of repeats from 3.6 to 2.5 mmHg for SBP and from 3.7 to 2.1 mmHg for MAP and reduced CV from 3.0% to 2.2% for SBP and from 4.3% to 2.4% for MAP (all P < 0.01). For the hypertensive group, SD of repeats decreased from 6.5 to 5.5 mmHg for SBP and from 6.7 to 4.2 mmHg for MAP, and CV decreased from 4.2% to 3.6% for SBP and from 5.8% to 3.8% for MAP (all P < 0.05). In conclusion, polynomial curve fitting of oscillometric pulses had the ability to reduce automatic BP measurement variability. PMID:28785580
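The polynomial-envelope idea can be sketched as a least-squares parabola through (cuff pressure, normalised amplitude) peak pairs, after which the threshold pressures follow from the quadratic formula. The polynomial order and numbers below are illustrative; the paper's exact procedure may differ:

```python
import math

def quad_fit(xs, ys):
    """Least-squares parabola ys ~ a + b*x + c*x^2 via 3x3 normal equations."""
    S = [sum(x ** k for x in xs) for k in range(5)]
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    M = [[S[0], S[1], S[2], T[0]],
         [S[1], S[2], S[3], T[1]],
         [S[2], S[3], S[4], T[2]]]
    for i in range(3):                       # Gauss-Jordan with pivoting
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [u - f * v for u, v in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def pressure_at_threshold(a, b, c, frac):
    """Cuff pressures where the fitted parabola equals frac of its peak
    (c < 0 for an envelope); returns the (diastolic, systolic) pair."""
    peak_p = -b / (2.0 * c)
    peak = a + b * peak_p + c * peak_p ** 2
    disc = math.sqrt(b * b - 4.0 * c * (a - frac * peak))
    return tuple(sorted(((-b - disc) / (2.0 * c), (-b + disc) / (2.0 * c))))
```

Reading BP values off a smooth fitted envelope, rather than off individual noisy pulses, is what reduces the repeat-to-repeat variability reported above.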
A Multi-year Multi-passband CCD Photometric Study of the W UMa Binary EQ Tauri
NASA Astrophysics Data System (ADS)
Alton, K. B.
2009-12-01
A revised ephemeris and updated orbital period for EQ Tau have been determined from newly acquired (2007-2009) CCD-derived photometric data. A Roche-type model based on the Wilson-Devinney code produced simultaneous theoretical fits of light curve data in three passbands by invoking cold spots on the primary component. These new model fits, along with similar light curve data for EQ Tau collected during the previous six seasons (2000-2006), provided a rare opportunity to follow the seasonal appearance of star spots on a W UMa binary system over nine consecutive years. Fixed values for q, Ω1,2, T1, T2, and i based upon the mean of eleven separately determined model fits produced for this system are hereafter proposed for future light curve modeling of EQ Tau. With the exception of the 2001 season all other light curves produced since then required a spotted solution to address the flux asymmetry exhibited by this binary system at Max I and Max II. At least one cold spot on the primary appears in seven out of twelve light curves for EQ Tau produced over the last nine years, whereas in six instances two cold spots on the primary star were invoked to improve the model fit. Solutions using a hot spot were less common and involved positioning a single spot on the primary constituent during the 2001-2002, 2002-2003, and 2005-2006 seasons.
Method and apparatus for air-coupled transducer
NASA Technical Reports Server (NTRS)
Song, Junho (Inventor); Chimenti, Dale E. (Inventor)
2010-01-01
An air-coupled transducer includes an ultrasonic transducer body having a radiation end with a backing fixture at the radiation end. There is a flexible backplate conformingly fit to the backing fixture and a thin membrane (preferably a metallized polymer) conformingly fit to the flexible backplate. In one embodiment, the backing fixture is spherically curved and the flexible backplate is spherically curved. The flexible backplate is preferably patterned with pits or depressions.
Fitting integrated enzyme rate equations to progress curves with the use of a weighting matrix.
Franco, R; Aran, J M; Canela, E I
1991-01-01
A method is presented for fitting the (product formed, time) value pairs taken from progress curves to the integrated rate equation. The procedure is applied to the estimation of the kinetic parameters of the adenosine deaminase system. Simulation studies demonstrate the capabilities of this strategy. A copy of the FORTRAN77 program used can be obtained from the authors by request. PMID:2006914
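The weighting-matrix idea can be illustrated in its simplest form: a straight-line fit with a diagonal weight matrix, where each progress-curve point carries its own weight. The paper's treatment of integrated rate equations is more general than this sketch:

```python
def weighted_line_fit(x, y, w):
    """Weighted least-squares line y = a + b*x; w holds per-point weights
    (the diagonal of a weighting matrix, assumed diagonal here)."""
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    delta = S * Sxx - Sx * Sx
    a = (Sxx * Sy - Sx * Sxy) / delta
    b = (S * Sxy - Sx * Sy) / delta
    return a, b
```

A full (non-diagonal) weighting matrix additionally accounts for correlations between successive points on the same progress curve, which is the refinement the paper is about.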
Tromberg, B.J.; Tsay, T.T.; Berns, M.W.; Svaasand, L.O.; Haskell, R.C.
1995-06-13
Optical measurement of turbid media, that is, media characterized by multiple light scattering, is provided through an apparatus and method for exposing a sample to a modulated laser beam. The light beam is modulated at a fundamental frequency and at a plurality of integer harmonics thereof. Modulated light returned from the sample is preferentially detected at cross frequencies slightly higher than the fundamental frequency and at integer harmonics of the same. The received radiance at the beat or cross frequencies is compared against a reference signal to provide a measure of the phase lag of the radiance and of the modulation ratio relative to a reference beam. The phase and modulation amplitude are then provided as a frequency spectrum by an array processor, to which a computer applies a complete curve fit in the case of highly scattering samples or a linear curve fit below a predetermined frequency in the case of highly absorptive samples. The curve fit in any case is determined by the absorption and scattering coefficients together with the concentration of the active substance in the sample. Therefore, the curve fitting to the frequency spectrum can be used for both qualitative and quantitative analysis of substances in the sample even though the sample is highly turbid. 14 figs.
Fitting Photometry of Blended Microlensing Events
NASA Astrophysics Data System (ADS)
Thomas, Christian L.; Griest, Kim
2006-03-01
We reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g., Woźniak & Paczyński) that this is a difficult proposition because of the degeneracy of blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (peak region and wings) of high-magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction, and study the importance of non-Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate microlensing optical depth.
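The underlying model can be written down directly: the standard Paczyński point-lens magnification, with a blend fraction `f` diluting the lensed flux. This sketches the model being fitted, not the authors' fitting code:

```python
import math

def paczynski_mag(t, t0, tE, u0):
    """Point-lens magnification for impact parameter u0, peak time t0,
    and Einstein-radius crossing time tE."""
    u = math.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def blended_flux(t, t0, tE, u0, f):
    """Normalised observed flux when only a fraction f of the baseline
    flux belongs to the lensed source; the rest (1 - f) is blend light."""
    return f * paczynski_mag(t, t0, tE, u0) + (1.0 - f)
```

The degeneracy discussed in the abstract arises because lowering `f` while shrinking `u0` and stretching `tE` produces nearly identical light curves away from the peak, which is why peak-region and wing photometry is the most constraining.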
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
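Of the models compared, Wood's curve y(t) = a·t^b·e^(−ct) is the most familiar; one convenient way to fit it is linear least squares on ln y, since ln y = ln a + b·ln t − c·t. This is a sketch only; the study itself fitted non-linear mixed models with PROC NLMIXED:

```python
import math

def fit_wood(t, y):
    """Fit Wood's curve y = a * t**b * exp(-c*t) by linear least squares
    on ln y with design matrix columns [1, ln t, -t]."""
    rows = [(1.0, math.log(ti), -ti) for ti in t]
    z = [math.log(yi) for yi in y]
    # normal equations X^T X beta = X^T z as a 3x4 augmented system
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
         + [sum(r[i] * zi for r, zi in zip(rows, z))] for i in range(3)]
    for i in range(3):                       # Gauss-Jordan with pivoting
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(3):
            if r != i:
                f = A[r][i] / A[i][i]
                A[r] = [u - f * v for u, v in zip(A[r], A[i])]
    ln_a, b, c = (A[i][3] / A[i][i] for i in range(3))
    return math.exp(ln_a), b, c
```

Goodness-of-fit comparisons such as AIC then reduce to comparing residual sums of squares penalised by the number of parameters, which is how the study ranks the seven candidate curves.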
NASA Astrophysics Data System (ADS)
Salim, Samir; Boquien, Médéric; Lee, Janice C.
2018-05-01
We study the dust attenuation curves of 230,000 individual galaxies in the local universe, ranging from quiescent to intensely star-forming systems, using GALEX, SDSS, and WISE photometry calibrated on the Herschel ATLAS. We use a new method of constraining SED fits with infrared luminosity (SED+LIR fitting), and parameterized attenuation curves determined with the CIGALE SED-fitting code. Attenuation curve slopes and UV bump strengths are reasonably well constrained independently from one another. We find that Aλ/AV attenuation curves exhibit a very wide range of slopes that are on average as steep as the curve slope of the Small Magellanic Cloud (SMC). The slope is a strong function of optical opacity. Opaque galaxies have shallower curves—in agreement with recent radiative transfer models. The dependence of slopes on the opacity produces an apparent dependence on stellar mass: more massive galaxies have shallower slopes. Attenuation curves exhibit a wide range of UV bump amplitudes, from none to Milky Way (MW)-like, with an average strength one-third that of the MW bump. Notably, local analogs of high-redshift galaxies have an average curve that is somewhat steeper than the SMC curve, with a modest UV bump that can be, to first order, ignored, as its effect on the near-UV magnitude is 0.1 mag. Neither the slopes nor the strengths of the UV bump depend on gas-phase metallicity. Functional forms for attenuation laws are presented for normal star-forming galaxies, high-z analogs, and quiescent galaxies. We release the catalog of associated star formation rates and stellar masses (GALEX–SDSS–WISE Legacy Catalog 2).
Characterizing the UV-to-NIR shape of the dust attenuation curve of IR luminous galaxies up to z ˜ 2
NASA Astrophysics Data System (ADS)
Lo Faro, B.; Buat, V.; Roehlly, Y.; Alvarez-Marquez, J.; Burgarella, D.; Silva, L.; Efstathiou, A.
2017-12-01
In this work, we investigate the far-ultraviolet (UV) to near-infrared (NIR) shape of the dust attenuation curve of a sample of IR-selected dust obscured (ultra)luminous IR galaxies at z ∼ 2. The spectral energy distributions (SEDs) are fitted with Code Investigating GALaxy Emission, a physically motivated spectral-synthesis model based on energy balance. Its flexibility allows us to test a wide range of different analytical prescriptions for the dust attenuation curve, including the well-known Calzetti and Charlot & Fall curves, and modified versions of them. The attenuation curves computed under the assumption of our reference double power-law model are in very good agreement with those derived, in previous works, with radiative transfer (RT) SED fitting. We investigate the position of our galaxies in the IRX-β diagram and find this to be consistent with greyer slopes, on average, in the UV. We also find evidence for a flattening of the attenuation curve in the NIR with respect to more classical Calzetti-like recipes. This larger NIR attenuation yields larger derived stellar masses from SED fitting, by a median factor of ∼1.4 and up to a factor ∼10 for the most extreme cases. The star formation rate appears instead to be more dependent on the total amount of attenuation in the galaxy. Our analysis highlights the need for a flexible attenuation curve when reproducing the physical properties of a large variety of objects.
NASA Astrophysics Data System (ADS)
Navascues, M. A.; Sebastian, M. V.
Fractal interpolants of Barnsley are defined for any continuous function defined on a real compact interval. The uniform distance between the function and its approximant is bounded in terms of the vertical scale factors. As a general result, the density of the affine fractal interpolation functions of Barnsley in the space of continuous functions on a compact interval is proved. A method of data fitting by means of fractal interpolation functions is proposed. The procedure is applied to the quantification of cognitive brain processes. In particular, the increase in the complexity of the electroencephalographic signal produced by the execution of a test of visual attention is studied. The experiment was performed on two groups of children: a healthy control group and a set of children diagnosed with an attention deficit disorder.
Fukui, Atsuko; Fujii, Ryuta; Yonezawa, Yorinobu; Sunada, Hisakazu
2004-03-01
The release properties of phenylpropanolamine hydrochloride (PPA) from ethylcellulose (EC) matrix granules prepared by an extrusion granulation method were examined. The release process could be divided into two parts; the first and second stages were analyzed by applying the square-root time law and cube-root law equations, respectively. The validity of the treatments was confirmed by the agreement of the simulation curve with the measured curve. In the first stage, PPA was released from the gel layer of swollen EC in the matrix granules. In the second stage, the drug existing below the gel layer dissolved and was released through the gel layer. The effect of the binder solution on the release from EC matrix granules was also examined. The binder solutions were prepared from various EC and ethanol (EtOH) concentrations. The media changed from a good solvent to a poor solvent with decreasing EtOH concentration. The matrix structure changed from loose to compact with increasing EC concentration. A preferable EtOH concentration region, in which the release process was easily predictable, was observed. The time and release ratio at the connection point of the simulation curves were also examined to determine the validity of the analysis.
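The two-stage treatment can be sketched with a square-root time (Higuchi-type) law for the first stage and a cube-root (Hixson-Crowell-type) law for the second. These are the standard textbook forms; the paper's exact equations and constants are not reproduced here:

```python
import math

def higuchi_release(t, k):
    """First stage: square-root time law, fraction released Q = k*sqrt(t)."""
    return k * math.sqrt(t)

def hixson_crowell_remaining(t, w0, kappa):
    """Second stage: cube-root law, w0**(1/3) - w(t)**(1/3) = kappa*t;
    returns the mass remaining (clipped at zero once fully dissolved)."""
    root = w0 ** (1.0 / 3.0) - kappa * t
    return max(root, 0.0) ** 3
```

A simulation curve for the whole process joins the two laws at a connection point; checking the time and release ratio at that point against the data is the validity test the abstract describes.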
Performance of Transit Model Fitting in Processing Four Years of Kepler Science Data
NASA Astrophysics Data System (ADS)
Li, Jie; Burke, Christopher J.; Jenkins, Jon Michael; Quintana, Elisa V.; Rowe, Jason; Seader, Shawn; Tenenbaum, Peter; Twicken, Joseph D.
2014-06-01
We present transit model fitting performance of the Kepler Science Operations Center (SOC) Pipeline in processing four years of science data, which were collected by the Kepler spacecraft from May 13, 2009 to May 12, 2013. Threshold Crossing Events (TCEs), which represent transiting planet detections, are generated by the Transiting Planet Search (TPS) component of the pipeline and subsequently processed in the Data Validation (DV) component. The transit model is used in DV to fit TCEs and derive parameters that are used in various diagnostic tests to validate planetary candidates. The standard transit model includes five fit parameters: transit epoch time (i.e. central time of first transit), orbital period, impact parameter, ratio of planet radius to star radius and ratio of semi-major axis to star radius. In the latest Kepler SOC pipeline codebase, the light curve of the target for which a TCE is generated is initially fitted by a trapezoidal model with four parameters: transit epoch time, depth, duration and ingress time. The trapezoidal model fit, implemented with repeated Levenberg-Marquardt minimization, provides a quick and high fidelity assessment of the transit signal. The fit parameters of the trapezoidal model with the minimum chi-square metric are converted to set initial values of the fit parameters of the standard transit model. Additional parameters, such as the equilibrium temperature and effective stellar flux of the planet candidate, are derived from the fit parameters of the standard transit model to characterize pipeline candidates for the search of Earth-size planets in the Habitable Zone. The uncertainties of all derived parameters are updated in the latest codebase to account for the propagated errors of the fit parameters as well as the uncertainties in stellar parameters.
The results of the transit model fitting of the TCEs identified by the Kepler SOC Pipeline, including fitted and derived parameters, fit goodness metrics and diagnostic figures, are included in the DV report and one-page report summary, which are accessible to the science community at the NASA Exoplanet Archive. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
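The four-parameter trapezoidal model described above can be written compactly. This sketch assumes a symmetric trapezoid with `duration` measured between first and last contact (an illustration of the model shape, not the SOC pipeline code):

```python
def trapezoid_flux(t, epoch, depth, duration, ingress):
    """Relative flux of a trapezoidal transit centred at `epoch`.
    `ingress` (> 0) is the ramp time on each side of the flat bottom."""
    dt = abs(t - epoch)
    half = duration / 2.0
    if dt >= half:
        return 1.0                                   # out of transit
    if dt <= half - ingress:
        return 1.0 - depth                           # flat bottom
    return 1.0 - depth * (half - dt) / ingress       # ingress/egress ramp
```

Because the model is piecewise linear in time, a Levenberg-Marquardt fit of these four parameters converges quickly, which is what makes it a good initial-guess generator for the slower five-parameter physical transit model.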
Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis
2014-01-01
The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175
Phytoplankton productivity in relation to light intensity: A simple equation
Peterson, D.H.; Perry, M.J.; Bencala, K.E.; Talbot, M.C.
1987-01-01
A simple exponential equation is used to describe photosynthetic rate as a function of light intensity for a variety of unicellular algae and higher plants, where photosynthesis is proportional to (1 − e^(−ΨI)). The parameter Ψ (= Ik^(−1)) is derived by a simultaneous curve-fitting method, where I is incident quantum-flux density. The exponential equation is tested against a wide range of data and is found to adequately describe P vs. I curves. The errors associated with the photosynthetic parameters are calculated. A simplified statistical (Poisson) model of photon capture provides a biophysical basis for the equation and for its ability to fit a range of light intensities. The exponential equation provides a non-subjective, simultaneous curve-fitting estimate of photosynthetic efficiency (a) that is less ambiguous than subjective methods, which assume that a linear region of the P vs. I curve is readily identifiable. The photosynthetic parameters Ψ and a are used widely in aquatic studies to define photosynthesis at low quantum flux. These parameters are particularly important in estuarine environments, where high suspended-material concentrations and high diffuse-light extinction coefficients are commonly encountered.
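Fitting the saturation parameter (written Ψ here, with Ψ = 1/Ik) can be sketched as a one-dimensional search minimising the squared error of P = Pmax(1 − e^(−ΨI)); ternary search applies because the error is unimodal in Ψ. This is an illustrative sketch with Pmax assumed known, not the paper's simultaneous fitting method:

```python
import math

def p_vs_i(I, p_max, psi):
    """Exponential saturation model: P = Pmax * (1 - exp(-psi * I))."""
    return p_max * (1.0 - math.exp(-psi * I))

def fit_psi(data, p_max, lo=1e-4, hi=1.0, iters=100):
    """Ternary search for psi minimising the sum of squared errors,
    which is unimodal in psi for this model."""
    def sse(psi):
        return sum((p_vs_i(I, p_max, psi) - P) ** 2 for I, P in data)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if sse(m1) < sse(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0
```

The initial slope (photosynthetic efficiency) then follows as a = Pmax·Ψ, without any subjective choice of a "linear region" of the P vs. I curve.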
NASA Astrophysics Data System (ADS)
Magri, Alphonso William
This study was undertaken to develop a nonsurgical breast biopsy technique based on Gd-DTPA Contrast Enhanced Magnetic Resonance (CE-MR) images and F-18-FDG PET/CT dynamic image series. A five-step process was developed to accomplish this. (1) Dynamic PET series were nonrigidly registered to the initial frame using a finite element method (FEM) based registration that requires fiducial skin markers to sample the displacement field between image frames. A commercial FEM package (ANSYS) was used for meshing and FEM calculations. Dynamic PET image series registrations were evaluated using similarity measurements SAVD and NCC. (2) Dynamic CE-MR series were nonrigidly registered to the initial frame using two registration methods: a multi-resolution free-form deformation (FFD) registration driven by normalized mutual information, and a FEM-based registration method. Dynamic CE-MR image series registrations were evaluated using similarity measurements, localization measurements, and qualitative comparison of motion artifacts. FFD registration was found to be superior to FEM-based registration. (3) Nonlinear curve fitting was performed for each voxel of the PET/CT volume of activity versus time, based on a realistic two-compartmental Patlak model. Three parameters for this model were fitted; two of them describe the activity levels in the blood and in the cellular compartment, while the third characterizes the washout rate of F-18-FDG from the cellular compartment. (4) Nonlinear curve fitting was performed for each voxel of the MR volume of signal intensity versus time, based on a realistic two-compartment Brix model. Three parameters for this model were fitted: rate of Gd exiting the compartment, representing the extracellular space of a lesion; rate of Gd exiting a blood compartment; and a parameter that characterizes the strength of signal intensities.
Curve fitting used for PET/CT and MR series was accomplished by application of the Levenberg-Marquardt nonlinear regression algorithm. The best-fit parameters were used to create 3D parametric images. Compartmental modeling evaluation was based on the ability of parameter values to differentiate between tissue types. This evaluation was used on registered and unregistered image series and found that registration improved results. (5) PET and MR parametric images were registered through FEM- and FFD-based registration. Parametric image registration was evaluated using similarity measurements, target registration error, and qualitative comparison. Comparing FFD and FEM-based registration results showed that the FEM method is superior. This five-step process constitutes a novel multifaceted approach to a nonsurgical breast biopsy that successfully executes each step. Comparison of this method to biopsy still needs to be done with a larger set of subject data.
The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting
NASA Astrophysics Data System (ADS)
Tao, Zhang; Li, Zhang; Dingjun, Chen
On the basis of second-order (quadratic) curve fitting, the number and scale of Chinese e-commerce sites are analyzed. An inhibited-growth model is introduced in this paper, and the model parameters are solved with the Matlab software. The validity of the inhibited-growth model is confirmed through a numerical experiment, whose results show that the precision of the model is good.
Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting
NASA Technical Reports Server (NTRS)
Badavi, F. F.; Everhart, Joel L.
1987-01-01
This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine that is based on P. R. Bevington's description of the quadratic expansion of the Chi-Square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of Chi-Square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived, then solved by matrix algebra. A brief description of the code is presented, along with the limited number of changes that are required to customize the program for a particular task. To evaluate the performance of the method and the goodness of nonlinear curve fitting, two typical engineering problems are examined and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.
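For least-squares problems, the quadratic expansion of Chi-Square leads to the Gauss-Newton normal equations: at each step one solves a small linear system for the parameter increments. A compact illustration for the two-parameter model y = a·e^(bx) with unit weights (a sketch, not the report's program):

```python
import math

def gauss_newton_exp(x, y, a, b, iters=50):
    """Minimise chi-square for y ~ a*exp(b*x): at each step, expand
    chi-square to second order (Gauss-Newton Hessian) and solve the
    resulting 2x2 linear system for the parameter increments."""
    for _ in range(iters):
        J = [(math.exp(b * xi), a * xi * math.exp(b * xi)) for xi in x]
        r = [yi - a * math.exp(b * xi) for xi, yi in zip(x, y)]
        # g = J^T r  (equal to -1/2 of the chi-square gradient)
        g = [sum(Ji[k] * ri for Ji, ri in zip(J, r)) for k in range(2)]
        # H = J^T J  (Gauss-Newton approximation to the Hessian)
        H = [[sum(Ji[k] * Ji[l] for Ji in J) for l in range(2)]
             for k in range(2)]
        det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
        da = (H[1][1] * g[0] - H[0][1] * g[1]) / det
        db = (-H[1][0] * g[0] + H[0][0] * g[1]) / det
        a, b = a + da, b + db
    return a, b
```

Retaining only the first-order term of the model expansion is exactly the justification the report discusses: it drops the second-derivative contribution to the Hessian, which vanishes at a perfect fit anyway.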
Videodensitometric Methods for Cardiac Output Measurements
NASA Astrophysics Data System (ADS)
Mischi, Massimo; Kalker, Ton; Korsten, Erik
2003-12-01
Cardiac output is often measured by indicator dilution techniques, usually based on dye or cold saline injections. Developments of more stable ultrasound contrast agents (UCA) are leading to new noninvasive indicator dilution methods. However, several problems concerning the interpretation of dilution curves as detected by ultrasound transducers have arisen. This paper presents a method for blood flow measurements based on UCA dilution. Dilution curves are determined by real-time densitometric analysis of the video output of an ultrasound scanner and are automatically fitted by the Local Density Random Walk model. A new fitting algorithm based on multiple linear regression is developed. Calibration, that is, the relation between videodensity and UCA concentration, is modelled by in vitro experimentation. The flow measurement system is validated by in vitro perfusion of SonoVue contrast agent. The results show an accurate dilution curve fit and flow estimation with determination coefficient larger than 0.95 and 0.99, respectively.
NASA Technical Reports Server (NTRS)
Johnson, T. J.; Harding, A. K.; Venter, C.
2012-01-01
Pulsed gamma rays have been detected with the Fermi Large Area Telescope (LAT) from more than 20 millisecond pulsars (MSPs), some of which were discovered in radio observations of bright, unassociated LAT sources. We have fit the radio and gamma-ray light curves of 19 LAT-detected MSPs in the context of geometric, outer-magnetospheric emission models assuming the retarded vacuum dipole magnetic field using a Markov chain Monte Carlo maximum likelihood technique. We find that, in many cases, the models are able to reproduce the observed light curves well and provide constraints on the viewing geometries that are in agreement with those from radio polarization measurements. Additionally, for some MSPs we constrain the altitudes of both the gamma-ray and radio emission regions. The best-fit magnetic inclination angles are found to cover a broader range than those of non-recycled gamma-ray pulsars.
Batsoulis, A N; Nacos, M K; Pappas, C S; Tarantilis, P A; Mavromoustakos, T; Polissiou, M G
2004-02-01
Hemicellulose samples were isolated from kenaf (Hibiscus cannabinus L.). Hemicellulosic fractions usually contain a variable percentage of uronic acids. The uronic acid content (expressed in polygalacturonic acid) of the isolated hemicelluloses was determined by diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method. A linear relationship between uronic acids content and the sum of the peak areas at 1745, 1715, and 1600 cm(-1) was established with a high correlation coefficient (0.98). The deconvolution analysis using the curve-fitting method allowed the elimination of spectral interferences from other cell wall components. The above method was compared with an established spectrophotometric method and was found equivalent for accuracy and repeatability (t-test, F-test). This method is applicable in analysis of natural or synthetic mixtures and/or crude substances. The proposed method is simple, rapid, and nondestructive for the samples.
Observational evidence of dust evolution in galactic extinction curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cecchi-Pestellini, Cesare; Casu, Silvia; Mulas, Giacomo
Although structural and optical properties of hydrogenated amorphous carbons are known to respond to varying physical conditions, most conventional extinction models are basically curve fits with modest predictive power. We compare an evolutionary model of the physical properties of carbonaceous grain mantles with their determination by homogeneously fitting observationally derived Galactic extinction curves with the same physically well-defined dust model. We find that a large sample of observed Galactic extinction curves is compatible with the evolutionary scenario underlying such a model, requiring physical conditions fully consistent with standard density, temperature, radiation field intensity, and average age of diffuse interstellar clouds. Hence, through the study of interstellar extinction we may, in principle, understand the evolutionary history of the diffuse interstellar clouds.
UTM, a universal simulator for lightcurves of transiting systems
NASA Astrophysics Data System (ADS)
Deeg, Hans
2009-02-01
The Universal Transit Modeller (UTM) is a light-curve simulator for all kinds of transiting or eclipsing configurations between arbitrary numbers of several types of objects, which may be stars, planets, planetary moons, and planetary rings. Applications of UTM to date have been mainly in the generation of light-curves for the testing of detection algorithms. For the preparation of such tests for the Corot Mission, a special version has been used to generate multicolour light-curves in Corot's passbands. A separate fitting program, UFIT (Universal Fitter), is part of the UTM distribution and may be used to derive best fits to light-curves for any set of continuously variable parameters. UTM/UFIT is written in IDL code and its source is released in the public domain under the GNU General Public License.
The effect of semirigid dressings on below-knee amputations.
MacLean, N; Fick, G H
1994-07-01
The effect of using semirigid dressings (SRDs) on the residual limb of individuals who have had below-knee amputations as a consequence of peripheral vascular disease was investigated, with the primary question being: Does the time to readiness for prosthetic fitting for patients treated with the SRDs differ from that of patients treated with soft dressings? Forty patients entered the study and were alternately assigned to one of two groups. Nineteen patients were assigned to the SRD group, and 21 patients were assigned to the soft dressing group. The time from surgery to readiness for prosthetic fitting was recorded for each patient. Kaplan-Meier survival curves were generated for each group, and the results were analyzed with the log-rank test. There was a difference between the two curves, and an examination of the curves suggests that the expected time to readiness for prosthetic fitting for patients treated with the SRDs would be less than half that of patients treated with soft dressings. The results suggest that a patient may be ready for prosthetic fitting sooner if treated with SRDs instead of soft dressings.
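The Kaplan-Meier curves compared in the study can be sketched with a minimal estimator; the times-to-readiness and censoring flags below are hypothetical, not the trial's data.

```python
# Minimal Kaplan-Meier estimator for time-to-event data with right
# censoring (distinct event times assumed for simplicity).
def kaplan_meier(times, events):
    """times: days to readiness; events: 1 = reached readiness, 0 = censored."""
    pairs = sorted(zip(times, events))
    at_risk = len(pairs)
    surv, curve = 1.0, []
    for t, e in pairs:
        if e:                            # event at time t: step the curve down
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1                     # event or censored: leaves the risk set
    return curve

srd = kaplan_meier([20, 25, 30, 35, 40], [1, 1, 1, 0, 1])
print(srd)
```

A second curve built the same way for the soft-dressing group would then feed a log-rank comparison, as in the study.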
NASA Astrophysics Data System (ADS)
Parente, Mario; Makarewicz, Heather D.; Bishop, Janice L.
2011-04-01
This study advances curve-fitting modeling of absorption bands of reflectance spectra and applies this new model to spectra of Martian meteorites ALH 84001 and EETA 79001 and data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). This study also details a recently introduced automated parameter initialization technique. We assess the performance of this automated procedure by comparing it to the currently available initialization method and perform a sensitivity analysis of the fit results to variation in initial guesses. We explore the issues related to the removal of the continuum, offer guidelines for continuum removal when modeling the absorptions and explore different continuum-removal techniques. We further evaluate the suitability of curve fitting techniques using Gaussians/Modified Gaussians to decompose spectra into individual end-member bands. We show that nonlinear least squares techniques such as the Levenberg-Marquardt algorithm achieve comparable results to the MGM model (Sunshine and Pieters, 1993; Sunshine et al., 1990) for meteorite spectra. Finally we use Gaussian modeling to fit CRISM spectra of pyroxene and olivine-rich terrains on Mars. Analysis of CRISM spectra of two regions shows that the pyroxene-dominated rock spectra measured at Juventae Chasma were modeled well with low-Ca pyroxene, while the pyroxene-rich spectra acquired at Libya Montes required both low-Ca and high-Ca pyroxene for a good fit.
Hybrid active contour model for inhomogeneous image segmentation with background estimation
NASA Astrophysics Data System (ADS)
Sun, Kaiqiong; Li, Yaqin; Zeng, Shan; Wang, Jun
2018-03-01
This paper proposes a hybrid active contour model for inhomogeneous image segmentation. The data term of the energy function in the active contour consists of a global region fitting term in a difference image and a local region fitting term in the original image. The difference image is obtained by subtracting the background from the original image. The background image is dynamically estimated from a linear filtered result of the original image on the basis of the varying curve locations during the active contour evolution process. As in existing local models, fitting the image to local region information makes the proposed model robust against an inhomogeneous background and maintains the accuracy of the segmentation result. Furthermore, fitting the difference image to the global region information makes the proposed model robust against the initial contour location, unlike existing local models. Experimental results show that the proposed model can obtain improved segmentation results compared with related methods in terms of both segmentation accuracy and initial contour sensitivity.
Derivation of error sources for experimentally derived heliostat shapes
NASA Astrophysics Data System (ADS)
Cumpston, Jeff; Coventry, Joe
2017-06-01
Data gathered using photogrammetry that represents the surface and structure of a heliostat mirror panel is investigated in detail. A curve-fitting approach that allows the retrieval of four distinct mirror error components, while prioritizing the best fit possible to paraboloidal terms in the curve fitting equation, is presented. The angular errors associated with each of the four surfaces are calculated, and the relative magnitude for each of them is given. It is found that in this case, the mirror had a significant structural twist, and an estimate of the improvement to the mirror surface quality in the case of no twist was made.
de Oliveira, Thales Leandro Coutinho; Soares, Rodrigo de Araújo; Piccoli, Roberta Hilsdorf
2013-03-01
The antimicrobial effect of oregano (Origanum vulgare L.) and lemongrass (Cymbopogon citratus (DC.) Stapf.) essential oils (EOs) against Salmonella enterica serotype Enteritidis was evaluated in in vitro experiments, and inoculated in ground bovine meat during refrigerated storage (4±2 °C) for 6 days. The Weibull model was tested to fit survival/inactivation bacterial curves (estimating the p and δ parameters). The minimum inhibitory concentration (MIC) value for both EOs on S. Enteritidis was 3.90 μl/ml. The EO concentrations applied in the ground beef were 3.90, 7.80 and 15.60 μl/g, based on MIC levels and possible activity reduction by food constituents. Both EOs, at all tested levels, showed antimicrobial effects, with microbial populations decreasing (p≤0.05) over storage time. Evaluation of the fit-quality parameters (RSS and RSE) showed that Weibull models are able to describe the inactivation curves of EOs against S. Enteritidis. The application of EOs in processed meats can be used to control pathogens during refrigerated shelf-life. Copyright © 2012 Elsevier Ltd. All rights reserved.
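The Weibull inactivation model named in the abstract, log10(N/N0) = -(t/δ)^p, can be fit by nonlinear least squares; a sketch with scipy and purely illustrative inactivation data (the δ and p values recovered are not the study's):

```python
import numpy as np
from scipy.optimize import curve_fit

# Weibull inactivation model: delta is the time to the first decimal
# reduction, p is the shape parameter (p < 1: concave-up tailing curve).
def weibull(t, delta, p):
    return -(t / delta) ** p

t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])              # days (illustrative)
logred = np.array([-0.3, -0.7, -1.2, -1.9, -2.4, -3.0, -3.5])  # log10(N/N0)

# Bounds keep delta and p positive during the fit.
(delta, p), _ = curve_fit(weibull, t, logred, p0=(1.0, 1.0),
                          bounds=([0.01, 0.1], [10.0, 5.0]))
print(delta, p)
```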
Brouwer, Darren H
2013-01-01
An algorithm is presented for solving the structures of silicate network materials such as zeolites or layered silicates from solid-state (29)Si double-quantum NMR data for situations in which the crystallographic space group is not known. The algorithm is explained and illustrated in detail using a hypothetical two-dimensional network structure as a working example. The algorithm involves an atom-by-atom structure building process in which candidate partial structures are evaluated according to their agreement with Si-O-Si connectivity information, symmetry restraints, and fits to (29)Si double quantum NMR curves followed by minimization of a cost function that incorporates connectivity, symmetry, and quality of fit to the double quantum curves. The two-dimensional network material is successfully reconstructed from hypothetical NMR data that can be reasonably expected to be obtained for real samples. This advance in "NMR crystallography" is expected to be important for structure determination of partially ordered silicate materials for which diffraction provides very limited structural information. Copyright © 2013 Elsevier Inc. All rights reserved.
Kukke, Sahana N; Paine, Rainer W; Chao, Chi-Chao; de Campos, Ana C; Hallett, Mark
2014-06-01
The purpose of this study is to develop a method to reliably characterize multiple features of the corticospinal system in a more efficient manner than typically done in transcranial magnetic stimulation studies. Forty transcranial magnetic stimulation pulses of varying intensity were given over the first dorsal interosseous motor hot spot in 10 healthy adults. The first dorsal interosseous motor-evoked potential size was recorded during rest and activation to create recruitment curves. The Boltzmann sigmoidal function was fit to the data, and parameters relating to maximal motor-evoked potential size, curve slope, and stimulus intensity leading to half-maximal motor-evoked potential size were computed from the curve fit. Good to excellent test-retest reliability was found for all corticospinal parameters at rest and during activation with 40 transcranial magnetic stimulation pulses. Through the use of curve fitting, important features of the corticospinal system can be determined with fewer stimuli than typically used for the same information. Determining the recruitment curve provides a basis to understand the state of the corticospinal system and select subject-specific parameters for transcranial magnetic stimulation testing quickly and without unnecessary exposure to magnetic stimulation. This method can be useful in individuals who have difficulty in maintaining stillness, including children and patients with motor disorders.
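The Boltzmann sigmoidal recruitment-curve fit can be sketched as follows. The stimulus intensities, responses, and noise level are synthetic, and the parameterization (plateau, half-maximal intensity, inverse slope) is the standard Boltzmann form rather than the study's exact code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Boltzmann sigmoid: mep_max is the plateau (maximal MEP size), i50 the
# intensity at half-maximal response, k the inverse slope of the curve.
def boltzmann(I, mep_max, i50, k):
    return mep_max / (1.0 + np.exp((i50 - I) / k))

intensity = np.linspace(30.0, 90.0, 13)              # % stimulator output
rng = np.random.default_rng(0)
mep = boltzmann(intensity, 2.0, 60.0, 5.0) + rng.normal(0.0, 0.02, intensity.size)

(mep_max, i50, k), _ = curve_fit(boltzmann, intensity, mep, p0=(1.0, 55.0, 8.0))
print(mep_max, i50, k)
```

All three corticospinal parameters of interest fall directly out of one fit, which is why far fewer pulses suffice than with point-by-point averaging.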
Application of separable parameter space techniques to multi-tracer PET compartment modeling.
Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J
2016-02-07
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
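The separable-parameter-space idea can be illustrated on a toy two-exponential model: the amplitudes enter linearly and are solved exactly by linear least squares for any candidate rate pair, so the nonlinear search (here a coarse exhaustive grid, echoing the paper's exhaustive-search fits) runs over the rates alone. The model and data are illustrative, not a PET compartment model.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 50)
y = 3.0 * np.exp(-0.4 * t) + 1.0 * np.exp(-2.0 * t)   # synthetic two-rate curve

# For fixed rates (k1, k2) the best amplitudes are a linear least-squares
# solve; the returned residual sum of squares is the "projected" objective.
def projected_rss(k1, k2):
    A = np.column_stack([np.exp(-k1 * t), np.exp(-k2 * t)])
    amps, _res, _rank, _sv = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ amps
    return float(resid @ resid)

# Exhaustive grid search over the (now 2-D) nonlinear parameter space.
grid = np.arange(0.1, 3.01, 0.1)
best = min((projected_rss(k1, k2), k1, k2)
           for k1 in grid for k2 in grid if k1 < k2)
print(best)   # smallest RSS near (k1, k2) = (0.4, 2.0)
```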
Formulation of the Multi-Hit Model With a Non-Poisson Distribution of Hits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vassiliev, Oleg N., E-mail: Oleg.Vassiliev@albertahealthservices.ca
2012-07-15
Purpose: We proposed a formulation of the multi-hit single-target model in which the Poisson distribution of hits was replaced by a combination of two distributions: one for the number of particles entering the target and one for the number of hits a particle entering the target produces. Such an approach reflects the fact that radiation damage is a result of two different random processes: particle emission by a radiation source and interaction of particles with matter inside the target. Methods and Materials: The Poisson distribution is well justified for the first of the two processes. The second distribution depends on how a hit is defined. To test our approach, we assumed that the second distribution was also a Poisson distribution. The two distributions combined resulted in a non-Poisson distribution. We tested the proposed model by comparing it with previously reported data for DNA single- and double-strand breaks induced by protons and electrons, for survival of a range of cell lines, and variation of the initial slopes of survival curves with radiation quality for heavy-ion beams. Results: Analysis of cell survival equations for this new model showed that they had realistic properties overall, such as the initial and high-dose slopes of survival curves, the shoulder, and relative biological effectiveness (RBE). In most cases tested, a better fit of survival curves was achieved with the new model than with the linear-quadratic model. The results also suggested that the proposed approach may extend the multi-hit model beyond its traditional role in analysis of survival curves to predicting effects of radiation quality and analysis of DNA strand breaks. Conclusions: Our model, although conceptually simple, performed well in all tests. The model was able to consistently fit data for both cell survival and DNA single- and double-strand breaks. It correctly predicted the dependence of radiation effects on parameters of radiation quality.
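For reference, the classical multi-hit single-target survival function with Poisson-distributed hits, the baseline whose hit distribution the proposed model replaces, is S(D) = exp(-D/D0) · Σ_{k=0}^{n-1} (D/D0)^k / k!. A sketch with illustrative values of n and D0:

```python
import math

# Classical multi-hit, single-target survival: the target survives if it
# accumulates fewer than n_hits hits, with the hit count Poisson-distributed
# with mean dose/d0. n_hits and d0 below are illustrative.
def survival(dose, d0, n_hits):
    m = dose / d0
    return math.exp(-m) * sum(m**k / math.factorial(k) for k in range(n_hits))

d0, n = 1.5, 3
curve = [survival(d, d0, n) for d in (0.0, 1.0, 3.0, 6.0)]
print(curve)   # unit survival at zero dose, then a shouldered decline
```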
NASA Astrophysics Data System (ADS)
Valença, J. V. B.; Silveira, I. S.; Silva, A. C. A.; Dantas, N. O.; Antonio, P. L.; Caldas, L. V. E.; d'Errico, F.; Souza, S. O.
2017-11-01
The OSL characteristics of three different borate glass matrices containing magnesia (LMB), quicklime (LCB) or potassium carbonate (LKB) were examined. Five different formulations for each composition were produced using a melt-quenching method and analyzed in terms of both dose-response curves and OSL shape decay. The samples were irradiated using a 90Sr/90Y beta source with doses up to 30 Gy. Dose-response curves were plotted using the initial OSL intensity as the chosen parameter. The OSL analysis showed that LKB glasses are the most sensitive to beta irradiation. For the most sensitive LKB composition, the irradiation process was also done using a 60Co gamma source in a dose range from 200 to 800 Gy. In all cases, no saturation was observed. A fitting process using a three-term exponential function was performed for the most sensitive formulations of each composition, which suggested a similar behavior in the OSL decay.
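The three-term exponential fit mentioned for the OSL decay can be sketched as below; the amplitudes, decay constants, and time axis are synthetic, not the measured glass data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Three-term exponential OSL decay:
# I(t) = A1*exp(-t/tau1) + A2*exp(-t/tau2) + A3*exp(-t/tau3)
def osl(t, a1, t1, a2, t2, a3, t3):
    return a1 * np.exp(-t / t1) + a2 * np.exp(-t / t2) + a3 * np.exp(-t / t3)

t = np.linspace(0.05, 60.0, 120)
signal = osl(t, 5.0, 0.5, 2.0, 3.0, 1.0, 20.0)   # noiseless synthetic decay

# Starting guesses near the true values; bounds keep all parameters positive
# (multi-exponential fits are ill-conditioned without reasonable guesses).
p0 = (4.0, 0.4, 2.5, 2.5, 0.8, 15.0)
popt, _ = curve_fit(osl, t, signal, p0=p0, bounds=(0.0, np.inf))
print(popt)
```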
Mattucci, Stephen F E; Cronin, Duane S
2015-01-01
Experimental testing on cervical spine ligaments provides important data for advanced numerical modeling and injury prediction; however, accurate characterization of individual ligament response and determination of average mechanical properties for specific ligaments has not been adequately addressed in the literature. Existing methods are limited by a number of arbitrary choices made during the curve fits that often misrepresent the characteristic shape response of the ligaments, which is important for incorporation into numerical models to produce a biofidelic response. A method was developed to represent the mechanical properties of individual ligaments using a piece-wise curve fit with first-derivative continuity between adjacent regions. The method was applied to published data for cervical spine ligaments and preserved the shape response (toe, linear, and traumatic regions) up to failure, for strain rates of 0.5 s(-1), 20 s(-1), and 150-250 s(-1), to determine the average force-displacement curves. Individual ligament coefficients of determination were 0.989 to 1.000, demonstrating excellent fit. This study produced a novel method in which a set of experimental ligament material property data exhibiting scatter was fit using a characteristic curve approach with a toe, linear, and traumatic region, as often observed in ligaments and tendons, and could be applied to other biological material data with a similar characteristic shape. The resultant average cervical spine ligament curves provide an accurate representation of the raw test data and the expected material property effects corresponding to varying deformation rates. Copyright © 2014 Elsevier Ltd. All rights reserved.
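The piece-wise fit with first-derivative continuity can be illustrated with a minimal two-region (toe plus linear) curve: a quadratic toe f = c·x² joined at x0 to a line whose slope 2·c·x0 matches the toe's derivative. The coefficients are hypothetical, and the real method's traumatic region is omitted here.

```python
import numpy as np

# Piece-wise toe + linear force-displacement curve with C1 continuity:
# value and first derivative both match at the transition x0.
def ligament_force(x, c, x0):
    x = np.asarray(x, dtype=float)
    toe = c * x**2                                   # toe region, x < x0
    linear = 2.0 * c * x0 * (x - x0) + c * x0**2     # slope 2*c*x0, value c*x0^2
    return np.where(x < x0, toe, linear)

c, x0 = 50.0, 0.8    # hypothetical toe stiffness and transition displacement
f = ligament_force([0.0, 0.4, 0.8, 1.2], c, x0)
print(f)
```

Fitting such a form to scattered test data (e.g. least squares over c and x0) preserves the characteristic toe-then-linear shape instead of smearing it, which is the point of the method.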
Forgetting Curves: Implications for Connectionist Models
ERIC Educational Resources Information Center
Sikstrom, Sverker
2002-01-01
Forgetting in long-term memory, as measured in a recall or a recognition test, is faster for items encoded more recently than for items encoded earlier. Data on forgetting curves fit a power function well. In contrast, many connectionist models predict either exponential decay or completely flat forgetting curves. This paper suggests a…
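The power-versus-exponential contrast the paper draws is easy to see numerically: a power function is linear in log-log coordinates while an exponential is linear in semi-log coordinates, so fitting both forms to power-law recall data exposes the mismatch. The retention intervals and recall values here are synthetic and noiseless.

```python
import numpy as np

t = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])   # retention intervals
recall = 0.9 * t ** -0.3                                # power-law forgetting

# Power fit:       log R = log a - b * log t   (linear in log-log space)
slope_log, icept_log = np.polyfit(np.log(t), np.log(recall), 1)
# Exponential fit: log R = log a - c * t       (linear in semi-log space)
slope_t, icept_t = np.polyfit(t, np.log(recall), 1)

rss_pow = float(np.sum((np.log(recall) - (slope_log * np.log(t) + icept_log)) ** 2))
rss_exp = float(np.sum((np.log(recall) - (slope_t * t + icept_t)) ** 2))
print(rss_pow, rss_exp)   # the power form fits essentially exactly
```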
Nonlinear Growth Models in Mplus and SAS
ERIC Educational Resources Information Center
Grimm, Kevin J.; Ram, Nilam
2009-01-01
Nonlinear growth curves or growth curves that follow a specified nonlinear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this article we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the nonlinear…
FTOOLS: A FITS Data Processing and Analysis Software Package
NASA Astrophysics Data System (ADS)
Blackburn, J. K.
FTOOLS, a highly modular collection of over 110 utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Science Archive Research Center) at NASA's Goddard Space Flight Center. Each utility performs a single simple task such as presentation of file contents, extraction of specific rows or columns, appending or merging tables, binning values in a column or selecting subsets of rows based on a boolean expression. Individual utilities can easily be chained together in scripts to achieve more complex operations such as the generation and displaying of spectra or light curves. The collection of utilities provides both generic processing and analysis utilities and utilities specific to high energy astrophysics data sets used for the ASCA, ROSAT, GRO, and XTE missions. A core set of FTOOLS providing support for generic FITS data processing, FITS image analysis and timing analysis can easily be split out of the full software package for users not needing the high energy astrophysics mission utilities. The FTOOLS software package is designed to be both compatible with IRAF and completely stand alone in a UNIX or VMS environment. The user interface is controlled by standard IRAF parameter files. The package is self documenting through the IRAF help facility and a stand alone help task. Software is written in ANSI C and Fortran to provide portability across most computer systems. The data format dependencies between hardware platforms are isolated through the FITSIO library package.
On the Early-Time Excess Emission in Hydrogen-Poor Superluminous Supernovae
NASA Technical Reports Server (NTRS)
Vreeswijk, Paul M.; Leloudas, Giorgos; Gal-Yam, Avishay; De Cia, Annalisa; Perley, Daniel A.; Quimby, Robert M.; Waldman, Roni; Sullivan, Mark; Yan, Lin; Ofek, Eran O.;
2017-01-01
We present the light curves of the hydrogen-poor super-luminous supernovae (SLSNe I) PTF 12dam and iPTF 13dcc, discovered by the (intermediate) Palomar Transient Factory. Both show excess emission at early times and a slowly declining light curve at late times. The early bump in PTF 12dam is very similar in duration (approximately 10 days) and brightness relative to the main peak (2-3 mag fainter) compared to that observed in other SLSNe I. In contrast, the long-duration (greater than 30 days) early excess emission in iPTF 13dcc, whose brightness competes with that of the main peak, appears to be of a different nature. We construct bolometric light curves for both targets, and fit a variety of light-curve models to both the early bump and main peak in an attempt to understand the nature of these explosions. Even though the slope of the late-time decline in the light curves of both SLSNe is suggestively close to that expected from the radioactive decay of 56Ni and 56Co, the amount of nickel required to power the full light curves is too large considering the estimated ejecta mass. The magnetar model including an increasing escape fraction provides a reasonable description of the PTF 12dam observations. However, neither the basic nor the double-peaked magnetar model is capable of reproducing the light curve of iPTF 13dcc. A model combining a shock breakout in an extended envelope with late-time magnetar energy injection provides a reasonable fit to the iPTF 13dcc observations. Finally, we find that the light curves of both PTF 12dam and iPTF 13dcc can be adequately fit with the model involving interaction with the circumstellar medium.
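The late-time comparison rests on a simple number: a fully trapped 56Co-powered tail, with an e-folding time of about 111.3 days, declines at 2.5/(ln 10 × 111.3) ≈ 0.0098 mag per day. A one-line check of that slope:

```python
import math

# Magnitude decline rate of a fully trapped 56Co radioactive tail:
# L ~ exp(-t/tau_Co), so m = -2.5*log10(L) declines linearly in time.
tau_co = 111.3                                 # 56Co e-folding time, days
slope = 2.5 / (math.log(10.0) * tau_co)        # mag per day
print(slope)
```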
NASA Astrophysics Data System (ADS)
Ghnimi, Thouraya; Hassini, Lamine; Bagane, Mohamed
2016-12-01
The aim of this work is to determine the desorption isotherms and the drying kinetics of bay laurel leaves (Laurus nobilis L.). The desorption isotherms were performed at three temperature levels: 50, 60 and 70 °C and at water activity ranging from 0.057 to 0.88 using the static gravimetric method. Five sorption models were used to fit the desorption experimental isotherm data. It was found that the Kuhn model offers the best fit of the experimental moisture isotherms in the investigated ranges of temperature and water activity. The net isosteric heat of water desorption was evaluated using the Clausius-Clapeyron equation and was then best correlated to equilibrium moisture content by the empirical Tsami equation. Thin-layer convective drying curves of bay laurel leaves were obtained for temperatures of 45, 50, 60 and 70 °C, relative humidity of 5, 15, 30 and 45 % and air velocities of 1, 1.5 and 2 m/s. A nonlinear Levenberg-Marquardt regression procedure was used to fit the drying curves with five semi-empirical mathematical models available in the literature; the R2 and χ2 statistics were used to evaluate the goodness of fit of the models to the data. Based on the experimental drying curves, the drying characteristic curve (DCC) was established and fitted with a third-degree polynomial function. It was found that the Midilli-Kucuk model was the best semi-empirical model describing the thin-layer drying kinetics of bay laurel leaves. The effective moisture diffusivity and activation energy of the bay laurel leaves were also identified.
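The Clausius-Clapeyron step can be sketched directly: at fixed equilibrium moisture content, ln(aw) is linear in 1/T with slope -qst/R, so the net isosteric heat falls out of a linear fit. The water activities below are synthetic, generated from an assumed qst of 5 kJ/mol rather than measured sorption data.

```python
import numpy as np

# Clausius-Clapeyron: ln(aw) = -qst/(R*T) + const at fixed moisture content,
# so a linear fit of ln(aw) against 1/T recovers the net isosteric heat.
R = 8.314                                      # J/(mol K)
T = np.array([323.15, 333.15, 343.15])         # 50, 60, 70 degC in kelvin
qst_true = 5000.0                              # J/mol (assumed, for the sketch)
ln_aw = -qst_true / (R * T) + 0.5              # synthetic ln(water activity)

slope, _ = np.polyfit(1.0 / T, ln_aw, 1)
qst = -R * slope                               # recovered isosteric heat, J/mol
print(qst)
```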
The relationship between offspring size and fitness: integrating theory and empiricism.
Rollinson, Njal; Hutchings, Jeffrey A
2013-02-01
How parents divide the energy available for reproduction between size and number of offspring has a profound effect on parental reproductive success. Theory indicates that the relationship between offspring size and offspring fitness is of fundamental importance to the evolution of parental reproductive strategies: this relationship predicts the optimal division of resources between size and number of offspring, it describes the fitness consequences for parents that deviate from optimality, and its shape can predict the most viable type of investment strategy in a given environment (e.g., conservative vs. diversified bet-hedging). Many previous attempts to estimate this relationship and the corresponding value of optimal offspring size have been frustrated by a lack of integration between theory and empiricism. In the present study, we draw from C. Smith and S. Fretwell's classic model to explain how a sound estimate of the offspring size–fitness relationship can be derived with empirical data. We evaluate what measures of fitness can be used to model the offspring size–fitness curve and optimal size, as well as which statistical models should and should not be used to estimate offspring size–fitness relationships. To construct the fitness curve, we recommend that offspring fitness be measured as survival up to the age at which the instantaneous rate of offspring mortality becomes random with respect to initial investment. Parental fitness is then expressed in ecologically meaningful, theoretically defensible, and broadly comparable units: the number of offspring surviving to independence. Although logistic and asymptotic regression have been widely used to estimate offspring size–fitness relationships, the former provides relatively unreliable estimates of optimal size when offspring survival and sample sizes are low, and the latter is unreliable under all conditions. 
We recommend that the Weibull-1 model be used to estimate this curve because it provides modest improvements in prediction accuracy under experimentally relevant conditions.
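The Smith-Fretwell logic above can be sketched numerically. The snippet assumes a generic Weibull-type size–survival curve and invented data; the exact Weibull-1 parameterization used by the authors is not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_survival(x, b, c):
    """Generic Weibull-type offspring size -> survival-to-independence curve."""
    return 1.0 - np.exp(-(x / b)**c)

# Hypothetical data: offspring size vs. proportion surviving to independence.
size = np.linspace(0.2, 5.0, 40)
surv = weibull_survival(size, 1.5, 2.0)

popt, _ = curve_fit(weibull_survival, size, surv, p0=(1.0, 1.0),
                    bounds=([0.01, 0.1], [10.0, 10.0]))

# Smith-Fretwell optimum: with a fixed energy budget, parents producing offspring
# of size x make 1/x times as many, so parental fitness is survival(x)/x.
grid = np.linspace(0.2, 5.0, 2000)
opt_size = grid[np.argmax(weibull_survival(grid, *popt) / grid)]
```

The optimum falls where the marginal gain in offspring survival no longer compensates for producing fewer offspring.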
On the Methodology of Studying Aging in Humans
1961-01-01
prediction of death rates. The relation of death rate to age has been extensively studied for over 100 years. As an illustration recent death rates for … log death rates appear to be linear, the simpler Gompertz curve fits closely. While on this subject of the Makeham-Gompertz function, it should be … Makeham-Gompertz curve to 5-year age-specific death rates. Each fitting provided estimates of the parameters α, β, and log c for each of the five-year
Statistically generated weighted curve fit of residual functions for modal analysis of structures
NASA Technical Reports Server (NTRS)
Bookout, P. S.
1995-01-01
A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second-order polynomial curve fit is required to obtain the residual flexibility term. A weighting function of the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
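The variance-based weighting can be illustrated as follows. The paper's exact weighting scheme is not reproduced; the local scatter among neighboring points is used here as a plausible stand-in, on a synthetic residual function with one ragged region:

```python
import numpy as np

def weighted_quadratic_fit(freq, resid):
    """Second-order polynomial fit with weights from local scatter between
    neighboring data points (a sketch of the abstract's weighting idea)."""
    local_var = np.empty_like(resid)
    local_var[1:-1] = np.var([resid[:-2], resid[1:-1], resid[2:]], axis=0)
    local_var[0], local_var[-1] = local_var[1], local_var[-2]
    w = 1.0 / np.sqrt(local_var + 1e-12)      # np.polyfit expects 1/sigma-style weights
    return np.polyfit(freq, resid, deg=2, w=w)

# Synthetic residual function: flat at low frequency, slight upward curvature
# higher up, with a ragged (noisy) region in the middle (invented values).
rng = np.random.default_rng(0)
f = np.linspace(1.0, 50.0, 200)
resid = 2.0 + 0.001 * f**2
resid[80:120] += rng.normal(0.0, 0.5, 40)     # ragged region gets down-weighted
coef = weighted_quadratic_fit(f, resid)       # [c2, c1, c0], highest power first
```

The down-weighting lets the fit recover the underlying flat-plus-curvature shape despite the ragged span.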
Waveform fitting and geometry analysis for full-waveform lidar feature extraction
NASA Astrophysics Data System (ADS)
Tsai, Fuan; Lai, Jhe-Syuan; Cheng, Yi-Hsiu
2016-10-01
This paper presents a systematic approach that integrates spline curve fitting and geometry analysis to extract full-waveform LiDAR features for land-cover classification. The cubic smoothing spline algorithm is used to fit the waveform curve of the received LiDAR signals. After that, the local peak locations of the waveform curve are detected using a second derivative method. According to the detected local peak locations, commonly used full-waveform features such as full width at half maximum (FWHM) and amplitude can then be obtained. In addition, the number of peaks, time difference between the first and last peaks, and the average amplitude are also considered as features of LiDAR waveforms with multiple returns. Based on the waveform geometry, dynamic time-warping (DTW) is applied to measure the waveform similarity. The sum of the absolute amplitude differences that remain after time-warping can be used as a similarity feature in a classification procedure. An airborne full-waveform LiDAR data set was used to test the performance of the developed feature extraction method for land-cover classification. Experimental results indicate that the developed spline curve-fitting algorithm and geometry analysis can extract helpful full-waveform LiDAR features to produce better land-cover classification than conventional LiDAR data and feature extraction methods. In particular, the multiple-return features and the dynamic time-warping index can improve the classification results significantly.
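A sketch of the spline-fit-then-detect pipeline on a synthetic two-return waveform. scipy's smoothing spline and a grid-based peak search stand in for the paper's implementation (which uses second-derivative roots), and FWHM is read off the half-maximum crossings:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.signal import find_peaks

# Synthetic two-return waveform: two Gaussian echoes plus noise (invented values).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 400)                  # sample time, ns
wave = (np.exp(-0.5 * ((t - 30.0) / 4.0)**2)
        + 0.6 * np.exp(-0.5 * ((t - 65.0) / 5.0)**2)
        + rng.normal(0.0, 0.02, t.size))

# Cubic smoothing spline fit of the received waveform.
spl = UnivariateSpline(t, wave, k=3, s=t.size * 0.02**2)
smooth = spl(t)

# Local peaks of the fitted curve, then FWHM of the first return from the
# half-maximum crossings on either side of its peak.
peaks, _ = find_peaks(smooth, height=0.1, distance=20)
first = peaks[0]
half = smooth[first] / 2.0
left = np.argmax(smooth[:first] >= half)
right = first + np.argmax(smooth[first:] <= half)
fwhm = t[right] - t[left]
```

The peak count, inter-peak time difference, and amplitudes then follow directly from `peaks` and `smooth`.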
NASA Astrophysics Data System (ADS)
Julianto, E. A.; Suntoro, W. A.; Dewi, W. S.; Partoyo
2018-03-01
Climate change has been reported to exacerbate land resource degradation, including soil fertility decline. Appropriate validation of soil fertility evaluation models could reduce the risk of climate change effects on plant cultivation. This study aims to assess the validity of soil fertility evaluation models using a graphical approach. The models evaluated were the Indonesian Soil Research Center (PPT) version, the FAO Unesco version, and the Kyuma version. Each model was then correlated with rice production (dry grain weight/GKP). The goodness of fit of each model, together with the regression coefficient (R2), was tested to evaluate its quality and validity. This research used the Eviews 9 programme with a graphical approach. The results yield three curves: actual, fitted, and residual. If the actual and fitted curves are far apart or irregular, the quality of the model is poor, or many other factors are still missing from the model (large residual); conversely, if the actual and fitted curves show exactly the same shape, all relevant factors have been included in the model. Modification of the standard soil fertility evaluation models can improve the quality and validity of a model.
Physical fitness reference standards in fibromyalgia: The al-Ándalus project.
Álvarez-Gallardo, I C; Carbonell-Baeza, A; Segura-Jiménez, V; Soriano-Maldonado, A; Intemann, T; Aparicio, V A; Estévez-López, F; Camiletti-Moirón, D; Herrador-Colmenero, M; Ruiz, J R; Delgado-Fernández, M; Ortega, F B
2017-11-01
We aimed (1) to report age-specific physical fitness levels in people with fibromyalgia from a representative sample from Andalusia; and (2) to compare the fitness levels of people with fibromyalgia with non-fibromyalgia controls. This cross-sectional study included 468 (21 men) patients with fibromyalgia and 360 (55 men) controls. The fibromyalgia sample was geographically representative of southern Spain. Physical fitness was assessed with the Senior Fitness Test battery plus the handgrip test. We applied the Generalized Additive Model for Location, Scale and Shape to calculate percentile curves for women and fitted mean curves using linear regression for men. Our results show that people with fibromyalgia performed worse in all fitness tests than controls (P < 0.001) across all age ranges (P < 0.001). This study provides a comprehensive description of age-specific physical fitness levels among patients with fibromyalgia and controls in a large sample from southern Spain. Physical fitness levels of people with fibromyalgia from Andalusia are very low in comparison with age-matched healthy controls. This information could be useful for correctly interpreting physical fitness assessments and could help health care providers identify individuals at risk of losing physical independence. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Ling, Ying; Zhang, Minqiang; Locke, Kenneth D; Li, Guangming; Li, Zonglong
2016-01-01
The Circumplex Scales of Interpersonal Values (CSIV) is a 64-item self-report measure of goals from each octant of the interpersonal circumplex. We used item response theory methods to compare whether dominance models or ideal point models best described how people respond to CSIV items. Specifically, we fit a polytomous dominance model called the generalized partial credit model and an ideal point model of similar complexity called the generalized graded unfolding model to the responses of 1,893 college students. The results of both graphical comparisons of item characteristic curves and statistical comparisons of model fit suggested that an ideal point model best describes the process of responding to CSIV items. The different models produced different rank orderings of high-scoring respondents, but overall the models did not differ in their prediction of criterion variables (agentic and communal interpersonal traits and implicit motives).
Nongaussian distribution curve of heterophorias among children.
Letourneau, J E; Giroux, R
1991-02-01
The purpose of this study was to measure the distribution curves of horizontal and vertical phorias among children. Kolmogorov-Smirnov goodness-of-fit tests showed that these distribution curves were not Gaussian among (N = 2048) 6- to 13-year-old children. The distribution curves of horizontal phoria at far and of vertical phorias at far and at near were leptokurtic; the distribution curve of horizontal phoria at near was platykurtic. No variation of the distribution curve of heterophorias with age was observed. Comparisons of any individual findings with the general distribution should take the non-Gaussian distribution curve of heterophorias into account.
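The two statistics used above are easy to reproduce on illustrative data: a Kolmogorov-Smirnov test against a fitted Gaussian, plus excess kurtosis to label a distribution leptokurtic or platykurtic. The samples below are synthetic stand-ins, not the study's phoria measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Invented stand-ins for phoria distributions (prism dioptres):
lepto = rng.laplace(0.0, 1.0, 2048)     # heavy-tailed, like the leptokurtic far phorias
platy = rng.uniform(-3.0, 3.0, 2048)    # flat-topped, like the platykurtic near phoria

def gaussian_ks_pvalue(x):
    """KS goodness-of-fit test against a Gaussian with the sample's own mean/SD
    (a simple normality screen; strictly, fitted parameters call for Lilliefors)."""
    return stats.kstest(x, 'norm', args=(x.mean(), x.std(ddof=1))).pvalue

p_lepto = gaussian_ks_pvalue(lepto)
p_platy = gaussian_ks_pvalue(platy)
k_lepto = stats.kurtosis(lepto)         # excess kurtosis > 0 -> leptokurtic
k_platy = stats.kurtosis(platy)         # excess kurtosis < 0 -> platykurtic
```

Both synthetic samples fail the normality screen at this sample size, mirroring the study's rejection of Gaussian phoria distributions.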
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; Anderson, Kevin K.; White, Amanda M.
Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. 
The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
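A compact stand-in for the monotone-fit-plus-Monte-Carlo idea: pool-adjacent-violators enforces monotonicity (in place of the paper's penalized constrained least-squares spline), a shape-preserving interpolant smooths the standard curve, and Monte Carlo over the measurement error yields an asymmetric prediction interval. All values are invented:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit (a simple
    stand-in for the paper's PCLS monotone spline)."""
    blocks = []                                  # [mean, count] per monotone block
    for v in map(float, y):
        blocks.append([v, 1.0])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, n2 = blocks.pop()
            m1, n1 = blocks.pop()
            blocks.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    return np.array([m for m, n in blocks for _ in range(int(n))])

rng = np.random.default_rng(3)
conc = np.logspace(-2, 2, 12)                        # standard concentrations
intensity = 0.1 + 1.9 * conc / (conc + 1.0) + rng.normal(0.0, 0.05, conc.size)

# Monotone standard curve: isotonic means, then a shape-preserving interpolant.
mono = PchipInterpolator(conc, pava(intensity))
grid = np.logspace(-2, 2, 2000)
fitted = mono(grid)

def predict(obs):
    """Invert the monotone standard curve to predict a concentration."""
    return grid[np.argmin(np.abs(fitted - obs))]

# Monte Carlo over the assay's measurement error -> asymmetric interval.
obs = 1.05
draws = [predict(obs + e) for e in rng.normal(0.0, 0.05, 500)]
lo, hi = np.percentile(draws, [2.5, 97.5])
est = predict(obs)
```

Because the interval comes from re-predicting perturbed intensities through the curved standard curve, it is naturally asymmetric near the asymptotes, matching the abstract's observation.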
Analytical model for release calculations in solid thin-foils ISOL targets
NASA Astrophysics Data System (ADS)
Egoriti, L.; Boeckx, S.; Ghys, L.; Houngbo, D.; Popescu, L.
2016-10-01
A detailed analytical model has been developed to simulate isotope-release curves from thin-foils ISOL targets. It involves the separate modeling of diffusion and effusion inside the target. The former has been modeled using both first and second Fick's law. The latter, effusion from the surface of the target material to the end of the ionizer, was simulated with the Monte Carlo code MolFlow+. The calculated delay-time distribution for this process was then fitted using a double-exponential function. The release curve obtained from the convolution of diffusion and effusion shows good agreement with experimental data from two different target geometries used at ISOLDE. Moreover, the experimental yields are well reproduced when combining the release fraction with calculated in-target production.
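The double-exponential fit of the effusion delay-time distribution can be sketched as follows, on synthetic values rather than the MolFlow+ output:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, a1, tau1, a2, tau2):
    """Double-exponential delay-time distribution (fast + slow effusion terms)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic normalized delay-time data (invented, not the MolFlow+ output).
t = np.linspace(0.0, 500.0, 250)                 # time, ms
y = double_exp(t, 0.8, 20.0, 0.2, 150.0)

popt, _ = curve_fit(double_exp, t, y, p0=(1.0, 10.0, 0.1, 100.0),
                    bounds=(0.0, [10.0, 500.0, 10.0, 1000.0]))
```

The full release curve would then be the convolution of this fitted effusion term with the diffusion solution from Fick's law, as the abstract describes.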
Self-Regulation and Recall: Growth Curve Modeling of Intervention Outcomes for Older Adults
West, Robin L.; Hastings, Erin C.
2013-01-01
Memory training has often been supported as a potential means to improve performance for older adults. Less often studied are the characteristics of trainees that benefit most from training. Using a self-regulatory perspective, the current project examined a latent growth curve model to predict training-related gains for middle-aged and older adult trainees from individual differences (e.g., education), information processing skills (strategy use) and self-regulatory factors such as self-efficacy, control, and active engagement in training. For name recall, a model including strategy usage and strategy change as predictors of memory gain, along with self-efficacy and self-efficacy change, showed comparable fit to a more parsimonious model including only self-efficacy variables as predictors. The best fit to the text recall data was a model focusing on self-efficacy change as the main predictor of memory change, and that model showed significantly better fit than a model also including strategy usage variables as predictors. In these models, overall performance was significantly predicted by age and memory self-efficacy, and subsequent training-related gains in performance were best predicted directly by change in self-efficacy (text recall), or indirectly through the impact of active engagement and self-efficacy on gains (name recall). These results underscore the benefits of targeting self-regulatory factors in intervention programs designed to improve memory skills. PMID:21604891
A Quadriparametric Model to Describe the Diversity of Waves Applied to Hormonal Data.
Abdullah, Saman; Bouchard, Thomas; Klich, Amna; Leiva, Rene; Pyper, Cecilia; Genolini, Christophe; Subtil, Fabien; Iwaz, Jean; Ecochard, René
2018-05-01
Even in normally cycling women, hormone level shapes may vary widely between cycles and between women. Over decades, finding ways to characterize and compare cycle hormone waves has been difficult, and most solutions, in particular polynomials or splines, do not correspond to physiologically meaningful parameters. We present an original concept to characterize most hormone waves with only two parameters. The modelling attempt considered pregnanediol-3-alpha-glucuronide (PDG) and luteinising hormone (LH) levels in 266 cycles (with ultrasound-identified ovulation day) in 99 normally fertile women aged 18 to 45. The study searched for a convenient wave description process and carried out an extended search for the best-fitting density distribution. The highly flexible beta-binomial distribution offered the best fit of most hormone waves and required only two readily available and understandable wave parameters: location and scale. In bell-shaped waves (e.g., PDG curves), early peaks may be fitted with a low location parameter and a low scale parameter; plateau shapes are obtained with higher scale parameters. I-shaped, J-shaped, and U-shaped waves (sometimes the shapes of LH curves) may be fitted with a high scale parameter and, respectively, a low, high, or medium location parameter. These location and scale parameters will later be correlated with feminine physiological events. Our results demonstrate that, with unimodal waves, complex methods (e.g., functional mixed-effects models using smoothing splines, second-order growth mixture models, or functional principal-component-based methods) may be avoided. The use, application, and, especially, result interpretation of four-parameter analyses might be advantageous within the context of feminine physiological events.
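A sketch of how a two-parameter beta-binomial can span these wave shapes. The mapping from (location, scale) to the distribution's (α, β) below is one convenient assumption, not necessarily the authors' parameterization:

```python
import numpy as np
from scipy.stats import betabinom

def wave(n_days, location, scale):
    """Hormone-wave shape over an n_days cycle from two parameters.
    Assumed mapping (not necessarily the authors'): alpha = location/scale,
    beta = (1 - location)/scale, so location sets the peak-day fraction and
    scale sets the spread."""
    a = location / scale
    b = (1.0 - location) / scale
    days = np.arange(n_days + 1)
    return betabinom.pmf(days, n_days, a, b)

early_peak = wave(28, 0.25, 0.02)   # early, narrow bell (e.g., a PDG-like wave)
plateau = wave(28, 0.5, 0.2)        # broad plateau: higher scale
u_shape = wave(28, 0.5, 2.0)        # scale > 1 gives alpha, beta < 1: U shape
```

Varying just these two numbers reproduces the bell, plateau, and U/I/J families the abstract describes.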
UTM: Universal Transit Modeller
NASA Astrophysics Data System (ADS)
Deeg, Hans J.
2014-12-01
The Universal Transit Modeller (UTM) is a light-curve simulator for all kinds of transiting or eclipsing configurations between arbitrary numbers of several types of objects, which may be stars, planets, planetary moons, and planetary rings. A separate fitting program, UFIT (Universal Fitter) is part of the UTM distribution and may be used to derive best fits to light-curves for any set of continuously variable parameters. UTM/UFIT is written in IDL code and its source is released in the public domain under the GNU General Public License.
Fong, Youyi; Yu, Xuesong
2016-01-01
Many modern serial dilution assays are based on fluorescence intensity (FI) readouts. We study optimal transformation model choice for fitting five parameter logistic curves (5PL) to FI-based serial dilution assay data. We first develop a generalized least squares-pseudolikelihood type algorithm for fitting heteroscedastic logistic models. Next we show that the 5PL and log 5PL functions can approximate each other well. We then compare four 5PL models with different choices of log transformation and variance modeling through a Monte Carlo study and real data. Our findings are that the optimal choice depends on the intended use of the fitted curves. PMID:27642502
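A minimal 5PL fit on synthetic FI readouts. The parameterization below is the common five-parameter logistic form; the log-transformation and variance-model choices studied in the paper are omitted:

```python
import numpy as np
from scipy.optimize import curve_fit

def five_pl(x, a, d, c, b, g):
    """5PL curve: a = response at zero dose, d = response at infinite dose,
    c = mid-dilution scale, b = slope, g = asymmetry."""
    return d + (a - d) / (1.0 + (x / c)**b)**g

# Synthetic serial-dilution FI readouts (invented values, no noise).
x = np.logspace(-2, 3, 12)                       # concentration
y = five_pl(x, 0.05, 3.0, 5.0, 1.2, 0.8)

popt, _ = curve_fit(five_pl, x, y, p0=(0.1, 2.5, 3.0, 1.0, 1.0),
                    bounds=([0.0, 0.0, 0.1, 0.1, 0.1],
                            [10.0, 10.0, 100.0, 5.0, 5.0]),
                    maxfev=20000)
max_err = np.max(np.abs(five_pl(x, *popt) - y))
```

In practice, heteroscedastic FI noise would call for the weighted (pseudolikelihood-type) fitting the abstract develops rather than this plain least squares.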
NASA Technical Reports Server (NTRS)
Chamberlain, D. M.; Elliot, J. L.
1997-01-01
We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10(exp -4) of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a l/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10(exp -4), with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.
Component Analysis of Remanent Magnetization Curves: A Revisit with a New Model Distribution
NASA Astrophysics Data System (ADS)
Zhao, X.; Suganuma, Y.; Fujii, M.
2017-12-01
Geological samples often consist of several magnetic components that have distinct origins. As the magnetic components are often indicative of their underlying geological and environmental processes, it is desirable to identify individual components to extract the associated information. This component analysis can be achieved using the so-called unmixing method, which fits a mixture of a chosen end-member model distribution to the measured remanent magnetization curve. Earlier studies used the lognormal, skew generalized Gaussian, and skewed Gaussian distributions as the end-member model distribution, with fitting performed on the gradient of the remanent magnetization curve. However, gradient curves are sensitive to measurement noise, as differentiating the measured curve amplifies noise, which can deteriorate the component analysis. Although smoothing or filtering can be applied to reduce the noise before differentiation, their potential to bias the component analysis has been only vaguely addressed. In this study, we investigated a new model function that can be applied directly to the remanent magnetization curves and therefore avoids the differentiation. The new model function provides a more flexible shape than the lognormal distribution, which is a merit for modeling the coercivity distribution of complex magnetic components. We applied the unmixing method both to model and measured data, and compared the results with those obtained using other model distributions to better understand their interchangeability, applicability and limitations. The analyses of model data suggest that unmixing methods are inherently sensitive to noise, especially when the number of components exceeds two. It is therefore recommended to verify the reliability of a component analysis by running multiple analyses with synthetic noise. Marine sediments and seafloor rocks were analyzed with the new model distribution. 
Given the same number of components, the new model distribution provides closer fits than the lognormal distribution, as evidenced by reduced residuals. Moreover, the new unmixing protocol is automated, freeing users from the labor of providing initial guesses for the parameters, which also helps reduce the subjectivity of component analysis.
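The point about avoiding differentiation can be illustrated by fitting the remanence curve directly as a sum of cumulative components. The snippet assumes cumulative log-Gaussian (lognormal-coercivity) end members and invented data, not the paper's new distribution:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

def two_component_irm(logb, m1, mu1, s1, m2, mu2, s2):
    """Remanence acquisition curve modeled directly as two cumulative
    log-Gaussian components, so no noisy differentiation is needed."""
    return m1 * norm.cdf(logb, mu1, s1) + m2 * norm.cdf(logb, mu2, s2)

rng = np.random.default_rng(4)
logb = np.linspace(0.5, 3.0, 60)                  # log10 applied field (mT)
irm = (two_component_irm(logb, 0.7, 1.4, 0.25, 0.3, 2.3, 0.2)
       + rng.normal(0.0, 0.003, logb.size))

popt, _ = curve_fit(two_component_irm, logb, irm,
                    p0=(0.5, 1.2, 0.3, 0.5, 2.0, 0.3),
                    bounds=([0.0, 0.0, 0.01, 0.0, 0.0, 0.01],
                            [2.0, 4.0, 2.0, 2.0, 4.0, 2.0]))
```

Each component's mean coercivity (in log10 field) and relative magnetization come straight from the fitted parameters, without ever differentiating the measured curve.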
Measuring the dependence of the decay curve on the electron energy deposit in NaI(Tl)
NASA Astrophysics Data System (ADS)
Choong, W.-S.; Bizarri, G.; Cherepy, N. J.; Hull, G.; Moses, W. W.; Payne, S. A.
2011-08-01
We report on the first measurement of the decay times of NaI(Tl) as a function of the deposited electron energy. It has been suggested that the decay curve depends on the ionization density, which is correlated with the electron energy deposit in the scintillator. The ionization creates excitation states, which can decay radiatively and non-radiatively through a number of competing processes. As a result, the rate at which the excitation decays depends on the ionization density. A measurement of the decay curve as a function of the ionization density will allow us to probe the kinetic rates of the competing processes. The Scintillator Light Yield Non-proportionality Characterization Instrument (SLYNCI) measures the electron response of scintillators utilizing fast sampling ADCs to digitize the raw signals from the detectors, and so can provide a measurement of the light pulse shape from the scintillator. Using data collected with the SLYNCI instrument, the intrinsic scintillation profile is extracted on an event-by-event basis by deconvolving the raw signal with the impulse response of the system. Scintillation profiles with the same electron energy deposit are summed to obtain decay curves as a function of the deposited electron energy. The decay time constants are obtained by fitting the decay curves with a two-component exponential decay. While a slight dependence of the decay time constants on the electron energy deposit is observed, the results are not statistically significant.
Classification of resistance to passive motion using minimum probability of error criterion.
Chan, H C; Manry, M T; Kondraske, G V
1987-01-01
Neurologists diagnose many muscular and nerve disorders by classifying the resistance to passive motion of patients' limbs. Over the past several years, a computer-based instrument has been developed for automated measurement and parameterization of this resistance. In the device, a voluntarily relaxed lower extremity is moved at constant velocity by a motorized driver. The torque exerted on the extremity by the machine is sampled, along with the angle of the extremity. In this paper a computerized technique is described for classifying a patient's condition as 'Normal' or 'Parkinson disease' (rigidity) from the torque-versus-angle curve for the knee joint. A Legendre polynomial, fit to the curve, is used to calculate a set of eight normally distributed features of the curve. The minimum-probability-of-error approach is used to classify the curve as being from a normal or Parkinson disease patient. Data collected from 44 different subjects were processed, and the results were compared with an independent physician's subjective assessment of rigidity. There is agreement in better than 95% of the cases when all of the features are used.
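A sketch of the two stages: Legendre-polynomial features of the torque-angle curve, then a minimum-probability-of-error rule for Gaussian classes. All curves, class statistics, and priors below are invented for illustration:

```python
import numpy as np

def legendre_features(angle, torque, degree=7):
    """Legendre coefficients of the torque-vs-angle curve as an 8-feature vector."""
    x = 2.0 * (angle - angle.min()) / (angle.max() - angle.min()) - 1.0  # to [-1, 1]
    return np.polynomial.legendre.legfit(x, torque, degree)

def min_error_classify(f, means, covs, priors):
    """Minimum-probability-of-error rule for Gaussian classes: pick the class
    with the largest log prior plus Gaussian log-likelihood."""
    scores = []
    for m, c, p in zip(means, covs, priors):
        d = f - m
        _, logdet = np.linalg.slogdet(c)
        scores.append(np.log(p) - 0.5 * (logdet + d @ np.linalg.solve(c, d)))
    return int(np.argmax(scores))

# Invented two-class setup ('Normal' vs. rigidity), not the instrument's data.
rng = np.random.default_rng(5)
angle = np.linspace(0.0, 90.0, 100)                       # knee angle, degrees
curve = 1.0 + 0.01 * angle + rng.normal(0.0, 0.05, angle.size)   # measured torque
f = legendre_features(angle, curve)

means = [legendre_features(angle, 1.0 + 0.01 * angle),    # class 0: Normal template
         legendre_features(angle, 2.0 + 0.03 * angle)]    # class 1: rigidity template
covs = [np.eye(8) * 0.05**2, np.eye(8) * 0.05**2]
label = min_error_classify(f, means, covs, [0.5, 0.5])
```

With equal priors and shared covariances this reduces to nearest-mean classification in feature space, which is the minimum-error rule for this Gaussian setup.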
NASA Technical Reports Server (NTRS)
Welker, Jean Edward
1991-01-01
Since the invention of maximum and minimum thermometers in the 18th century, diurnal temperature extrema have been recorded for air worldwide. At some stations, these extrema were also collected at various soil depths, and the behavior of these temperatures at a 10-cm depth at the Tifton Experimental Station in Georgia is presented. After a precipitation cooling event, the diurnal temperature maxima drop to a minimum value and then start a recovery to higher values (similar to thermal inertia). This recovery represents a measure of response to heating as a function of soil moisture and soil properties. Eight different curves were fitted to a wide variety of data sets for different stations and years, and both power and exponential curve fits were consistently found to be statistically accurate least-squares representations of the raw recovery data. The predictive procedures used were multivariate regression analyses, which are applicable to soils at a variety of depths besides the 10-cm depth presented.
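Both candidate recovery fits reduce to linear least squares after a log transform. A sketch on synthetic recovery values, not the Tifton data:

```python
import numpy as np

def fit_power(t, y):
    """Least-squares fit of y = a*t**b via log-log linear regression."""
    b, loga = np.polyfit(np.log(t), np.log(y), 1)
    return np.exp(loga), b

def fit_exponential(t, y):
    """Least-squares fit of y = a*exp(b*t) via semi-log linear regression."""
    b, loga = np.polyfit(t, np.log(y), 1)
    return np.exp(loga), b

# Invented recovery of 10-cm soil temperature maxima after a rain event (deg C).
t = np.arange(1.0, 8.0)                  # days since the post-rain minimum
y = 22.0 * t**0.12                       # synthetic power-law recovery

a_p, b_p = fit_power(t, y)
a_e, b_e = fit_exponential(t, y)
```

Comparing the residuals of the two fitted forms on each station-year is one way to reproduce the paper's finding that both families track the recovery well.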
Cai, Jing; Li, Shan; Zhang, Haixin; Zhang, Shuoxin; Tyree, Melvin T
2014-01-01
Vulnerability curves (VCs) generally can be fitted to the Weibull equation; however, a growing number of VCs appear to be recalcitrant, that is, they deviate from a single Weibull but seem to fit dual Weibull curves. We hypothesize that dual Weibull curves in Hippophae rhamnoides L. are due to different vessel diameter classes, inter-vessel hydraulic connections or vessels versus fibre tracheids. We used dye staining techniques, hydraulic measurements and quantitative anatomy measurements to test these hypotheses. The fibres contribute 1.3% of the total stem conductivity, which eliminates the hypothesis that fibre tracheids account for the second Weibull curve. Nevertheless, the staining pattern of vessels and fibre tracheids suggested that fibres might function as a hydraulic bridge between adjacent vessels. We also argue that fibre bridges are safer than vessel-to-vessel pits and put forward this concept as a new paradigm. Hence, we tentatively propose that the first Weibull curve may be accounted for by vessels connected to each other directly by pit fields, while the second Weibull curve is associated with vessels that are connected almost exclusively by fibre bridges. Further research is needed to test the concept of fibre bridge safety in species that have recalcitrant or normal Weibull curves. © 2013 John Wiley & Sons Ltd.
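A dual-Weibull vulnerability curve of the kind described can be fitted as a weighted sum of two Weibull terms. Parameters below are invented, not the H. rhamnoides values:

```python
import numpy as np
from scipy.optimize import curve_fit

def dual_weibull_plc(P, f, b1, c1, b2, c2):
    """Percent loss of conductivity vs. xylem tension as a weighted sum of two
    Weibull curves, e.g. one vessel population linked by pit fields and one
    linked by fibre bridges."""
    return 100.0 * (f * (1.0 - np.exp(-(P / b1)**c1))
                    + (1.0 - f) * (1.0 - np.exp(-(P / b2)**c2)))

# Synthetic recalcitrant VC (invented parameters).
P = np.linspace(0.2, 8.0, 40)                     # tension, MPa
plc = dual_weibull_plc(P, 0.45, 1.5, 3.0, 5.0, 6.0)

popt, _ = curve_fit(dual_weibull_plc, P, plc, p0=(0.5, 1.0, 2.0, 4.0, 4.0),
                    bounds=([0.0, 0.1, 0.5, 0.1, 0.5],
                            [1.0, 10.0, 10.0, 10.0, 10.0]),
                    maxfev=20000)
max_err = np.max(np.abs(dual_weibull_plc(P, *popt) - plc))
```

The fitted weight `f` estimates the conductivity fraction carried by the more vulnerable population, which is the quantity the anatomical hypotheses above try to explain.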
Hencky's model for elastomer forming process
NASA Astrophysics Data System (ADS)
Oleinikov, A. A.; Oleinikov, A. I.
2016-08-01
In the numerical simulation of elastomer forming processes, Hencky's isotropic hyperelastic material model can guarantee relatively accurate prediction of the strain range in terms of large deformations. It is shown that this material model extends Hooke's law from the region of infinitesimal strains to that of moderate ones. A new representation of the fourth-order elasticity tensor for Hencky's hyperelastic isotropic material is obtained; it possesses both minor symmetries and the major symmetry. The constitutive relations of the considered model are implemented into the MSC.Marc code. By calculating and fitting curves, the polyurethane elastomer material constants are selected. Simulation of equipment for elastomer sheet forming is considered.
Comprehensive Study of Plasma-Wall Sheath Transport Phenomena
2016-10-26
function of the applied thermo-mechanical stress. An experiment was designed to test whether and how the process of plasma erosion might depend on ...of exposed surface, a, b) pretest height and laser image, c, d) post-test height and laser image. For the following analysis, a curve fit of the...normal to the ion beam. However, even with a one-dimensional simulation, features of a similar depth and profile to the post-test surface develop
Characterizing the Constitutive Properties of AA7075 for Hot Forming
NASA Astrophysics Data System (ADS)
Omer, K.; Kim, S.; Butcher, C.; Worswick, M.
2017-09-01
The work presented herein investigates the constitutive properties of AA7075 as it undergoes a hot stamping/die quenching process. Tensile specimens were solutionized inside a heated furnace set to 470°C. Once solutionized, the samples were quenched to an intermediate temperature using a vortex air chiller at a minimum rate of 52°C/s. Tensile tests were conducted at steady state temperatures of 470, 400, 300, 200, 115 and 25°C. This solutionizing and subsequent quenching process replicated the temperature cycle and quench rates representative of a die quenching operation. The results of the tensile test were analyzed with digital image correlation using an area reduction approach, which approximated the cross-sectional area of the tensile specimen as it necked and allowed the true stress-strain response to be calculated well past the initial necking point. The resulting true stress-strain curves showed that the AA7075 samples experienced almost no hardening at 470°C. As the steady state temperature decreased, the rate of hardening as well as overall material strength increased. The true stress-strain curves were fit to a modified version of the extended Voce constitutive model. The resulting fits can be used in a finite element model to predict the behaviour of an AA7075 blank during a die quenching operation.
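The abstract cites a modified version of the extended Voce model without giving its form, so the sketch below fits the standard Voce saturation law, sigma(eps) = sig_s - (sig_s - sig_y)*exp(-beta*eps), to a synthetic flow curve; the parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def voce(eps, sig_y, sig_s, beta):
    """Standard Voce hardening: rises from yield sig_y and saturates at sig_s."""
    return sig_s - (sig_s - sig_y) * np.exp(-beta * eps)

eps = np.linspace(0.0, 0.5, 50)              # true plastic strain
stress = voce(eps, 150.0, 400.0, 12.0)       # synthetic flow curve (MPa)

popt, _ = curve_fit(voce, eps, stress, p0=(100.0, 300.0, 5.0))
```

A real calibration would repeat this fit at each steady-state test temperature, giving temperature-dependent coefficients for the finite element model.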
NASA Technical Reports Server (NTRS)
Elliott, R. D.; Werner, N. M.; Baker, W. M.
1975-01-01
The Aerodynamic Data Analysis and Integration System (ADAIS) is described: a highly interactive computer graphics program capable of manipulating large quantities of data such that addressable elements of a data base can be called up for graphic display, compared, curve fit, stored, retrieved, differenced, etc. The general nature of the system is evidenced by the fact that limited usage has already occurred with data bases consisting of thermodynamic, basic loads, and flight dynamics data. Productivity five times that of conventional manual methods of wind tunnel data analysis is routinely achieved using ADAIS. In wind tunnel data analysis, data from one or more runs of a particular test may be called up and displayed along with data from one or more runs of a different test. Curves may be faired through the data points by any of four methods, including cubic spline and least-squares polynomial fit up to seventh order.
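One of the four fairing options, a least-squares polynomial fit up to seventh order, can be sketched with NumPy; the aerodynamic variable names are illustrative, not taken from ADAIS:

```python
import numpy as np

# Fair a curve through wind tunnel style data points with a least-squares
# polynomial of degree 7 (the highest order mentioned in the abstract).
alpha = np.linspace(-10.0, 20.0, 31)          # angle of attack (deg)
cl = 0.1 * alpha - 1e-4 * alpha**3            # synthetic lift coefficient
coeffs = np.polyfit(alpha, cl, deg=7)         # least-squares polynomial fit
faired = np.polyval(coeffs, alpha)
max_err = np.max(np.abs(faired - cl))
```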
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Tao; Li, Cheng; Huang, Can
Here, in order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve the global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.
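The first step, fitting a cost function of the slave model for use by the master model, can be illustrated as follows; the sampling scheme and quadratic surrogate are hypothetical simplifications of the paper's curve-fitting approach:

```python
import numpy as np

# Sample the slave (distribution network) optimum at several boundary
# operating points, then fit a smooth cost curve that the master
# (transmission network) model can evaluate instead of re-solving the slave.
v_boundary = np.linspace(0.95, 1.05, 9)            # feeder-bus setpoints (p.u.)
slave_cost = 3.0 * (v_boundary - 1.0)**2 + 0.5     # stand-in for slave re-solves

a, b, c = np.polyfit(v_boundary, slave_cost, deg=2)
cost_fn = lambda v: a * v**2 + b * v + c           # surrogate handed to master
```

The master optimization then treats cost_fn as an ordinary term in its objective, which is what decouples the two levels.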
NASA Astrophysics Data System (ADS)
Cai, Gaoshen; Wu, Chuanyu; Gao, Zepu; Lang, Lihui; Alexandrov, Sergei
2018-05-01
An elliptical warm/hot sheet bulging test under different temperatures and pressure rates was carried out to predict Al-alloy sheet forming limits during warm/hot sheet hydroforming. Using relevant formulas of ultimate strain to calculate and process the experimental data, forming limit curves (FLCs) in the tension-tension state of strain (TTSS) area are obtained. Combined with the basic experimental data obtained by uniaxial tensile tests under conditions equivalent to the bulging test, complete forming limit diagrams (FLDs) of the Al-alloy are established. Using a quadratic polynomial curve fitting method, the material constants of the fitting function are calculated and a prediction model equation for the sheet metal forming limit is established, from which the corresponding forming limit curves in the TTSS area can be obtained. The bulging test and fitting results indicated that the sheet metal FLCs obtained were very accurate. Also, the model equation can be used to guide warm/hot sheet bulging tests.
Yang, Xiaolan; Hu, Xiaolei; Xu, Bangtian; Wang, Xin; Qin, Jialin; He, Chenxiong; Xie, Yanling; Li, Yuanli; Liu, Lin; Liao, Fei
2014-06-17
A fluorometric titration approach was proposed for the calibration of the quantity of monoclonal antibody (mcAb) via the quench of fluorescence of tryptophan residues. It applied to purified mcAbs recognizing tryptophan-deficient epitopes, haptens nonfluorescent at 340 nm under the excitation at 280 nm, or fluorescent haptens bearing excitation valleys nearby 280 nm and excitation peaks nearby 340 nm to serve as Förster-resonance-energy-transfer (FRET) acceptors of tryptophan. Titration probes were epitopes/haptens themselves or conjugates of nonfluorescent haptens or tryptophan-deficient epitopes with FRET acceptors of tryptophan. Under the excitation at 280 nm, titration curves were recorded as fluorescence specific for the FRET acceptors or for mcAbs at 340 nm. To quantify the binding site of a mcAb, a universal model considering both static and dynamic quench by either type of probes was proposed for fitting to the titration curve. This was easy for fitting to fluorescence specific for the FRET acceptors but encountered nonconvergence for fitting to fluorescence of mcAbs at 340 nm. As a solution, (a) the maximum of the absolute values of first-order derivatives of a titration curve as fluorescence at 340 nm was estimated from the best-fit model for a probe level of zero, and (b) molar quantity of the binding site of the mcAb was estimated via consecutive fitting to the same titration curve by utilizing such a maximum as an approximate of the slope for linear response of fluorescence at 340 nm to quantities of the mcAb. This fluorometric titration approach was proved effective with one mcAb for six-histidine and another for penicillin G.
Incorporating Nonstationarity into IDF Curves across CONUS from Station Records and Implications
NASA Astrophysics Data System (ADS)
Wang, K.; Lettenmaier, D. P.
2017-12-01
Intensity-duration-frequency (IDF) curves are widely used for engineering design of storm-affected structures. Current practice is that IDF curves are based on observed precipitation extremes fit to a stationary probability distribution (e.g., the extreme value family). However, there is increasing evidence of nonstationarity in station records. We apply the Mann-Kendall trend test to over 1000 stations across the CONUS at a 0.05 significance level, and find that about 30% of stations tested have significant nonstationarity for at least one duration (1-, 2-, 3-, 6-, 12-, 24-, and 48-hours). We fit the stations to a GEV distribution with time-varying location and scale parameters using a Bayesian methodology and compare the fit of stationary versus nonstationary GEV distributions to observed precipitation extremes. Within our fitted nonstationary GEV distributions, we compare distributions with a time-varying location parameter versus distributions with both time-varying location and scale parameters. For distributions with two time-varying parameters, we pay particular attention to instances where location and scale trends have opposing directions. Finally, we use the mathematical framework based on the work of Koutsoyiannis to generate IDF curves based on the fitted GEV distributions and discuss the implications that using time-varying parameters may have on simple scaling relationships. We apply the above methods to evaluate how frequency statistics based on a stationary assumption compare to those that incorporate nonstationarity for both short- and long-term projects. Overall, we find that neglecting nonstationarity can lead to under- or over-estimates (depending on the trend for the given duration and region) of important statistics such as the design storm.
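A nonstationary GEV fit with a time-varying location parameter can be sketched by maximum likelihood (the study's Bayesian machinery is omitted; the data and starting values are synthetic, and note SciPy's shape convention c = -xi):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(0)
years = np.arange(60)
# Synthetic annual precipitation maxima with an upward trend in location
x = genextreme.rvs(c=-0.1, loc=50.0 + 0.3 * years, scale=8.0, random_state=rng)

def nll(theta):
    """Negative log-likelihood of a GEV with a linear trend in location."""
    mu0, mu1, sigma, xi = theta
    if sigma <= 0:
        return np.inf
    return -np.sum(genextreme.logpdf(x, c=-xi, loc=mu0 + mu1 * years,
                                     scale=sigma))

fit = minimize(nll, x0=(50.0, 0.0, 10.0, 0.1), method="Nelder-Mead")
mu0, mu1, sigma, xi = fit.x
```

Comparing this model's likelihood against a stationary fit (mu1 fixed at 0) gives a likelihood-ratio check of the trend's significance.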
Motulsky, Harvey J; Brown, Ronald E
2006-01-01
Background Nonlinear regression, like linear regression, assumes that the scatter of data around the ideal curve follows a Gaussian or normal distribution. This assumption leads to the familiar goal of regression: to minimize the sum of the squares of the vertical or Y-value distances between the points and the curve. Outliers can dominate the sum-of-the-squares calculation and lead to misleading results. However, we know of no practical method for routinely identifying outliers when fitting curves with nonlinear regression. Results We describe a new method for identifying outliers when fitting data with nonlinear regression. We first fit the data using a robust form of nonlinear regression, based on the assumption that scatter follows a Lorentzian distribution. We devised a new adaptive method that gradually becomes more robust as the method proceeds. To define outliers, we adapted the false discovery rate approach to handling multiple comparisons. We then remove the outliers, and analyze the data using ordinary least-squares regression. Because the method combines robust regression and outlier removal, we call it the ROUT method. When analyzing simulated data, where all scatter is Gaussian, our method detects (falsely) one or more outliers in only about 1–3% of experiments. When analyzing data contaminated with one or several outliers, the ROUT method performs well at outlier identification, with an average False Discovery Rate less than 1%. Conclusion Our method, which combines a new method of robust nonlinear regression with a new method of outlier identification, identifies outliers from nonlinear curve fits with reasonable power and few false positives. PMID:16526949
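The two-stage structure of ROUT (a robust Lorentzian-loss fit, outlier flagging, then ordinary least squares) can be sketched with SciPy; the simple residual threshold below is a stand-in for the paper's false discovery rate step and is an assumption of this sketch:

```python
import numpy as np
from scipy.optimize import least_squares, curve_fit

def model(x, a, b):
    return a * np.exp(-b * x)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 4.0, 30)
y = model(x, 10.0, 1.2) + 0.05 * rng.standard_normal(30)
y[5] += 8.0                                   # plant one gross outlier

# Stage 1: robust fit; Cauchy (Lorentzian) loss downweights the outlier
res = least_squares(lambda p: model(x, *p) - y, x0=(5.0, 1.0), loss="cauchy")
resid = y - model(x, *res.x)

# Stage 2: flag large residuals (crude stand-in for FDR), then refit by OLS
scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))   # robust sigma
keep = np.abs(resid) < 5.0 * scale
popt, _ = curve_fit(model, x[keep], y[keep], p0=res.x)
```

The robust pass keeps the outlier from dragging the curve; the final OLS pass restores the usual Gaussian-scatter interpretation of the fitted parameters.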
Application of separable parameter space techniques to multi-tracer PET compartment modeling
Zhang, Jeff L; Morey, A Michael; Kadrmas, Dan J
2016-01-01
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. PMID:26788888
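The core idea, separating the linear amplitudes from the nonlinear rate constants so the outer search runs in fewer dimensions, can be sketched for a two-exponential model (a simplified stand-in for the multi-tracer compartment equations, which are not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 10.0, 101)
y = 5.0 * np.exp(-0.4 * t) + 2.0 * np.exp(-2.5 * t)   # noise-free two-exponential

def projected_rss(ks):
    """For given rates, solve the amplitudes exactly by linear least squares,
    so the outer optimizer searches only the nonlinear (rate) dimensions."""
    k1, k2 = ks
    A = np.column_stack([np.exp(-k1 * t), np.exp(-k2 * t)])
    amps, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ amps
    return r @ r

fit = minimize(projected_rss, x0=(0.2, 1.0), method="Nelder-Mead")
k_slow, k_fast = sorted(fit.x)
```

Reducing a four-parameter problem to a two-dimensional search is what makes exhaustive-search fits of the kind described above computationally feasible.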
Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee
2013-07-01
Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S
2018-01-01
The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.
Mizuno, Ju; Mohri, Satoshi; Yokoyama, Takeshi; Otsuji, Mikiya; Arita, Hideko; Hanaoka, Kazuo
2017-02-01
Varying temperature affects cardiac systolic and diastolic function and the left ventricular (LV) pressure-time curve (PTC) waveform that includes information about LV inotropism and lusitropism. Our proposed half-logistic (h-L) time constants obtained by fitting using h-L functions for four segmental phases (Phases I-IV) in the isovolumic LV PTC are more useful indices for estimating LV inotropism and lusitropism during contraction and relaxation periods than the mono-exponential (m-E) time constants at normal temperature. In this study, we investigated whether the superiority of the goodness of h-L fits remained even at hypothermia and hyperthermia. Phases I-IV in the isovolumic LV PTCs in eight excised, cross-circulated canine hearts at 33, 36, and 38 °C were analyzed using h-L and m-E functions and the least-squares method. The h-L and m-E time constants for Phases I-IV significantly shortened with increasing temperature. Curve fitting using h-L functions was significantly better than that using m-E functions for Phases I-IV at all temperatures. Therefore, the superiority of the goodness of h-L fit vs. m-E fit remained at all temperatures. As LV inotropic and lusitropic indices, temperature-dependent h-L time constants could be more useful than m-E time constants for Phases I-IV.
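The comparison of half-logistic and mono-exponential fits can be sketched as below; the functional forms and parameter values are illustrative assumptions rather than the study's exact segmental models:

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, A, tau, C):
    return A * np.exp(-t / tau) + C

def half_logistic(t, A, tau, C):
    """Half-logistic decay: equals A + C at t = 0 and approaches C for large t."""
    return 2.0 * A / (1.0 + np.exp(t / tau)) + C

t = np.linspace(0.0, 0.08, 80)                 # time (s)
p = half_logistic(t, 60.0, 0.02, 5.0)          # synthetic LV pressure fall (mmHg)

p_hl, _ = curve_fit(half_logistic, t, p, p0=(50.0, 0.03, 0.0))
p_me, _ = curve_fit(mono_exp, t, p, p0=(50.0, 0.03, 0.0))
rss_hl = np.sum((half_logistic(t, *p_hl) - p) ** 2)
rss_me = np.sum((mono_exp(t, *p_me) - p) ** 2)
```

Since the synthetic data are half-logistic, rss_hl < rss_me by construction; on real pressure-time curves this residual comparison is the goodness-of-fit test the study performs at each temperature.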
NASA Astrophysics Data System (ADS)
Saquet, E.; Emelyanov, N.; Robert, V.; Arlot, J.-E.; Anbazhagan, P.; Baillié, K.; Bardecker, J.; Berezhnoy, A. A.; Bretton, M.; Campos, F.; Capannoli, L.; Carry, B.; Castet, M.; Charbonnier, Y.; Chernikov, M. M.; Christou, A.; Colas, F.; Coliac, J.-F.; Dangl, G.; Dechambre, O.; Delcroix, M.; Dias-Oliveira, A.; Drillaud, C.; Duchemin, Y.; Dunford, R.; Dupouy, P.; Ellington, C.; Fabre, P.; Filippov, V. A.; Finnegan, J.; Foglia, S.; Font, D.; Gaillard, B.; Galli, G.; Garlitz, J.; Gasmi, A.; Gaspar, H. S.; Gault, D.; Gazeas, K.; George, T.; Gorda, S. Y.; Gorshanov, D. L.; Gualdoni, C.; Guhl, K.; Halir, K.; Hanna, W.; Henry, X.; Herald, D.; Houdin, G.; Ito, Y.; Izmailov, I. S.; Jacobsen, J.; Jones, A.; Kamoun, S.; Kardasis, E.; Karimov, A. M.; Khovritchev, M. Y.; Kulikova, A. M.; Laborde, J.; Lainey, V.; Lavayssiere, M.; Le Guen, P.; Leroy, A.; Loader, B.; Lopez, O. C.; Lyashenko, A. Y.; Lyssenko, P. G.; Machado, D. I.; Maigurova, N.; Manek, J.; Marchini, A.; Midavaine, T.; Montier, J.; Morgado, B. E.; Naumov, K. N.; Nedelcu, A.; Newman, J.; Ohlert, J. M.; Oksanen, A.; Pavlov, H.; Petrescu, E.; Pomazan, A.; Popescu, M.; Pratt, A.; Raskhozhev, V. N.; Resch, J.-M.; Robilliard, D.; Roschina, E.; Rothenberg, E.; Rottenborn, M.; Rusov, S. A.; Saby, F.; Saya, L. F.; Selvakumar, G.; Signoret, F.; Slesarenko, V. Y.; Sokov, E. N.; Soldateschi, J.; Sonka, A.; Soulie, G.; Talbot, J.; Tejfel, V. G.; Thuillot, W.; Timerson, B.; Toma, R.; Torsellini, S.; Trabuco, L. L.; Traverse, P.; Tsamis, V.; Unwin, M.; Abbeel, F. Van Den; Vandenbruaene, H.; Vasundhara, R.; Velikodsky, Y. I.; Vienne, A.; Vilar, J.; Vugnon, J.-M.; Wuensche, N.; Zeleny, P.
2018-03-01
During the 2014-2015 mutual events season, the Institut de Mécanique Céleste et de Calcul des Éphémérides (IMCCE), Paris, France, and the Sternberg Astronomical Institute (SAI), Moscow, Russia, led an international observation campaign to record ground-based photometric observations of Galilean moon mutual occultations and eclipses. We focused on processing the complete photometric observations data base to compute new accurate astrometric positions. We used our method to derive astrometric positions from the light curves of the events. We developed an accurate photometric model of mutual occultations and eclipses, while correcting for the satellite albedos, Hapke's light scattering law, the phase effect, and the limb darkening. We processed 609 light curves, and we compared the observed positions of the satellites with the theoretical positions from IMCCE NOE-5-2010-GAL satellite ephemerides and INPOP13c planetary ephemeris. The standard deviation after fitting the light curve in equatorial positions is ±24 mas, or 75 km at Jupiter. The rms (O-C) in equatorial positions is ±50 mas, or 150 km at Jupiter.
NASA Astrophysics Data System (ADS)
Han, Ming
In this dissertation, detailed and systematic theoretical and experimental study of low-finesse extrinsic Fabry-Perot interferometric (EFPI) fiber optic sensors together with their signal processing methods for white-light systems are presented. The work aims to provide a better understanding of the operational principle of EFPI fiber optic sensors, and is useful and important in the design, optimization, fabrication and application of single-mode fiber (SMF) EFPI (SMF-EFPI) and multimode fiber (MMF) EFPI (MMF-EFPI) sensor systems. The cases for SMF-EFPI and MMF-EFPI sensors are separately considered. In the analysis of SMF-EFPI sensors, the light transmitted in the fiber is approximated by a Gaussian beam and the obtained spectral transfer function of the sensors includes an extra phase shift due to the light coupling at the fiber end-face. This extra phase shift has not been addressed by previous researchers and is of great importance for high accuracy and high resolution signal processing of white-light SMF-EFPI systems. Fringe visibility degradation due to gap-length increase and sensor imperfections is studied. The results indicate that the fringe visibility of a SMF-EFPI sensor is relatively insensitive to gap-length changes and sensor imperfections. Based on the spectral fringe pattern predicted by the theory of SMF-EFPI sensors, a novel curve fitting signal processing method (Type 1 curve-fitting method) is presented for white-light SMF-EFPI sensor systems. Other spectral domain signal processing methods, including the wavelength-tracking, the Type 2-3 curve fitting, Fourier transform, and two-point interrogation methods, are reviewed and systematically analyzed. Experiments were carried out to compare the performances of these signal processing methods.
The results have shown that the Type 1 curve fitting method achieves high accuracy, high resolution, large dynamic range, and the capability of absolute measurement at the same time, while the others either have less resolution or are not capable of absolute measurement. Previous mathematical models for MMF-EFPI sensors are all based on geometric optics; therefore their applications have many limitations. In this dissertation, a modal theory is developed that can be used in any situation and is more accurate. The mathematical description of the spectral fringes of MMF-EFPI sensors is obtained by the modal theory. The effect on the fringe visibility of system parameters, including the sensor head structure, the fiber parameters, and the mode power distribution in the MMF of the MMF-EFPI sensors, is analyzed. Experiments were carried out to validate the theory. Fundamental mechanisms that cause the degradation of the fringe visibility in MMF-EFPI sensors are revealed. It is shown that, in some situations in which the fringe visibility is important and difficult to achieve, a simple method of launching the light into the MMF-EFPI sensor system from the output of a SMF could be used to improve the fringe visibility and to ease the fabrication difficulties of MMF-EFPI sensors. Signal processing methods that are well-understood in white-light SMF-EFPI sensor systems may exhibit new aspects when they are applied to white-light MMF-EFPI sensor systems. This dissertation reveals that variations of the mode power distribution (MPD) in the MMF could cause phase variations of the spectral fringes from a MMF-EFPI sensor and introduce measurement errors for a signal processing method in which the phase information is used. This MPD effect on the wavelength-tracking method in white-light MMF-EFPI sensors is theoretically analyzed. The fringe phase changes caused by MPD variations were experimentally observed and thus the MPD effect is validated.
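The curve-fitting idea at the heart of the Type 1 method, fitting a model of the spectral fringes (including a phase term) to recover the gap length absolutely, can be illustrated with a generic low-finesse two-beam model; the specific form below is an assumption of this sketch, not the dissertation's exact transfer function:

```python
import numpy as np
from scipy.optimize import curve_fit

def fringe(lam, A, B, L, phi):
    """Low-finesse two-beam EFPI spectrum; phi absorbs any extra coupling phase."""
    return A + B * np.cos(4.0 * np.pi * L / lam + phi)

lam = np.linspace(1.50, 1.60, 500)             # wavelength (um)
spec = fringe(lam, 1.0, 0.4, 30.0, 0.3)        # synthetic spectrum, 30 um gap

# A coarse gap estimate (e.g., from a Fourier transform of the spectrum) must
# seed the fit: the fringe order makes the problem multimodal otherwise.
popt, _ = curve_fit(fringe, lam, spec, p0=(1.0, 0.3, 29.9, 0.0))
gap = popt[2]
```

The free phase term phi is what lets a fit of this shape accommodate the coupling-induced extra phase shift discussed above.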
Parametric analysis of ATM solar array.
NASA Technical Reports Server (NTRS)
Singh, B. K.; Adkisson, W. B.
1973-01-01
The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.
Enhancements of Bayesian Blocks; Application to Large Light Curve Databases
NASA Technical Reports Server (NTRS)
Scargle, Jeff
2015-01-01
Bayesian Blocks are optimal piecewise linear representations (step function fits) of light curves. The simple algorithm implementing this idea, using dynamic programming, has been extended to include more data modes and fitness metrics, multivariate analysis, and data on the circle (Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations, Scargle, Norris, Jackson and Chiang 2013, ApJ, 764, 167), as well as new results on background subtraction and refinement of the procedure for precise timing of transient events in sparse data. Example demonstrations will include exploratory analysis of the Kepler light curve archive in a search for "star-tickling" signals from extraterrestrial civilizations. (The Cepheid Galactic Internet, Learned, Kudritzki, Pakvasa, and Zee, 2008, arXiv: 0809.0339; Walkowicz et al., in progress).
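The dynamic programming at the heart of Bayesian Blocks can be sketched for evenly sampled point measurements; the least-squares block fitness and fixed per-block penalty below are illustrative stand-ins for the fitness metrics and prior of Scargle et al. (2013):

```python
import numpy as np

def bayesian_blocks_steps(x, sigma, penalty):
    """Optimal step-function fit: an O(n^2) dynamic programming sweep over the
    position of the last change point, with a fixed penalty per block."""
    n = len(x)
    best = np.empty(n)                 # best[r] = max total fitness of x[0..r]
    last = np.empty(n, dtype=int)      # last[r] = start of the final block
    for r in range(n):
        fit = np.empty(r + 1)
        for l in range(r + 1):         # fitness of a final block x[l..r]
            seg = x[l:r + 1]
            fit[l] = -np.sum((seg - seg.mean()) ** 2) / (2.0 * sigma**2) - penalty
        total = fit + np.concatenate(([0.0], best[:r]))
        last[r] = int(np.argmax(total))
        best[r] = total[last[r]]
    cps, i = [], n - 1                 # backtrack the change points
    while True:
        cps.append(int(last[i]))
        if last[i] == 0:
            break
        i = last[i] - 1
    return sorted(cps)

steps = bayesian_blocks_steps(np.array([0.0] * 10 + [5.0] * 10),
                              sigma=1.0, penalty=4.0)
```

On noiseless step data the sweep recovers exactly the block start indices; the penalty plays the role of the prior on the number of blocks.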
a R-Shiny Based Phenology Analysis System and Case Study Using Digital Camera Dataset
NASA Astrophysics Data System (ADS)
Zhou, Y. K.
2018-05-01
Accurate extraction of vegetation phenology information plays an important role in exploring the effects of climate change on vegetation. Repeated photos from digital cameras are a useful and huge data source for phenological analysis. Data processing and mining of phenological data is still a big challenge, and there is no single tool or universal solution for big data processing and visualization in the field of phenology extraction. In this paper, we propose an R-Shiny based web application for vegetation phenological parameter extraction and analysis. Its main functions include phenological site distribution visualization, ROI (Region of Interest) selection, vegetation index calculation and visualization, data filtering, growth trajectory fitting, phenology parameter extraction, etc. The long-term observation photography data from the Freemanwood site in 2013 is processed by this system as an example. The results show that: (1) this system is capable of analyzing large data sets using a distributed framework; (2) the combination of multiple parameter extraction and growth curve fitting methods can effectively extract the key phenology parameters, although there are discrepancies between different combinations of methods in particular study areas. Vegetation with a single growth peak is suitable for fitting the growth trajectory with the double logistic model, while vegetation with multiple growth peaks is better fitted with the spline method.
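Growth-trajectory fitting with the double logistic model mentioned above can be sketched as follows (synthetic greenness data; the parameter names are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(doy, base, amp, k1, s1, k2, s2):
    """Greenness trajectory: spring rise centered at s1, autumn fall at s2."""
    return base + amp * (1.0 / (1.0 + np.exp(-k1 * (doy - s1)))
                         - 1.0 / (1.0 + np.exp(-k2 * (doy - s2))))

doy = np.arange(1.0, 366.0, 3.0)                          # day of year
gcc = double_logistic(doy, 0.32, 0.12, 0.10, 120.0, 0.08, 280.0)

p0 = (0.30, 0.10, 0.05, 100.0, 0.05, 260.0)
popt, _ = curve_fit(double_logistic, doy, gcc, p0=p0, maxfev=10000)
sos, eos = popt[3], popt[5]      # start and end of season (day of year)
```

The fitted inflection days sos and eos are the kind of key phenology parameters the system extracts from each camera time series.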
Function approximation and documentation of sampling data using artificial neural networks.
Zhang, Wenjun; Barrion, Albert
2006-11-01
Biodiversity studies in ecology often begin with the fitting and documentation of sampling data. This study was conducted to perform function approximation on sampling data and to document the sampling information using artificial neural network algorithms, based on invertebrate data sampled in an irrigated rice field. Three types of sampling data, i.e., the curve of species richness vs. sample size, the rarefaction curve, and the curve of mean abundance of newly sampled species vs. sample size, are fitted and documented using a BP (backpropagation) network and an RBF (radial basis function) network. For comparison, the Arrhenius model, the rarefaction model, and the power function are tested for their ability to fit these data. The results show that the BP network and RBF network fit the data better than these models, with smaller errors. The BP network and RBF network can fit non-linear functions (sampling data) to a specified accuracy and do not require mathematical assumptions. In addition to interpolation, the BP network can be used to extrapolate the functions, and the asymptote of the sampling data can be drawn. The BP network takes longer to train and its results are less stable compared to the RBF network; the RBF network requires more neurons to fit functions and generally should not be used to extrapolate the functions. The mathematical function for sampling data can be exactly fitted using artificial neural network algorithms by adjusting the desired accuracy and maximum iterations. The total number of functional species of invertebrates in the tropical irrigated rice field is extrapolated as 140 to 149 using the trained BP network, which is similar to the observed richness.
Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.
2016-01-01
Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
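Temporal binning and lifetime fixing, the two techniques evaluated above, can be sketched on a synthetic two-component decay (an idealized decay without an instrument response; the bin counts follow the abstract's 256-to-42 reduction):

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, tau2):
    """Two-component decay with normalized amplitudes (a2 = 1 - a1)."""
    return a1 * np.exp(-t / tau1) + (1.0 - a1) * np.exp(-t / tau2)

t256 = np.linspace(0.0, 10.0, 256)            # ns, 256 time bins
decay = biexp(t256, 0.6, 0.4, 2.5)            # synthetic NADH-like decay

# Temporal binning: average groups of ~6 bins down to 42 bins
groups = np.array_split(np.arange(256), 42)
t42 = np.array([t256[g].mean() for g in groups])
d42 = np.array([decay[g].mean() for g in groups])

# Fixing the known lifetimes leaves a single free parameter (the fraction a1)
fixed = lambda t, a1: biexp(t, a1, 0.4, 2.5)
popt, _ = curve_fit(fixed, t42, d42, p0=(0.5,))
```

Fewer bins and fewer free parameters both shrink the fit problem, which is the mechanism behind the reported reduction in required photons per decay.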
Coral-Ghanem, Cleusa; Alves, Milton Ruiz
2008-01-01
To evaluate the clinical performance of Monocurve and Bicurve (Soper-McGuire design) rigid gas-permeable contact lens fitting in patients with keratoconus. A prospective and randomized comparative clinical trial was conducted with a minimum follow-up of six months in two groups of 63 patients. One group was fitted with Monocurve contact lenses and the other with the Bicurve Soper-McGuire design. Study variables included the fluorescein pattern of the lens-to-cornea fitting relationship, location and morphology of the cone, presence and degree of punctate keratitis and other corneal surface alterations, topographic changes, distance visual acuity corrected with contact lenses, and survival analysis for remaining with the same contact lens design during the study. During the follow-up there was a decrease in the number of eyes with advanced and central cones fitted with Monocurve lenses, and an increase in those fitted with the Soper-McGuire design. In the Monocurve group, a flattening of both the steepest and the flattest keratometric curve was observed. In the Soper-McGuire group, a steepening of the flattest keratometric curve and a flattening of the steepest keratometric curve were observed. There was a decrease in best-corrected visual acuity with contact lens in the Monocurve group. Survival with the same lens at a mean follow-up of six months was 60.32% for the Monocurve design and 71.43% for the Soper-McGuire design. This study showed that, due to the changes observed in corneal topography, the same contact lens design did not provide an ideal fitting for all patients during the follow-up period. The Soper-McGuire lenses performed better than the Monocurve lenses in advanced and central keratoconus.
An hourglass model for the flare of HST-1 in M87
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wen-Po; Zhao, Guang-Yao; Chen, Yong Jun
To explain the multi-wavelength light curves (from radio to X-ray) of HST-1 in the M87 jet, we propose an hourglass model that is a modified two-zone system of Tavecchio and Ghisellini (hereafter TG08): a slow hourglass-shaped or Laval-nozzle-shaped layer connected by two revolving exponential surfaces surrounding a fast spine through which plasma blobs flow. Based on the conservation of magnetic flux, the magnetic field changes along the axis of the hourglass. We adopt the result of TG08 that the high-energy emission from GeV to TeV can be produced through inverse Compton scattering by the two-zone system, and that the photons from radio to X-ray are mainly radiated by the fast inner zone system. Here, we only discuss the light curves of the fast inner blob from radio to X-ray. When a compressible blob travels down the axis of the first bulb in the hourglass, because of magnetic flux conservation, its cross section experiences an adiabatic compression process, which results in particle acceleration and the brightening of HST-1. When the blob moves into the second bulb of the hourglass, because of magnetic flux conservation, the dimming of the knot occurs along with an adiabatic expansion of its cross section. A similar broken exponential function could fit the TeV peaks in M87, which may imply a correlation between the TeV flares of M87 and the light curves from radio to X-ray in HST-1. The Very Large Array (VLA) 22 GHz radio light curve of HST-1 verifies our prediction based on the model fit to the main peak of the VLA 15 GHz light curve.
Design data for radars based on 13.9 GHz Skylab scattering coefficient measurements
NASA Technical Reports Server (NTRS)
Moore, R. K. (Principal Investigator)
1974-01-01
The author has identified the following significant results. Measurements made at 13.9 GHz with the radar scatterometer on Skylab have been combined to produce median curves of the variation of scattering coefficient with angle of incidence out to 45 deg. Because of the large number of observations and the large area averaged for each measured data point, these curves may be used as a new design base for radars. A reasonably good fit at larger angles is obtained using the theoretical expression based on an exponential height correlation function and also using Lambert's law. For angles under 10 deg, a different fit based on the exponential correlation function and a fit based on geometric optics expressions are both reasonably valid.
Foveal Curvature and Asymmetry Assessed Using Optical Coherence Tomography.
VanNasdale, Dean A; Eilerman, Amanda; Zimmerman, Aaron; Lai, Nicky; Ramsey, Keith; Sinnott, Loraine T
2017-06-01
The aims of this study were to use cross-sectional optical coherence tomography imaging and custom curve fitting software to evaluate and model the foveal curvature as a spherical surface and to compare the radius of curvature in the horizontal and vertical meridians and test the sensitivity of this technique to anticipated meridional differences. Six 30-degree foveal-centered radial optical coherence tomography cross-section scans were acquired in the right eye of 20 clinically normal subjects. Cross sections were manually segmented, and custom curve fitting software was used to determine foveal pit radius of curvature using the central 500, 1000, and 1500 μm of the foveal contour. Radius of curvature was compared across different fitting distances. Root mean square error was used to determine goodness of fit. The radius of curvature was compared between the horizontal and vertical meridians for each fitting distance. The radius of curvature was significantly different when comparing each of the three fitting distances (P < .01 for each comparison). The average radii of curvature were 970 μm (95% confidence interval [CI], 913 to 1028 μm), 1386 μm (95% CI, 1339 to 1439 μm), and 2121 μm (95% CI, 2066 to 2183 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. Root mean square error was also significantly different when comparing each fitting distance (P < .01 for each comparison). The average root mean square errors were 2.48 μm (95% CI, 2.41 to 2.53 μm), 6.22 μm (95% CI, 5.77 to 6.60 μm), and 13.82 μm (95% CI, 12.93 to 14.58 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. The radius of curvature between the horizontal and vertical meridians was statistically different only in the 1000- and 1500-μm fitting distances (P < .01 for each), with the horizontal meridian being flatter than the vertical.
The foveal contour can be modeled as a sphere with low curve fitting error over a limited distance and capable of detecting subtle foveal contour differences between meridians.
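Modeling a contour as a sphere of best-fit radius reduces, in a single cross section, to a least-squares circle fit. A sketch using the algebraic (Kåsa) formulation on a synthetic 1000-μm arc; the function and data are illustrative, not the study's custom software:

```python
import numpy as np

def fit_circle_radius(x, y):
    """Algebraic (Kasa) least-squares circle fit; returns radius of curvature.

    Solves x^2 + y^2 = 2*a*x + 2*b*y + c for center (a, b) and constant c,
    then R = sqrt(a^2 + b^2 + c).
    """
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.sqrt(a**2 + b**2 + c)

# Synthetic foveal cross-section: a circular arc of radius 1000 um
R_true = 1000.0
theta = np.linspace(-0.25, 0.25, 50)       # central ~500 um of the contour
x = R_true * np.sin(theta)
y = R_true * (1 - np.cos(theta))
print(round(fit_circle_radius(x, y)))      # 1000
```

On noiseless data the fit is exact; with real segmentation noise, the residuals of such a fit play the role of the root mean square error reported above.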
Weiss, Michael
2017-06-01
Appropriate model selection is important in fitting oral concentration-time data due to the complex character of the absorption process. When IV reference data are available, the problem is the selection of an empirical input function (absorption model). In the present examples a weighted sum of inverse Gaussian density functions (IG) was found most useful. It is shown that alternative models (gamma and Weibull density) are only valid if the input function is log-concave. Furthermore, it is demonstrated for the first time that the sum-of-IGs model can also be applied to fit oral data directly (without IV data). In the present examples, a weighted sum of two or three IGs was sufficient. From the parameters of this function, the model-independent measures AUC and mean residence time can be calculated. It turned out that a good fit of the data in the terminal phase is essential to avoid biased parameter estimates. The time course of the fractional elimination rate and the concept of log-concavity have proved to be useful tools in model selection.
Recio-Spinoso, Alberto; Fan, Yun-Hui; Ruggero, Mario A
2011-05-01
Basilar-membrane responses to white Gaussian noise were recorded using laser velocimetry at basal sites of the chinchilla cochlea with characteristic frequencies near 10 kHz and first-order Wiener kernels were computed by cross correlation of the stimuli and the responses. The presence or absence of minimum-phase behavior was explored by fitting the kernels with discrete linear filters with rational transfer functions. Excellent fits to the kernels were obtained with filters with transfer functions including zeroes located outside the unit circle, implying nonminimum-phase behavior. These filters accurately predicted basilar-membrane responses to other noise stimuli presented at the same level as the stimulus for the kernel computation. Fits with all-pole and other minimum-phase discrete filters were inferior to fits with nonminimum-phase filters. Minimum-phase functions predicted from the amplitude functions of the Wiener kernels by Hilbert transforms were different from the measured phase curves. These results, which suggest that basilar-membrane responses do not have the minimum-phase property, challenge the validity of models of cochlear processing, which incorporate minimum-phase behavior. © 2011 IEEE
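The minimum-phase test reduces to locating the roots of the fitted rational transfer function: a discrete filter is minimum-phase only if all zeros (and poles) lie strictly inside the unit circle. A sketch with hypothetical filter coefficients, not the paper's fitted kernels:

```python
import numpy as np

# Hypothetical numerator/denominator of a fitted discrete filter H(z) = B(z)/A(z)
b = [1.0, -2.5, 1.0]     # zeros at z = 2 and z = 0.5 (one outside the unit circle)
a = [1.0, -0.9]          # pole at z = 0.9 (stable)

zeros = np.roots(b)
poles = np.roots(a)
min_phase = np.all(np.abs(zeros) < 1) and np.all(np.abs(poles) < 1)
print(sorted(np.abs(zeros)))   # magnitudes 0.5 and 2.0
print(bool(min_phase))         # False -> nonminimum-phase, as found for the kernels
```

A zero outside the unit circle adds excess phase lag that no minimum-phase function with the same amplitude response can reproduce, which is why the Hilbert-transform predictions in the study diverged from the measured phase curves.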
Adsorption and desorption dynamics of hexavalent chromium Cr(VI) transport in a soil column
NASA Astrophysics Data System (ADS)
Tong, J.
2017-12-01
Batch experiments have been carried out to study the adsorption of heavy metals in soils, and the migration and transformation of hexavalent chromium Cr(VI) in the soil of a vegetable base were studied by dynamic adsorption and desorption soil column experiments. The aim of this study was to investigate the effect of initial concentration and pH value on the adsorption process of Cr(VI). Breakthrough curves were used to evaluate the capacity of Cr(VI) adsorption in soil columns. The results show that the higher the initial concentration, the worse the adsorption capacity for Cr(VI). The adsorption of Cr(VI) was strongly sensitive to pH value. The capacity of Cr(VI) adsorption is maximized at very low pH values. This may be due to changes in pH that cause a series of complex reactions of Cr(VI). In a strongly acidic environment, the reaction of Cr(VI) with hydrogen ions is accompanied by the formation of Cr3+, which reacts with the soil free iron-aluminum oxide to produce hydroxide in the soil. The results of the desorption experiments indicate that Cr(VI) is more likely to leach from this soil, but if the eluent is a strong acid solution, the leaching process will be slow and persistent. The program CXTFIT was used to fit the breakthrough curves and estimate parameters, including the dispersion coefficient (D). The two-site model fit the breakthrough curve data of Cr(VI) well, and the parameters calculated by CXTFIT can be used to explain the behavior of Cr(VI) migration and transformation in soil columns. When pH = 2, the retardation factor (R) reached 79.71, while R was generally around 10 in the other experiments. The partitioning coefficient β shows that more than half of the adsorption sites are rate-limited in this adsorption process, and that non-equilibrium affects Cr(VI) transport in this soil.
Fitting C2 Continuous Parametric Surfaces to Frontiers Delimiting Physiologic Structures
Bayer, Jason D.
2014-01-01
We present a technique to fit C2 continuous parametric surfaces to scattered geometric data points forming frontiers delimiting physiologic structures in segmented images. Such a mathematical representation is interesting because it facilitates a large number of operations in modeling. While the fitting of C2 continuous parametric curves to scattered geometric data points is quite trivial, the fitting of C2 continuous parametric surfaces is not. The difficulty comes from the fact that each scattered data point should be assigned a unique parametric coordinate, and the fit is quite sensitive to their distribution on the parametric plane. We present a new approach where a polygonal (quadrilateral or triangular) surface is extracted from the segmented image. This surface is subsequently projected onto a parametric plane in a manner that ensures a one-to-one mapping. The resulting polygonal mesh is then regularized for area and edge length. Finally, from this point, surface fitting is relatively trivial. The novelty of our approach lies in the regularization of the polygonal mesh. Process performance is assessed with the reconstruction of a geometric model of mouse heart ventricles from a computerized tomography scan. Our results show an excellent reproduction of the geometric data with surfaces that are C2 continuous. PMID:24782911
NASA Astrophysics Data System (ADS)
Askarimarnani, Sara; Willgoose, Garry; Fityus, Stephen
2017-04-01
Coal seam gas (CSG) is a form of natural gas that occurs in some coal seams. Coal seams have natural fractures with dual-porosity systems and low permeability. In the CSG industry, hydraulic fracturing is applied to increase the permeability and extract the gas more efficiently from the coal seam. The industry claims that it can design fracking patterns. Whether this is true or not, the public (and regulators) requires assurance that, once a well has been fracked, the fracking has occurred according to plan and the fracked well is safe. Thus defensible post-fracking testing methodologies for gas-generating wells are required. In 2009 a fracked well, HB02, owned by AGL, near Broke, NSW, Australia, was subjected to "traditional" water pump-testing as part of this assurance process. Interpretation with well type curves and a simple single-phase model (i.e., only water, no gas) highlighted deficiencies in traditional water-well approaches, with a systematic deviation from the qualitative characteristics of well drawdown curves (e.g., concavity versus convexity of drawdown with time). Accordingly a multiphase (i.e., water and methane) model of the well was developed and compared with the observed data. This paper will discuss the results of this multiphase testing using the TOUGH2 model and its EOS7C constitutive model. A key objective was to test a methodology, based on the GLUE Monte Carlo calibration technique, to calibrate the characteristics of the frack using the well test drawdown curve. GLUE involves a sensitivity analysis of how changes in the fracture properties change the well hydraulics, through an analysis of the drawdown curve and changes in the cone of depression. This was undertaken by varying the native coal, fracture, and gas parameters to see how those parameters changed the match between simulations and the observed well drawdown.
Results from the GLUE analysis show how much information is contained in the well drawdown curve for estimating field-scale coal and gas generation properties, the fracture geometry, and the proppant characteristics. The results with the multiphase model show a better match to the drawdown than using a single-phase model, but the differences between the best-fit drawdowns were small, and smaller than the difference between the best fit and the field data. However, the parameters derived to generate these best fits for each model were very different. We conclude that, while satisfactory fits with single-phase groundwater models (e.g. MODFLOW, FEFLOW) can be achieved, the parameters derived will not be realistic, with potential implications for drawdowns and water yields in gas field modelling. Multiphase models are thus required, and we will discuss some of the limitations of TOUGH2 for the CSG problem.
Promoting convergence: The Phi spiral in abduction of mouse corneal behaviors
Rhee, Jerry; Nejad, Talisa Mohammad; Comets, Olivier; Flannery, Sean; Gulsoy, Eine Begum; Iannaccone, Philip; Foster, Craig
2015-01-01
Why do mouse corneal epithelial cells display spiraling patterns? We want to provide an explanation for this curious phenomenon by applying an idealized problem solving process. Specifically, we applied complementary line-fitting methods to measure transgenic epithelial reporter expression arrangements displayed on three mature, live enucleated globes to clarify the problem. Two prominent logarithmic curves were discovered, one of which displayed the ϕ ratio, an indicator of an optimal configuration in phyllotactic systems. We then utilized two different computational approaches to expose our current understanding of the behavior. In one procedure, which involved an isotropic mechanics-based finite element method, we successfully produced logarithmic spiral curves of maximum-shear-strain-based pathlines, but the computed dimensions displayed pitch angles of 35° (the ϕ spiral is ∼17°), which changed when we fitted the model with published measurements of coarse collagen orientations. We then used model-based reasoning in the context of Peircean abduction to select a working hypothesis. Our work serves as a concise example of applying a scientific habit of mind and illustrates the nuances of executing a common method of doing integrative science. © 2014 Wiley Periodicals, Inc. Complexity 20: 22–38, 2015 PMID:25755620
Howard, Marc W.; Bessette-Symons, Brandy; Zhang, Yaofei; Hoyer, William J.
2006-01-01
Younger and older adults were tested on recognition memory for pictures. The Yonelinas high threshold (YHT) model, a formal implementation of two-process theory, fit the response distribution data of both younger and older adults significantly better than a normal unequal variance signal detection model. Consistent with this finding, non-linear zROC curves were obtained for both groups. Estimates of recollection from the YHT model were significantly higher for younger than older adults. This deficit was not a consequence of a general decline in memory; older adults showed comparable overall accuracy and in fact a non-significant increase in their familiarity scores. Implications of these results for theories of recognition memory and the mnemonic deficit associated with aging are discussed. PMID:16594795
Comparison of serum from gastric cancer patients and from healthy persons using FTIR spectroscopy
NASA Astrophysics Data System (ADS)
Sheng, Daping; Wu, Yican; Wang, Xin; Huang, Dake; Chen, Xianliang; Liu, Xingcun
2013-12-01
Since serum can reflect physiological and pathological conditions in humans, FTIR spectroscopy was used in this study to compare the serum of gastric cancer patients with that of healthy persons. The H2959/H2931, H1646/H1550, H1314/H1243, H1453/H1400 and H1080/H1550 ratios were calculated; among these, the H2959/H2931 ratio might serve as a standard for distinguishing gastric cancer patients from healthy persons. Curve fitting was then performed using Gaussian curves in the 1140-1000 cm-1 region, and the result showed that the RNA/DNA ratios of gastric cancer patients' serum were obviously lower than those of healthy persons' serum. The results suggest that FTIR spectroscopy may be a potentially useful tool for the diagnosis of gastric cancer.
Raster and vector processing for scanned linework
Greenlee, David D.
1987-01-01
An investigation of raster editing techniques, including thinning, filling, and node detecting, was performed by using specialized software. The techniques were based on encoding the state of the 3-by-3 neighborhood surrounding each pixel into a single byte. A prototypical method for converting the edited raster linework into vectors was also developed. Once vector representations of the lines were formed, they were formatted as a Digital Line Graph, and further refined by deletion of nonessential vertices and by smoothing with a curve-fitting technique.
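The neighborhood-encoding idea can be sketched as follows; the bit ordering is an arbitrary illustrative choice (the report does not specify one), and only interior pixels are handled:

```python
import numpy as np

def neighborhood_byte(img, r, c):
    """Pack the 8 neighbors of interior pixel (r, c) into one byte.

    Bit order (an arbitrary choice for illustration): clockwise from
    the top-left neighbor.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr, c + dc]:
            code |= 1 << bit
    return code

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 1:4] = 1                        # a short horizontal line segment
print(neighborhood_byte(img, 2, 2))    # 136: left (bit 7) and right (bit 3) set
```

With the neighborhood state packed into a byte, thinning and node-detection rules become simple 256-entry lookup tables, which is what makes this encoding attractive for raster editing.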
Universal approach to analysis of cavitation and liquid-impingement erosion data
NASA Technical Reports Server (NTRS)
Rao, P. V.; Young, S. G.
1982-01-01
Cavitation erosion experimental data were analyzed by using normalization and curve-fitting techniques. Data were taken from experiments on several materials tested in both a rotating disk device and a magnetostriction apparatus. Cumulative average volume loss rate and time data were normalized relative to the peak erosion rate and the time to peak erosion rate, respectively. From this process a universal approach was derived that can include data on specific materials from different test devices for liquid impingement and cavitation erosion studies.
Li, Dong-Sheng; Xu, Hui-Mian; Han, Chun-Qi; Li, Ya-Ming
2010-01-01
AIM: To determine the effect of three digestive tract reconstruction procedures on pouch function, after radical surgery undertaken because of gastric cancer, as assessed by radionuclide dynamic imaging. METHODS: As a measure of the reservoir function, with a designed diet containing technetium-99m (99mTc), the emptying time of the gastric substitute was evaluated using a 99mTc-labeled solid test meal. Immediately after the meal, the patient was placed in front of a γ camera in a supine position and the radioactivity was measured over the whole abdomen every minute. A frame image was obtained. The emptying sequences were recorded by the microprocessor and then stored on a computer disk. According to a computer processing system, the half-emptying actual curve and the fitting curve of food containing isotope in the detected region were depicted, and the half-emptying actual curves of the three reconstruction procedures were directly compared. RESULTS: Of the three reconstruction procedures, the half-emptying time of food containing isotope in the Dual Braun type esophagojejunal anastomosis procedure (51.86 ± 6.43 min) was far closer to normal, significantly better than that of the proximal gastrectomy orthotopic reconstruction (30.07 ± 15.77 min, P = 0.002) and P type esophagojejunal anastomosis (27.88 ± 6.07 min, P = 0.001) methods. The half-emptying actual curve and fitting curves for the Dual Braun type esophagojejunal anastomosis were fairly similar while those of the proximal gastrectomy orthotopic reconstruction and P type esophagojejunal anastomosis were obviously separated, which indicated bad food conservation in the reconstructed pouches. CONCLUSION: Dual Braun type esophagojejunal anastomosis is the most useful of the three procedures for improving food accommodation in patients with a pouch and can retard evacuation of solid food from the reconstructed pouch. PMID:20238408
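A hedged sketch of extracting a half-emptying time from region-of-interest counts, assuming a simple mono-exponential retention model (the clinical software's actual fitting function is not stated in the abstract):

```python
import numpy as np

# Half-emptying time from a fitted exponential retention curve R(t) = exp(-t/tau).
# The counts below are simulated; real gamma-camera data would be noisy and the
# fitted curve would be compared against the actual emptying curve, as in the study.
t = np.arange(0, 120, 1.0)                     # minutes after the test meal
counts = 1000 * np.exp(-t / 75.0)              # simulated region counts

# Log-linear least-squares fit recovers the time constant tau
slope = np.polyfit(t, np.log(counts / counts[0]), 1)[0]
tau = -1.0 / slope
t_half = tau * np.log(2)                       # half-emptying time
print(round(t_half, 1))                        # 52.0 min
```

The fitted half-emptying time can then be compared directly with the measured half-emptying actual curve, which is the agreement check the study uses to judge food conservation in each reconstructed pouch.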
Comparative testing of dark matter models with 15 HSB and 15 LSB galaxies
NASA Astrophysics Data System (ADS)
Kun, E.; Keresztes, Z.; Simkó, A.; Szűcs, G.; Gergely, L. Á.
2017-12-01
Context. We assemble a database of 15 high surface brightness (HSB) and 15 low surface brightness (LSB) galaxies, for which surface brightness density and spectroscopic rotation curve data are both available and representative for various morphologies. We use this dataset to test the Navarro-Frenk-White, the Einasto, and the pseudo-isothermal sphere dark matter models. Aims: We investigate the compatibility of the pure baryonic model and baryonic plus one of the three dark matter models with observations on the assembled galaxy database. When a dark matter component improves the fit with the spectroscopic rotational curve, we rank the models according to the goodness of fit to the datasets. Methods: We constructed the spatial luminosity density of the baryonic component based on the surface brightness profile of the galaxies. We estimated the mass-to-light (M/L) ratio of the stellar component through a previously proposed color-mass-to-light ratio relation (CMLR), which yields stellar masses independent of the photometric band. We assumed an axisymmetric baryonic mass model with variable axis ratios together with one of the three dark matter models to provide the theoretical rotational velocity curves, and we compared them with the dataset. In a second attempt, we addressed the question whether the dark component could be replaced by a pure baryonic model with fitted M/L ratios, varied over ranges consistent with CMLR relations derived from the available stellar population models. We employed the Akaike information criterion to establish the performance of the best-fit models. Results: For 7 galaxies (2 HSB and 5 LSB), neither model fits the dataset within the 1σ confidence level. For the other 23 cases, one of the models with dark matter explains the rotation curve data best.
According to the Akaike information criterion, the pseudo-isothermal sphere emerges as most favored in 14 cases, followed by the Navarro-Frenk-White (6 cases) and the Einasto (3 cases) dark matter models. We find that the pure baryonic model with fitted M/L ratios falls within the 1σ confidence level for 10 HSB and 2 LSB galaxies, at the price of growing the M/Ls on average by a factor of two, but the fits are inferior compared to the best-fitting dark matter model.
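The model ranking step can be sketched with the Gaussian least-squares form of the Akaike information criterion; the residual sums of squares and parameter counts below are illustrative numbers, not the paper's values:

```python
import numpy as np

def aic(rss, n, k):
    """Akaike information criterion for a least-squares fit:
    AIC = n*ln(RSS/n) + 2k (Gaussian-error form, up to an additive constant)."""
    return n * np.log(rss / n) + 2 * k

# Hypothetical rotation-curve fits: (residual sum of squares, free parameters)
# for three dark matter models; lower AIC wins.
n = 40                                     # number of rotation-curve points
models = {"pISO": (12.0, 2), "NFW": (12.5, 2), "Einasto": (11.8, 3)}
scores = {m: aic(rss, n, k) for m, (rss, k) in models.items()}
best = min(scores, key=scores.get)
print(best)                                # pISO: best fit per parameter spent
```

The 2k term is what penalizes the extra shape parameter of the Einasto profile: a slightly smaller residual does not win if it costs an additional free parameter.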
Mapping conduction velocity of early embryonic hearts with a robust fitting algorithm
Gu, Shi; Wang, Yves T; Ma, Pei; Werdich, Andreas A; Rollins, Andrew M; Jenkins, Michael W
2015-01-01
Cardiac conduction maturation is an important and integral component of heart development. Optical mapping with voltage-sensitive dyes allows sensitive measurements of electrophysiological signals over the entire heart. However, accurate measurements of conduction velocity during early cardiac development is typically hindered by low signal-to-noise ratio (SNR) measurements of action potentials. Here, we present a novel image processing approach based on least squares optimizations, which enables high-resolution, low-noise conduction velocity mapping of smaller tubular hearts. First, the action potential trace measured at each pixel is fit to a curve consisting of two cumulative normal distribution functions. Then, the activation time at each pixel is determined based on the fit, and the spatial gradient of activation time is determined with a two-dimensional (2D) linear fit over a square-shaped window. The size of the window is adaptively enlarged until the gradients can be determined within a preset precision. Finally, the conduction velocity is calculated based on the activation time gradient, and further corrected for three-dimensional (3D) geometry that can be obtained by optical coherence tomography (OCT). We validated the approach using published activation potential traces based on computer simulations. We further validated the method by adding artificially generated noise to the signal to simulate various SNR conditions using a curved simulated image (digital phantom) that resembles a tubular heart. This method proved to be robust, even at very low SNR conditions (SNR = 2-5). We also established an empirical equation to estimate the maximum conduction velocity that can be accurately measured under different conditions (e.g. sampling rate, SNR, and pixel size). Finally, we demonstrated high-resolution conduction velocity maps of the quail embryonic heart at a looping stage of development. PMID:26114034
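The two-cumulative-normal model of the action potential can be sketched as follows. This is a hedged reading of the abstract's fitting step: a coarse grid search over the activation time stands in for the paper's least-squares optimizer, and the remaining parameters are held fixed to keep the sketch short.

```python
import numpy as np
from math import erf

def ncdf(t, mu, s):
    """Cumulative normal distribution function."""
    return 0.5 * (1.0 + erf((t - mu) / (s * np.sqrt(2))))

def ap_model(t, t_up, t_down, s_up=2.0, s_down=5.0):
    """Action potential modeled as a rising minus a falling cumulative normal."""
    return np.array([ncdf(ti, t_up, s_up) - ncdf(ti, t_down, s_down) for ti in t])

rng = np.random.default_rng(0)
t = np.linspace(0, 100, 400)
noisy = ap_model(t, 20.0, 70.0) + rng.normal(0, 0.05, t.size)   # low-SNR trace

# Brute-force least squares over the activation time only (illustrative).
grid = np.arange(10.0, 30.0, 0.1)
sse = [np.sum((noisy - ap_model(t, tu, 70.0))**2) for tu in grid]
t_up_hat = grid[int(np.argmin(sse))]
print(abs(t_up_hat - 20.0) < 1.0)   # True: activation time recovered despite noise
```

Fitting the whole smooth model, rather than thresholding the raw trace, is what makes the activation-time estimate robust at the SNR = 2-5 conditions reported above.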
Photometric Supernova Classification with Machine Learning
NASA Astrophysics Data System (ADS)
Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.
2016-08-01
Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
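The AUC metric used above is equivalent to the Mann-Whitney U statistic: the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A self-contained sketch:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability a random positive outranks a random negative
    (ties count one half)."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

print(auc([0.9, 0.8, 0.7], [0.6, 0.5, 0.4]))   # 1.0: perfect separation
print(auc([0.9, 0.4], [0.8, 0.3]))             # 0.75
```

An AUC of 0.98, as reported for the SALT2 and wavelet feature sets with boosted decision trees, therefore means a randomly chosen type Ia is ranked above a randomly chosen non-Ia 98% of the time.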
Multi-Filter Photometric Analysis of Three β Lyrae-type Eclipsing Binary Stars
NASA Astrophysics Data System (ADS)
Gardner, T.; Hahs, G.; Gokhale, V.
2015-12-01
We present light curve analysis of three variable stars, ASAS J105855+1722.2, NSVS 5066754, and NSVS 9091101. These objects are selected from a list of β Lyrae candidates published by Hoffman et al. (2008). Light curves are generated using data collected at the 31-inch NURO telescope at the Lowell Observatory in Flagstaff, Arizona in three filters: Bessell B, V, and R. Additional observations were made using the 14-inch Meade telescope at the Truman State Observatory in Kirksville, Missouri using Baader R, G, and B filters. In this paper, we present the light curves for these three objects and generate a truncated eight-term Fourier fit to these light curves. We use the Fourier coefficients from this fit to confirm ASAS J105855+1722.2 and NSVS 5066754 as β Lyrae type systems, and NSVS 9091101 as possibly an RR Lyrae-type system. We measure the O'Connell effect observed in two of these systems (ASAS J105855+1722.2 and NSVS 5066754), and quantify this effect by calculating the "Light Curve Asymmetry" (LCA) and the "O'Connell Effect Ratio" (OER).
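A truncated Fourier fit of the kind described is a linear least-squares problem. A sketch recovering known coefficients from a synthetic phased light curve (eight terms, as in the paper; the synthetic signal is illustrative):

```python
import numpy as np

def fourier_fit(phase, mag, n_terms=8):
    """Least-squares truncated Fourier fit
    m(phi) = a0 + sum_k [a_k cos(2 pi k phi) + b_k sin(2 pi k phi)].
    Returns (a, b) with a[0] the mean level."""
    cols = [np.ones_like(phase)]
    for k in range(1, n_terms + 1):
        cols += [np.cos(2 * np.pi * k * phase), np.sin(2 * np.pi * k * phase)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, mag, rcond=None)
    a = np.concatenate([[coeffs[0]], coeffs[1::2]])   # cosine terms
    b = coeffs[2::2]                                   # sine terms
    return a, b

phase = np.linspace(0, 1, 200, endpoint=False)
mag = 12.0 + 0.3 * np.cos(2 * np.pi * phase) + 0.05 * np.sin(4 * np.pi * phase)
a, b = fourier_fit(phase, mag)
print(round(a[0], 3), round(a[1], 3), round(b[1], 3))   # 12.0 0.3 0.05
```

The sine coefficients capture the phase asymmetry between the two maxima, which is why Fourier coefficients are a natural basis for quantifying the O'Connell effect.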
Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.
Chen, C W; Chen, D Z
2001-11-01
Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the prediction ability of the model. In this paper, for three-layer feedforward networks under a monotonicity constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulating the true boiling point curve of a crude oil under the condition of increasing monotonicity. The simulation experimental results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
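The exponential weight idea can be illustrated in miniature: parameterizing each weight as exp(v) keeps it positive, and a network with positive weights and monotone activations is monotonically non-decreasing in its input. A sketch under these assumptions (untrained, random parameters; the paper's training procedure is not reproduced here):

```python
import numpy as np

# One-hidden-layer network whose output is non-decreasing in its input,
# via the exponential weight reparameterization w = exp(v) >= 0.
rng = np.random.default_rng(1)
v1 = rng.normal(size=(1, 8))     # unconstrained parameters, layer 1
b1 = rng.normal(size=8)
v2 = rng.normal(size=(8, 1))     # unconstrained parameters, layer 2
b2 = rng.normal(size=1)

def net(x):
    h = np.tanh(x @ np.exp(v1) + b1)   # exp() makes every weight positive
    return h @ np.exp(v2) + b2         # positive weights + monotone tanh => monotone

x = np.linspace(-3, 3, 100).reshape(-1, 1)
y = net(x).ravel()
print(bool(np.all(np.diff(y) >= 0)))   # True: output is non-decreasing
```

Because the constraint is built into the parameterization, ordinary unconstrained gradient training of v1 and v2 can never violate the increasing-monotonicity prior, e.g. of a true boiling point curve.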
NASA Astrophysics Data System (ADS)
Zou, Yuan; Shen, Tianxing
2013-03-01
To provide a wider variety of photometric data for illumination calculations in architectural and luminous environment design, this paper combines luminous environment design with the SM light environment measuring system, a set of experimental devices comprising light-information collection and processing modules that can supply various types of photometric data. During the research we introduced a simulation method for calibration, which mainly involves rebuilding the experimental scenes in 3ds Max Design, calibrating this computer-aided design software in a simulated environment under various typical light sources, and fitting the exposure curves of the rendered images. From this analysis, the operating sequence and points of attention for the simulated calibration were established, as were the connections between the Mental Ray renderer and the SM light environment measuring system. The paper thus offers a useful reference for coordinating luminous environment design with the SM light environment measuring system.
Gao, Bo-Cai; Liu, Ming
2013-01-01
Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which captures the minor artifacts present in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain spectrally smoothed surface reflectance spectra. Results from analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented. PMID:24129022
Toward Continuous GPS Carrier-Phase Time Transfer: Eliminating the Time Discontinuity at an Anomaly
Yao, Jian; Levine, Judah; Weiss, Marc
2015-01-01
The wide application of Global Positioning System (GPS) carrier-phase (CP) time transfer is limited by the problem of boundary discontinuity (BD). The discontinuity has two categories. One is the “day boundary discontinuity,” which has been studied extensively and can be solved by multiple methods [1–8]. The other category, called the “anomaly boundary discontinuity (anomaly-BD),” comes from a GPS data anomaly. The anomaly can be a data gap (i.e., missing data), a GPS measurement error (i.e., bad data), or a cycle slip. An initial study of the anomaly-BD showed that the discontinuity can be fixed if the anomaly lasts no more than 20 min, using a polynomial curve-fitting strategy to repair the anomaly [9]. However, the data anomaly sometimes lasts longer than 20 min, so a better curve-fitting strategy is needed. In addition, a cycle slip, as another type of data anomaly, can occur and lead to an anomaly-BD. To solve these problems, this paper proposes a new strategy: satellite-clock-aided curve fitting with cycle slip detection. Basically, the new strategy applies the satellite clock correction to the GPS data and then performs the polynomial curve fitting for the code and phase data, as before. Our study shows that the phase-data residual is only ~3 mm for all GPS satellites. The new strategy also detects cycle slips and determines their number by searching for the minimum curve-fitting residual. Extensive examples show that this new strategy enables us to repair up to a 40-min GPS data anomaly, regardless of whether the anomaly is due to a data gap, a cycle slip, or a combination of the two. We also find that interference with the GPS signal, known as “jamming,” can lead to a time-transfer error, and that the new strategy can compensate for jamming outages, eliminating the impact of jamming on time transfer. As a whole, we greatly improve the robustness of GPS CP time transfer. PMID:26958451
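The residual-search idea for cycle slips can be sketched as follows. This is a minimal illustration, not the authors' satellite-clock-aided code; the synthetic data, polynomial degree, and slip search range are all assumptions:

```python
import numpy as np

def detect_cycle_slip(t, phase, gap_idx, max_slip=10, deg=2):
    """Find the integer cycle slip at gap_idx that minimizes the
    polynomial curve-fitting residual.

    Tries each candidate slip n, removes n cycles from the data after the
    gap, fits a polynomial to the corrected series, and keeps the n with
    the smallest RMS residual (the search strategy sketched in the abstract).
    """
    best_n, best_rms = 0, np.inf
    for n in range(-max_slip, max_slip + 1):
        corrected = phase.copy()
        corrected[gap_idx:] -= n           # undo a hypothetical slip of n cycles
        coef = np.polyfit(t, corrected, deg)
        rms = np.sqrt(np.mean((corrected - np.polyval(coef, t)) ** 2))
        if rms < best_rms:
            best_n, best_rms = n, rms
    return best_n, best_rms

# Synthetic smooth phase series with a 3-cycle slip injected halfway.
t = np.linspace(0.0, 1.0, 200)
true_phase = 0.5 * t**2 + 2.0 * t
observed = true_phase.copy()
observed[100:] += 3                        # the injected cycle slip
n_hat, _ = detect_cycle_slip(t, observed, gap_idx=100)
# n_hat recovers the injected slip
```

The quadratic here stands in for the smooth phase trend; in practice the fit would be applied to the satellite-clock-corrected code and phase data.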
NASA Astrophysics Data System (ADS)
Cicala, G.; Cristaldi, G.; Recca, G.; Ziegmann, G.; ElSabbagh, A.; Dickert, M.
2008-08-01
The aim of the present research was to investigate the replacement of glass fibers with hemp fibers for applications in the piping industry. The choice of hemp fibers was mainly related to the needs, expressed by some companies operating in this sector, for cost reduction without adversely reducing the performance of the pipes. Two processing techniques, namely hand lay-up and light RTM, were evaluated. The pipe selected for the study was a curved fitting (90°) flanged at both ends. The fitting must withstand an internal pressure of 10 bar and the presence of acid aqueous solutions. The original lay-up used to build the pipe is a sequence of C-glass, glass mats and glass fabric. Commercial epoxy vinyl ester resin was used as the thermoset matrix. Hemp fiber mats were selected as a potential substitute for glass fiber mats because of their low cost and ready availability from different commercial sources. The data obtained from the mechanical characterization were used to define a favorable design of the pipe using hemp mats as the internal layer. The proposed design for the fittings allowed a cost reduction of about 24% and a weight saving of about 23% without any drawback in terms of final performance. The light RTM technique was developed specifically for the manufacturing of the curved pipe. The comparison between hand lay-up and light RTM showed a substantial cost reduction when light RTM was used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cicala, G.; Cristaldi, G.; Recca, G.
2008-08-28
The aim of the present research was to investigate the replacement of glass fibers with hemp fibers for applications in the piping industry. The choice of hemp fibers was mainly related to the needs, expressed by some companies operating in this sector, for cost reduction without adversely reducing the performance of the pipes. Two processing techniques, namely hand lay-up and light RTM, were evaluated. The pipe selected for the study was a curved fitting (90 deg.) flanged at both ends. The fitting must withstand an internal pressure of 10 bar and the presence of acid aqueous solutions. The original lay-up used to build the pipe is a sequence of C-glass, glass mats and glass fabric. Commercial epoxy vinyl ester resin was used as the thermoset matrix. Hemp fiber mats were selected as a potential substitute for glass fiber mats because of their low cost and ready availability from different commercial sources. The data obtained from the mechanical characterization were used to define a favorable design of the pipe using hemp mats as the internal layer. The proposed design for the fittings allowed a cost reduction of about 24% and a weight saving of about 23% without any drawback in terms of final performance. The light RTM technique was developed specifically for the manufacturing of the curved pipe. The comparison between hand lay-up and light RTM showed a substantial cost reduction when light RTM was used.
Prospects for Chronological Studies of Martian Rocks and Soils
NASA Technical Reports Server (NTRS)
Nyquist, L. E.; Shih, C-Y.; Reese, Y. D.
2008-01-01
Chronological information about Martian processes comes from two sources: crater-frequency studies and laboratory studies of Martian meteorites. Each has limitations that could be overcome by studies of returned Martian rocks and soils. Chronology of Martian volcanism: The currently accepted chronology of Martian volcanic surfaces relies on crater counts for different Martian stratigraphic units [1]. However, there is a large inherent uncertainty for intermediate ages near 2 Ga ago. The effect of differing preferences for Martian cratering chronologies [1] is shown in Fig. 1. Stöffler and Ryder [2] summarized lunar chronology, upon which Martian cratering chronology is based. Fig. 2 shows a curve fit to their data, and compares it with a corresponding lunar curve from [3]. The radiometric ages of some lunar and Martian meteorites, as well as the crater-count delimiters for Martian epochs [4], are also shown for comparison with the crater-frequency curves. Scaling the Stöffler-Ryder curve by a Mars/Moon factor of 1.55 [5] places Martian shergottite ages into the Early Amazonian to late Hesperian epochs, whereas using the lunar curve of [3] and a Mars/Moon factor of 1 consigns the shergottites to the Middle-to-Late Amazonian, a less probable result. The problem is worsened if a continually decreasing cratering rate since 3 Ga ago is accepted [6]. We prefer the adjusted Stöffler-Ryder curve because it gives better agreement with the meteorite ages (Fig.
Possible Transit Timing Variations of the TrES-3 Planetary System
NASA Astrophysics Data System (ADS)
Jiang, Ing-Guey; Yeh, Li-Chin; Thakur, Parijat; Wu, Yu-Ting; Chien, Ping; Lin, Yi-Ling; Chen, Hong-Yu; Hu, Juei-Hwa; Sun, Zhao; Ji, Jianghui
2013-03-01
Five newly observed transit light curves of the TrES-3 planetary system are presented. Together with other light-curve data from the literature, 23 transit light curves in total, spanning 911 epochs overall, have been analyzed through a standard procedure. From these observational data, the system's orbital parameters are determined and possible transit timing variations (TTVs) are investigated. Given that a null TTV produces a fit with reduced χ² = 1.52, our results agree with previous work suggesting that TTVs might not exist in these data. However, a one-frequency oscillating TTV model, giving a fit with reduced χ² = 0.93, does possess a statistically higher probability. It is thus concluded that future observations and dynamical simulations of this planetary system will be very important.
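The χ² comparison between a null-TTV (linear) ephemeris and a one-frequency oscillating model can be sketched as below. All numbers (period, timing errors, injected amplitude and TTV period) are illustrative, not the paper's fitted values:

```python
import numpy as np

def chi2_red(obs, model, err, n_par):
    """Reduced chi-square of a model against timing data."""
    return np.sum(((obs - model) / err) ** 2) / (len(obs) - n_par)

rng = np.random.default_rng(1)
epochs = np.arange(0, 911, 40)                       # sparse epoch coverage
period, t0 = 1.30619, 0.0                            # assumed ephemeris, days
sigma = 0.0005                                       # assumed timing error, days
ttv = 0.001 * np.sin(2 * np.pi * epochs / 300.0)     # injected oscillation
times = t0 + period * epochs + ttv + rng.normal(0, sigma, epochs.size)

linear = t0 + period * epochs                        # null-TTV model
osc = linear + 0.001 * np.sin(2 * np.pi * epochs / 300.0)

c_lin = chi2_red(times, linear, sigma, 2)   # > 1: linear ephemeris strained
c_osc = chi2_red(times, osc, sigma, 4)      # near 1: oscillating model fits
```

As in the abstract, a reduced χ² well above 1 for the linear ephemeris and near 1 for the oscillating model favors (but does not prove) the TTV interpretation.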
NASA Astrophysics Data System (ADS)
Pang, Liping; Goltz, Mark; Close, Murray
2003-01-01
In this note, we applied the temporal moment solutions of [Das and Kluitenberg, 1996. Soil Sci. Am. J. 60, 1724] for one-dimensional advective-dispersive solute transport with linear equilibrium sorption and first-order degradation for time-pulse sources to analyse soil column experimental data. Unlike most other moment solutions, these solutions consider the interplay of degradation and sorption. This permits estimation of a first-order degradation rate constant using the zeroth moment of column breakthrough data, as well as estimation of the retardation factor or sorption distribution coefficient of a degrading solute using the first moment. The method of temporal moments (MOM) was applied to analyse breakthrough data from a laboratory column study of atrazine, hexazinone and rhodamine WT transport in volcanic pumice sand, as well as experimental data from the literature. Transport and degradation parameters obtained using the MOM were compared to parameters obtained by fitting breakthrough data with an advective-dispersive transport model with equilibrium sorption and first-order degradation, using the nonlinear least-squares curve-fitting program CXTFIT. The results derived from the literature data were also compared with estimates reported in the literature using different equilibrium models. The good agreement suggests that the MOM could provide an additional useful means of parameter estimation for transport involving equilibrium sorption and first-order degradation. We found that the MOM fitted breakthrough curves with tailing better than curve fitting did. However, the MOM analysis requires complete breakthrough curves and relatively frequent data collection to ensure the accuracy of the moments obtained from the breakthrough data.
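The moment computations themselves are straightforward. The sketch below computes the zeroth and normalized first temporal moments of a synthetic breakthrough curve and a deliberately simplified retardation estimate; the full Das-Kluitenberg expressions also carry degradation and pulse-duration corrections that are omitted here, and the velocity and column length are assumed values:

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration (kept local for NumPy-version independence)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def temporal_moments(t, c):
    """Zeroth moment (mass under the curve) and normalized first moment
    (mean arrival time) of a breakthrough curve c(t)."""
    m0 = trapz(c, t)
    m1 = trapz(t * c, t) / m0
    return m0, m1

# Synthetic breakthrough curve (hypothetical units).
t = np.linspace(0.0, 50.0, 500)
c = np.exp(-((t - 20.0) ** 2) / (2.0 * 3.0 ** 2))   # Gaussian-like pulse

m0, m1 = temporal_moments(t, c)

# Simplified retardation estimate from the first moment: R ≈ v * m1 / L
# for an assumed pore-water velocity v and column length L.
v, L = 1.0, 20.0
R = v * m1 / L
```

In the paper's setting, the measured m0 (mass recovery) would additionally feed the first-order degradation rate estimate.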
Robinson, B F; Mervis, C B
1998-03-01
The early lexical and grammatical development of one male child is examined with growth curves and dynamic-systems modeling procedures. Lexical development followed a pattern of logistic growth (R² = .98). Lexical and plural development shared the following characteristics: plural growth began only after a threshold was reached in vocabulary size, and lexical growth slowed as plural growth increased. As plural use reached full mastery, lexical growth began to increase again. It was hypothesized that a precursor model (P. van Geert, 1991) would fit these data. Subsequent testing indicated that the precursor model, modified to incorporate brief yet intensive plural growth, provided a suitable fit. The value of the modified precursor model for explicating the processes implicated in language development is discussed.
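A logistic growth curve like the one fit to the lexical data can be estimated with standard nonlinear least squares. The data below are synthetic and the parameter values are illustrative, not the child's actual vocabulary counts:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: carrying capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic vocabulary-size observations (hypothetical sessions and words).
t = np.arange(0, 30, 1.0)
rng = np.random.default_rng(2)
y = logistic(t, 400.0, 0.5, 15.0) + rng.normal(0, 5.0, t.size)

popt, _ = curve_fit(logistic, t, y, p0=[300.0, 0.3, 10.0])
K, r, t0 = popt

# Goodness of fit, analogous to the R^2 reported for lexical growth.
resid = y - logistic(t, *popt)
r2 = 1.0 - np.sum(resid**2) / np.sum((y - y.mean())**2)
```

The coupled lexical-plural dynamics of the precursor model would require integrating a pair of difference equations rather than a single closed-form curve.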
Fitting the post-keratoplasty cornea with hydrogel lenses.
Katsoulos, Costas; Nick, Vasileiou; Lefteris, Karageorgiadis; Theodore, Mousafeiropoulos
2009-02-01
We report two cases (three eyes in total) of patients who had undergone penetrating keratoplasty and were fitted with hydrogel lenses. In the first case, a 28-year-old male presented with an interest in contact lens fitting. He had undergone corneal transplantation in both eyes about 5 years earlier. After topography and trial fitting were performed, reverse-geometry hydrogel lenses were chosen because of the globular geometry of the cornea, the resulting instability of RGP lenses, and personal preference. In the second case, a 26-year-old female who had also undergone penetrating keratoplasty was fitted with a high-cylinder hydrogel toric lens in the right eye. The final hydrogel lenses for the first patient incorporated a custom tricurve design, in which the second curve was steeper than the base curve and the third curve flatter than the second but still steeper than the first. Visual acuity was 6/7.5 RE and a mediocre 6/15 LE (OU 6/7.5). The second patient achieved 6/4.5 acuity RE with the high-cylinder hydrogel toric lens. In corneas exhibiting extreme protrusion, such as keratoglobus and some post-keratoplasty corneas, curvatures are so extreme and the cornea so globular that the fitting options narrow to sclerals, small-diameter RGPs and reverse-geometry hydrogel lenses, chosen to improve lens and optical stability. In selected cases such as these, a large-diameter reverse-geometry RGP may be fitted only if eyelid shape and tension permit. The first case demonstrates that hydrogel lenses are a viable option when the patient has no interest in RGPs and can, in certain cases, improve vision to satisfactory levels. In other cases, graft toricity may be so high that the practitioner needs to employ hydrogel torics with large amounts of cylinder to correct vision. In such cases, the patient should be closely monitored to avoid complications from hypoxia.
Dung, Van Than; Tjahjowidodo, Tegoeh
2017-01-01
B-spline functions are widely used in many industrial applications such as computer graphic representation, computer-aided design, computer-aided manufacturing, and computer numerical control. Recently, there has been demand, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points in the sampled data. The most challenging task in these cases is identifying the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve with B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data are split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least-squares technique. The B-spline function is therefore obtained by solving an ordinary least-squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and deterministic parametric functions. This paper also discusses the benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, ranging from smooth to discontinuous. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
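The two-step idea, coarse knots by error-driven bisection followed by a least-squares B-spline, can be sketched as below. This is a simplified illustration (fixed split at the segment midpoint, no continuity-level optimization), not the authors' algorithm:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def coarse_knots(x, y, tol, lo=0, hi=None, knots=None):
    """First step of the two-step scheme: bisect the data wherever a local
    cubic polynomial fit exceeds the allowable error tol, collecting the
    split points as coarse knot candidates (returned in ascending order)."""
    if hi is None:
        hi, knots = len(x), []
    seg_x, seg_y = x[lo:hi], y[lo:hi]
    coef = np.polyfit(seg_x, seg_y, 3)
    if np.max(np.abs(seg_y - np.polyval(coef, seg_x))) > tol and hi - lo > 8:
        mid = (lo + hi) // 2
        coarse_knots(x, y, tol, lo, mid, knots)
        knots.append(x[mid])
        coarse_knots(x, y, tol, mid, hi, knots)
    return knots

x = np.linspace(0.0, 1.0, 400)
y = np.sin(4 * np.pi * x)                      # smooth noise-free test curve

t_int = coarse_knots(x, y, tol=1e-3)           # interior knots from bisection
spline = LSQUnivariateSpline(x, y, t=t_int, k=3)   # ordinary least-squares fit
max_err = float(np.max(np.abs(spline(x) - y)))
```

In the paper's second step these coarse knots would then be refined in location and continuity level by nonlinear least squares, which this sketch omits.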
NASA Astrophysics Data System (ADS)
Mandel, Kaisey S.; Scolnic, Daniel M.; Shariff, Hikmatali; Foley, Ryan J.; Kirshner, Robert P.
2017-06-01
Conventional Type Ia supernova (SN Ia) cosmology analyses currently use a simplistic linear regression of magnitude versus color and light curve shape, which does not model intrinsic SN Ia variations and host galaxy dust as physically distinct effects, resulting in low color-magnitude slopes. We construct a probabilistic generative model for the dusty distribution of extinguished absolute magnitudes and apparent colors as the convolution of an intrinsic SN Ia color-magnitude distribution and a host galaxy dust reddening-extinction distribution. If the intrinsic color-magnitude (M_B versus B − V) slope β_int differs from the host galaxy dust law R_B, this convolution results in a specific curve of mean extinguished absolute magnitude versus apparent color. The derivative of this curve smoothly transitions from β_int in the blue tail to R_B in the red tail of the apparent color distribution. The conventional linear fit approximates this effective curve near the average apparent color, resulting in an apparent slope β_app between β_int and R_B. We incorporate these effects into a hierarchical Bayesian statistical model for SN Ia light curve measurements, and analyze a data set of SALT2 optical light curve fits of 248 nearby SNe Ia at z < 0.10. The conventional linear fit gives β_app ≈ 3. Our model finds β_int = 2.3 ± 0.3 and a distinct dust law of R_B = 3.8 ± 0.3, consistent with the average for Milky Way dust, while correcting a systematic distance bias of ~0.10 mag in the tails of the apparent color distribution. Finally, we extend our model to examine the SN Ia luminosity-host mass dependence in terms of intrinsic and dust components.
Applications of data compression techniques in modal analysis for on-orbit system identification
NASA Technical Reports Server (NTRS)
Carlin, Robert A.; Saggio, Frank; Garcia, Ephrahim
1992-01-01
Data compression techniques have been investigated for use with modal analysis applications. A redundancy-reduction algorithm was used to compress frequency response functions (FRFs) in order to reduce the amount of disk space necessary to store the data and/or save time in processing it. Tests were performed for both single- and multiple-degree-of-freedom (SDOF and MDOF, respectively) systems, with varying amounts of noise. Analysis was done on both the compressed and uncompressed FRFs using an SDOF Nyquist curve fit as well as the Eigensystem Realization Algorithm. Significant savings were realized with minimal errors incurred by the compression process.
Yaxx: Yet another X-ray extractor
NASA Astrophysics Data System (ADS)
Aldcroft, Tom
2013-06-01
Yaxx is a Perl script that facilitates batch data processing using Perl open-source software and commonly available software such as CIAO/Sherpa, S-lang, SAS, and FTOOLS. For Chandra and XMM analysis it includes automated spectral extraction, fitting, and report generation. Yaxx can be run without climbing an extensive learning curve; even so, it is highly configurable and can be customized to support complex analysis. Yaxx uses template files and takes full advantage of the unique Sherpa/S-lang environment to make much of the processing user-configurable. Although originally developed with an emphasis on X-ray data analysis, Yaxx evolved into a general-purpose pipeline scripting package.
Peripheral absolute threshold spectral sensitivity in retinitis pigmentosa.
Massof, R W; Johnson, M A; Finkelstein, D
1981-01-01
Dark-adapted spectral sensitivities were measured in the peripheral retinas of 38 patients diagnosed as having typical retinitis pigmentosa (RP) and in 3 normal volunteers. The patients included those having autosomal dominant and autosomal recessive inheritance patterns. Results were analysed by comparisons with the CIE standard scotopic spectral visibility function and with Judd's modification of the photopic spectral visibility function, with consideration of contributions from changes in spectral transmission of preretinal media. The data show 3 general patterns. One group of patients had absolute threshold spectral sensitivities that were fit by Judd's photopic visibility curve. Absolute threshold spectral sensitivities for a second group of patients were fit by a normal scotopic spectral visibility curve. The third group of patients had absolute threshold spectral sensitivities that were fit by a combination of scotopic and photopic spectral visibility curves. The autosomal dominant and autosomal recessive modes of inheritance were represented in each group of patients. These data indicate that RP patients have normal rod and/or cone spectral sensitivities, and support the subclassification of patients described previously by Massof and Finkelstein. PMID:7459312
Aleatory Uncertainty and Scale Effects in Computational Damage Models for Failure and Fragmentation
2014-09-01
larger specimens, small specimens have, on average, higher strengths. Equivalently, because curves for small specimens fall below those of larger...the material strength associated with each realization parameter R in Equation (7), and strength distribution curves associated with multiple...effects in brittle media [58], which applies micromorphological dimensional analysis to obtain a universal curve which closely fits rate-dependent
1985-05-01
distribution, was evaluation of phase shift through best fit of assumed to be the beam response to the microwave theoretical curves and experimental...vibration sidebands o Acceleration as shown in the lower calculated curve . o High-Temperature Exposure o Thermal Vacuum Two of the curves show actual phase ...conclude that the method to measure the phase noise with spectrum estimation is workable, and it has no principle limitation. From the curve it has been
ERIC Educational Resources Information Center
Hester, Yvette
Least squares methods are sophisticated mathematical curve fitting procedures used in all classical parametric methods. The linear least squares approximation is most often associated with finding the "line of best fit" or the regression line. Since all statistical analyses are correlational and all classical parametric methods are least…
NASA Astrophysics Data System (ADS)
Stumpp, C.; Nützmann, G.; Maciejewski, S.; Maloszewski, P.
2009-09-01
In this paper, five model approaches with different physical and mathematical concepts, varying in complexity and requirements, were applied to identify the transport processes in the unsaturated zone. The applicability of these model approaches was compared and evaluated by investigating two tracer breakthrough curves (bromide, deuterium) in a cropped, free-draining lysimeter experiment under natural atmospheric boundary conditions. The data set consisted of time series of water balance, depth-resolved water contents, pressure heads and resident concentrations measured during 800 days. The tracer transport parameters were determined using a simple stochastic approach (stream tube model), three lumped-parameter approaches (constant water content model, multi-flow dispersion model, variable flow dispersion model) and a transient model approach. All of them were able to fit the tracer breakthrough curves. The identified transport parameters of each model approach were compared. Despite the differing physical and mathematical concepts, the resulting parameters (mean water contents, mean water flux, dispersivities) of the five model approaches were all in the same range. The results indicate that the flow processes can also be described assuming steady-state conditions. Homogeneous matrix flow is dominant, and a small pore volume with enhanced flow velocities near saturation was identified with the variably saturated flow and transport approach. The multi-flow dispersion model also identified preferential flow and additionally suggested a third, less mobile flow component. Due to the high fitting accuracy and parameter similarity, all model approaches yielded reliable results.
Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L
2010-08-05
Film calibration is time-consuming when dose accuracy is essential across a range of photon-scatter environments. This study uses the single-target single-hit model of film response to fit calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel-plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1 σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes, and increases with increasing depth above 0.5 cm. A calibration curve with one to three dose points fitted with the model achieves 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy-dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
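One common parameterization of the single-target single-hit response is a saturating exponential in dose. The sketch below fits synthetic calibration points with that form; the parameter values and noise level are invented, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def single_hit(dose, sat, slope, bkg):
    """Single-target single-hit film response: optical density rises toward
    a saturation value sat as dose increases (one common parameterization,
    with bkg the unexposed-film background)."""
    return sat * (1.0 - np.exp(-slope * dose)) + bkg

# Synthetic calibration points over the dose range used in the study (cGy).
dose = np.array([16, 32, 48, 64, 80, 96, 112, 128], dtype=float)
rng = np.random.default_rng(3)
od = single_hit(dose, 3.2, 0.012, 0.15) + rng.normal(0, 0.01, dose.size)

popt, _ = curve_fit(single_hit, dose, od, p0=[3.0, 0.01, 0.1])
sat, slope, bkg = popt
```

With the model shape fixed, a sparse calibration (one to three dose points) then only needs to pin down these three parameters, which is the workload saving the abstract describes.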
Discrete Gust Model for Launch Vehicle Assessments
NASA Technical Reports Server (NTRS)
Leahy, Frank B.
2008-01-01
Analysis of spacecraft vehicle responses to atmospheric wind gusts during flight is important in establishing vehicle design structural requirements and operational capability. Typically, wind gust models are either a spectral type, determined by a random process having a wide range of wavelengths, or a discrete type, having a single gust of predetermined magnitude and shape. Classical discrete models used by NASA during the Apollo and Space Shuttle Programs included a 9 m/sec quasi-square-wave gust with variable wavelength from 60 to 300 m. A later study derived a discrete gust model from a military specification (MIL-SPEC) document that used a "1-cosine" shape. The MIL-SPEC document contains a curve of non-dimensional gust magnitude as a function of non-dimensional gust half-wavelength based on the Dryden spectral model, but fails to list the equation necessary to reproduce the curve. Therefore, previous studies could only estimate a value of gust magnitude from the curve, or attempt to fit a function to it. This paper presents the development of the MIL-SPEC curve, and provides the information necessary to calculate discrete gust magnitudes as a function of both gust half-wavelength and the desired probability of exceeding a specified gust magnitude.
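The "1-cosine" gust shape itself is easy to reproduce. The sketch below gives only the spatial profile for an assumed peak magnitude and half-wavelength; it does not reproduce the MIL-SPEC non-dimensional magnitude curve that the paper develops:

```python
import numpy as np

def one_cosine_gust(s, u_m, half_wavelength):
    """'1-cosine' discrete gust profile.

    s : penetration distance into the gust
    u_m : peak gust magnitude
    half_wavelength : distance H at which the gust reaches its peak
    The gust velocity is zero outside 0 <= s <= 2H.
    """
    h = half_wavelength
    u = 0.5 * u_m * (1.0 - np.cos(np.pi * s / h))
    return np.where((s >= 0) & (s <= 2 * h), u, 0.0)

s = np.linspace(0.0, 300.0, 601)
gust = one_cosine_gust(s, u_m=9.0, half_wavelength=150.0)
# The profile rises smoothly from zero, peaks at u_m when s = H,
# and returns to zero at s = 2H.
```

The 9 m/sec peak and 150 m half-wavelength are chosen to echo the quasi-square-wave example in the abstract, not taken from the MIL-SPEC.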
García-Garrido, C; Sánchez-Jiménez, P E; Pérez-Maqueda, L A; Perejón, A; Criado, José M
2016-10-26
The polymer-to-ceramic transformation kinetics of two widely employed ceramic precursors, 1,3,5,7-tetramethyl-1,3,5,7-tetravinylcyclotetrasiloxane (TTCS) and polyureamethylvinylsilazane (CERASET), have been investigated using coupled thermogravimetry and mass spectrometry (TG-MS), Raman, XRD and FTIR. The thermally induced decomposition of the pre-ceramic polymer is the critical step in the synthesis of polymer derived ceramics (PDCs) and accurate kinetic modeling is key to attaining a complete understanding of the underlying process and to attempt any behavior predictions. However, obtaining a precise kinetic description of processes of such complexity, consisting of several largely overlapping physico-chemical processes comprising the cleavage of the starting polymeric network and the release of organic moieties, is extremely difficult. Here, by using the evolved gases detected by MS as a guide it has been possible to determine the number of steps that compose the overall process, which was subsequently resolved using a semiempirical deconvolution method based on the Frasier-Suzuki function. Such a function is more appropriate that the more usual Gaussian or Lorentzian functions since it takes into account the intrinsic asymmetry of kinetic curves. Then, the kinetic parameters of each constituent step were independently determined using both model-free and model-fitting procedures, and it was found that the processes obey mostly diffusion models which can be attributed to the diffusion of the released gases through the solid matrix. The validity of the obtained kinetic parameters was tested not only by the successful reconstruction of the original experimental curves, but also by predicting the kinetic curves of the overall processes yielded by different thermal schedules and by a mixed TTCS-CERASET precursor.
Platform for Post-Processing Waveform-Based NDE
NASA Technical Reports Server (NTRS)
Roth, Don J.
2010-01-01
Signal- and image-processing methods are commonly needed to extract information from waveforms, improve the resolution of an image, and highlight defects in it. Since some similarity exists among all waveform-based nondestructive evaluation (NDE) methods, a common software platform containing multiple signal- and image-processing techniques makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 software provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data, and offers the user hundreds of basic and advanced signal- and image-processing capabilities, including esoteric 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included for both signals and images, so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data, such as that from Raman spectroscopy. An extensive joint time-frequency module is included for analysis of non-stationary or transient data, such as that from acoustic emission, vibration, or earthquake measurements.
Model-checking techniques based on cumulative residuals.
Lin, D Y; Wei, L J; Ying, Z
2002-03-01
Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
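The cumulative-residual idea can be sketched with a deliberately misspecified linear fit. The Gaussian-multiplier resampling below is a simplified stand-in for the paper's simulation of the null Gaussian process, and the model and data are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Data with a quadratic trend, deliberately fit by a straight line, so the
# functional form of the covariate is misspecified.
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
y = 1.0 + 2.0 * x + 3.0 * x**2 + rng.normal(0.0, 0.1, n)

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Observed supremum of the cumulative residual process over the covariate.
W_obs = np.max(np.abs(np.cumsum(resid))) / np.sqrt(n)

# Null realizations: perturb each residual by an independent N(0,1)
# multiplier, mimicking realizations of the zero-mean Gaussian process
# that approximates the cumulative-residual process under the model.
W_null = np.array([
    np.max(np.abs(np.cumsum(resid * rng.normal(size=n)))) / np.sqrt(n)
    for _ in range(500)
])
p_value = float(np.mean(W_null >= W_obs))   # small p flags misspecification
```

A residual plot of these data can look ambiguous, but the supremum statistic sits far in the tail of the simulated null distribution, which is the objective comparison the paper advocates.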
Comparison between two scalar field models using rotation curves of spiral galaxies
NASA Astrophysics Data System (ADS)
Fernández-Hernández, Lizbeth M.; Rodríguez-Meza, Mario A.; Matos, Tonatiuh
2018-04-01
Scalar fields have been used as candidates for dark matter in the universe, ranging from axions with masses ∼ 10^-5 eV to ultra-light scalar fields with far smaller masses. Axions behave as cold dark matter, whereas for ultra-light scalar fields galaxies are Bose-Einstein condensate drops; the latter case is also called the scalar field dark matter model. In this work we study rotation curves for low surface brightness spiral galaxies using two scalar field models: the Gross-Pitaevskii Bose-Einstein condensate in the Thomas-Fermi approximation and a scalar field solution of the Klein-Gordon equation. We also used the zero-disk approximation galaxy model, where photometric data are not considered and only the scalar field dark matter contribution to the rotation curve is taken into account. From the best-fitting analysis of the galaxy catalog we use, we found the range of values of the fitting parameters: the length scale and the central density. The worst fitting results (values of χ^2_red much greater than 1, on average) were for the Thomas-Fermi model, i.e., the scalar field dark matter model fits the rotation curves of the analysed galaxies better than the Thomas-Fermi approximation does. To complete our analysis we compute from the fitting parameters the mass of the scalar field models and two astrophysical quantities of interest: the dynamical dark matter mass within 300 pc and the characteristic central surface density of the dark matter models. We found that the central mass within 300 pc, ≈ 10^7 M_⊙, is in agreement with previously reported results and is independent of the dark matter model. On the contrary, the value of the characteristic central surface density does depend on the dark matter model.
Protofit: A program for determining surface protonation constants from titration data
NASA Astrophysics Data System (ADS)
Turner, Benjamin F.; Fein, Jeremy B.
2006-11-01
Determining the surface protonation behavior of natural adsorbents is essential to understand how they interact with their environments. ProtoFit is a tool for analysis of acid-base titration data and optimization of surface protonation models. The program offers a number of useful features: (1) it enables visualization of adsorbent buffering behavior; (2) it uses an optimization approach independent of starting titration conditions or initial surface charge; (3) it does not require an initial surface charge to be defined or treated as an optimizable parameter; (4) it includes error analysis intrinsically as part of the computational methods; and (5) it generates simulated titration curves for comparison with observation. ProtoFit will typically be run through ProtoFit-GUI, a graphical user interface providing user-friendly control of model optimization, simulation, and data visualization. ProtoFit calculates an adsorbent proton buffering value as a function of pH from raw titration data (including pH and volume of acid or base added). The data are reduced to a form where the protons required to change the pH of the solution are subtracted out, leaving protons exchanged between solution and surface per unit mass of adsorbent as a function of pH. The buffering intensity function Qads* is calculated as the instantaneous slope of this reduced titration curve. Parameters for a surface complexation model are obtained by minimizing the sum of squares between the modeled (i.e. simulated) buffering intensity curve and the experimental data. The variance in the slope estimate, intrinsically produced as part of the Qads* calculation, can be used to weight the sum of squares calculation between the measured buffering intensity and a simulated curve. Effects of analytical error on data visualization and model optimization are discussed. Examples are provided of using ProtoFit for data visualization, model optimization, and model evaluation.
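The buffering-intensity workflow can be illustrated with a toy single-site model. This is a sketch, not ProtoFit's actual surface complexation machinery: the site pKa, site concentration, and noise level below are all assumed values for a hypothetical monoprotic surface site.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic "reduced" titration data: protons exchanged with the surface
# per unit mass, q(pH), for a hypothetical monoprotic site >XOH = >XO- + H+
pKa_true, Cs_true = 6.0, 1.5e-4        # assumed pKa and site conc. (mol/g)
pH = np.linspace(3, 10, 80)
f = 1.0 / (1.0 + 10.0 ** (pKa_true - pH))   # deprotonated fraction
q = Cs_true * f + np.random.default_rng(1).normal(0, 2e-6, pH.size)

# Buffering intensity Q* = dq/dpH, the local slope of the reduced curve
Qstar = np.gradient(q, pH)

def model_Qstar(pH, pKa, Cs):
    # Analytical buffering intensity of a single monoprotic site
    f = 1.0 / (1.0 + 10.0 ** (pKa - pH))
    return Cs * np.log(10.0) * f * (1.0 - f)

popt, _ = curve_fit(model_Qstar, pH, Qstar, p0=[7.0, 1e-4])
print(popt)
```

Fitting the slope rather than the raw titration curve is what makes the optimization independent of the (unknown) initial surface charge.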
Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI
NASA Astrophysics Data System (ADS)
Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.
2017-12-01
Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least squares (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments ‘plasma’ and ‘interstitial volume’ and their exchange in terms of plasma flow and capillary permeability. The model function can be defined by either a system of two coupled differential equations or a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness and computation speed, depending on parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured and a theoretical population-based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated by either numeric integration with the Runge-Kutta method or convolution. Furthermore two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed in order to investigate the validity of our findings in real patient data. The convolution approach yields improved results in precision and robustness of determined parameters. Precision and stability are limited in curves with low blood flow. The model parameter ve shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach in several curve types. The convolution excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated by use of the differential equations.
Fitting with the convolution approach is superior in computational time, with better stability and accuracy at the same time.
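The convolution approach can be demonstrated with the simpler one-compartment (standard Tofts) model rather than the full 2CXM; the structure of the fit is the same: the tissue curve is the arterial input function convolved with an exponential impulse response. The AIF shape, temporal resolution, and parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

dt = 2.0                                  # temporal resolution (s), assumed
t = np.arange(0, 300, dt)
Ca = (t / 30.0) * np.exp(1 - t / 30.0)    # hypothetical gamma-variate AIF

def tissue_curve(t, Ktrans, kep):
    # Discrete convolution of the AIF with Ktrans * exp(-kep * t);
    # np.convolve times dt approximates the convolution integral.
    irf = Ktrans * np.exp(-kep * t)
    return np.convolve(Ca, irf)[: t.size] * dt

rng = np.random.default_rng(2)
Ct = tissue_curve(t, 0.01, 0.02) + rng.normal(0, 1e-3, t.size)

popt, _ = curve_fit(tissue_curve, t, Ct, p0=[0.005, 0.01])
print(popt)
```

No ODE solver is invoked inside the residual evaluation, which is the source of the speed advantage reported in the abstract.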
NASA Astrophysics Data System (ADS)
Madi, Raneem; de Rooij, Gerrit; Mai, Juliane; Mielenz, Henrike
2016-04-01
Flow of liquid water and movement of water vapor in the unsaturated zone affect in-soil processes (e.g., root water uptake) and exchanges of water between the soil and the groundwater (e.g., aquifer recharge) and between the soil and the atmosphere (e.g., evaporation). Evapotranspiration in particular is a key factor in the way soils moderate weather and respond to climate change. Soil physicists typically model these processes at scales of individual fields and smaller. They solve Richards' equation using soil water retention curves and hydraulic conductivity curves (soil hydraulic property curves) that are typically valid for even smaller soil volumes. Over the years, many parametric expressions have been proposed as models for the soil hydraulic property curves. Before Richards' equation and the associated soil hydraulic properties can be upscaled or modified for use on scales that are more useful for climate modeling and other applications of practical relevance, the small scale soil hydraulic property curves should at least perform well on the scale for which they were originally developed. Research over the past couple of decades revealed that the fit of soil water retention curves in the dry end is often quite poor, which is particularly risky when vapor flow is a significant factor. It also emerged that the shape of the retention curve for matric potentials very close to zero can generate physically unrealistic behavior of the hydraulic conductivity near saturation when combined with a popular class of conductivity models. We critically examined most of the existing soil water retention parameterizations with respect to these two aspects, and introduced minor modifications to a few of them to improve their performance. The presentation will highlight the results of this review, and demonstrate the effect on calculated fluxes of liquid water and water vapor in soils for illustrative hypothetical scenarios.
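As a concrete example of the parameterizations under review, the widely used van Genuchten retention curve can be fitted to synthetic retention data as follows. This is a sketch with assumed "true" parameter values and noise; the abstract's actual analysis covers many parameterizations and their dry-end behavior.

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    # Water content vs. suction head h (h > 0); m = 1 - 1/n
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

h = np.logspace(0, 4.2, 40)              # suction heads (cm), synthetic range
rng = np.random.default_rng(3)
theta = van_genuchten(h, 0.05, 0.40, 0.02, 1.6) + rng.normal(0, 0.004, h.size)

popt, _ = curve_fit(van_genuchten, h, theta,
                    p0=[0.1, 0.35, 0.01, 1.5],
                    bounds=([0, 0.2, 1e-4, 1.05], [0.2, 0.6, 1.0, 5.0]))
print(popt)
```

The fixed residual water content theta_r in this form is one reason such curves can misbehave in the dry end, where water content should keep decreasing as vapor flow becomes important.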
Craniofacial Reconstruction Using Rational Cubic Ball Curves
Majeed, Abdul; Mt Piah, Abd Rahni; Gobithaasan, R. U.; Yahya, Zainor Ridzuan
2015-01-01
This paper proposes the reconstruction of craniofacial fractures using rational cubic Ball curves. Ball curves were chosen for their computational efficiency compared with Bézier curves. The main steps are: conversion of Digital Imaging and Communications in Medicine (Dicom) images to binary images; boundary extraction and corner point detection; Ball curve fitting with a genetic algorithm; and conversion of the final solution back to Dicom format. The last section illustrates a real case of craniofacial reconstruction using the proposed method, which clearly indicates its applicability. A Graphical User Interface (GUI) has also been developed for practical application. PMID:25880632
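For reference, the cubic Ball basis can be evaluated in a few lines. This sketch uses the polynomial (non-rational) cubic Ball form with an illustrative control polygon; the paper's method additionally uses rational weights and a genetic algorithm to fit the control points to extracted boundary pixels.

```python
import numpy as np

def ball_cubic(t, P):
    """Evaluate a cubic Ball curve at parameter values t (1D array)
    for control points P (4 x d array).

    Cubic Ball basis: b0=(1-t)^2, b1=2t(1-t)^2, b2=2t^2(1-t), b3=t^2.
    The basis is a partition of unity and interpolates P[0] and P[3].
    """
    t = np.asarray(t, float)[:, None]
    B = np.hstack([(1 - t) ** 2,
                   2 * t * (1 - t) ** 2,
                   2 * t ** 2 * (1 - t),
                   t ** 2])
    return B @ np.asarray(P, float)

P = np.array([[0, 0], [1, 2], [3, 2], [4, 0]], float)  # example control polygon
pts = ball_cubic(np.linspace(0, 1, 5), P)
print(pts)
```

Like the Bézier basis, the Ball basis sums to one, but its lower-degree end functions make repeated evaluation slightly cheaper, which is the efficiency argument cited in the abstract.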
NASA Astrophysics Data System (ADS)
Nelson, Daniel A.; Jacobs, Gustaaf B.; Kopriva, David A.
2016-08-01
The effect of curved-boundary representation on the physics of the separated flow over a NACA 65(1)-412 airfoil is thoroughly investigated. A method is presented to approximate curved boundaries with a high-order discontinuous-Galerkin spectral element method for the solution of the Navier-Stokes equations. Multiblock quadrilateral element meshes are constructed with the grid generation software GridPro. The boundary of a NACA 65(1)-412 airfoil, defined by a cubic natural spline, is piecewise-approximated by isoparametric polynomial interpolants that represent the edges of boundary-fitted elements. Direct numerical simulation of the airfoil is performed on a coarse mesh and a fine mesh with polynomial orders ranging from four to twelve. The accuracy of the curve fitting is investigated by comparing the flows computed on curved-sided meshes with those given by straight-sided meshes. Straight-sided meshes yield irregular wakes, whereas curved-sided meshes produce a regular Kármán vortex street wake. Straight-sided meshes also produce lower lift and higher viscous drag as compared with curved-sided meshes. When the mesh is refined by reducing the sizes of the elements, the lift decrease and viscous drag increase are less pronounced. The differences in the aerodynamic performance between the straight-sided meshes and the curved-sided meshes are concluded to be the result of artificial surface roughness introduced by the piecewise-linear boundary approximation provided by the straight-sided meshes.
NASA Astrophysics Data System (ADS)
Rahim, K. J.; Cumming, B. F.; Hallett, D. J.; Thomson, D. J.
2007-12-01
An accurate assessment of historical local Holocene data is important in making future climate predictions. Holocene climate is often obtained through proxy measures such as diatoms or pollen using radiocarbon dating. Wiggle Match Dating (WMD) uses an iterative least-squares approach to tune a core with a large number of 14C dates to the 14C calibration curve. This poster will present a new method of tuning a time series when only a modest number of 14C dates are available. The method uses multitaper spectral estimation, and it specifically makes use of a multitaper spectral coherence tuning technique. Holocene climate reconstructions are often based on a simple depth-time fit such as a linear interpolation, splines, or low-order polynomials. Many of these models make use of only a small number of 14C dates, each of which is a point estimate with a significant variance. This technique attempts to tune the 14C dates to a reference series, such as tree rings, varves, or the radiocarbon calibration curve. The amount of 14C in the atmosphere is not constant, and a significant source of variance is solar activity. A decrease in solar activity coincides with an increase in cosmogenic isotope production, and an increase in cosmogenic isotope production coincides with a decrease in temperature. The method presented uses multitaper coherence estimates and adjusts the phase of the time series to line up significant line components with those of the reference series, in an attempt to obtain a better depth-time fit than the original model. Given recent concerns and demonstrations of the variation in estimated dates from radiocarbon labs, methods to confirm and tune the depth-time fit can aid climate reconstructions by improving and serving to confirm the accuracy of the underlying depth-time fit. Climate reconstructions can then be made on the improved depth-time fit.
This poster presents a run-through of this process using Chauvin Lake in the Canadian prairies and Mt. Barr Cirque Lake in British Columbia as examples.
Early-Time Observations of the GRB 050319 Optical Transient
NASA Astrophysics Data System (ADS)
Quimby, R. M.; Rykoff, E. S.; Yost, S. A.; Aharonian, F.; Akerlof, C. W.; Alatalo, K.; Ashley, M. C. B.; Göğüş, E.; Güver, T.; Horns, D.; Kehoe, R. L.; Kızıloğlu, Ü.; Mckay, T. A.; Özel, M.; Phillips, A.; Schaefer, B. E.; Smith, D. A.; Swan, H. F.; Vestrand, W. T.; Wheeler, J. C.; Wren, J.
2006-03-01
We present the unfiltered ROTSE-III light curve of the optical transient associated with GRB 050319 beginning 4 s after the cessation of γ-ray activity. We fit a power-law function to the data using the revised trigger time given by Chincarini and coworkers, and a smoothly broken power-law to the data using the original trigger disseminated through the GCN notices. Including the RAPTOR data from Woźniak and coworkers, the best-fit power-law indices are α=-0.854+/-0.014 for the single power-law and α1=-0.364+0.020-0.019, α2=-0.881+0.030-0.031, with a break at tb=418+31-30 s for the smoothly broken fit. We discuss the fit results, with emphasis placed on the importance of knowing the true start time of the optical transient for this multipeaked burst. As Swift continues to provide prompt GRB locations, it becomes more important to answer the question, ``when does the afterglow begin?'' in order to correctly interpret the light curves.
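The smoothly broken power law used for the second fit can be written in the common Beuermann-style parameterization (the exact functional form used by the authors is assumed here); the decay indices and break time below are the best-fit values quoted in the abstract, and the smoothness parameter s is illustrative.

```python
import numpy as np

def broken_pl(t, f0, a1, a2, tb, s=3.0):
    """Smoothly broken power law: early slope a1, late slope a2,
    break time tb, smoothness s (larger s = sharper break)."""
    x = t / tb
    return f0 * (x ** (-s * a1) + x ** (-s * a2)) ** (-1.0 / s)

t = np.logspace(1, 4, 400)                      # seconds after trigger
f = broken_pl(t, 1.0, -0.364, -0.881, 418.0)

# The local logarithmic slope d(ln f)/d(ln t) approaches a1 early, a2 late
slope = np.gradient(np.log(f), np.log(t))
print(round(slope[0], 3), round(slope[-1], 3))
```

Because both asymptotic slopes are measured relative to the trigger time, shifting the assumed start of the optical transient changes the fitted indices, which is the point stressed in the abstract.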
Investigation of the Failure Modes in a Metal Matrix Composite under Thermal Cycling
1989-12-01
List-of-figures excerpt (page numbers from the scanned report): Material Characteristics ... 76. Sectioning and SEM Photographs ... 86. Residual Stress Analysis using METCAN ... 99. Specimen Fitted with Strain Gages ... 77. 39. Modulus and Poisson's Ratio versus Thermal Cycles ... 79. 40. Stress/Strain Curve for Uncycled Specimen ... 82. 41. Stress/Strain Curve for Specimen 8 (5250 Cycles) ... 83. 42. Comparison of Uncycled to Cycled Stress/Strain Curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeler, C; Bronk, L; UT Graduate School of Biomedical Sciences at Houston, Houston, TX
2015-06-15
Purpose: High throughput in vitro experiments assessing cell survival following proton radiation indicate that both the alpha and the beta parameters of the linear quadratic model increase with increasing proton linear energy transfer (LET). We investigated the relative biological effectiveness (RBE) of double-strand break (DSB) induction as a means of explaining the experimental results. Methods: Experiments were performed with two lung cancer cell lines and a range of proton LET values (0.94 – 19.4 keV/µm) using an experimental apparatus designed to irradiate cells in a 96 well plate such that each column encounters protons of different dose-averaged LET (LETd). Traditional linear quadratic survival curve fitting was performed, and alpha, beta, and RBE values obtained. Survival curves were also fit with a model incorporating RBE of DSB induction as the sole fit parameter. Fitted values of the RBE of DSB induction were then compared to values obtained using Monte Carlo Damage Simulation (MCDS) software and energy spectra calculated with Geant4. Other parameters including alpha, beta, and number of DSBs were compared to those obtained from traditional fitting. Results: Survival curve fitting with RBE of DSB induction yielded alpha and beta parameters that increase with proton LETd, which follows from the standard method of fitting; however, relying on a single fit parameter provided more consistent trends. The fitted values of RBE of DSB induction increased beyond what is predicted from MCDS data above proton LETd of approximately 10 keV/µm. Conclusion: In order to accurately model in vitro proton irradiation experiments performed with high throughput methods, the RBE of DSB induction must increase more rapidly than predicted by MCDS above LETd of 10 keV/µm. This can be explained by considering the increased complexity of DSBs or the nature of intra-track pairwise DSB interactions in this range of LETd values. NIH Grant 2U19CA021239-35.
Rotation curve for the Milky Way galaxy in conformal gravity
NASA Astrophysics Data System (ADS)
O'Brien, James G.; Moss, Robert J.
2015-05-01
Galactic rotation curves have proven to be the testing ground for dark matter bounds in galaxies, and our own Milky Way is one of many large spiral galaxies that must follow the same models. Over the last decade, the rotation of the Milky Way galaxy has been studied and extended by many authors. Since conformal gravity has now successfully fit the rotation curves of almost 140 galaxies, we present here the fit to our own Milky Way. However, the Milky Way is not just an ordinary galaxy to append to our list; instead it provides a robust test of a fundamental difference between conformal gravity rotation curves and standard cold dark matter models. It was shown by Mannheim and O'Brien that in conformal gravity, the presence of a quadratic potential causes the rotation curve to eventually fall off after its flat portion. This effect can currently be seen in only a select few galaxies whose rotation curves have been studied well beyond a few multiples of the optical galactic scale length. Due to the recent work of Sofue et al. and Kundu et al., the rotation curve of the Milky Way has now been studied to a degree where we can test the predicted fall-off in the conformal gravity rotation curve. We find that, like the other galaxies already studied in conformal gravity, we obtain excellent agreement with the rotational data, and the prediction includes the eventual fall-off at large distances from the galactic center.
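The qualitative shape being tested, a flat portion followed by a fall-off, can be sketched with a Mannheim-type circular-speed law containing a Newtonian 1/r term, a linear-potential term, and the quadratic-potential term that eventually pulls the curve down. The coefficients below are purely illustrative (dimensionless toy units), not fitted Milky Way values.

```python
import numpy as np

# Schematic conformal-gravity circular speed:
#   v^2(r) = GM/r + (gamma/2) r - kappa r^2
# The kappa (quadratic-potential) term forces the eventual fall-off.
GM, gamma, kappa = 0.1, 2e-2, 1e-4        # illustrative toy values

r = np.linspace(0.5, 100, 2000)
v2 = GM / r + gamma * r / 2 - kappa * r ** 2
v = np.sqrt(v2)

r_peak = r[np.argmax(v)]                  # radius where the curve turns over
print(round(r_peak, 1), round(v[-1] / v.max(), 2))
```

In a cold-dark-matter halo fit, by contrast, the curve stays flat or declines only gently at these radii, which is why extended rotation data can discriminate between the two.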
A two-dimensional graphing program for the Tektronix 4050-series graphics computers
Kipp, K.L.
1983-01-01
A refined, two-dimensional graph-plotting program was developed for use on Tektronix 4050-series graphics computers. Important features of this program include: any combination of logarithmic and linear axes, optional automatic scaling and numbering of the axes, multiple-curve plots, character or drawn symbol-point plotting, optional cartridge-tape data input and plot-format storage, optional spline fitting for smooth curves, and built-in data-editing options. The program is run while the Tektronix is not connected to any large auxiliary computer, although data from files on an auxiliary computer easily can be transferred to data-cartridge for later plotting. The user is led through the plot-construction process by a series of questions and requests for data input. Five example plots are presented to illustrate program capability and the sequence of program operation. (USGS)
Fukui, Atsuko; Fujii, Ryuta; Yonezawa, Yorinobu; Sunada, Hisakazu
2002-11-01
The release properties of phenylpropanolamine hydrochloride (PPA) from ethylcellulose (EC, ethylcellulose 10 cps (EC#10) and/or 100 cps (EC#100)) matrix granules prepared by the extrusion granulation method were examined. The release process could be divided into two parts, which were well analyzed by applying the square-root time law and cube root law equations, respectively. The validity of the treatments was confirmed by the fitness of the simulation curve to the measured curve. At the initial stage, PPA was released from the gel layer of swollen EC in the matrix granules. At the second stage, the drug existing below the gel layer dissolved, and was released through the gel layer. The time and release ratio at the connection point of the simulation curves were also examined to determine the validity of the analysis. Comparing the release properties of PPA from the two types of EC matrix granules, EC#100 showed more effective sustained release than EC#10. On the other hand, changes in the release property of the EC#10 matrix granules were relatively clearer than those of the EC#100 matrix granules. Thus, EC#10 appears more suitable for controlled and sustained release formulations than EC#100.
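The two laws referred to are the classical square-root-of-time (Higuchi-type) law, Q(t) = kH*sqrt(t), and the cube-root (Hixson-Crowell-type) law, W0^(1/3) - W(t)^(1/3) = kC*t. A sketch of fitting both by least squares through the origin, using synthetic release data (the times and percentages below are illustrative, not the paper's measurements):

```python
import numpy as np

t = np.array([5., 10., 20., 30., 60., 90., 120.])         # minutes, synthetic
Q = np.array([8.1, 11.6, 16.3, 19.8, 28.4, 34.5, 40.2])   # % released, synthetic

# Higuchi constant: least squares of Q on sqrt(t) through the origin
kH = np.sum(np.sqrt(t) * Q) / np.sum(t)
rms = np.sqrt(np.mean((Q - kH * np.sqrt(t)) ** 2))

# Cube-root (Hixson-Crowell) constant, with W0 the initial drug load (%)
W0 = 100.0
lhs = W0 ** (1 / 3) - (W0 - Q) ** (1 / 3)
kC = np.sum(t * lhs) / np.sum(t ** 2)

print(round(kH, 2), round(rms, 2), round(kC, 4))
```

In the paper's two-stage analysis each law is fitted only over its own stage, and the time and release ratio where the two simulation curves connect serve as the consistency check.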
Application of Titration-Based Screening for the Rapid Pilot Testing of High-Throughput Assays.
Zhang, Ji-Hu; Kang, Zhao B; Ardayfio, Ophelia; Ho, Pei-i; Smith, Thomas; Wallace, Iain; Bowes, Scott; Hill, W Adam; Auld, Douglas S
2014-06-01
Pilot testing of an assay intended for high-throughput screening (HTS) with small compound sets is a necessary but often time-consuming step in the validation of an assay protocol. When the initial testing concentration is less than optimal, this can involve iterative testing at different concentrations to further evaluate the pilot outcome, which can be even more time-consuming. Quantitative HTS (qHTS) enables flexible and rapid collection of assay performance statistics, hits at different concentrations, and concentration-response curves in a single experiment. Here we describe the qHTS process for pilot testing, in which eight-point concentration-response curves are produced using an interplate asymmetric dilution protocol: the first four concentrations represent the range of typical HTS screening concentrations, and the last four concentrations are added for robust curve fitting to determine potency/efficacy values. We also describe how these data can be analyzed to predict the frequency of false positives, false negatives, hit rates, and confirmation rates for the HTS process as a function of screening concentration. By taking into account the compound pharmacology, this pilot-testing paradigm enables rapid assessment of assay performance and choice of the optimal concentration for large-scale HTS in one experiment. © 2013 Society for Laboratory Automation and Screening.
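The curve fitting behind qHTS is typically a four-parameter Hill (logistic) concentration-response fit. A sketch with an assumed eight-point dilution series and synthetic responses (all concentrations, parameter values, and noise are illustrative, not the paper's protocol):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, ac50, n):
    # Four-parameter concentration-response curve
    return bottom + (top - bottom) / (1.0 + (ac50 / c) ** n)

# Hypothetical eight-point asymmetric dilution series (molar)
conc = np.array([4.6e-8, 2.3e-7, 1.15e-6, 5.75e-6,
                 1.15e-5, 2.3e-5, 4.6e-5, 9.2e-5])
rng = np.random.default_rng(5)
resp = hill(conc, 0.0, 100.0, 3e-6, 1.2) + rng.normal(0, 3.0, conc.size)

popt, _ = curve_fit(hill, conc, resp,
                    p0=[0.0, 100.0, 1e-6, 1.0],
                    bounds=([-20, 50, 1e-9, 0.3], [20, 150, 1e-3, 4.0]),
                    maxfev=10000)
print(popt)
```

Once AC50 and efficacy are known per compound, the hit/miss outcome at any single screening concentration can be predicted, which is how the pilot data yield false-positive and false-negative rates as a function of concentration.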
NASA Astrophysics Data System (ADS)
Martí-Vidal, I.; Marcaide, J. M.; Alberdi, A.; Guirado, J. C.; Pérez-Torres, M. A.; Ros, E.
2011-02-01
We report on a simultaneous modelling of the expansion and radio light curves of the supernova SN 1993J. We developed a simulation code capable of generating synthetic expansion and radio light curves of supernovae by taking into consideration the evolution of the expanding shock, magnetic fields, and relativistic electrons, as well as the finite sensitivity of the interferometric arrays used in the observations. Our software successfully fits all the available radio data of SN 1993J with a standard emission model for supernovae, which is extended with some physical considerations, such as an evolution in the opacity of the ejecta material, a radial decline in the magnetic fields within the radiating region, and a changing radial density profile for the circumstellar medium starting from day 3100 after the explosion.
POSSIBLE TRANSIT TIMING VARIATIONS OF THE TrES-3 PLANETARY SYSTEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Ing-Guey; Wu, Yu-Ting; Chien, Ping
2013-03-15
Five newly observed transit light curves of the TrES-3 planetary system are presented. Together with other light-curve data from the literature, 23 transit light curves in total, which cover an overall timescale of 911 epochs, have been analyzed through a standard procedure. From these observational data, the system's orbital parameters are determined and possible transit timing variations (TTVs) are investigated. Given that a null TTV produces a fit with reduced χ² = 1.52, our results agree with previous work, that TTVs might not exist in these data. However, a one-frequency oscillating TTV model, giving a fit with a reduced χ² = 0.93, does possess a statistically higher probability. It is thus concluded that future observations and dynamical simulations for this planetary system will be very important.
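The reduced-χ² comparison between a constant-period ephemeris and a one-frequency sinusoidal TTV model can be sketched on synthetic timing residuals (the epochs, timing error, signal amplitude, and period below are all assumed, not the TrES-3 values):

```python
import numpy as np

rng = np.random.default_rng(6)
epoch = np.arange(0, 911, 40)                 # transit epochs, synthetic sampling
sigma = 30.0                                  # per-point timing error (s), assumed
oc = 25.0 * np.sin(2 * np.pi * epoch / 300.0) + rng.normal(0, sigma, epoch.size)

def red_chi2(model, n_par):
    # Reduced chi-square of the observed-minus-calculated residuals
    return np.sum(((oc - model) / sigma) ** 2) / (epoch.size - n_par)

chi2_null = red_chi2(np.zeros_like(oc), 1)    # null-TTV (constant period) model

# At a trial frequency, amplitude and phase enter linearly via sin/cos terms
w = 2 * np.pi / 300.0
A = np.column_stack([np.sin(w * epoch), np.cos(w * epoch)])
coef, *_ = np.linalg.lstsq(A, oc, rcond=None)
chi2_sin = red_chi2(A @ coef, 3)
print(round(chi2_null, 2), round(chi2_sin, 2))
```

As in the abstract, a lower reduced χ² for the oscillating model is suggestive but not conclusive; the extra parameters must be penalized before claiming a detection.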
PHOTOMETRIC SUPERNOVA CLASSIFICATION WITH MACHINE LEARNING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lochner, Michelle; Peiris, Hiranya V.; Lahav, Ofer
Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
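The AUC metric used above has a simple rank-based definition that needs no explicit ROC curve: it equals the probability that a randomly chosen positive scores higher than a randomly chosen negative. A sketch with synthetic classifier scores (the score distributions are assumed for illustration):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney identity: P(score_pos > score_neg),
    with ties counted as one half."""
    sp = np.asarray(scores_pos, float)[:, None]
    sn = np.asarray(scores_neg, float)[None, :]
    return np.mean((sp > sn) + 0.5 * (sp == sn))

rng = np.random.default_rng(7)
ia = rng.normal(2.0, 1.0, 500)    # synthetic scores for true type Ia
non = rng.normal(0.0, 1.0, 500)   # synthetic scores for non-Ia
print(round(auc(ia, non), 2))
```

An AUC of 0.5 is chance performance and 1.0 is perfect separation, which is why the 0.98 reported for the SALT2 and wavelet feature sets is a strong result.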
Radial dependence of the dark matter distribution in M33
NASA Astrophysics Data System (ADS)
López Fune, E.; Salucci, P.; Corbelli, E.
2017-06-01
The stellar and gaseous mass distributions, as well as the extended rotation curve, in the nearby galaxy M33 are used to derive the radial distribution of dark matter density in the halo and to test cosmological models of galaxy formation and evolution. Two methods are examined to constrain the dark mass density profiles. The first method deals directly with fitting the rotation curve data in the range of galactocentric distances 0.24 ≤ r ≤ 22.72 kpc. Using the results of collisionless Λ cold dark matter numerical simulations, we confirm that the Navarro-Frenk-White (NFW) dark matter profile provides a better fit to the rotation curve data than the cored Burkert (BRK) profile. The second method relies on the local equation of centrifugal equilibrium and on the rotation curve slope. In the aforementioned range of distances, we fit the observed velocity profile, using a function that has a rational dependence on the radius, and we derive the slope of the rotation curve. Then, we infer the effective matter densities. In the radial range 9.53 ≤ r ≤ 22.72 kpc, the uncertainties induced by the luminous matter (stars and gas) become negligible, because the dark matter density dominates, and we can determine locally the radial distribution of dark matter. With this second method, we tested the NFW and BRK dark matter profiles and we can confirm that both profiles are compatible with the data, even though in this case the cored BRK density profile provides a more reasonable value for the baryonic-to-dark matter ratio.
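The two halo models being compared have closed-form enclosed masses, so their rotation-curve contributions are easy to evaluate. A sketch with illustrative (not fitted) parameter values, over the radial range quoted in the abstract:

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def v_nfw(r, rho0, rs):
    # Circular speed from the NFW enclosed mass (cuspy profile)
    x = r / rs
    M = 4 * np.pi * rho0 * rs ** 3 * (np.log(1 + x) - x / (1 + x))
    return np.sqrt(G * M / r)

def v_burkert(r, rho0, r0):
    # Circular speed from the Burkert enclosed mass (cored profile)
    x = r / r0
    M = 2 * np.pi * rho0 * r0 ** 3 * (np.log(1 + x)
                                      + 0.5 * np.log(1 + x ** 2)
                                      - np.arctan(x))
    return np.sqrt(G * M / r)

r = np.linspace(0.24, 22.72, 100)      # kpc, the range fitted in the abstract
vN = v_nfw(r, 1.0e7, 10.0)             # illustrative rho0 (Msun/kpc^3), rs (kpc)
vB = v_burkert(r, 2.0e7, 5.0)          # illustrative rho0, core radius r0
print(round(vN[-1], 1), round(vB[-1], 1))
```

The inner behavior is what separates the two: the NFW cusp gives a steeper inner rise of the enclosed mass, while the Burkert core gives nearly solid-body rotation at small r.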
Contribution to the benchmark for ternary mixtures: Transient analysis in microgravity conditions.
Ahadi, Amirhossein; Ziad Saghir, M
2015-04-01
We present a transient experimental analysis of the DCMIX1 project conducted onboard the International Space Station for a ternary tetrahydronaphthalene, isobutylbenzene, n-dodecane mixture. Raw images taken in a microgravity environment using the SODI (Selectable Optical Diagnostic) apparatus, which is equipped with two-wavelength diagnostics, were processed and the results were analyzed in this work. We measured the concentration profile of the mixture containing 80% THN, 10% IBB and 10% nC12 during the entire experiment using an advanced image processing technique, and accordingly we determined the Soret coefficients using an advanced curve-fitting and post-processing technique. It must be noted that the experiment was repeated five times to ensure repeatability.
NASA Astrophysics Data System (ADS)
Yan, Shiguang; Mao, Chaoliang; Wang, Genshui; Yao, Chunhua; Cao, Fei; Dong, Xianlin
2013-09-01
The current decay characteristic in the time domain is studied in Y3+ and Mn2+ modified Ba0.67Sr0.33TiO3 ceramics under different temperatures (25 °C to 213 °C) and voltage stresses (0 V to 800 V). The decay of the current is correlated with the overlapping of the relaxation process and the leakage current. With respect to the inherent remarkable dielectric nonlinearity, a simple curve-fitting method is derived to separate these two currents. Two mechanisms of the relaxation process are proposed: a distribution-of-potential-barriers mode around room temperature and an electron-injection mode at the elevated temperature of 110 °C.
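One common way to separate the two contributions, used here only as an illustrative stand-in for the authors' method, is to fit the total current as a Curie-von-Schweidler power-law relaxation plus a constant leakage term; the functional form and all parameter values below are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def current(t, A, n, I_leak):
    # Power-law relaxation current plus steady leakage current
    return A * t ** (-n) + I_leak

t = np.logspace(0, 3, 60)                          # seconds, synthetic sampling
rng = np.random.default_rng(8)
I = current(t, 5e-8, 0.8, 2e-10) * rng.lognormal(0, 0.03, t.size)

# sigma=I gives relative weighting, needed over several decades of current
popt, _ = curve_fit(current, t, I, p0=[1e-8, 0.5, 1e-10],
                    sigma=I, maxfev=10000)
print(popt)
```

At long times the fitted `I_leak` dominates, which is how the leakage current is read off once the relaxation term has been fitted away.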
Data Validation in the Kepler Science Operations Center Pipeline
NASA Technical Reports Server (NTRS)
Wu, Hayley; Twicken, Joseph D.; Tenenbaum, Peter; Clarke, Bruce D.; Li, Jie; Quintana, Elisa V.; Allen, Christopher; Chandrasekaran, Hema; Jenkins, Jon M.; Caldwell, Douglas A.;
2010-01-01
We present an overview of the Data Validation (DV) software component and its context within the Kepler Science Operations Center (SOC) pipeline and overall Kepler Science mission. The SOC pipeline performs a transiting planet search on the corrected light curves for over 150,000 targets across the focal plane array. We discuss the DV strategy for automated validation of Threshold Crossing Events (TCEs) generated in the transiting planet search. For each TCE, a transiting planet model is fitted to the target light curve. A multiple planet search is conducted by repeating the transiting planet search on the residual light curve after the model flux has been removed; if an additional detection occurs, a planet model is fitted to the new TCE. A suite of automated tests is performed after all planet candidates have been identified. We describe a centroid motion test to determine the significance of the motion of the target photocenter during transit and to estimate the coordinates of the transit source within the photometric aperture; a series of eclipsing binary discrimination tests on the parameters of the planet model fits to all transits and the sequences of odd and even transits; and a statistical bootstrap to assess the likelihood that the TCE would have been generated purely by chance given the target light curve with all transits removed. Keywords: photometry, data validation, Kepler, Earth-size planets
The mass of the black hole in 1A 0620-00, revisiting the ellipsoidal light curve modelling
NASA Astrophysics Data System (ADS)
van Grunsven, Theo F. J.; Jonker, Peter G.; Verbunt, Frank W. M.; Robinson, Edward L.
2017-12-01
The mass distribution of stellar-mass black holes can provide important clues to supernova modelling, but observationally it is still ill constrained. Therefore, it is of importance to make black hole mass measurements as accurate as possible. The X-ray transient 1A 0620-00 is well studied, with a published black hole mass of 6.61 ± 0.25 M⊙, based on an orbital inclination i of 51.0° ± 0.9°. This was obtained by Cantrell et al. (2010) as an average of independent fits to V-, I- and H-band light curves. In this work, we perform an independent check on the value of i by re-analysing existing YALO/SMARTS V-, I- and H-band photometry, using different modelling software and fitting strategy. Performing a fit to the three light curves simultaneously, we obtain a value for i of 54.1° ± 1.1°, resulting in a black hole mass of 5.86 ± 0.24 M⊙. Applying the same model to the light curves individually, we obtain 58.2° ± 1.9°, 53.6° ± 1.6° and 50.5° ± 2.2° for V-, I- and H-band, respectively, where the differences in best-fitting i are caused by the contribution of the residual accretion disc light in the three different bands. We conclude that the mass determination of this black hole may still be subject to systematic effects exceeding the statistical uncertainty. Obtaining more accurate masses would be greatly helped by continuous phase-resolved spectroscopic observations simultaneous with photometry.
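The sensitivity of the black hole mass to the inclination follows directly from the binary mass function, f = M_bh³ sin³(i) / (M_bh + M_c)². A sketch solving this for M_bh at the two inclinations quoted in the abstract; the mass function value and mass ratio below are illustrative values for an A0620-00-like system, not the paper's exact inputs.

```python
import numpy as np
from scipy.optimize import brentq

f = 2.76          # binary mass function (Msun), assumed
q = 0.06          # companion-to-black-hole mass ratio, assumed

def mass_eq(M, i_deg):
    # Root of this function in M is the black hole mass for inclination i
    Mc = q * M
    return M ** 3 * np.sin(np.radians(i_deg)) ** 3 / (M + Mc) ** 2 - f

for i_deg in (51.0, 54.1):
    M = brentq(mass_eq, 1.0, 30.0, args=(i_deg,))
    print(i_deg, round(M, 2))
```

With these assumed inputs, i = 51.0° lands near the published 6.6 M⊙ and i = 54.1° near 5.8 M⊙, showing how a ~3° change in inclination shifts the mass by roughly 0.8 M⊙, since M_bh scales as sin⁻³(i).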
NASA Astrophysics Data System (ADS)
Swensson, Richard G.; King, Jill L.; Good, Walter F.; Gur, David
2000-04-01
A constrained ROC formulation from probability summation is proposed for measuring observer performance in detecting abnormal findings on medical images. This assumes that the observer's detection or rating decision on each image is determined by a latent variable that characterizes the specific finding (type and location) considered most likely to be a target abnormality. For positive cases, this 'maximum-suspicion' variable is assumed to be either the value for the actual target or for the most suspicious non-target finding, whichever is the greater (more suspicious). Unlike the usual ROC formulation, this constrained formulation guarantees a 'well-behaved' ROC curve that always equals or exceeds chance-level decisions and cannot exhibit an upward 'hook.' Its estimated parameters specify the accuracy for separating positive from negative cases, and they also predict accuracy in locating or identifying the actual abnormal findings. The present maximum-likelihood procedure (which runs on a PC with Windows 95 or NT) fits this constrained formulation to rating-ROC data using normal distributions with two free parameters. Fits of the conventional and constrained ROC formulations are compared for continuous and discrete-scale ratings of chest films in a variety of detection problems, both for localized lesions (nodules, rib fractures) and for diffuse abnormalities (interstitial disease, infiltrates, or pneumothorax). The two fitted ROC curves are nearly identical unless the conventional ROC has an ill-behaved 'hook' below the constrained ROC.
NASA Astrophysics Data System (ADS)
Milani, G.; Milani, F.
A GUI software tool (GURU) for fitting experimental rheometer curves of Natural Rubber (NR) vulcanized with sulphur at different curing temperatures is presented. Experimental data are loaded automatically into GURU from an Excel spreadsheet produced by the experimental machine (moving die rheometer). To fit the experimental data, the general reaction scheme proposed by Han and co-workers for NR vulcanized with sulphur is considered. From the simplified kinetic scheme adopted, a closed-form solution can be found for the crosslink density, with the only limitation that the induction period is excluded from the computations. Three kinetic constants must be determined so as to minimize the absolute error between the normalized experimental data and the numerical prediction. Usually, this is achieved by standard least-squares data fitting. GURU, by contrast, works interactively through a Graphical User Interface (GUI) and allows interactive calibration of the kinetic constants by means of sliders. A simple mouse click on a slider assigns a value to each kinetic constant and gives a visual comparison between the numerical and experimental curves. Users thus find optimal values of the constants by a classic trial-and-error strategy. An experimental case of technical relevance is shown as a benchmark.
Applying a Hypoxia-Incorporating TCP Model to Experimental Data on Rat Sarcoma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruggieri, Ruggero, E-mail: ruggieri.ruggero@gmail.com; Stavreva, Nadejda; Naccarato, Stefania
2012-08-01
Purpose: To verify whether a tumor control probability (TCP) model which mechanistically incorporates acute and chronic hypoxia is able to describe animal in vivo dose-response data, exhibiting tumor reoxygenation. Methods and Materials: The investigated TCP model accounts for tumor repopulation, reoxygenation of chronic hypoxia, and fluctuating oxygenation of acute hypoxia. Using the maximum likelihood method, the model is fitted to Fischer-Moulder data on Wag/Rij rats, inoculated with rat rhabdomyosarcoma BA1112, and irradiated in vivo using different fractionation schemes. This data set is chosen because two of the experimental dose-response curves exhibit an inverse dose behavior, which is interpreted as due to reoxygenation. The tested TCP model is complex, and therefore, in vivo cell survival data on the same BA1112 cell line from Reinhold were added to Fischer-Moulder data and fitted simultaneously with a corresponding cell survival function. Results: The obtained fit to the combined Fischer-Moulder-Reinhold data was statistically acceptable. The best-fit values of the model parameters for which information exists were in the range of published values. The cell survival curves of well-oxygenated and hypoxic cells, computed using the best-fit values of the radiosensitivities and the initial number of clonogens, were in good agreement with the corresponding in vitro and in situ experiments of Reinhold. The best-fit values of most of the hypoxia-related parameters were used to recompute the TCP for non-small cell lung cancer patients as a function of the number of fractions, TCP(n). Conclusions: The investigated TCP model adequately describes animal in vivo data exhibiting tumor reoxygenation.
The TCP(n) curve computed for non-small cell lung cancer patients with the best-fit values of most of the hypoxia-related parameters confirms the previously obtained abrupt reduction in TCP for n < 10, thus warning against the adoption of severely hypofractionated schedules.
Mathematical and Statistical Software Index.
1986-08-01
geometric) mean; HMEAN - harmonic mean; MEDIAN - median; MODE - mode; QUANT - quantiles; OGIVE - distribution curve; IQRNG - interpercentile range; RANGE - range ... multiphase pivoting algorithm; cross-classification; multiple discriminant analysis; cross-tabulation; multiple-objective model; curve fitting ... Statistics) ... *RANGEX (Correct Correlations for Curtailment of Range) ... *RUMMAGE II (Analysis
A Software Tool for the Rapid Analysis of the Sintering Behavior of Particulate Bodies
2017-11-01
bounded by a region that the user selects via cross hairs. Future plot analysis features, such as more complicated curve fitting and modeling functions ... German RM. Grain growth behavior of tungsten heavy alloys based on the master sintering curve concept. Metallurgical and Materials Transactions A
The utility of laboratory animal data in toxicology depends upon the ability to generalize the results quantitatively to humans. To compare the acute behavioral effects of inhaled toluene in humans to those in animals, dose-effect curves were fitted by meta-analysis of published...
Annual variation in the atmospheric radon concentration in Japan.
Kobayashi, Yuka; Yasuoka, Yumi; Omori, Yasutaka; Nagahama, Hiroyuki; Sanada, Tetsuya; Muto, Jun; Suzuki, Toshiyuki; Homma, Yoshimi; Ihara, Hayato; Kubota, Kazuhito; Mukai, Takahiro
2015-08-01
Anomalous atmospheric variations in radon related to earthquakes have been observed in hourly exhaust-monitoring data from radioisotope institutes in Japan. The extraction of seismic anomalous radon variations would be greatly aided by understanding the normal pattern of variation in radon concentrations. Using atmospheric daily minimum radon concentration data from five sampling sites, we show that a sinusoidal regression curve can be fitted to the data. In addition, we identify areas where the atmospheric radon variation is significantly affected by the variation in atmospheric turbulence and the onshore-offshore pattern of Asian monsoons. Furthermore, by comparing the sinusoidal regression curve for the normal annual (seasonal) variations at the five sites to the sinusoidal regression curve for a previously published dataset of radon values at the five Japanese prefectures, we can estimate the normal annual variation pattern. By fitting sinusoidal regression curves to the previously published dataset containing sites in all Japanese prefectures, we find that 72% of the Japanese prefectures satisfy the requirements of the sinusoidal regression curve pattern. Using the normal annual variation pattern of atmospheric daily minimum radon concentration data, these prefectures are suitable areas for obtaining anomalous radon variations related to earthquakes. Copyright © 2015 Elsevier Ltd. All rights reserved.
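The annual sinusoidal regression described above reduces to a linear least-squares problem: for a fixed 365-day period, y = c + a·cos(ωt) + b·sin(ωt) is linear in (c, a, b). Below is a minimal pure-Python sketch; the function name and all numbers are illustrative, not taken from the paper.

```python
import math

def fit_annual_sinusoid(days, values, period=365.0):
    """Least-squares fit of y = c + a*cos(w*t) + b*sin(w*t), w = 2*pi/period.

    The model is linear in (c, a, b), so the normal equations
    (X^T X) p = X^T y give a closed-form solution.
    """
    w = 2.0 * math.pi / period
    X = [[1.0, math.cos(w * t), math.sin(w * t)] for t in days]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * y for r, y in zip(X, values)) for i in range(3)]
    # Solve the 3x3 system by Gauss-Jordan elimination with partial pivoting.
    A = [row + [v] for row, v in zip(XtX, Xty)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(3):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    c, a, b = (A[i][3] / A[i][i] for i in range(3))
    # Report as mean level, amplitude, and phase: y = c + R*cos(w*t + phi).
    return c, math.hypot(a, b), math.atan2(-b, a)

# Synthetic daily-minimum "radon" series with a known annual cycle plus a
# deterministic high-frequency stand-in for noise.
days = list(range(365))
values = [10.0 + 3.0 * math.cos(2.0 * math.pi * t / 365.0 + 0.5)
          + 0.1 * math.sin(17.0 * t) for t in days]
mean, amp, phase = fit_annual_sinusoid(days, values)
```

Because the high-frequency term is nearly orthogonal to the annual basis, the fit recovers the mean level, amplitude, and phase of the seasonal cycle almost exactly.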
Runoff potentiality of a watershed through SCS and functional data analysis technique.
Adham, M I; Shirazi, S M; Othman, F; Rahman, S; Yusop, Z; Ismail, Z
2014-01-01
Runoff potentiality of a watershed was assessed based on identifying the curve number (CN) using soil conservation service (SCS) and functional data analysis (FDA) techniques. Daily discrete rainfall data were collected from weather stations in the study area and smoothed with the lowess method. As runoff data represent a periodic pattern in each watershed, a Fourier series was introduced to fit the smooth curve of the eight watersheds. Seven Fourier terms were used for watersheds 5 and 8, while eight terms were used for the remaining watersheds for the best fit of the data. Bootstrapping smooth curve analysis reveals that watersheds 1, 2, 3, 6, 7, and 8 have monthly mean runoffs of 29, 24, 22, 23, 26, and 27 mm, respectively, and these watersheds would likely contribute to surface runoff in the study area. The purpose of this study was to transform runoff data into a smooth curve representing the surface runoff pattern and mean runoff of each watershed through statistical methods. This study provides information on the runoff potentiality of each watershed and also provides input data for hydrological modeling.
A mathematical function for the description of nutrient-response curve
Ahmadi, Hamed
2017-01-01
Several mathematical equations have been proposed to model nutrient-response curves for animals and humans, justified by goodness of fit and/or by the biological mechanism. In this paper, a functional form of a generalized quantitative model based on the Rayleigh distribution principle for describing nutrient-response phenomena is derived. The three parameters governing the curve a) have biological interpretations, b) may be used to calculate reliable estimates of nutrient-response relationships, and c) provide the basis for deriving relationships between nutrient and physiological responses. The new function was successfully applied to fit nutritional data obtained from 6 experiments covering a wide range of nutrients and responses. An evaluation and comparison were also done on simulated data sets to check the suitability of the new model and the four-parameter logistic model for describing nutrient responses. This study indicates the usefulness and wide applicability of the newly introduced, simple, and flexible model when applied as a quantitative approach to characterizing nutrient-response curves. This new mathematical way to describe nutritional-response data, with some useful biological interpretations, has the potential to be used as an alternative approach in modeling nutritional response curves to estimate nutrient efficiency and requirements. PMID:29161271
Can Tooth Preparation Design Affect the Fit of CAD/CAM Restorations?
Roperto, Renato Cassio; Oliveira, Marina Piolli; Porto, Thiago Soares; Ferreira, Lais Alaberti; Melo, Lucas Simino; Akkus, Anna
2017-03-01
The purpose of this study was to evaluate whether the marginal fit of computer-aided design and computer-aided manufacturing (CAD/CAM) restorations can be affected by different tooth preparation designs. Twenty-six typodont (plastic) teeth were divided into two groups (n = 13) according to the occlusal curvature of the tooth preparation: group 1 (control, flat occlusal design) and group 2 (curved occlusal design). The preparations were scanned, and crowns were milled from ceramic blocks. Blocks were cemented using epoxy glue on the pulpal floor only, and finger pressure was applied for 1 minute. On completion of the cementation step, the marginal misfit between restoration and abutment was measured by microphotography and the silicone replica technique, using light-body silicone material, on the mesial, distal, buccal, and lingual surfaces. Two-way ANOVA did not reveal a statistical difference between the flat (83.61 ± 50.72) and curved (79.04 ± 30.97) preparation designs. Buccal, mesial, lingual, and distal sites on the curved design showed smaller gaps than on the flat design. No difference was found on flat preparations among the mesial, buccal, and distal sites (P < .05). The lingual aspect did not differ from the distal site but showed a statistically significant difference from the mesial and buccal sites (P < .05). Difference in occlusal design did not significantly impact the marginal fit. Marginal fit was significantly affected by the location of the margin; lingual and distal locations exhibited greater margin gap values than buccal and mesial sites regardless of preparation design.
NASA Astrophysics Data System (ADS)
Liu, H. L.; Zhao, B. Y.; Yu, W. D.
2013-04-01
In this study, estimation of structure was accomplished with the use of deconvolution, second derivation and curve fitting. The structural changes of slenderized yak hair treated under heat-humidity conditions were quantified by analyzing the disulfide bond (S-S), amide I and amide III regions. The results showed that the amount of disulfide bonds in the yak hair decreases with increasing treatment time. The secondary structure of yak hair transforms from the α-helix and β-pleated sheet to the disordered conformation during the heat-humidity processing.
1992-01-01
studied. Shows the B-spline fit on the grouped curves and the local symmetries detected (their axes) (output of steps 1 and 4). OBJECT RECOGNITION 2.a ... The four lights are positioned so that the specular lobes of each light source do not intersect. Our primary study (Krumm and Shafer) has been on the ... segregation with a 3D representation is a consequence of grouping processes. The problem of dot clustering can also be studied from ... A 3D
Intensity Conserving Spectral Fitting
NASA Technical Reports Server (NTRS)
Klimchuk, J. A.; Patsourakos, S.; Tripathi, D.
2015-01-01
The detailed shapes of spectral line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. Detection of asymmetries in solar coronal emission lines is one example. Removal of line blends is another. We have developed an iterative procedure that corrects for this effect. It can be used with any fitting function, but we employ a cubic spline in a new analysis routine called Intensity Conserving Spline Interpolation (ICSI). As the name implies, it conserves the observed intensity within each wavelength bin, which ordinary fits do not. Given the rapid convergence, speed of computation, and ease of use, we suggest that ICSI be made a standard component of the processing pipeline for spectroscopic data.
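The bin-conserving iteration behind ICSI can be illustrated with a simplified stand-in: a piecewise-linear interpolant in place of the cubic spline, on uniform unit-width bins. Node values are nudged until the interpolant's average over each bin matches the measured bin average. All data and function names here are illustrative.

```python
def bin_averages(values):
    """Average of the piecewise-linear interpolant over each bin.

    Bins are centered on uniformly spaced nodes; for a piecewise-linear
    curve through the nodes, the average over interior bin k works out to
    (y[k-1] + 6*y[k] + y[k+1]) / 8.  Endpoint bins keep the node value.
    """
    avgs = values[:]
    for k in range(1, len(values) - 1):
        avgs[k] = (values[k - 1] + 6.0 * values[k] + values[k + 1]) / 8.0
    return avgs

def intensity_conserving_fit(measured, n_iter=60):
    """Iteratively nudge node values until the interpolant's bin averages
    reproduce the measured (bin-averaged) intensities -- the idea behind
    ICSI, with a linear interpolant standing in for the cubic spline."""
    v = list(measured)              # start from the naive assignment
    for _ in range(n_iter):
        avgs = bin_averages(v)
        v = [vk + (mk - ak) for vk, mk, ak in zip(v, measured, avgs)]
    return v

# A curved (quadratic) line profile on unit-width bins: the exact bin
# average of (x-4)^2 over [x_k - 1/2, x_k + 1/2] is the center value + 1/12.
xs = [float(k) for k in range(9)]
center_vals = [(x - 4.0) ** 2 for x in xs]
measured = [c + 1.0 / 12.0 for c in center_vals]
corrected = intensity_conserving_fit(measured)
```

The correction converges quickly (the error contracts by at least half per iteration for this interpolant), and the corrected node values shift away from the naive bin-average assignment wherever the profile is curved, which is exactly the effect the abstract describes.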
High pressure melting curve of platinum up to 35 GPa
NASA Astrophysics Data System (ADS)
Patel, Nishant N.; Sunder, Meenakshi
2018-04-01
The melting curve of platinum (Pt) has been measured up to 35 GPa using our laboratory-based laser-heated diamond anvil cell (LHDAC) facility. The laser speckle method was employed to detect the onset of melting. The high pressure melting curve of Pt obtained in the present study has been compared with previously reported experimental and theoretical results. The measured melting curve agrees, within experimental error, with the results of Kavner et al. Fitting the experimental data with the Simon equation gives (∂Tm/∂P) ≈ 25 K/GPa at P ≈ 1 MPa.
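A Simon-equation fit can be sketched as follows; the material parameters, grids, and data are illustrative, not the Pt measurements. With the ambient melting point T0 held fixed, a brute-force search over the Simon parameters (a, c) minimizes the squared error, and the initial slope follows from dTm/dP at P = 0 being T0/(a·c).

```python
def simon_melting_T(P, T0, a, c):
    """Simon equation T_m(P) = T0 * (1 + P/a)**(1/c); P in GPa, T in K."""
    return T0 * (1.0 + P / a) ** (1.0 / c)

def fit_simon(pressures, temps, T0, a_grid, c_grid):
    """Brute-force least-squares search for the Simon parameters (a, c),
    holding the ambient-pressure melting point T0 fixed."""
    best = None
    for a in a_grid:
        for c in c_grid:
            sse = sum((simon_melting_T(P, T0, a, c) - T) ** 2
                      for P, T in zip(pressures, temps))
            if best is None or sse < best[0]:
                best = (sse, a, c)
    return best[1], best[2]

# Synthetic melting data from known parameters (illustrative values only).
T0, a_true, c_true = 2000.0, 25.0, 3.0
pressures = [0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0]
temps = [simon_melting_T(P, T0, a_true, c_true) for P in pressures]
a_grid = [20.0 + 0.5 * i for i in range(21)]   # 20.0 .. 30.0
c_grid = [2.0 + 0.1 * i for i in range(21)]    # 2.0 .. 4.0
a_fit, c_fit = fit_simon(pressures, temps, T0, a_grid, c_grid)
slope = T0 / (a_fit * c_fit)   # dTm/dP at P = 0, in K/GPa
```

A production fit would use a proper nonlinear optimizer and propagate measurement uncertainties; the grid search here just makes the least-squares structure explicit.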
[Keratoconus special soft contact lens fitting].
Yamazaki, Ester Sakae; da Silva, Vanessa Cristina Batista; Morimitsu, Vagner; Sobrinho, Marcelo; Fukushima, Nelson; Lipener, César
2006-01-01
To evaluate the fitting and use of a soft contact lens in keratoconic patients. Retrospective study on 80 eyes of 66 patients fitted with a special soft contact lens for keratoconus at the Contact Lens Section of UNIFESP and private clinics. Keratoconus was classified according to degree of disease severity by keratometric pattern. Age, gender, diagnosis, keratometry, visual acuity, spherical equivalent (SE), base curve and clinical indication were recorded. Of the 66 patients (80 eyes) with keratoconus, the mean age was 29 years; 51.5% were men and 48.5% women. According to the groups: 15.0% were incipient, 53.7% moderate, 26.3% advanced and 5.0% severe. The majority of the eyes of patients using contact lenses (91.25%) achieved visual acuity better than 20/40. Of the 80 eyes, 58% were fitted with lenses of spherical power (mean -5.45 diopters) and 41% with spherocylindrical power (from -0.5 to -5.00 cylindrical diopters). The most frequent base curve was 7.6, in 61% of the eyes. The main reasons for fitting this special lens were reduced tolerance and the poor fitting pattern achieved with other lenses. The special soft contact lens is useful for fitting difficult keratoconic patients, offering comfort and improving visual rehabilitation, and may allow more patients to postpone the need for corneal transplant.
Why "suboptimal" is optimal: Jensen's inequality and ectotherm thermal preferences.
Martin, Tara Laine; Huey, Raymond B
2008-03-01
Body temperature (T(b)) profoundly affects the fitness of ectotherms. Many ectotherms use behavior to control T(b) within narrow levels. These temperatures are assumed to be optimal and therefore to match body temperatures (Trmax) that maximize fitness (r). We develop an optimality model and find that optimal body temperature (T(o)) should not be centered at Trmax but shifted to a lower temperature. This finding seems paradoxical but results from two considerations relating to Jensen's inequality, which deals with how variance and skew influence integrals of nonlinear functions. First, ectotherms are not perfect thermoregulators and so experience a range of T(b). Second, temperature-fitness curves are asymmetric, such that a T(b) higher than Trmax depresses fitness more than will a T(b) displaced an equivalent amount below Trmax. Our model makes several predictions. The magnitude of the optimal shift (Trmax - To) should increase with the degree of asymmetry of temperature-fitness curves and with T(b) variance. Deviations should be relatively large for thermal specialists but insensitive to whether fitness increases with Trmax ("hotter is better"). Asymmetric (left-skewed) T(b) distributions reduce the magnitude of the optimal shift but do not eliminate it. Comparative data (insects, lizards) support key predictions. Thus, "suboptimal" is optimal.
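The optimality argument above can be made concrete numerically: with an asymmetric temperature-fitness curve and normally distributed body temperatures, the preferred temperature To that maximizes expected fitness falls below Trmax. All curve shapes and parameter values here are illustrative, not from the paper.

```python
import math

TRMAX = 35.0   # body temperature maximizing fitness (illustrative)

def fitness(T, rise=4.0, fall=1.5):
    """Asymmetric thermal performance curve: gentle Gaussian rise below
    TRMAX, steep Gaussian fall above it (shape and widths illustrative)."""
    s = rise if T <= TRMAX else fall
    return math.exp(-((T - TRMAX) / s) ** 2)

def expected_fitness(To, sd=2.0, n=1001):
    """E[r(Tb)] for Tb ~ Normal(To, sd): imperfect thermoregulation means
    realized fitness is an average over a spread of body temperatures,
    which is where Jensen's inequality bites. Simple grid quadrature over
    +/- 4 standard deviations."""
    total, wsum = 0.0, 0.0
    for i in range(n):
        T = To - 4.0 * sd + 8.0 * sd * i / (n - 1)
        w = math.exp(-0.5 * ((T - To) / sd) ** 2)
        wsum += w
        total += w * fitness(T)
    return total / wsum

# Scan candidate preferred temperatures: the optimum sits below TRMAX.
candidates = [30.0 + 0.05 * k for k in range(201)]    # 30.0 .. 40.0 C
best_To = max(candidates, key=expected_fitness)
optimal_shift = TRMAX - best_To    # the predicted downward shift
```

Widening the Tb spread (larger sd) or sharpening the fall above TRMAX both enlarge the shift, matching the model's predictions in the abstract.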
Investigation of skin structures based on infrared wave parameter indirect microscopic imaging
NASA Astrophysics Data System (ADS)
Zhao, Jun; Liu, Xuefeng; Xiong, Jichuan; Zhou, Lijuan
2017-02-01
Detailed imaging and analysis of skin structures are becoming increasingly important in modern healthcare and clinical diagnosis. Nanometer-resolution imaging techniques such as SEM and AFM can damage the sample and cannot measure the whole skin structure from the very surface through the epidermis and dermis to the subcutaneous layer. Conventional optical microscopy has the highest imaging efficiency, flexibility in on-site applications and the lowest manufacturing and usage cost, but its image resolution is too low to be accepted for biomedical analysis. Infrared parameter indirect microscopic imaging (PIMI) uses an infrared laser as the light source because of its high transmission in skin. The polarization of the optical wave through the skin sample was modulated while the variation of the optical field was observed at the imaging plane. The intensity variation curve of each pixel was fitted to extract the near-field polarization parameters to form indirect images. During the through-skin light modulation and image retrieval process, the curve fitting removes the blurring scattering from neighboring pixels and keeps only the field variations related to local skin structures. Using infrared PIMI, we can break the diffraction limit and bring wide-field optical image resolution to sub-200 nm, while taking advantage of the high transmission of infrared waves in skin structures.
Effect analysis of oil paint on the space optical contamination
NASA Astrophysics Data System (ADS)
Lu, Chun-lian; Lv, He; Han, Chun-xu; Wei, Hai-Bin
2013-08-01
The space contamination of spacecraft surfaces is a hot topic in spacecraft environment engineering and environmental protection of spacecraft. Since the 20th century, many American satellites have malfunctioned because of space contamination. Space optical systems are usually exposed to the external space environment. Particulate contamination of optical systems degrades their detection ability; we call this optical damage. It also has a bad influence on the spectral imaging quality of the whole system. In this paper, the effects of contamination on spectral imaging were discussed. An experiment was designed to observe the effect, and numerical curve fitting was used to analyze the relationship between the optical damage factor (transmittance decay factor) and the contamination degree of the optical system. We give results for six specific wavelengths from 450 to 700 nm and obtain the function relating the optical damage factor to the contamination degree. Three colors of oil paint were chosen for comparison. Through numerical curve fitting and data processing, we obtained the mass thickness for the different colors of oil paint at which transmittance decreased to 50% and 30%. Comparisons and research conclusions are given concerning the contamination effects of oil paint on the spectral imaging system.
NASA Astrophysics Data System (ADS)
Podder, M. S.; Majumder, C. B.
2016-11-01
The optimization of the biosorption/bioaccumulation process for both As(III) and As(V) has been investigated using a biosorbent: a biofilm of Corynebacterium glutamicum MTCC 2745 supported on a granular activated carbon/MnFe2O4 composite (MGAC). The presence of functional groups on the cell wall surface of the biomass that may interact with the metal ions was confirmed by FT-IR. Isotherm studies were performed for As(III) and As(V) using 30 isotherm models, employing non-linear regression curve-fitting analysis to determine the most appropriate correlation for the equilibrium curves. The pattern of biosorption/bioaccumulation fitted well with the Vieth-Sladek isotherm model for As(III) and the Brouers-Sotolongo and Fritz-Schlunder-V isotherm models for As(V). The maximum biosorption/bioaccumulation capacities estimated using the Langmuir model were 2584.668 mg/g for As(III) and 2651.675 mg/g for As(V) at 30 °C and 220 min contact time. The results showed that As(III) and As(V) removal was strongly pH-dependent, with an optimum pH value of 7.0. D-R isotherm studies indicated that ion exchange might play a prominent role.
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Shi, Xiaodong; Udpa, Lalita; Deng, Yiming
2018-05-01
Magnetic Barkhausen noise (MBN) was measured in low carbon steels and the relationship between carbon content and a parameter extracted from the MBN signal has been investigated. The parameter is extracted experimentally by fitting the original profiles with two Gaussian curves. The gap between the two peaks (ΔG) of the fitted Gaussian curves shows a better linear relationship with the carbon content of the samples in the experiment. The result has been validated by Monte Carlo simulation. To ensure measurement sensitivity, the advanced multi-objective optimization algorithm non-dominated sorting genetic algorithm III (NSGA-III) was used to optimize the magnetic core of the sensor.
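The peak-gap extraction can be sketched as below. For simplicity this sketch holds the amplitudes and widths of the two Gaussians fixed (an assumption of the sketch; a real fit would free them too, e.g. with Levenberg-Marquardt) and grid-searches only the two centers. The profile data and all parameter values are illustrative.

```python
import math

def two_gaussians(x, c1, c2, a1=1.0, a2=0.7, s1=1.0, s2=1.2):
    """Sum of two Gaussian peaks; amplitudes and widths fixed for this sketch."""
    return (a1 * math.exp(-0.5 * ((x - c1) / s1) ** 2)
            + a2 * math.exp(-0.5 * ((x - c2) / s2) ** 2))

def fit_peak_gap(xs, ys, grid):
    """Grid-search the two peak centers (c1 < c2) minimizing the sum of
    squared errors, then report the peak gap Delta-G = c2 - c1."""
    best = None
    for i, c1 in enumerate(grid):
        for c2 in grid[i + 1:]:
            sse = sum((two_gaussians(x, c1, c2) - y) ** 2
                      for x, y in zip(xs, ys))
            if best is None or sse < best[0]:
                best = (sse, c1, c2)
    return best[2] - best[1], (best[1], best[2])

# Synthetic MBN-like envelope with known peak centers -1.0 and +1.5.
xs = [-5.0 + 0.1 * i for i in range(101)]
ys = [two_gaussians(x, -1.0, 1.5) for x in xs]
grid = [-3.0 + 0.25 * k for k in range(25)]    # candidate centers, -3.0 .. 3.0
delta_g, centers = fit_peak_gap(xs, ys, grid)
```

On the synthetic profile the search recovers the true centers exactly (they lie on the grid), giving ΔG = 2.5 in these units.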
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
The light curve of SN 1987A revisited: constraining production masses of radioactive nuclides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seitenzahl, Ivo R.; Timmes, F. X.; Magkotsios, Georgios, E-mail: ivo.seitenzahl@anu.edu.au
2014-09-01
We revisit the evidence for the contribution of the long-lived radioactive nuclides ^44Ti, ^55Fe, ^56Co, ^57Co, and ^60Co to the UVOIR light curve of SN 1987A. We show that the V-band luminosity constitutes a roughly constant fraction of the bolometric luminosity between 900 and 1900 days, and we obtain an approximate bolometric light curve out to 4334 days by scaling the late time V-band data by a constant factor where no bolometric light curve data is available. Considering the five most relevant decay chains starting at ^44Ti, ^55Co, ^56Ni, ^57Ni, and ^60Co, we perform a least squares fit to the constructed composite bolometric light curve. For the nickel isotopes, we obtain best fit values of M(^56Ni) = (7.1 ± 0.3) × 10^-2 M⊙ and M(^57Ni) = (4.1 ± 1.8) × 10^-3 M⊙. Our best fit ^44Ti mass is M(^44Ti) = (0.55 ± 0.17) × 10^-4 M⊙, which is in disagreement with the much higher (3.1 ± 0.8) × 10^-4 M⊙ recently derived from INTEGRAL observations. The associated uncertainties far exceed the best fit values for ^55Co and ^60Co and, as a result, we only give upper limits on the production masses of M(^55Co) < 7.2 × 10^-3 M⊙ and M(^60Co) < 1.7 × 10^-4 M⊙. Furthermore, we find that the leptonic channels in the decay of ^57Co (internal conversion and Auger electrons) are a significant contribution and constitute up to 15.5% of the total luminosity. Consideration of the kinetic energy of these electrons is essential in lowering our best fit nickel isotope production ratio to [^57Ni/^56Ni] = 2.5 ± 1.1, which is still somewhat high but is in agreement with gamma-ray observations and model predictions.
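The least-squares fit over decay chains is linear in the production masses, since each chain contributes its mass times a known exponential template. A two-chain sketch using the published half-lives of ^56Co (77.2 d) and ^57Co (271.7 d); the per-unit-mass decay powers q1, q2 and the luminosity values are placeholders in arbitrary units, not physical constants.

```python
import math

# Mean lifetimes in days: tau = half-life / ln 2.
TAU_CO56 = 77.2 / math.log(2)     # ~111 d (from the 56Ni -> 56Co chain)
TAU_CO57 = 271.7 / math.log(2)    # ~392 d

def fit_two_masses(times, lum, q1=1.0, q2=0.1):
    """Fit L(t) = M1*q1*exp(-t/tau1) + M2*q2*exp(-t/tau2) for the masses.

    q1, q2 stand in for per-unit-mass decay powers (arbitrary units here).
    The masses enter linearly, so the 2x2 normal equations solve the fit.
    """
    f1 = [q1 * math.exp(-t / TAU_CO56) for t in times]
    f2 = [q2 * math.exp(-t / TAU_CO57) for t in times]
    s11 = sum(a * a for a in f1)
    s12 = sum(a * b for a, b in zip(f1, f2))
    s22 = sum(b * b for b in f2)
    b1 = sum(a * y for a, y in zip(f1, lum))
    b2 = sum(b * y for b, y in zip(f2, lum))
    det = s11 * s22 - s12 * s12
    return (s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det

# Synthetic "bolometric light curve" sampled between 200 and 1500 days.
M1_true, M2_true = 0.07, 0.004      # arbitrary mass units
times = [200.0 + 50.0 * k for k in range(27)]
lum = [M1_true * 1.0 * math.exp(-t / TAU_CO56)
       + M2_true * 0.1 * math.exp(-t / TAU_CO57) for t in times]
M1_fit, M2_fit = fit_two_masses(times, lum)
```

The two decay timescales differ enough over this baseline that the normal equations are well conditioned and both masses are recovered; the actual analysis fits five chains (and the leptonic channels) the same way.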
NASA Astrophysics Data System (ADS)
Le, Jia-Liang; Bažant, Zdeněk P.
2011-07-01
This paper extends the theoretical framework presented in the preceding Part I to the lifetime distribution of quasibrittle structures failing at the fracture of one representative volume element under constant-amplitude fatigue. The probability distribution of the critical stress amplitude is derived for a given number of cycles and a given minimum-to-maximum stress ratio. The physical mechanism underlying the Paris law for fatigue crack growth is explained under certain plausible assumptions about the damage accumulation in the cyclic fracture process zone at the tip of a subcritical crack. This law is then used to relate the probability distribution of critical stress amplitude to the probability distribution of fatigue lifetime. The theory naturally yields a power-law relation for the stress-life curve (S-N curve), which agrees with Basquin's law. Furthermore, the theory indicates that, for quasibrittle structures, the S-N curve must be size dependent. Finally, a physical explanation is provided for the experimentally observed systematic deviations of lifetime histograms of various ceramics and bones from the Weibull distribution, and their close fits by the present theory are demonstrated.
Drop shape visualization and contact angle measurement on curved surfaces.
Guilizzoni, Manfredo
2011-12-01
The shapes and contact angles of drops on curved surfaces are experimentally investigated. Image processing, spline fitting and numerical integration are used to extract the drop contour in a number of cross-sections. The three-dimensional surfaces which describe the surface-air and drop-air interfaces can be visualized, and a simple procedure to determine the equilibrium contact angle starting from measurements on curved surfaces is proposed. Contact angles on flat surfaces serve as a reference term and a procedure to measure them is proposed. Such a procedure is not as accurate as the axisymmetric drop shape analysis algorithms, but it has the advantage of requiring only a side view of the drop-surface couple and no further information. It can therefore be used also for fluids with unknown surface tension, and there is no need to measure the drop volume. Examples of application of the proposed techniques for distilled water drops on gemstones confirm that they can be useful for drop shape analysis and contact angle measurement on three-dimensional sculptured surfaces. Copyright © 2011 Elsevier Inc. All rights reserved.
Linking the Climate and Thermal Phase Curve of 55 Cancri e
NASA Astrophysics Data System (ADS)
Hammond, Mark; Pierrehumbert, Raymond T.
2017-11-01
The thermal phase curve of 55 Cancri e is the first measurement of the temperature distribution of a tidally locked super-Earth, but raises a number of puzzling questions about the planet's climate. The phase curve has a high amplitude and peak offset, suggesting that it has a significant eastward hot-spot shift as well as a large day-night temperature contrast. We use a general circulation model to model potential climates, and investigate the relation between bulk atmospheric composition and the magnitude of these seemingly contradictory features. We confirm that theoretical models of tidally locked circulation are consistent with our numerical model of 55 Cnc e, and rule out certain atmospheric compositions based on their thermodynamic properties. Our best-fitting atmosphere has a significant hot-spot shift and day-night contrast, although these are not as large as the observed phase curve. We discuss possible physical processes that could explain the observations, and show that night-side cloud formation from species such as SiO from a day-side magma ocean could potentially increase the phase curve amplitude and explain the observations. We conclude that the observations could be explained by an optically thick atmosphere with a low mean molecular weight, a surface pressure of several bars, and a strong eastward circulation, with night-side cloud formation a possible explanation for the difference between our model and the observations.
STACCATO: a novel solution to supernova photometric classification with biased training sets
NASA Astrophysics Data System (ADS)
Revsbech, E. A.; Trotta, R.; van Dyk, D. A.
2018-01-01
We present a new solution to the problem of classifying Type Ia supernovae from their light curves alone given a spectroscopically confirmed but biased training set, circumventing the need to obtain an observationally expensive unbiased training set. We use Gaussian processes (GPs) to model the supernovae's (SNe's) light curves, and demonstrate that the choice of covariance function has only a small influence on the GPs' ability to accurately classify SNe. We extend and improve the approach of Richards et al. - a diffusion map combined with a random forest classifier - to deal specifically with the case of biased training sets. We propose a novel method called Synthetically Augmented Light Curve Classification (STACCATO) that synthetically augments a biased training set by generating additional training data from the fitted GPs. Key to the success of the method is the partitioning of the observations into subgroups based on their propensity score of being included in the training set. Using simulated light curve data, we show that STACCATO increases performance, as measured by the area under the Receiver Operating Characteristic curve (AUC), from 0.93 to 0.96, close to the AUC of 0.977 obtained using the 'gold standard' of an unbiased training set and significantly improving on the previous best result of 0.88. STACCATO also increases the true positive rate for SNIa classification by up to a factor of 50 for high-redshift/low-brightness SNe.
Computer analysis of three-dimensional morphological characteristics of the bile duct
NASA Astrophysics Data System (ADS)
Ma, Jinyuan; Chen, Houjin; Peng, Yahui; Shang, Hua
2017-01-01
In this paper, a computer image-processing algorithm for analyzing the morphological characteristics of bile ducts in Magnetic Resonance Cholangiopancreatography (MRCP) images was proposed. The algorithm consisted of mathematical morphology methods, including erosion, closing, and skeletonization, and a spline curve fitting method to obtain the length and curvature of the center line of the bile duct. Across the 10 cases, the average length of the bile duct was 14.56 cm, and the maximum curvature was in the range of 0.111 to 2.339. These experimental results show that using the computer image-processing algorithm to assess the morphological characteristics of the bile duct is feasible; further research is needed to evaluate its potential clinical value.
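The center-line curvature computation described above can be sketched with central finite differences on a fitted curve's sample points; the circle test data below are illustrative, not from the paper:

```python
import math

def curvature(xs, ys):
    """Discrete curvature kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2),
    estimated with central differences at interior points.  For uniform
    parameter spacing the step size cancels out of the formula."""
    ks = []
    for i in range(1, len(xs) - 1):
        dx = (xs[i + 1] - xs[i - 1]) / 2.0
        dy = (ys[i + 1] - ys[i - 1]) / 2.0
        ddx = xs[i + 1] - 2.0 * xs[i] + xs[i - 1]
        ddy = ys[i + 1] - 2.0 * ys[i] + ys[i - 1]
        ks.append(abs(dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5)
    return ks

# Sanity check on a circle of radius 2: true curvature is 1/r = 0.5.
ts = [2.0 * math.pi * i / 200 for i in range(201)]
xs = [2.0 * math.cos(t) for t in ts]
ys = [2.0 * math.sin(t) for t in ts]
ks = curvature(xs, ys)
```

In practice the same formula would be evaluated on points sampled from the spline fitted to the skeletonized center line.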
Activation Energies of Fragmentations of Disaccharides by Tandem Mass Spectrometry
NASA Astrophysics Data System (ADS)
Kuki, Ákos; Nagy, Lajos; Szabó, Katalin E.; Antal, Borbála; Zsuga, Miklós; Kéki, Sándor
2014-03-01
A simple multiple collision model for collision induced dissociation (CID) in a quadrupole was applied for the estimation of the activation energy (Eo) of the fragmentation processes for lithiated and trifluoroacetated disaccharides, such as maltose, cellobiose, isomaltose, gentiobiose, and trehalose. The internal energy-dependent rate constants k(Eint) were calculated using the Rice-Ramsperger-Kassel-Marcus (RRKM) or the Rice-Ramsperger-Kassel (RRK) theory. The Eo values were estimated by fitting the calculated survival yield (SY) curves to the experimental ones. The calculated Eo values of the fragmentation processes for lithiated disaccharides were in the range of 1.4-1.7 eV, and were found to increase in the order trehalose < maltose < isomaltose < cellobiose < gentiobiose.
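A minimal sketch of the classical RRK rate constant and the resulting survival yield used in this kind of fit; the frequency factor, oscillator count, and reaction time below are illustrative assumptions, not values from the paper:

```python
import math

def rrk_rate(e_int, e0, nu=1e13, s=10):
    """Classical RRK rate constant k(E) = nu * ((E - E0)/E)**(s - 1),
    zero below the activation energy E0.  Energies in eV; nu (s^-1)
    and the oscillator count s are illustrative, not fitted values."""
    if e_int <= e0:
        return 0.0
    return nu * ((e_int - e0) / e_int) ** (s - 1)

def survival_yield(e_int, e0, t=1e-4):
    """Fraction of precursor ions still intact after residence time t (s):
    SY = exp(-k(E) * t)."""
    return math.exp(-rrk_rate(e_int, e0) * t)

sy_low = survival_yield(1.6, 1.5)   # just above threshold: mostly intact
sy_high = survival_yield(2.0, 1.5)  # well above threshold: fully dissociated
```

Fitting then amounts to adjusting E0 until the computed SY(E) curve matches the experimental one.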
Garrido, M; Larrechi, M S; Rius, F X
2006-02-01
This study describes the combination of multivariate curve resolution-alternating least squares with a kinetic modeling strategy for obtaining the kinetic rate constants of a curing reaction of epoxy resins. The reaction between phenyl glycidyl ether and aniline is monitored by near-infrared spectroscopy under isothermal conditions for several initial molar ratios of the reagents. The data for all experiments, arranged in a column-wise augmented data matrix, are analyzed using multivariate curve resolution-alternating least squares. The concentration profiles recovered are fitted to a chemical model proposed for the reaction. The selection of the kinetic model is assisted by the information contained in the recovered concentration profiles. The nonlinear fitting provides the kinetic rate constants. The optimized rate constants are in agreement with values reported in the literature.
NASA Astrophysics Data System (ADS)
Szalai, Robert; Ehrhardt, David; Haller, George
2017-06-01
In a nonlinear oscillatory system, spectral submanifolds (SSMs) are the smoothest invariant manifolds tangent to linear modal subspaces of an equilibrium. Amplitude-frequency plots of the dynamics on SSMs provide the classic backbone curves sought in experimental nonlinear model identification. Here we develop a methodology to compute analytically both the shape of SSMs and their corresponding backbone curves from a data-assimilating model fitted to experimental vibration signals. This model identification utilizes Takens's delay-embedding theorem, as well as a least-squares fit to the Taylor expansion of the sampling map associated with that embedding. The SSMs are then constructed for the sampling map using the parametrization method for invariant manifolds, which assumes that the manifold is an embedding of, rather than a graph over, a spectral subspace. Using examples of both synthetic and real experimental data, we demonstrate that this approach reproduces backbone curves with high accuracy.
NASA Astrophysics Data System (ADS)
Meng, Xiao; Wang, Lai; Hao, Zhibiao; Luo, Yi; Sun, Changzheng; Han, Yanjun; Xiong, Bing; Wang, Jian; Li, Hongtao
2016-01-01
Efficiency droop is currently one of the most actively studied problems for GaN-based light-emitting diodes (LEDs). In this work, a differential carrier lifetime measurement system is optimized to accurately determine the carrier lifetimes (τ) of blue and green LEDs under different injection currents (I). By fitting the τ-I curves and the efficiency droop curves of the LEDs according to the ABC carrier rate equation model, the impact of Auger recombination and carrier leakage on efficiency droop can be characterized simultaneously. For the samples used in this work, it is found that the experimental τ-I curves cannot be described by Auger recombination alone. Instead, satisfactory fitting results are obtained by taking both carrier leakage and carrier delocalization into account, which implies that carrier leakage plays a more significant role in efficiency droop at high injection levels.
ROC analysis of diagnostic performance in liver scintigraphy.
Fritz, S L; Preston, D F; Gallagher, J H
1981-02-01
Studies on the accuracy of liver scintigraphy for the detection of metastases were assembled from 38 sources in the medical literature. An ROC curve was fitted to the observed values of sensitivity and specificity using an algorithm developed by Ogilvie and Creelman. This ROC curve fitted the data better than average sensitivity and specificity values in each of four subsets of the data. For the subset dealing with Tc-99m sulfur colloid scintigraphy, performed for detection of suspected metastases and containing data on 2800 scans from 17 independent series, it was not possible to reject the hypothesis that interobserver variation was entirely due to the use of different decision thresholds by the reporting clinicians. Thus the ROC curve obtained is a reasonable baseline estimate of the performance potentially achievable in today's clinical setting. Comparison of new reports with these data is possible, but is limited by the small sample sizes in most reported series.
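A two-parameter ROC model commonly fitted to pooled sensitivity/specificity pairs is the binormal model; the sketch below shows the model and its closed-form AUC (parameter values are illustrative, and this is a stand-in illustration rather than the Ogilvie-Creelman algorithm itself):

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi_inv(p):
    """Inverse normal CDF by bisection (adequate for a sketch)."""
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def binormal_tpr(fpr, a, b):
    """Binormal ROC: sensitivity TPR = phi(a + b * phi_inv(FPR))."""
    return phi(a + b * phi_inv(fpr))

def binormal_auc(a, b):
    """Closed-form area under the binormal ROC curve."""
    return phi(a / math.sqrt(1.0 + b * b))

# Trapezoidal integration of the curve should recover the closed form.
fprs = [i / 1000.0 for i in range(1001)]
tprs = [binormal_tpr(f, 1.0, 1.0) for f in fprs]
auc_numeric = sum(0.001 * (tprs[i] + tprs[i + 1]) / 2.0 for i in range(1000))
```

Fitting would then choose (a, b) to best match the observed (1 - specificity, sensitivity) points from the assembled series.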
Goodford, P J; St-Louis, J; Wootton, R
1978-01-01
1. Oxygen dissociation curves have been measured for human haemoglobin solutions with different concentrations of the allosteric effectors 2,3-diphosphoglycerate, adenosine triphosphate and inositol hexaphosphate. 2. Each effector produces a concentration dependent right shift of the oxygen dissociation curve, but a point is reached where the shift is maximal and increasing the effector concentration has no further effect. 3. Mathematical models based on the Monod, Wyman & Changeux (1965) treatment of allosteric proteins have been fitted to the data. For each compound the simple two-state model and its extension to take account of subunit inequivalence were shown to be inadequate, and a better fit was obtained by allowing the effector to lower the oxygen affinity of the deoxy conformational state as well as binding preferentially to this conformation. PMID:722582
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, S.; Gezari, S.; Heinis, S.
2015-03-20
We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time-series in four Pan-STARRS1 photometric bands: g_P1, r_P1, i_P1, and z_P1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model; and one stochastic light-curve model, the Ornstein-Uhlenbeck process, in order to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm on these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets.
We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets, to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
Inferring probabilistic stellar rotation periods using Gaussian processes
NASA Astrophysics Data System (ADS)
Angus, Ruth; Morton, Timothy; Aigrain, Suzanne; Foreman-Mackey, Daniel; Rajpaul, Vinesh
2018-02-01
Variability in the light curves of spotted, rotating stars is often non-sinusoidal and quasi-periodic - spots move on the stellar surface and have finite lifetimes, causing stellar flux variations to slowly shift in phase. A strictly periodic sinusoid therefore cannot accurately model a rotationally modulated stellar light curve. Physical models of stellar surfaces have many drawbacks preventing effective inference, such as highly degenerate or high-dimensional parameter spaces. In this work, we test an appropriate effective model: a Gaussian Process with a quasi-periodic covariance kernel function. This highly flexible model allows sampling of the posterior probability density function of the periodic parameter, marginalizing over the other kernel hyperparameters using a Markov Chain Monte Carlo approach. To test the effectiveness of this method, we infer rotation periods from 333 simulated stellar light curves, demonstrating that the Gaussian process method produces periods that are more accurate than both a sine-fitting periodogram and an autocorrelation function method. We also demonstrate that it works well on real data, by inferring rotation periods for 275 Kepler stars with previously measured periods. We provide a table of rotation periods for these and many more, altogether 1102 Kepler objects of interest, and their posterior probability density function samples. Because this method delivers posterior probability density functions, it will enable hierarchical studies involving stellar rotation, particularly those involving population modelling, such as inferring stellar ages, obliquities in exoplanet systems, or characterizing star-planet interactions. The code used to implement this method is available online.
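The quasi-periodic covariance kernel at the heart of this method can be sketched as follows; the hyperparameter values are illustrative placeholders, not those used in the paper:

```python
import math

def qp_kernel(t1, t2, amp=1.0, period=25.0, l_per=0.5, l_evol=100.0):
    """Quasi-periodic covariance: a periodic (ExpSine2-style) term damped
    by a squared-exponential envelope, so correlations recur once per
    rotation `period` but decay over the spot-evolution timescale `l_evol`
    (all in days; values here are illustrative)."""
    dt = t1 - t2
    periodic = math.exp(-math.sin(math.pi * dt / period) ** 2
                        / (2.0 * l_per ** 2))
    envelope = math.exp(-dt ** 2 / (2.0 * l_evol ** 2))
    return amp ** 2 * periodic * envelope

k_same = qp_kernel(0.0, 0.0)   # variance at zero lag
k_half = qp_kernel(0.0, 12.5)  # half a rotation later: weak correlation
k_full = qp_kernel(0.0, 25.0)  # one full rotation later: strong again
```

In the full method, the period hyperparameter of this kernel is sampled with MCMC while the remaining hyperparameters are marginalized over.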
NASA Astrophysics Data System (ADS)
Repetto, P.; Martínez-García, E. E.; Rosado, M.; Gabbasov, R.
2018-06-01
In this paper, we derive a novel circular velocity relation for a test particle in a 3D gravitational potential, applicable to every system of curvilinear coordinates that can be reduced to orthogonal form. As an illustration of the potential of the derived circular velocity expression, we perform a rotation curve analysis of UGC 8490 and UGC 9753 and estimate the total and dark matter mass of these two galaxies under the assumption that their respective dark matter haloes have spherical, prolate, or oblate spheroidal mass distributions. We employ stellar population synthesis models and the total H I density map to obtain the stellar and H I+He+metals rotation curves of both galaxies. Subtracting the stellar plus gas rotation curves from the observed rotation curves of UGC 8490 and UGC 9753 yields the dark matter circular velocity curves of both galaxies. We fit the dark matter rotation curves of UGC 8490 and UGC 9753 through the newly established circular velocity formula specialized to the spherical, prolate, and oblate spheroidal mass distributions, considering the Navarro, Frenk, and White, Burkert, Di Cintio, Einasto, and Stadel dark matter haloes. Our principal findings are the following: globally, the cored dark matter profiles (Burkert and Einasto) prevail over the cuspy ones (Navarro, Frenk, and White, and Di Cintio), and spherical/oblate dark matter models fit the dark matter rotation curves of both galaxies better than prolate dark matter haloes.
Bayesian Analysis of Longitudinal Data Using Growth Curve Models
ERIC Educational Resources Information Center
Zhang, Zhiyong; Hamagami, Fumiaki; Wang, Lijuan Lijuan; Nesselroade, John R.; Grimm, Kevin J.
2007-01-01
Bayesian methods for analyzing longitudinal data in social and behavioral research are recommended for their ability to incorporate prior information in estimating simple and complex models. We first summarize the basics of Bayesian methods before presenting an empirical example in which we fit a latent basis growth curve model to achievement data…
Item Response Theory with Estimation of the Latent Density Using Davidian Curves
ERIC Educational Resources Information Center
Woods, Carol M.; Lin, Nan
2009-01-01
Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…
Catmull-Rom Curve Fitting and Interpolation Equations
ERIC Educational Resources Information Center
Jerome, Lawrence
2010-01-01
Computer graphics and animation experts have been using the Catmull-Rom smooth curve interpolation equations since 1974, but the vector and matrix equations can be derived and simplified using basic algebra, resulting in a simple set of linear equations with constant coefficients. A variety of uses of Catmull-Rom interpolation are demonstrated,…
Educating about Sustainability while Enhancing Calculus
ERIC Educational Resources Information Center
Pfaff, Thomas J.
2011-01-01
We give an overview of why it is important to include sustainability in mathematics classes and provide specific examples of how to do this for a calculus class. We illustrate that when students use "Excel" to fit curves to real data, fundamentally important questions about sustainability become calculus questions about those curves. (Contains 5…
On the mass of the compact object in the black hole binary A0620-00
NASA Technical Reports Server (NTRS)
Haswell, Carole A.; Robinson, Edward L.; Horne, Keith; Stiening, Rae F.; Abbott, Timothy M. C.
1993-01-01
Multicolor orbital light curves of the black hole candidate binary A0620-00 are presented. The light curves exhibit ellipsoidal variations and a grazing eclipse of the mass donor companion star by the accretion disk. Synthetic light curves were generated using realistic mass donor star fluxes and an isothermal blackbody disk. For mass ratios of q = M sub 1/M sub 2 = 5.0, 10.6, and 15.0, systematic searches were executed in parameter space for synthetic light curves that fit the observations. For each mass ratio, acceptable fits were found only for a small range of orbital inclinations. It is argued that the mass ratio is unlikely to exceed q = 10.6, and an upper limit of 0.8 solar masses is placed on the mass of the companion star. These constraints imply a primary mass M sub 1 between 4.16 +/- 0.1 and 5.55 +/- 0.15 solar masses. The lower limit on M sub 1 is more than 4-sigma above the mass of a maximally rotating neutron star, and constitutes further strong evidence in favor of a black hole primary in this system.
Comparison of three methods for wind turbine capacity factor estimation.
Ditkovich, Y; Kuperman, A
2014-01-01
Three approaches to calculating the capacity factor of fixed-speed wind turbines are reviewed and compared using a case study. The first, "quasi-exact" approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. The second, "analytic" approach employs a continuous probability distribution function fitted to the wind data, as well as a continuous turbine power curve resulting from double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while being an approximation, can be solved analytically, thus providing valuable insight into the aspects affecting the capacity factor; moreover, several other merits of wind turbine performance may be derived from it. The third, "approximate" approach, valid for Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained by the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation.
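The underlying calculation, expected power under a wind-speed distribution divided by rated power, can be sketched as follows for Rayleigh winds and an idealized cubic power curve; all turbine numbers are illustrative, not from the case study:

```python
import math

P_RATED = 2000.0  # rated power, kW (illustrative)

def rayleigh_pdf(v, v_mean):
    """Rayleigh wind-speed density with mean speed v_mean (m/s)."""
    return (math.pi * v / (2.0 * v_mean ** 2)) * \
        math.exp(-math.pi * v ** 2 / (4.0 * v_mean ** 2))

def power_curve(v, v_in=3.5, v_rated=13.0, v_out=25.0):
    """Idealized fixed-speed turbine: cubic rise from cut-in to rated
    speed, constant at rated power, zero outside the operating band."""
    if v < v_in or v > v_out:
        return 0.0
    if v < v_rated:
        return P_RATED * (v ** 3 - v_in ** 3) / (v_rated ** 3 - v_in ** 3)
    return P_RATED

def capacity_factor(v_mean, dv=0.01):
    """CF = expected power / rated power, by a Riemann sum over speed."""
    expected = sum(power_curve(i * dv) * rayleigh_pdf(i * dv, v_mean) * dv
                   for i in range(1, int(40.0 / dv)))
    return expected / P_RATED

cf_low = capacity_factor(6.0)   # low-wind site
cf_high = capacity_factor(9.0)  # high-wind site
```

Replacing the Rayleigh density with a fitted Weibull density and the idealized curve with the manufacturer's polynomial fit reproduces the "analytic" approach described above.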
Analysis of ALTAIR 1998 Meteor Radar Data
NASA Technical Reports Server (NTRS)
Zinn, J.; Close, S.; Colestock, P. L.; MacDonell, A.; Loveland, R.
2011-01-01
We describe a new analysis of a set of 32 UHF meteor radar traces recorded with the 422 MHz ALTAIR radar facility in November 1998. Emphasis is on the velocity measurements, and on inferences that can be drawn from them regarding the meteor masses and mass densities. We find that the velocity vs altitude data can be fitted as quadratic functions of the path integrals of the atmospheric densities vs distance, and deceleration rates derived from those fits all show the expected behavior of increasing with decreasing altitude. We also describe a computer model of the coupled processes of collisional heating, radiative cooling, evaporative cooling and ablation, and deceleration, for meteors composed of defined mixtures of mineral constituents. For each of the cases in the data set we ran the model starting with the measured initial velocity and trajectory inclination, and with various trial values of the quantity mρs² (the initial mass times the mass density squared), and then compared the computed deceleration vs altitude curves with the measured ones. In this way we arrived at the best-fit values of mρs² for each of the measured meteor traces. Then further, assuming various trial values of the density ρs, we compared the computed mass vs altitude curves with similar curves for the same set of meteors determined previously from the measured radar cross sections and an electrostatic scattering model. In this way we arrived at estimates of the best-fit mass densities ρs for each of the cases. Keywords: meteor, ALTAIR, radar, analysis. 1. Introduction. This paper describes a new analysis of a set of 422 MHz meteor scatter radar data recorded with the ALTAIR High-Power-Large-Aperture radar facility at Kwajalein Atoll on 18 November 1998. The exceptional accuracy/precision of the ALTAIR tracking data allows us to determine quite accurate meteor trajectories, velocities and deceleration rates.
The measurements and velocity/deceleration data analysis are described in Sections II and III. The main point of this paper is to use these deceleration rate data, together with results from a computer model, to determine values of the quantities mρs² (the meteor mass times its material density squared); and further, by combining these mρs² values with meteor mass estimates for the same set of meteors determined separately from measured radar scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torello, David; Kim, Jin-Yeon; Qu, Jianmin
2015-03-31
This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β11 is proposed based on a nonlinear least-squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens, and a β11(7075)/β11(2024) ratio of 1.363 agrees well with previous literature and earlier work.
Scaling laws for light-weight optics
NASA Technical Reports Server (NTRS)
Valente, Tina M.
1990-01-01
Scaling laws for light-weight optical systems are examined. A cubic relationship between mirror diameter and weight has been suggested and used by many designers of optical systems as the best description for all light-weight mirrors. A survey of existing light-weight systems in the open literature has been made to clarify this issue. Fifty existing optical systems were surveyed with all varieties of light-weight mirrors, including glass and beryllium structured mirrors, contoured mirrors, and very thin solid mirrors. These mirrors were then categorized and the weight-to-diameter ratio was plotted to find a best-fit curve for each case. A curve-fitting program tests nineteen different equations and ranks the goodness of fit for each. The resulting relationship found for each light-weight mirror category helps to quantify light-weight optical systems and methods of fabrication and provides comparisons between mirror types.
Barnard, M.; Venter, C.; Harding, A. K.
2018-01-01
We performed geometric pulsar light curve modeling using static, retarded vacuum, and offset polar cap (PC) dipole B-fields (the latter is characterized by a parameter ε), in conjunction with standard two-pole caustic (TPC) and outer gap (OG) emission geometries. The offset-PC dipole B-field mimics deviations from the static dipole (which corresponds to ε = 0). In addition to constant-emissivity geometric models, we also considered a slot gap (SG) E-field associated with the offset-PC dipole B-field and found that its inclusion leads to qualitatively different light curves. Solving the particle transport equation shows that the particle energy only becomes large enough to yield significant curvature radiation at large altitudes above the stellar surface, given this relatively low E-field. Therefore, particles do not always attain the radiation-reaction limit. Our overall optimal light curve fit is for the retarded vacuum dipole field and OG model, at an inclination angle α = 78(+1/-1)° and observer angle ζ = 69(+2/-1)°. For this B-field, the TPC model is statistically disfavored compared to the OG model. For the static dipole field, neither model is significantly preferred. We found that smaller values of ε are favored for the offset-PC dipole field when assuming constant emissivity, and larger ε values favored for variable emissivity, but not significantly so. When multiplying the SG E-field by a factor of 100, we found improved light curve fits, with α and ζ being closer to best fits from independent studies, as well as curvature radiation reaction at lower altitudes. PMID:29681648
Fatigue loading and R-curve behavior of a dental glass-ceramic with multiple flaw distributions.
Joshi, Gaurav V; Duan, Yuanyuan; Della Bona, Alvaro; Hill, Thomas J; St John, Kenneth; Griggs, Jason A
2013-11-01
To determine the effects of surface finish and mechanical loading on the rising toughness curve (R-curve) behavior of a fluorapatite glass-ceramic (IPS e.max ZirPress) and to determine a statistical model for fitting fatigue lifetime data with multiple flaw distributions. Rectangular beam specimens were fabricated by pressing. Two groups of specimens (n=30) with polished (15 μm) or air abraded surface were tested under rapid monotonic loading in oil. Additional polished specimens were subjected to cyclic loading at 2 Hz (n=44) and 10 Hz (n=36). All fatigue tests were performed using a fully articulated four-point flexure fixture in 37°C water. Fractography was used to determine the critical flaw size and estimate fracture toughness. To prove the presence of R-curve behavior, non-linear regression was used. Forward stepwise regression was performed to determine the effects on fracture toughness of different variables, such as initial flaw type, critical flaw size, critical flaw eccentricity, cycling frequency, peak load, and number of cycles. Fatigue lifetime data were fit to an exclusive flaw model. There was an increase in fracture toughness values with increasing critical flaw size for both loading methods (rapid monotonic loading and fatigue). The values for the fracture toughness ranged from 0.75 to 1.1 MPa·m^(1/2), reaching a plateau at different critical flaw sizes based on loading method. Cyclic loading had a significant effect on the R-curve behavior. The fatigue lifetime distribution was dependent on the flaw distribution, and it fit well to an exclusive flaw model. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Fatigue loading and R-curve behavior of a dental glass-ceramic with multiple flaw distributions
Joshi, Gaurav V.; Duan, Yuanyuan; Bona, Alvaro Della; Hill, Thomas J.; John, Kenneth St.; Griggs, Jason A.
2013-01-01
Objectives To determine the effects of surface finish and mechanical loading on the rising toughness curve (R-curve) behavior of a fluorapatite glass-ceramic (IPS e.max ZirPress) and to determine a statistical model for fitting fatigue lifetime data with multiple flaw distributions. Materials and Methods Rectangular beam specimens were fabricated by pressing. Two groups of specimens (n=30) with polished (15 μm) or air abraded surface were tested under rapid monotonic loading in oil. Additional polished specimens were subjected to cyclic loading at 2 Hz (n=44) and 10 Hz (n=36). All fatigue tests were performed using a fully articulated four-point flexure fixture in 37°C water. Fractography was used to determine the critical flaw size and estimate fracture toughness. To prove the presence of R-curve behavior, non-linear regression was used. Forward stepwise regression was performed to determine the effects on fracture toughness of different variables, such as initial flaw type, critical flaw size, critical flaw eccentricity, cycling frequency, peak load, and number of cycles. Fatigue lifetime data were fit to an exclusive flaw model. Results There was an increase in fracture toughness values with increasing critical flaw size for both loading methods (rapid monotonic loading and fatigue). The values for the fracture toughness ranged from 0.75 to 1.1 MPa·m^(1/2), reaching a plateau at different critical flaw sizes based on loading method. Significance Cyclic loading had a significant effect on the R-curve behavior. The fatigue lifetime distribution was dependent on the flaw distribution, and it fit well to an exclusive flaw model. PMID:24034441
NASA Technical Reports Server (NTRS)
Barnard, M.; Venter, C.; Harding, A. K.
2016-01-01
We performed geometric pulsar light curve modeling using static, retarded vacuum, and offset polar cap (PC) dipole B-fields (the latter is characterized by a parameter epsilon), in conjunction with standard two-pole caustic (TPC) and outer gap (OG) emission geometries. The offset-PC dipole B-field mimics deviations from the static dipole (which corresponds to epsilon equals 0). In addition to constant-emissivity geometric models, we also considered a slot gap (SG) E-field associated with the offset-PC dipole B-field and found that its inclusion leads to qualitatively different light curves. Solving the particle transport equation shows that the particle energy only becomes large enough to yield significant curvature radiation at large altitudes above the stellar surface, given this relatively low E-field. Therefore, particles do not always attain the radiation-reaction limit. Our overall optimal light curve fit is for the retarded vacuum dipole field and OG model, at an inclination angle alpha = 78 (+1/-1) degrees and observer angle zeta = 69 (+2/-1) degrees. For this B-field, the TPC model is statistically disfavored compared to the OG model. For the static dipole field, neither model is significantly preferred. We found that smaller values of epsilon are favored for the offset-PC dipole field when assuming constant emissivity, and larger epsilon values favored for variable emissivity, but not significantly so. When multiplying the SG E-field by a factor of 100, we found improved light curve fits, with alpha and zeta being closer to best fits from independent studies, as well as curvature radiation reaction at lower altitudes.
Dynamic Testing of Laterally Confined Concrete
1990-09-01
Regression fits by Equation (6) are shown for the intermediate and highest confining-pressure groups (dashed curves). For the highest-pressure group, loaded by a moderate striker-bar impact speed of 420 in/sec (10.7 m/s), the peak stress of 124 MPa (18 ksi) occurs at a strain of ... survived at one end; this was for the highest-speed impact in the lowest confining-pressure group. Curves are given in Appendix Figure A-15.
Dielectric behavior and AC conductivity of Cr doped α-Mn2O3
NASA Astrophysics Data System (ADS)
Chandra, Mohit; Yadav, Satish; Singh, K.
2018-05-01
The complex dielectric behavior of polycrystalline α-Mn2-xCrxO3 (x = 0.10) has been investigated isothermally over a wide frequency range (4 Hz-1 MHz) at different temperatures (300-390 K). The dielectric spectroscopy results are discussed in terms of different formalisms, including the dielectric constant, impedance, and ac conductivity. The frequency-dependent dielectric loss (tanδ) exhibits clear relaxation behavior in the studied temperature range, and the relaxation frequency increases with increasing temperature. These results are fitted with the Arrhenius equation, which suggests a thermally activated process with an activation energy of 0.173 ± 0.0024 eV. The normalized tanδ curves at different temperatures merge into a single master curve, indicating that the relaxation process follows the same dynamics throughout the studied temperature range; further, the dielectric relaxation is non-Debye. The impedance results indicate that the grain-boundary contribution dominates at lower frequencies, whereas the grain contribution appears at higher frequencies and exhibits strong temperature dependence. The ac conductivity increases with increasing temperature, corroborating the semiconducting nature of the studied sample.
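The Arrhenius fit of relaxation frequency versus temperature reduces to a straight-line fit of ln f against 1/T; a sketch using synthetic data generated from the reported activation energy (the prefactor f0 is an assumption, not a value from the paper):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_fit(temps, freqs):
    """Least-squares line ln f = ln f0 - Ea/(kB*T); returns (Ea [eV], f0)."""
    xs = [1.0 / t for t in temps]
    ys = [math.log(f) for f in freqs]
    n = float(len(xs))
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * K_B, math.exp(ybar - slope * xbar)

# Synthetic relaxation frequencies for Ea = 0.173 eV and an assumed
# prefactor f0 = 1e9 Hz (illustrative).
temps = [300.0, 320.0, 340.0, 360.0, 390.0]
freqs = [1e9 * math.exp(-0.173 / (K_B * t)) for t in temps]
ea, f0 = arrhenius_fit(temps, freqs)
```

With noise-free input the fit recovers the generating Ea and f0 exactly, which is a useful self-check before applying it to measured tanδ peak frequencies.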
An independent software system for the analysis of dynamic MR images.
Torheim, G; Lombardi, M; Rinck, P A
1997-01-01
A computer system for the manual, semi-automatic, and automatic analysis of dynamic MR images was to be developed on UNIX and personal computer platforms. The system was to offer an integrated and standardized way of performing both image processing and analysis, independent of the MR unit used. The system consists of modules that are easily adaptable to special needs. Data from MR units or from other diagnostic imaging modalities such as CT, ultrasonography, or nuclear medicine can be processed through the ACR-NEMA/DICOM standard file formats. A full set of functions is available, among them cine-loop visual analysis and generation of time-intensity curves. Parameters such as cross-correlation coefficients, area under the curve, peak/maximum intensity, wash-in and wash-out slopes, time to peak, and relative signal intensity/contrast enhancement can be calculated. Other parameters can be extracted by fitting functions such as the gamma-variate function. Region-of-interest data and parametric values can easily be exported. The system has been successfully tested in animal and patient examinations.
Bansal, Ravi; Liu, Jun; Gerber, Andrew J.; Goh, Suzanne; Posner, Jonathan; Colibazzi, Tiziano; Algermissen, Molly; Chiang, I-Chin; Russell, James A.; Peterson, Bradley S.
2015-01-01
The Affective Circumplex Model holds that emotions can be described as linear combinations of two underlying, independent neurophysiological systems (arousal, valence). Given research suggesting that individuals with autism spectrum disorders (ASD) have difficulty processing emotions, we used the circumplex model to compare how individuals with ASD and typically developing (TD) individuals respond to facial emotions. Participants (51 ASD, 80 TD) rated facial expressions along the arousal and valence dimensions; we fitted closed, smooth, 2-dimensional curves to their ratings to examine overall circumplex contours. We modeled individual and group influences on the parameters describing curve contours to identify differences in dimensional effects across groups. Significant main effects of diagnosis indicated that the ASD group's ratings were constricted for the entire circumplex, suggesting range constriction across all emotions. Findings did not change when covarying for overall intelligence. PMID:24234677
NASA Astrophysics Data System (ADS)
Xie, Gui-long; Zhang, Yong-hong; Huang, Shi-ping
2012-04-01
Using coarse-grained molecular dynamics simulations based on the Gay-Berne (GB) potential model, we have simulated the cooling process of liquid n-butanol. A new set of GB parameters is obtained by fitting the results of density functional theory calculations. The simulations are carried out in the range of 290-50 K with temperature decrements of 10 K. The cooling characteristics are determined from the variations of the density, the potential energy, and the orientational order parameter with temperature, all of which show a discontinuity in slope. Both the radial distribution function curves and the second-rank orientational correlation function curves exhibit splitting in the second peak. Using the discontinuous change of these thermodynamic and structural properties, we estimate the glass transition temperature to be Tg = 120±10 K, in good agreement with the experimental result of 110±1 K.
NASA Astrophysics Data System (ADS)
Wei, Hui; Deng, Xiangwen; Ouyang, Shuai; Chen, Lijun; Chu, Yonghe
2017-01-01
Schima superba is an important fire-resistant, high-quality timber species in southern China. Growth in height, diameter at breast height (DBH), and volume of three different classes (overtopped, average, and dominant) of S. superba were examined in a natural subtropical forest. Four growth models (Richards, modified Weibull, Logistic, and Gompertz) were selected to fit the growth of the three classes of trees. The results showed that the current annual growth in height and DBH fluctuated for all three classes. Multiple intersections were found between the current annual increment (CAI) and mean annual increment (MAI) curves of both height and DBH, but there was no intersection between the volume CAI and MAI curves. All selected models could be used to fit the growth of the three classes of S. superba, with coefficients of determination above 0.9637. However, the modified Weibull model performed best, with the highest R2 and the lowest root mean square error (RMSE). S. superba is a fast-growing tree with a higher growth rate during youth. The height and DBH CAIs of overtopped, average, and dominant trees reached growth peaks at ages 5-10, 10-15, and 15-20 years, respectively. According to model simulation, the volume CAIs of overtopped, average, and dominant trees reached growth peaks at ages 17, 55, and 76 years, respectively. The biological rotation ages of the overtopped, average, and dominant trees of S. superba were 29, 85, and 128 years, respectively.
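The model comparison described above can be illustrated with one of the four candidates. The sketch below fits a Gompertz curve to synthetic height-age data (both the data and the parameter values are assumptions, not the paper's measurements) and computes the R2 and RMSE used to rank the models:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, k):
    # Cumulative growth (e.g. height, m) as a function of age (years)
    return a * np.exp(-b * np.exp(-k * t))

# Synthetic height-age data generated from assumed parameters; the paper's
# measurements are not reproduced here.
age = np.array([5., 10., 15., 20., 30., 40., 60., 80.])
height = gompertz(age, 18.5, 2.8, 0.06)

popt, _ = curve_fit(gompertz, age, height, p0=[15.0, 2.0, 0.05])
pred = gompertz(age, *popt)
rmse = np.sqrt(np.mean((height - pred) ** 2))
r2 = 1.0 - np.sum((height - pred) ** 2) / np.sum((height - height.mean()) ** 2)
```

Fitting all four candidate models this way and comparing their R2/RMSE values reproduces the selection procedure used in the study.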
Methodology for the AutoRegressive Planet Search (ARPS) Project
NASA Astrophysics Data System (ADS)
Feigelson, Eric; Caceres, Gabriel; ARPS Collaboration
2018-01-01
The detection of periodic signals of transiting exoplanets is often impeded by the presence of aperiodic photometric variations. This variability is intrinsic to the host star in space-based observations (typically arising from magnetic activity) and arises from observational conditions in ground-based observations. The most common statistical procedures to remove stellar variations are nonparametric, such as wavelet decomposition or Gaussian Processes regression. However, many stars display variability with autoregressive properties, wherein later flux values are correlated with previous ones. Provided the time series is evenly spaced, parametric autoregressive models can prove very effective. Here we present the methodology of the Autoregressive Planet Search (ARPS) project, which uses Autoregressive Integrated Moving Average (ARIMA) models to treat a wide variety of stochastic short-memory processes, as well as nonstationarity. Additionally, we introduce a planet-search algorithm to detect periodic transits in the time-series residuals after application of ARIMA models. Our matched-filter algorithm, the Transit Comb Filter (TCF), replaces the traditional box-fitting step. We construct a periodogram based on the TCF to concentrate the signal of these periodic spikes. Various features of the original light curves, the ARIMA fits, the TCF periodograms, and folded light curves at peaks of the TCF periodogram can then be collected to provide constraints for planet detection. These features provide input into a multivariate classifier when a training set is available. The ARPS procedure has been applied to NASA's Kepler mission observations of ~200,000 stars (Caceres, Dissertation Talk, this meeting) and will be applied in the future to other datasets.
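As a minimal illustration of the autoregressive idea behind ARPS, the sketch below fits an AR(1) coefficient to a synthetic light curve contaminated with box-shaped transits; in the residuals each transit ingress becomes a sharp spike of the kind the TCF is designed to detect. (ARPS itself uses full ARIMA modeling; this AR(1) fit is a deliberate simplification, and all values are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic light curve: AR(1) stellar variability (phi = 0.7) plus periodic
# box-shaped transit dips; all values are illustrative.
n, phi, period, depth = 2000, 0.7, 100, 5.0
noise = rng.normal(0.0, 1.0, n)
flux = np.zeros(n)
for t in range(1, n):
    flux[t] = phi * flux[t - 1] + noise[t]
flux[np.arange(n) % period < 3] -= depth          # 3-sample transits

# Least-squares fit of the AR(1) coefficient from lagged values
phi_hat = np.dot(flux[1:], flux[:-1]) / np.dot(flux[:-1], flux[:-1])
resid = flux[1:] - phi_hat * flux[:-1]            # "whitened" residuals
```

In `resid`, the smooth autoregressive wander is largely removed, leaving near-white noise punctuated by ingress/egress spikes at the transit boundaries, which is exactly the structure the comb-shaped TCF matched filter targets.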
Reconstruction of quadratic curves in 3D using two or more perspective views: simulation studies
NASA Astrophysics Data System (ADS)
Kumar, Sanjeev; Sukavanam, N.; Balasubramanian, R.
2006-01-01
The shapes of many natural and man-made objects have planar and curvilinear surfaces. The images of such curves usually do not have sufficient distinctive features to apply conventional feature-based reconstruction algorithms. In this paper, we describe a method for reconstructing a quadratic curve in 3-D space as the intersection of two cones containing the respective projected curve images. The correspondence between this pair of projections of the curve is assumed to be established in this work. Using least-squares curve fitting, the parameters of each curve in 2-D space are found; from these, the 3-D quadratic curve is reconstructed. Relevant mathematical formulations and analytical solutions for obtaining the equation of the reconstructed curve are given. The described reconstruction methodology is evaluated through simulation studies. It is applicable to LBW decisions in cricket, missile path estimation, robotic vision, path planning, etc.
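The 2-D least-squares conic-fitting step can be sketched as follows, here on noise-free points sampled from a known ellipse standing in for a projected curve image (the normalization with the constant term fixed at -1 is one common choice, not necessarily the paper's):

```python
import numpy as np

# Noise-free points sampled from the ellipse x^2/4 + y^2 = 1, standing in
# for the matched image points of the projected curve.
theta = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
x, y = 2.0 * np.cos(theta), np.sin(theta)

# General conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 (constant fixed at -1)
A = np.column_stack([x * x, x * y, y * y, x, y])
coef, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
a, b, c, d, e = coef               # expect a = 0.25, c = 1, others 0
```

With the conic coefficients of each view in hand, the two projection cones can be formed and intersected to recover the 3-D curve, as the paper describes analytically.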
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashiwa, B. A.
2010-12-01
A thermodynamically consistent and fully general equation-of-state (EOS) for multifield applications is described. EOS functions are derived from a Helmholtz free energy expressed as the sum of thermal (fluctuational) and collisional (condensed-phase) contributions; thus the free energy is of the Mie-Grüneisen form. The phase-coexistence region is defined using a parameterized saturation curve by extending the form introduced by Guggenheim, which scales the curve relative to conditions at the critical point. We use the zero-temperature condensed-phase contribution developed by Barnes, which extends the Thomas-Fermi-Dirac equation to zero pressure. Thus, the functional form of the EOS could be called MGGB (for Mie-Grüneisen-Guggenheim-Barnes). Substance-specific parameters are obtained by fitting the low-density energy to data from the Sesame library; fitting the zero-temperature pressure to the Sesame cold curve; and fitting the saturation curve and latent heat to laboratory data, if available. When suitable coexistence data, or Sesame data, are not available, we apply the Principle of Corresponding States. Thus MGGB can be thought of as a numerical recipe for rendering the tabular Sesame EOS data in an analytic form that includes a proper coexistence region, and which permits the accurate calculation of derivatives associated with compressibility, expansivity, the Joule coefficient, and specific heat, all of which are required for multifield applications.
Predicting long-term graft survival in adult kidney transplant recipients.
Pinsky, Brett W; Lentine, Krista L; Ercole, Patrick R; Salvalaggio, Paolo R; Burroughs, Thomas E; Schnitzler, Mark A
2012-07-01
The ability to accurately predict a population's long-term survival has important implications for quantifying the benefits of transplantation. To identify a model that can accurately predict a kidney transplant population's long-term graft survival, we retrospectively studied United Network for Organ Sharing data from 13,111 kidney-only transplants completed in 1988-1989. Nineteen-year death-censored graft survival (DCGS) projections were calculated and compared with the population's actual graft survival. The projection curves were created using a two-part estimation model that (1) fits a Kaplan-Meier survival curve immediately after transplant (Part A) and (2) uses truncated observational data to model a survival function for long-term projection (Part B). Projection curves were examined using varying amounts of time to fit both parts of the model. The accuracy of each projection curve was determined by examining whether the predicted survival fell within the 95% confidence interval for the 19-year Kaplan-Meier survival, and by the sample size needed to detect the difference between projected and observed survival in a clinical trial. The 19-year DCGS was 40.7% (39.8-41.6%). Excellent predictability (41.3%) can be achieved when Part A is fit for three years and Part B is projected using two additional years of data. Using fewer than five total years of data tended to overestimate the population's long-term survival. Accurate prediction of long-term DCGS is thus possible, but requires attention to the quantity of data used in the projection method.
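Part A of the two-part model rests on the standard Kaplan-Meier product-limit estimator, which can be sketched as follows (the follow-up times below are toy values, not the UNOS data):

```python
import numpy as np

# Toy graft-survival data: follow-up times (years) and event indicators
# (1 = graft failure, 0 = censored). Values are illustrative only.
time = np.array([1., 2., 2., 3., 5., 5., 7., 8., 10., 12.])
event = np.array([1, 1, 0, 1, 1, 0, 1, 0, 1, 0])

def kaplan_meier(time, event):
    """Product-limit estimate: returns (event times, survival probabilities)."""
    s, out_t, out_s = 1.0, [], []
    for t in np.unique(time[event == 1]):
        at_risk = np.sum(time >= t)              # still under observation
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk              # product-limit step
        out_t.append(t)
        out_s.append(s)
    return np.array(out_t), np.array(out_s)

t_km, s_km = kaplan_meier(time, event)
```

Censored subjects leave the risk set without triggering a step, which is how the estimator uses the truncated follow-up that Part B then extrapolates.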
FTOOLS: A FITS Data Processing and Analysis Software Package
NASA Astrophysics Data System (ADS)
Blackburn, J. Kent; Greene, Emily A.; Pence, William
1993-05-01
FTOOLS, a highly modular collection of utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Science Archive Research Center) at NASA's Goddard Space Flight Center. Each utility performs a single simple task, such as presenting file contents, extracting specific rows or columns, appending or merging tables, binning values in a column, or selecting subsets of rows based on a boolean expression. Individual utilities can easily be chained together in scripts to achieve more complex operations, such as generating and displaying spectra or light curves. The collection provides both generic processing and analysis utilities and utilities specific to high energy astrophysics data sets. The FTOOLS software package is designed to be both compatible with IRAF and completely stand-alone in a UNIX or VMS environment. The user interface is controlled by standard IRAF parameter files. The package is self-documenting through the IRAF help facility and a stand-alone help task. Software is written in ANSI C and FORTRAN to provide portability across most computer systems. The data format dependencies between hardware platforms are isolated through the FITSIO library package.
Feasibility of Rapid Multitracer PET Tumor Imaging
NASA Astrophysics Data System (ADS)
Kadrmas, D. J.; Rust, T. C.
2005-10-01
Positron emission tomography (PET) can characterize different aspects of tumor physiology using various tracers. PET scans are usually performed using only one tracer since there is no explicit signal for distinguishing multiple tracers. We tested the feasibility of rapidly imaging multiple PET tracers using dynamic imaging techniques, where the signals from each tracer are separated based upon differences in tracer half-life, kinetics, and distribution. Time-activity curve populations for FDG, acetate, ATSM, and PTSM were simulated using appropriate compartment models, and noisy dual-tracer curves were computed by shifting and adding the single-tracer curves. Single-tracer components were then estimated from dual-tracer data using two methods: principal component analysis (PCA)-based fits of single-tracer components to multitracer data, and parallel multitracer compartment models estimating single-tracer rate parameters from multitracer time-activity curves. The PCA analysis found that there is information content present for separating multitracer data, and that tracer separability depends upon tracer kinetics, injection order and timing. Multitracer compartment modeling recovered rate parameters for individual tracers with good accuracy but somewhat higher statistical uncertainty than single-tracer results when the injection delay was >10 min. These approaches to processing rapid multitracer PET data may potentially provide a new tool for characterizing multiple aspects of tumor physiology in vivo.
Optical properties of Nd3+ doped bismuth zinc borate glasses
NASA Astrophysics Data System (ADS)
Shanmugavelu, B.; Venkatramu, V.; Ravi Kanth Kumar, V. V.
2014-03-01
Glasses with compositions of (100-x)(Bi2ZnOB2O6) - x Nd2O3 (where x = 0.1, 0.3, 0.5, 1 and 2 mol%) were prepared by the melt quenching method and characterized through optical absorption, emission, and decay curve measurements. Optical absorption spectra have been analyzed using Judd-Ofelt theory. The emission spectra exhibit three peaks at 919, 1063 and 1337 nm, corresponding to the 4F3/2 to 4I9/2, 4I11/2 and 4I13/2 transitions in the near infrared region. The emission intensity of the 4F3/2 to 4I11/2 transition increases with Nd3+ concentration up to 1 mol%, after which concentration quenching is observed at 2 mol%. The lifetimes of the 4F3/2 level are found to decrease with increasing Nd2O3 concentration. The decay curves of the glasses up to 0.3 mol% of Nd3+ are single-exponential, and thereafter the curves become nonexponential (0.5, 1 and 2 mol%). The nonexponential curves have been fitted to the Inokuti-Hirayama model to understand the nature of the energy transfer process.
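The Inokuti-Hirayama fit mentioned above can be sketched as follows, here with s = 6 (dipole-dipole transfer, a common assumption for Nd3+) and a synthetic decay curve in place of the measured data:

```python
import numpy as np
from scipy.optimize import curve_fit

def inokuti_hirayama(t, i0, tau, q):
    # Inokuti-Hirayama decay with s = 6 (dipole-dipole): exponent 3/s = 1/2
    return i0 * np.exp(-t / tau - q * np.sqrt(t / tau))

# Synthetic decay curve (time in microseconds) generated from assumed
# parameters; the measured decay data are in the paper, not here.
t = np.linspace(0.0, 500.0, 200)
y = inokuti_hirayama(t, 1.0, 120.0, 0.8)

popt, _ = curve_fit(inokuti_hirayama, t, y, p0=[1.0, 100.0, 0.5])
i0_fit, tau_fit, q_fit = popt
```

The fitted parameter q scales with the acceptor concentration, which is why it grows with Nd3+ content as the decay turns nonexponential.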
Plasma breakdown in a capacitively-coupled radiofrequency argon discharge
NASA Astrophysics Data System (ADS)
Smith, H. B.; Charles, C.; Boswell, R. W.
1998-10-01
Low pressure, capacitively-coupled rf discharges are widely used in research and commercial ventures. Understanding of the non-equilibrium processes which occur in these discharges during breakdown is of interest both for industrial applications and for a deeper understanding of fundamental plasma behaviour. The voltage required to break down the discharge, V_brk, has long been known to be a strong function of the product of the neutral gas pressure and the electrode separation (pd). This paper investigates the dependence of V_brk on pd in rf systems using experimental, computational and analytic techniques. Experimental measurements of V_brk are made for pressures in the range 1-500 mTorr and electrode separations of 2-20 cm. A Paschen-style curve for breakdown in rf systems is developed which has its minimum breakdown voltage at a much smaller pd value, and breakdown voltages which are significantly lower overall, than Paschen curves obtained from dc discharges. The differences between the two systems are explained using a simple analytic model. A Particle-in-Cell simulation is used to investigate a similar pd range and examine the effect of the secondary emission coefficient on the rf breakdown curve, particularly at low pd values. Analytic curves are fitted to both the experimental and simulation results.
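As a sketch of the curve-fitting step, the snippet below fits the classic dc Paschen form to synthetic breakdown data; the coefficients and the secondary-emission value γ = 0.1 are assumptions, and the paper's rf curves sit lower and have their minimum at smaller pd than this dc form:

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA = 0.1  # assumed secondary-emission coefficient

def paschen(pd, a, b):
    # dc Paschen law: V_brk = b*pd / ln(a*pd / ln(1 + 1/gamma))
    return b * pd / np.log(a * pd / np.log(1.0 + 1.0 / GAMMA))

# Synthetic breakdown voltages (pd in Torr*cm) generated from assumed
# gas coefficients a and b.
pd = np.linspace(0.5, 10.0, 40)
v = paschen(pd, 15.0, 365.0)

popt, _ = curve_fit(paschen, pd, v, p0=[10.0, 300.0],
                    bounds=([1.0, 100.0], [100.0, 1000.0]))
a_fit, b_fit = popt
```

The parameter bounds keep the optimizer out of the region where the logarithm's argument would be invalid, a practical point when fitting real breakdown data near the Paschen minimum.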
Automated image segmentation-assisted flattening of atomic force microscopy images.
Wang, Yuliang; Lu, Tongda; Li, Xiaolai; Wang, Huimin
2018-01-01
Atomic force microscopy (AFM) images normally exhibit various artifacts, so image flattening is required prior to image analysis. To obtain optimized flattening results, foreground features are generally excluded manually using rectangular masks, which is time-consuming and inaccurate. In this study, a two-step scheme was proposed to achieve optimized image flattening in an automated manner. In the first step, the convex and concave features in the foreground were automatically segmented with accurate boundary detection, and the extracted foreground features were taken as exclusion masks. In the second step, data points in the background were fitted as polynomial curves/surfaces, which were then subtracted from the raw images to obtain the flattened images. Moreover, sliding-window-based polynomial fitting was proposed to process images with complex background trends. The working principle of the two-step image flattening scheme was presented, followed by an investigation of the influence of the sliding-window size and the polynomial fitting direction on the flattened images. Additionally, the effect of image flattening on the morphological characterization and segmentation of AFM images was verified with the proposed method.
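The two-step flattening scheme can be sketched on a single scan line as follows; the segmentation mask is simply taken as known here, whereas the paper derives it automatically via boundary detection, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic AFM scan line: a tilted background plus a raised foreground
# feature that must be excluded from the background fit.
x = np.arange(256, dtype=float)
line = 0.02 * x + 5.0 + rng.normal(0.0, 0.05, x.size)
feature = (x > 100) & (x < 140)
line[feature] += 3.0                     # convex foreground feature

# Step 1: segmentation mask (taken as known here; the paper derives it
# automatically via boundary detection)
mask = ~feature

# Step 2: fit only background points with a polynomial, then subtract
coeffs = np.polyfit(x[mask], line[mask], deg=1)
flattened = line - np.polyval(coeffs, x)
```

Because the raised feature is excluded from the fit, the background estimate is not dragged upward, and the flattened line preserves the feature's true height above a level baseline.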
Development and Assessment of a New Empirical Model for Predicting Full Creep Curves
Gray, Veronica; Whittaker, Mark
2015-01-01
This paper details the development and assessment of a new empirical creep model that belongs to the limited ranks of models reproducing full creep curves. The important features of the model are that it is fully standardised and universally applicable. By standardising, the user no longer chooses functions but rather fits one set of constants only. Testing it on seven contrasting materials and reproducing 181 creep curves, we demonstrate its universality. The new model and Theta Projection curves are compared to one another using an assessment tool developed within this paper. PMID:28793458
Methodologies for Development of Patient Specific Bone Models from Human Body CT Scans
NASA Astrophysics Data System (ADS)
Chougule, Vikas Narayan; Mulay, Arati Vinayak; Ahuja, Bharatkumar Bhagatraj
2016-06-01
This work deals with the development of algorithms for the physical replication of patient-specific human bones and the construction of corresponding implant/insert RP models using a Reverse Engineering approach from non-invasive medical images for surgical purposes. In the medical field, volumetric data, i.e., voxel and triangular-facet-based models, are primarily used for bio-modelling and visualization, which requires huge memory space. On the other hand, recent advances in Computer Aided Design (CAD) technology provide additional facilities/functions for the design, prototyping and manufacturing of any object having freeform surfaces, based on boundary representation techniques. This work presents a process for the physical replication of 3D rapid prototyping (RP) models of human bone using various CAD modeling techniques, starting from 3D point cloud data obtained from non-invasive CT/MRI scans in DICOM 3.0 format. This point cloud data is used for the construction of a 3D CAD model by fitting B-spline curves through the points and then fitting a surface between these curve networks using swept blend techniques. The same result can also be achieved by generating a triangular mesh directly from the 3D point cloud data, without developing any surface model, using any commercial CAD software. The STL file generated from the 3D point cloud data is used as the basic input for the RP process; the Delaunay tetrahedralization approach is used to process the 3D point cloud data to obtain the STL file. CT scan data of a metacarpus (human bone) is used as the case study for the generation of the 3D RP model. A 3D physical model of the bone is generated on a rapid prototyping machine, and its virtual reality model is presented for visualization. The CAD models generated by the different techniques are compared for accuracy and reliability, and the results of this research work are assessed for clinical reliability in the replication of human bone in the medical field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkin, Thomas J; Larson, Andrew; Ruth, Mark F
In light of the changing electricity resource mixes across the United States, an important question in electricity modeling is how additions and retirements of generation, including additions in variable renewable energy (VRE) generation, could impact markets by changing hourly wholesale energy prices. Instead of using resource-intensive production cost models (PCMs) or building and using simple generator supply curves, this analysis uses a 'top-down' approach based on regression analysis of hourly historical energy and load data to estimate the impact of supply changes on wholesale electricity prices, provided the changes are not so substantial that they fundamentally alter the market and the dispatch-order-driven behavior of non-retiring units. The rolling supply curve (RSC) method used in this report estimates the shape of the supply curve that fits historical hourly price and load data for given time intervals, such as two weeks, and then repeats this on a rolling basis through the year. These supply curves can then be modified on an hourly basis to reflect the impact of generation retirements or additions, including VRE, and then reapplied to the same load data to estimate the change in hourly electricity price. The choice of duration over which these RSCs are estimated has a significant impact on goodness of fit. For example, in PJM in 2015, moving from fitting one curve per year to 26 rolling two-week supply curves improves the standard error of the regression from $16/MWh to $6/MWh and the R-squared of the estimate from 0.48 to 0.76. We illustrate the potential use and value of the RSC method by estimating wholesale price effects under various generator retirement and addition scenarios, and we discuss potential limits of the technique, some of which are inherent.
The ability to do this type of analysis is important to a wide range of market participants and other stakeholders, and it may have a role in complementing the use of PCMs or providing calibrating insights to them.
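The RSC idea can be sketched as follows: fit a smooth supply curve to the price-load pairs in each two-week window, then compare the residuals with those of a single annual fit. The data below are synthetic, and a cubic polynomial stands in for whatever functional form the report actually fits:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic hourly data for one year: load (GW) and price ($/MWh) from an
# exponential-ish supply curve whose overall level drifts seasonally.
hours = 365 * 24
load = 60.0 + 20.0 * rng.random(hours)
season = 1.0 + 0.3 * np.sin(2.0 * np.pi * np.arange(hours) / hours)
price = season * (5.0 + 0.002 * load ** 2.2) + rng.normal(0.0, 2.0, hours)

loadc = load - 70.0         # center the regressor for numerical stability
window = 14 * 24            # two-week rolling supply curves
fits = [np.polyfit(loadc[s:s + window], price[s:s + window], deg=3)
        for s in range(0, hours - window + 1, window)]

# Residuals of the rolling fits versus one supply curve fit to the whole year
resid_roll = np.concatenate(
    [price[i * window:(i + 1) * window]
     - np.polyval(c, loadc[i * window:(i + 1) * window])
     for i, c in enumerate(fits)])
annual = np.polyfit(loadc, price, deg=3)
resid_year = price - np.polyval(annual, loadc)
```

The rolling fits absorb the seasonal drift that a single annual curve cannot, reproducing the qualitative improvement in standard error that the report finds for PJM.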
Liu, Ze-bin; Cheng, Rui-mei; Xiao, Wen-fa; Guo, Quan-shui; Wang, Na
2015-04-01
The light responses of photosynthesis of two-year-old Distylium chinense seedlings subjected to a simulated reservoir flooding environment in the autumn and winter seasons were measured using a Li-6400XT portable photosynthesis system. The light response curves were fitted and analyzed with three models, the rectangular hyperbola, the non-rectangular hyperbola, and the modified rectangular hyperbola, to investigate the applicability of the different light response models to D. chinense under different flooding durations and the adaptive regulation of the light response parameters under flooding stress. The results showed that the non-rectangular hyperbola model fitted the light response of D. chinense better than the other two models under normal growth conditions and under short-term flooding (15 days), while the modified rectangular hyperbola model fitted better under longer-term flooding (30, 45 and 60 days). The modified rectangular hyperbola model gave the best fitted results for the light compensation point (LCP), maximum net photosynthetic rate (P(n max)) and light saturation point (LSP), and the non-rectangular hyperbola model gave the best fitted result for the dark respiration rate (R(d)). The apparent quantum yield (Φ), P(n max) and LSP of D. chinense gradually decreased, and the LCP and R(d) gradually increased, during early flooding (30 days); however, D. chinense gradually adapted to flooding as the flooding duration increased, and the various physiological indexes gradually stabilized. Thus, this species has some degree of adaptability to the flooding environment.
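The modified rectangular hyperbola fit can be sketched as below, using Ye's formulation Pn = alpha*I*(1 - beta*I)/(1 + gamma*I) - Rd with synthetic light-response data; the parameter values are illustrative assumptions, not the seedlings' measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_rect_hyperbola(i, alpha, beta, gamma, rd):
    # Ye's modified rectangular hyperbola:
    # Pn = alpha*(1 - beta*I)/(1 + gamma*I)*I - Rd
    return alpha * (1.0 - beta * i) / (1.0 + gamma * i) * i - rd

# Illustrative PAR (umol m-2 s-1) and net photosynthesis values generated
# from assumed parameters; the measured data are not reproduced here.
par = np.array([0., 50., 100., 200., 400., 600., 800., 1200., 1600., 2000.])
pn = modified_rect_hyperbola(par, 0.05, 2e-4, 2e-3, 1.2)

popt, _ = curve_fit(modified_rect_hyperbola, par, pn,
                    p0=[0.04, 1e-4, 1e-3, 1.0])
alpha_f, beta_f, gamma_f, rd_f = popt
```

Unlike the plain rectangular hyperbola, this form can bend downward at high irradiance, which is why it yields direct estimates of the light saturation point as well as LCP and P(n max).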
Slotnick, Scott D; Jeye, Brittany M; Dodson, Chad S
2016-01-01
Is recollection a continuous/graded process or a threshold/all-or-none process? Receiver operating characteristic (ROC) analysis can answer this question as the continuous model and the threshold model predict curved and linear recollection ROCs, respectively. As memory for plurality, an item's previous singular or plural form, is assumed to rely on recollection, the nature of recollection can be investigated by evaluating plurality memory ROCs. The present study consisted of four experiments. During encoding, words (singular or plural) or objects (single/singular or duplicate/plural) were presented. During retrieval, old items with the same plurality or different plurality were presented. For each item, participants made a confidence rating ranging from "very sure old", which was correct for same plurality items, to "very sure new", which was correct for different plurality items. Each plurality memory ROC was the proportion of same versus different plurality items classified as "old" (i.e., hits versus false alarms). Chi-squared analysis revealed that all of the plurality memory ROCs were adequately fit by the continuous unequal variance model, whereas none of the ROCs were adequately fit by the two-high threshold model. These plurality memory ROC results indicate recollection is a continuous process, which complements previous source memory and associative memory ROC findings.
Dai, Cong; Jiang, Min; Sun, Ming-Jun; Cao, Qin
2018-05-01
Fecal immunochemical test (FIT) is a promising marker for assessment of inflammatory bowel disease activity. However, the utility of FIT for predicting mucosal healing (MH) in ulcerative colitis (UC) patients has yet to be clearly demonstrated. The objective of our study was to perform a diagnostic test accuracy meta-analysis evaluating the accuracy of FIT in predicting MH in UC patients. We systematically searched the databases from inception to November 2017 for studies that evaluated MH in UC. The methodological quality of each study was assessed according to the Quality Assessment of Diagnostic Accuracy Studies checklist. The extracted data were pooled using a summary receiver operating characteristic curve model. A random-effects model was used to summarize the diagnostic odds ratio, sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio. Six studies comprising 625 UC patients were included in the meta-analysis. The pooled sensitivity and specificity values for predicting MH in UC were 0.77 (95% confidence interval [CI], 0.72-0.81) and 0.81 (95% CI, 0.76-0.85), respectively. The FIT level had a high rule-in value (positive likelihood ratio, 3.79; 95% CI, 2.85-5.03) and a moderate rule-out value (negative likelihood ratio, 0.26; 95% CI, 0.16-0.43) for predicting MH in UC. The results of the receiver operating characteristic curve analysis (area under the curve, 0.88; standard error of the mean, 0.02) and the diagnostic odds ratio (18.08; 95% CI, 9.57-34.13) also revealed good discrimination for identifying MH in UC with FIT concentration. Our meta-analysis found that FIT is a simple, reliable, non-invasive marker for predicting MH in UC patients. © 2018 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
Gaudion, Sarah L; Doma, Kenji; Sinclair, Wade; Banyard, Harry G; Woods, Carl T
2017-07-01
Gaudion, SL, Doma, K, Sinclair, W, Banyard, HG, and Woods, CT. Identifying the physical fitness, anthropometric and athletic movement qualities discriminant of developmental level in elite junior Australian football: implications for the development of talent. J Strength Cond Res 31(7): 1830-1839, 2017-This study aimed to identify the physical fitness, anthropometric and athletic movement qualities discriminant of developmental level in elite junior Australian football (AF). From a total of 77 players, 2 groups were defined according to their developmental level; under 16 (U16) (n = 40, 15.6 to 15.9 years), and U18 (n = 37, 17.1 to 17.9 years). Players performed a test battery consisting of 7 physical fitness assessments, 2 anthropometric measurements, and a fundamental athletic movement assessment. A multivariate analysis of variance tested the main effect of developmental level (2 levels: U16 and U18) on the assessment criterions, whilst binary logistic regression models and receiver operating characteristic (ROC) curves were built to identify the qualities most discriminant of developmental level. A significant effect of developmental level was evident on 9 of the assessments (d = 0.27-0.88; p ≤ 0.05). However, it was a combination of body mass, dynamic vertical jump height (nondominant leg), repeat sprint time, and the score on the 20-m multistage fitness test that provided the greatest association with developmental level (Akaike's information criterion = 80.84). The ROC curve was maximized with a combined score of 180.7, successfully discriminating 89 and 60% of the U18 and U16 players, respectively (area under the curve = 79.3%). These results indicate that there are distinctive physical fitness and anthropometric qualities discriminant of developmental level within the junior AF talent pathway. Coaches should consider these differences when designing training interventions at the U16 level to assist with the development of prospective U18 AF players.
High-temperature unfolding of a Trp-cage mini-protein: a molecular dynamics simulation study
Seshasayee, Aswin Sai Narain
2005-01-01
Background Trp cage is a recently constructed fast-folding miniprotein. It consists of a short helix, a 3(10) helix and a C-terminal polyproline that packs against a Trp in the alpha helix. It is known to fold within 4 ns. Results High-temperature unfolding molecular dynamics simulations of the Trp cage miniprotein have been carried out in explicit water using the OPLS-AA force field incorporated in the program GROMACS. The radius of gyration (Rg) and Root Mean Square Deviation (RMSD) have been used as order parameters to follow the unfolding process. Distributions of Rg were used to identify ensembles. Conclusion Three ensembles could be identified. While the native-state ensemble shows an Rg distribution that is slightly skewed, the second ensemble, which is presumably the Transition State Ensemble (TSE), shows an excellent Gaussian fit. The denatured ensemble shows large fluctuations, but a Gaussian curve could be fitted. This means that the unfolding process is two-state. Representative structures from each of these ensembles are presented here. PMID:15760474
Spectrum interrogation of fiber acoustic sensor based on self-fitting and differential method.
Fu, Xin; Lu, Ping; Ni, Wenjun; Liao, Hao; Wang, Shun; Liu, Deming; Zhang, Jiangshan
2017-02-20
In this article, we propose an interrogation method for a fiber acoustic sensor that recovers the time-domain signal from the sensor spectrum. The optical spectrum of the sensor shows a ripple waveform when responding to an acoustic signal, owing to the scanning process over a certain wavelength range. The reason behind this phenomenon is the dynamic variation of the sensor spectrum: the intensities at different wavelengths are acquired at different times within a scanning period. The frequency components can be extracted from the ripple spectrum with the aid of the wavelength scanning speed. The signal can be recovered by taking the difference between the ripple spectrum and its self-fitted curve. This differential process eliminates interference caused by environmental perturbations such as temperature or refractive index (RI) changes. The proposed method is appropriate for fiber acoustic sensors based on gratings or interferometers. A long period grating (LPG) is adopted as the acoustic sensor head to demonstrate the feasibility of the interrogation method in experiment. The ability to compensate for environmental fluctuations is also demonstrated.
Efficient Workflows for Curation of Heterogeneous Data Supporting Modeling of U-Nb Alloy Aging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Logan Timothy; Hackenberg, Robert Errol
These are slides from a presentation summarizing a graduate research associate's summer project. The following topics are covered in these slides: data challenges in materials, aging in U-Nb Alloys, Building an Aging Model, Different Phase Trans. in U-Nb, the Challenge, Storing Materials Data, Example Data Source, Organizing Data: What is a Schema?, What does a "XML Schema" look like?, Our Data Schema: Nice and Simple, Storing Data: Materials Data Curation System (MDCS), Problem with MDCS: Slow Data Entry, Getting Literature into MDCS, Staging Data in Excel Document, Final Result: MDCS Records, Analyzing Image Data, Process for Making TTT Diagram, Bottleneckmore » Number 1: Image Analysis, Fitting a TTP Boundary, Fitting a TTP Curve: Comparable Results, How Does it Compare to Our Data?, Image Analysis Workflow, Curating Hardness Records, Hardness Data: Two Key Decisions, Before Peak Age? - Automation, Interactive Viz, Which Transformation?, Microstructure-Informed Model, Tracking the Entire Process, General Problem with Property Models, Pinyon: Toolkit for Managing Model Creation, Tracking Individual Decisions, Jupyter: Docs and Code in One File, Hardness Analysis Workflow, Workflow for Aging Models, and conclusions.« less
NASA Astrophysics Data System (ADS)
Chilenski, M. A.; Greenwald, M. J.; Hubbard, A. E.; Hughes, J. W.; Lee, J. P.; Marzouk, Y. M.; Rice, J. E.; White, A. E.
2017-12-01
It remains an open question to explain the dramatic change in intrinsic rotation induced by slight changes in electron density (White et al 2013 Phys. Plasmas 20 056106). One proposed explanation is that momentum transport is sensitive to the second derivatives of the temperature and density profiles (Lee et al 2015 Plasma Phys. Control. Fusion 57 125006), but it is widely considered to be impossible to measure these higher derivatives. In this paper, we show that it is possible to estimate second derivatives of electron density and temperature using a nonparametric regression technique known as Gaussian process regression. This technique avoids over-constraining the fit by not assuming an explicit functional form for the fitted curve. The uncertainties, obtained rigorously using Markov chain Monte Carlo sampling, are small enough that it is reasonable to explore hypotheses which depend on second derivatives. It is found that the differences in the second derivatives of n_e and T_e between the peaked and hollow rotation cases are rather small, suggesting that changes in the second derivatives are not likely to explain the experimental results.
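Because a Gaussian process posterior mean is a linear combination of kernel functions, its second derivative is available in closed form by differentiating the kernel twice. A minimal sketch assuming a squared-exponential kernel with fixed hyperparameters; the paper's actual kernel choice and MCMC treatment of hyperparameters are not reproduced here:

```python
import numpy as np

def rbf(a, b, ell, sf):
    """Squared-exponential kernel matrix between point sets a and b."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell)**2)

def gp_weights(x, y, ell, sf, noise):
    """Solve (K + noise*I) alpha = y; the posterior mean is k(xs, x) @ alpha."""
    K = rbf(x, x, ell, sf) + noise * np.eye(len(x))
    return np.linalg.solve(K, y)

def gp_mean_d2(xs, x, alpha, ell, sf):
    """Second derivative of the posterior mean: differentiate the kernel
    twice with respect to the prediction point, then apply the same weights."""
    d = xs[:, None] - x[None, :]
    k = sf**2 * np.exp(-0.5 * (d / ell)**2)
    return (k * (d**2 / ell**4 - 1.0 / ell**2)) @ alpha

# toy "profile" with known curvature: y = x^2, so y'' = 2 everywhere
x = np.linspace(-2.0, 2.0, 41)
y = x**2
alpha = gp_weights(x, y, ell=1.0, sf=1.0, noise=1e-6)
d2 = gp_mean_d2(np.array([0.0]), x, alpha, ell=1.0, sf=1.0)
```

The same weight vector alpha serves the mean and all its derivatives, which is why no finite differencing of noisy data is ever needed.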
NASA Astrophysics Data System (ADS)
Bălău, Oana; Bica, Doina; Koneracka, Martina; Kopčansky, Peter; Susan-Resiga, Daniela; Vékás, Ladislau
Rheological and magnetorheological behaviour of monolayer and double-layer sterically stabilized magnetic fluids, with transformer oil (UTR), dioctyl sebacate (DOS), heptanol (Hept), pentanol (Pent) and water (W) as carrier liquids, was investigated. The data for the volume-concentration dependence of dynamic viscosity of the high-colloidal-stability UTR, DOS, Hept and Pent samples are particularly well fitted by the formulas given by Vand (1948) and Chow (1994). The Chow-type dependence proved its universal character, as the viscosity data for dilution series of various magnetic fluids are well fitted by the same curve, regardless of the nonpolar or polar character of the sample. The magnetorheological effect measured for low- and medium-concentration water-based magnetic fluids is much higher, due to the agglomerate formation process, than the corresponding values obtained for the well-stabilized UTR, DOS, Hept and Pent samples, even at very high volume fractions of magnetic nanoparticles.
TH-EF-207A-04: A Dynamic Contrast Enhanced Cone Beam CT Technique for Evaluation of Renal Functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Z; Shi, J; Yang, Y
Purpose: To develop a simple but robust method for the early detection and evaluation of renal functions using a dynamic contrast enhanced cone beam CT technique. Methods: Experiments were performed on an integrated imaging and radiation research platform developed by our lab. Animals (n=3) were anesthetized with 20 μL ketamine/xylazine cocktail, and then received a 200 μL injection of the iodinated contrast agent Iopamidol via tail vein. Cone beam CT was acquired following contrast injection once per minute and up to 25 minutes. The cone beam CT was reconstructed with a dimension of 300×300×800 voxels at 130×130×130 μm voxel resolution. The middle kidney slices in the transverse and coronal planes were selected for image analysis. A double exponential function was used to fit the contrast enhanced signal intensity versus the time after contrast injection. Both pixel-based and region of interest (ROI)-based curve fitting were performed. Four parameters obtained from the curve fitting, namely the amplitude and flow constant for both contrast wash-in and wash-out phases, were investigated for further analysis. Results: Robust curve fitting was demonstrated for both pixel-based (with R^2>0.8 for >85% of pixels within the kidney contour) and ROI-based (R^2>0.9 for all regions) analysis. Three different functional regions (renal pelvis, medulla and cortex) were clearly differentiated in the functional parameter map in the pixel-based analysis. ROI-based analysis showed that the half-lives T1/2 for the contrast wash-in and wash-out phases were 0.98±0.15 and 17.04±7.16, 0.63±0.07 and 17.88±4.51, and 1.48±0.40 and 10.79±3.88 minutes for the renal pelvis, medulla and cortex, respectively. Conclusion: A robust method based on dynamic contrast enhanced cone beam CT and double exponential curve fitting has been developed to analyze renal function in different functional regions.
Future studies will investigate the sensitivity of this technique in the detection of radiation-induced kidney dysfunction.
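A double exponential fit of this kind can be sketched with a generic nonlinear least-squares routine. The wash-in/wash-out parameterization below is an assumption for illustration (the abstract does not give the exact functional form); half-lives follow as T1/2 = ln 2 / k:

```python
import numpy as np
from scipy.optimize import curve_fit

def enhancement(t, a_in, k_in, a_out, k_out):
    """Hypothetical double-exponential enhancement curve: a fast wash-in
    term minus a slower wash-out term, each with an amplitude and a rate."""
    return a_in * (1 - np.exp(-k_in * t)) - a_out * (1 - np.exp(-k_out * t))

t = np.linspace(0.0, 25.0, 251)                  # minutes after injection
signal = enhancement(t, 120.0, 0.8, 80.0, 0.08)  # synthetic, noise-free data
popt, _ = curve_fit(enhancement, t, signal, p0=(100.0, 0.5, 60.0, 0.05))
t_half_out = np.log(2) / popt[3]                 # wash-out half-life, minutes
```

Pixel-based analysis would simply repeat this fit for every voxel time course and map the four fitted parameters over the kidney contour.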
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-16
... Under Secretary of Defense for Acquisition, Technology, & Logistics (USD(AT&L)), dated November 3, 2010... cost, share lines, and ceiling price. This regulation is not a ``one-size- fits-all'' mandate. However.../optimistic weighted average and ensure that their cost curves do not mirror cost-plus-fixed-fee cost curves...
Comparative Evaluation of Two Serial Gene Expression Experiments | Division of Cancer Prevention
Stuart G. Baker, 2014 Introduction This program fits biologically relevant response curves in a comparative analysis of two gene expression experiments involving the same genes under different scenarios, with at least 12 responses. The program outputs gene pairs with biologically relevant response-curve shapes, including flat, linear, sigmoid, hockey stick, impulse and step.
ERIC Educational Resources Information Center
Chien, Yu-Yi Grace
2016-01-01
The research described in this article concludes that the widely cited U-curve hypothesis is no longer supported by research data because the adjustment of international postgraduate students is a complex phenomenon that does not fit easily with attempts to define and categorize it. Methodological issues, different internal and external factors,…
Multivariate Epi-splines and Evolving Function Identification Problems
2015-04-15
such extrinsic information as well as observed function and subgradient values often evolve in applications, we establish conditions under which the...previous study [30] dealt with compact intervals of IR. Splines are intimately tied to optimization problems through their variational theory pioneered...approximation. Motivated by applications in curve fitting, regression, probability density estimation, variogram computation, financial curve construction
Connock, Martin; Hyde, Chris; Moore, David
2011-10-01
The UK National Institute for Health and Clinical Excellence (NICE) has used its Single Technology Appraisal (STA) programme to assess several drugs for cancer. Typically, the evidence submitted by the manufacturer comes from one short-term randomized controlled trial (RCT) demonstrating improvement in overall survival and/or in delay of disease progression, and these are the pre-eminent drivers of cost effectiveness. We draw attention to key issues encountered in assessing the quality and rigour of the manufacturers' modelling of overall survival and disease progression. Our examples are two recent STAs: sorafenib (Nexavar®) for advanced hepatocellular carcinoma, and azacitidine (Vidaza®) for higher-risk myelodysplastic syndromes (MDS). The choice of parametric model had a large effect on the predicted treatment-dependent survival gain. Logarithmic models (log-Normal and log-logistic) delivered double the survival advantage that was derived from Weibull models. Both submissions selected the logarithmic fits for their base-case economic analyses and justified selection solely on Akaike Information Criterion (AIC) scores. AIC scores in the azacitidine submission failed to match the choice of the log-logistic over Weibull or exponential models, and the modelled survival in the intervention arm lacked face validity. AIC scores for sorafenib models favoured log-Normal fits; however, since there is no statistical method for comparing AIC scores, and differences may be trivial, it is generally advised that the plausibility of competing models should be tested against external data and explored in diagnostic plots. Function fitting to observed data should not be a mechanical process validated by a single crude indicator (AIC). Projective models should show clear plausibility for the patients concerned and should be consistent with other published information. Multiple rather than single parametric functions should be explored and tested with diagnostic plots. 
When trials have survival curves with long tails exhibiting few events then the robustness of extrapolations using information in such tails should be tested.
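The model-selection step criticized above, choosing among parametric survival fits by AIC score alone, can be illustrated with a toy comparison. The data below are synthetic and uncensored, whereas real STA submissions involve censored trial data, so this is only a sketch of the AIC mechanics:

```python
import numpy as np
from scipy import stats

def aic(loglik, n_params):
    # Akaike information criterion; lower is "preferred"
    return 2 * n_params - 2 * loglik

rng = np.random.default_rng(0)
t = rng.lognormal(mean=1.0, sigma=0.8, size=500)  # synthetic survival times

# Weibull fit, location fixed at zero (2 free parameters)
c, loc_w, scale_w = stats.weibull_min.fit(t, floc=0)
aic_weibull = aic(stats.weibull_min.logpdf(t, c, loc_w, scale_w).sum(), 2)

# log-Normal fit, location fixed at zero (2 free parameters)
s, loc_l, scale_l = stats.lognorm.fit(t, floc=0)
aic_lognormal = aic(stats.lognorm.logpdf(t, s, loc_l, scale_l).sum(), 2)
```

As the review argues, an AIC difference like this one says nothing by itself about plausibility of the extrapolated tail; diagnostic plots and external data are still required.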
Fixture For Drilling And Tapping A Curved Workpiece
NASA Technical Reports Server (NTRS)
Espinosa, P. S.; Lockyer, R. T.
1992-01-01
Simple fixture guides drilling and tapping of holes in prescribed locations and orientations on workpiece having curved surface. Tool conceived for use in reworking complexly curved helicopter blades made of composite materials. Fixture is block of rigid foam with epoxy filler, custom-fitted to surface contour, containing bushings and sleeves at drilling and tapping sites. Bushings changed, so taps and drills of various sizes accommodated. In use, fixture secured to surface by hold-down bolts extending through sleeves and into threads in substrate.
Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)
2002-01-01
We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
Method for Making Measurements of the Post-Combustion Residence Time in a Gas Turbine Engine
NASA Technical Reports Server (NTRS)
Miles, Jeffrey H (Inventor)
2015-01-01
A system and method of measuring a residence time in a gas-turbine engine is provided, whereby the method includes placing pressure sensors at a combustor entrance and at a turbine exit of the gas-turbine engine and measuring a combustor pressure at the combustor entrance and a turbine exit pressure at the turbine exit. The method further includes computing cross-spectrum functions between a combustor pressure sensor signal from the measured combustor pressure and a turbine exit pressure sensor signal from the measured turbine exit pressure, applying a linear curve fit to the cross-spectrum functions, and computing a post-combustion residence time from the linear curve fit.
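A linear fit to cross-spectrum phase recovers a pure time delay, which is the essence of such a method: for a delay tau between two sensors, the cross-spectrum phase is phi(f) = -2*pi*f*tau, so the slope of a linear fit gives tau. A sketch with synthetic broadband signals; the actual combustor/turbine pressure data and any coherence weighting in the patented method are not reproduced:

```python
import numpy as np

fs = 1000.0                       # sampling rate, Hz
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
x = rng.standard_normal(t.size)   # "combustor entrance" pressure (synthetic)
delay = 0.015                     # 15 ms stand-in for the residence time
y = np.roll(x, int(delay * fs))   # "turbine exit" signal: delayed copy

# cross-spectrum phase is -2*pi*f*tau for a pure delay tau
X, Y = np.fft.rfft(x), np.fft.rfft(y)
f = np.fft.rfftfreq(x.size, 1 / fs)
phase = np.unwrap(np.angle(Y * np.conj(X)))
band = (f > 1) & (f < 100)        # fit over a band with good coherence
slope = np.polyfit(f[band], phase[band], 1)[0]
tau = -slope / (2 * np.pi)        # recovered delay, seconds
```

Restricting the fit to a frequency band where the two signals are coherent is what makes the linear fit meaningful on real engine data.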
ASYMPTOTICS FOR CHANGE-POINT MODELS UNDER VARYING DEGREES OF MIS-SPECIFICATION
SONG, RUI; BANERJEE, MOULINATH; KOSOROK, MICHAEL R.
2015-01-01
Change-point models are widely used by statisticians to model drastic changes in the pattern of observed data. Least squares/maximum likelihood based estimation of change-points leads to curious asymptotic phenomena. When the change-point model is correctly specified, such estimates generally converge at a fast rate (n) and are asymptotically described by minimizers of a jump process. Under complete mis-specification by a smooth curve, i.e. when a change-point model is fitted to data described by a smooth curve, the rate of convergence slows down to n^(1/3) and the limit distribution changes to that of the minimizer of a continuous Gaussian process. In this paper we provide a bridge between these two extreme scenarios by studying the limit behavior of change-point estimates under varying degrees of model mis-specification by smooth curves, which can be viewed as local alternatives. We find that the limiting regime depends on how quickly the alternatives approach a change-point model. We unravel a family of 'intermediate' limits that can transition, at least qualitatively, to the limits in the two extreme scenarios. The theoretical results are illustrated via a set of carefully designed simulations. We also demonstrate how inference for the change-point parameter can be performed in absence of knowledge of the underlying scenario by resorting to subsampling techniques that involve estimation of the convergence rate. PMID:26681814
NASA Astrophysics Data System (ADS)
Suhaila, Jamaludin; Jemain, Abdul Aziz; Hamdan, Muhammad Fauzee; Wan Zin, Wan Zawiah
2011-12-01
Summary: Normally, rainfall data are collected on a daily, monthly or annual basis in the form of discrete observations. The aim of this study is to convert these rainfall values into a smooth curve or function that can represent the continuous rainfall process in each region via a technique known as functional data analysis. Since rainfall data show a periodic pattern in each region, a Fourier basis is introduced to capture these variations. Eleven basis functions with five harmonics are used to describe the unimodal rainfall pattern for stations in the East, while five basis functions representing two harmonics are needed to describe the rainfall pattern in the West. Based on the fitted smooth curve, the wet and dry periods as well as the maximum and minimum rainfall values can be determined. Different rainfall patterns are observed among the studied regions based on the smooth curve. Using functional analysis of variance, the test results indicate that significant differences exist in the functional means between regions. The largest differences in the functional means are found between the East and Northwest regions; these differences are probably due to the effects of topography and geographical location, and are mostly influenced by the monsoons. Therefore, the same inputs or approaches might not be useful in modeling the hydrological process for different regions.
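Fourier-basis smoothing of this kind reduces to ordinary least squares once the basis is fixed: a constant plus one sine/cosine pair per harmonic, so five harmonics give the eleven basis functions mentioned above. A sketch on synthetic daily rainfall, assuming an annual period:

```python
import numpy as np

def fourier_design(day, period=365.0, harmonics=5):
    """Constant plus a sine/cosine pair per harmonic: 2*harmonics + 1 columns."""
    cols = [np.ones_like(day)]
    for h in range(1, harmonics + 1):
        w = 2 * np.pi * h * day / period
        cols += [np.sin(w), np.cos(w)]
    return np.column_stack(cols)

day = np.arange(365.0)
# synthetic daily rainfall with an annual and a semi-annual component
rain = 5 + 3 * np.sin(2 * np.pi * day / 365) + 1.5 * np.cos(4 * np.pi * day / 365)
basis = fourier_design(day)                # 11 basis functions, 5 harmonics
coef, *_ = np.linalg.lstsq(basis, rain, rcond=None)
smooth = basis @ coef                      # fitted smooth rainfall curve
```

Wet and dry periods then correspond to the maxima and minima of the fitted curve rather than of the noisy daily observations.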
Surface Fitting for Quasi Scattered Data from Coordinate Measuring Systems.
Mao, Qing; Liu, Shugui; Wang, Sen; Ma, Xinhui
2018-01-13
Non-uniform rational B-spline (NURBS) surface fitting from data points is widely used in the fields of computer aided design (CAD), medical imaging, cultural relic representation and object-shape detection. Usually, the measured data acquired from coordinate measuring systems is neither gridded nor completely scattered. The distribution of this kind of data is scattered in physical space, but the data points are stored in a way consistent with the order of measurement, so it is named quasi scattered data in this paper. Therefore they can be organized into rows easily, but the number of points in each row is random. In order to overcome the difficulty of surface fitting from this kind of data, a new method based on resampling is proposed. It consists of three major steps: (1) NURBS curve fitting for each row, (2) resampling on the fitted curve and (3) surface fitting from the resampled data. An iterative projection optimization scheme is applied in the first and third steps to yield advisable parameterization and reduce the time cost of projection. A resampling approach based on parameters, local peaks and contour curvature is proposed to overcome the problems of node redundancy and high time consumption in the fitting of this kind of scattered data. Numerical experiments are conducted with both simulated and practical data, and the results show that the proposed method is fast, effective and robust. Moreover, analysis of fitting results obtained from data with different degrees of scatter demonstrates that the error introduced by resampling is negligible and therefore the method is feasible.
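Steps (1) and (2) of the method, fitting a curve to each row and then resampling on the fitted curve, can be sketched with SciPy's parametric B-splines as a stand-in for NURBS. The iterative projection optimization and the curvature-based resampling of the paper are not reproduced; resampling here is simply uniform in parameter:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# one "row" of quasi scattered points along a quarter circle, unevenly spaced
rng = np.random.default_rng(2)
theta = np.sort(rng.uniform(0.0, np.pi / 2, 30))
row = np.vstack([np.cos(theta), np.sin(theta)])

# step 1: fit a parametric (interpolating) B-spline curve to the row
tck, u = splprep(row, s=0)
# step 2: resample the fitted curve, here uniformly in parameter
u_new = np.linspace(0.0, 1.0, 50)
x_new, y_new = splev(u_new, tck)
radius = np.hypot(x_new, y_new)  # resampled points should stay near the circle
```

Repeating this per row yields a regular grid of resampled points from which a surface can be fitted in step (3).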
Fitting Prony Series To Data On Viscoelastic Materials
NASA Technical Reports Server (NTRS)
Hill, S. A.
1995-01-01
Improved method of fitting Prony series to data on viscoelastic materials involves use of least-squares optimization techniques. Method based on optimization techniques yields closer correlation with data than traditional method. Involves no assumptions regarding the gamma'(sub i)s and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in data. Curve-fitting problem treated as design-optimization problem and solved by use of partially-constrained-optimization techniques.
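A Prony series G(t) = G_inf + sum_i g_i exp(-t/tau_i) becomes a plain linear least-squares problem once the relaxation times tau_i are fixed, which is the common simplification the NASA method improves on by treating the full fit as a constrained design-optimization problem. A minimal sketch of the linear variant, with synthetic data and illustrative relaxation times:

```python
import numpy as np

def prony_fit(t, g, taus):
    """Linear least-squares Prony fit with fixed relaxation times:
    G(t) = G_inf + sum_i g_i * exp(-t / tau_i)."""
    A = np.column_stack([np.ones_like(t)] + [np.exp(-t / tau) for tau in taus])
    coef, *_ = np.linalg.lstsq(A, g, rcond=None)
    return coef  # [G_inf, g_1, ..., g_n]

t = np.linspace(0.0, 10.0, 200)
g = 1.0 + 0.5 * np.exp(-t / 0.5) + 0.25 * np.exp(-t / 3.0)  # synthetic modulus
coef = prony_fit(t, g, taus=[0.5, 3.0])
```

Letting the tau_i vary as well turns this into the nonlinear, partially constrained problem the tech brief describes.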
2013-01-01
Background Immunoassays that employ multiplexed bead arrays produce high information content per sample. Such assays are now frequently used to evaluate humoral responses in clinical trials. Integrated software is needed for the analysis, quality control, and secure sharing of the high volume of data produced by such multiplexed assays. Software that facilitates data exchange and provides flexibility to perform customized analyses (including multiple curve fits and visualizations of assay performance over time) could increase scientists’ capacity to use these immunoassays to evaluate human clinical trials. Results The HIV Vaccine Trials Network and the Statistical Center for HIV/AIDS Research and Prevention collaborated with LabKey Software to enhance the open source LabKey Server platform to facilitate workflows for multiplexed bead assays. This system now supports the management, analysis, quality control, and secure sharing of data from multiplexed immunoassays that leverage Luminex xMAP® technology. These assays may be custom or kit-based. Newly added features enable labs to: (i) import run data from spreadsheets output by Bio-Plex Manager™ software; (ii) customize data processing, curve fits, and algorithms through scripts written in common languages, such as R; (iii) select script-defined calculation options through a graphical user interface; (iv) collect custom metadata for each titration, analyte, run and batch of runs; (v) calculate dose–response curves for titrations; (vi) interpolate unknown concentrations from curves for titrated standards; (vii) flag run data for exclusion from analysis; (viii) track quality control metrics across runs using Levey-Jennings plots; and (ix) automatically flag outliers based on expected values. Existing system features allow researchers to analyze, integrate, visualize, export and securely share their data, as well as to construct custom user interfaces and workflows. 
Conclusions Unlike other tools tailored for Luminex immunoassays, LabKey Server allows labs to customize their Luminex analyses using scripting while still presenting users with a single, graphical interface for processing and analyzing data. The LabKey Server system also stands out among Luminex tools for enabling smooth, secure transfer of data, quality control information, and analyses between collaborators. LabKey Server and its Luminex features are freely available as open source software at http://www.labkey.com under the Apache 2.0 license. PMID:23631706
Eckels, Josh; Nathe, Cory; Nelson, Elizabeth K; Shoemaker, Sara G; Nostrand, Elizabeth Van; Yates, Nicole L; Ashley, Vicki C; Harris, Linda J; Bollenbeck, Mark; Fong, Youyi; Tomaras, Georgia D; Piehler, Britt
2013-04-30
Immunoassays that employ multiplexed bead arrays produce high information content per sample. Such assays are now frequently used to evaluate humoral responses in clinical trials. Integrated software is needed for the analysis, quality control, and secure sharing of the high volume of data produced by such multiplexed assays. Software that facilitates data exchange and provides flexibility to perform customized analyses (including multiple curve fits and visualizations of assay performance over time) could increase scientists' capacity to use these immunoassays to evaluate human clinical trials. The HIV Vaccine Trials Network and the Statistical Center for HIV/AIDS Research and Prevention collaborated with LabKey Software to enhance the open source LabKey Server platform to facilitate workflows for multiplexed bead assays. This system now supports the management, analysis, quality control, and secure sharing of data from multiplexed immunoassays that leverage Luminex xMAP® technology. These assays may be custom or kit-based. Newly added features enable labs to: (i) import run data from spreadsheets output by Bio-Plex Manager™ software; (ii) customize data processing, curve fits, and algorithms through scripts written in common languages, such as R; (iii) select script-defined calculation options through a graphical user interface; (iv) collect custom metadata for each titration, analyte, run and batch of runs; (v) calculate dose-response curves for titrations; (vi) interpolate unknown concentrations from curves for titrated standards; (vii) flag run data for exclusion from analysis; (viii) track quality control metrics across runs using Levey-Jennings plots; and (ix) automatically flag outliers based on expected values. Existing system features allow researchers to analyze, integrate, visualize, export and securely share their data, as well as to construct custom user interfaces and workflows. 
Unlike other tools tailored for Luminex immunoassays, LabKey Server allows labs to customize their Luminex analyses using scripting while still presenting users with a single, graphical interface for processing and analyzing data. The LabKey Server system also stands out among Luminex tools for enabling smooth, secure transfer of data, quality control information, and analyses between collaborators. LabKey Server and its Luminex features are freely available as open source software at http://www.labkey.com under the Apache 2.0 license.
NASA Astrophysics Data System (ADS)
Brandt, Adam Robert
This dissertation explores the environmental and economic impacts of the transition to hydrocarbon substitutes for conventional petroleum (SCPs). First, mathematical models of oil depletion are reviewed, including the Hubbert model, curve-fitting methods, simulation models, and economic models. The benefits and drawbacks of each method are outlined. I discuss the predictive value of the models and our ability to determine if one model type works best. I argue that forecasting oil depletion without also including substitution with SCPs results in unrealistic projections of future energy supply. I next use information theoretic techniques to test the Hubbert model of oil depletion against five other asymmetric and symmetric curve-fitting models using data from 139 oil producing regions. I also test the assumptions that production curves are symmetric and that production is more bell-shaped in larger regions. Results show that if symmetry is enforced, Gaussian production curves perform best, while if asymmetry is allowed, asymmetric exponential models prove most useful. I also find strong evidence for asymmetry: production declines are consistently less steep than inclines. In order to understand the impacts of oil depletion on GHG emissions, I developed the Regional Optimization Model for Emissions from Oil Substitutes (ROMEO). ROMEO is an economic optimization model of investment and production of fuels. Results indicate that incremental emissions (with demand held constant) from SCPs could be 5-20 GtC over the next 50 years. These results are sensitive to the endowment of conventional oil and not sensitive to a carbon tax. If demand can vary, total emissions could decline under a transition because the higher cost of SCPs lessens overall fuel consumption. Lastly, I study the energetic and environmental characteristics of the in situ conversion process, which utilizes electricity to generate liquid hydrocarbons from oil shale. 
I model the energy inputs and outputs of the ICP and use them to calculate the GHG emissions from the ICP. Energy outputs (as refined liquid fuel) range from 1.2 to 1.6 times the total primary energy inputs. Well-to-tank greenhouse gas emissions range from 30.6 to 37.1 gCeq./MJ of final fuel delivered, 21 to 47% larger than those from conventionally produced petroleum-based fuels.
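The Hubbert model tested in the dissertation is the derivative of a logistic cumulative-production curve, so fitting it to a production series is a small nonlinear least-squares problem. A sketch on synthetic, noise-free data; the parameter names are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def hubbert(t, urr, b, tm):
    """Logistic-derivative (Hubbert) production curve: urr is the ultimate
    recovery, b the steepness, tm the year of peak production."""
    e = np.exp(-b * (t - tm))
    return urr * b * e / (1 + e) ** 2

years = np.arange(1900.0, 2000.0)
prod = hubbert(years, 200.0, 0.08, 1960.0)   # synthetic production series
popt, _ = curve_fit(hubbert, years, prod, p0=(150.0, 0.05, 1955.0))
```

The asymmetric-exponential alternatives favored by the information-theoretic comparison replace this symmetric bell with separate incline and decline rates.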
Light-curve modelling constraints on the obliquities and aspect angles of the young Fermi pulsars
NASA Astrophysics Data System (ADS)
Pierbattista, M.; Harding, A. K.; Grenier, I. A.; Johnson, T. J.; Caraveo, P. A.; Kerr, M.; Gonthier, P. L.
2015-03-01
In more than four years of observation the Large Area Telescope on board the Fermi satellite has identified pulsed γ-ray emission from more than 80 young or middle-aged pulsars, in most cases providing light curves with high statistics. Fitting the observed profiles with geometrical models can provide estimates of the magnetic obliquity α and of the line of sight angle ζ, yielding estimates of the radiation beaming factor and radiated luminosity. Using different γ-ray emission geometries (Polar Cap, Slot Gap, Outer Gap, One Pole Caustic) and core plus cone geometries for the radio emission, we fit γ-ray light curves for 76 young or middle-aged pulsars and we jointly fit their γ-ray plus radio light curves when possible. We find that a joint radio plus γ-ray fit strategy is important to obtain (α,ζ) estimates that can explain simultaneously detectable radio and γ-ray emission: when the radio emission is available, the inclusion of the radio light curve in the fit leads to important changes in the (α,ζ) solutions. The most pronounced changes are observed for Outer Gap and One Pole Caustic models for which the γ-ray only fit leads to underestimated α or ζ when the solution is found to the left or to the right of the main α-ζ plane diagonal respectively. The intermediate-to-high altitude magnetosphere models, Slot Gap, Outer Gap, and One Pole Caustic, are favoured in explaining the observations. We find no apparent evolution of α on a time scale of 10^6 years. For all emission geometries our derived γ-ray beaming factors are generally less than one and do not significantly evolve with the spin-down power. A more pronounced beaming factor vs. spin-down power correlation is observed for Slot Gap model and radio-quiet pulsars and for the Outer Gap model and radio-loud pulsars. The beaming factor distributions exhibit a large dispersion that is less pronounced for the Slot Gap case and that decreases from radio-quiet to radio-loud solutions. 
For all models, the correlation between γ-ray luminosity and spin-down power is consistent with a square root dependence. The γ-ray luminosities obtained by using the beaming factors estimated in the framework of each model do not exceed the spin-down power. This suggests that assuming a beaming factor of one for all objects, as done in other studies, likely overestimates the real values. The data show a relation between the pulsar spectral characteristics and the width of the accelerator gap. The relation obtained in the case of the Slot Gap model is consistent with the theoretical prediction. Appendices are available in electronic form at http://www.aanda.org
Light-curve modelling constraints on the obliquities and aspect angles of the young Fermi pulsars
Pierbattista, M.; Harding, A. K.; Grenier, I. A.; ...
2015-02-10
In more than four years of observation the Large Area Telescope on board the Fermi satellite has identified pulsed γ-ray emission from more than 80 young or middle-aged pulsars, in most cases providing light curves with high statistics. Fitting the observed profiles with geometrical models can provide estimates of the magnetic obliquity α and of the line of sight angle ζ, yielding estimates of the radiation beaming factor and radiated luminosity. Using different γ-ray emission geometries (Polar Cap, Slot Gap, Outer Gap, One Pole Caustic) and core plus cone geometries for the radio emission, we fit γ-ray light curves formore » 76 young or middle-aged pulsars and we jointly fit their γ-ray plus radio light curves when possible. We find that a joint radio plus γ-ray fit strategy is important to obtain (α,ζ) estimates that can explain simultaneously detectable radio and γ-ray emission: when the radio emission is available, the inclusion of the radio light curve in the fit leads to important changes in the (α,ζ) solutions. The most pronounced changes are observed for Outer Gap and One Pole Caustic models for which the γ-ray only fit leads to underestimated α or ζ when the solution is found to the left or to the right of the main α-ζ plane diagonal respectively. The intermediate-to-high altitude magnetosphere models, Slot Gap, Outer Gap, and One pole Caustic, are favoured in explaining the observations. We find no apparent evolution of α on a time scale of 106 years. For all emission geometries our derived γ-ray beaming factors are generally less than one and do not significantly evolve with the spin-down power. A more pronounced beaming factor vs. spin-down power correlation is observed for Slot Gap model and radio-quiet pulsars and for the Outer Gap model and radio-loud pulsars. The beaming factor distributions exhibit a large dispersion that is less pronounced for the Slot Gap case and that decreases from radio-quiet to radio-loud solutions. 
For all models, the correlation between γ-ray luminosity and spin-down power is consistent with a square root dependence. The γ-ray luminosities obtained by using the beaming factors estimated in the framework of each model do not exceed the spin-down power. This suggests that assuming a beaming factor of one for all objects, as done in other studies, likely overestimates the real values. The data show a relation between the pulsar spectral characteristics and the width of the accelerator gap. Furthermore, the relation obtained in the case of the Slot Gap model is consistent with the theoretical prediction.
Light-Curve Modelling Constraints on the Obliquities and Aspect Angles of the Young Fermi Pulsars
NASA Technical Reports Server (NTRS)
Pierbattista, M.; Harding, A. K.; Grenier, I. A.; Johnson, T. J.; Caraveo, P. A.; Kerr, M.; Gonthier, P. L.
2015-01-01
In more than four years of observation the Large Area Telescope on board the Fermi satellite has identified pulsed gamma-ray emission from more than 80 young or middle-aged pulsars, in most cases providing light curves with high statistics. Fitting the observed profiles with geometrical models can provide estimates of the magnetic obliquity alpha and of the line of sight angle zeta, yielding estimates of the radiation beaming factor and radiated luminosity. Using different gamma-ray emission geometries (Polar Cap, Slot Gap, Outer Gap, One Pole Caustic) and core plus cone geometries for the radio emission, we fit gamma-ray light curves for 76 young or middle-aged pulsars and we jointly fit their gamma-ray plus radio light curves when possible. We find that a joint radio plus gamma-ray fit strategy is important to obtain (alpha, zeta) estimates that can explain simultaneously detectable radio and gamma-ray emission: when the radio emission is available, the inclusion of the radio light curve in the fit leads to important changes in the (alpha, zeta) solutions. The most pronounced changes are observed for the Outer Gap and One Pole Caustic models, for which the gamma-ray-only fit leads to underestimated alpha or zeta when the solution is found to the left or to the right of the main alpha-zeta plane diagonal, respectively. The intermediate-to-high-altitude magnetosphere models (Slot Gap, Outer Gap, and One Pole Caustic) are favored in explaining the observations. We find no apparent evolution of alpha on a time scale of 10(exp 6) years. For all emission geometries our derived gamma-ray beaming factors are generally less than one and do not significantly evolve with the spin-down power. A more pronounced beaming factor vs. spin-down power correlation is observed for the Slot Gap model and radio-quiet pulsars and for the Outer Gap model and radio-loud pulsars.
The beaming factor distributions exhibit a large dispersion that is less pronounced for the Slot Gap case and that decreases from radio-quiet to radio-loud solutions. For all models, the correlation between gamma-ray luminosity and spin-down power is consistent with a square root dependence. The gamma-ray luminosities obtained by using the beaming factors estimated in the framework of each model do not exceed the spin-down power. This suggests that assuming a beaming factor of one for all objects, as done in other studies, likely overestimates the real values. The data show a relation between the pulsar spectral characteristics and the width of the accelerator gap. The relation obtained in the case of the Slot Gap model is consistent with the theoretical prediction.
Oleson, Jacob J; Cavanaugh, Joseph E; McMurray, Bob; Brown, Grant
2015-01-01
In multiple fields of study, time series measured at high frequencies are used to estimate population curves that describe the temporal evolution of some characteristic of interest. These curves are typically nonlinear, and the deviations of each series from the corresponding curve are highly autocorrelated. In this scenario, we propose a procedure to compare the response curves for different groups at specific points in time. The method involves fitting the curves, performing potentially hundreds of serially correlated tests, and appropriately adjusting the overall alpha level of the tests. Our motivating application comes from psycholinguistics and the visual world paradigm. We describe how the proposed technique can be adapted to compare fixation curves within subjects as well as between groups. Our results lead to conclusions beyond the scope of previous analyses. PMID:26400088
Evapotranspiration Controls Imposed by Soil Moisture: A Spatial Analysis across the United States
NASA Astrophysics Data System (ADS)
Rigden, A. J.; Tuttle, S. E.; Salvucci, G.
2014-12-01
We spatially analyze the control over evapotranspiration (ET) imposed by soil moisture across the United States using daily estimates of satellite-derived soil moisture and data-driven ET over a nine-year period (June 2002-June 2011) at 305 locations. The soil moisture data are developed using 0.25-degree resolution satellite observations from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E), where the 9-year time series for each 0.25-degree pixel was selected from three potential algorithms (VUA-NASA, U. Montana, & NASA) based on the maximum mutual information between soil moisture and precipitation (Tuttle & Salvucci (2014), Remote Sens Environ, 114: 207-222). The ET data are developed independently of soil moisture using an emergent relationship between the diurnal cycle of the relative humidity profile and ET. The emergent relation is that the vertical variance of the relative humidity profile is less than what would occur for increased or decreased ET rates, suggesting that land-atmosphere feedback processes minimize this variance (Salvucci and Gentine (2013), PNAS, 110(16): 6287-6291). The key advantage of using this approach to estimate ET is that no measurements of surface limiting factors (soil moisture, leaf area, canopy conductance) are required; instead, ET is estimated from meteorological data measured at 305 common weather stations that are approximately uniformly distributed across the United States. The combination of these two independent datasets allows for a unique spatial analysis of the control on ET imposed by the availability of soil moisture. We fit evaporation efficiency curves at each of the 305 sites during the summertime (May-June-July-August-September). Spatial patterns are visualized by mapping optimal curve-fitting coefficients across the United States. An analysis of efficiency curves and their spatial patterns will be presented.
REFLECTED LIGHT CURVES, SPHERICAL AND BOND ALBEDOS OF JUPITER- AND SATURN-LIKE EXOPLANETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyudina, Ulyana; Kopparla, Pushkar; Ingersoll, Andrew P.
Reflected light curves observed for exoplanets indicate that a few of them host bright clouds. We estimate how the light curve and total stellar heating of a planet depend on forward and backward scattering in the clouds, based on Pioneer and Cassini spacecraft images of Jupiter and Saturn. We fit analytical functions to the local reflected brightnesses of Jupiter and Saturn depending on the planet's phase. These observations cover broadbands at 0.59–0.72 and 0.39–0.5 μm, and narrowbands at 0.938 (atmospheric window), 0.889 (CH4 absorption band), and 0.24–0.28 μm. We simulate the images of the planets with a ray-tracing model, and disk-integrate them to produce the full-orbit light curves. For Jupiter, we also fit the modeled light curves to the observed full-disk brightness. We derive spherical albedos for Jupiter and Saturn, and for planets with Lambertian and Rayleigh-scattering atmospheres. Jupiter-like atmospheres can produce light curves that are a factor of two fainter at half-phase than the Lambertian planet, given the same geometric albedo at transit. The spherical albedo is typically lower than for a Lambertian planet by up to a factor of ∼1.5. The Lambertian assumption will underestimate the absorption of the stellar light and the equilibrium temperature of the planetary atmosphere. We also compare our light curves with the light curves of solid bodies: the moons Enceladus and Callisto. Their strong backscattering peak within a few degrees of opposition (secondary eclipse) can lead to an even stronger underestimate of the stellar heating.
Shu-Jiang, Liu; Zhan-Ying, Chen; Yin-Zhong, Chang; Shi-Lian, Wang; Qi, Li; Yuan-Qing, Fan
2013-10-11
Multidimensional gas chromatography is widely applied to atmospheric xenon monitoring for the Comprehensive Nuclear-Test-Ban Treaty (CTBT). To improve the capability for xenon sampling from the atmosphere, sampling techniques have been investigated in detail. The sampling techniques are designed from xenon outflow curves, which are influenced by many factors; the injection condition is one of the key factors influencing the xenon outflow curves. In this paper, the xenon outflow curve of single-pulse injection in two-dimensional gas chromatography has been measured and fitted with an exponentially modified Gaussian distribution. An inference formula of the xenon outflow curve for six-pulse injection is derived, and the inferred curve is compared with the fitting formula obtained from the measured six-pulse outflow curve. As a result, the curves of both the one-pulse and six-pulse injections obey the exponentially modified Gaussian distribution when the temperature of the activated carbon column is 26°C and the flow rate of the carrier gas is 35.6 mL min(-1). The retention time of the xenon peak for one-pulse injection is 215 min, and the peak width is 138 min. For the six-pulse injection, however, the retention time is delayed to 255 min, and the peak width broadens to 222 min. According to the inferred formula of the xenon outflow curve for the six-pulse injection, the inferred retention time is 243 min, with a relative deviation of 4.7%, and the inferred peak width is 225 min, with a relative deviation of 1.3%. Copyright © 2013 Elsevier B.V. All rights reserved.
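The exponentially modified Gaussian (EMG) used in the record above is a standard peak-shape model in chromatography (a Gaussian convolved with an exponential decay). A minimal sketch of fitting an outflow curve with an EMG, using synthetic data with illustrative parameter values rather than the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg(t, A, mu, sigma, tau):
    """Exponentially modified Gaussian: a Gaussian of mean mu and width
    sigma convolved with an exponential decay of time constant tau;
    A is the peak area."""
    arg = (mu + sigma**2 / tau - t) / (np.sqrt(2.0) * sigma)
    return (A / (2.0 * tau)) * np.exp(
        (mu - t) / tau + sigma**2 / (2.0 * tau**2)
    ) * erfc(arg)

# Synthetic outflow curve (minutes); parameter values loosely echo the
# reported one-pulse retention time (~215 min) but are illustrative only.
t = np.linspace(0.0, 600.0, 601)
true = emg(t, A=100.0, mu=200.0, sigma=20.0, tau=30.0)
rng = np.random.default_rng(0)
noisy = true + rng.normal(0.0, 0.01 * true.max(), t.size)

# Recover the peak parameters from the noisy curve.
popt, _ = curve_fit(emg, t, noisy, p0=[80.0, 190.0, 15.0, 25.0])
```

Given a reasonable starting guess, the fit recovers the peak position and tailing constant; the same routine applies directly to measured outflow data.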
NASA Astrophysics Data System (ADS)
Mandel, Kaisey; Scolnic, Daniel; Shariff, Hikmatali; Foley, Ryan; Kirshner, Robert
2017-01-01
Inferring peak optical absolute magnitudes of Type Ia supernovae (SN Ia) from distance-independent measures such as their light curve shapes and colors underpins the evidence for cosmic acceleration. SN Ia with broader, slower declining optical light curves are more luminous (“broader-brighter”) and those with redder colors are dimmer. But the “redder-dimmer” color-luminosity relation widely used in cosmological SN Ia analyses confounds its two separate physical origins. An intrinsic correlation arises from the physics of exploding white dwarfs, while interstellar dust in the host galaxy also makes SN Ia appear dimmer and redder. Conventional SN Ia cosmology analyses currently use a simplistic linear regression of magnitude versus color and light curve shape, which does not model intrinsic SN Ia variations and host galaxy dust as physically distinct effects, resulting in low color-magnitude slopes. We construct a probabilistic generative model for the dusty distribution of extinguished absolute magnitudes and apparent colors as the convolution of an intrinsic SN Ia color-magnitude distribution and a host galaxy dust reddening-extinction distribution. If the intrinsic color-magnitude (MB vs. B-V) slope βint differs from the host galaxy dust law RB, this convolution results in a specific curve of mean extinguished absolute magnitude vs. apparent color. The derivative of this curve smoothly transitions from βint in the blue tail to RB in the red tail of the apparent color distribution. The conventional linear fit approximates this effective curve near the average apparent color, resulting in an apparent slope βapp between βint and RB. We incorporate these effects into a hierarchical Bayesian statistical model for SN Ia light curve measurements, and analyze a dataset of SALT2 optical light curve fits of 277 nearby SN Ia at z < 0.10. The conventional linear fit obtains βapp ≈ 3. 
Our model finds βint = 2.2 ± 0.3 and a distinct dust law RB = 3.7 ± 0.3, consistent with the average for Milky Way dust, while correcting a systematic distance bias of ~0.10 mag in the tails of the apparent color distribution. This research is supported by NSF grants AST-156854, AST-1211196, and NASA grant NNX15AJ55G.
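The slope transition described above, from βint in the blue tail to RB in the red tail, can be illustrated with a toy Monte Carlo: draw intrinsic colors from a Gaussian and host-dust reddenings from an exponential, then measure the local magnitude-color slope in each tail. The distribution widths below are illustrative assumptions, not the paper's fitted values; only βint and RB come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)
beta_int, R_B = 2.2, 3.7                  # slopes reported in the abstract
n = 200_000
c_int = rng.normal(0.0, 0.06, n)          # intrinsic B-V scatter (assumed)
E = rng.exponential(0.07, n)              # host dust reddening (assumed)

c_app = c_int + E                         # apparent color
M = beta_int * c_int + R_B * E            # extinguished absolute magnitude

def local_slope(lo, hi):
    """Slope of magnitude vs apparent color within a color window."""
    m = (c_app > lo) & (c_app < hi)
    return np.polyfit(c_app[m], M[m], 1)[0]

blue = local_slope(-0.15, -0.05)          # blue tail: slope -> beta_int
red = local_slope(0.4, 0.8)               # red tail: slope -> R_B
```

In the blue tail almost no dust is present, so the slope is close to βint; in the red tail the color variation is dominated by dust, so the slope approaches RB, reproducing the smooth transition the hierarchical model exploits.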
Lightweight Forms for Epoxy/Aramid Ducts
NASA Technical Reports Server (NTRS)
Mix, E. W.; Anderson, A. N.; Bedford, Donald L., Sr.
1986-01-01
Aluminum mandrels easy to remove. Lightweight aluminum mandrel for shaping epoxy/aramid ducts simplifies and speeds production. In new process, glass-reinforced epoxy/aramid cloth wrapped on aluminum mandrel. Stainless-steel flanges and other hardware fitted on duct and held by simple tooling. Entire assembly placed in oven to cure epoxy. After curing, assembly placed in alkaline bath that dissolves aluminum mandrel in about 4 hours. Epoxy/aramid shell then ready for use as duct. Aluminum mandrel used to make ducts of various inside diameters up to 6 in. Standard aluminum forms used. Conventional tube-bending equipment produces requisite curves in mandrels.
Limb-darkening and the structure of the Jovian atmosphere
NASA Technical Reports Server (NTRS)
Newman, W. I.; Sagan, C.
1978-01-01
By observing the transit of various cloud features across the Jovian disk, limb-darkening curves were constructed for three regions in the 4.6 to 5.1 mu cm band. Several models currently employed in describing the radiative or dynamical properties of planetary atmospheres are here examined to understand their implications for limb-darkening. The statistical problem of fitting these models to the observed data is reviewed and methods for applying multiple regression analysis are discussed. Analysis of variance techniques are introduced to test the viability of a given physical process as a cause of the observed limb-darkening.
The S-curve for forecasting waste generation in construction projects.
Lu, Weisheng; Peng, Yi; Chen, Xi; Skitmore, Martin; Zhang, Xiaoling
2016-10-01
Forecasting construction waste generation is the yardstick of any effort by policy-makers, researchers, practitioners and the like to manage construction and demolition (C&D) waste. This paper develops and tests an S-curve model to indicate accumulative waste generation as a project progresses. Using 37,148 disposal records generated from 138 building projects in Hong Kong in four consecutive years from January 2011 to June 2015, a wide range of potential S-curve models are examined, and as a result, the formula that best fits the historical data set is found. The S-curve model is then further linked to project characteristics using artificial neural networks (ANNs) so that it can be used to forecast waste generation in future construction projects. It was found that, among the S-curve models, cumulative logistic distribution is the best formula to fit the historical data. Meanwhile, contract sum, location, public-private nature, and duration can be used to forecast construction waste generation. The study provides contractors with not only an S-curve model to forecast overall waste generation before a project commences, but also with a detailed baseline to benchmark and manage waste during the course of construction. The major contribution of this paper is to the body of knowledge in the field of construction waste generation forecasting. By examining it with an S-curve model, the study elevates construction waste management to a level equivalent to project cost management where the model has already been readily accepted as a standard tool. Copyright © 2016 Elsevier Ltd. All rights reserved.
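The best-fitting S-curve reported above, a cumulative logistic distribution, can be fitted to cumulative waste data in a few lines. The data below are synthetic and purely illustrative; the parameter names `k` (steepness) and `x0` (midpoint of project progress) are this sketch's conventions, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_cdf(x, k, x0):
    """Cumulative logistic distribution: fraction of total waste
    generated by normalized project progress x in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical cumulative waste fractions at twenty progress points.
progress = np.linspace(0.05, 1.0, 20)
true = logistic_cdf(progress, k=10.0, x0=0.5)
rng = np.random.default_rng(1)
observed = np.clip(true + rng.normal(0.0, 0.02, progress.size), 0.0, 1.0)

(k_hat, x0_hat), _ = curve_fit(logistic_cdf, progress, observed,
                               p0=[5.0, 0.4])
```

Once `k` and `x0` are linked to project characteristics (contract sum, location, duration), as the paper does with neural networks, the fitted curve doubles as a baseline for benchmarking waste during construction.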
NASA Astrophysics Data System (ADS)
Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.
2016-05-01
The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discuss the fitting of potentiometric titration curve data using a nonlinear regression method based on the Levenberg-Marquardt algorithm. The results obtained by statistical treatment of the titration curve data showed that the best fit was obtained by considering the presence of five Brønsted-Lowry acids on the surface of the carbon dots, with ionization constants characteristic of carboxylic acids, cyclic ester, phenolic and pyrone-like groups. The total number of oxygenated acid groups obtained was 5 mmol g-1, with approximately 65% (∼2.9 mmol g-1) originating from groups with pKa < 6. The methodology showed good reproducibility and stability with standard deviations below 5%. The nature of the groups was independent of small variations in experimental conditions, i.e. the mass of carbon dots titrated and the initial concentration of the HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool to characterize the complex acid-base properties of these intriguing nanoparticles.
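As a hedged sketch of the Levenberg-Marquardt approach the record describes, the example below fits a simulated titration-derived curve as a sum of monoprotic Brønsted-Lowry site types, two site types for brevity rather than the five reported, with all concentrations and pKa values illustrative. SciPy's `curve_fit` uses Levenberg-Marquardt by default for unbounded problems.

```python
import numpy as np
from scipy.optimize import curve_fit

def acid_release(pH, n1, pKa1, n2, pKa2):
    """Deprotonated acid groups (mmol/g) for two monoprotic site types;
    each follows the Henderson-Hasselbalch form n / (1 + 10^(pKa - pH))."""
    f = lambda n, pKa: n / (1.0 + 10.0 ** (pKa - pH))
    return f(n1, pKa1) + f(n2, pKa2)

# Simulated titration data: an acidic (carboxylic-like) and a weakly
# acidic (phenolic-like) site population -- illustrative mix only.
pH = np.linspace(2.0, 11.0, 60)
true = acid_release(pH, 2.9, 4.5, 2.1, 8.0)
rng = np.random.default_rng(2)
data = true + rng.normal(0.0, 0.03, pH.size)

# Levenberg-Marquardt fit (scipy default when no bounds are given).
popt, _ = curve_fit(acid_release, pH, data, p0=[2.0, 4.0, 2.0, 8.0])
```

Extending the model to five site types, as the paper's best fit requires, only means adding three more (n, pKa) pairs to the sum.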
Estimating thermal performance curves from repeated field observations
Childress, Evan; Letcher, Benjamin H.
2017-01-01
Estimating thermal performance of organisms is critical for understanding population distributions and dynamics and predicting responses to climate change. Typically, performance curves are estimated using laboratory studies to isolate temperature effects, but other abiotic and biotic factors influence temperature-performance relationships in nature reducing these models' predictive ability. We present a model for estimating thermal performance curves from repeated field observations that includes environmental and individual variation. We fit the model in a Bayesian framework using MCMC sampling, which allowed for estimation of unobserved latent growth while propagating uncertainty. Fitting the model to simulated data varying in sampling design and parameter values demonstrated that the parameter estimates were accurate, precise, and unbiased. Fitting the model to individual growth data from wild trout revealed high out-of-sample predictive ability relative to laboratory-derived models, which produced more biased predictions for field performance. The field-based estimates of thermal maxima were lower than those based on laboratory studies. Under warming temperature scenarios, field-derived performance models predicted stronger declines in body size than laboratory-derived models, suggesting that laboratory-based models may underestimate climate change effects. The presented model estimates true, realized field performance, avoiding assumptions required for applying laboratory-based models to field performance, which should improve estimates of performance under climate change and advance thermal ecology.
ERIC Educational Resources Information Center
Sinharay, Sandip
2017-01-01
Karabatsos compared the power of 36 person-fit statistics using receiver operating characteristics curves and found the "H[superscript T]" statistic to be the most powerful in identifying aberrant examinees. He found three statistics, "C", "MCI", and "U3", to be the next most powerful. These four statistics,…
Automated data processing and radioassays.
Samols, E; Barrows, G H
1978-04-01
Radioassays include (1) radioimmunoassays, (2) competitive protein-binding assays based on competition for limited antibody or specific binding protein, (3) immunoradiometric assays, based on competition for excess labeled antibody, and (4) radioreceptor assays. Most mathematical models describing the relationship between labeled ligand binding and unlabeled ligand concentration have been based on the law of mass action or the isotope dilution principle. These models provide useful data reduction programs, but are theoretically unsatisfactory because competitive radioassay usually is not based on classical dilution principles, labeled and unlabeled ligand do not have to be identical, antibodies (or receptors) are frequently heterogeneous, equilibrium usually is not reached, and there is probably steric and cooperative influence on binding. An alternative, more flexible mathematical model based on the probability of binding collisions being restricted by the surface area of reactive divalent sites on antibody and on univalent antigen has been derived. Application of these models to automated data reduction allows standard curves to be fitted by a mathematical expression, and unknown values are calculated from binding data. The virtues and pitfalls of point-to-point data reduction, linear transformations, and curvilinear fitting approaches are presented. A third-order polynomial using the square root of concentration closely approximates the mathematical model based on probability, and in our experience this method provides the most acceptable results with all varieties of radioassays. With this curvilinear system, linear point connection should be used between the zero standard and the beginning of significant dose response, and also towards saturation. The importance is stressed of limiting the range of reported automated assay results to that portion of the standard curve that delivers optimal sensitivity.
Published methods for automated data reduction of Scatchard plots for radioreceptor assay are limited by calculation of a single mean K value. The quality of the input data is generally the limiting factor in achieving good precision with automated as it is with manual data reduction. The major advantages of computerized curve fitting include: (1) handling large amounts of data rapidly and without computational error; (2) providing useful quality-control data; (3) indicating within-batch variance of the test results; (4) providing ongoing quality-control charts and between assay variance.
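The recommended third-order polynomial in the square root of concentration can be sketched as follows. The standard-curve data are hypothetical, and the inverse lookup (`dose_from_counts`) is a simple numerical inversion for illustration, not a published routine; a production system would add the linear point connection near zero dose and saturation that the text recommends.

```python
import numpy as np

# Hypothetical radioassay standard curve: bound counts vs concentration.
conc = np.array([0.0, 1.0, 2.5, 5.0, 10.0, 25.0, 50.0])      # ng/mL
counts = np.array([9800, 8900, 7900, 6600, 5100, 3200, 2100])  # cpm

# Third-order polynomial in sqrt(concentration), as the text suggests.
x = np.sqrt(conc)
coeffs = np.polyfit(x, counts, 3)

def dose_from_counts(y, grid=np.linspace(0.0, np.sqrt(50.0), 2001)):
    """Invert the fitted standard curve numerically: find the dose whose
    predicted counts are closest to the observed counts y."""
    fitted = np.polyval(coeffs, grid)
    idx = np.argmin(np.abs(fitted - y))
    return grid[idx] ** 2
```

Unknown samples are then reported by passing their measured counts through `dose_from_counts`, restricted in practice to the sensitive portion of the standard curve as the abstract stresses.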
Determination of time of death in forensic science via a 3-D whole body heat transfer model.
Bartgis, Catherine; LeBrun, Alexander M; Ma, Ronghui; Zhu, Liang
2016-12-01
This study is focused on developing a whole body heat transfer model to accurately simulate temperature decay in a body postmortem. The initial steady state temperature field is simulated first and the calculated weighted average body temperature is used to determine the overall heat transfer coefficient at the skin surface, based on thermal equilibrium before death. The transient temperature field postmortem is then simulated using the same boundary condition and the temperature decay curves at several body locations are generated for a time frame of 24 h. For practical purposes, curve fitting techniques are used to replace the simulations with a proposed exponential formula with an initial time delay. It is shown that the obtained temperature field in the human body agrees very well with that in the literature. The proposed exponential formula provides an excellent fit with an R² value larger than 0.998. For the brain and internal organ sites, the initial time delay varies from 1.6 to 2.9 h, when the temperature at the measuring site does not change significantly from its original value. The curve-fitted time constant provides the measurement window after death to be between 8 h and 31 h if the brain site is used, while it increases 60-95% at the internal organ site. The time constant is larger when the body is exposed to colder air, since a person usually wears more clothing when it is cold outside to keep the body warm and comfortable. We conclude that a one-size-fits-all approach would lead to incorrect estimation of time of death and it is crucial to generate a database of cooling curves taking into consideration all the important factors such as body size and shape, environmental conditions, etc., therefore leading to accurate determination of time of death. Copyright © 2016 Elsevier Ltd. All rights reserved.
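An exponential formula with an initial time delay, of the kind the study proposes, can be written as T(t) = T_amb + (T0 − T_amb)·exp(−max(t − t_d, 0)/τ): the temperature holds at T0 for a delay t_d, then decays toward ambient with time constant τ. The sketch below fits this form to synthetic cooling data; all temperatures, the delay, and the time constant are illustrative assumptions, not the study's values.

```python
import numpy as np
from scipy.optimize import curve_fit

def cooling(t, tau, t_d, T0=37.0, T_amb=20.0):
    """Postmortem temperature (deg C): flat until delay t_d (hours),
    then exponential decay toward ambient with time constant tau."""
    return T_amb + (T0 - T_amb) * np.exp(-np.clip(t - t_d, 0.0, None) / tau)

# Synthetic 24-hour cooling curve sampled every half hour.
t = np.linspace(0.0, 24.0, 49)
true = cooling(t, tau=10.0, t_d=2.0)
rng = np.random.default_rng(3)
obs = true + rng.normal(0.0, 0.05, t.size)

# Fit only (tau, t_d); body and ambient temperatures are held fixed.
popt, _ = curve_fit(lambda t, tau, t_d: cooling(t, tau, t_d),
                    t, obs, p0=[8.0, 1.0])
```

Inverting the fitted formula for the elapsed time, given a single measured body temperature, is then what yields the time-of-death estimate and its measurement window.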
A GLOBAL MODEL OF THE LIGHT CURVES AND EXPANSION VELOCITIES OF TYPE II-PLATEAU SUPERNOVAE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pejcha, Ondřej; Prieto, Jose L., E-mail: pejcha@astro.princeton.edu
2015-02-01
We present a new self-consistent and versatile method that derives photospheric radius and temperature variations of Type II-Plateau supernovae based on their expansion velocities and photometric measurements. We apply the method to a sample of 26 well-observed, nearby supernovae with published light curves and velocities. We simultaneously fit ∼230 velocity and ∼6800 mag measurements distributed over 21 photometric passbands spanning wavelengths from 0.19 to 2.2 μm. The light-curve differences among the Type II-Plateau supernovae are well modeled by assuming different rates of photospheric radius expansion, which we explain as different density profiles of the ejecta, and we argue that steeper density profiles result in flatter plateaus, if everything else remains unchanged. The steep luminosity decline of Type II-Linear supernovae is due to fast evolution of the photospheric temperature, which we verify with a successful fit of SN 1980K. Eliminating the need for theoretical supernova atmosphere models, we obtain self-consistent relative distances, reddenings, and nickel masses fully accounting for all internal model uncertainties and covariances. We use our global fit to estimate the time evolution of any missing band tailored specifically for each supernova, and we construct spectral energy distributions and bolometric light curves. We produce bolometric corrections for all filter combinations in our sample. We compare our model to the theoretical dilution factors and find good agreement for the B and V filters. Our results differ from the theory when the I, J, H, or K bands are included. We investigate the reddening law toward our supernovae and find reasonable agreement with the standard R_V ∼ 3.1 reddening law in UBVRI bands. Results for other bands are inconclusive. We make our fitting code publicly available.
Pressure effects on the relaxation of an excited nitromethane molecule in an argon bath
NASA Astrophysics Data System (ADS)
Rivera-Rivera, Luis A.; Wagner, Albert F.; Sewell, Thomas D.; Thompson, Donald L.
2015-01-01
Classical molecular dynamics simulations were performed to study the relaxation of nitromethane in an Ar bath (of 1000 atoms) at 300 K and pressures 10, 50, 75, 100, 125, 150, 300, and 400 atm. The molecule was instantaneously excited by statistically distributing 50 kcal/mol among the internal degrees of freedom. At each pressure, 1000 trajectories were integrated for 1000 ps, except for 10 atm, for which the integration time was 5000 ps. The computed ensemble-averaged rotational energy decay is ∼100 times faster than the vibrational energy decay. Both rotational and vibrational decay curves can be satisfactorily fit with the Lendvay-Schatz function, which involves two parameters: one for the initial rate and one for the curvature of the decay curve. The decay curves for all pressures exhibit positive curvature implying the rate slows as the molecule loses energy. The initial rotational relaxation rate is directly proportional to density over the interval of simulated densities, but the initial vibrational relaxation rate decreases with increasing density relative to the extrapolation of the limiting low-pressure proportionality to density. The initial vibrational relaxation rate and curvature are fit as functions of density. For the initial vibrational relaxation rate, the functional form of the fit arises from a combinatorial model for the frequency of nitromethane "simultaneously" colliding with multiple Ar atoms. Roll-off of the initial rate from its low-density extrapolation occurs because the cross section for collision events with L Ar atoms increases with L more slowly than L times the cross section for collision events with one Ar atom. The resulting density-dependent functions of the initial rate and curvature represent, reasonably well, all the vibrational decay curves except at the lowest density for which the functions overestimate the rate of decay.
The decay over all gas phase densities is predicted by extrapolating the fits to condensed-phase densities.
Pressure effects on the relaxation of an excited nitromethane molecule in an argon bath
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivera-Rivera, Luis A.; Wagner, Albert F.; Sewell, Thomas D.
2015-01-07
Classical molecular dynamics simulations were performed to study the relaxation of nitromethane in an Ar bath (of 1000 atoms) at 300 K and pressures 10, 50, 75, 100, 125, 150, 300, and 400 atm. The molecule was instantaneously excited by statistically distributing 50 kcal/mol among the internal degrees of freedom. At each pressure, 1000 trajectories were integrated for 1000 ps, except for 10 atm, for which the integration time was 5000 ps. The computed ensemble-averaged rotational energy decay is ∼100 times faster than the vibrational energy decay. Both rotational and vibrational decay curves can be satisfactorily fit with the Lendvay-Schatz function, which involves two parameters: one for the initial rate and one for the curvature of the decay curve. The decay curves for all pressures exhibit positive curvature implying the rate slows as the molecule loses energy. The initial rotational relaxation rate is directly proportional to density over the interval of simulated densities, but the initial vibrational relaxation rate decreases with increasing density relative to the extrapolation of the limiting low-pressure proportionality to density. The initial vibrational relaxation rate and curvature are fit as functions of density. For the initial vibrational relaxation rate, the functional form of the fit arises from a combinatorial model for the frequency of nitromethane "simultaneously" colliding with multiple Ar atoms. Roll-off of the initial rate from its low-density extrapolation occurs because the cross section for collision events with L Ar atoms increases with L more slowly than L times the cross section for collision events with one Ar atom. The resulting density-dependent functions of the initial rate and curvature represent, reasonably well, all the vibrational decay curves except at the lowest density for which the functions overestimate the rate of decay.
The decay over all gas phase densities is predicted by extrapolating the fits to condensed-phase densities. © 2015 AIP Publishing LLC.
Pressure effects on the relaxation of an excited nitromethane molecule in an argon bath.
Rivera-Rivera, Luis A; Wagner, Albert F; Sewell, Thomas D; Thompson, Donald L
2015-01-07
Classical molecular dynamics simulations were performed to study the relaxation of nitromethane in an Ar bath (of 1000 atoms) at 300 K and pressures of 10, 50, 75, 100, 125, 150, 300, and 400 atm. The molecule was instantaneously excited by statistically distributing 50 kcal/mol among the internal degrees of freedom. At each pressure, 1000 trajectories were integrated for 1000 ps, except for 10 atm, for which the integration time was 5000 ps. The computed ensemble-averaged rotational energy decay is ∼100 times faster than the vibrational energy decay. Both rotational and vibrational decay curves can be satisfactorily fit with the Lendvay-Schatz function, which involves two parameters: one for the initial rate and one for the curvature of the decay curve. The decay curves for all pressures exhibit positive curvature, implying that the rate slows as the molecule loses energy. The initial rotational relaxation rate is directly proportional to density over the interval of simulated densities, but the initial vibrational relaxation rate decreases with increasing density relative to the extrapolation of the limiting low-pressure proportionality to density. The initial vibrational relaxation rate and curvature are fit as functions of density. For the initial vibrational relaxation rate, the functional form of the fit arises from a combinatorial model for the frequency of nitromethane "simultaneously" colliding with multiple Ar atoms. Roll-off of the initial rate from its low-density extrapolation occurs because the cross section for collision events with L Ar atoms increases with L more slowly than L times the cross section for collision events with one Ar atom. The resulting density-dependent functions of the initial rate and curvature represent, reasonably well, all the vibrational decay curves except at the lowest density, for which the functions overestimate the rate of decay. The decay beyond gas-phase densities is predicted by extrapolating the fits to condensed-phase densities.
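The two-parameter fit described in this record can be sketched numerically. The exact Lendvay-Schatz functional form is not reproduced here; the sketch below uses an analogous two-parameter decay obtained by solving dE/dt = -k E^m, where k sets the initial rate and m > 1 produces the positive curvature described above. All parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, k, m):
    """Normalized energy decay E(t)/E(0) solving dE/dt = -k*E**m.

    k sets the initial rate; m sets the curvature (m > 1 gives
    positive curvature: the rate slows as energy is lost).
    """
    return (1.0 + (m - 1.0) * k * t) ** (1.0 / (1.0 - m))

# Synthetic noisy decay curve (illustrative parameter values)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1000.0, 200)          # time in ps
true_k, true_m = 5e-3, 2.0
data = decay(t, true_k, true_m) + rng.normal(0.0, 0.01, t.size)

# Recover the two parameters from the noisy curve
(k_fit, m_fit), _ = curve_fit(decay, t, data, p0=[1e-3, 1.5],
                              bounds=([1e-6, 1.01], [1.0, 5.0]))
print(k_fit, m_fit)
```

The bound m > 1 restricts the fit to curves with positive curvature, mirroring the behavior reported for all simulated pressures.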
NASA Astrophysics Data System (ADS)
Kurian, Jessyamma; Mathew, M. Jacob
2018-04-01
In this paper we report structural, optical and magnetic studies of three spinel ferrites, namely CuFe2O4, MgFe2O4 and ZnFe2O4, prepared in an autoclave under the same physical conditions but with two different liquid media and surfactants. We use water as the medium and trisodium citrate as the surfactant for one method (hydrothermal) and ethylene glycol as the medium and polyethylene glycol as the surfactant for the second (solvothermal). The phase identification and structural characterization are done using XRD, and morphological studies are carried out by TEM. Cubical and porous spherical morphologies are obtained for the hydrothermal and solvothermal processes, respectively, without any impurity phase. The optical studies are carried out using FTIR and UV-Vis reflectance spectra. To elucidate the nonlinear optical behaviour of the prepared nanomaterials, the open-aperture z-scan technique is used. From the fitted z-scan curves, the nonlinear absorption coefficient and the saturation intensity are determined. The magnetic characterization of the samples is performed at room temperature using vibrating sample magnetometer measurements. The M-H curves obtained are fitted using a theoretical equation, and the different components of magnetization are determined. Nanoparticles with high saturation magnetization are obtained for MgFe2O4 and ZnFe2O4 prepared under solvothermal reaction. The magnetic hyperfine parameters and the cation distribution of the prepared materials are determined using room-temperature Mössbauer spectroscopy. The fitted spectra reveal differences in the magnetic hyperfine parameters owing to the change in size and morphology.
Effect of layer thickness on device response of silicon heavily supersaturated with sulfur
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, David (Department of Physics and Nuclear Engineering, United States Military Academy, West Point, NY 10996); Mathews, Jay
2016-05-15
We report on a simple experiment in which the thickness of a hyperdoped silicon layer, supersaturated with sulfur by ion implantation followed by pulsed laser melting and rapid solidification, is systematically varied at constant average sulfur concentration by varying the implantation energy, dose, and laser fluence. Contacts are deposited and the external quantum efficiency (EQE) is measured for visible wavelengths. We posit that the sulfur layer primarily absorbs light but contributes negligible photocurrent, and we seek to support this by analyzing the EQE data for the different layer thicknesses in two interlocking ways. In the first, we use the measured concentration depth profiles to obtain the approximate layer thicknesses and, for each wavelength, fit the EQE vs. layer thickness curve to obtain the absorption coefficient of hyperdoped silicon for that wavelength. Comparison to literature values for the hyperdoped silicon absorption coefficients [S. H. Pan et al., Applied Physics Letters 98, 121913 (2011)] shows good agreement. Next, we essentially run this process in reverse: we fit the curves of EQE vs. hyperdoped silicon absorption coefficient with Beer's law for those wavelengths that are primarily absorbed in the hyperdoped silicon layer, and find that the layer thicknesses obtained from the fit are in good agreement with the original values obtained from the depth profiles. We conclude that the data support our interpretation of the hyperdoped silicon layer as providing negligible photocurrent at high S concentrations. This work validates the absorption data of Pan et al. [Applied Physics Letters 98, 121913 (2011)], and is consistent with reports of short mobility-lifetime products in hyperdoped layers. It suggests that for optoelectronic devices containing hyperdoped layers, the most important contribution to the above-band-gap photoresponse may be due to photons absorbed below the hyperdoped layer.
NASA Technical Reports Server (NTRS)
Arbuckle, P. D.; Sliwa, S. M.; Roy, M. L.; Tiffany, S. H.
1985-01-01
A computer program for interactively developing least-squares polynomial equations to fit user-supplied data is described. The program is characterized by the ability to compute the polynomial equations of a surface fit through data that are a function of two independent variables. The program utilizes the Langley Research Center graphics packages to display polynomial equation curves and data points, facilitating a qualitative evaluation of the effectiveness of the fit. An explanation of the fundamental principles and features of the program, as well as sample input and corresponding output, are included.
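The core operation described above, a least-squares polynomial surface fit through data that depend on two independent variables, can be sketched in a few lines. This is an illustrative quadratic basis solved by ordinary least squares, not the Langley program's actual algorithm.

```python
import numpy as np

def fit_quadratic_surface(x, y, z):
    """Least-squares fit of z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# Recover a known surface from noisy samples
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 300)
y = rng.uniform(-1.0, 1.0, 300)
z = (2.0 + 0.5 * x - 1.0 * y + 3.0 * x**2 + 0.2 * x * y + 0.7 * y**2
     + rng.normal(0.0, 0.05, 300))
c = fit_quadratic_surface(x, y, z)
print(np.round(c, 2))
```

Plotting the fitted surface against the data points, as the program's graphics display does, gives the qualitative check of fit quality the abstract describes.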
Fitness Trends and Disparities Among School-Aged Children in Georgia, 2011-2014.
Bai, Yang; Saint-Maurice, Pedro F; Welk, Gregory J
Although FitnessGram fitness data on aerobic capacity and body mass index (BMI) have been collected in public schools in Georgia since the 2011-2012 school year, the data have not been analyzed. The primary objective of our study was to use these data to assess changes in fitness among school-aged children in Georgia between 2011 and 2014. A secondary objective was to determine if student fitness differed by school size and socioeconomic characteristics. FitnessGram classifies fitness into the Healthy Fitness Zone (HFZ) or not within the HFZ for aerobic capacity and BMI. We used data for 3 successive school years (ie, 2011-2012 to 2013-2014) obtained from FitnessGram testing of students in >1600 schools. We calculated the percentage of students who achieved the HFZ for aerobic capacity and BMI. We used growth curve models to estimate the annual changes in these proportions, and we determined the effect of school size and socioeconomic status on these changes. Both elementary school boys (β = 1.31%, standard error [SE] = 0.23%, P < .001) and girls (β = 1.53%, SE = 0.26%, P < .001) had significant annual increases in achievement of HFZ for aerobic capacity. Elementary school boys (β = 3.11%, SE = 0.32%, P < .001) and girls (β = 3.09%, SE = 0.32%, P < .001) also had significant increases in their BMI HFZ achievement proportions, although these increases occurred primarily between 2011-2012 and 2012-2013. Body mass index HFZ achievement proportions were mixed for middle school students, and we did not observe increases for high school students. Larger school size and higher school socioeconomic status were associated with better aerobic capacity and BMI fitness profiles. Surveillance results such as these may help inform the process of designing state and local school-based fitness promotion and public health programs and tracking the results of those programs.
Collisional Processes Probed by using Resonant Four-Wave Mixing Spectroscopy
NASA Astrophysics Data System (ADS)
McCormack, E. F.; Stampanoni, A.; Hemmerling, B.
2000-06-01
Collisionally induced decay processes in excited-state nitric oxide (NO) have been measured using time-resolved two-color resonant four-wave mixing (TC-RFWM) spectroscopy and polarization spectroscopy (PS). Markedly different time dependencies were observed in the data obtained using TC-RFWM when compared to PS. Oscillations in the PS signal as a function of delay between the pump and probe laser pulses were observed, and their characteristics were found to depend very sensitively on laser polarization. Analysis reveals that the oscillations in the decay curves are due to coherent excitation of unresolved hyperfine structure in the A state of NO. A comparison of the beat frequencies obtained by taking Fourier transforms of the time data with the predicted hyperfine structure of the A state supports this explanation. Further, based on a time-dependent model of PS as a FWM process, the time dependence of the signal on polarization configuration and excitation scheme can be predicted. Using the beat frequency values, fits of the model results to experimental decay curves for different pressures allow a study of the quenching rate in the A state due to collisional processes. A comparison of the PS data to laser-induced fluorescence decay measurements reveals different decay rates, which suggests that the PS signal decay depends on the orientation and alignment of the excited molecules. The different behavior of the decay curves obtained using TC-RFWM and PS can be understood in terms of the various contributions to the decay as described by the model, and this has a direct bearing on which technique is preferable for a given set of experimental parameters.
Curve fitting and modeling with splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
Vriens, Dennis; de Geus-Oei, Lioe-Fee; Oyen, Wim J G; Visser, Eric P
2009-12-01
For the quantification of dynamic (18)F-FDG PET studies, the arterial plasma time-activity concentration curve (APTAC) needs to be available. This can be obtained using serial sampling of arterial blood or an image-derived input function (IDIF). Arterial sampling is invasive and often not feasible in practice; IDIFs are biased because of partial-volume effects and cannot be used when no large arterial blood pool is in the field of view. We propose a mathematic function, consisting of an initial linear rising activity concentration followed by a triexponential decay, to describe the APTAC. This function was fitted to 80 oncologic patients and verified for 40 different oncologic patients by area-under-the-curve (AUC) comparison, Patlak glucose metabolic rate (MR(glc)) estimation, and therapy response monitoring (Delta MR(glc)). The proposed function was compared with the gold standard (serial arterial sampling) and the IDIF. To determine the free parameters of the function, plasma time-activity curves based on arterial samples in 80 patients were fitted after normalization for administered activity (AA) and initial distribution volume (iDV) of (18)F-FDG. The medians of these free parameters were used for the model. In 40 other patients (20 baseline and 20 follow-up dynamic (18)F-FDG PET scans), this model was validated. The population-based curve, individually calibrated by AA and iDV (APTAC(AA/iDV)), by 1 late arterial sample (APTAC(1 sample)), and by the individual IDIF (APTAC(IDIF)), was compared with the gold standard of serial arterial sampling (APTAC(sampled)) using the AUC. Additionally, these 3 methods of APTAC determination were evaluated with Patlak MR(glc) estimation and with Delta MR(glc) for therapy effects using serial sampling as the gold standard. Excellent individual fits to the function were derived with significantly different decay constants (P < 0.001). 
Correlations between AUC from APTAC(AA/iDV), APTAC(1 sample), and APTAC(IDIF) with the gold standard (APTAC(sampled)) were 0.880, 0.994, and 0.856, respectively. For MR(glc), these correlations were 0.963, 0.994, and 0.966, respectively. In response monitoring, these correlations were 0.947, 0.982, and 0.949, respectively. Additional scaling by 1 late arterial sample showed a significant improvement (P < 0.001). The fitted input function calibrated for AA and iDV performed similarly to IDIF. Performance improved significantly using 1 late arterial sample. The proposed model can be used when an IDIF is not available or when serial arterial sampling is not feasible.
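With the decay constants of a population-based input function held fixed (as in the median-parameter model above), calibrating the exponential amplitudes to an individual patient reduces to a linear least-squares problem rather than a non-linear fit. The sketch below uses assumed, illustrative decay constants, not the medians reported in the study.

```python
import numpy as np

# Illustrative "population" decay constants (per minute); the study's
# actual median parameters are not reproduced here.
LAMBDAS = np.array([4.0, 0.5, 0.05])

def triexp(t, amps):
    """Triexponential part of the input function after the peak."""
    return np.exp(-np.outer(t, LAMBDAS)) @ amps

def calibrate_amplitudes(t, activity):
    """With decay constants fixed, amplitude calibration is linear
    least squares on the exponential basis."""
    basis = np.exp(-np.outer(t, LAMBDAS))
    amps, *_ = np.linalg.lstsq(basis, activity, rcond=None)
    return amps

rng = np.random.default_rng(2)
t = np.linspace(0.0, 60.0, 400)               # minutes after the peak
true_amps = np.array([3.0, 1.5, 0.8])
samples = triexp(t, true_amps) + rng.normal(0.0, 0.01, t.size)
amps = calibrate_amplitudes(t, samples)
print(np.round(amps, 2))
```

In practice the calibration data would be the administered activity, initial distribution volume, or a single late arterial sample rather than a densely sampled curve, but the linear structure is the same.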
Planned Missing Designs to Optimize the Efficiency of Latent Growth Parameter Estimates
ERIC Educational Resources Information Center
Rhemtulla, Mijke; Jia, Fan; Wu, Wei; Little, Todd D.
2014-01-01
We examine the performance of planned missing (PM) designs for correlated latent growth curve models. Using simulated data from a model where latent growth curves are fitted to two constructs over five time points, we apply three kinds of planned missingness. The first is item-level planned missingness using a three-form design at each wave such…
ERIC Educational Resources Information Center
Sun, Yan; Strobel, Johannes; Newby, Timothy J.
2017-01-01
Adopting a two-phase explanatory sequential mixed methods research design, the current study examined the impact of student teaching experiences on pre-service teachers' readiness for technology integration. In phase-1 of quantitative investigation, 2-level growth curve models were fitted using online repeated measures survey data collected from…
Observing globular cluster RR Lyraes with the BYU West Mountain Observatory
NASA Astrophysics Data System (ADS)
Jeffery, E. J.; Joner, M. D.; Walton, R. S.
2016-05-01
We have utilized the 0.9-meter telescope of the Brigham Young University West Mountain Observatory to secure data on six northern hemisphere globular clusters. Here we present observations of RR Lyrae stars located in these clusters. We compare light curves produced using both DAOPHOT and ISIS software packages. Light curve fitting is done with FITLC.
ERIC Educational Resources Information Center
Lazar, Ann A.; Zerbe, Gary O.
2011-01-01
Researchers often compare the relationship between an outcome and covariate for two or more groups by evaluating whether the fitted regression curves differ significantly. When they do, researchers need to determine the "significance region," or the values of the covariate where the curves significantly differ. In analysis of covariance (ANCOVA),…
Kholeif, S A
2001-06-01
A new method that belongs to the differential category for determining the end points from potentiometric titration curves is presented. It uses a preprocess to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method, using linear least-squares validation and multifactor data analysis, is covered. The new method is generally applied to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves are also compared between the new method and methods in the equivalence-point category, such as Gran or Fortuin.
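The end-point location step can be sketched as follows: find the extremum of the first derivative of the titration curve, then refine it with inverse parabolic interpolation, whose vertex has a closed form. The synthetic sigmoid below is illustrative and stands in for the paper's non-linear preprocessing function.

```python
import numpy as np

def parabola_vertex(x, y):
    """Analytic vertex of the parabola through three points
    (inverse parabolic interpolation)."""
    x0, x1, x2 = x
    y0, y1, y2 = y
    num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
    den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
    return x1 - 0.5 * num / den

def end_point(volume, ph):
    """End point as the refined extremum of d(pH)/dV."""
    dv = np.abs(np.gradient(ph, volume))
    i = int(np.argmax(dv))
    i = max(1, min(i, len(volume) - 2))       # keep a 3-point window
    return parabola_vertex(volume[i - 1:i + 2], dv[i - 1:i + 2])

# Synthetic sigmoidal titration curve with the end point at V = 12.5 mL
v = np.linspace(10.0, 15.0, 101)
ph = 7.0 + 3.0 * np.tanh(4.0 * (v - 12.5))
ep = end_point(v, ph)
print(round(ep, 3))   # → 12.5
```

The analytic vertex is what makes the procedure purely numerical: no iterative root finding is needed once the three points bracketing the derivative extremum are known.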
Comparison of Three Methods for Wind Turbine Capacity Factor Estimation
Ditkovich, Y.; Kuperman, A.
2014-01-01
Three approaches to calculating capacity factor of fixed speed wind turbines are reviewed and compared using a case study. The first “quasiexact” approach utilizes discrete wind raw data (in the histogram form) and manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. On the other hand, the second “analytic” approach employs a continuous probability distribution function, fitted to the wind data as well as continuous turbine power curve, resulting from double polynomial fitting of manufacturer-provided power curve data. The latter approach, while being an approximation, can be solved analytically thus providing a valuable insight into aspects, affecting the capacity factor. Moreover, several other merits of wind turbine performance may be derived based on the analytical approach. The third “approximate” approach, valid in case of Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, only requiring rated power and rotor diameter of the turbine. It is shown that the results obtained by employing the three approaches are very close, enforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation. PMID:24587755
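The "quasiexact" approach above can be sketched directly: take the expectation of the (interpolated) power curve over the wind-speed histogram and divide by rated power. The turbine numbers below are illustrative, not from the case study.

```python
import numpy as np

def capacity_factor(speeds, probs, curve_speeds, curve_power, rated_power):
    """'Quasiexact' capacity factor: expected turbine power over the
    wind-speed histogram, divided by rated power."""
    power = np.interp(speeds, curve_speeds, curve_power)
    return float(np.sum(probs * power) / rated_power)

# Illustrative 2 MW turbine: cut-in 3 m/s, rated at 12 m/s, cut-out 25 m/s
curve_speeds = np.array([0.0, 3.0, 12.0, 25.0])
curve_power = np.array([0.0, 0.0, 2.0, 2.0])    # MW (linear ramp 3-12 m/s)

# Toy wind histogram: bin centers (m/s) and probabilities summing to 1
speeds = np.array([2.0, 5.0, 8.0, 11.0, 14.0])
probs = np.array([0.2, 0.3, 0.25, 0.15, 0.1])

cf = capacity_factor(speeds, probs, curve_speeds, curve_power, 2.0)
print(round(cf, 3))   # → 0.439
```

The analytic approach replaces the histogram with a fitted continuous distribution (e.g., Weibull) and the discrete power curve with a polynomial, so the same expectation becomes an integral that can be evaluated in closed form.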
An Algorithm for Protein Helix Assignment Using Helix Geometry
Cao, Chen; Xu, Shutan; Wang, Lincong
2015-01-01
Helices are among the most common secondary structure elements in proteins and were among the earliest recognized. The assignment of helices in a protein underlies the analysis of its structure and function. Though the mathematical expression for a helical curve is simple, no previous assignment programs have used a genuine helical curve as a model for helix assignment. In this paper we present a two-step assignment algorithm. The first step searches for a series of bona fide helical curves, each of which best fits the coordinates of four successive backbone Cα atoms. The second step uses the best-fit helical curves as input to make the helix assignment. Application to the protein structures in the PDB (Protein Data Bank) proves that the algorithm is able to assign accurately not only regular α-helices but also 3₁₀ and π helices, as well as their left-handed versions. One salient feature of the algorithm is that the assigned helices are structurally more uniform than those from previous programs. This structural uniformity should be useful for protein structure classification and prediction, while the accurate assignment of a helix to a particular type underlies structure-function relationships in proteins. PMID:26132394
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaczmarski, Krzysztof; Guiochon, Georges A
The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N = 500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
Color difference threshold determination for acrylic denture base resins.
Ren, Jiabao; Lin, Hong; Huang, Qingmei; Liang, Qifan; Zheng, Gang
2015-01-01
This study aimed to set evaluation indicators, i.e., perceptibility and acceptability color difference thresholds, of color stability for acrylic denture base resins for a spectrophotometric assessing method, which offered an alternative to the visual method described in ISO 20795-1:2013. A total of 291 disk specimens 50±1 mm in diameter and 0.5±0.1 mm thick were prepared (ISO 20795-1:2013) and processed through radiation tests in an accelerated aging chamber (ISO 7491:2000) for increasing times of 0 to 42 hours. Color alterations were measured with a spectrophotometer and evaluated using the CIE L*a*b* colorimetric system. Color differences were calculated through the CIEDE2000 color difference formula. Thirty-two dental professionals without color vision deficiencies completed perceptibility and acceptability assessments under controlled conditions in vitro. An S-curve fitting procedure was used to analyze the 50:50% perceptibility and acceptability thresholds. Furthermore, perceptibility and acceptability against the differences of the three color attributes, lightness, chroma, and hue, were also investigated. According to the S-curve fitting procedure, the 50:50% perceptibility threshold was 1.71 ΔE00 (r² = 0.88) and the 50:50% acceptability threshold was 4.00 ΔE00 (r² = 0.89). Within the limitations of this study, 1.71/4.00 ΔE00 could be used as perceptibility/acceptability thresholds for acrylic denture base resins.
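A 50:50 threshold of this kind is obtained by fitting an S-shaped curve to the fraction of observers accepting each color difference and reading off where it crosses 0.5. The sketch below uses a logistic form with synthetic responses; the study's exact S-curve model is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def s_curve(de, t50, slope):
    """Logistic acceptance probability vs. color difference ΔE00;
    t50 is the 50:50 threshold."""
    return 1.0 / (1.0 + np.exp(slope * (de - t50)))

# Synthetic observer data: fraction judging each ΔE00 acceptable
rng = np.random.default_rng(3)
de = np.linspace(0.0, 8.0, 17)
frac = s_curve(de, 4.0, 1.5) + rng.normal(0.0, 0.02, de.size)

# The fitted t50 is the 50:50 acceptability threshold
(t50, slope), _ = curve_fit(s_curve, de, frac, p0=[3.0, 1.0])
print(round(t50, 2))
```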
Podder, M S; Majumder, C B
2016-11-05
The optimization of the biosorption/bioaccumulation process for both As(III) and As(V) has been investigated using a biosorbent consisting of a biofilm of Corynebacterium glutamicum MTCC 2745 supported on a granular activated carbon/MnFe2O4 composite (MGAC). The presence of functional groups on the cell wall surface of the biomass that may interact with the metal ions was confirmed by FT-IR. To determine the most appropriate correlation for the equilibrium curves, isotherm studies were performed for As(III) and As(V) using 30 isotherm models, with non-linear regression employed for the curve-fitting analysis. The pattern of biosorption/bioaccumulation fitted well with the Vieth-Sladek isotherm model for As(III) and the Brouers-Sotolongo and Fritz-Schlunder-V isotherm models for As(V). The maximum biosorption/bioaccumulation capacities estimated using the Langmuir model were 2584.668 mg/g for As(III) and 2651.675 mg/g for As(V) at 30°C and 220 min contact time. The results showed that As(III) and As(V) removal was strongly pH-dependent, with an optimum pH value of 7.0. D-R isotherm studies indicated that ion exchange might play a prominent role. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saunders, C.; Aldering, G.; Aragon, C.
2015-02-10
We estimate systematic errors due to K-corrections in standard photometric analyses of high-redshift Type Ia supernovae. Errors due to K-correction occur when the spectral template model underlying the light curve fitter poorly represents the actual supernova spectral energy distribution, meaning that the distance modulus cannot be recovered accurately. In order to quantify this effect, synthetic photometry is performed on artificially redshifted spectrophotometric data from 119 low-redshift supernovae from the Nearby Supernova Factory, and the resulting light curves are fit with a conventional light curve fitter. We measure the variation in the standardized magnitude that would be fit for a given supernova if located at a range of redshifts and observed with various filter sets corresponding to current and future supernova surveys. We find significant variation in the measurements of the same supernovae placed at different redshifts regardless of filters used, which causes dispersion greater than ∼0.05 mag for measurements of photometry using the Sloan-like filters and a bias that corresponds to a 0.03 shift in w when applied to an outside data set. To test the result of a shift in supernova population or environment at higher redshifts, we repeat our calculations with the addition of a reweighting of the supernovae as a function of redshift and find that this strongly affects the results and would have repercussions for cosmology. We discuss possible methods to reduce the contribution of the K-correction bias and uncertainty.
NASA Astrophysics Data System (ADS)
Yuste, S. B.; Abad, E.; Baumgaertner, A.
2016-07-01
We address the problem of diffusion on a comb whose teeth display varying lengths. Specifically, the length ℓ of each tooth is drawn from a probability distribution displaying power-law behavior at large ℓ: P(ℓ) ∼ ℓ^-(1+α), with α > 0. To start with, we focus on the computation of the anomalous diffusion coefficient for the subdiffusive motion along the backbone. This quantity is subsequently used as an input to compute concentration recovery curves mimicking fluorescence recovery after photobleaching experiments in comblike geometries such as spiny dendrites. Our method is based on the mean-field description provided by the well-tested continuous time random-walk approach for the random-comb model, and the obtained analytical result for the diffusion coefficient is confirmed by numerical simulations of a random walk with finite steps in time and space along the backbone and the teeth. We subsequently incorporate retardation effects arising from binding-unbinding kinetics into our model and obtain a scaling law characterizing the corresponding change in the diffusion coefficient. Finally, we show that recovery curves obtained with the help of the analytical expression for the anomalous diffusion coefficient cannot be fitted perfectly by a model based on scaled Brownian motion, i.e., a standard diffusion equation with a time-dependent diffusion coefficient. However, differences between the exact curves and such fits are small, thereby providing justification for the practical use of models relying on scaled Brownian motion as a fitting procedure for recovery curves arising from particle diffusion in comblike systems.
ON THE ROTATION SPEED OF THE MILKY WAY DETERMINED FROM H i EMISSION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reid, M. J.; Dame, T. M.
2016-12-01
The circular rotation speed of the Milky Way at the solar radius, Θ₀, has been estimated to be 220 km s⁻¹ by fitting the maximum velocity of H i emission as a function of Galactic longitude. This result is in tension with a recent estimate of Θ₀ = 240 km s⁻¹, based on Very Long Baseline Interferometry (VLBI) parallaxes and proper motions from the BeSSeL and VERA surveys for large numbers of high-mass star-forming regions across the Milky Way. We find that the rotation curve best fitted to the VLBI data is slightly curved, and that this curvature results in a biased estimate of Θ₀ from the H i data when a flat rotation curve is assumed. This relieves the tension between the methods and favors Θ₀ = 240 km s⁻¹.
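The bias described in this record is easy to reproduce in a toy calculation: generate tangent-point terminal velocities, v_max(l) = Θ(R₀ sin l) − Θ₀ sin l, from a mildly sloped rotation curve, then fit them assuming a flat curve, for which v_max = Θ₀ (1 − sin l). The slope value and R₀ below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

R0 = 8.31   # assumed solar radius in kpc (illustrative)

def v_max_curved(l_rad, theta0, slope):
    """Tangent-point terminal velocity for a rotation curve with a
    mild linear slope (km/s per kpc): v = Theta(R0 sin l) - theta0 sin l."""
    r_t = R0 * np.sin(l_rad)
    return theta0 + slope * (r_t - R0) - theta0 * np.sin(l_rad)

# "True" rotation: theta0 = 240 km/s with a slight rising slope
l = np.deg2rad(np.linspace(20.0, 70.0, 50))
v_obs = v_max_curved(l, 240.0, 2.0)

# Least-squares fit assuming a flat curve, v = theta0 * (1 - sin l)
w = 1.0 - np.sin(l)
theta0_flat = float(np.sum(v_obs * w) / np.sum(w * w))
print(round(theta0_flat, 2))   # biased low relative to the true 240
```

With a rising inner rotation curve, the flat-curve fit systematically underestimates Θ₀, which is the direction of the discrepancy between the H i and VLBI estimates.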
Finkel, Deborah; Davis, Deborah Winders; Turkheimer, Eric; Dickens, William T
2015-11-01
Biometric latent growth curve models were applied to data from the LTS in order to replicate and extend Wilson's (Child Dev 54:298-316, 1983) findings. Assessments of cognitive development were available from 8 measurement occasions covering the period 4-15 years for 1032 individuals. Latent growth curve models were fit to percent correct for 7 subscales: information, similarities, arithmetic, vocabulary, comprehension, picture completion, and block design. Models were fit separately to WPPSI (ages 4-6 years) and WISC-R (ages 7-15). Results indicated the expected increases in heritability in younger childhood, and plateaus in heritability as children reached age 10 years. Heritability of change, per se (slope estimates), varied dramatically across domains. Significant genetic influences on slope parameters that were independent of initial levels of performance were found for only information and picture completion subscales. Thus evidence for both genetic continuity and genetic innovation in the development of cognitive abilities in childhood were found.
A geometry package for generation of input data for a three-dimensional potential-flow program
NASA Technical Reports Server (NTRS)
Halsey, N. D.; Hess, J. L.
1978-01-01
The preparation of geometric data for input to three-dimensional potential flow programs was automated and simplified by a geometry package incorporated into the NASA Langley version of the 3-D lifting potential flow program. Input to the computer program for the geometry package consists of a very sparse set of coordinate data, often with an order of magnitude of fewer points than required for the actual potential flow calculations. Isolated components, such as wings, fuselages, etc. are paneled automatically, using one of several possible element distribution algorithms. Curves of intersection between components are calculated, using a hybrid curve-fit/surface-fit approach. Intersecting components are repaneled so that adjacent elements on either side of the intersection curves line up in a satisfactory manner for the potential-flow calculations. Many cases may be run completely (from input, through the geometry package, and through the flow calculations) without interruption. Use of the package significantly reduces the time and expense involved in making three-dimensional potential flow calculations.
Ten years in the library: new data confirm paleontological patterns
NASA Technical Reports Server (NTRS)
Sepkoski, J. J., Jr. (Principal Investigator)
1993-01-01
A comparison is made between compilations of times of origination and extinction of fossil marine animal families published in 1982 and 1992. As a result of ten years of library research, half of the information in the compendia has changed: families have been added and deleted, low-resolution stratigraphic data have been improved, and intervals of origination and extinction have been altered. Despite these changes, apparent macroevolutionary patterns for the entire marine fauna have remained constant. Diversity curves compiled from the two databases are very similar, with a goodness-of-fit of 99%; the principal difference is that the 1992 curve averages 13% higher than the older curve. Both numbers and percentages of origination and extinction also match well, with fits ranging from 83% to 95%. All major events of radiation and extinction are identical. Therefore, errors in large paleontological databases and arbitrariness of included taxa are not necessarily impediments to the analysis of pattern in the fossil record, so long as the data are sufficiently numerous.
Bayesian inference in an item response theory model with a generalized student t link function
NASA Astrophysics Data System (ADS)
Azevedo, Caio L. N.; Migon, Helio S.
2012-10-01
In this paper we introduce a new item response theory (IRT) model with a generalized Student t-link function with unknown degrees of freedom (df), named the generalized t-link (GtL) IRT model. In this model we consider only the difficulty parameter in the item response function. GtL is an alternative to the two parameter logit and probit models, since the degrees of freedom (df) play a similar role to the discrimination parameter. However, the behavior of the curves of the GtL is different from those of the two parameter models and the usual Student t link, since in GtL the curve obtained from different df's can cross the probit curves at more than one latent trait level. The GtL model has similar properties to the generalized linear mixed models, such as the existence of sufficient statistics and easy parameter interpretation. Also, many techniques of parameter estimation, model fit assessment and residual analysis developed for those models can be used for the GtL model. We develop fully Bayesian estimation and model fit assessment tools through a Metropolis-Hastings step within a Gibbs sampling algorithm. We examine prior sensitivity with respect to the choice of the degrees of freedom. The simulation study indicates that the algorithm recovers all parameters properly. In addition, some Bayesian model fit assessment tools are considered. Finally, a real data set is analyzed using our approach and other usual models. The results indicate that our model fits the data better than the two parameter models.
Bartel, Thomas W.; Yaniv, Simone L.
1997-01-01
The 60 min creep data from National Type Evaluation Procedure (NTEP) tests performed at the National Institute of Standards and Technology (NIST) on 65 load cells have been analyzed in order to compare their creep and creep recovery responses, and to compare the 60 min creep with creep over shorter time periods. To facilitate this comparison, the data were fitted to a multiple-term exponential equation, which adequately describes the creep and creep recovery responses of load cells. The use of such a curve fit reduces the effect of the random error in the indicator readings on the calculated values of the load cell creep. Examination of the fitted curves shows that the creep recovery responses, after inversion by a change in sign, are generally similar in shape to the creep response, but smaller in magnitude. The average ratio of the absolute value of the maximum creep recovery to the maximum creep is 0.86; however, no reliable correlation between creep and creep recovery can be drawn from the data. The fitted curves were also used to compare the 60 min creep of the NTEP analysis with the 30 min creep and other parameters calculated according to the Organisation Internationale de Métrologie Légale (OIML) R 60 analysis. The average ratio of the 30 min creep value to the 60 min value is 0.84. The OIML class C creep tolerance is less than 0.5 of the NTEP tolerance for classes III and III L. PMID:27805151
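The fitting step described above can be sketched in a few lines of Python. This is a minimal illustration, not NIST's exact equation: it assumes a two-term saturating-exponential creep model with invented amplitudes, time constants, and noise level, and recovers the 30 min/60 min creep ratio from the fitted curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def creep(t, a1, tau1, a2, tau2):
    """Two-term saturating-exponential creep model (assumed form, hypothetical units)."""
    return a1 * (1 - np.exp(-t / tau1)) + a2 * (1 - np.exp(-t / tau2))

t = np.linspace(0.0, 60.0, 121)                    # minutes under load
rng = np.random.default_rng(0)
true = creep(t, 0.03, 2.0, 0.02, 20.0)             # synthetic creep signal
obs = true + rng.normal(0, 1e-4, t.size)           # random indicator-reading noise

popt, _ = curve_fit(creep, t, obs, p0=[0.02, 1.0, 0.02, 10.0])

# Using the smooth fitted curve (rather than noisy readings) to compare
# creep over a shorter period with the full 60 min value:
ratio_30_60 = creep(30.0, *popt) / creep(60.0, *popt)
```

Because the ratio is taken from the fitted curve, the indicator noise largely cancels, which is the point the abstract makes about using a curve fit.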
GRay: A Massively Parallel GPU-based Code for Ray Tracing in Relativistic Spacetimes
NASA Astrophysics Data System (ADS)
Chan, Chi-kwan; Psaltis, Dimitrios; Özel, Feryal
2013-11-01
We introduce GRay, a massively parallel integrator designed to trace the trajectories of billions of photons in a curved spacetime. This graphics-processing-unit (GPU)-based integrator employs the stream processing paradigm, is implemented in CUDA C/C++, and runs on nVidia graphics cards. The peak performance of GRay using single-precision floating-point arithmetic on a single GPU exceeds 300 GFLOPS (about 1 ns per photon per time step). For a realistic problem, where the peak performance cannot be reached, GRay is two orders of magnitude faster than existing central-processing-unit-based ray-tracing codes. This performance enhancement allows more effective searches of large parameter spaces when comparing theoretical predictions of images, spectra, and light curves from the vicinities of compact objects to observations. GRay can also perform on-the-fly ray tracing within general relativistic magnetohydrodynamic algorithms that simulate accretion flows around compact objects. Making use of this algorithm, we calculate the properties of the shadows of Kerr black holes and the photon rings that surround them. We also provide accurate fitting formulae of their dependencies on black hole spin and observer inclination, which can be used to interpret upcoming observations of the black holes at the center of the Milky Way, as well as M87, with the Event Horizon Telescope.
NASA Technical Reports Server (NTRS)
Wilson, R. M.; Reichmann, E. J.; Teuber, D. L.
1984-01-01
An empirical method is developed to predict certain parameters of future solar activity cycles. Sunspot cycle statistics are examined, and curve fitting and linear regression analysis techniques are utilized.
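As a rough sketch of the regression side of such a prediction scheme, the snippet below fits a straight line to purely invented pairs of (activity near cycle minimum, amplitude of the next maximum) and extrapolates for an assumed upcoming minimum. The numbers are illustrative only and are not the paper's data or its actual predictor variables.

```python
import numpy as np

# Hypothetical (invented) pairs: smoothed sunspot number near cycle minimum
# vs. amplitude of the following cycle maximum.
r_min = np.array([2.1, 5.0, 3.4, 7.8, 4.4, 6.1, 1.5, 9.0])
r_max = np.array([65.0, 110.0, 82.0, 150.0, 95.0, 125.0, 58.0, 165.0])

# Least-squares straight-line fit, then a point forecast for an assumed
# upcoming minimum level of 5.5.
slope, intercept = np.polyfit(r_min, r_max, 1)
predicted = slope * 5.5 + intercept
```

The same two-step pattern (fit a curve to past cycles, read off a forecast) is what "curve fitting and linear regression analysis techniques" amounts to in practice.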
NASA Astrophysics Data System (ADS)
Mattei, G.; Ahluwalia, A.
2018-04-01
We introduce a new function, the apparent elastic modulus strain-rate spectrum E_app(ε̇), for the derivation of lumped parameter constants for Generalized Maxwell (GM) linear viscoelastic models from stress-strain data obtained at various compressive strain rates (ε̇). The E_app(ε̇) function was derived using the tangent modulus function obtained from the GM model stress-strain response to a constant-ε̇ input. Material viscoelastic parameters can be rapidly derived by fitting experimental E_app data obtained at different strain rates to the E_app(ε̇) function. This single-curve fitting returns viscoelastic constants similar to those of the original epsilon-dot method, which is based on a multi-curve global fitting procedure with shared parameters. Its low computational cost permits quick and robust identification of viscoelastic constants even when a large number of strain rates or replicates per strain rate are considered. This method is particularly suited for the analysis of bulk compression and nano-indentation data of soft (bio)materials.
NASA Astrophysics Data System (ADS)
Yang, Fanlin; Zhao, Chunxia; Zhang, Kai; Feng, Chengkai; Ma, Yue
2017-07-01
Acoustic seafloor classification with multibeam backscatter measurements is an attractive approach for mapping seafloor properties over a large area. However, artifacts in the multibeam backscatter measurements prevent accurate characterization of the seafloor. In particular, the backscatter level is extremely strong and highly variable in the near-nadir region due to the specular echo phenomenon. Consequently, striped artifacts emerge in the backscatter image, which can degrade the classification accuracy. This study focuses on the striped artifacts in multibeam backscatter images. To this end, a calibration algorithm based on equal mean-variance fitting is developed. By fitting the local shape of the angular response curve, the striped artifacts are compressed and moved according to the relations between the mean and variance in the near-nadir and off-nadir regions. The algorithm utilizes the measured data of the near-nadir region and retains the basic shape of the response curve. The experimental results verify the high performance of the proposed method.
Liquid-vapor relations for the system NaCl-H2O: summary of the P-T- x surface from 300° to 500°C
Bischoff, J.L.; Pitzer, Kenneth S.
1989-01-01
Experimental data on the vapor-liquid equilibrium relations for the system NaCl-H2O were compiled and compared in order to provide an improved estimate of the P-T-x surface between 300° and 500°C, a range over which the system changes from subcritical to critical behavior. Data for the three-phase curve (halite + liquid + vapor) and the NaCl-H2O critical curve were evaluated, and the best fits for these extrema were then used to guide selection of best fits for isothermal plots of the vapor-liquid region in between. Smoothing was carried out in an iterative procedure by replotting the best-fit data as isobars and then as isopleths, until an internally consistent set of data was obtained. The results are presented in tabular form and will have application to theoretical modelling and to the understanding of two-phase behavior in saline geothermal systems.
Hannigan, Ailish; Bargary, Norma; Kinsella, Anthony; Clarke, Mary
2017-06-14
Although the relationships between duration of untreated psychosis (DUP) and outcomes are often assumed to be linear, few studies have explored the functional form of these relationships. The aim of this study is to demonstrate the potential of recent advances in curve fitting approaches (splines) to explore the form of the relationship between DUP and global assessment of functioning (GAF). Curve fitting approaches were used in models to predict change in GAF at long-term follow-up from DUP for a sample of 83 individuals with schizophrenia. The form of the relationship between DUP and GAF was non-linear. Accounting for non-linearity increased the percentage of variance in GAF explained by the model, resulting in better prediction and understanding of the relationship. The relationship between DUP and outcomes may be complex, and model fit may be improved by accounting for the form of the relationship. This should be routinely assessed and new statistical approaches for non-linear relationships exploited, if appropriate. © 2017 John Wiley & Sons Australia, Ltd.
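The linear-versus-spline comparison can be sketched as follows. All values here are simulated stand-ins (a hypothetical nonlinear DUP-GAF relationship with n = 83, not the study's data); the point is only that a smoothing spline reduces the residual sum of squares relative to a straight line when the true relationship is curved.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
dup = np.sort(rng.uniform(0, 104, 83))                  # DUP in weeks (invented)
# assumed nonlinear relationship plus noise, for illustration only
gaf = 80 - 20 * np.sqrt(dup / 104) + rng.normal(0, 1.5, 83)

# straight-line fit vs. cubic smoothing spline: compare residual sums of squares
lin = np.polyval(np.polyfit(dup, gaf, 1), dup)
spline = UnivariateSpline(dup, gaf, k=3, s=83 * 1.5**2)  # smoothing ~ n * noise var
rss_lin = float(np.sum((gaf - lin) ** 2))
rss_spl = float(np.sum((gaf - spline(dup)) ** 2))
```

The drop from `rss_lin` to `rss_spl` corresponds to the increase in explained variance the abstract reports when non-linearity is accounted for.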
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheehan, Daniel M.
2006-01-15
We tested the hypothesis that no threshold exists when estradiol acts through the same mechanism as an active endogenous estrogen. A Michaelis-Menten (MM) equation accounting for response saturation, background effects, and endogenous estrogen level fit a turtle sex-reversal data set with no threshold and estimated the endogenous dose. Additionally, 31 diverse literature dose-response data sets were analyzed by adding a term for nonhormonal background; good fits were obtained but endogenous dose estimations were not significant due to low resolving power. No thresholds were observed. Data sets were plotted using a normalized MM equation; all 178 data points were accommodated on a single graph. Response rates from ~1% to >95% were well fit. The findings contradict the threshold assumption and low-dose safety. Calculating risk and assuming additivity of effects from multiple chemicals acting through the same mechanism rather than assuming a safe dose for nonthresholded curves is appropriate.
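A minimal sketch of such a fit is shown below, with invented dose-response values. The model form (saturable MM response to exogenous + endogenous dose, plus a nonhormonal background) follows the abstract's description; note that, as the abstract itself observes, the endogenous-dose term is only weakly identified, so the check here is on the fitted curve rather than on individual parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def mm_response(dose, rmax, k, d0, bg):
    """Saturable MM response to (exogenous dose + endogenous dose d0),
    with nonhormonal background bg. All parameters hypothetical."""
    total = dose + d0
    return bg + (rmax - bg) * total / (k + total)

doses = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = mm_response(doses, 95.0, 5.0, 0.5, 2.0)   # synthetic response rates, %

popt, _ = curve_fit(mm_response, doses, resp, p0=[90.0, 3.0, 0.2, 1.0],
                    bounds=(0.0, [100.0, 100.0, 10.0, 20.0]))
fitted = mm_response(doses, *popt)
```

Because the curve passes smoothly through dose = 0 with a nonzero response (endogenous estrogen is already acting), there is no dose below which the fitted response is flat, which is the no-threshold argument in graphical form.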
Maier, Jonathan G; Piosczyk, Hannah; Holz, Johannes; Landmann, Nina; Deschler, Christoph; Frase, Lukas; Kuhn, Marion; Klöppel, Stefan; Spiegelhalder, Kai; Sterr, Annette; Riemann, Dieter; Feige, Bernd; Voderholzer, Ulrich; Nissen, Christoph
2017-11-01
Sleep modulates motor learning, but its detailed impact on performance curves remains to be fully characterized. This study aimed to further determine the impact of brief daytime periods of NREM sleep on 'offline' (task discontinuation after initial training) and 'on-task' (performance within the test session) changes in motor skill performance (finger tapping task). In a mixed design (combined parallel group and repeated measures) sleep laboratory study (n=17 'active' wake vs. sleep, n=19 'passive' wake vs. sleep), performance curves were assessed prior to and after a 90min period containing either sleep, active or passive wakefulness. We observed a highly significant, but state-independent (that is, sleep/wake-independent), early offline gain, and improved on-task performance after sleep in comparison to wakefulness. Exploratory curve fitting suggested that the observed sleep effect most likely emerged from an interaction of training-induced improvement and detrimental 'time-on-task' processes, such as fatigue. Our results indicate that brief periods of NREM sleep do not promote early offline gains but do improve subsequent on-task performance in motor skill learning. Copyright © 2017 Elsevier Inc. All rights reserved.
Grünhut, Marcos; Garrido, Mariano; Centurión, Maria E; Fernández Band, Beatriz S
2010-07-12
A combination of kinetic spectroscopic monitoring and multivariate curve resolution-alternating least squares (MCR-ALS) was proposed for the enzymatic determination of levodopa (LVD) and carbidopa (CBD) in pharmaceuticals. The enzymatic reaction process was carried out in a reverse stopped-flow injection system and monitored by UV-vis spectroscopy. The spectra (292-600 nm) were recorded throughout the reaction and were analyzed by multivariate curve resolution-alternating least squares. A small calibration matrix containing nine mixtures was used in the model construction. Additionally, to evaluate the prediction ability of the model, a set with six validation mixtures was used. The lack of fit obtained was 4.3%, the explained variance 99.8% and the overall prediction error 5.5%. Tablets of commercial samples were analyzed and the results were validated by pharmacopeia method (high performance liquid chromatography). No significant differences were found (alpha=0.05) between the reference values and the ones obtained with the proposed method. It is important to note that a unique chemometric model made it possible to determine both analytes simultaneously. Copyright 2010 Elsevier B.V. All rights reserved.
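The core of MCR-ALS is an alternating least-squares decomposition of the spectro-kinetic data matrix D into concentration profiles C and pure spectra S (D ≈ C Sᵀ). The sketch below is a bare-bones illustration on synthetic bilinear data with non-negativity imposed by simple clipping; real MCR-ALS implementations use proper constrained solvers, initial estimates from the data, and closure/unimodality constraints, none of which are shown here.

```python
import numpy as np

rng = np.random.default_rng(9)
# synthetic bilinear data: 2 species, 30 reaction times x 50 wavelengths
# (all values hypothetical)
C_true = np.abs(rng.random((30, 2)))
S_true = np.abs(rng.random((50, 2)))
D = C_true @ S_true.T

# alternating least squares with non-negativity by clipping
C = np.abs(rng.random((30, 2)))                  # random initial profiles
for _ in range(200):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None)

# lack of fit in %, the figure of merit quoted in the abstract (4.3% there)
lof = 100.0 * np.sqrt(np.sum((D - C @ S.T) ** 2) / np.sum(D ** 2))
```

On exact rank-2 data the lack of fit drops to a small value; on real mixtures it is limited by noise and model error, which is why the abstract reports 4.3%.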
Soil Water Characteristics of Cores from Low- and High-Centered Polygons, Barrow, Alaska, 2012
Graham, David; Moon, Ji-Won
2016-08-22
This dataset includes soil water characteristic curves for soil and permafrost in two representative frozen cores collected from a high-center polygon (HCP) and a low-center polygon (LCP) at the Barrow Environmental Observatory. Data include soil water content and soil water potential measured using the simple evaporation method for hydrological and biogeochemical simulations and experimental data analysis. Data can be used to generate a soil moisture characteristic curve, which can be fit to a variety of hydrological functions to infer critical parameters for soil physics. Considering the measured soil water properties, the van Genuchten model predicted the HCP core well; in contrast, the Kosugi model better fitted the LCP core, which was in a more saturated condition.
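Fitting a retention curve such as the van Genuchten model mentioned above is a standard nonlinear least-squares problem. The sketch below uses synthetic water-content/suction pairs with invented parameter values and units; only the van Genuchten functional form itself is taken as given.

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """van Genuchten retention curve: water content vs. suction psi (> 0)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

psi = np.logspace(-1, 3, 30)                    # suction head (illustrative units)
rng = np.random.default_rng(2)
theta = van_genuchten(psi, 0.05, 0.45, 0.5, 1.8)  # synthetic "measured" curve
obs = theta + rng.normal(0, 0.005, psi.size)

popt, _ = curve_fit(van_genuchten, psi, obs, p0=[0.1, 0.4, 0.2, 1.5],
                    bounds=([0.0, 0.2, 1e-3, 1.01], [0.2, 0.6, 10.0, 5.0]))
theta_r_fit, theta_s_fit, alpha_fit, n_fit = popt
```

The fitted `alpha` and `n` are the "critical parameters for soil physics" referred to in the abstract; the same data could be fit to the Kosugi model by swapping the model function.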
1949-01-01
decay curves. The fit depended on the points chosen for the determination of K and L, and by a suitable choice a reasonably good fit could be obtained. As in the previous test, the crater decay curves indicate a mixture of Na24 and fission products. Other active materials may be present.
Tensile stress-strain behavior of graphite/epoxy laminates
NASA Technical Reports Server (NTRS)
Garber, D. P.
1982-01-01
The tensile stress-strain behavior of a variety of graphite/epoxy laminates was examined. Longitudinal and transverse specimens from eleven different layups were monotonically loaded in tension to failure. Ultimate strength, ultimate strain, and stress-strain curves were obtained from four replicate tests in each case. Polynomial equations were fitted by the method of least squares to the stress-strain data to determine average curves. Values of Young's modulus and Poisson's ratio, derived from polynomial coefficients, were compared with laminate analysis results. While the polynomials appeared to accurately fit the stress-strain data in most cases, the use of polynomial coefficients to calculate elastic moduli appeared to be of questionable value in cases involving sharp changes in the slope of the stress-strain data or extensive scatter.
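The modulus-from-coefficients step works as follows: after a least-squares polynomial fit to stress vs. strain, Young's modulus is the slope at zero strain, i.e. the linear coefficient. The sketch below uses an invented mildly nonlinear laminate response; the specific values are not from the report.

```python
import numpy as np

# synthetic stress-strain curve with mild softening (hypothetical values)
strain = np.linspace(0.0, 0.01, 50)
E_true = 70e9                                   # Pa, assumed initial modulus
rng = np.random.default_rng(3)
stress = E_true * strain - 1.5e12 * strain**2 + rng.normal(0, 2e5, strain.size)

# least-squares polynomial fit; Young's modulus is the slope at zero strain,
# i.e. the coefficient of the linear term
coeffs = np.polyfit(strain, stress, 2)          # [c2, c1, c0], highest power first
E_fit = coeffs[-2]
```

This also illustrates the caveat in the abstract: with sharp slope changes or heavy scatter, the low-order coefficients absorb curvature and noise, so the derived modulus becomes unreliable even when the overall fit looks good.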
Species area relationships in mediterranean-climate plant communities
Keeley, Jon E.; Fotheringham, C.J.
2003-01-01
Aim: To determine the best-fit model of species–area relationships for Mediterranean-type plant communities and evaluate how community structure affects these species–area models. Location: Data were collected from California shrublands and woodlands and compared with literature reports for other Mediterranean-climate regions. Methods: The number of species was recorded from 1, 100 and 1000 m2 nested plots. Best fit to the power model or exponential model was determined by comparing adjusted r2 values from the least squares regression, pattern of residuals, homoscedasticity across scales, and semi-log slopes at 1–100 m2 and 100–1000 m2. Dominance–diversity curves were tested for fit to the lognormal model, MacArthur's broken stick model, and the geometric and harmonic series. Results: Early successional Western Australia and California shrublands represented the extremes and provide an interesting contrast as the exponential model was the best fit for the former, and the power model for the latter, despite similar total species richness. We hypothesize that structural differences in these communities account for the different species–area curves and are tied to patterns of dominance, equitability and life form distribution. Dominance–diversity relationships for Western Australian heathlands exhibited a close fit to MacArthur's broken stick model, indicating more equitable distribution of species. In contrast, Californian shrublands, both postfire and mature stands, were best fit by the geometric model indicating strong dominance and many minor subordinate species. These regions differ in life form distribution, with annuals being a major component of diversity in early successional Californian shrublands although they are largely lacking in mature stands. Both young and old Australian heathlands are dominated by perennials, and annuals are largely absent. Inherent in all of these ecosystems is cyclical disequilibrium caused by periodic fires.
The potential for community reassembly is greater in Californian shrublands where only a quarter of the flora resprout, whereas three quarters resprout in Australian heathlands. Other Californian vegetation types sampled include coniferous forests, oak savannas and desert scrub, and demonstrate that different community structures may lead to a similar species–area relationship. Dominance–diversity relationships for coniferous forests closely follow a geometric model whereas associated oak savannas show a close fit to the lognormal model. However, for both communities, species–area curves fit a power model. The primary driver appears to be the presence of annuals. Desert scrub communities illustrate dramatic changes in both species diversity and dominance–diversity relationships in high and low rainfall years, because of the disappearance of annuals in drought years. Main conclusions: Species–area curves for immature shrublands in California and the majority of Mediterranean plant communities fit a power function model. Exceptions that fit the exponential model are not because of sampling error or scaling effects; rather, structural differences in these communities provide plausible explanations. The exponential species–area model may arise in more than one way. In the highly diverse Australian heathlands it results from a rapid increase in species richness at small scales. In mature California shrublands it results from very depauperate richness at the community scale. In both instances the exponential model is tied to a preponderance of perennials and paucity of annuals. For communities fit by a power model, coefficients z and log c exhibit a number of significant correlations with other diversity parameters, suggesting that they have some predictive value in ecological communities.
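Both candidate models above reduce to straight-line fits under the right transformation, which is how the comparison is usually carried out: the power model S = cA^z is linear in log-log space, and the exponential model S = c + z·log A is linear in semi-log space. A minimal sketch with invented nested-plot counts:

```python
import numpy as np

# nested-plot species richness at 1, 100 and 1000 m^2 (hypothetical counts)
area = np.array([1.0, 100.0, 1000.0])
richness = np.array([12.0, 40.0, 70.0])

# power model S = c * A**z  ->  log10 S = log10 c + z * log10 A
z_pow, logc = np.polyfit(np.log10(area), np.log10(richness), 1)

# exponential model S = c + z * log10 A  ->  already linear in semi-log space
z_exp, c_exp = np.polyfit(np.log10(area), richness, 1)
```

In practice the choice between the two fits is made by comparing adjusted r², residual patterns, and homoscedasticity across scales, as described in the Methods above; with many replicate plots the slope coefficients z can then be correlated with other diversity parameters.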
Durtschi, Jacob D; Stevenson, Jeffery; Hymas, Weston; Voelkerding, Karl V
2007-02-01
Real-time PCR data analysis for quantification has been the subject of many studies aimed at the identification of new and improved quantification methods. Several analysis methods have been proposed as superior alternatives to the common variations of the threshold crossing method. Notably, sigmoidal and exponential curve fit methods have been proposed. However, these studies have primarily analyzed real-time PCR with intercalating dyes such as SYBR Green. Clinical real-time PCR assays, in contrast, often employ fluorescent probes whose real-time amplification fluorescence curves differ from those of intercalating dyes. In the current study, we compared four analysis methods related to recent literature: two versions of the threshold crossing method, a second derivative maximum method, and a sigmoidal curve fit method. These methods were applied to a clinically relevant real-time human herpes virus type 6 (HHV6) PCR assay that used a minor groove binding (MGB) Eclipse hybridization probe as well as an Epstein-Barr virus (EBV) PCR assay that used an MGB Pleiades hybridization probe. We found that the crossing threshold method yielded more precise results when analyzing the HHV6 assay, which was characterized by lower signal/noise and less developed amplification curve plateaus. In contrast, the EBV assay, characterized by greater signal/noise and amplification curves with plateau regions similar to those observed with intercalating dyes, gave results with statistically similar precision by all four analysis methods.
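The sigmoidal curve-fit method compared above can be sketched as a four-parameter logistic fit of fluorescence vs. cycle number. The data here are synthetic with invented amplitude, inflection cycle, slope, and baseline; real amplification curves (especially probe-based ones with weak plateaus, as the abstract notes) are harder to fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(c, f_max, c_half, k, f_bg):
    """Four-parameter logistic amplification curve: fluorescence vs. cycle."""
    return f_bg + f_max / (1.0 + np.exp(-(c - c_half) / k))

cycles = np.arange(1, 41, dtype=float)
rng = np.random.default_rng(4)
true = sigmoid(cycles, 100.0, 24.0, 1.6, 3.0)   # synthetic fluorescence
obs = true + rng.normal(0, 0.8, cycles.size)

popt, _ = curve_fit(sigmoid, cycles, obs, p0=[90.0, 20.0, 2.0, 0.0])
c_half = popt[1]       # inflection cycle: a threshold-free quantification point
```

Unlike the crossing-threshold method, `c_half` needs no user-chosen threshold, but as the study found, its precision depends on signal-to-noise and on how well-developed the plateau is.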
Estimating non-isothermal bacterial growth in foods from isothermal experimental data.
Corradini, M G; Peleg, M
2005-01-01
To develop a mathematical method to estimate non-isothermal microbial growth curves in foods from experiments performed under isothermal conditions and demonstrate the method's applicability with published growth data. Published isothermal growth curves of Pseudomonas spp. in refrigerated fish at 0-8 degrees C and Escherichia coli 1952 in a nutritional broth at 27.6-36 degrees C were fitted with two different three-parameter 'primary models' and the temperature dependence of their parameters was fitted by ad hoc empirical 'secondary models'. These were used to generate non-isothermal growth curves by solving, numerically, a differential equation derived on the premise that the momentary non-isothermal growth rate is the isothermal rate at the momentary temperature, at a time that corresponds to the momentary growth level of the population. The predicted non-isothermal growth curves were in agreement with the reported experimental ones and, as expected, the quality of the predictions did not depend on the 'primary model' chosen for the calculation. A common type of sigmoid growth curve can be adequately described by three-parameter 'primary models'. At least in the two systems examined, these could be used to predict growth patterns under a variety of continuous and discontinuous non-isothermal temperature profiles. The described mathematical method whenever validated experimentally will enable the simulation of the microbial quality of stored and transported foods under a large variety of existing or contemplated commercial temperature histories.
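The differential-equation step described above can be sketched as follows. This is a simplified illustration with invented primary/secondary models (a logistic primary model and a linear temperature dependence of its rate constant), not the paper's fitted models; conveniently, for a logistic primary model the isothermal rate can be written directly in terms of the momentary growth level, so the time-shift construction collapses to a state-dependent ODE.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = 9.0                                        # asymptotic log10 count (assumed)

def k(temp):                                   # hypothetical secondary model:
    return 0.02 + 0.01 * temp                  # logistic rate constant vs. temp, 1/h

def T(t):                                      # non-isothermal storage profile:
    return 2.0 + 0.05 * t                      # slow warming from 2 degrees C

def dYdt(t, Y):
    """Momentary growth rate = the isothermal (logistic) rate at the current
    temperature T(t), evaluated at the growth level already reached."""
    y = min(max(Y[0], 1e-9), A - 1e-9)         # keep level inside (0, A)
    kk = k(T(t))
    return [kk * y * (1.0 - y / A)]

sol = solve_ivp(dYdt, (0.0, 100.0), [0.5], max_step=1.0)
```

For primary models whose rate depends explicitly on time (e.g. with a lag parameter), the derivative must instead be evaluated at the time t* that would have produced the current level under the momentary temperature, exactly as the abstract describes.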
NASA Astrophysics Data System (ADS)
Lee, Soojin; Cho, Woon Jo; Kim, Yang Do; Kim, Eun Kyu; Park, Jae Gwan
2005-07-01
White-light-emitting Si nanoparticles were prepared from the sodium silicide (NaSi) precursor. The photoluminescence of the colloidal Si nanoparticles was fitted using the effective mass approximation (EMA). We analyzed the correlation between the experimental photoluminescence and the simulated fitting curves. Both the mean diameter and the size dispersion of the white-light-emitting Si nanoparticles were estimated.
Note: Index of refraction measurement using the Fresnel equations.
McClymer, J P
2014-08-01
The real part of the refractive index is measured from 1.30 to above 3.00 without the use of index-matching fluids. This approach expands upon the Brewster angle technique: both S- and P-polarized light are used, and the full Fresnel equations are fitted to the data with nonlinear curve fitting to extract the index of refraction.
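A sketch of that fit: the two Fresnel reflectance curves share a single unknown index n, so both polarizations can be fitted simultaneously. The synthetic "measurements" below assume a BK7-like glass (n = 1.52) and an invented noise level; the Fresnel reflectance formulas themselves are standard.

```python
import numpy as np
from scipy.optimize import curve_fit

def r_s(theta, n):
    """Fresnel reflectance, S polarization, air -> medium of index n."""
    ct, st2 = np.cos(theta), np.sin(theta) ** 2
    root = np.sqrt(n**2 - st2)
    return ((ct - root) / (ct + root)) ** 2

def r_p(theta, n):
    """Fresnel reflectance, P polarization (vanishes at Brewster's angle)."""
    ct, st2 = np.cos(theta), np.sin(theta) ** 2
    root = np.sqrt(n**2 - st2)
    return ((n**2 * ct - root) / (n**2 * ct + root)) ** 2

theta = np.radians(np.linspace(10, 80, 30))
n_true = 1.52                                   # assumed BK7-like glass
rng = np.random.default_rng(5)
obs = np.concatenate([r_s(theta, n_true), r_p(theta, n_true)])
obs += rng.normal(0, 0.002, obs.size)

# fit both polarizations at once with a single shared index
model = lambda th, n: np.concatenate([r_s(th, n), r_p(th, n)])
popt, _ = curve_fit(model, theta, obs, p0=[1.4], bounds=(1.0, 3.0))
```

Using both polarizations constrains n much better than the Brewster-angle minimum alone, which is the advantage the note claims.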
[Quantitative study of diesel/CNG buses exhaust particulate size distribution in a road tunnel].
Zhu, Chun; Zhang, Xu
2010-10-01
Vehicle emission is one of the main sources of fine/ultra-fine particles in many cities. This study presents daily mean particle size distributions of a mixed diesel/CNG bus traffic flow, obtained from 4 consecutive days of real-world measurements in an Australian road tunnel. Emission factors (EFs) for the particle size distributions of diesel buses and CNG buses are obtained by MLR methods; the particle distributions of diesel buses and CNG buses are observed to be single accumulation-mode and nuclei-mode distributions, respectively. Particle size distributions of the mixed traffic flow are decomposed into two log-normal fitting curves for each 30 min interval mean scan; the degrees of fitting between the combined fitting curves and the corresponding in-situ scans, for 90 fitting scans in total, range from 0.972 to 0.998. Finally, the particle size distributions of diesel buses and CNG buses are quantified by statistical whisker-box charts. For the log-normal particle size distribution of diesel buses, accumulation-mode diameters are 74.5-86.5 nm and geometric standard deviations are 1.88-2.05. For the log-normal particle size distribution of CNG buses, nuclei-mode diameters are 19.9-22.9 nm and geometric standard deviations are 1.27-1.3.
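The two-mode decomposition can be sketched as a six-parameter fit of a bimodal log-normal to a measured size spectrum. The synthetic spectrum below uses mode parameters loosely inspired by the ranges quoted above (nuclei mode near 21 nm, accumulation mode near 80 nm), but the counts and noise level are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_mode(d, n_tot, d_g, sigma_g):
    """One log-normal mode of a number-size distribution, dN/dlogDp."""
    return (n_tot / (np.log10(sigma_g) * np.sqrt(2 * np.pi))
            * np.exp(-(np.log10(d / d_g)) ** 2 / (2 * np.log10(sigma_g) ** 2)))

def bimodal(d, n1, d1, s1, n2, d2, s2):
    return lognormal_mode(d, n1, d1, s1) + lognormal_mode(d, n2, d2, s2)

d = np.logspace(1, 3, 60)                       # diameter, nm
# nuclei mode (CNG-like, ~21 nm) + accumulation mode (diesel-like, ~80 nm)
true = bimodal(d, 1e4, 21.0, 1.29, 5e3, 80.0, 1.95)
rng = np.random.default_rng(6)
obs = true * (1 + rng.normal(0, 0.03, d.size))

popt, _ = curve_fit(bimodal, d, obs,
                    p0=[8e3, 18.0, 1.3, 4e3, 70.0, 1.8],
                    bounds=([0, 5, 1.1, 0, 5, 1.1], [1e5, 50, 3, 1e5, 500, 3]))
```

Repeating this fit for each 30 min mean scan, and reading off the mode diameters and geometric standard deviations, yields the per-fuel-type statistics summarized in the whisker-box charts.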
Winterstein, Thomas A.
2002-01-01
Type curves for the Hantush and Theis methods were fitted to the measured drawdown and recovery curves in the observation well. The results of matching the type curves to the measured data indicate that leakage from the overlying Eau Claire confining unit into the Mt. Simon aquifer is negligible. The transmissivity and storage coefficients for the Mt. Simon aquifer, determined by both methods, are 3,000 ft2/d and 3 x 10-4, respectively. The average hydraulic conductivity, assuming an aquifer thickness of 233 ft, is 10 ft/d.
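The Theis (no-leakage) side of such a type-curve match can be done as a direct nonlinear fit, since the Theis well function W(u) is scipy's `exp1`. The pumping rate, observation distance, and noise below are hypothetical; only the aquifer parameters match the values quoted above, so the fit recovers them from the synthetic drawdowns. The Hantush leaky-aquifer case would add a leakage parameter and is not shown.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import exp1

Q = 19000.0                                     # pumping rate, ft^3/d (hypothetical)
r = 100.0                                       # distance to observation well, ft

def theis(t, T, S):
    """Theis drawdown s = Q/(4 pi T) * W(u), with u = r^2 S / (4 T t)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

t = np.logspace(-2, 1, 40)                      # days since pumping began
rng = np.random.default_rng(7)
obs = theis(t, 3000.0, 3e-4) + rng.normal(0, 0.005, t.size)   # synthetic drawdowns, ft

popt, _ = curve_fit(theis, t, obs, p0=[1000.0, 1e-3],
                    bounds=([100.0, 1e-6], [1e5, 1e-1]))
T_fit, S_fit = popt
```

Dividing the fitted transmissivity by the aquifer thickness gives the average hydraulic conductivity, as in the last sentence of the abstract (3,000 ft2/d over 233 ft is about 13 ft/d, reported as roughly 10 ft/d).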
NASA Astrophysics Data System (ADS)
Kazakis, Nikolaos A.
2018-01-01
The present comment concerns the correct presentation of an algorithm proposed in the above paper for the glow-curve deconvolution in the case of continuous distribution of trapping states. Since most researchers would use directly the proposed algorithm as published, they should be notified of its correct formulation during the fitting of TL glow curves of materials with continuous trap distribution using this Equation.
Fan, Jiajie; Mohamed, Moumouni Guero; Qian, Cheng; Fan, Xuejun; Zhang, Guoqi; Pecht, Michael
2017-01-01
With the expanding application of light-emitting diodes (LEDs), the color quality of white LEDs has attracted much attention in several color-sensitive application fields, such as museum lighting, healthcare lighting and displays. Reliability concerns for white LEDs are changing from the luminous efficiency to color quality. However, most of the current available research on the reliability of LEDs is still focused on luminous flux depreciation rather than color shift failure. The spectral power distribution (SPD), defined as the radiant power distribution emitted by a light source at a range of visible wavelength, contains the most fundamental luminescence mechanisms of a light source. SPD is used as the quantitative inference of an LED’s optical characteristics, including color coordinates that are widely used to represent the color shift process. Thus, to model the color shift failure of white LEDs during aging, this paper first extracts the features of an SPD, representing the characteristics of blue LED chips and phosphors, by multi-peak curve-fitting and modeling them with statistical functions. Then, because the shift processes of extracted features in aged LEDs are always nonlinear, a nonlinear state-space model is then developed to predict the color shift failure time within a self-adaptive particle filter framework. The results show that: (1) the failure mechanisms of LEDs can be identified by analyzing the extracted features of SPD with statistical curve-fitting and (2) the developed method can dynamically and accurately predict the color coordinates, correlated color temperatures (CCTs), and color rendering indexes (CRIs) of phosphor-converted (pc)-white LEDs, and also can estimate the residual color life. PMID:28773176
NASA Astrophysics Data System (ADS)
Hayek, W.; Sing, D.; Pont, F.; Asplund, M.
2012-03-01
We compare limb darkening laws derived from 3D hydrodynamical model atmospheres and 1D hydrostatic MARCS models for the host stars of two well-studied transiting exoplanet systems, the late-type dwarfs HD 209458 and HD 189733. The surface brightness distribution of the stellar disks is calculated for a wide spectral range using 3D LTE spectrum formation and opacity sampling. We test our theoretical predictions using least-squares fits of model light curves to wavelength-integrated primary eclipses that were observed with the Hubble Space Telescope (HST). The limb darkening law derived from the 3D model of HD 209458 in the spectral region between 2900 Å and 5700 Å produces significantly better fits to the HST data, removing systematic residuals that were previously observed for model light curves based on 1D limb darkening predictions. This difference arises mainly from the shallower mean temperature structure of the 3D model, which is a consequence of the explicit simulation of stellar surface granulation where 1D models need to rely on simplified recipes. In the case of HD 189733, the model atmospheres produce practically equivalent limb darkening curves between 2900 Å and 5700 Å, partly due to obstruction by spectral lines, and the data are not sufficient to distinguish between the light curves. We also analyze HST observations between 5350 Å and 10 500 Å for this star; the 3D model leads to a better fit compared to 1D limb darkening predictions. The significant improvement of fit quality for the HD 209458 system demonstrates the higher degree of realism of 3D hydrodynamical models and the importance of surface granulation for the formation of the atmospheric radiation field of late-type stars. This result agrees well with recent investigations of limb darkening in the solar continuum and other observational tests of the 3D models.
The case of HD 189733 is no contradiction as the model light curves are less sensitive to the temperature stratification of the stellar atmosphere and the observed data in the 2900-5700 Å region are not sufficient to distinguish more clearly between the 3D and 1D limb darkening predictions. Full theoretical spectra for both stars are available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/539/A102, as well as at www.astro.ex.ac.uk/people/sing.
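Limb darkening laws like those compared above are usually expressed as low-order fits of disk intensity against μ = cos(angle from disk center). A minimal sketch, using one common parameterization (the quadratic law) with invented coefficients and noise, not values from either star:

```python
import numpy as np
from scipy.optimize import curve_fit

def quad_ld(mu, a, b):
    """Quadratic limb darkening law, I(mu)/I(1) = 1 - a(1-mu) - b(1-mu)^2."""
    return 1.0 - a * (1.0 - mu) - b * (1.0 - mu) ** 2

mu = np.linspace(0.05, 1.0, 40)                 # mu = cos(limb angle)
rng = np.random.default_rng(8)
obs = quad_ld(mu, 0.40, 0.25) + rng.normal(0, 0.005, mu.size)  # synthetic profile

popt, _ = curve_fit(quad_ld, mu, obs, p0=[0.3, 0.2])
a_fit, b_fit = popt
```

In the study above, the intensity profile I(μ) comes from the 3D or 1D model atmosphere, and the resulting law is then propagated into the transit light-curve model that is least-squares fitted to the HST eclipses.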
Ingerle, D.; Meirer, F.; Pepponi, G.; Demenev, E.; Giubertoni, D.; Wobrauschek, P.; Streli, C.
2014-01-01
The continuous downscaling of the process size for semiconductor devices pushes the junction depths and consequentially the implantation depths to the top few nanometers of the Si substrate. This motivates the need for sensitive methods capable of analyzing dopant distribution, total dose and possible impurities. X-ray techniques utilizing the external reflection of X-rays are very surface sensitive, hence providing a non-destructive tool for process analysis and control. X-ray reflectometry (XRR) is an established technique for the characterization of single- and multi-layered thin film structures with layer thicknesses in the nanometer range. XRR spectra are acquired by varying the incident angle in the grazing incidence regime while measuring the specular reflected X-ray beam. The shape of the resulting angle-dependent curve is correlated to changes of the electron density in the sample, but does not provide direct information on the presence or distribution of chemical elements in the sample. Grazing Incidence XRF (GIXRF) measures the X-ray fluorescence induced by an X-ray beam incident under grazing angles. The resulting angle dependent intensity curves are correlated to the depth distribution and mass density of the elements in the sample. GIXRF provides information on contaminations, total implanted dose and to some extent on the depth of the dopant distribution, but is ambiguous with regard to the exact distribution function. Both techniques use similar measurement procedures and data evaluation strategies, i.e. optimization of a sample model by fitting measured and calculated angle curves. Moreover, the applied sample models can be derived from the same physical properties, like atomic scattering/form factors and elemental concentrations; a simultaneous analysis is therefore a straightforward approach. 
This combined analysis in turn reduces the uncertainties of the individual techniques, allowing a determination of dose and depth profile of the implanted elements with drastically increased confidence level. Silicon wafers implanted with Arsenic at different implantation energies were measured by XRR and GIXRF using a combined, simultaneous measurement and data evaluation procedure. The data were processed using a self-developed software package (JGIXA), designed for simultaneous fitting of GIXRF and XRR data. The results were compared with depth profiles obtained by Secondary Ion Mass Spectrometry (SIMS). PMID:25202165
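The core idea of the combined evaluation — one shared sample model whose parameters are refined against both angle curves at once — can be sketched as a stacked least-squares problem. The forward models below are deliberately simple stand-ins (a real analysis, as in JGIXA, would use the Parratt recursion for XRR and a fluorescence-yield depth integral for GIXRF); only the shared-parameter structure is illustrated, and all function names and the synthetic parameters are assumptions for this sketch.

```python
import numpy as np
from scipy.optimize import least_squares

# Placeholder forward models: stand-ins for the Parratt recursion (XRR)
# and the fluorescence depth integral (GIXRF); both depend on the SAME
# sample model (thickness, density), which is the point of the joint fit.
def xrr_model(theta, thickness, density):
    return density * np.exp(-theta * thickness)

def gixrf_model(theta, thickness, density):
    return density * (1.0 - np.exp(-theta * thickness))

def joint_residuals(params, theta, r_meas, f_meas, weight=1.0):
    """Stack XRR and GIXRF residuals so one optimizer step updates the
    single shared sample model."""
    thickness, density = params
    res_xrr = xrr_model(theta, thickness, density) - r_meas
    res_gixrf = gixrf_model(theta, thickness, density) - f_meas
    return np.concatenate([res_xrr, weight * res_gixrf])

# Synthetic "measurements" generated from a known sample model
theta = np.linspace(0.1, 2.0, 50)
r_meas = xrr_model(theta, 2.0, 0.5)
f_meas = gixrf_model(theta, 2.0, 0.5)

fit = least_squares(joint_residuals, x0=[1.0, 1.0],
                    args=(theta, r_meas, f_meas))
```

Because both residual vectors constrain the same two parameters, ambiguities that either technique alone leaves open (here: the trade-off between thickness and density) are resolved by the other, which is the confidence gain the abstract describes.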
A Modified LS+AR Model to Improve the Accuracy of the Short-term Polar Motion Prediction
NASA Astrophysics Data System (ADS)
Wang, Z. W.; Wang, Q. X.; Ding, Y. Q.; Zhang, J. J.; Liu, S. S.
2017-03-01
The LS (Least Squares)+AR (AutoRegressive) model for polar motion forecasting has two problems: the residuals of the LS fit are reasonable within the fitting interval, but the residuals of the LS extrapolation are poor; and the LS fitting residual sequence is non-linear, so it is unsuitable to build an AR model for the residual sequence to be forecast from the residual sequence before the forecast epoch. In this paper, we address these two problems in two steps. First, constraints are added at the two endpoints of the LS fitting data to fix them on the LS fitting curve, so that the fitted values near the two endpoints are very close to the observations. Second, we select the interpolation residual sequence of an inward LS fitting curve, which has a variation trend similar to that of the LS extrapolation residual sequence, as the AR modeling object for the residual forecast. Calculation examples show that this solution effectively improves the short-term polar motion prediction accuracy of the LS+AR model. In addition, comparisons with the RLS (Robustified Least Squares)+AR, RLS+ARIMA (AutoRegressive Integrated Moving Average), and LS+ANN (Artificial Neural Network) forecast models confirm the feasibility and effectiveness of the solution for polar motion forecasting. The results, especially for forecasts 1-10 days ahead, show that the forecast accuracy of the proposed model reaches the world level.
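The basic LS+AR pipeline the paper modifies can be sketched as follows: a deterministic LS fit, then an AR model trained on the fit residuals and extrapolated forward. This is a minimal illustration, not the authors' implementation — a real polar motion fit would also include the Chandler term, and the endpoint constraints and inward-curve residual selection described above are omitted here.

```python
import numpy as np

def ls_fit(t, y, period=365.25):
    """Plain LS fit of bias + trend + annual harmonic (simplified sketch;
    real polar motion models also include the ~433 d Chandler term)."""
    A = np.column_stack([np.ones_like(t), t,
                         np.cos(2 * np.pi * t / period),
                         np.sin(2 * np.pi * t / period)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, A @ coef

def ar_forecast(resid, order, steps):
    """Fit AR(order) to the LS residual sequence by ordinary least squares
    and extrapolate it `steps` epochs ahead."""
    n = len(resid)
    # Column k holds the lag-(k+1) values aligned with targets resid[order:]
    X = np.column_stack([resid[order - 1 - k:n - 1 - k] for k in range(order)])
    phi, *_ = np.linalg.lstsq(X, resid[order:], rcond=None)
    history = list(resid[-order:])
    forecast = []
    for _ in range(steps):
        nxt = float(np.dot(phi, history[::-1]))  # most recent value first
        forecast.append(nxt)
        history = history[1:] + [nxt]
    return np.array(forecast)
```

The total forecast is the LS extrapolation plus the AR residual forecast; the paper's contribution is choosing *which* residual sequence the AR model is trained on.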
NASA Astrophysics Data System (ADS)
Bhattacharjee, Sudip; Swamy, Aravind Krishna; Daniel, Jo S.
2012-08-01
This paper presents a simple and practical approach to obtain the continuous relaxation and retardation spectra of asphalt concrete directly from complex (dynamic) modulus test data. The spectra thus obtained are continuous functions of relaxation and retardation time. The major advantage of this method is that the continuous form is obtained directly from the master curves, which are readily available from the standard characterization tests of linear viscoelastic behavior of asphalt concrete. The continuous spectrum method offers an efficient alternative to the numerical computation of discrete spectra and can easily be used for modeling viscoelastic behavior. In this research, asphalt concrete specimens were tested for linear viscoelastic characterization. The linear viscoelastic test data were used to develop storage modulus and storage compliance master curves. The continuous spectra are obtained from the fitted sigmoid function of the master curves via the inverse integral transform. The continuous spectra are shown to be the limiting case of the discrete distributions. The continuous spectra and the time-domain viscoelastic functions (relaxation modulus and creep compliance) computed from the spectra match the approximate solutions very well. The shape of the spectra is observed to depend on the master curve parameters. The continuous spectra thus obtained can easily be implemented in the material mix design process. Prony-series coefficients can easily be obtained from the continuous spectra and used in numerical analyses such as finite element analysis.
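The first step of the method — fitting a sigmoid function to the storage modulus master curve — can be sketched with a standard nonlinear least-squares fit. The four-parameter sigmoid form below is the common asphalt master curve convention and is an assumption of this sketch, not necessarily the paper's exact parameterization; the continuous spectrum would then follow analytically from the fitted parameters via the inverse integral transform described above.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_master_curve(log_fr, delta, alpha, beta, gamma):
    """Common sigmoid form for the log storage modulus master curve:
    log|E'| = delta + alpha / (1 + exp(beta + gamma * log_fr)),
    where log_fr is the log reduced frequency, delta the lower asymptote,
    and delta + alpha the upper asymptote."""
    return delta + alpha / (1.0 + np.exp(beta + gamma * log_fr))

# Synthetic master curve data generated from known parameters
log_fr = np.linspace(-6.0, 6.0, 80)
true_params = (1.0, 3.0, 0.5, -0.8)
log_e = sigmoid_master_curve(log_fr, *true_params)

# Recover the parameters from the "measured" master curve
popt, _ = curve_fit(sigmoid_master_curve, log_fr, log_e,
                    p0=[0.5, 2.5, 0.0, -0.5])
```

Because the sigmoid is smooth and monotonic, the fit is well conditioned given reasonable starting values, which is what makes the subsequent closed-form spectrum extraction practical.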
Extracting information from AGN variability
NASA Astrophysics Data System (ADS)
Kasliwal, Vishal P.; Vogeley, Michael S.; Richards, Gordon T.
2017-09-01
Active galactic nuclei (AGNs) exhibit rapid, high-amplitude stochastic flux variations across the entire electromagnetic spectrum on time-scales ranging from hours to years. The cause of this variability is poorly understood. We present a Green's function-based method for using variability to (1) measure the time-scales on which flux perturbations evolve and (2) characterize the driving flux perturbations. We model the observed light curve of an AGN as a linear differential equation driven by stochastic impulses. We analyse the light curve of the Kepler AGN Zw 229-15 and find that the observed variability behaviour can be modelled as a damped harmonic oscillator perturbed by a coloured noise process. The model power spectrum turns over on a time-scale of 385 d. On shorter time-scales, the log-power-spectrum slope varies between 2 and 4, explaining the behaviour noted by previous studies. We recover and identify both the 5.6 and 67 d time-scales reported by previous work using the Green's function of the Continuous-time AutoRegressive Moving Average equation rather than by directly fitting the power spectrum of the light curve. These are the time-scales on which flux perturbations grow, and on which flux perturbations decay back to the steady-state flux level, respectively. We make kālī, the software package used to study light curves with our method, available to the community.
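The damped-harmonic-oscillator picture can be made concrete through its Green's function, i.e., the flux response to a single driving impulse. The sketch below writes down the standard underdamped impulse response; the parameter values are illustrative assumptions, not the fitted values for Zw 229-15.

```python
import numpy as np

def dho_green(t, omega0, zeta):
    """Green's function of an underdamped harmonic oscillator,
        x'' + 2*zeta*omega0*x' + omega0**2 * x = delta(t),
    for t >= 0:  h(t) = exp(-zeta*omega0*t) * sin(omega_d*t) / omega_d,
    with damped frequency omega_d = omega0*sqrt(1 - zeta**2).
    The envelope decay time 1/(zeta*omega0) and the oscillation
    time 1/omega_d play the roles of the growth/decay time-scales
    discussed in the text."""
    omega_d = omega0 * np.sqrt(1.0 - zeta ** 2)
    return np.exp(-zeta * omega0 * t) * np.sin(omega_d * t) / omega_d

# Illustrative evaluation over 10 time units (assumed parameters)
t = np.linspace(0.0, 10.0, 1000)
h = dho_green(t, omega0=1.0, zeta=0.1)
```

Convolving this kernel with a coloured noise process yields a model light curve, which is the sense in which the Green's function separates "how perturbations evolve" from "what drives them".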
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Martin; /SLAC
2010-12-16
The study of the power density spectrum (PDS) of fluctuations in the X-ray flux from active galactic nuclei (AGN) complements spectral studies in giving us a view into the processes operating in accreting compact objects. An important line of investigation is the comparison of the PDS from AGN with those from galactic black hole binaries; a related area of focus is the scaling relation between time scales for the variability and the black hole mass. The PDS of AGN is traditionally modeled using segments of power laws joined together at so-called break frequencies; associations of the break time scales, i.e., the inverses of the break frequencies, with time scales of physical processes thought to operate in these sources are then sought. I analyze the Method of Light Curve Simulations that is commonly used to characterize the PDS in AGN with a view to making the method as sensitive as possible to the shape of the PDS. I identify several weaknesses in the current implementation of the method and propose alternatives that can substitute for some of the key steps in the method. I focus on the complications introduced by uneven sampling in the light curve, the development of a fit statistic that is better matched to the distributions of power in the PDS, and the statistical evaluation of the fit between the observed data and the model for the PDS. Using archival data on one AGN, NGC 3516, I validate my changes against previously reported results. I also report new results on the PDS in NGC 4945, a Seyfert 2 galaxy with a well-determined black hole mass. This source provides an opportunity to investigate whether the PDS of Seyfert 1 and Seyfert 2 galaxies differ. It is also an attractive object for placement on the black hole mass-break time scale relation. Unfortunately, with the available data on NGC 4945, significant uncertainties on the break frequency in its PDS remain.
NASA Astrophysics Data System (ADS)
Takahashi, Takuya; Sugiura, Junnnosuke; Nagayama, Kuniaki
2002-05-01
To investigate the role hydration plays in the electrostatic interactions of proteins, the time-averaged electrostatic potential of the B1 domain of protein G in aqueous solution was calculated with full atomic molecular dynamics simulations that explicitly consider every atom (i.e., an all-atom model). This all-atom potential was compared with the potential obtained from an electrostatic continuum model calculation. In both cases, the charge-screening effect was fairly well formulated with an effective relative dielectric constant that increased linearly with increasing charge-charge distance. This simulated linear dependence agrees with the experimentally determined linear relation proposed by Pickersgill. Cut-off approximations for Coulomb interactions failed to reproduce this linear relation. The correlation between the all-atom and continuum models was found to be better than the respective correlations obtained from linear fits to the two models. This confirms that the continuum model treats the complicated shapes of protein conformations better than the simple linear-fitting empirical model. We also tried a sigmoid-fitting empirical model in addition to the linear one. When the weights of all data were treated equally, the sigmoid model, which requires two fitting parameters, fit the results of both the all-atom and continuum models less accurately than the linear model, which requires only one fitting parameter. When potential values were chosen as weighting factors, the fitting error of the sigmoid model became smaller, and the slopes of both linear fitting curves became smaller. This suggests that the screening effect of an aqueous medium at short range, where potential values are relatively large, is smaller than that expected from the linear fitting curve, whose slope is almost 4.
To investigate the linear increase of the effective relative dielectric constant, the Poisson equation of a low-dielectric sphere in a high-dielectric medium was solved and charges distributed near the molecular surface were indicated as leading to the apparent linearity.
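The effective relative dielectric constant discussed above is defined by comparing a screened interaction energy with its vacuum Coulomb value, and the Pickersgill-type relation is then a straight-line fit of that constant against charge-charge distance. The sketch below illustrates the bookkeeping with synthetic data constructed so that the slope is exactly 4 (the value the text reports as approximately holding); the charges, distance range, and energies are illustrative assumptions.

```python
import numpy as np

K = 332.06  # vacuum Coulomb constant, kcal*Angstrom/(mol*e^2)

def effective_dielectric(r, u_screened, q1=1.0, q2=-1.0):
    """Effective relative dielectric constant defined through
    u_screened = K*q1*q2 / (eps_eff * r)."""
    return K * q1 * q2 / (r * u_screened)

# Synthetic screened interaction energies constructed so that
# eps_eff = 4*r, mimicking the Pickersgill-type linear relation
r = np.linspace(4.0, 20.0, 40)          # charge-charge distances (Angstrom)
u = K * (1.0 * -1.0) / ((4.0 * r) * r)  # screened pair energies

eps_eff = effective_dielectric(r, u)
slope, intercept = np.polyfit(r, eps_eff, 1)
```

In a real analysis, `u` would come from the time-averaged all-atom potential or the continuum calculation; the short-range deviation the text describes would then appear as curvature of `eps_eff` below the fitted line at small `r`.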
Bancalari, Elena; Bernini, Valentina; Bottari, Benedetta; Neviani, Erasmo; Gatti, Monica
2016-01-01
Impedance microbiology is a method that enables tracing microbial growth by measuring the change in electrical conductivity. Different systems able to perform this measurement are commercially available and are commonly used for food control analysis by measuring a single point of the impedance curve, defined as the "time of detection." With this work we aimed to find an objective way to interpret the metabolic significance of impedance curves and propose it as a valid approach to evaluate the potential acidifying performance of starter lactic acid bacteria to be employed in milk transformation. To do this, we first investigated the possibility of using the Gompertz equation to describe the data coming from the impedance curve obtained by means of the BacTrac 4300®. Lag time (λ), maximum specific M% rate (μmax), and maximum value of M% (Yend) have been calculated and, given the similarity of the fitted impedance curve to the bacterial growth curve, their meaning has been interpreted. The potential acidifying performance of eighty strains belonging to the Lactobacillus helveticus, Lactobacillus delbrueckii subsp. bulgaricus, Lactococcus lactis, and Streptococcus thermophilus species was evaluated using the kinetic parameters obtained from the Excel add-in DMFit version 2.1. The novelty and importance of our findings, obtained by means of the BacTrac 4300®, is that they can also be applied to data obtained from other devices. Moreover, the meanings of λ, μmax, and Yend that we have extrapolated from the modified Gompertz equation and discussed for lactic acid bacteria in milk can also be exploited in other food environments or for other bacteria, provided that they yield a curve that can be properly fitted with the Gompertz equation.
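The three parameters named above fall directly out of a nonlinear fit of the modified Gompertz equation (Zwietering parameterization) to an impedance curve. The authors used the DMFit add-in; the scipy fit below is an equivalent illustrative alternative, and the synthetic M% curve and parameter values are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, y_end, mu_max, lam):
    """Modified Gompertz equation (Zwietering parameterization):
    y_end = asymptotic M% value (Yend), mu_max = maximum specific M% rate,
    lam = lag time before the impedance signal starts to rise."""
    return y_end * np.exp(-np.exp(mu_max * np.e / y_end * (lam - t) + 1.0))

# Synthetic impedance (M%) curve from assumed kinetic parameters
t = np.linspace(0.0, 24.0, 60)                 # incubation time, hours
m_pct = modified_gompertz(t, 8.0, 1.2, 3.0)    # Yend=8 M%, mu_max=1.2, lam=3 h

# Recover (Yend, mu_max, lam) from the curve
popt, _ = curve_fit(modified_gompertz, t, m_pct, p0=[7.0, 1.0, 2.0])
```

Screening strains then reduces to comparing the fitted triplets: a short λ and high μmax flag a fast acidifier, while Yend bounds the total metabolic signal.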
Long-term predictive capability of erosion models
NASA Technical Reports Server (NTRS)
Veerabhadra, P.; Buckley, D. H.
1983-01-01
A brief overview of long-term cavitation and liquid impingement erosion and modeling methods proposed by different investigators, including the curve-fit approach is presented. A table was prepared to highlight the number of variables necessary for each model in order to compute the erosion-versus-time curves. A power law relation based on the average erosion rate is suggested which may solve several modeling problems.