Sample records for calibration curve nonlinearity

  1. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated nonlinear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters, and the fitting procedure weights the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the chi-squared matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg of freeze-dried UNO/sub 3/ can have an accuracy of 0.2% in 1000 sec.
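The idea of treating the standard masses themselves as fit parameters, each weighted by its own uncertainty, can be sketched as a small errors-in-variables problem. This is a minimal illustration with an assumed quadratic calibration form and made-up numbers; the paper's actual response function and the VA02A minimizer are replaced here by scipy.optimize.least_squares.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical quadratic calibration f(m) = a*m + b*m^2 (the paper's actual
# functional form is not given in the abstract).
a_true, b_true = 100.0, -5.0
m_true = np.linspace(0.1, 1.0, 8)        # standard masses, mg
sigma_m = 0.002 * m_true                 # 0.2% gravimetric mass uncertainty
sigma_y = 0.5                            # system (counting) error on the response
m_obs = m_true + rng.normal(0.0, sigma_m)
y_obs = a_true * m_true + b_true * m_true**2 + rng.normal(0.0, sigma_y, m_true.size)

def residuals(p):
    a, b = p[:2]
    m_fit = p[2:]                        # standard masses treated as fit parameters
    r_sys = (y_obs - (a * m_fit + b * m_fit**2)) / sigma_y   # system-error weighted
    r_mass = (m_fit - m_obs) / sigma_m                       # mass-error weighted
    return np.concatenate([r_sys, r_mass])

p0 = np.concatenate([[90.0, 0.0], m_obs])
fit = least_squares(residuals, p0)
a_hat, b_hat = fit.x[:2]
```

Because the mass residuals carry their own weights, the fitted curve consistently reflects that the standards' masses are measured quantities with known error.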

  2. Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna Roberts

    Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to a greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors such as exposure time, temperature, and amplifier choice affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the nonlinear response. This often leaves unacceptable levels of residual non-uniformity. Calibration techniques often have to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal of this dissertation is to determine and compare nonlinear non-uniformity correction algorithms. Ideally the results will provide better NUC performance, resulting in less residual non-uniformity, as well as reduce the need for recalibration. This dissertation will consider new approaches to nonlinear NUC such as higher-order polynomials and exponentials. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms will be compared with common linear non-uniformity correction algorithms. Performance will be compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance will be improved by identifying and replacing bad pixels prior to correction. Two bad pixel identification and replacement techniques will be investigated and compared. Performance will be presented in the form of simulation results as well as before-and-after images taken with short-wave infrared cameras. The initial results show, using a third order...

  3. Nonlinear Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.; Badavi, Forooz F.

    1989-01-01

    Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, interactive curve-fitting routine based on description of quadratic expansion of chi-squared statistic. Utilizes nonlinear optimization algorithm calculating best statistically weighted values of parameters of fitting function and chi-squared minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.

  4. Apollo 16/AS-511/LM-11 operational calibration curves. Volume 1: Calibration curves for command service module CSM 113

    NASA Technical Reports Server (NTRS)

    Demoss, J. F. (Compiler)

    1971-01-01

    Calibration curves for the Apollo 16 command service module pulse code modulation downlink and onboard display are presented. Subjects discussed are: (1) measurement calibration curve format, (2) measurement identification, (3) multi-mode calibration data summary, (4) pulse code modulation bilevel events listing, and (5) calibration curves for instrumentation downlink and meter link.

  5. A Robust Bayesian Random Effects Model for Nonlinear Calibration Problems

    PubMed Central

    Fong, Y.; Wakefield, J.; De Rosa, S.; Frahm, N.

    2013-01-01

    In the context of a bioassay or an immunoassay, calibration means fitting a curve, usually nonlinear, through the observations collected on a set of samples containing known concentrations of a target substance, and then using the fitted curve and observations collected on samples of interest to predict the concentrations of the target substance in these samples. Recent technological advances have greatly improved our ability to quantify minute amounts of substance from a tiny volume of biological sample. This has in turn led to a need to improve statistical methods for calibration. In this paper, we focus on developing calibration methods robust to dependent outliers. We introduce a novel normal mixture model with dependent error terms to model the experimental noise. In addition, we propose a re-parameterization of the five parameter logistic nonlinear regression model that allows us to better incorporate prior information. We examine the performance of our methods with simulation studies and show that they lead to a substantial increase in performance measured in terms of mean squared error of estimation and a measure of the average prediction accuracy. A real data example from the HIV Vaccine Trials Network Laboratory is used to illustrate the methods. PMID:22551415
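The five-parameter logistic (5PL) model mentioned above is a standard immunoassay calibration curve; a minimal sketch of the forward curve and its inverse (the inverse is what turns an observed response into a predicted concentration) follows. The parameter values are illustrative and this is the plain 5PL, not the paper's re-parameterization.

```python
import numpy as np

def fpl(x, a, b, c, d, g):
    """Five-parameter logistic: a = response as x -> 0, d = response as x -> inf,
    c = concentration scale, b = slope, g = asymmetry."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

def fpl_inverse(y, a, b, c, d, g):
    """Invert the fitted curve to predict concentration from a response."""
    return c * (((a - d) / (y - d)) ** (1.0 / g) - 1.0) ** (1.0 / b)

# Round-trip check with illustrative parameter values (not from the paper).
params = dict(a=0.05, b=1.2, c=150.0, d=3.0, g=0.8)
x = 75.0
y = fpl(x, **params)
x_back = fpl_inverse(y, **params)
```

In practice the five parameters are fit to the standards and `fpl_inverse` is applied to the responses of unknown samples.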

  6. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO/sub 3/ standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the chi-squared matrix or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO/sub 3/ can have an accuracy of 0.2% in 1000 s.

  7. Combined influence of CT random noise and HU-RSP calibration curve nonlinearities on proton range systematic errors

    NASA Astrophysics Data System (ADS)

    Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.

    2017-11-01

    Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and a better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly correlated stochastic noise are more insidious, and less attention is paid to them, partly due to the common belief that they contribute only to proton range uncertainties and not to systematic errors, thanks to averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and shown not to be negligible compared to the 3.5% uncertainty reference value used for safety-margin design. Specifically, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and of the continuous slowing down approximation (CSDA) range have been derived analytically for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
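The mechanism by which zero-mean noise produces a systematic RSP error at an angular (kink) point of the calibration curve can be demonstrated directly: averaging a piecewise-linear function over symmetric noise is biased wherever the slope changes. The curve below is a toy two-segment HU-to-RSP map with illustrative slopes, not a stoichiometric calibration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy piecewise-linear HU-to-RSP curve with an angular point (kink) at HU = 0;
# slopes and the kink location are illustrative only.
def hu_to_rsp(hu):
    return np.where(hu < 0.0, 1.0 + 0.001 * hu, 1.0 + 0.0005 * hu)

hu_true = 0.0                              # a voxel sitting exactly on the kink
noise = rng.normal(0.0, 20.0, 200000)      # zero-mean stochastic CT noise, sigma = 20 HU
rsp_mean = hu_to_rsp(hu_true + noise).mean()
bias = rsp_mean - hu_to_rsp(hu_true)       # nonzero: averaging across the kink is biased
```

For a kink with slope change (s2 - s1) the expected bias is (s2 - s1) * sigma / sqrt(2*pi), here about -0.004 in RSP, even though the noise itself has zero mean.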

  8. A Nonlinear Calibration Algorithm Based on Harmonic Decomposition for Two-Axis Fluxgate Sensors

    PubMed Central

    Liu, Shibin

    2018-01-01

    Nonlinearity is a prominent limitation to the calibration performance of two-axis fluxgate sensors. In this paper, a novel nonlinear calibration algorithm taking into account the nonlinearity of errors is proposed. In order to establish the nonlinear calibration model, the combined effect of all time-invariant errors is analyzed in detail, and then a harmonic decomposition method is utilized to estimate the compensation coefficients. Meanwhile, the proposed nonlinear calibration algorithm is validated and compared with a classical calibration algorithm by experiments. The experimental results show that, after the nonlinear calibration, the maximum deviation of magnetic field magnitude is decreased from 1302 nT to 30 nT, smaller than the 81 nT achieved by the classical calibration. Furthermore, for the two-axis fluxgate sensor used as a magnetic compass, the maximum heading error is corrected from 1.86° to 0.07°, approximately 11% of the 0.62° obtained after the classical calibration. The results suggest an effective way to improve the calibration performance of two-axis fluxgate sensors. PMID:29789448
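Harmonic decomposition of a compass heading error can be sketched as a least-squares fit of low-order sin/cos terms over a full rotation; the error model below (offset plus first and second harmonics) is illustrative, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated heading error of a two-axis fluxgate compass: time-invariant
# offset, scale-factor, and misalignment errors show up as low-order
# harmonics of the true heading (illustrative model).
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
err_true = 0.5 + 1.2 * np.sin(theta) - 0.8 * np.cos(2.0 * theta)
err_meas = err_true + rng.normal(0.0, 0.05, theta.size)

# Harmonic decomposition: least-squares fit of sin/cos terms up to order 2.
A = np.column_stack([np.ones_like(theta),
                     np.sin(theta), np.cos(theta),
                     np.sin(2.0 * theta), np.cos(2.0 * theta)])
coef, *_ = np.linalg.lstsq(A, err_meas, rcond=None)
err_corrected = err_meas - A @ coef        # compensate using estimated coefficients
```

The recovered coefficients play the role of the compensation coefficients; subtracting the fitted harmonics leaves only the measurement noise.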

  9. Nonlinear normal modes modal interactions and isolated resonance curves

    DOE PAGES

    Kuether, Robert J.; Renson, L.; Detroux, T.; ...

    2015-05-21

    The objective of the present study is to explore the connection between the nonlinear normal modes of an undamped and unforced nonlinear system and the isolated resonance curves that may appear in the damped response of the forced system. To this end, an energy balance technique is used to predict the amplitude of the harmonic forcing that is necessary to excite a specific nonlinear normal mode. A cantilever beam with a nonlinear spring at its tip serves to illustrate the developments. Furthermore, the practical implications of isolated resonance curves are also discussed by computing the beam response to sine sweep excitations of increasing amplitudes.

  10. Nonlinear bulging factor based on R-curve data

    NASA Technical Reports Server (NTRS)

    Jeong, David Y.; Tong, Pin

    1994-01-01

    In this paper, a nonlinear bulging factor is derived using a strain energy approach combined with dimensional analysis. The functional form of the bulging factor contains an empirical constant that is determined using R-curve data from unstiffened flat and curved panel tests. The determination of this empirical constant is based on the assumption that the R-curve is the same for both flat and curved panels.

  11. Refinement of moisture calibration curves for nuclear gage : interim report no. 1.

    DOT National Transportation Integrated Search

    1972-01-01

    This study was initiated to determine the correct moisture calibration curves for different nuclear gages. It was found that the Troxler Model 227 had a linear response between count ratio and moisture content. Also, the two calibration curves for th...

  12. A new form of the calibration curve in radiochromic dosimetry. Properties and results.

    PubMed

    Tamponi, Matteo; Bona, Rossana; Poggiu, Angela; Marini, Piergiorgio

    2016-07-01

    This work describes a new form of the calibration curve for radiochromic dosimetry that depends on one fit parameter. Some results are reported to show that the new curve performs as well as those previously used and, more importantly, significantly reduces the dependence on the lot of films, the film orientation on the scanner, and the time after exposure. The form of the response curve makes use of the net optical densities ratio against the dose and has been studied by means of the Beer-Lambert law and a simple modeling of the film. The new calibration curve has been applied to EBT3 films exposed at 6 and 15 MV energy beams of linear accelerators and read out in transmission mode by means of a flatbed color scanner. Its performance has been compared to that of two established forms of the calibration curve, which use the optical density and the net optical density against the dose. Four series of measurements with four lots of EBT3 films were used to evaluate the precision, accuracy, and dependence on the time after exposure, orientation on the scanner and lot of films. The new calibration curve is roughly subject to the same dose uncertainty, about 2% (1 standard deviation), and has the same accuracy, about 1.5% (dose values between 50 and 450 cGy), as the other calibration curves when films of the same lot are used. Moreover, the new calibration curve, albeit obtained from only one lot of film, shows a good agreement with experimental data from all other lots of EBT3 films used, with an accuracy of about 2% and a relative dose precision of 2.4% (1 standard deviation). The agreement also holds for changes of the film orientation and of the time after exposure. The dose accuracy of this new form of the calibration curve is always equal to or better than those obtained from the two types of curves previously used. The use of the net optical densities ratio considerably reduces the dependence on the lot of films, the landscape/portrait orientation, and the time after exposure.
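A sketch of the quantities involved: net optical density from transmission-mode scanner pixel values, and a one-parameter dose-response fit. The functional form dose = k * R / (1 - R) below is a hypothetical stand-in chosen for illustration; the abstract states only that the curve has a single fit parameter, not its form.

```python
import numpy as np
from scipy.optimize import curve_fit

def net_od(pv_unexposed, pv_exposed):
    """Net optical density from transmission-mode scanner pixel values."""
    return np.log10(pv_unexposed / pv_exposed)

# Hypothetical one-parameter response curve (NOT the paper's form, which the
# abstract does not reproduce): dose = k * R / (1 - R), R a net-OD ratio.
def dose_from_ratio(R, k):
    return k * R / (1.0 - R)

# Synthetic calibration points consistent with the assumed form.
k_true = 400.0                                        # cGy, illustrative
R = np.array([0.11, 0.20, 0.33, 0.43, 0.50])
dose = dose_from_ratio(R, k_true) * (
    1.0 + np.random.default_rng(3).normal(0.0, 0.01, R.size))

(k_hat,), _ = curve_fit(dose_from_ratio, R, dose, p0=[300.0])
```

With a single parameter the whole calibration reduces to estimating k, which is part of what makes such a curve easy to transfer across film lots.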

  13. Wavelength selection-based nonlinear calibration for transcutaneous blood glucose sensing using Raman spectroscopy

    PubMed Central

    Dingari, Narahara Chari; Barman, Ishan; Kang, Jeon Woong; Kong, Chae-Ryon; Dasari, Ramachandra R.; Feld, Michael S.

    2011-01-01

    While Raman spectroscopy provides a powerful tool for noninvasive and real time diagnostics of biological samples, its translation to the clinical setting has been impeded by the lack of robustness of spectroscopic calibration models and the size and cumbersome nature of conventional laboratory Raman systems. Linear multivariate calibration models employing full spectrum analysis are often misled by spurious correlations, such as system drift and covariations among constituents. In addition, such calibration schemes are prone to overfitting, especially in the presence of external interferences that may create nonlinearities in the spectra-concentration relationship. To address both of these issues, we incorporate residue error plot-based wavelength selection and nonlinear support vector regression (SVR). Wavelength selection is used to eliminate uninformative regions of the spectrum, while SVR is used to model the curved effects such as those created by tissue turbidity and temperature fluctuations. Using glucose detection in tissue phantoms as a representative example, we show that even with a substantial reduction in the number of wavelengths analyzed, SVR leads to calibration models of prediction accuracy equivalent to linear full spectrum analysis. Further, with clinical datasets obtained from human subject studies, we also demonstrate the prospective applicability of the selected wavelength subsets without sacrificing prediction accuracy, which has extensive implications for calibration maintenance and transfer. Additionally, such wavelength selection could substantially reduce the collection time of serial Raman acquisition systems. Given the reduced footprint of serial Raman systems in relation to conventional dispersive Raman spectrometers, we anticipate that the incorporation of wavelength selection in such hardware designs will enhance the possibility of miniaturized clinical systems for disease diagnosis in the near future. PMID:21895336
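The combination of wavelength selection with a nonlinear regressor can be sketched on synthetic spectra. Correlation ranking stands in for the paper's residue-error-plot selection, and a numpy RBF kernel ridge regression stands in for SVR (both fit smooth nonlinear spectra-concentration maps); all data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "spectra": 100 samples x 50 wavelengths; only 10 informative
# channels carry a (mildly nonlinear) glucose signal, the rest are noise.
n, p, informative = 100, 50, 10
conc = rng.uniform(2.0, 20.0, n)                       # mM, illustrative
X = rng.normal(0.0, 1.0, (n, p))
X[:, :informative] += np.outer(conc + 0.02 * conc**2, np.ones(informative))

# Wavelength selection (stand-in for the residue-plot method): keep the
# channels most correlated with concentration.
corr = np.abs([np.corrcoef(X[:, j], conc)[0, 1] for j in range(p)])
keep = np.argsort(corr)[-informative:]

# RBF kernel ridge regression as a simple stand-in for SVR.
def rbf(A, B, gamma=0.05):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

Xs = X[:, keep]
K = rbf(Xs, Xs)
alpha = np.linalg.solve(K + 1e-2 * np.eye(n), conc)    # ridge-regularized fit
pred = K @ alpha                                       # in-sample predictions
rmse = float(np.sqrt(np.mean((pred - conc) ** 2)))
```

Discarding the uninformative channels before the nonlinear fit is exactly the point of the paper: the reduced model loses no accuracy because the dropped wavelengths carried no signal.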

  14. A new form of the calibration curve in radiochromic dosimetry. Properties and results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamponi, Matteo, E-mail: mtamponi@aslsassari.it; Bona, Rossana; Poggiu, Angela; Marini, Piergiorgio

    Purpose: This work describes a new form of the calibration curve for radiochromic dosimetry that depends on one fit parameter. Some results are reported to show that the new curve performs as well as those previously used and, more importantly, significantly reduces the dependence on the lot of films, the film orientation on the scanner, and the time after exposure. Methods: The form of the response curve makes use of the net optical densities ratio against the dose and has been studied by means of the Beer–Lambert law and a simple modeling of the film. The new calibration curve has been applied to EBT3 films exposed at 6 and 15 MV energy beams of linear accelerators and read-out in transmission mode by means of a flatbed color scanner. Its performance has been compared to that of two established forms of the calibration curve, which use the optical density and the net optical density against the dose. Four series of measurements with four lots of EBT3 films were used to evaluate the precision, accuracy, and dependence on the time after exposure, orientation on the scanner and lot of films. Results: The new calibration curve is roughly subject to the same dose uncertainty, about 2% (1 standard deviation), and has the same accuracy, about 1.5% (dose values between 50 and 450 cGy), as the other calibration curves when films of the same lot are used. Moreover, the new calibration curve, albeit obtained from only one lot of film, shows a good agreement with experimental data from all other lots of EBT3 films used, with an accuracy of about 2% and a relative dose precision of 2.4% (1 standard deviation). The agreement also holds for changes of the film orientation and of the time after exposure. Conclusions: The dose accuracy of this new form of the calibration curve is always equal to or better than those obtained from the two types of curves previously used. The use of the net optical densities ratio considerably reduces the dependence on the lot of films, the landscape/portrait orientation, and the time after exposure.

  15. Nonlinear Growth Curves in Developmental Research

    PubMed Central

    Grimm, Kevin J.; Ram, Nilam; Hamagami, Fumiaki

    2011-01-01

    Developmentalists are often interested in understanding change processes and growth models are the most common analytic tool for examining such processes. Nonlinear growth curves are especially valuable to developmentalists because the defining characteristics of the growth process such as initial levels, rates of change during growth spurts, and asymptotic levels can be estimated. A variety of growth models are described beginning with the linear growth model and moving to nonlinear models of varying complexity. A detailed discussion of nonlinear models is provided, highlighting the added insights into complex developmental processes associated with their use. A collection of growth models are fit to repeated measures of height from participants of the Berkeley Growth and Guidance Studies from early childhood through adulthood. PMID:21824131
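A minimal example of the kind of nonlinear growth model discussed: a logistic curve whose parameters map directly onto the interpretable quantities named above (asymptotic level, rate of change, and the timing of the growth spurt). Data are simulated, not the Berkeley studies' measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Logistic growth: asymptote L (final level), rate k, inflection age t0 --
# the kind of interpretable parameters nonlinear growth models provide.
def logistic(t, L, k, t0):
    return L / (1.0 + np.exp(-k * (t - t0)))

age = np.linspace(1.0, 20.0, 20)                       # years, illustrative
height = logistic(age, 170.0, 0.35, 9.0) \
         + np.random.default_rng(5).normal(0.0, 1.0, age.size)

(L, k, t0), _ = curve_fit(logistic, age, height, p0=[150.0, 0.5, 10.0])
```

A linear growth model would fit a single slope to these data; the nonlinear fit instead recovers a final height near 170, a growth-spurt age near 9, and a rate parameter, which is exactly the added interpretive value the paper highlights.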

  16. Residual mode correction in calibrating nonlinear damper for vibration control of flexible structures

    NASA Astrophysics Data System (ADS)

    Sun, Limin; Chen, Lin

    2017-10-01

    Residual mode correction is found crucial in calibrating linear resonant absorbers for flexible structures. The classic modal representation augmented with stiffness and inertia correction terms accounting for non-resonant modes improves the calibration accuracy and meanwhile avoids complex modal analysis of the full system. This paper explores the augmented modal representation in calibrating control devices with nonlinearity, by studying a taut cable attached with a general viscous damper and its Equivalent Dynamic Systems (EDSs), i.e. the augmented modal representations connected to the same damper. As nonlinearity is concerned, Frequency Response Functions (FRFs) of the EDSs are investigated in detail for parameter calibration, using the harmonic balance method in combination with numerical continuation. The FRFs of the EDSs and corresponding calibration results are then compared with those of the full system documented in the literature for varied structural modes, damper locations and nonlinearity. General agreement is found and in particular the EDS with both stiffness and inertia corrections (quasi-dynamic correction) performs best among available approximate methods. This indicates that the augmented modal representation although derived from linear cases is applicable to a relatively wide range of damper nonlinearity. Calibration of nonlinear devices by this means still requires numerical analysis while the efficiency is largely improved owing to the system order reduction.

  17. Nonlinear Growth Curves in Developmental Research

    ERIC Educational Resources Information Center

    Grimm, Kevin J.; Ram, Nilam; Hamagami, Fumiaki

    2011-01-01

    Developmentalists are often interested in understanding change processes, and growth models are the most common analytic tool for examining such processes. Nonlinear growth curves are especially valuable to developmentalists because the defining characteristics of the growth process such as initial levels, rates of change during growth spurts, and…

  18. NLINEAR - NONLINEAR CURVE FITTING PROGRAM

    NASA Technical Reports Server (NTRS)

    Everhart, J. L.

    1994-01-01

    A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived and solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60 bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
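The procedure NLINEAR describes (expand chi-square about the current parameter estimates, solve the resulting simultaneous linear equations by matrix algebra, and iterate from user-supplied starting values) corresponds to a Gauss-Newton iteration. A compact sketch, with an assumed exponential-decay fitting function in place of a user-specified one:

```python
import numpy as np

# Gauss-Newton sketch of the chi-square quadratic-expansion approach:
# linearize the fitting function about the current parameters, solve the
# normal equations, and iterate from the initial estimates.
def gauss_newton(f, jac, x, y, sigma, p0, iters=20):
    p = np.asarray(p0, float)
    for _ in range(iters):
        r = (y - f(x, p)) / sigma                 # statistically weighted residuals
        J = jac(x, p) / sigma[:, None]            # weighted Jacobian of f
        dp = np.linalg.solve(J.T @ J, J.T @ r)    # linearized chi-square equations
        p = p + dp
    return p

# Example fitting function: y = a * exp(-b x) (illustrative).
def f(x, p):
    return p[0] * np.exp(-p[1] * x)

def jac(x, p):
    return np.column_stack([np.exp(-p[1] * x), -p[0] * x * np.exp(-p[1] * x)])

x = np.linspace(0.0, 4.0, 30)
sigma = np.full_like(x, 0.05)
y = f(x, [2.0, 0.7]) + np.random.default_rng(6).normal(0.0, 0.05, x.size)
p_hat = gauss_newton(f, jac, x, y, sigma, p0=[1.5, 0.5])
```

As the abstract notes, convergence hinges on meaningful initial estimates; a poor p0 can send the undamped iteration off to divergence.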

  19. The Value of Hydrograph Partitioning Curves for Calibrating Hydrological Models in Glacierized Basins

    NASA Astrophysics Data System (ADS)

    He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno

    2018-03-01

    This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas are used to identify the starting and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparably to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where other calibration data sets than discharge are often not available or very costly to obtain.

  20. Influence of Ultrasonic Nonlinear Propagation on Hydrophone Calibration Using Two-Transducer Reciprocity Method

    NASA Astrophysics Data System (ADS)

    Yoshioka, Masahiro; Sato, Sojun; Kikuchi, Tsuneo; Matsuda, Yoichi

    2006-05-01

    In this study, the influence of ultrasonic nonlinear propagation on hydrophone calibration by the two-transducer reciprocity method is investigated quantitatively using the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation. It is proposed that the correction for the diffraction and attenuation of ultrasonic waves used in two-transducer reciprocity calibration can be derived using the KZK equation to remove the influence of nonlinear propagation. The validity of the correction is confirmed by comparing the sensitivities calibrated by the two-transducer reciprocity method and laser interferometry.

  1. Curved Displacement Transfer Functions for Geometric Nonlinear Large Deformation Structure Shape Predictions

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Fleischer, Van Tran; Lung, Shun-Fat

    2017-01-01

    For shape predictions of structures under large geometrically nonlinear deformations, Curved Displacement Transfer Functions were formulated based on a curved displacement, traced by a material point from the undeformed position to the deformed position. The embedded beam (depth-wise cross section of a structure along a surface strain-sensing line) was discretized into multiple small domains, with domain junctures matching the strain-sensing stations. Thus, the surface strain distribution could be described with a piecewise linear or a piecewise nonlinear function. The discretization approach enabled piecewise integrations of the embedded-beam curvature equations to yield the Curved Displacement Transfer Functions, expressed in terms of embedded beam geometrical parameters and surface strains. By entering the surface strain data into the Displacement Transfer Functions, deflections along each embedded beam can be calculated at multiple points for mapping the overall structural deformed shapes. Finite-element linear and nonlinear analyses of a tapered cantilever tubular beam were performed to generate linear and nonlinear surface strains and the associated deflections to be used for validation. The shape prediction accuracies were then determined by comparing the theoretical deflections with the finite-element-generated deflections. The results show that the newly developed Curved Displacement Transfer Functions are very accurate for shape predictions of structures under large geometrically nonlinear deformations.
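The piecewise-integration idea (surface strain to curvature, curvature to slope, slope to deflection) can be sketched for the simplest linear small-deformation case of a uniform tip-loaded cantilever, where curvature equals surface strain divided by the half-depth; the numbers are illustrative, and the paper's curved (large-deformation) transfer functions are not reproduced here.

```python
import numpy as np

# Beam-deflection sketch: for small slopes, curvature = surface strain / c
# (c = half-depth), and deflection follows from integrating curvature twice
# along the strain-sensing line.
L, c = 1.0, 0.01                   # beam length and half-depth, m (illustrative)
x = np.linspace(0.0, L, 101)       # strain-sensing stations
kappa_true = 0.002 * (L - x)       # curvature of a tip-loaded uniform cantilever
strain = kappa_true * c            # surface strain a sensor would report

kappa = strain / c                 # recover curvature from measured strain
# Piecewise (trapezoidal) integration: curvature -> slope -> deflection.
slope = np.concatenate([[0.0],
        np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))])
defl = np.concatenate([[0.0],
        np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))])
tip_deflection = defl[-1]          # analytic value is k * L**3 / 3
```

The same station-by-station integration is what the transfer functions carry out in closed form, domain by domain, for the geometrically nonlinear case.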

  2. Nonlinear dynamical modes of climate variability: from curves to manifolds

    NASA Astrophysics Data System (ADS)

    Gavrilov, Andrey; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander

    2016-04-01

    The necessity of efficient dimensionality reduction methods capturing dynamical properties of the system from observed data is evident. A recent study shows that nonlinear dynamical mode (NDM) expansion is able to solve this problem and provide adequate phase variables in climate data analysis [1]. A single NDM is a logical extension of a linear spatio-temporal structure (like an empirical orthogonal function pattern): it is constructed as a nonlinear transformation of a hidden scalar time series to the space of observed variables, i.e. a projection of the observed dataset onto a nonlinear curve. Both the hidden time series and the parameters of the curve are learned simultaneously using a Bayesian approach. The only prior information about the hidden signal is the assumption of its smoothness. The optimal nonlinearity degree and smoothness are found using the Bayesian evidence technique. In this work we extend the approach further and look for vector hidden signals instead of scalar ones, with the same smoothness restriction. As a result we resolve multidimensional manifolds instead of sums of curves. The dimension of the hidden manifold is also optimized using Bayesian evidence. The efficiency of the extension is demonstrated on model examples. Results of application to climate data are demonstrated and discussed. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510

  3. Development of theoretical oxygen saturation calibration curve based on optical density ratio and optical simulation approach

    NASA Astrophysics Data System (ADS)

    Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia

    2017-09-01

    The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either empirically, using animals as experimental subjects, or is derived from mathematical equations. However, determining the calibration curve using animals is time-consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to reproduce real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratios (ODR) associated with SaO2 during systole and diastole is used as the basis of obtaining the theoretical calibration curve. The optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for extended calibration curve studies using other wavelength pairs.
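The optical-density-ratio bookkeeping can be sketched as follows. The linear SpO2 map below is a commonly quoted empirical approximation used in pulse-oximetry-style calibration, included only to show how an ODR feeds a calibration curve; it is not the curve derived in the paper.

```python
import numpy as np

def od_ratio(i_sys_red, i_dia_red, i_sys_ir, i_dia_ir):
    """Optical density ratio R from systolic/diastolic transmitted intensities
    at two wavelengths (transmission drops at systole, so i_dia > i_sys)."""
    od_red = np.log10(i_dia_red / i_sys_red)
    od_ir = np.log10(i_dia_ir / i_sys_ir)
    return od_red / od_ir

def spo2_from_r(R):
    """Common empirical linear calibration (illustrative, not the paper's)."""
    return 110.0 - 25.0 * R

# Example: stronger red pulsatile attenuation gives R > 1, hence lower SaO2.
R = od_ratio(48.0, 100.0, 60.0, 100.0)
sat = spo2_from_r(R)
```

The simulation in the paper effectively tabulates R against known SaO2 values to obtain such a mapping theoretically rather than empirically.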

  4. Effects of experimental design on calibration curve precision in routine analysis

    PubMed Central

    Pimentel, Maria Fernanda; Neto, Benício de Barros; Saldanha, Teresa Cristina B.

    1998-01-01

    A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence interval plots for a calibration curve and provides information about the number of standard solutions, concentration levels and suitable concentration ranges needed to achieve an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data. PMID:18924816
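The effect such a program quantifies, namely that where the standards are placed changes the width of the calibration confidence band, follows directly from the (X'X)^-1 matrix of a straight-line fit. A minimal sketch, assuming unit measurement noise and a hypothetical 0-10 concentration range (not the program described in the abstract):

```python
import numpy as np

def prediction_variance(x_design, x_eval):
    """Variance (unit noise) of the fitted line y = b0 + b1*x at x_eval,
    determined entirely by the design of the standards via (X'X)^-1."""
    X = np.column_stack([np.ones_like(x_design), x_design])
    xtx_inv = np.linalg.inv(X.T @ X)
    x0 = np.array([1.0, x_eval])
    return float(x0 @ xtx_inv @ x0)

# Two 6-standard designs over a hypothetical 0-10 concentration range
evenly = np.linspace(0.0, 10.0, 6)        # equally spaced levels
d_opt = np.array([0.0] * 3 + [10.0] * 3)  # D-optimal for a straight line: extremes only
```

For a straight-line model the D-optimal design concentrates standards at the extremes, which narrows the confidence band near the ends of the range while matching the evenly spaced design at the center.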

  5. Geometrodynamics: the nonlinear dynamics of curved spacetime

    NASA Astrophysics Data System (ADS)

    Scheel, M. A.; Thorne, K. S.

    2014-04-01

    We review discoveries in the nonlinear dynamics of curved spacetime, largely made possible by numerical solutions of Einstein's equations. We discuss critical phenomena and self-similarity in gravitational collapse, the behavior of spacetime curvature near singularities, the instability of black strings in five spacetime dimensions, and the collision of four-dimensional black holes. We also discuss the prospects for further discoveries in geometrodynamics via observations of gravitational waves.

  6. Bayesian inference of Calibration curves: application to archaeomagnetism

    NASA Astrophysics Data System (ADS)

    Lanos, P.

    2003-04-01

    The range of errors that occur at different stages of the archaeomagnetic calibration process are modelled using a Bayesian hierarchical model. The archaeomagnetic data obtained from archaeological structures such as hearths, kilns or sets of bricks and tiles, exhibit considerable experimental errors and are typically more or less well dated by archaeological context, history or chronometric methods (14C, TL, dendrochronology, etc.). They can also be associated with stratigraphic observations which provide prior relative chronological information. The modelling we describe in this paper allows all these observations, on materials from a given period, to be linked together, and the use of penalized maximum likelihood for smoothing univariate, spherical or three-dimensional time series data allows representation of the secular variation of the geomagnetic field over time. The smooth curve we obtain (which takes the form of a penalized natural cubic spline) provides an adaptation to the effects of variability in the density of reference points over time. Since our model takes account of all the known errors in the archaeomagnetic calibration process, we are able to obtain a functional highest-posterior-density envelope on the new curve. With this new posterior estimate of the curve available to us, the Bayesian statistical framework then allows us to estimate the calendar dates of undated archaeological features (such as kilns) based on one, two or three geomagnetic parameters (inclination, declination and/or intensity). Date estimates are presented in much the same way as those that arise from radiocarbon dating. In order to illustrate the model and inference methods used, we will present results based on German archaeomagnetic data recently published by a German team.

  7. Analysis of variation in calibration curves for Kodak XV radiographic film using model-based parameters.

    PubMed

    Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L

    2010-08-05

    Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1 σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes and increases with depth above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
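The single-target single-hit model named above can be fitted with a standard nonlinear least-squares routine. A hedged sketch: the saturating-exponential form OD = background + saturation·(1 − e^(−slope·dose)) and the parameter values below are assumptions for illustration, not the paper's fitted values.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_hit(dose, background, saturation, slope):
    """Single-target single-hit film response: net OD rises toward saturation."""
    return background + saturation * (1.0 - np.exp(-slope * dose))

# Synthetic 8-point calibration (doses in cGy); parameter values are hypothetical
doses = np.array([16, 24, 32, 48, 64, 80, 104, 128], dtype=float)
true_params = (0.05, 2.5, 0.006)
od = single_hit(doses, *true_params)

# Recover the three model parameters from the calibration points
fitted, _ = curve_fit(single_hit, doses, od, p0=(0.0, 2.0, 0.01))
```

With the three parameters pinned down this way, even one to three dose points can constrain a refit, which is the workload reduction the abstract reports.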

  8. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film.

    PubMed

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film, which has low energy dependence, together with a step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were -32.336 and -33.746, respectively. The simplified method can obtain calibration curves in a much shorter time than the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.

  9. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film

    PubMed Central

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film, which has low energy dependence, together with a step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves in a much shorter time than the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range. PMID:28144120

  10. LDV measurement of small nonlinearities in flat and curved membranes. A model for eardrum nonlinear acoustic behaviour

    NASA Astrophysics Data System (ADS)

    Kilian, Gladiné; Pieter, Muyshondt; Joris, Dirckx

    2016-06-01

    Laser Doppler vibrometry is an intrinsically highly linear measurement technique, which makes it a great tool for measuring extremely small nonlinearities in the vibration response of a system. Although the measurement technique is highly linear, other components in the experimental setup may introduce nonlinearities. An important source of artificially introduced nonlinearities is the speaker that generates the stimulus. In this work, two correction methods to remove the effects of stimulus nonlinearity are investigated. Both correction methods were found to give similar results but have different pros and cons. The aim of this work is to investigate the importance of the conical shape of the eardrum as a source of nonlinearity in hearing. We present measurements on flat and indented membranes. The data show that the curved membranes exhibit slightly higher levels of nonlinearity than the flat membranes.

  11. Isogeometric analysis of free-form Timoshenko curved beams including the nonlinear effects of large deformations

    NASA Astrophysics Data System (ADS)

    Hosseini, Seyed Farhad; Hashemian, Ali; Moetakef-Imani, Behnam; Hadidimoud, Saied

    2018-03-01

    In the present paper, the isogeometric analysis (IGA) of free-form planar curved beams is formulated based on the nonlinear Timoshenko beam theory to investigate the large deformation of beams with variable curvature. Based on the isoparametric concept, the shape functions of the field variables (displacement and rotation) in a finite element analysis are considered to be the same as the non-uniform rational basis spline (NURBS) basis functions defining the geometry. The validity of the presented formulation is tested in five case studies covering a wide range of engineering curved structures, from straight and constant-curvature to variable-curvature beams. The nonlinear deformation results obtained by the presented method are compared to well-established benchmark examples as well as to the results of linear and nonlinear finite element analyses. As the nonlinear load-deflection behavior of Timoshenko beams is the main topic of this article, the results strongly show the applicability of the IGA method to the large deformation analysis of free-form curved beams. Finally, it is interesting to note that, until very recently, the large deformation analysis of free-form Timoshenko curved beams had not been considered in IGA by researchers.

  12. Effect of nonideal square-law detection on static calibration in noise-injection radiometers

    NASA Technical Reports Server (NTRS)

    Hearn, C. P.

    1984-01-01

    The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.
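The paper's central quantity, the minimum error of an optimum straight-line fit to a slightly curved detection characteristic, can be sketched numerically. Two assumptions here: a least-squares line stands in for the optimum fit, and the power-series coefficients a1, a2 are hypothetical rather than taken from the paper.

```python
import numpy as np

# Hypothetical detection characteristic: ideal square-law response (output
# proportional to input noise power P) plus a small quadratic-in-power term
# modeling the fourth-order curvature of the detector voltage characteristic.
a1, a2 = 1.0, 0.02
P = np.linspace(0.0, 1.0, 201)
v = a1 * P + a2 * P**2

# Optimum (here: least-squares) straight-line fit to the calibration curve,
# and the resulting minimum calibration error as the peak residual
b1, b0 = np.polyfit(P, v, 1)
min_error = float(np.max(np.abs(v - (b0 + b1 * P))))
```

With a2 at 2% of a1, the peak residual comes out near a2/6 ≈ 0.0033 of full scale, illustrating how even mild curvature in the detector bounds the accuracy of a purely linear calibration.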

  13. Classical Black Holes: The Nonlinear Dynamics of Curved Spacetime

    NASA Astrophysics Data System (ADS)

    Thorne, Kip S.

    2012-08-01

    Numerical simulations have revealed two types of physical structures, made from curved spacetime, that are attached to black holes: tendexes, which stretch or squeeze anything they encounter, and vortexes, which twist adjacent inertial frames relative to each other. When black holes collide, their tendexes and vortexes interact and oscillate (a form of nonlinear dynamics of curved spacetime). These oscillations generate gravitational waves, which can give kicks up to 4000 kilometers per second to the merged black hole. The gravitational waves encode details of the spacetime dynamics and will soon be observed and studied by the Laser Interferometer Gravitational Wave Observatory and its international partners.

  14. Classical black holes: the nonlinear dynamics of curved spacetime.

    PubMed

    Thorne, Kip S

    2012-08-03

    Numerical simulations have revealed two types of physical structures, made from curved spacetime, that are attached to black holes: tendexes, which stretch or squeeze anything they encounter, and vortexes, which twist adjacent inertial frames relative to each other. When black holes collide, their tendexes and vortexes interact and oscillate (a form of nonlinear dynamics of curved spacetime). These oscillations generate gravitational waves, which can give kicks up to 4000 kilometers per second to the merged black hole. The gravitational waves encode details of the spacetime dynamics and will soon be observed and studied by the Laser Interferometer Gravitational Wave Observatory and its international partners.

  15. A weakly nonlinear theory for wave-vortex interactions in curved channel flow

    NASA Technical Reports Server (NTRS)

    Singer, Bart A.; Erlebacher, Gordon; Zang, Thomas A.

    1992-01-01

    A weakly nonlinear theory is developed to study the interaction of Tollmien-Schlichting (TS) waves and Dean vortices in curved channel flow. The predictions obtained from the theory agree well with results obtained from direct numerical simulations of curved channel flow, especially for low amplitude disturbances. Some discrepancies in the results of a previous theory with direct numerical simulations are resolved.

  16. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of an FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  17. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE PAGES

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of an FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  18. Construction of dose response calibration curves for dicentrics and micronuclei for X radiation in a Serbian population.

    PubMed

    Pajic, J; Rakic, B; Jovicic, D; Milovanovic, A

    2014-10-01

    Biological dosimetry using chromosome damage biomarkers is a valuable dose assessment method in cases of radiation overexposure with or without physical dosimetry data. In order to estimate dose by biodosimetry, any biological dosimetry service has to have its own dose-response calibration curve. This paper presents the results obtained after irradiation of blood samples from fourteen healthy male and female volunteers, in order to establish biodosimetry in Serbia and produce dose-response calibration curves for dicentrics and micronuclei. Taking into account pooled data from all the donors, the resultant fitted curve for dicentrics is: Ydic = 0.0009 (±0.0003) + 0.0421 (±0.0042)×D + 0.0602 (±0.0022)×D²; and for micronuclei: Ymn = 0.0104 (±0.0015) + 0.0824 (±0.0050)×D + 0.0189 (±0.0017)×D². Following establishment of the dose-response curves, a validation experiment was carried out with four blood samples. Applied and estimated doses were in good agreement. On this basis, the results reported here give us confidence to apply both calibration curves for future biological dosimetry requirements in Serbia. Copyright © 2014 Elsevier B.V. All rights reserved.
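With a fitted quadratic of this form, dose estimation is just the inversion of Y = c + αD + βD². This sketch uses the central values of the dicentric coefficients quoted in the abstract, ignoring their quoted uncertainties:

```python
import math

# Central values of the fitted dicentric dose-response from the abstract:
# Y = c + alpha*D + beta*D^2  (D in Gy)
c, alpha, beta = 0.0009, 0.0421, 0.0602

def dose_from_yield(y):
    """Estimate absorbed dose from an observed dicentric yield by taking the
    positive root of the quadratic dose-response equation."""
    disc = alpha ** 2 - 4.0 * beta * (c - y)
    return (-alpha + math.sqrt(disc)) / (2.0 * beta)
```

For example, an observed yield of 0.3259 dicentrics per cell maps back to 2 Gy under these central coefficients; a full biodosimetry service would also propagate the coefficient uncertainties into a dose confidence interval.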

  19. NSLS-II: Nonlinear Model Calibration for Synchrotrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bengtsson, J.

    This tech note is essentially a summary of a lecture we delivered to the Acc. Phys. Journal Club in April 2010. However, since the estimated accuracy of these methods in the field of particle accelerators has been naive and misleading, i.e., it ignores the impact of noise, we will elaborate on this in some detail. A prerequisite for a calibration of the nonlinear Hamiltonian is that the quadratic part has been understood, i.e., that the linear optics for the real accelerator has been calibrated. For synchrotron light source operations, this problem has been solved by the interactive LOCO technique/tool (Linear Optics from Closed Orbits). Before that, in the context of hadron accelerators, it was done by signal processing of turn-by-turn BPM data. We have outlined how to make a basic calibration of the nonlinear model for synchrotrons. In particular, we have shown how this was done for LEAR, CERN (antiprotons) in the mid-80s. Specifically, our accuracy for frequency estimation was ≈1 × 10^-5 for 1024 turns (to calibrate the linear optics) and ≈1 × 10^-4 for 256 turns for the tune footprint and betatron spectrum. For comparison, the estimated tune footprint for stable beam for NSLS-II is ≈0.1, and the transverse damping time is ≈20 msec, i.e., ≈4,000 turns. There is no fundamental difference between antiprotons, protons, and electrons in this case. Because the estimated accuracy for these methods in the field of particle accelerators has been naive, i.e., ignoring the impact of noise, we have also derived an explicit formula, from first principles, for a quantitative statement. For, e.g., N = 256 and 5% noise we obtain δν ≈ 1 × 10^-5. A comparison with the state of the art in, e.g., telecom and electrical engineering since the 60s is quite revealing. For example, the Kalman filter (1960) was crucial for the Ranger, Mariner, and Apollo (including the Lunar Module) missions during the 60s. Or Claude Shannon et

  20. Imperfection Sensitivity of Nonlinear Vibration of Curved Single-Walled Carbon Nanotubes Based on Nonlocal Timoshenko Beam Theory

    PubMed Central

    Eshraghi, Iman; Jalali, Seyed K.; Pugno, Nicola Maria

    2016-01-01

    Imperfection sensitivity of large amplitude vibration of curved single-walled carbon nanotubes (SWCNTs) is considered in this study. The SWCNT is modeled as a Timoshenko nano-beam and its curved shape is included as an initial geometric imperfection term in the displacement field. Geometric nonlinearities of von Kármán type and nonlocal elasticity theory of Eringen are employed to derive governing equations of motion. Spatial discretization of governing equations and associated boundary conditions is performed using differential quadrature (DQ) method and the corresponding nonlinear eigenvalue problem is iteratively solved. Effects of amplitude and location of the geometric imperfection, and the nonlocal small-scale parameter on the nonlinear frequency for various boundary conditions are investigated. The results show that the geometric imperfection and non-locality play a significant role in the nonlinear vibration characteristics of curved SWCNTs. PMID:28773911

  1. Calibration of hydrological models using flow-duration curves

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.

    2011-07-01

    The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of acceptability applied
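The mechanics of the FDC-based evaluation, building the curve, choosing equal-volume evaluation points, and applying limits of acceptability, can be sketched as follows. The EP selection and the acceptance rule here are simplified stand-ins for the paper's extended-GLUE procedure, not its implementation:

```python
import numpy as np

def flow_duration_curve(q):
    """Exceedance probabilities and discharges sorted from highest to lowest."""
    q_sorted = np.sort(np.asarray(q, dtype=float))[::-1]
    exceedance = (np.arange(1, q_sorted.size + 1) - 0.5) / q_sorted.size
    return exceedance, q_sorted

def volume_evaluation_points(q_sorted, n_eps):
    """Indices of EPs that split the total discharged volume into equal parts
    (the 'volume method' of selecting evaluation points)."""
    cum_vol = np.cumsum(q_sorted) / np.sum(q_sorted)
    targets = (np.arange(n_eps) + 0.5) / n_eps
    return np.searchsorted(cum_vol, targets)

def within_limits(sim_at_eps, lower, upper):
    """GLUE-style limits of acceptability: accept a parameter set only if its
    simulated FDC lies inside the discharge-uncertainty bounds at every EP."""
    return bool(np.all((sim_at_eps >= lower) & (sim_at_eps <= upper)))
```

Because volume-based EPs weight high flows by the water they carry, this selection tends to constrain slow flow and recession behavior, consistent with the results reported above.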

  2. Calibration of hydrological models using flow-duration curves

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.

    2010-12-01

    The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable importance of the performance with flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application e.g. using more/less EPs at high/low flows. While the new method is less sensitive to epistemic input/output errors than the normal use of limits of

  3. INFLUENCE OF IRON CHELATION ON R1 AND R2 CALIBRATION CURVES IN GERBIL LIVER AND HEART

    PubMed Central

    Wood, John C.; Aguilar, Michelle; Otto-Duessel, Maya; Nick, Hanspeter; Nelson, Marvin D.; Moats, Rex

    2008-01-01

    MRI is gaining increasing importance for the noninvasive quantification of organ iron burden. Since transverse relaxation rates depend on iron distribution as well as iron concentration, physiologic and pharmacologic processes that alter iron distribution could change MRI calibration curves. This paper compares the effects of three iron chelators, deferoxamine, deferiprone, and deferasirox, on R1 and R2 calibration curves under two iron loading and chelation strategies. 33 Mongolian gerbils underwent iron loading (iron dextran 500 mg/kg/wk) for 3 weeks followed by 4 weeks of chelation. An additional 56 animals received less aggressive loading (200 mg/kg/week) for 10 weeks, followed by 12 weeks of chelation. R1 and R2 calibration curves were compared to results from 23 iron-loaded animals that had not received chelation. Acute iron loading and chelation biased R1 and R2 from the unchelated reference calibration curves, but chelator-specific changes were not observed, suggesting physiologic rather than pharmacologic differences in iron distribution. Long-term deferiprone treatment increased liver R1 by 50% (p<0.01), while long-term deferasirox lowered liver R2 by 30.9% (p<0.0001). The relationship between R1 and R2 and organ iron concentration may depend upon the acuity of iron loading and unloading as well as the iron chelator administered. PMID:18581418

  4. Calibration curves for commercial copper and aluminum alloys using handheld laser-induced breakdown spectroscopy

    DOE PAGES

    Bennett, B. N.; Martin, M. Z.; Leonard, D. N.; ...

    2018-02-13

    Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a commercially available SciAps Z-500 that contains a class-1 solid-state laser with an output wavelength of 1532 nm, a laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys, and data were collected from the samples’ surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument’s ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements at the subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.
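The data reduction described, averaging the three replicate data sets, subtracting background, and regressing net line intensity against known concentration, can be sketched as follows. Array shapes and the numeric values are hypothetical; the paper's actual spectra and standards are not reproduced here.

```python
import numpy as np

def net_intensity(replicate_spectra, background):
    """Average the replicate data sets and subtract the background level."""
    return np.mean(replicate_spectra, axis=0) - background

def calibration_curve(concentrations, intensities):
    """Least-squares line I = m*c + b for one emission line across standards."""
    m, b = np.polyfit(concentrations, intensities, 1)
    return float(m), float(b)
```

Inverting the fitted line then converts a measured net intensity from an unknown sample into an elemental concentration estimate.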

  5. Calibration curves for commercial copper and aluminum alloys using handheld laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Bennett, B. N.; Martin, M. Z.; Leonard, D. N.; Garlea, E.

    2018-03-01

    Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a SciAps Z-500, commercially available, that contains a class-1 solid-state laser with an output wavelength of 1532 nm, laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys and data were collected from the samples' surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument's ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements related to subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.

  6. Calibration curves for commercial copper and aluminum alloys using handheld laser-induced breakdown spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, B. N.; Martin, M. Z.; Leonard, D. N.

Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a commercially available SciAps Z-500, which contains a class-1 solid-state laser with an output wavelength of 1532 nm, a laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys, and data were collected from the samples' surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument's ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements at the subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.

  7. LAMOST Spectrograph Response Curves: Stability and Application to Flux Calibration

    NASA Astrophysics Data System (ADS)

    Du, Bing; Luo, A.-Li; Kong, Xiao; Zhang, Jian-Nan; Guo, Yan-Xin; Cook, Neil James; Hou, Wen; Yang, Hai-Feng; Li, Yin-Bi; Song, Yi-Han; Chen, Jian-Jun; Zuo, Fang; Wu, Ke-Fei; Wang, Meng-Xin; Wu, Yue; Wang, You-Fen; Zhao, Yong-Heng

    2016-12-01

The task of flux calibration for Large sky Area Multi-Object Spectroscopic Telescope (LAMOST) spectra is difficult due to many factors, such as the lack of standard stars, flat-fielding for large field of view, and variation of reddening between different stars, especially at low Galactic latitudes. Poor selection, bad spectral quality, or extinction uncertainty of standard stars not only might induce errors to the calculated spectral response curve (SRC) but also might lead to failures in producing final 1D spectra. In this paper, we inspected spectra with Galactic latitude |b| ≥ 60° and reliable stellar parameters, determined through the LAMOST Stellar Parameter Pipeline (LASP), to study the stability of the spectrograph. To guarantee that the selected stars had been observed by each fiber, we selected 37,931 high-quality exposures of 29,000 stars from LAMOST DR2, and more than seven exposures for each fiber. We calculated the SRCs for each fiber for each exposure and calculated the statistics of SRCs for spectrographs with both the fiber variations and time variations. The result shows that the average response curve of each spectrograph (henceforth ASPSRC) is relatively stable, with statistical errors ≤10%. From the comparison between each ASPSRC and the SRCs for the same spectrograph obtained by the 2D pipeline, we find that the ASPSRCs are good enough to use for the calibration. The ASPSRCs have been applied to spectra that were abandoned by the LAMOST 2D pipeline due to the lack of standard stars, increasing the number of LAMOST spectra by 52,181 in DR2. Comparing those same targets with the Sloan Digital Sky Survey (SDSS), the relative flux differences between SDSS spectra and LAMOST spectra with the ASPSRC method are less than 10%, which underlines that the ASPSRC method is feasible for LAMOST flux calibration.
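The core of the ASPSRC idea — average the per-fiber response curves of a spectrograph and check that the fiber-to-fiber scatter stays within the quoted ~10% — reduces to a few array operations. A minimal numpy sketch with synthetic response curves (the Gaussian shape and 5% scatter are invented, not LAMOST data):

```python
import numpy as np

# Simulated spectral response curves (SRCs): one per fiber exposure,
# sampled on a common wavelength grid (invented shapes, not LAMOST data).
rng = np.random.default_rng(1)
wavelength = np.linspace(370, 900, 200)            # nm
true_src = np.exp(-((wavelength - 600) / 250) ** 2)
srcs = true_src[None, :] * rng.normal(1.0, 0.05, size=(250, 1))

# Average SRC for the spectrograph (ASPSRC) and its fiber-to-fiber scatter
aspsrc = srcs.mean(axis=0)
rel_scatter = srcs.std(axis=0) / aspsrc            # relative statistical error

# The paper reports scatter <= 10%; the simulated scatter here is ~5%
stable = bool(np.all(rel_scatter <= 0.10))
```

A real pipeline would first divide each observed standard-star spectrum by a model spectrum to obtain the per-exposure SRC before this averaging step.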

  8. Nonlinear Gompertz Curve Models of Achievement Gaps in Mathematics and Reading

    ERIC Educational Resources Information Center

    Cameron, Claire E.; Grimm, Kevin J.; Steele, Joel S.; Castro-Schilo, Laura; Grissmer, David W.

    2015-01-01

    This study examined achievement trajectories in mathematics and reading from school entry through the end of middle school with linear and nonlinear growth curves in 2 large longitudinal data sets (National Longitudinal Study of Youth--Children and Young Adults and Early Childhood Longitudinal Study--Kindergarten Cohort [ECLS-K]). The S-shaped…

  9. Accounting For Nonlinearity In A Microwave Radiometer

    NASA Technical Reports Server (NTRS)

    Stelzried, Charles T.

    1991-01-01

    Simple mathematical technique found to account adequately for nonlinear component of response of microwave radiometer. Five prescribed temperatures measured to obtain quadratic calibration curve. Temperature assumed to vary quadratically with reading. Concept not limited to radiometric application; applicable to other measuring systems in which relationships between quantities to be determined and readings of instruments differ slightly from linearity.
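The quadratic-calibration idea above (measure a handful of prescribed temperatures, then assume temperature varies quadratically with the reading) fits in a few lines. The five temperatures and the simulated nonlinearity below are invented for illustration:

```python
import numpy as np

# Five prescribed calibration temperatures (K) and the corresponding
# radiometer readings; the small quadratic term models the nonlinearity.
T_true = np.array([80.0, 150.0, 220.0, 290.0, 360.0])
reading = 0.5 + 1.02 * T_true + 1.5e-4 * T_true**2

# Fit temperature as a quadratic function of the reading
coeffs = np.polyfit(reading, T_true, 2)

def reading_to_temperature(r):
    """Apply the quadratic calibration curve to a raw reading."""
    return np.polyval(coeffs, r)
```

As the abstract notes, nothing here is radiometer-specific: any instrument whose response deviates slightly from linearity can be calibrated the same way.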

  10. Fitting milk production curves through nonlinear mixed models.

    PubMed

    Piccardi, Monica; Macchiavelli, Raúl; Funes, Ariel Capitaine; Bó, Gabriel A; Balzarini, Mónica

    2017-05-01

The aim of this work was to fit and compare three non-linear models (Wood, MilkBot and diphasic) to model lactation curves from two approaches: with and without a cow random effect. Knowing the behaviour of lactation curves is critical for decision-making in a dairy farm. Knowledge of the model of milk production progress along each lactation is necessary not only at the mean population level (dairy farm), but also at the individual level (cow-lactation). The fits were made in a group of dairy farms with high production and reproduction performance, for first and third lactations in cool seasons. A total of 2167 complete lactations were involved, of which 984 were first lactations and the remainder third lactations (19 382 milk yield tests). PROC NLMIXED in SAS was used to make the fits and estimate the model parameters. The diphasic model turned out to be computationally complex and barely practical. Regarding the classical Wood and MilkBot models, although the information criteria suggested selecting MilkBot, the differences in the estimation of production indicators did not show a significant improvement. The Wood model was found to be a good option for fitting the expected value of lactation curves. Furthermore, all three models fitted better when the subject (cow) random effect, which is related to the magnitude of production, was considered. The random effect improved the predictive potential of the models, but it did not have a significant effect on the production indicators derived from the lactation curves, such as milk yield and days in milk to peak.
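The Wood curve mentioned above, y(t) = a·t^b·exp(-c·t), can be fitted by ordinary least squares after taking logs: ln y = ln a + b·ln t - c·t. This is a simplified fixed-effects sketch with invented data, not the paper's PROC NLMIXED mixed-model fit:

```python
import numpy as np

# Simulated milk-yield tests for one lactation (invented values):
# Wood model y(t) = a * t**b * exp(-c*t), t = days in milk.
rng = np.random.default_rng(2)
t = np.arange(5, 305, 10, dtype=float)
a, b, c = 20.0, 0.25, 0.004
y = a * t**b * np.exp(-c * t) * rng.lognormal(0.0, 0.02, size=t.size)

# Log-linearisation: ln y = ln a + b*ln t - c*t  ->  ordinary least squares
X = np.column_stack([np.ones_like(t), np.log(t), -t])
beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
a_hat, b_hat, c_hat = np.exp(beta[0]), beta[1], beta[2]

# Production indicators derived from the fitted curve
t_peak = b_hat / c_hat                    # days in milk at peak
peak_yield = a_hat * t_peak**b_hat * np.exp(-c_hat * t_peak)
```

The log transform makes errors multiplicative, which is one reason the paper fits the models by nonlinear (mixed) least squares instead; the sketch only shows the shape of the curve and its derived indicators.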

  11. Nonlinear price impact from linear models

    NASA Astrophysics Data System (ADS)

    Patzelt, Felix; Bouchaud, Jean-Philippe

    2017-12-01

    The impact of trades on asset prices is a crucial aspect of market dynamics for academics, regulators, and practitioners alike. Recently, universal and highly nonlinear master curves were observed for price impacts aggregated on all intra-day scales (Patzelt and Bouchaud 2017 arXiv:1706.04163). Here we investigate how well these curves, their scaling, and the underlying return dynamics are captured by linear ‘propagator’ models. We find that the classification of trades as price-changing versus non-price-changing can explain the price impact nonlinearities and short-term return dynamics to a very high degree. The explanatory power provided by the change indicator in addition to the order sign history increases with increasing tick size. To obtain these results, several long-standing technical issues for model calibration and testing are addressed. We present new spectral estimators for two- and three-point cross-correlations, removing the need for previously used approximations. We also show when calibration is unbiased and how to accurately reveal previously overlooked biases. Therefore, our results contribute significantly to understanding both recent empirical results and the properties of a popular class of impact models.

  12. A nonlinear propagation model-based phase calibration technique for membrane hydrophones.

    PubMed

    Cooling, Martin P; Humphrey, Victor F

    2008-01-01

A technique for the phase calibration of membrane hydrophones in the frequency range up to 80 MHz is described. This is achieved by comparing measurements and numerical simulation of a nonlinearly distorted test field. The field prediction is obtained using a finite-difference model that solves the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation in the frequency domain. The measurements are made in the far field of a 3.5 MHz focusing circular transducer, where it is demonstrated that, for the high drive level used, spatial averaging effects due to the hydrophone's finite receive area are negligible. The method provides a phase calibration of the hydrophone under test without the need for a device serving as a phase response reference, but it requires prior knowledge of the amplitude sensitivity at the fundamental frequency. The technique is demonstrated using a 50-μm-thick bilaminar membrane hydrophone, for which the results obtained show functional agreement with predictions of a hydrophone response model. Further validation of the results is obtained by application of the response to the measurement of the high amplitude waveforms generated by a modern biomedical ultrasonic imaging system. It is demonstrated that full deconvolution of the calculated complex frequency response of a nonideal hydrophone results in physically realistic measurements of the transmitted waveforms.
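The "full deconvolution of the complex frequency response" step at the end is a frequency-domain division. A minimal sketch with an entirely invented response and waveform (not the paper's KZK-modelled field): forward-apply the response to get the hydrophone voltage, then divide it back out to recover the pressure waveform:

```python
import numpy as np

# Sketch of deconvolving a hydrophone's complex frequency response from a
# measured voltage waveform (synthetic response and waveform, not real data).
n = 1024
fs = 400e6                                    # sample rate, Hz
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

# Invented complex response: gentle roll-off plus a linear phase term
response = (1.0 / (1.0 + (freqs / 60e6) ** 2)) * np.exp(-1j * 2 * np.pi * 2e-9 * freqs)

# Stand-in "nonlinearly steepened" pressure pulse: fundamental + one harmonic
tt = np.arange(n) / fs
p_true = np.exp(-((tt - 1e-6) / 2e-7) ** 2) * (
    np.sin(2 * np.pi * 3.5e6 * tt) + 0.3 * np.sin(2 * np.pi * 7e6 * tt))

# Hydrophone output = response applied in the frequency domain
v = np.fft.irfft(np.fft.rfft(p_true) * response, n)

# Full deconvolution of the complex response recovers the pressure waveform
p_rec = np.fft.irfft(np.fft.rfft(v) / response, n)
```

In practice the division must be regularized where the response magnitude is small or the calibration is noisy; the clean division here works only because the synthetic response has no zeros.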

  13. A non-linear piezoelectric actuator calibration using N-dimensional Lissajous figure

    NASA Astrophysics Data System (ADS)

    Albertazzi, A.; Viotti, M. R.; Veiga, C. L. N.; Fantin, A. V.

    2016-08-01

Piezoelectric translators (PZTs) are very often used as phase shifters in interferometry. However, they typically present a non-linear behavior and strong hysteresis. The use of an additional resistive or capacitive sensor makes it possible to linearize the response of the PZT by feedback control. This approach works well, but makes the device more complex and expensive. A less expensive approach uses a non-linear calibration. In this paper, the authors used data from at least five interferograms to form N-dimensional Lissajous figures to establish the actual relationship between the applied voltages and the resulting phase shifts [1]. N-dimensional Lissajous figures are formed when N sinusoidal signals are combined in an N-dimensional space, where one signal is assigned to each axis. It can be verified that the resulting N-dimensional ellipse lies in a 2D plane. By fitting an ellipse equation to the resulting 2D ellipse it is possible to accurately compute the resulting phase value for each interferogram. In this paper, the relationship between the resulting phase shift and the applied voltage is simultaneously established for a set of 12 increments by a fourth degree polynomial. The results in speckle interferometry show that, after two or three iterations, the calibration error is usually smaller than 1°.
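The two-signal (2D) special case of the Lissajous idea shows why fitting an ellipse yields the phase: for x = cos θ and y = cos(θ + δ), the points satisfy x² - 2xy·cos δ + y² = sin²δ, so a linear least-squares fit of the ellipse coefficients recovers δ. This sketch uses a synthetic 40° shift, not the paper's N-dimensional construction:

```python
import numpy as np

# Two sinusoidal signals with an unknown phase shift trace out an ellipse
# (a 2D Lissajous figure); its shape encodes the shift. Synthetic data.
theta = np.linspace(0.0, 2 * np.pi, 400)
delta_true = np.deg2rad(40.0)                  # invented example value
x = np.cos(theta)
y = np.cos(theta + delta_true)

# Ellipse identity: x^2 + y^2 = 2*cos(delta)*x*y + sin(delta)^2.
# Solve for p = [2*cos(delta), sin(delta)^2] by linear least squares.
A = np.column_stack([x * y, np.ones_like(x)])
p, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
delta_est = np.arccos(p[0] / 2.0)
```

The paper generalizes this to N interferograms: the N sinusoidal signals still lie on a planar ellipse in N-dimensional space, so the same kind of fit applies after projecting onto that plane.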

  14. The strong nonlinear interaction of Tollmien-Schlichting waves and Taylor-Goertler vortices in curved channel flow

    NASA Technical Reports Server (NTRS)

    Bennett, J.; Hall, P.; Smith, F. T.

    1988-01-01

Viscous fluid flows with curved streamlines can support both centrifugal and viscous traveling wave instabilities. Here the interaction of these instabilities in the context of the fully developed flow in a curved channel is discussed. The viscous (Tollmien-Schlichting) instability is described asymptotically at high Reynolds numbers and it is found that it can induce a Taylor-Goertler flow even at extremely small amplitudes. In this interaction, the Tollmien-Schlichting wave can drive a vortex state with wavelength either comparable with the channel width or the wavelength of lower branch viscous modes. The nonlinear equations which describe these interactions are solved for nonlinear equilibrium states.

  15. Nonlinear Kalman filters for calibration in radio interferometry

    NASA Astrophysics Data System (ADS)

    Tasse, C.

    2014-06-01

    The data produced by the new generation of interferometers are affected by a wide variety of partially unknown complex effects such as pointing errors, phased array beams, ionosphere, troposphere, Faraday rotation, or clock drifts. Most algorithms addressing direction-dependent calibration solve for the effective Jones matrices, and cannot constrain the underlying physical quantities of the radio interferometry measurement equation (RIME). A related difficulty is that they lack robustness in the presence of low signal-to-noise ratios, and when solving for moderate to large numbers of parameters they can be subject to ill-conditioning. These effects can have dramatic consequences in the image plane such as source or even thermal noise suppression. The advantage of solvers directly estimating the physical terms appearing in the RIME is that they can potentially reduce the number of free parameters by orders of magnitudes while dramatically increasing the size of usable data, thereby improving conditioning. We present here a new calibration scheme based on a nonlinear version of the Kalman filter that aims at estimating the physical terms appearing in the RIME. We enrich the filter's structure with a tunable data representation model, together with an augmented measurement model for regularization. Using simulations we show that it can properly estimate the physical effects appearing in the RIME. We found that this approach is particularly useful in the most extreme cases such as when ionospheric and clock effects are simultaneously present. Combined with the ability to provide prior knowledge on the expected structure of the physical instrumental effects (expected physical state and dynamics), we obtain a fairly computationally cheap algorithm that we believe to be robust, especially in low signal-to-noise regimes. 
Potentially, the use of filters and other similar methods can represent an improvement for calibration in radio interferometry, under the condition that
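The filtering idea above can be illustrated in its simplest form: a scalar Kalman filter tracking one slowly drifting instrumental parameter (a stand-in for, e.g., a clock or gain term in the RIME). All numbers are invented; the paper's filter is nonlinear and far higher-dimensional:

```python
import numpy as np

# Minimal linear Kalman filter tracking a slowly drifting instrumental
# parameter from noisy measurements (invented drift and noise levels).
rng = np.random.default_rng(3)
n_steps = 500
truth = 1.0 + 0.001 * np.arange(n_steps)           # slowly drifting gain
meas = truth + rng.normal(0.0, 0.2, n_steps)       # noisy measurements

q, r = 1e-5, 0.04          # process / measurement noise variances
x, P = 0.0, 1.0            # initial state estimate and its variance
estimates = np.empty(n_steps)
for k in range(n_steps):
    P = P + q                        # predict (random-walk state model)
    K = P / (P + r)                  # Kalman gain
    x = x + K * (meas[k] - x)        # update with the new measurement
    P = (1.0 - K) * P
    estimates[k] = x
```

The prior-knowledge point in the abstract corresponds to the choice of q (expected dynamics) and the initial state: a small q encodes the expectation that the physical term drifts slowly, which is what lets the filter average down the noise.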

  16. Implications of Nonlinear Concentration Response Curve for Ozone related Mortality on Health Burden Assessment

    EPA Science Inventory

    We characterize the sensitivity of the ozone attributable health burden assessment with respect to different modeling strategies of concentration-response function. For this purpose, we develop a flexible Bayesian hierarchical model allowing for a nonlinear ozone risk curve with ...

  17. Detecting time-specific differences between temporal nonlinear curves: Analyzing data from the visual world paradigm

    PubMed Central

    Oleson, Jacob J; Cavanaugh, Joseph E; McMurray, Bob; Brown, Grant

    2015-01-01

    In multiple fields of study, time series measured at high frequencies are used to estimate population curves that describe the temporal evolution of some characteristic of interest. These curves are typically nonlinear, and the deviations of each series from the corresponding curve are highly autocorrelated. In this scenario, we propose a procedure to compare the response curves for different groups at specific points in time. The method involves fitting the curves, performing potentially hundreds of serially correlated tests, and appropriately adjusting the overall alpha level of the tests. Our motivating application comes from psycholinguistics and the visual world paradigm. We describe how the proposed technique can be adapted to compare fixation curves within subjects as well as between groups. Our results lead to conclusions beyond the scope of previous analyses. PMID:26400088

  18. Linearization of Positional Response Curve of a Fiber-optic Displacement Sensor

    NASA Astrophysics Data System (ADS)

    Babaev, O. G.; Matyunin, S. A.; Paranin, V. D.

    2018-01-01

Currently, the creation of optical measuring instruments and sensors for measuring linear displacement is one of the most relevant problems in the area of instrumentation. Fiber-optic contactless sensors based on the magneto-optical effect are of special interest. They are essentially contactless, non-electrical and have a closed optical channel not subject to contamination. The main problem of this type of sensor is the non-linearity of its positional response curve, due to the hyperbolic nature of the variation in magnetic field intensity induced by moving the magnetic source, mounted on the controlled object, relative to the sensing element. This paper discusses an algorithmic method of linearizing the positional response curve of fiber-optic displacement sensors in any selected range of the displacements to be measured. The method is divided into two stages: 1 - definition of the calibration function, 2 - measurement and linearization of the positional response curve (including its temperature stabilization). The algorithm under consideration significantly reduces the number of points in the calibration function, which is essential when calibrating the temperature dependence, by allowing points that deviate randomly from a uniformly spaced grid. Subsequent interpolation of the deviating points and piecewise linear-plane approximation of the calibration function reduce the microcontroller storage needed for the calibration function and the time required to process the measurement results. The paper also presents experimental results of testing real samples of fiber-optic displacement sensors.
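The two-stage scheme (store a sparse calibration function, then linearize new readings against it) can be sketched with piecewise-linear interpolation. The hyperbolic response and calibration grid below are invented stand-ins for the sensor's measured calibration function:

```python
import numpy as np

# Stage 1: a sparse calibration function for a displacement sensor with a
# hyperbolic-like response (invented values, 10 stored points).
displacement_cal = np.linspace(1.0, 10.0, 10)      # mm, calibration grid
signal_cal = 1.0 / displacement_cal                # hyperbolic sensor response

# np.interp needs ascending x, so sort the calibration pairs by signal
order = np.argsort(signal_cal)

# Stage 2: linearize a raw reading by piecewise-linear interpolation
def linearize(signal):
    """Map a raw sensor signal back to displacement (mm)."""
    return np.interp(signal, signal_cal[order], displacement_cal[order])
```

Temperature stabilization, as in the paper, would add a second interpolation axis (a linear-plane approximation over signal and temperature) using the same sparse-grid idea.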

  19. A BAYESIAN METHOD FOR CALCULATING REAL-TIME QUANTITATIVE PCR CALIBRATION CURVES USING ABSOLUTE PLASMID DNA STANDARDS

    EPA Science Inventory

    In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignore...

  20. Calibration of a Fusion Experiment to Investigate the Nuclear Caloric Curve

    NASA Astrophysics Data System (ADS)

    Keeler, Ashleigh

    2017-09-01

In order to investigate the nuclear equation of state (EoS), the relation between two thermodynamic quantities can be examined. The correlation between the temperature and excitation energy of a nucleus, also known as the caloric curve, has been previously observed in peripheral heavy-ion collisions to exhibit a dependence on the neutron-proton asymmetry. To further investigate this result, fusion reactions (78Kr + 12C and 86Kr + 12C) were measured; the beam energy was varied in the range 15-35 MeV/u in order to vary the excitation energy. The light charged particles (LCPs) evaporated from the compound nucleus were measured in the Si-CsI(Tl)/PD detector array FAUST (Forward Array Using Silicon Technology). The LCPs carry information about the temperature. The calibration of FAUST will be described in this presentation. The silicon detectors have resistive surfaces in perpendicular directions to allow position measurement of the LCPs to better than 200 μm. The resistive nature requires a position-dependent correction to the energy calibration to take full advantage of the energy resolution. The momentum is calculated from the energy of these particles and their position on the detectors. A parameterized formula based on the Bethe-Bloch equation was used to straighten the particle identification (PID) lines measured with the dE-E technique. The energy calibration of the CsI detectors is based on the silicon detector energy calibration and the PID. A precision slotted mask enables the relative positions of the detectors to be determined. DOE Grant: DE-FG02-93ER40773 and REU Grant: PHY - 1659847.

  1. The cytokinesis-blocked micronucleus assay: dose-response calibration curve, background frequency in the population and dose estimation.

    PubMed

    Rastkhah, E; Zakeri, F; Ghoranneviss, M; Rajabpour, M R; Farshidpour, M R; Mianji, F; Bayat, M

    2016-03-01

    An in vitro study of the dose responses of human peripheral blood lymphocytes was conducted with the aim of creating calibrated dose-response curves for biodosimetry measuring up to 4 Gy (0.25-4 Gy) of gamma radiation. The cytokinesis-blocked micronucleus (CBMN) assay was employed to obtain the frequencies of micronuclei (MN) per binucleated cell in blood samples from 16 healthy donors (eight males and eight females) in two age ranges of 20-34 and 35-50 years. The data were used to construct the calibration curves for men and women in two age groups, separately. An increase in micronuclei yield with the dose in a linear-quadratic way was observed in all groups. To verify the applicability of the constructed calibration curve, MN yields were measured in peripheral blood lymphocytes of two real overexposed subjects and three irradiated samples with unknown dose, and the results were compared with dose values obtained from measuring dicentric chromosomes. The comparison of the results obtained by the two techniques indicated a good agreement between dose estimates. The average baseline frequency of MN for the 130 healthy non-exposed donors (77 men and 55 women, 20-60 years old divided into four age groups) ranged from 6 to 21 micronuclei per 1000 binucleated cells. Baseline MN frequencies were higher for women and for the older age group. The results presented in this study point out that the CBMN assay is a reliable, easier and valuable alternative method for biological dosimetry.
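The linear-quadratic dose response described above, Y(D) = c + αD + βD², and its inversion for dose estimation can be sketched directly. The coefficients below are invented placeholders in a plausible range, not the paper's fitted values:

```python
import numpy as np

# Linear-quadratic calibration curve for micronuclei yield per binucleated
# cell: Y(D) = c + alpha*D + beta*D**2 (invented coefficients).
c, alpha, beta = 0.012, 0.025, 0.045

doses = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0])    # Gy
yields = c + alpha * doses + beta * doses**2

# Refit the curve from the calibration points (noise-free here, so exact)
beta_hat, alpha_hat, c_hat = np.polyfit(doses, yields, 2)

def estimate_dose(y):
    """Invert Y(D) = c + alpha*D + beta*D^2 for the physical (positive) root."""
    disc = alpha_hat**2 - 4.0 * beta_hat * (c_hat - y)
    return (-alpha_hat + np.sqrt(disc)) / (2.0 * beta_hat)
```

A real biodosimetry fit would weight the calibration points by their Poisson errors and propagate the coefficient uncertainties into the dose estimate.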

  2. Nonlinear Analysis and Post-Test Correlation for a Curved PRSEUS Panel

    NASA Technical Reports Server (NTRS)

    Gould, Kevin; Lovejoy, Andrew E.; Jegley, Dawn; Neal, Albert L.; Linton, Kim, A.; Bergan, Andrew C.; Bakuckas, John G., Jr.

    2013-01-01

The Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) concept, developed by The Boeing Company, has been extensively studied as part of the National Aeronautics and Space Administration's (NASA's) Environmentally Responsible Aviation (ERA) Program. The PRSEUS concept provides a light-weight alternative to aluminum or traditional composite design concepts and is applicable to traditional-shaped fuselage barrels and wings, as well as advanced configurations such as a hybrid wing body or truss braced wings. Therefore, NASA, the Federal Aviation Administration (FAA) and The Boeing Company partnered in an effort to assess the performance and damage arrestment capabilities of a PRSEUS concept panel using a full-scale curved panel in the FAA Full-Scale Aircraft Structural Test Evaluation and Research (FASTER) facility. Testing was conducted in the FASTER facility by subjecting the panel to axial tension loads applied to the ends of the panel, internal pressure, and combined axial tension and internal pressure loadings. Additionally, reactive hoop loads were applied to the skin and frames of the panel along its edges. The panel successfully supported the required design loads in the pristine condition and with a severed stiffener. The panel also demonstrated that the PRSEUS concept could arrest the progression of damage including crack arrestment and crack turning. This paper presents the nonlinear post-test analysis and correlation with test results for the curved PRSEUS panel. It is shown that nonlinear analysis can accurately calculate the behavior of a PRSEUS panel under tension, pressure and combined loading conditions.

  3. Nonlinear propagation model for ultrasound hydrophones calibration in the frequency range up to 100 MHz.

    PubMed

    Radulescu, E G; Wójcik, J; Lewin, P A; Nowicki, A

    2003-06-01

To facilitate the implementation and verification of the new ultrasound hydrophone calibration techniques described in the companion paper (elsewhere in this issue), a nonlinear propagation model was developed. A brief outline of the theoretical considerations is presented and the model's advantages and disadvantages are discussed. The results of simulations yielding spatial and temporal acoustic pressure amplitude are also presented and compared with those obtained using KZK and Field II models. Excellent agreement between all models is observed. The applicability of the model in discrete wideband calibration of hydrophones is documented in the companion paper elsewhere in this volume.

  4. Measuring the nonlinear elastic properties of tissue-like phantoms.

    PubMed

    Erkamp, Ramon Q; Skovoroda, Andrei R; Emelianov, Stanislav Y; O'Donnell, Matthew

    2004-04-01

A direct mechanical system simultaneously measuring external force and deformation of samples over a wide dynamic range is used to obtain force-displacement curves of tissue-like phantoms under plane strain deformation. These measurements, covering a wide deformation range, then are used to characterize the nonlinear elastic properties of the phantom materials. The model assumes incompressible media, in which several strain energy potentials are considered. Finite-element analysis is used to evaluate the performance of this material characterization procedure. The procedures developed allow calibration of nonlinear elastic phantoms for elasticity imaging experiments and finite-element simulations.

  5. Predicting Madura cattle growth curve using non-linear model

    NASA Astrophysics Data System (ADS)

    Widyas, N.; Prastowo, S.; Widi, T. S. M.; Baliarti, E.

    2018-03-01

Madura cattle are an Indonesian native breed, a composite that has undergone hundreds of years of selection and domestication to reach its present remarkable uniformity. Crossbreeding has reached the isle of Madura, and the Madrasin, a cross between Madura cows and Limousin semen, has emerged. This paper aimed to compare the growth curves of Madrasin and one type of pure Madura cattle, the common Madura cattle (Madura), using non-linear models. Madura cattle are kept traditionally, so reliable records are hardly available. Data were collected from smallholder farmers in Madura. Cows from different age classes (<6 months, 6-12 months, 1-2 years, 2-3 years, 3-5 years and >5 years) were observed, and body measurements (chest girth, body length and wither height) were taken. In total, 63 Madura and 120 Madrasin records were obtained. A linear model was built with cattle sub-population and age as explanatory variables. Body weights were estimated based on the chest girth. Growth curves were built using logistic regression. Results showed that, within the same age class, Madrasin had a significantly larger body than Madura (p<0.05). The logistic models fitted the Madura and Madrasin cattle data well, with estimated MSEs of 39.09 and 759.28 and prediction accuracies of 99 and 92% for Madura and Madrasin, respectively. Prediction of the growth curve using the logistic regression model performed well in both types of Madura cattle. However, efforts to gather accurate data on Madura cattle are necessary to better characterize and study these cattle.
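A logistic growth curve W(t) = A / (1 + b·e^(-kt)) can be fitted without a nonlinear optimizer if the asymptotic weight A is assumed, since the model then log-linearises: ln(A/W - 1) = ln b - k·t. This is a simplified sketch with invented ages and weights, not the paper's data or its exact fitting procedure:

```python
import numpy as np

# Logistic growth-curve sketch: W(t) = A / (1 + b*exp(-k*t)). The asymptote
# A is assumed (taken slightly above the largest observation); all numbers
# below are invented.
rng = np.random.default_rng(4)
age_months = np.array([3.0, 9.0, 18.0, 30.0, 48.0, 72.0])
A, b, k = 320.0, 6.0, 0.08
weight = A / (1.0 + b * np.exp(-k * age_months)) * rng.normal(1.0, 0.01, 6)

A_assumed = 1.05 * weight.max()
z = np.log(A_assumed / weight - 1.0)          # log-linearised response
slope, intercept = np.polyfit(age_months, z, 1)
k_hat, b_hat = -slope, np.exp(intercept)

def predict_weight(t):
    """Predicted body weight (kg) at age t months from the fitted curve."""
    return A_assumed / (1.0 + b_hat * np.exp(-k_hat * t))
```

Treating A as a free parameter (as a full nonlinear logistic regression would) removes the bias that the assumed asymptote introduces at large ages.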

  6. Feasibility analysis on integration of luminous environment measuring and design based on exposure curve calibration

    NASA Astrophysics Data System (ADS)

    Zou, Yuan; Shen, Tianxing

    2013-03-01

To provide more varieties of photometric data beyond illumination calculation in architectural and luminous environment design, this paper presents a way of combining luminous environment design with the SM light environment measuring system, which contains a set of experimental devices, including light information collecting and processing modules, and can offer various types of photometric data. During the research, we introduced a simulation method for calibration, which mainly includes rebuilding experiment scenes in 3ds Max Design, calibrating this computer-aided design software in the simulated environment under various typical light sources, and fitting the exposure curves of the rendered images. As the analysis proceeded, the operating sequence and points of attention for the simulated calibration were established, and connections between the Mental Ray renderer and the SM light environment measuring system were set up. The paper thus offers a valuable reference for coordinating luminous environment design with the SM light environment measuring system.

  7. Nonlinear Filtering Effects of Reservoirs on Flood Frequency Curves at the Regional Scale: RESERVOIRS FILTER FLOOD FREQUENCY CURVES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Wei; Li, Hong-Yi; Leung, L. Ruby

Anthropogenic activities, e.g., reservoir operation, may alter the characteristics of the Flood Frequency Curve (FFC) and challenge the basic assumption of stationarity used in flood frequency analysis. This paper presents a combined data-modeling analysis of the nonlinear filtering effects of reservoirs on the FFCs over the contiguous United States. A dimensionless Reservoir Impact Index (RII), defined as the total upstream reservoir storage capacity normalized by the annual streamflow volume, is used to quantify reservoir regulation effects. Analyses are performed for 388 river stations with an average record length of 50 years. The first two moments of the FFC, mean annual maximum flood (MAF) and coefficient of variation (CV), are calculated for the pre- and post-dam periods and compared to elucidate the reservoir regulation effects as a function of RII. It is found that MAF generally decreases with increasing RII but stabilizes when RII exceeds a threshold value, and CV increases with RII until a threshold value beyond which CV decreases with RII. The processes underlying the nonlinear threshold behavior of MAF and CV are investigated using three reservoir models with different levels of complexity. All models capture the nonlinear relationships of MAF and CV with RII, suggesting that the basic flood control function of reservoirs is key to the nonlinear relationships. The relative roles of reservoir storage capacity, operation objectives, available storage prior to a flood event, and reservoir inflow pattern are systematically investigated. Our findings may help improve flood-risk assessment and mitigation in regulated river systems at the regional scale.
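The quantities defined above — RII and the first two FFC moments for pre- and post-dam periods — are simple to compute once the annual-maximum series are in hand. A sketch with entirely synthetic storage, flow, and flood values:

```python
import numpy as np

# Sketch of the Reservoir Impact Index and the pre-/post-dam comparison of
# flood-frequency moments; all series below are synthetic.
rng = np.random.default_rng(5)

storage_capacity = 4.0e8        # m^3, total upstream storage (invented)
annual_flow_volume = 1.6e9      # m^3/yr, mean annual streamflow (invented)
RII = storage_capacity / annual_flow_volume

# Annual maximum floods (m^3/s): regulation reduces the post-dam mean
pre_dam = rng.gamma(shape=8.0, scale=250.0, size=30)
post_dam = rng.gamma(shape=8.0, scale=150.0, size=30)

def moments(annual_max):
    """First two moments of the flood frequency curve: MAF and CV."""
    maf = annual_max.mean()
    return maf, annual_max.std(ddof=1) / maf

maf_pre, cv_pre = moments(pre_dam)
maf_post, cv_post = moments(post_dam)
```

Repeating this per station and plotting the MAF and CV changes against RII would reproduce the structure of the paper's empirical analysis (though not, of course, its threshold results).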

  8. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    NASA Astrophysics Data System (ADS)

    Winkler, A. W.; Zagar, B. G.

    2013-08-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.

  9. The nonlinear interaction of Tollmien-Schlichting waves and Taylor-Goertler vortices in curved channel flows

    NASA Technical Reports Server (NTRS)

    Hall, P.; Smith, F. T.

    1987-01-01

It is known that a viscous fluid flow with curved streamlines can support both Tollmien-Schlichting and Taylor-Goertler instabilities. In a situation where both modes are possible on the basis of linear theory, a nonlinear theory must be used to determine the effect of the interaction of the instabilities. The details of this interaction are of practical importance because of its possible catastrophic effects on mechanisms used for laminar flow control. This interaction is studied in the context of fully developed flows in curved channels. Apart from technical differences associated with boundary layer growth, the structures of the instabilities in this flow are very similar to those in the practically more important external boundary layer situation. The interaction is shown to have two distinct phases depending on the size of the disturbances. At very low amplitudes two oblique Tollmien-Schlichting waves interact with a Goertler vortex in such a manner that the amplitudes become infinite at a finite time. This type of interaction is described by ordinary differential amplitude equations with quadratic nonlinearities.

  10. Experiments on nonlinear acoustic landmine detection: Tuning curve studies of soil-mine and soil-mass oscillators

    NASA Astrophysics Data System (ADS)

    Korman, Murray S.; Witten, Thomas R.; Fenneman, Douglas J.

    2004-10-01

    Donskoy [SPIE Proc. 3392, 211-217 (1998); 3710, 239-246 (1999)] has suggested a nonlinear technique that can detect an acoustically compliant buried mine while remaining insensitive to relatively noncompliant targets. Airborne sound at two primary frequencies interacts with the soil and mine, generating combination frequencies that affect the vibration velocity at the surface. In the current experiments, f1 and f2 are closely spaced near a mine resonance and a laser Doppler vibrometer profiles the surface. In profiling, certain combination frequencies have a much greater contrast ratio than the linear profiles at f1 and f2, although some nonlinearity exists off the mine. Near resonance, the bending (a softening) of a family of tuning curves (over the mine) exhibits a linear relationship between peak velocity and corresponding frequency, which is characteristic of the nonlinear mesoscopic elasticity effects observed in geomaterials like rocks or granular media. Results are presented for inert plastic VS 1.6, VS 2.2 and M14 mines buried 3.6 cm in loose soil. Tuning curves for a rigid mass plate resting on a soil layer exhibit similar results, suggesting that nonresonant conditions off the mine are desirable. [Work supported by U.S. Army RDECOM, CERDEC, NVESD, Fort Belvoir, VA.]

  11. Axial calibration methods of piezoelectric load sharing dynamometer

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Chang, Qingbing; Ren, Zongjin; Shao, Jun; Wang, Xinlei; Tian, Yu

    2018-06-01

    The relationship between the input and output of a load sharing dynamometer is strongly non-linear across the different loading points of a plane, so precise calibration of this non-linear relationship is essential for accurate force measurement. In this paper, firstly, calibration experiments at different loading points in a plane are performed on a piezoelectric load sharing dynamometer. The load sharing testing system is then calibrated using both a BP (backpropagation) neural network algorithm and an ELM (Extreme Learning Machine) algorithm. Finally, the results show that the ELM calibration outperforms BP in capturing the non-linear input-output relationship of the load sharing dynamometer at the different loading points of a plane, which verifies that the ELM algorithm is feasible for solving this non-linear force measurement problem.
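
    The ELM mentioned above replaces iterative backpropagation training with a single linear solve for the output-layer weights. A minimal, self-contained sketch of that idea (the hidden-layer weights below are hand-picked stand-ins for ELM's usual random initialization, and the loading-point data are synthetic, not the dynamometer's):

    ```python
    import math

    # Minimal Extreme Learning Machine (ELM) regression sketch: a single
    # hidden layer with fixed weights, output weights from a linear solve.
    # Weights and data are illustrative assumptions, not from the study.

    def solve(M, v):
        """Solve a square linear system by Gaussian elimination with pivoting."""
        n = len(v)
        A = [row[:] + [v[i]] for i, row in enumerate(M)]
        for i in range(n):
            p = max(range(i, n), key=lambda r: abs(A[r][i]))
            A[i], A[p] = A[p], A[i]
            for r in range(i + 1, n):
                f = A[r][i] / A[i][i]
                for c in range(i, n + 1):
                    A[r][c] -= f * A[i][c]
        x = [0.0] * n
        for i in reversed(range(n)):
            x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
        return x

    # hidden layer: 4 tanh units with fixed input weights and biases
    win = [0.7, -1.1, 0.4, 1.6]
    bias = [0.1, 0.5, -0.8, 0.2]

    def hidden(x):
        return [math.tanh(w * x + b) for w, b in zip(win, bias)]

    # synthetic nonlinear "loading point -> force" relation to learn
    xs = [0.0, 1.0, 2.0, 3.0]
    ys = [x ** 2 - 0.5 * x for x in xs]

    H = [hidden(x) for x in xs]      # hidden-layer output matrix (4x4 here)
    beta = solve(H, ys)              # output weights: one linear solve, no BP

    def predict(x):
        return sum(b * h for b, h in zip(beta, hidden(x)))

    # with as many hidden units as samples, training points are interpolated
    max_err = max(abs(predict(x) - y) for x, y in zip(xs, ys))
    print(max_err < 1e-6)
    ```

    The single linear solve is what makes ELM training fast compared with the iterative weight updates of backpropagation.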

  12. Gradient nonlinearity calibration and correction for a compact, asymmetric magnetic resonance imaging gradient system.

    PubMed

    Tao, S; Trzasko, J D; Gunter, J L; Weavers, P T; Shu, Y; Huston, J; Lee, S K; Tan, E T; Bernstein, M A

    2017-01-21

    Due to engineering limitations, the spatial encoding gradient fields in conventional magnetic resonance imaging cannot be perfectly linear and always contain higher-order, nonlinear components. If ignored during image reconstruction, gradient nonlinearity (GNL) manifests as image geometric distortion. Given an estimate of the GNL field, this distortion can be corrected to a degree proportional to the accuracy of the field estimate. The GNL of a gradient system is typically characterized using a spherical harmonic polynomial model with model coefficients obtained from electromagnetic simulation. Conventional whole-body gradient systems are symmetric in design; typically, only odd-order terms up to the 5th-order are required for GNL modeling. Recently, a high-performance, asymmetric gradient system was developed, which exhibits more complex GNL that requires higher-order terms including both odd- and even-orders for accurate modeling. This work characterizes the GNL of this system using an iterative calibration method and a fiducial phantom used in ADNI (Alzheimer's Disease Neuroimaging Initiative). The phantom was scanned at different locations inside the 26 cm diameter spherical volume of this gradient, and the positions of fiducials in the phantom were estimated. An iterative calibration procedure was utilized to identify the model coefficients that minimize the mean-squared-error between the true fiducial positions and the positions estimated from images corrected using these coefficients. To examine the effect of higher-order and even-order terms, this calibration was performed using spherical harmonic polynomials of different orders up to the 10th order, including either even- and odd-order terms or odd-order terms only. The results showed that the model coefficients of this gradient can be successfully estimated. The residual root-mean-squared-error after correction using up to the 10th-order coefficients was reduced to 0.36 mm, yielding spatial accuracy comparable to
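
    The calibration idea above can be caricatured in one dimension (an assumed simplification of the paper's 3-D spherical-harmonic model): the gradient maps a true position p to a distorted position q = p + c3*p**3 + c5*p**5, and fiducials at known locations let the odd-order coefficients be estimated by linear least squares. The coefficient values are made up for illustration:

    ```python
    # 1-D caricature of GNL calibration: estimate odd-order distortion
    # coefficients from fiducials at known positions (synthetic values).

    true_p = [-1.0, -0.6, -0.2, 0.2, 0.6, 1.0]   # known fiducial positions
    c3, c5 = 0.05, -0.01                          # "unknown" distortion terms
    meas_q = [p + c3 * p**3 + c5 * p**5 for p in true_p]

    # least squares for (c3, c5) from q - p = c3*p^3 + c5*p^5 (2x2 normal eqs)
    a11 = sum(p**6 for p in true_p)
    a12 = sum(p**8 for p in true_p)
    a22 = sum(p**10 for p in true_p)
    b1 = sum((q - p) * p**3 for p, q in zip(true_p, meas_q))
    b2 = sum((q - p) * p**5 for p, q in zip(true_p, meas_q))
    det = a11 * a22 - a12 * a12
    c3_hat = (a22 * b1 - a12 * b2) / det
    c5_hat = (-a12 * b1 + a11 * b2) / det
    print(round(c3_hat, 6), round(c5_hat, 6))   # recovers 0.05 -0.01
    ```

    The real calibration iterates because fiducial positions are themselves estimated from the corrected images; on noise-free data the linear solve already recovers the coefficients exactly.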

  13. Gradient nonlinearity calibration and correction for a compact, asymmetric magnetic resonance imaging gradient system

    PubMed Central

    Tao, S; Trzasko, J D; Gunter, J L; Weavers, P T; Shu, Y; Huston, J; Lee, S K; Tan, E T; Bernstein, M A

    2017-01-01

    Due to engineering limitations, the spatial encoding gradient fields in conventional magnetic resonance imaging cannot be perfectly linear and always contain higher-order, nonlinear components. If ignored during image reconstruction, gradient nonlinearity (GNL) manifests as image geometric distortion. Given an estimate of the GNL field, this distortion can be corrected to a degree proportional to the accuracy of the field estimate. The GNL of a gradient system is typically characterized using a spherical harmonic polynomial model with model coefficients obtained from electromagnetic simulation. Conventional whole-body gradient systems are symmetric in design; typically, only odd-order terms up to the 5th-order are required for GNL modeling. Recently, a high-performance, asymmetric gradient system was developed, which exhibits more complex GNL that requires higher-order terms including both odd- and even-orders for accurate modeling. This work characterizes the GNL of this system using an iterative calibration method and a fiducial phantom used in ADNI (Alzheimer’s Disease Neuroimaging Initiative). The phantom was scanned at different locations inside the 26-cm diameter spherical volume of this gradient, and the positions of fiducials in the phantom were estimated. An iterative calibration procedure was utilized to identify the model coefficients that minimize the mean-squared-error between the true fiducial positions and the positions estimated from images corrected using these coefficients. To examine the effect of higher-order and even-order terms, this calibration was performed using spherical harmonic polynomials of different orders up to the 10th order, including either even- and odd-order terms or odd-order terms only. The results showed that the model coefficients of this gradient can be successfully estimated. The residual root-mean-squared-error after correction using up to the 10th-order coefficients was reduced to 0.36 mm, yielding spatial accuracy comparable to

  14. Selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays and impacts of using incorrect weighting factors on curve stability, data quality, and assay performance.

    PubMed

    Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E

    2014-09-16

    A simple procedure for selecting the correct weighting factors for linear and quadratic calibration curves with the least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of instrument responses (σ) and the concentrations (x). The weighting factor of 1, 1/x, or 1/x² should be selected if, over the entire concentration range, σ is constant, σ² is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x² should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. It was found that the most stable curve could be obtained when the correct weighting factor was used, whereas other curves using incorrect weighting factors were unstable. It was also found that there was a very insignificant impact on the concentrations reported with calibration curves using incorrect weighting factors, as the concentrations were always reported with the passing curves, which actually overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact the assay performance significantly. Finally, the difference between the weighting factors of 1/x² and 1/y² was discussed. All of the findings can be generalized and applied to other quantitative analysis techniques using calibration curves with the weighted least-squares regression algorithm.
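
    The weighted fit described above is easy to see in code. A minimal pure-Python weighted least-squares line fit on synthetic, noise-free standards (the data are illustrative; the 1/x² weights rebalance the influence of high-concentration points so that relative, rather than absolute, error is minimized):

    ```python
    # Weighted least-squares fit of a linear calibration curve y = a + b*x,
    # comparing w = 1 (unweighted) against w = 1/x**2, the factor the
    # abstract recommends when response SD grows in proportion to x.

    def weighted_linfit(x, y, w):
        """Return (a, b) minimizing sum(w_i * (y_i - a - b*x_i)**2)."""
        sw = sum(w)
        swx = sum(wi * xi for wi, xi in zip(w, x))
        swy = sum(wi * yi for wi, yi in zip(w, y))
        swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        b = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
        a = (swy - b * swx) / sw
        return a, b

    # calibration standards spanning three decades of concentration
    x = [1.0, 5.0, 10.0, 50.0, 100.0, 500.0, 1000.0]
    y = [2.0 * xi for xi in x]              # ideal response: slope 2, intercept 0
    w1 = [1.0] * len(x)                     # unweighted
    w2 = [1.0 / xi**2 for xi in x]          # 1/x^2, down-weights high standards

    a1, b1 = weighted_linfit(x, y, w1)
    a2, b2 = weighted_linfit(x, y, w2)
    print(round(b1, 6), round(b2, 6))       # both recover slope 2 on clean data
    ```

    On noise-free data both weightings recover the same line; the difference shows up with heteroscedastic noise, where 1/x² keeps the low-concentration back-calculated accuracy within limits.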

  15. Biological dosimetry of ionizing radiation: Evaluation of the dose with cytogenetic methodologies by the construction of calibration curves

    NASA Astrophysics Data System (ADS)

    Zafiropoulos, Demetre; Facco, E.; Sarchiapone, Lucia

    2016-09-01

    In case of a radiation accident, it is well known that in the absence of physical dosimetry biological dosimetry based on cytogenetic methods is a unique tool to estimate individual absorbed dose. Moreover, even when physical dosimetry indicates an overexposure, scoring chromosome aberrations (dicentrics and rings) in human peripheral blood lymphocytes (PBLs) at metaphase is presently the most widely used method to confirm dose assessment. The analysis of dicentrics and rings in PBLs after Giemsa staining of metaphase cells is considered the most valid assay for radiation injury. This work shows that applying the fluorescence in situ hybridization (FISH) technique, using telomeric/centromeric peptide nucleic acid (PNA) probes in metaphase chromosomes for radiation dosimetry, could become a fast scoring, reliable and precise method for biological dosimetry after accidental radiation exposures. In both in vitro methods described above, lymphocyte stimulation is needed, and this limits the application in radiation emergency medicine where speed is considered to be a high priority. Using premature chromosome condensation (PCC), irradiated human PBLs (non-stimulated) were fused with mitotic CHO cells, and the yield of excess PCC fragments in Giemsa stained cells was scored. To score dicentrics and rings under PCC conditions, the necessary centromere and telomere detection of the chromosomes was obtained using FISH and specific PNA probes. Of course, a prerequisite for dose assessment in all cases is a dose-effect calibration curve. This work illustrates the various methods used; dose response calibration curves, with 95% confidence limits used to estimate dose uncertainties, have been constructed for conventional metaphase analysis and FISH. We also compare the dose-response curve constructed after scoring of dicentrics and rings using PCC combined with FISH and PNA probes. Also reported are dose response curves showing scored dicentrics and rings per cell, combining
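
    Dose estimation from such a calibration curve conventionally uses the linear-quadratic model Y(D) = c + αD + βD² for the dicentric-plus-ring yield, inverted by solving the quadratic for D. The coefficients below are illustrative placeholders, not the curves constructed in this work:

    ```python
    import math

    # Invert a linear-quadratic dicentric dose-response curve
    # Y(D) = c + alpha*D + beta*D**2 to estimate dose from an observed
    # aberration yield. Coefficient values are illustrative only.

    c, alpha, beta = 0.001, 0.02, 0.06   # dicentrics/cell, per Gy, per Gy^2

    def dose_from_yield(y):
        """Solve beta*D^2 + alpha*D + (c - y) = 0 for the positive root."""
        disc = alpha ** 2 - 4.0 * beta * (c - y)
        return (-alpha + math.sqrt(disc)) / (2.0 * beta)

    y_obs = c + alpha * 2.0 + beta * 4.0   # yield produced by exactly 2 Gy
    print(round(dose_from_yield(y_obs), 6))   # recovers 2.0
    ```

    In practice the 95% confidence limits quoted in the abstract are propagated through this same inversion to bound the dose estimate.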

  16. Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting

    NASA Technical Reports Server (NTRS)

    Badavi, F. F.; Everhart, Joel L.

    1987-01-01

    This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine that is based on P. R. Bevington's description of the quadratic expansion of the Chi-Square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of Chi-Square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived, then solved by matrix algebra. A brief description of the code is presented, along with the limited number of changes required to customize the program for a particular task. To evaluate the performance of the method and the goodness of nonlinear curve fitting, two typical engineering problems are examined, and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.
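
    For a least-squares problem with unit uncertainties, the expand-and-solve cycle described above reduces to repeatedly solving the normal equations for a parameter update (a Gauss-Newton step). A compact sketch for an exponential-decay model on synthetic data (this is the general technique, not Bevington's program itself):

    ```python
    import math

    # Iterative nonlinear fit of y = a*exp(-b*x) by linearizing chi-square
    # and solving the resulting 2x2 normal equations each iteration
    # (Gauss-Newton; sigma_i = 1 for all points, data are synthetic).

    def fit_exp(x, y, a, b, iters=30):
        for _ in range(iters):
            # residuals and model derivatives w.r.t. (a, b)
            r = [yi - a * math.exp(-b * xi) for xi, yi in zip(x, y)]
            ja = [math.exp(-b * xi) for xi in x]
            jb = [-a * xi * math.exp(-b * xi) for xi in x]
            # normal equations (J^T J) delta = J^T r
            m11 = sum(v * v for v in ja)
            m12 = sum(u * v for u, v in zip(ja, jb))
            m22 = sum(v * v for v in jb)
            g1 = sum(u * v for u, v in zip(ja, r))
            g2 = sum(u * v for u, v in zip(jb, r))
            det = m11 * m22 - m12 * m12
            a += (m22 * g1 - m12 * g2) / det
            b += (-m12 * g1 + m11 * g2) / det
        return a, b

    x = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0]
    y = [3.0 * math.exp(-0.8 * xi) for xi in x]   # noise-free synthetic data
    a, b = fit_exp(x, y, a=2.0, b=0.5)
    print(round(a, 4), round(b, 4))               # converges to 3.0 0.8
    ```

    Retaining only the first term of the Taylor expansion is what makes each iteration a linear solve; near the minimum the update shrinks quadratically.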

  17. Improved quantification of important beer quality parameters based on nonlinear calibration methods applied to FT-MIR spectra.

    PubMed

    Cernuda, Carlos; Lughofer, Edwin; Klein, Helmut; Forster, Clemens; Pawliczek, Marcin; Brandstetter, Markus

    2017-01-01

    During the production process of beer, it is of utmost importance to guarantee a high consistency of the beer quality. For instance, the bitterness is an essential quality parameter which has to be controlled within the specifications at the beginning of the production process in the unfermented beer (wort) as well as in final products such as beer and beer mix beverages. Nowadays, analytical techniques for quality control in beer production are mainly based on manual supervision, i.e., samples are taken from the process and analyzed in the laboratory. This typically requires significant effort from lab technicians for only a small fraction of samples to be analyzed, which leads to significant costs for beer breweries and companies. Fourier transform mid-infrared (FT-MIR) spectroscopy was used in combination with nonlinear multivariate calibration techniques to overcome (i) the time-consuming off-line analyses in beer production and (ii) the known limitations of standard linear chemometric methods, like partial least squares (PLS), for important quality parameters such as bitterness, citric acid, total acids, free amino nitrogen, final attenuation, or foam stability (Speers et al., J I Brewing. 2003;109(3):229-235; Zhang et al., J I Brewing. 2012;118(4):361-367). The calibration models are established with enhanced nonlinear techniques based (i) on a new piece-wise linear version of PLS that employs fuzzy rules for locally partitioning the latent variable space and (ii) on extensions of support vector regression variants (ε-PLSSVR and ν-PLSSVR), for overcoming high computation times in high-dimensional problems and time-intensive, inappropriate settings of the kernel parameters. Furthermore, we introduce a new model selection scheme based on bagged ensembles in order to improve robustness and thus the predictive quality of the final models. The approaches are tested on real-world calibration data sets for wort and beer mix beverages, and successfully compared to

  18. Nonlinear problems of the theory of heterogeneous slightly curved shells

    NASA Technical Reports Server (NTRS)

    Kantor, B. Y.

    1973-01-01

    An account is given of the variational method for the solution of physically and geometrically nonlinear problems of the theory of heterogeneous slightly curved shells. Examined are the bending and supercritical behavior of plates and of conical and spherical cupolas of variable thickness in a temperature field, taking into account the dependence of the elastic parameters on temperature. Also examined are the bending, overall stability, and load-bearing capacity of flexible isotropic elastic-plastic shells under different criteria of plasticity, taking into account compressibility and hardening. The effect of the plastic heterogeneity caused by heat treatment, surface work hardening, and irradiation by fast neutron flux is investigated. Some problems of the dynamic behavior of flexible shells are solved. Calculations are performed in high approximations. Considerable attention is given to the construction of a machine algorithm and to checking the convergence of iterative processes.

  19. Exponential Decay Nonlinear Regression Analysis of Patient Survival Curves: Preliminary Assessment in Non-Small Cell Lung Cancer

    PubMed Central

    Stewart, David J.; Behrens, Carmen; Roth, Jack; Wistuba, Ignacio I.

    2010-01-01

    Background For processes that follow first order kinetics, exponential decay nonlinear regression analysis (EDNRA) may delineate curve characteristics and suggest processes affecting curve shape. We conducted a preliminary feasibility assessment of EDNRA of patient survival curves. Methods EDNRA was performed on Kaplan-Meier overall survival (OS) and time-to-relapse (TTR) curves for 323 patients with resected NSCLC and on OS and progression-free survival (PFS) curves from selected publications. Results and Conclusions In our resected patients, TTR curves were triphasic with a “cured” fraction of 60.7% (half-life [t1/2] >100,000 months), a rapidly-relapsing group (7.4%, t1/2=5.9 months) and a slowly-relapsing group (31.9%, t1/2=23.6 months). OS was uniphasic (t1/2=74.3 months), suggesting an impact of co-morbidities; hence, tumor molecular characteristics would more likely predict TTR than OS. Of 172 published curves analyzed, 72 (42%) were uniphasic, 92 (53%) were biphasic, 8 (5%) were triphasic. With first-line chemotherapy in advanced NSCLC, 87.5% of curves from 2-3 drug regimens were uniphasic vs only 20% of those with best supportive care or 1 drug (p<0.001). 54% of curves from 2-3 drug regimens had convex rapid-decay phases vs 0% with fewer agents (p<0.001). Curve convexities suggest that discontinuing chemotherapy after 3-6 cycles “synchronizes” patient progression and death. With postoperative adjuvant chemotherapy, the PFS rapid-decay phase accounted for a smaller proportion of the population than in controls (p=0.02) with no significant difference in rapid-decay t1/2, suggesting adjuvant chemotherapy may move a subpopulation of patients with sensitive tumors from the relapsing group to the cured group, with minimal impact on time to relapse for a larger group of patients with resistant tumors. In untreated patients, the proportion of patients in the rapid-decay phase increased (p=0.04) while rapid-decay t1/2 decreased (p=0.0004) with increasing
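
    The multiphasic decomposition above can be written as a sum of exponentials, S(t) = sum_i f_i * 0.5**(t / h_i). Evaluating that expression with the TTR fractions and half-lives quoted in the abstract (the effectively infinite "cured" half-life is approximated by a large constant):

    ```python
    # Triphasic relapse-free fraction from the reported TTR fit:
    # cured 60.7% (t1/2 effectively infinite), rapid 7.4% (t1/2 = 5.9 mo),
    # slow 31.9% (t1/2 = 23.6 mo).

    fractions = [0.607, 0.074, 0.319]
    half_lives = [1e9, 5.9, 23.6]     # months; 1e9 stands in for "infinite"

    def relapse_free(t_months):
        return sum(f * 0.5 ** (t_months / h)
                   for f, h in zip(fractions, half_lives))

    print(round(relapse_free(0.0), 3))    # 1.0 at time zero
    print(round(relapse_free(23.6), 3))   # about 0.771: slow group halved,
                                          # rapid group nearly exhausted
    ```

    Reading the curve this way makes the clinical interpretation direct: the long-time plateau is the cured fraction, and each half-life characterizes one relapsing subpopulation.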

  20. Estimation of the limit of detection in semiconductor gas sensors through linearized calibration models.

    PubMed

    Burgués, Javier; Jiménez-Soto, Juan Manuel; Marco, Santiago

    2018-07-12

    The limit of detection (LOD) is a key figure of merit in chemical sensing. However, the estimation of this figure of merit is hindered by the non-linear calibration curves characteristic of semiconductor gas sensor technologies such as metal oxide (MOX), gasFET, or thermoelectric sensors. Additionally, chemical sensors suffer from cross-sensitivities and temporal stability problems. The application of the International Union of Pure and Applied Chemistry (IUPAC) recommendations for univariate LOD estimation to non-linear semiconductor gas sensors is not straightforward due to the strong statistical requirements of the IUPAC methodology (linearity, homoscedasticity, normality). Here, we propose a methodological approach to LOD estimation through linearized calibration models. As an example, the methodology is applied to the detection of low concentrations of carbon monoxide using MOX gas sensors in a scenario where the main source of error is the presence of uncontrolled levels of humidity.
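
    One common linearization route (an assumed illustration, not necessarily the paper's exact procedure) exploits the power-law response of MOX sensors, R = A * C**alpha, which becomes a straight line in log-log coordinates; an IUPAC-style 3.3σ decision threshold on the signal can then be mapped back through the fitted line to a concentration-domain LOD. All numbers are synthetic:

    ```python
    import math

    # Log-log linearization of a power-law sensor response, then an
    # LOD estimate from an assumed blank-signal standard deviation.

    conc = [1.0, 2.0, 5.0, 10.0, 20.0]        # ppm standards (synthetic)
    resp = [4.0 * c ** 0.5 for c in conc]      # noise-free power-law response

    lx = [math.log10(c) for c in conc]
    ly = [math.log10(r) for r in resp]

    # ordinary least squares in the linearized (log-log) domain
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    sxx = sum((a - mx) ** 2 for a in lx)
    sxy = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    slope = sxy / sxx                          # recovers alpha = 0.5
    intercept = my - slope * mx                # log10(A)

    sigma_blank = 0.2                          # assumed SD of the blank signal
    y_crit = 3.3 * sigma_blank                 # detection decision threshold
    # invert R = A * C**alpha  =>  C = (R / A)**(1/alpha)
    lod = (y_crit / 10 ** intercept) ** (1.0 / slope)
    print(round(slope, 3), round(lod, 4))
    ```

    The appeal of working in the linearized domain is exactly the point of the abstract: the IUPAC slope-based construction becomes applicable even though the raw calibration curve is non-linear.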

  1. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    PubMed

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
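
    Wood's model, one of the seven compared above, is y(t) = a * t**b * exp(-c*t); taking logarithms makes it linear in (ln a, b, c), so a plain least-squares solve recovers the parameters. A self-contained sketch on synthetic, noise-free test-day data (the parameter values are illustrative, not estimates from the study, and the real analysis used a nonlinear mixed model rather than this log-linearization):

    ```python
    import math

    # Fit Wood's lactation model by log-linearization:
    # ln y = ln(a) + b*ln(t) - c*t, solved via 3x3 normal equations.

    def solve3(M, v):
        """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
        A = [row[:] + [v[i]] for i, row in enumerate(M)]
        for i in range(3):
            p = max(range(i, 3), key=lambda r: abs(A[r][i]))
            A[i], A[p] = A[p], A[i]
            for r in range(i + 1, 3):
                f = A[r][i] / A[i][i]
                for c in range(i, 4):
                    A[r][c] -= f * A[i][c]
        x = [0.0] * 3
        for i in (2, 1, 0):
            x[i] = (A[i][3] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
        return x

    t = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]                         # months in milk
    y = [15.0 * ti ** 0.25 * math.exp(-0.06 * ti) for ti in t]  # noise-free curve

    # design rows: [1, ln t, -t]; target: ln y
    rows = [[1.0, math.log(ti), -float(ti)] for ti in t]
    z = [math.log(yi) for yi in y]
    M = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    v = [sum(r[i] * zi for r, zi in zip(rows, z)) for i in range(3)]
    ln_a, b, c = solve3(M, v)
    print(round(math.exp(ln_a), 3), round(b, 3), round(c, 3))   # 15.0 0.25 0.06
    ```

    The log-linear trick gives good starting values; mixed-model software such as PROC NLMIXED then refines them while accounting for the per-animal random effects.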

  2. Practical calibration curve of small-type optically stimulated luminescence (OSL) dosimeter for evaluation of entrance skin dose in the diagnostic X-ray region.

    PubMed

    Takegami, Kazuki; Hayashi, Hiroaki; Okino, Hiroki; Kimoto, Natsumi; Maehata, Itsumi; Kanazawa, Yuki; Okazaki, Tohru; Kobayashi, Ikuo

    2015-07-01

    For X-ray diagnosis, the proper management of the entrance skin dose (ESD) is important. Recently, a small-type optically stimulated luminescence dosimeter (nanoDot OSL dosimeter) was made commercially available by Landauer, and it is hoped that it will be used for ESD measurements in clinical settings. Our objectives in the present study were to propose a method for calibrating the ESD measured with the nanoDot OSL dosimeter and to evaluate its accuracy. The reference ESD is assumed to be based on an air kerma with consideration of a well-known back scatter factor. We examined the characteristics of the nanoDot OSL dosimeter using two experimental conditions: a free air irradiation to derive the air kerma, and a phantom experiment to determine the ESD. For evaluation of the ability to measure the ESD, a calibration curve for the nanoDot OSL dosimeter was determined in which the air kerma and/or the ESD measured with an ionization chamber were used as references. As a result, we found that the calibration curve for the air kerma was determined with an accuracy of 5 %. Furthermore, the calibration curve was applied to the ESD estimation. The accuracy of the ESD obtained was estimated to be 15 %. The origin of these uncertainties was examined based on published papers and Monte-Carlo simulation. Most of the uncertainties were caused by the systematic uncertainty of the reading system and the differences in efficiency corresponding to different X-ray energies.

  3. Scaling the Non-linear Impact Response of Flat and Curved Composite Panels

    NASA Technical Reports Server (NTRS)

    Ambur, Damodar R.; Chunchu, Prasad B.; Rose, Cheryl A.; Feraboli, Paolo; Jackson, Wade C.

    2005-01-01

    The application of scaling laws to thin flat and curved composite panels exhibiting nonlinear response when subjected to low-velocity transverse impact is investigated. Previous research has shown that the elastic impact response of structural configurations exhibiting geometrically linear response can be effectively scaled. In the present paper, a preliminary experimental study is presented to assess the applicability of the scaling laws to structural configurations exhibiting geometrically nonlinear deformations. The effect of damage on the scalability of the structural response characteristics, and the effect of scale on damage development are also investigated. Damage is evaluated using conventional methods including C-scan, specimen de-plying and visual inspection of the impacted panels. Coefficient of restitution and normalized contact duration are also used to assess the extent of damage. The results confirm the validity of the scaling parameters for elastic impacts. However, for the panels considered in the study, the extent and manifestation of damage do not scale according to the scaling laws. Furthermore, the results indicate that even though the damage does not scale, the overall panel response characteristics, as indicated by contact force profiles, do scale for some levels of damage.

  4. New approach to calibrating bed load samplers

    USGS Publications Warehouse

    Hubbell, D.W.; Stevens, H.H.; Skinner, J.V.; Beverage, J.P.

    1985-01-01

    Cyclic variations in bed load discharge at a point, which are an inherent part of the process of bed load movement, complicate calibration of bed load samplers and preclude the use of average rates to define sampling efficiencies. Calibration curves, rather than efficiencies, are derived by two independent methods using data collected with prototype versions of the Helley‐Smith sampler in a large calibration facility capable of continuously measuring transport rates across a 9 ft (2.7 m) width. Results from both methods agree. Composite calibration curves, based on matching probability distribution functions of samples and measured rates from different hydraulic conditions (runs), are obtained for six different versions of the sampler. Sampled rates corrected by the calibration curves agree with measured rates for individual runs.
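
    The distribution-matching idea (deriving a calibration curve by pairing the probability distribution functions of sampled and measured rates, rather than forming a single average efficiency) can be sketched as quantile matching. All rates below are made up for illustration:

    ```python
    # Calibration curve by quantile matching: pair the sorted sampler
    # catches with the sorted measured transport rates, then interpolate
    # piecewise-linearly between the matched pairs.

    sampled = [2.0, 8.0, 4.0, 1.0, 6.0]       # rates caught by the sampler
    measured = [3.0, 12.0, 7.0, 1.5, 9.0]     # rates measured by the facility

    pairs = list(zip(sorted(sampled), sorted(measured)))

    def calibrate(x):
        """Map a sampled rate to a corrected rate along matched quantiles."""
        if x <= pairs[0][0]:
            return pairs[0][1]
        for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
            if x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        return pairs[-1][1]

    print(calibrate(4.0), calibrate(5.0))
    ```

    Matching whole distributions rather than averages is what lets the cyclic variability of bed load transport survive in the calibration, instead of being washed out into a single efficiency number.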

  5. Computational Methodology for Absolute Calibration Curves for Microfluidic Optical Analyses

    PubMed Central

    Chang, Chia-Pin; Nagel, David J.; Zaghloul, Mona E.

    2010-01-01

    Optical fluorescence and absorption are two of the primary techniques used for analytical microfluidics. We provide a thorough yet tractable method for computing the performance of diverse optical micro-analytical systems. Sample sizes range from nano- to many micro-liters and concentrations from nano- to milli-molar. Equations are provided to trace quantitatively the flow of the fundamental entities, namely photons and electrons, and the conversion of energy from the source, through optical components, samples and spectral-selective components, to the detectors and beyond. The equations permit facile computations of calibration curves that relate the concentrations or numbers of molecules measured to the absolute signals from the system. This methodology provides the basis for both detailed understanding and improved design of microfluidic optical analytical systems. It saves prototype turn-around time, and is much simpler and faster to use than ray tracing programs. Over two thousand spreadsheet computations were performed during this study. We found that some design variations produce higher signal levels and, for constant noise levels, lower minimum detection limits. Improvements of more than a factor of 1,000 were realized. PMID:22163573

  6. Correction for isotopic interferences between analyte and internal standard in quantitative mass spectrometry by a nonlinear calibration function.

    PubMed

    Rule, Geoffrey S; Clark, Zlatuse D; Yue, Bingfang; Rockwood, Alan L

    2013-04-16

    Stable isotope-labeled internal standards are of great utility in providing accurate quantitation in mass spectrometry (MS). An implicit assumption has been that there is no "cross talk" between signals of the internal standard and the target analyte. In some cases, however, naturally occurring isotopes of the analyte do contribute to the signal of the internal standard. This phenomenon becomes more pronounced for isotopically rich compounds, such as those containing sulfur, chlorine, or bromine, higher molecular weight compounds, and those at high analyte/internal standard concentration ratio. This can create nonlinear calibration behavior that may bias quantitative results. Here, we propose the use of a nonlinear but more accurate fitting of data for these situations that incorporates one or two constants determined experimentally for each analyte/internal standard combination and an adjustable calibration parameter. This fitting provides more accurate quantitation in MS-based assays where contributions from analyte to stable labeled internal standard signal exist. It can also correct for the reverse situation where an analyte is present in the internal standard as an impurity. The practical utility of this approach is described, and by using experimental data, the approach is compared to alternative fits.
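
    A minimal model of the cross-talk effect (an assumed simplification for illustration, not the paper's exact calibration function): if a fraction k of the analyte signal leaks into the internal-standard channel, the measured area ratio becomes hyperbolic in the analyte amount, and fitting then inverting that hyperbola removes the bias:

    ```python
    # Hyperbolic calibration when analyte isotopes contribute to the
    # internal-standard (IS) signal: R = x / (s + k*x) for analyte amount
    # x, IS signal s, and cross-contribution fraction k (all synthetic).

    k = 0.02      # assumed fraction of analyte signal in the IS channel
    s = 1.0       # IS signal from the constant spiked amount

    def measured_ratio(x):
        return x / (s + k * x)

    # "calibrate": x/r = s + k*x is linear in (s, k), so two standards
    # determine both constants exactly for this model
    x1, x2 = 1.0, 100.0
    r1, r2 = measured_ratio(x1), measured_ratio(x2)
    k_fit = (x2 / r2 - x1 / r1) / (x2 - x1)
    s_fit = x1 / r1 - k_fit * x1

    def back_calculate(r):
        """Invert the hyperbolic calibration: x = r*s / (1 - r*k)."""
        return r * s_fit / (1.0 - r * k_fit)

    r_unknown = measured_ratio(50.0)
    print(round(back_calculate(r_unknown), 6))   # recovers 50.0
    ```

    A straight-line fit through the same data would be biased at high analyte/IS ratios, which is exactly the regime the abstract identifies as problematic.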

  7. Modified Hyperspheres Algorithm to Trace Homotopy Curves of Nonlinear Circuits Composed by Piecewise Linear Modelled Devices

    PubMed Central

    Vazquez-Leal, H.; Jimenez-Fernandez, V. M.; Benhammouda, B.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Ramirez-Pinero, A.; Marin-Hernandez, A.; Huerta-Chua, J.

    2014-01-01

    We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled by using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves in order to replace the multidimensional interpolation and fine tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight line equation. PMID:25184157
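
    Because each trajectory segment is a straight line, the crossings of the homotopy parameter with λ = 1 (the circuit's operating points) can be located exactly from the parametric straight-line equation, with no interpolation or fine tuning. A sketch on a made-up PWL trajectory in (λ, v) coordinates:

    ```python
    # Locate operating points as the lambda = 1 crossings of a piecewise
    # linear homotopy trajectory, using the parametric segment equation.
    # The trajectory points below are invented for illustration.

    path = [(0.0, 0.0), (0.6, 1.2), (1.4, 2.0), (0.8, 2.6), (1.2, 3.5)]

    def crossings(path, lam=1.0):
        sols = []
        for (l0, v0), (l1, v1) in zip(path, path[1:]):
            if (l0 - lam) * (l1 - lam) <= 0 and l0 != l1:
                t = (lam - l0) / (l1 - l0)     # parametric position on segment
                sols.append(v0 + t * (v1 - v0))
        return sols

    print([round(v, 3) for v in crossings(path)])   # three operating points
    ```

    The back-and-forth of the trajectory through λ = 1 is how a single homotopy path yields multiple operating points.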

  8. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used).
To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  9. Prediction of hydrographs and flow-duration curves in almost ungauged catchments: Which runoff measurements are most informative for model calibration?

    NASA Astrophysics Data System (ADS)

    Pool, Sandra; Viviroli, Daniel; Seibert, Jan

    2017-11-01

    Applications of runoff models usually rely on long and continuous runoff time series for model calibration. However, many catchments around the world are ungauged and estimating runoff for these catchments is challenging. One approach is to perform a few runoff measurements in a previously fully ungauged catchment and to constrain a runoff model by these measurements. In this study we investigated the value of such individual runoff measurements when taken at strategic points in time for applying a bucket-type runoff model (HBV) in ungauged catchments. Based on the assumption that a limited number of runoff measurements can be taken, we sought the optimal sampling strategy (i.e. when to measure the streamflow) to obtain the most informative data for constraining the runoff model. We used twenty gauged catchments across the eastern US, made the assumption that these catchments were ungauged, and applied different runoff sampling strategies. All tested strategies consisted of twelve runoff measurements within one year and ranged from simply using monthly flow maxima to a more complex selection of observation times. In each case the twelve runoff measurements were used to select 100 best parameter sets using a Monte Carlo calibration approach. Runoff simulations using these 'informed' parameter sets were then evaluated for an independent validation period in terms of the Nash-Sutcliffe efficiency of the hydrograph and the mean absolute relative error of the flow-duration curve. Model performance measures were normalized by relating them to an upper and a lower benchmark representing a well-informed and an uninformed model calibration. The hydrographs were best simulated with strategies including high runoff magnitudes as opposed to the flow-duration curves that were generally better estimated with strategies that captured low and mean flows. The choice of a sampling strategy covering the full range of runoff magnitudes enabled hydrograph and flow-duration curve
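    The two evaluation measures named above, and the benchmark normalization, can be sketched as follows (a minimal illustration, not the authors' code; variable values are invented):

    ```python
    def nash_sutcliffe(obs, sim):
        """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
        mean_obs = sum(obs) / len(obs)
        num = sum((o - s) ** 2 for o, s in zip(obs, sim))
        den = sum((o - mean_obs) ** 2 for o in obs)
        return 1.0 - num / den

    def fdc_mare(obs, sim):
        """Mean absolute relative error between the sorted (flow-duration) curves."""
        o_sorted = sorted(obs, reverse=True)
        s_sorted = sorted(sim, reverse=True)
        return sum(abs(s - o) / o for o, s in zip(o_sorted, s_sorted)) / len(obs)

    def normalize(score, lower, upper):
        """Relate a score to an uninformed (lower) and well-informed (upper) benchmark."""
        return (score - lower) / (upper - lower)

    # illustrative daily flows (mm/d) for a validation period
    obs = [1.0, 3.0, 2.0, 5.0, 4.0]
    sim = [1.1, 2.8, 2.2, 4.6, 4.1]
    nse = nash_sutcliffe(obs, sim)
    ```

    A sampling strategy would be scored by computing these metrics for simulations driven by its 100 'informed' parameter sets and comparing the normalized values across strategies.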

  10. Curve Number Application in Continuous Runoff Models: An Exercise in Futility?

    NASA Astrophysics Data System (ADS)

    Lamont, S. J.; Eli, R. N.

    2006-12-01

    The suitability of applying the NRCS (Natural Resource Conservation Service) Curve Number (CN) to continuous runoff prediction is examined by studying the dependence of CN on several hydrologic variables in the context of a complex nonlinear hydrologic model. The continuous watershed model Hydrologic Simulation Program-FORTRAN (HSPF) was employed using a simple theoretical watershed in two numerical procedures designed to investigate the influence of soil type, soil depth, storm depth, storm distribution, and initial abstraction ratio value on the calculated CN value. This study stems from a concurrent project involving the design of a hydrologic modeling system to support the Cumulative Hydrologic Impact Assessments (CHIA) of over 230 coal-mined watersheds throughout West Virginia. Because of the large number of watersheds and limited availability of data necessary for HSPF calibration, it was initially proposed that predetermined CN values be used as a surrogate for those HSPF parameters controlling direct runoff. A soil physics model was developed to relate CN values to those HSPF parameters governing soil moisture content and infiltration behavior, with the remaining HSPF parameters being adopted from previous calibrations on real watersheds. A numerical procedure was then adopted to back-calculate CN values from the theoretical watershed using antecedent moisture conditions equivalent to the NRCS Antecedent Runoff Condition (ARC) II. This procedure used the direct runoff produced from a cyclic synthetic storm event time series input to HSPF. A second numerical method of CN determination, using real time series rainfall data, was used to provide a comparison to those CN values determined using the synthetic storm event time series. It was determined that the calculated CN values resulting from both numerical methods demonstrated a nonlinear dependence on all of the computational variables listed above. It was concluded that the use of the Curve Number as a
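    The back-calculation of CN values described above rests on the standard NRCS runoff equation; a minimal sketch (with the conventional λ = 0.2 initial abstraction ratio and depths in inches) is:

    ```python
    import math

    def runoff_depth(P, CN, lam=0.2):
        """NRCS direct runoff Q (inches) from storm depth P (inches) and curve number."""
        S = 1000.0 / CN - 10.0      # potential maximum retention
        Ia = lam * S                # initial abstraction
        if P <= Ia:
            return 0.0
        return (P - Ia) ** 2 / (P - Ia + S)

    def back_calculate_cn(P, Q):
        """Invert the runoff equation (lambda = 0.2): S = 5(P + 2Q - sqrt(4Q^2 + 5PQ))."""
        S = 5.0 * (P + 2.0 * Q - math.sqrt(4.0 * Q ** 2 + 5.0 * P * Q))
        return 1000.0 / (S + 10.0)

    Q = runoff_depth(4.0, 75.0)        # forward: storm depth and CN give runoff
    CN = back_calculate_cn(4.0, Q)     # inverse: recovers the CN from (P, Q)
    ```

    In the study, Q comes from HSPF output rather than the NRCS equation, which is why the back-calculated CN ends up depending nonlinearly on soil type, soil depth, storm depth, and distribution.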

  11. Self-calibrating multiplexer circuit

    DOEpatents

    Wahl, Chris P.

    1997-01-01

    A time domain multiplexer system with automatic determination of acceptable multiplexer output limits, error determination, or correction comprises a time domain multiplexer, a computer, a constant current source capable of at least three distinct current levels, and two series resistances employed for calibration and testing. A two-point linear calibration curve defining acceptable multiplexer voltage limits may be defined by the computer by determining the voltage output of the multiplexer for very accurately known input signals developed from predetermined current levels across the series resistances. Drift in the multiplexer may be detected by the computer when the output voltage limits expected during normal operation are exceeded, or when the relationship defined by the calibration curve is invalidated.
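    The two-point calibration and drift check described in this record can be sketched as follows (current and resistance values are illustrative, not from the patent):

    ```python
    def two_point_calibration(i1, i2, r_series, v_out1, v_out2):
        """Fit v_out = gain * v_in + offset from two known-current test points."""
        v_in1, v_in2 = i1 * r_series, i2 * r_series   # very accurately known inputs
        gain = (v_out2 - v_out1) / (v_in2 - v_in1)
        offset = v_out1 - gain * v_in1
        return gain, offset

    def within_limits(v_out, v_in, gain, offset, tol):
        """Drift check: does the output still match the calibration line?"""
        return abs(v_out - (gain * v_in + offset)) <= tol

    # 1 mA and 5 mA through a 1 kohm series resistance -> 1.0 V and 5.0 V inputs
    gain, offset = two_point_calibration(1e-3, 5e-3, 1000.0, 1.02, 5.10)
    ```

    The third current level mentioned in the record could serve as an independent check point on the fitted line.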

  12. Nonlinear elasticity in resonance experiments

    NASA Astrophysics Data System (ADS)

    Li, Xun; Sens-Schönfelder, Christoph; Snieder, Roel

    2018-04-01

    Resonant bar experiments have revealed that dynamic deformation induces nonlinearity in rocks. These experiments produce resonance curves that represent the response amplitude as a function of the driving frequency. We propose a model to reproduce the resonance curves with observed features that include (a) the log-time recovery of the resonant frequency after the deformation ends (slow dynamics), (b) the asymmetry in the direction of the driving frequency, (c) the difference between resonance curves with the driving frequency that is swept upward and downward, and (d) the presence of a "cliff" segment to the left of the resonant peak under the condition of strong nonlinearity. The model is based on a feedback cycle where the effect of softening (nonlinearity) feeds back to the deformation. This model provides a unified interpretation of both the nonlinearity and slow dynamics in resonance experiments. We further show that the asymmetry of the resonance curve is caused by the softening, which is documented by the decrease of the resonant frequency during the deformation; the cliff segment of the resonance curve is linked to a bifurcation that involves a steep change of the response amplitude when the driving frequency is changed. With weak nonlinearity, the difference between the upward- and downward-sweeping curves depends on slow dynamics; a sufficiently slow frequency sweep eliminates this up-down difference. With strong nonlinearity, the up-down difference results from both the slow dynamics and bifurcation; however, the presence of the bifurcation maintains the respective part of the up-down difference, regardless of the sweep rate.

  13. 40 CFR 89.323 - NDIR analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...

  14. 40 CFR 89.323 - NDIR analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...

  15. 40 CFR 89.323 - NDIR analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...

  16. 40 CFR 89.323 - NDIR analyzer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...
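    The linearity test in the regulation excerpts above (a linear calibration may be used if the range is within 2 percent of linear) can be sketched like this; the span-gas concentrations and analyzer responses are invented for illustration:

    ```python
    def linear_fit(x, y):
        """Ordinary least-squares line through (x, y)."""
        n = len(x)
        sx, sy = sum(x), sum(y)
        sxx = sum(v * v for v in x)
        sxy = sum(a * b for a, b in zip(x, y))
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        intercept = (sy - slope * sx) / n
        return slope, intercept

    def linear_ok(conc, response, full_scale, tol=0.02):
        """True if every point is within tol * full_scale of the fitted line."""
        slope, intercept = linear_fit(conc, response)
        return all(abs(r - (slope * c + intercept)) <= tol * full_scale
                   for c, r in zip(conc, response))

    conc = [0, 250, 500, 750, 1000]          # ppm span points (illustrative)
    resp = [0.0, 24.0, 50.0, 76.0, 100.0]    # percent of full scale
    use_linear = linear_ok(conc, resp, full_scale=100.0)
    ```

    If the test fails, the regulation's fallback is a higher-order least-squares calibration curve for that range.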

  17. Learning curve of single port laparoscopic cholecystectomy determined using the non-linear ordinary least squares method based on a non-linear regression model: An analysis of 150 consecutive patients.

    PubMed

    Han, Hyung Joon; Choi, Sae Byeol; Park, Man Sik; Lee, Jin Suk; Kim, Wan Bae; Song, Tae Jin; Choi, Sang Yong

    2011-07-01

    Single port laparoscopic surgery has come to the forefront of minimally invasive surgery. For those familiar with conventional techniques, however, this type of operation demands a different type of eye/hand coordination and involves unfamiliar working instruments. Herein, the authors describe the learning curve and the clinical outcomes of single port laparoscopic cholecystectomy for 150 consecutive patients with benign gallbladder disease. All patients underwent single port laparoscopic cholecystectomy using a homemade glove port by one of five operators with different levels of experience of laparoscopic surgery. The learning curve for each operator was fitted using the non-linear ordinary least squares method based on a non-linear regression model. Mean operating time was 77.6 ± 28.5 min. Fourteen patients (6.0%) were converted to conventional laparoscopic cholecystectomy. Complications occurred in 15 patients (10.0%), as follows: bile duct injury (n = 2), surgical site infection (n = 8), seroma (n = 2), and wound pain (n = 3). One operator achieved a learning curve plateau at 61.4 min per procedure after 8.5 cases, and his time improved by 95.3 min compared with his initial operation time. Younger surgeons showed significant decreases in mean operation time and achieved stable mean operation times. In particular, younger surgeons showed significant decreases in operation times after 20 cases. Experienced laparoscopic surgeons can safely perform single port laparoscopic cholecystectomy using conventional or angled laparoscopic instruments. The present study shows that an operator can overcome the single port laparoscopic cholecystectomy learning curve in about eight cases.
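    A learning curve of this kind is commonly fitted with a plateau model such as T(n) = plateau + gain · exp(−n/rate); the sketch below (not the authors' exact model, and with synthetic operating times) uses ordinary least squares: a grid search over the nonlinear rate parameter with a closed-form linear fit for the other two parameters.

    ```python
    import math

    def fit_learning_curve(times):
        """Fit T(n) = plateau + gain * exp(-n / rate) by OLS over a rate grid."""
        n = list(range(1, len(times) + 1))
        best = None
        for rate in [r / 2 for r in range(1, 61)]:          # candidate rates 0.5..30
            x = [math.exp(-k / rate) for k in n]
            mx, my = sum(x) / len(x), sum(times) / len(times)
            sxx = sum((v - mx) ** 2 for v in x)
            sxy = sum((a - mx) * (b - my) for a, b in zip(x, times))
            gain = sxy / sxx                                 # linear LS for gain
            plateau = my - gain * mx                         # and intercept
            sse = sum((plateau + gain * xi - t) ** 2 for xi, t in zip(x, times))
            if best is None or sse < best[0]:
                best = (sse, plateau, gain, rate)
        return best[1:]   # (plateau, gain, rate)

    # synthetic per-case operating times (min) that level off near 60 min
    times = [150.0, 120.0, 97.0, 83.0, 74.0, 68.0, 65.0, 63.0, 61.0, 60.5]
    plateau, gain, rate = fit_learning_curve(times)
    ```

    The fitted plateau corresponds to the stable operating time and the rate to how quickly the surgeon approaches it.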

  18. SUMS calibration test report

    NASA Technical Reports Server (NTRS)

    Robertson, G.

    1982-01-01

    Calibration was performed on the shuttle upper atmosphere mass spectrometer (SUMS). The results of the calibration and the as-run test procedures are presented. The output data are described, and engineering data conversion factors, tables and curves, and calibration of the instrument gauges are included. Static calibration results are given, including: instrument sensitivity versus external pressure for N2 and O2; data from each calibration scan; data plots for N2 and O2; sensitivity of SUMS at the inlet for N2 and O2; and the ratios of 14/28 for nitrogen and 16/32 for oxygen.

  19. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch as well as the individual photoresponse of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear, or piecewise linear, models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher order polynomial NUC algorithms feasible. This study comprehensively tests higher order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity, in post-corrected data, when using a third order polynomial correction algorithm rather than a second order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next generation model validation and performance benchmarks for higher order polynomial NUC methods.
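    A per-pixel third-order polynomial NUC of the kind tested here can be sketched on a toy array (this is an illustration of the idea, not the paper's implementation; the pixel model and flux levels are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    flux = np.array([0.1, 0.3, 0.5, 0.7, 0.9])     # uniform illumination levels
    # 4x4 toy focal plane: each pixel has its own gain, offset, mild nonlinearity
    gain = rng.uniform(0.8, 1.2, (4, 4))
    offset = rng.uniform(-0.05, 0.05, (4, 4))
    raw = gain[..., None] * flux + offset[..., None] \
        + 0.05 * gain[..., None] * flux ** 2

    reference = raw.mean(axis=(0, 1))               # target: frame-mean response
    coeffs = np.empty((4, 4, 4))                    # one cubic per pixel
    for i in range(4):
        for j in range(4):
            # cubic mapping this pixel's raw counts onto the reference response
            coeffs[i, j] = np.polyfit(raw[i, j], reference, 3)

    corrected = np.array([[np.polyval(coeffs[i, j], raw[i, j])
                           for j in range(4)] for i in range(4)])
    residual_nu = corrected.std(axis=(0, 1)).max()  # spatial non-uniformity left
    ```

    Storing four coefficients per pixel is the memory cost the abstract weighs against the piecewise-linear alternative.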

  20. Parameter estimation procedure for complex non-linear systems: calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch.

    PubMed

    Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K

    2001-01-01

    When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and often hampered by local minima problems. In this paper a new straightforward and automatic procedure, which is based on the response surface method (RSM) for selecting the best identifiable parameters, is proposed. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. However, in this paper RSM is used for selecting the dominant parameters, by evaluating parameters sensitivity in a predefined region. Good results obtained in calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch proved that the proposed procedure is successful and reliable.

  1. Dependency of EBT2 film calibration curve on postirradiation time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Liyun, E-mail: liyunc@isu.edu.tw; Ding, Hueisch-Jy; Ho, Sheng-Yow

    2014-02-15

    Purpose: The Ashland Inc. product EBT2 film model is a widely used quality assurance tool, especially for verification of 2-dimensional dose distributions. In general, the calibration film and the dose measurement film are irradiated, scanned, and calibrated at the same postirradiation time (PIT), 1-2 days after the films are irradiated. However, for a busy clinic or in some special situations, the PIT for the dose measurement film may be different from that of the calibration film. In this case, the measured dose will be incorrect. This paper proposed a film calibration method that includes the effect of PIT. Methods: The dose versus film optical density was fitted to a power function with three parameters. One of these parameters was PIT dependent, while the other two were found to be almost constant with a standard deviation of the mean less than 4%. The PIT-dependent parameter was fitted to another power function of PIT. The EBT2 film model was calibrated using the PDD method with 14 different PITs ranging from 1 h to 2 months. Ten of the fourteen PITs were used for finding the fitting parameters, and the other four were used for testing the model. Results: The verification test shows that the differences between the delivered doses and the film doses calculated with this modeling were mainly within 2% for delivered doses above 60 cGy, and the total uncertainties were generally under 5%. The errors and total uncertainties of film dose calculation were independent of the PIT using the proposed calibration procedure. However, the fitting uncertainty increased with decreasing dose or PIT, but stayed below 1.3% for this study. Conclusions: The EBT2 film dose can be modeled as a function of PIT. For the ease of routine calibration, five PITs were suggested to be used. It is recommended that two PITs be located in the fast developing period (1∼6 h), one in 1∼2 days, one around a week, and one around a month.
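    The model structure described in the Methods (a three-parameter power function of optical density, with one parameter itself a power function of PIT) can be sketched as below; all parameter values here are invented for illustration, not the paper's fitted values:

    ```python
    def film_dose(net_od, pit_hours, a0=500.0, a1=-0.02, b=1.8):
        """Toy version of the paper's model: D = a(PIT) * OD**b,
        with the PIT-dependent parameter a(PIT) = a0 * PIT**a1."""
        a = a0 * pit_hours ** a1       # parameter that drifts with PIT
        return a * net_od ** b

    # the same optical density read at two PITs maps to different doses,
    # which is why calibration must account for PIT
    d_1h = film_dose(0.4, 1.0)
    d_48h = film_dose(0.4, 48.0)
    ```

    With a negative PIT exponent the film darkens over time, so a given optical density corresponds to a smaller dose at later PITs.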

  2. On the nonlinear stability of the unsteady, viscous flow of an incompressible fluid in a curved pipe

    NASA Technical Reports Server (NTRS)

    Shortis, Trudi A.; Hall, Philip

    1995-01-01

    The stability of the flow of an incompressible, viscous fluid through a pipe of circular cross-section curved about a central axis is investigated in a weakly nonlinear regime. A sinusoidal pressure gradient with zero mean is imposed, acting along the pipe. A WKBJ perturbation solution is constructed, taking into account the need for an inner solution in the vicinity of the outer bend, which is obtained by identifying the saddle point of the Taylor number in the complex plane of the cross-sectional angle co-ordinate. The equation governing the nonlinear evolution of the leading order vortex amplitude is thus determined. The stability analysis of this flow to periodic disturbances leads to a partial differential system dependent on three variables, and since the differential operators in this system are periodic in time, Floquet theory may be applied to reduce this system to a coupled infinite system of ordinary differential equations, together with homogeneous uncoupled boundary conditions. The eigenvalues of this system are calculated numerically to predict a critical Taylor number consistent with the analysis of Papageorgiou. A discussion of how nonlinear effects alter the linear stability analysis is also given, and the nature of the instability determined.

  3. SU-F-J-65: Prediction of Patient Setup Errors and Errors in the Calibration Curve from Prompt Gamma Proton Range Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, J; Labarbe, R; Sterpin, E

    2016-06-15

    Purpose: To understand the extent to which the prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added a 5% distal Gaussian noise to each beamlet in order to introduce discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s and θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0 mm and an underrange of 0.6 mm on average. Compared to the situations where s, θ, and u were ignored, these values were larger: 2.1 mm and 4.3 mm. In order to quantify the need for setup error corrections, we also performed computations in which u was corrected for, but s and θ were not. This yielded: 3.2 mm and 3.2 mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity of correcting for setup errors and the errors in the calibration curve. The simplicity and speed of our method make it a good candidate for implementation as a tool for in-room adaptive therapy. This work also demonstrates that prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.

  4. Financial model calibration using consistency hints.

    PubMed

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.

  5. GIADA: extended calibration activity: . the Electrostatic Micromanipulator

    NASA Astrophysics Data System (ADS)

    Sordini, R.; Accolla, M.; Della Corte, V.; Rotundi, A.

    GIADA (Grain Impact Analyser and Dust Accumulator), one of the scientific instruments onboard the Rosetta/ESA space mission, is devoted to studying the dynamical properties of dust particles ejected by the short-period comet 67P/Churyumov-Gerasimenko. In preparation for the scientific phase of the mission, we are performing laboratory calibration activities on the GIADA Proto Flight Model (PFM), housed in a clean room in our laboratory. The aim of the calibration activity is to characterize the response curves of the GIADA measurement sub-systems. These curves are then correlated with the calibration curves obtained for the GIADA payload onboard the Rosetta S/C. The calibration activity involves two of the three sub-systems constituting GIADA: the Grain Detection System (GDS) and the Impact Sensor (IS). To get reliable calibration curves, a statistically relevant number of grains have to be dropped or shot into the GIADA instrument. Particle composition, structure, size, optical properties and porosity have been selected in order to obtain realistic cometary dust analogues. For each selected type of grain, we estimated that at least one hundred shots are needed to obtain a calibration curve. In order to manipulate such a large number of particles, we have designed and developed an innovative electrostatic system able to capture, manipulate and shoot particles with sizes in the range 20-500 μm. The Electrostatic Micromanipulator (EM) is installed on a manual handling system composed of X-Y-Z micrometric slides with a 360° rotational stage along Z, and mounted on an optical bench. In the present work, we present tests of the EM using ten different materials with dimensions in the range 50-500 μm: the experimental results are in compliance with the requirements.

  6. Nonlinear Growth Models in M"plus" and SAS

    ERIC Educational Resources Information Center

    Grimm, Kevin J.; Ram, Nilam

    2009-01-01

    Nonlinear growth curves or growth curves that follow a specified nonlinear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this article we describe how a variety of sigmoid curves can be fit using the M"plus" structural modeling program and the nonlinear…

  7. Measurement and fitting techniques for the assessment of material nonlinearity using nonlinear Rayleigh waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torello, David; Kim, Jin-Yeon; Qu, Jianmin

    2015-03-31

    This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β₁₁ is proposed based on a nonlinear least squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens, and a measured ratio β₁₁(7075)/β₁₁(2024) of 1.363 agrees well with previous literature and earlier work.

  8. Experimental Determination of the HPGe Spectrometer Efficiency Calibration Curves for Various Sample Geometry for Gamma Energy from 50 keV to 2000 keV

    NASA Astrophysics Data System (ADS)

    Saat, Ahmad; Hamzah, Zaini; Yusop, Mohammad Fariz; Zainal, Muhd Amiruddin

    2010-07-01

    Detection efficiency of a gamma-ray spectrometry system is dependent upon, among others, the energy, the sample and detector geometry, and the volume and density of the samples. In the present study the efficiency calibration curves of a newly acquired (August 2008) HPGe gamma-ray spectrometry system were determined for four sample container geometries, namely Marinelli beaker, disc, cylindrical beaker and vial, normally used for activity determination of gamma rays from environmental samples. Calibration standards were prepared by homogenizing a known amount of analytical grade uranium trioxide ore in plain flour in the respective containers. The ore produces gamma-rays of energy ranging from 53 keV to 1001 keV. Analytical grade potassium chloride was prepared to determine the detection efficiency of the 1460 keV gamma-ray emitted by the potassium isotope K-40. Plots of detection efficiency against gamma-ray energy for the four sample geometries were found to fit smoothly to a general form ε = A·E^a + B·E^b, where ε is the efficiency; E is the energy in keV; and A, B, a and b are constants that depend on the sample geometry. All calibration curves showed the presence of a "knee" at about 180 keV. Comparison between the four geometries showed that the efficiency of the Marinelli beaker is higher than those of the cylindrical beaker and vial, while the disc showed the lowest.
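    Once fitted, an efficiency curve of this form is used to convert a net peak count rate into activity; a sketch follows, with invented constants (not the paper's fitted values) and the usual activity relation A = N / (t · ε · P_γ):

    ```python
    def efficiency(E, A, a, B, b):
        """Detection efficiency at gamma energy E (keV): eps = A*E**a + B*E**b."""
        return A * E ** a + B * E ** b

    def activity_bq(net_counts, live_time_s, eff, emission_prob):
        """Activity in Bq from net peak counts, live time, efficiency, and
        gamma emission probability."""
        return net_counts / (live_time_s * eff * emission_prob)

    # illustrative constants for one geometry; Cs-137 line at 661.7 keV
    eps_662 = efficiency(662.0, A=2.5, a=-0.9, B=-40.0, b=-1.8)
    act = activity_bq(net_counts=12000, live_time_s=3600, eff=eps_662,
                      emission_prob=0.851)
    ```

    In practice a separate (A, B, a, b) set would be stored for each of the four container geometries.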

  9. Calibration of thermocouple psychrometers and moisture measurements in porous materials

    NASA Astrophysics Data System (ADS)

    Guz, Łukasz; Sobczuk, Henryk; Połednik, Bernard; Guz, Ewa

    2016-07-01

    The paper presents an in situ method for calibrating Peltier psychrometric sensors, which allows determination of water potential. Water potential can be easily recalculated into the moisture content of a porous material. In order to obtain correct results of water potential, each probe should be calibrated. NaCl solutions with molar concentrations of 0.4 M, 0.7 M, 1.0 M and 1.4 M were used for calibration, which gave osmotic potentials in the range -1791 kPa to -6487 kPa. Traditionally, the value of the voltage generated on the thermocouples during wet-bulb temperature depression is calculated in order to determine the calibration function for psychrometric in situ sensors. In the new calibration method, the area under the psychrometric curve, along with the Peltier cooling current and its duration, was taken into consideration. During calibration, different cooling currents were applied for each salt solution, i.e. 3, 5 and 8 mA, as well as different cooling durations for each current (from 2 to 100 s in 2 s steps). Afterwards, the shape of each psychrometric curve was examined and the value of the area under the psychrometric curve was computed. Results of the experiment indicate that there is a robust correlation between the area under the psychrometric curve and the water potential. Calibration formulas were derived on the basis of this relationship.
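    The area ("field") under the recorded psychrometric curve is a simple numerical integral of the thermocouple voltage over time; a sketch with invented readings:

    ```python
    def curve_area(times_s, microvolts):
        """Trapezoidal-rule area under the psychrometric (voltage vs time) curve."""
        return sum((t2 - t1) * (v1 + v2) / 2.0
                   for t1, t2, v1, v2 in zip(times_s, times_s[1:],
                                             microvolts, microvolts[1:]))

    t = [0, 1, 2, 3, 4, 5]                 # seconds after Peltier cooling stops
    uv = [0.0, 5.2, 4.1, 3.0, 2.1, 1.4]    # thermocouple output, microvolts
    area = curve_area(t, uv)
    ```

    The calibration then regresses this area (for each cooling current and duration) against the known osmotic potential of the salt solution.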

  10. Receiver calibration and the nonlinearity parameter measurement of thick solid samples with diffraction and attenuation corrections.

    PubMed

    Jeong, Hyunjo; Barnard, Daniel; Cho, Sungjong; Zhang, Shuzeng; Li, Xiongbing

    2017-11-01

    This paper presents analytical and experimental techniques for accurate determination of the nonlinearity parameter (β) in thick solid samples. When piezoelectric transducers are used for β measurements, the receiver calibration is required to determine the transfer function from which the absolute displacement can be calculated. The measured fundamental and second harmonic displacement amplitudes should be modified to account for beam diffraction and material absorption. All these issues are addressed in this study and the proposed technique is validated through the β measurements of thick solid samples. A simplified self-reciprocity calibration procedure for a broadband receiver is described. The diffraction and attenuation corrections for the fundamental and second harmonics are explicitly derived. Aluminum alloy samples in five different thicknesses (4, 6, 8, 10, 12 cm) are prepared and β measurements are made using the finite amplitude, through-transmission method. The effects of diffraction and attenuation corrections on β measurements are systematically investigated. When diffraction and attenuation corrections are all properly made, the variation of β between different thickness samples is found to be less than 3.2%. Copyright © 2017 Elsevier B.V. All rights reserved.
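    Before the diffraction and attenuation corrections the paper derives, the basic finite-amplitude relation is β = 8·A₂ / (k²·x·A₁²), with A₁ and A₂ the fundamental and second-harmonic displacement amplitudes, k the wavenumber, and x the propagation distance. A worked sketch with illustrative numbers:

    ```python
    import math

    def beta_parameter(A1, A2, freq_hz, velocity_m_s, distance_m):
        """Uncorrected nonlinearity parameter: beta = 8*A2 / (k**2 * x * A1**2)."""
        k = 2.0 * math.pi * freq_hz / velocity_m_s    # wavenumber
        return 8.0 * A2 / (k ** 2 * distance_m * A1 ** 2)

    # illustrative values: 10 nm fundamental, 40 pm second harmonic,
    # 5 MHz tone burst in aluminum (~6300 m/s), 6 cm propagation path
    beta = beta_parameter(A1=10e-9, A2=40e-12,
                          freq_hz=5e6, velocity_m_s=6300.0, distance_m=0.06)
    ```

    The receiver calibration supplies the absolute A₁ and A₂ values; the paper's corrections then adjust them for diffraction and attenuation before this relation is applied.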

  11. Efficient gradient calibration based on diffusion MRI.

    PubMed

    Teh, Irvin; Maguire, Mahon L; Schneider, Jürgen E

    2017-01-01

    To propose a method for calibrating gradient systems and correcting gradient nonlinearities based on diffusion MRI measurements. The gradient scaling factors in x, y, and z were first offset by up to 5% from precalibrated values to simulate a poorly calibrated system. Diffusion MRI data were acquired in a phantom filled with cyclooctane, and corrections for gradient scaling errors and nonlinearity were determined. The calibration was assessed with diffusion tensor imaging and independently validated with high resolution anatomical MRI of a second structured phantom. The errors in apparent diffusion coefficients along orthogonal axes ranged from -9.2% ± 0.4% to + 8.8% ± 0.7% before calibration and -0.5% ± 0.4% to + 0.8% ± 0.3% after calibration. Concurrently, fractional anisotropy decreased from 0.14 ± 0.03 to 0.03 ± 0.01. Errors in geometric measurements in x, y and z ranged from -5.5% to + 4.5% precalibration and were likewise reduced to -0.97% to + 0.23% postcalibration. Image distortions from gradient nonlinearity were markedly reduced. Periodic gradient calibration is an integral part of quality assurance in MRI. The proposed approach is both accurate and efficient, can be set up with readily available materials, and improves accuracy in both anatomical and diffusion MRI to within ±1%. Magn Reson Med 77:170-179, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. © 2016 Wiley Periodicals, Inc.
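    The principle behind this calibration can be sketched as follows: a gradient scaling error δ makes the true b-value (1+δ)² times the nominal one, so the apparent ADC in a phantom of known diffusivity is scaled by (1+δ)², and the correction factor follows from a square root. This is an illustrative sketch of that relation, not the authors' pipeline; the phantom diffusivity value is an assumption.

    ```python
    import math

    def gradient_correction(adc_measured, adc_reference):
        """Factor to multiply the gradient scaling by, from the apparent ADC error."""
        delta = math.sqrt(adc_measured / adc_reference) - 1.0
        return 1.0 / (1.0 + delta)

    D_ref = 8.5e-10            # m^2/s, assumed phantom diffusivity
    D_meas = D_ref * 1.088     # +8.8% apparent ADC error, as in the abstract
    corr = gradient_correction(D_meas, D_ref)
    ```

    Applying the corrected scaling rescales b by corr², which brings the measured ADC back onto the reference value; in the paper this is done independently per axis.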

  12. Efficient gradient calibration based on diffusion MRI

    PubMed Central

    Teh, Irvin; Maguire, Mahon L.

    2016-01-01

    Purpose To propose a method for calibrating gradient systems and correcting gradient nonlinearities based on diffusion MRI measurements. Methods The gradient scaling factors in x, y, and z were first offset by up to 5% from precalibrated values to simulate a poorly calibrated system. Diffusion MRI data were acquired in a phantom filled with cyclooctane, and corrections for gradient scaling errors and nonlinearity were determined. The calibration was assessed with diffusion tensor imaging and independently validated with high resolution anatomical MRI of a second structured phantom. Results The errors in apparent diffusion coefficients along orthogonal axes ranged from −9.2% ± 0.4% to + 8.8% ± 0.7% before calibration and −0.5% ± 0.4% to + 0.8% ± 0.3% after calibration. Concurrently, fractional anisotropy decreased from 0.14 ± 0.03 to 0.03 ± 0.01. Errors in geometric measurements in x, y and z ranged from −5.5% to + 4.5% precalibration and were likewise reduced to −0.97% to + 0.23% postcalibration. Image distortions from gradient nonlinearity were markedly reduced. Conclusion Periodic gradient calibration is an integral part of quality assurance in MRI. The proposed approach is both accurate and efficient, can be set up with readily available materials, and improves accuracy in both anatomical and diffusion MRI to within ±1%. Magn Reson Med 77:170–179, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. PMID:26749277

  13. Calibration and accuracy analysis of a focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2014-08-01

    In this article, we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve fitting approach based on a Taylor series approximation. Both model-based methods show significant advantages compared to the curve fitting method: they need fewer reference points for calibration and, moreover, supply a function that remains valid beyond the range of calibration. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.

  14. Static SPME sampling of VOCs emitted from indoor building materials: prediction of calibration curves of single compounds for two different emission cells.

    PubMed

    Mocho, Pierre; Desauziers, Valérie

    2011-05-01

    Solid-phase microextraction (SPME) is a powerful technique, easy to implement for on-site static sampling of indoor VOCs emitted by building materials. However, a major constraint lies in the establishment of calibration curves, which requires complex generation of standard atmospheres. Thus, the purpose of this paper is to propose a model to predict adsorption kinetics (i.e., calibration curves) of four model VOCs. The model is based on Fick's laws for the gas phase and on the equilibrium or the solid diffusion model for the adsorptive phase. Two samplers (the FLEC® and a home-made cylindrical emission cell), coupled to SPME for static sampling of material emissions, were studied. Good agreement between modeling and experimental data is observed, and the results show the influence of sampling rate on the mass transfer mode as a function of sample volume. The equilibrium model is suited to the larger-volume sampler (cylindrical cell), while the solid diffusion model applies to the small-volume sampler (FLEC®). The limiting steps of mass transfer are diffusion in the gas phase for the cylindrical cell and pore surface diffusion for the FLEC®. In the future, this modeling approach could be a useful tool for time-saving development of SPME to study building material emissions in static sampling mode.

  15. Continuous functional magnetic resonance imaging reveals dynamic nonlinearities of "dose-response" curves for finger opposition.

    PubMed

    Berns, G S; Song, A W; Mao, H

    1999-07-15

    Linear experimental designs have dominated the field of functional neuroimaging, but although successful at mapping regions of relative brain activation, the technique assumes that both cognition and brain activation are linear processes. To test these assumptions, we performed a continuous functional magnetic resonance imaging (fMRI) experiment of finger opposition. Subjects performed a visually paced bimanual finger-tapping task. The frequency of finger tapping was continuously varied between 1 and 5 Hz, without any rest blocks. After continuous acquisition of fMRI images, the task-related brain regions were identified with independent components analysis (ICA). When the time courses of the task-related components were plotted against tapping frequency, nonlinear "dose-response" curves were obtained for most subjects. Nonlinearities appeared in both the static and dynamic sense, with hysteresis being prominent in several subjects. The ICA decomposition also demonstrated the spatial dynamics, with different components active at different times. These results suggest that the brain response to tapping frequency does not scale linearly, and that it is history-dependent even after accounting for the hemodynamic response function. This implies that finger tapping, as measured with fMRI, is a nonstationary process. When analyzed with a conventional general linear model, a strong correlation to tapping frequency was identified, but the spatiotemporal dynamics were not apparent.

  16. Psychophysical contrast calibration

    PubMed Central

    To, Long; Woods, Russell L; Goldstein, Robert B; Peli, Eli

    2013-01-01

    Electronic displays and computer systems offer numerous advantages for clinical vision testing. Laboratory and clinical measurements of various functions and in particular of (letter) contrast sensitivity require accurately calibrated display contrast. In the laboratory this is achieved using expensive light meters. We developed and evaluated a novel method that uses only psychophysical responses of a person with normal vision to calibrate the luminance contrast of displays for experimental and clinical applications. Our method combines psychophysical techniques (1) for detection (and thus elimination or reduction) of display saturating nonlinearities; (2) for luminance (gamma function) estimation and linearization without use of a photometer; and (3) to measure without a photometer the luminance ratios of the display’s three color channels that are used in a bit-stealing procedure to expand the luminance resolution of the display. Using a photometer we verified that the calibration achieved with this procedure is accurate for both LCD and CRT displays enabling testing of letter contrast sensitivity to 0.5%. Our visual calibration procedure enables clinical, internet and home implementation and calibration verification of electronic contrast testing. PMID:23643843
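A minimal sketch of the linearization step described above, assuming a gamma estimate is already in hand (the power-law display model L = (v/255)^gamma and the 8-bit range are assumptions for illustration, not details from the paper):

```python
def gamma_lut(gamma, levels=256):
    """Build an inverse-gamma lookup table so that requested gray levels
    map linearly to luminance on a display with power-law response
    L = (v / (levels - 1)) ** gamma."""
    lut = []
    for v in range(levels):
        target = v / (levels - 1)                       # desired linear luminance, 0..1
        cmd = round((target ** (1.0 / gamma)) * (levels - 1))
        lut.append(cmd)
    return lut

lut = gamma_lut(2.2)
# After correction, displayed luminance is approximately linear in the index:
lum = [(lut[v] / 255.0) ** 2.2 for v in range(256)]
```

With the table applied, a mid-gray request (index 128) lands within about half a percent of half the maximum luminance, which is the point of the linearization.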

  17. Nonlinear bias analysis and correction of microwave temperature sounder observations for FY-3C meteorological satellite

    NASA Astrophysics Data System (ADS)

    Hu, Taiyang; Lv, Rongchuan; Jin, Xu; Li, Hao; Chen, Wenxin

    2018-01-01

    The nonlinear bias analysis and correction of receiving channels in the Chinese FY-3C meteorological satellite Microwave Temperature Sounder (MWTS) is a key technology of data assimilation for satellite radiance data. The thermal-vacuum chamber calibration data acquired from the MWTS can be analyzed to evaluate the instrument performance, including radiometric temperature sensitivity, channel nonlinearity, and calibration accuracy. In particular, the nonlinearity parameters due to imperfect square-law detectors are calculated from the calibration data and further used to correct the nonlinear bias contributions of the microwave receiving channels. Based upon the operational principles and thermal-vacuum chamber calibration procedures of the MWTS, this paper mainly focuses on the nonlinear bias analysis and correction methods for improving the calibration accuracy of this important instrument onboard the FY-3C meteorological satellite, from the perspective of theoretical and experimental studies. Furthermore, a series of original results are presented to demonstrate the feasibility and significance of the methods.

  18. Nonlinear Dynamic of Curved Railway Tracks in Three-Dimensional Space

    NASA Astrophysics Data System (ADS)

    Liu, X.; Ngamkhanong, C.; Kaewunruen, S.

    2017-12-01

    On curved tracks, high-pitch noise pollution can often be a considerable concern for rail asset owners, commuters, and people living or working along the rail corridor. Inevitably, the wheel/rail interface can cause a traveling source of sound and vibration, which spreads over a long distance of the rail network. The sound and vibration can take various forms and spectra. The undesirable sound and vibration on curves, often called 'noise,' includes flanging and squealing noises. This paper focuses on the squeal noise phenomena on curved tracks located in urban environments. It highlights the effect of curve radii on lateral track dynamics. It is important to note that rail freight curve noises, especially curve squeals, can be observed almost everywhere and on every type of track structure. The most pressing noise appears on sharper curved tracks, where excessive lateral wheel/rail dynamics resonate with falling friction states, generating a tonal noise problem, the so-called 'squeal.' Many researchers have carried out measurements and simulations to understand the actual root causes of the squeal noise. Most researchers believe that wheel resonance over falling friction is the main cause, whilst a few others think that dynamic mode coupling of wheel and rail may also cause the squeal. Therefore, this paper is devoted to a systems-thinking approach and dynamic assessment for resolving railway curve noise problems. Simulations of railway tracks with different curve radii will be carried out to develop state-of-the-art understanding of lateral track dynamics, including rail dynamics, cant dynamics, gauge dynamics, and overall track responses.

  19. Linear and nonlinear trending and prediction for AVHRR time series data

    NASA Technical Reports Server (NTRS)

    Smid, J.; Volf, P.; Slama, M.; Palus, M.

    1995-01-01

    The variability of the AVHRR calibration coefficients in time was analyzed using algorithms of linear and non-linear time series analysis. Specifically, we used spline trend modeling, autoregressive process analysis, an incremental neural network learning algorithm, and redundancy functional testing. The analysis performed on the available AVHRR data sets revealed that (1) the calibration data have nonlinear dependencies, (2) the calibration data depend strongly on the target temperature, (3) both the calibration coefficient and temperature time series can be modeled, to a first approximation, as autonomous dynamical systems, and (4) the high frequency residuals of the analyzed data sets are best modeled as a 10th-order autoregressive process. We have dealt with a nonlinear identification problem and the problem of noise filtering (data smoothing). System identification and filtering are significant problems for AVHRR data sets. The algorithms outlined in this study can be used for future EOS missions. Prediction and smoothing algorithms for time series of calibration data provide a functional characterization of the data. These algorithms can be particularly useful when calibration data are incomplete or sparse.
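The 10th-order autoregressive fit mentioned above generalizes the lag-1 case. A minimal sketch of estimating an AR(1) coefficient by least squares on a zero-mean series (the synthetic series and true coefficient 0.8 are illustrative, not AVHRR data):

```python
import random

def fit_ar1(x):
    """Least-squares estimate of phi in x[t] = phi * x[t-1] + noise
    for a zero-mean series; an AR(10) model adds nine more lagged terms."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

# synthetic AR(1) series with known coefficient 0.8
rng = random.Random(0)
x = [0.0]
for _ in range(2000):
    x.append(0.8 * x[-1] + rng.gauss(0.0, 1.0))

phi_hat = fit_ar1(x)
```

With 2000 samples the estimate recovers the generating coefficient to within a few percent, which is the sense in which such residual models "characterize" the data.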

  20. Gold Nanoparticle-Aptamer-Based LSPR Sensing of Ochratoxin A at a Widened Detection Range by Double Calibration Curve Method

    NASA Astrophysics Data System (ADS)

    Liu, Boshi; Huang, Renliang; Yu, Yanjun; Su, Rongxin; Qi, Wei; He, Zhimin

    2018-04-01

    Ochratoxin A (OTA) is a mycotoxin generated by the metabolism of Aspergillus and Penicillium, and it is extremely toxic to humans, livestock, and poultry. However, traditional assays for the detection of OTA are expensive and complicated. Besides the OTA aptamer, OTA itself at high concentration can also adsorb onto the surface of gold nanoparticles (AuNPs) and further inhibit AuNP salt aggregation. We herein report a new OTA assay that applies the localized surface plasmon resonance effect of AuNPs and their aggregates. The result obtained from a single linear calibration curve is not reliable, so we developed a "double calibration curve" method to address this issue and widen the OTA detection range. A number of other analytes were also examined, and the structural properties of analytes that bind to the AuNPs were further discussed. We found that various considerations must be taken into account in the detection of these analytes when applying AuNP aggregation-based methods, owing to their different binding strengths.
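The "double calibration curve" idea can be sketched as inverting a piecewise-linear calibration: one linear fit covers the low-concentration range, a second the high range, and the measured signal selects which fit to invert. All slopes, intercepts, and the breakpoint below are hypothetical numbers, not values from the paper:

```python
def make_double_calibration(low_fit, high_fit, breakpoint_signal):
    """Invert a two-segment linear calibration. Each fit is (slope, intercept)
    of signal = slope * concentration + intercept over its own range."""
    def signal_to_conc(signal):
        slope, intercept = low_fit if signal < breakpoint_signal else high_fit
        return (signal - intercept) / slope
    return signal_to_conc

# hypothetical fits: the low range is steeper (more sensitive) than the high range,
# and the two segments meet at the breakpoint signal of 2.1
to_conc = make_double_calibration(low_fit=(2.0, 0.1),
                                  high_fit=(0.5, 1.6),
                                  breakpoint_signal=2.1)
```

Choosing the fits so the segments meet at the breakpoint avoids a jump in recovered concentration at the range boundary.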

  1. Nonlinear radiative heat transfer and Hall effects on a viscous fluid in a semi-porous curved channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbas, Z.; Naveed, M., E-mail: rana.m.naveed@gmail.com; Sajid, M.

    In this paper, the effects of Hall currents and nonlinear radiative heat transfer in a viscous fluid passing through a semi-porous curved channel coiled in a circle of radius R are analyzed. A curvilinear coordinate system is used to develop the mathematical model of the considered problem in the form of partial differential equations. Similarity solutions of the governing boundary value problems are obtained numerically using a shooting method. The results are also validated with the well-known finite difference technique known as the Keller-Box method. The analysis of the pertinent parameters' effects on the velocity and temperature distributions is presented through graphs and tables.

  2. Quantitative evaluation method for nonlinear characteristics of piezoelectric transducers under high stress with complex nonlinear elastic constant

    NASA Astrophysics Data System (ADS)

    Miyake, Susumu; Kasashima, Takashi; Yamazaki, Masato; Okimura, Yasuyuki; Nagata, Hajime; Hosaka, Hiroshi; Morita, Takeshi

    2018-07-01

    The high power properties of piezoelectric transducers were evaluated considering a complex nonlinear elastic constant. The piezoelectric LCR equivalent circuit with nonlinear circuit parameters was utilized to measure them. The deformed admittance curve of piezoelectric transducers was measured under a high stress and the complex nonlinear elastic constant was calculated by curve fitting. Transducers with various piezoelectric materials, Pb(Zr,Ti)O3, (K,Na)NbO3, and Ba(Zr,Ti)O3–(Ba,Ca)TiO3, were investigated by the proposed method. The measured complex nonlinear elastic constant strongly depends on the linear elastic and piezoelectric constants. This relationship indicates that piezoelectric high power properties can be controlled by modifying the linear elastic and piezoelectric constants.

  3. Combining Biomarkers Linearly and Nonlinearly for Classification Using the Area Under the ROC Curve

    PubMed Central

    Fong, Youyi; Yin, Shuxin; Huang, Ying

    2016-01-01

    In biomedical studies, it is often of interest to classify/predict a subject's disease status based on a variety of biomarker measurements. A commonly used classification criterion is based on the AUC, the area under the receiver operating characteristic curve. Many methods have been proposed to optimize approximated empirical AUC criteria, but there are two limitations to the existing methods. First, most methods are only designed to find the best linear combination of biomarkers, which may not perform well when there is strong nonlinearity in the data. Second, many existing linear combination methods use gradient-based algorithms to find the best marker combination, which often result in sub-optimal local solutions. In this paper, we address these two problems by proposing a new kernel-based AUC optimization method called Ramp AUC (RAUC). This method approximates the empirical AUC loss function with a ramp function, and finds the best combination by a difference of convex functions algorithm. We show that as a linear combination method, RAUC leads to a consistent and asymptotically normal estimator of the linear marker combination when the data is generated from a semiparametric generalized linear model, just as the Smoothed AUC method (SAUC). Through simulation studies and real data examples, we demonstrate that RAUC outperforms SAUC in finding the best linear marker combinations, and can successfully capture nonlinear patterns in the data to achieve better classification performance. We illustrate our method with a dataset from a recent HIV vaccine trial. PMID:27058981
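The empirical AUC and one plausible form of the ramp surrogate can be sketched directly from their definitions (the margin parameter `s` and this exact parameterization are assumptions; the paper's RAUC formulation may differ in detail):

```python
def empirical_auc(cases, controls):
    """Fraction of (case, control) score pairs with the case above the
    control; ties count one half. This is the empirical AUC."""
    wins = sum(1.0 if c > d else 0.5 if c == d else 0.0
               for c in cases for d in controls)
    return wins / (len(cases) * len(controls))

def ramp_auc_loss(cases, controls, s=1.0):
    """Ramp surrogate for the pairwise 0-1 loss: each pair contributes
    min(1, max(0, 1 - (c - d) / s)), a continuous stand-in for the
    indicator 1[c <= d] that gradient-free optimizers can exploit."""
    total = sum(min(1.0, max(0.0, 1.0 - (c - d) / s))
                for c in cases for d in controls)
    return total / (len(cases) * len(controls))
```

Unlike the 0-1 pairwise loss, the ramp changes smoothly as scores move, which is what makes it usable inside the difference-of-convex optimization described above.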

  4. Z-scan theory for nonlocal nonlinear media with simultaneous nonlinear refraction and nonlinear absorption.

    PubMed

    Rashidian Vaziri, Mohammad Reza

    2013-07-10

    In this paper, the Z-scan theory for nonlocal nonlinear media has been further developed when nonlinear absorption and nonlinear refraction appear simultaneously. To this end, the nonlinear photoinduced phase shift between the impinging and outgoing Gaussian beams from a nonlocal nonlinear sample has been generalized. It is shown that this kind of phase shift will reduce correctly to its known counterpart for the case of pure refractive nonlinearity. Using this generalized form of phase shift, the basic formulas for closed- and open-aperture beam transmittances in the far field have been provided, and a simple procedure for interpreting the Z-scan results has been proposed. In this procedure, by separately performing open- and closed-aperture Z-scan experiments and using the represented relations for the far-field transmittances, one can measure the nonlinear absorption coefficient and nonlinear index of refraction as well as the order of nonlocality. Theoretically, it is shown that when the absorptive nonlinearity is present in addition to the refractive nonlinearity, the sample nonlocal response can noticeably suppress the peak and enhance the valley of the Z-scan closed-aperture transmittance curves, which is due to the nonlocal action's ability to change the beam transverse dimensions.

  5. A calibration method of infrared LVF based spectroradiometer

    NASA Astrophysics Data System (ADS)

    Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin

    2017-10-01

    In this paper, a calibration method for an LVF-based spectroradiometer is summarized, covering spectral calibration and radiometric calibration. The spectral calibration process is as follows: first, the relationship between the stepping motor's step number and the transmission wavelength is derived by theoretical calculation, including a nonlinearity correction of the LVF; second, a line-to-line method is used to correct the theoretical wavelength; finally, 3.39 μm and 10.69 μm lasers are used for spectral calibration validation, showing that the sought accuracy of 0.1% or better is achieved. A new sub-region, multi-point calibration method is used for the radiometric calibration to improve accuracy; results show that the sought accuracy of 1% or better is achieved.

  6. Calibration of a modified temperature-light intensity logger for quantifying water electrical conductivity

    NASA Astrophysics Data System (ADS)

    Gillman, M. A.; Lamoureux, S. F.; Lafrenière, M. J.

    2017-09-01

    The Stream Temperature, Intermittency, and Conductivity (STIC) electrical conductivity (EC) logger as presented by Chapin et al. (2014) serves as an inexpensive (~50 USD) means to assess relative EC in freshwater environments. This communication demonstrates the calibration of the STIC logger for quantifying EC, and provides examples from a month-long field deployment in the High Arctic. Calibration models followed multiple nonlinear regression and produced calibration curves with high coefficient of determination values (R2 = 0.995 - 0.998; n = 5). Percent error of mean predicted specific conductance at 25°C (SpC) against known SpC ranged from -0.6% to 13% (mean = -1.4%), and mean absolute percent error (MAPE) ranged from 2.1% to 13% (mean = 5.3%). Across all tested loggers we found good accuracy and precision, with both error metrics increasing with increasing SpC values. During 10 month-long field deployments, there were no logger failures and full data recovery was achieved. Point SpC measurements at the location of STIC loggers recorded via a more expensive commercial electrical conductivity logger followed similar trends to the STIC SpC records, with 1:1.05 and 1:1.08 relationships between the STIC and commercial logger SpC values. These results demonstrate that STIC loggers calibrated to quantify EC are an economical means to increase the spatiotemporal resolution of water quality investigations.
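The two error metrics reported above follow directly from their definitions; a minimal sketch (the example values are made up, not SpC data from the study):

```python
def percent_error(predicted, known):
    """Signed percent error of each prediction against its known value."""
    return [100.0 * (p - k) / k for p, k in zip(predicted, known)]

def mape(predicted, known):
    """Mean absolute percent error over all prediction/known pairs."""
    errs = percent_error(predicted, known)
    return sum(abs(e) for e in errs) / len(errs)

errs = percent_error([99.0, 102.0], [100.0, 100.0])   # [-1.0, 2.0]
overall = mape([99.0, 102.0], [100.0, 100.0])         # 1.5
```

Note that signed percent errors can average near zero while MAPE stays large, which is why the study reports both.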

  7. Pareto optimal calibration of highly nonlinear reactive transport groundwater models using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Siade, A. J.; Prommer, H.; Welter, D.

    2014-12-01

    Groundwater management and remediation require the implementation of numerical models in order to evaluate the potential anthropogenic impacts on aquifer systems. In many situations, the numerical model must be able to simulate not only groundwater flow and transport but also geochemical and biological processes. Each process being simulated carries with it a set of parameters that must be identified, along with differing potential sources of model-structure error. Various data types are often collected in the field and then used to calibrate the numerical model; however, these data types can represent very different processes and can subsequently be sensitive to the model parameters in extremely complex ways. Therefore, developing an appropriate weighting strategy to address the contributions of each data type to the overall least-squares objective function is not straightforward. This is further compounded by the presence of potential sources of model-structure errors that manifest themselves differently for each observation data type. Finally, reactive transport models are highly nonlinear, which can lead to convergence failure for algorithms operating on the assumption of local linearity. In this study, we propose a variation of the popular particle swarm optimization algorithm to address trade-offs associated with the calibration of one data type over another. This method removes the need to specify weights between observation groups and instead produces a multi-dimensional Pareto front that illustrates the trade-offs between data types. We use the PEST++ run manager, along with the standard PEST input/output structure, to implement parallel programming across multiple desktop computers using TCP/IP communications. This allows for very large swarms of particles without the need for a supercomputing facility. The method was applied to a case study in which modeling was used to gain insight into the mobilization of arsenic at a deepwell injection site
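For orientation, the standard single-objective particle swarm update that the study's Pareto variant builds on can be sketched as follows (the inertia and acceleration constants are common textbook defaults, not the study's settings, and the multi-objective bookkeeping is omitted):

```python
import random

def pso(objective, bounds, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal single-objective particle swarm optimizer: each particle is
    pulled toward its own best position and the swarm's best position."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp to the search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# toy objective with a known minimum at (3, -1)
best, val = pso(lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2,
                bounds=[(-5, 5), (-5, 5)])
```

Because the update uses no gradients, it tolerates the kind of nonlinearity and non-smooth response that defeats local-linearity algorithms; the Pareto extension replaces the single `gbest` with an archive of non-dominated solutions.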

  8. A new method for automated dynamic calibration of tipping-bucket rain gauges

    USGS Publications Warehouse

    Humphrey, M.D.; Istok, J.D.; Lee, J.Y.; Hevesi, J.A.; Flint, A.L.

    1997-01-01

    Existing methods for dynamic calibration of tipping-bucket rain gauges (TBRs) can be time consuming and labor intensive. A new automated dynamic calibration system has been developed to calibrate TBRs with minimal effort. The system consists of a programmable pump, datalogger, digital balance, and computer. Calibration is performed in two steps: 1) pump calibration and 2) rain gauge calibration. Pump calibration ensures precise control of water flow rates delivered to the rain gauge funnel; rain gauge calibration ensures precise conversion of bucket tip times to actual rainfall rates. Calibration of the pump and one rain gauge for 10 selected pump rates typically requires about 8 h. Data files generated during rain gauge calibration are used to compute rainfall intensities and amounts from a record of bucket tip times collected in the field. The system was tested using 5 types of commercial TBRs (15.2-, 20.3-, and 30.5-cm diameters; 0.1-, 0.2-, and 1.0-mm resolutions) and using 14 TBRs of a single type (20.3-cm diameter; 0.1-mm resolution). Ten pump rates ranging from 3 to 154 mL min-1 were used to calibrate the TBRs and represented rainfall rates between 6 and 254 mm h-1 depending on the rain gauge diameter. All pump calibration results were very linear with R2 values greater than 0.99. All rain gauges exhibited large nonlinear underestimation errors (between 5% and 29%) that decreased with increasing rain gauge resolution and increased with increasing rainfall rate, especially for rates greater than 50 mm h-1. Calibration curves of bucket tip time against the reciprocal of the true pump rate for all rain gauges also were linear with R2 values of 0.99. Calibration data for the 14 rain gauges of the same type were very similar, as indicated by slope values that were within 14% of each other and ranged from about 367 to 417 s mm h-1. The developed system can calibrate TBRs efficiently, accurately, and virtually unattended and could be modified for use with other
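The calibration curve described above, bucket tip time against the reciprocal of the true pump rate, is linear, so converting field tip intervals to rainfall rates is a fit-then-invert exercise. A sketch with synthetic numbers (the slope 390 s mm h-1 falls in the paper's reported 367-417 range but is invented here, as is the intercept):

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# hypothetical calibration run: true pump rates R (mm/h) and the
# synthetic tip times t (s) they produced
rates = [10.0, 25.0, 50.0, 100.0, 200.0]
tip_times = [390.0 / r + 1.5 for r in rates]

a, b = fit_line([1.0 / r for r in rates], tip_times)  # t = a * (1/R) + b

def tip_time_to_rate(t):
    """Invert the calibration curve: rainfall rate from one tip interval."""
    return a / (t - b)
```

The intercept `b` captures the fixed per-tip loss that produces the nonlinear underestimation the paper reports at high rainfall rates.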

  9. Inverse Diffusion Curves Using Shape Optimization.

    PubMed

    Zhao, Shuang; Durand, Fredo; Zheng, Changxi

    2018-07-01

    The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.

  10. Thickness Gauging of Single-Layer Conductive Materials with Two-Point Non Linear Calibration Algorithm

    NASA Technical Reports Server (NTRS)

    Fulton, James P. (Inventor); Namkung, Min (Inventor); Simpson, John W. (Inventor); Wincheski, Russell A. (Inventor); Nath, Shridhar C. (Inventor)

    1998-01-01

    A thickness gauging instrument uses a flux focusing eddy current probe and two-point nonlinear calibration algorithm. The instrument is small and portable due to the simple interpretation and operational characteristics of the probe. A nonlinear interpolation scheme incorporated into the instrument enables a user to make highly accurate thickness measurements over a fairly wide calibration range from a single side of nonferromagnetic conductive metals. The instrument is very easy to use and can be calibrated quickly.

  11. Conversion of calibration curves for accurate estimation of molecular weight averages and distributions of polyether polyols by conventional size exclusion chromatography.

    PubMed

    Xu, Xiuqing; Yang, Xiuhan; Martin, Steven J; Mes, Edwin; Chen, Junlan; Meunier, David M

    2018-08-17

    Accurate measurement of molecular weight averages (M̄n, M̄w, M̄z) and molecular weight distributions (MWD) of polyether polyols by conventional SEC (size exclusion chromatography) is not as straightforward as it would appear. Conventional calibration with polystyrene (PS) standards can only provide PS-apparent molecular weights, which do not provide accurate estimates of polyol molecular weights. Using polyethylene oxide/polyethylene glycol (PEO/PEG) standards for molecular weight calibration could improve the accuracy, but the retention behavior of PEO/PEG is not stable in tetrahydrofuran (THF)-based SEC systems. In this work, two approaches for calibration curve conversion with narrow PS and polyol molecular weight standards were developed. Equations to convert PS-apparent molecular weight to polyol-apparent molecular weight were developed using both a rigorous mathematical analysis and a graphical plot regression method. The conversion equations obtained by the two approaches were in good agreement. Factors influencing the conversion equation were investigated. It was concluded that separation conditions such as column batch and operating temperature did not have a significant impact on the conversion coefficients, and a universal conversion equation could be obtained. With this conversion equation, more accurate estimates of molecular weight averages and MWDs for polyether polyols can be achieved from a conventional PS-THF SEC calibration. Moreover, no additional experimentation is required to convert historical PS-equivalent data to reasonably accurate molecular weight results. Copyright © 2018. Published by Elsevier B.V.
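One way to see why such a conversion equation exists: if both the PS and the polyol calibrations are log-linear in retention time on the same column set, eliminating retention time gives a direct linear relation between the two log molecular weights. A sketch with hypothetical calibration coefficients (not values from the paper):

```python
import math

def conversion_coeffs(ps_cal, polyol_cal):
    """Given log-linear calibrations log10(M) = a + b * t (t = retention time)
    for PS and polyol standards on the same columns, eliminate t to get the
    direct mapping log10(M_polyol) = alpha + beta * log10(M_PS)."""
    a1, b1 = ps_cal
    a2, b2 = polyol_cal
    beta = b2 / b1
    alpha = a2 - beta * a1
    return alpha, beta

def ps_to_polyol(m_ps, alpha, beta):
    """Convert a PS-apparent molecular weight to a polyol-apparent one."""
    return 10.0 ** (alpha + beta * math.log10(m_ps))

# hypothetical calibrations: log10(M) = 12.0 - 0.35 t (PS), 10.5 - 0.30 t (polyol)
alpha, beta = conversion_coeffs(ps_cal=(12.0, -0.35), polyol_cal=(10.5, -0.30))
```

Any PS-apparent value then converts in one line, which is why historical PS-equivalent data can be reprocessed without rerunning samples.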

  12. Non-linear Growth Models in Mplus and SAS

    PubMed Central

    Grimm, Kevin J.; Ram, Nilam

    2013-01-01

    Non-linear growth curves, or growth curves that follow a specified non-linear function in time, enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain, we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
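The two most common sigmoid forms named above can be written down in a few lines; this parameterization (lower/upper asymptotes, rate, midpoint) is one common convention and not necessarily the one used in the paper:

```python
import math

def logistic(t, lower, upper, rate, midpoint):
    """Logistic growth curve rising from `lower` to `upper`,
    symmetric about its midpoint."""
    return lower + (upper - lower) / (1.0 + math.exp(-rate * (t - midpoint)))

def gompertz(t, lower, upper, rate, midpoint):
    """Gompertz curve: same asymptotes, but asymmetric -- growth
    decelerates earlier than in the logistic."""
    return lower + (upper - lower) * math.exp(-math.exp(-rate * (t - midpoint)))
```

At the midpoint the logistic sits exactly halfway between its asymptotes, while the Gompertz sits at a fraction exp(-1) of the span, which is the interpretability the abstract refers to.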

  13. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  14. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  15. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  16. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  17. Assessment of opacimeter calibration according to International Standard Organization 10155.

    PubMed

    Gomes, J F

    2001-01-01

    This paper compares the calibration method for opacimeters issued by the International Standard Organization (ISO) 10155 with the manual reference method for determination of dust content in stack gases. ISO 10155 requires at least nine operational measurements, corresponding to three operational measurements per each dust emission range within the stack. The procedure is assessed by comparison with previous calibration methods for opacimeters using only two operational measurements from a set of measurements made at stacks from pulp mills. The results show that even if the international standard for opacimeter calibration requires that the calibration curve is to be obtained using 3 x 3 points, a calibration curve derived using 3 points could be, at times, acceptable in statistical terms, provided that the amplitude of individual measurements is low.

  18. Nonlinear Constitutive Modeling of Piezoelectric Ceramics

    NASA Astrophysics Data System (ADS)

    Xu, Jia; Li, Chao; Wang, Haibo; Zhu, Zhiwen

    2017-12-01

    Nonlinear constitutive modeling of piezoelectric ceramics is discussed in this paper. A Van der Pol term is introduced to explain the simple hysteretic curve, and improved nonlinear difference terms are used to interpret the hysteresis phenomena of piezoelectric ceramics. The fit of the model to experimental data is verified by the partial least-squares regression method. The results show that this method describes the measured curves well and are helpful for constitutive modeling of piezoelectric ceramics.

  19. Calibration of the advanced microwave sounding unit-A for NOAA-K

    NASA Technical Reports Server (NTRS)

    Mo, Tsan

    1995-01-01

    The thermal-vacuum chamber calibration data from the Advanced Microwave Sounding Unit-A (AMSU-A) for NOAA-K, which will be launched in 1996, were analyzed to evaluate the instrument performance, including calibration accuracy, nonlinearity, and temperature sensitivity. The AMSU-A on NOAA-K consists of AMSU-A2 Protoflight Model and AMSU-A1 Flight Model 1. The results show that both models meet the instrument specifications, except the AMSU-A1 antenna beamwidths, which exceed the requirement of 3.3 ± 10%. We also studied the instrument's radiometric characterizations, which will be incorporated into the operational calibration algorithm for processing the in-orbit AMSU-A data from space. In particular, the nonlinearity parameters, which will be used for correcting the nonlinear contributions from an imperfect square-law detector, were determined from this data analysis. It was found that the calibration accuracies (differences between the measured scene radiances and those calculated from a linear two-point calibration formula) are polarization-dependent. Channels with vertical polarization show little cold bias at the lowest scene target temperature of 84 K, while those with horizontal polarization all have appreciable cold biases, which can be up to 0.6 K. It is unknown where these polarization-dependent cold biases originate, but it is suspected that hot radiances leaking into the cold scene target area contaminated the chamber measurements. Further investigation of this matter is required. The existence and magnitude of nonlinearity in each channel were established and a quadratic formula for modeling these nonlinear contributions was developed. The model was characterized by a single parameter u, values of which were obtained for each channel via least-squares fit to the data. Using the best-fit u values, we performed a series of simulations of the quadratic corrections which would be expected from the space data after the launch of AMSU-A on NOAA-K. In these simulations
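    The single-parameter quadratic correction described above can be sketched as follows. A common functional form (assumed here, along with all numerical values) makes the correction vanish at the two calibration points, so only scene radiances between them are adjusted:

```python
import numpy as np

def two_point_linear(counts, c_cold, c_warm, r_cold, r_warm):
    """Linear two-point calibration: detector counts -> radiance."""
    return r_cold + (counts - c_cold) * (r_warm - r_cold) / (c_warm - c_cold)

def fit_u(counts, r_true, c_cold, c_warm, r_cold, r_warm):
    """Least-squares estimate of the single nonlinearity parameter u in
    R = R_lin + u * (R_lin - r_cold) * (R_lin - r_warm).
    The quadratic basis vanishes at both calibration points."""
    r_lin = two_point_linear(counts, c_cold, c_warm, r_cold, r_warm)
    basis = (r_lin - r_cold) * (r_lin - r_warm)
    u = np.dot(basis, r_true - r_lin) / np.dot(basis, basis)
    return u, r_lin + u * basis
```

Given chamber measurements of known scene radiances `r_true`, the fitted `u` then corrects in-orbit data calibrated from the same two reference targets.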

  20. Techniques for precise energy calibration of particle pixel detectors

    NASA Astrophysics Data System (ADS)

    Kroupa, M.; Campbell-Ricketts, T.; Bahadori, A.; Empl, A.

    2017-03-01

    We demonstrate techniques to improve the accuracy of the energy calibration of Timepix pixel detectors, used for the measurement of energetic particles. The typical signal from such particles spreads among many pixels due to charge sharing effects. As a consequence, the deposited energy in each pixel cannot be reconstructed unless the detector is calibrated, limiting the usability of such signals for calibration. To avoid this shortcoming, we calibrate using low energy X-rays. However, charge sharing effects still occur, resulting in part of the energy being deposited in adjacent pixels and possibly lost. This systematic error in the calibration process results in an error of about 5% in the energy measurements of calibrated devices. We use FLUKA simulations to assess the magnitude of charge sharing effects, allowing a corrected energy calibration to be performed on several Timepix pixel detectors and resulting in substantial improvement in energy deposition measurements. Next, we address shortcomings in calibration associated with the huge range (from kiloelectron-volts to megaelectron-volts) of energy deposited per pixel which result in a nonlinear energy response over the full range. We introduce a new method to characterize the non-linear response of the Timepix detectors at high input energies. We demonstrate improvement using a broad range of particle types and energies, showing that the new method reduces the energy measurement errors, in some cases by more than 90%.

  1. Techniques for precise energy calibration of particle pixel detectors.

    PubMed

    Kroupa, M; Campbell-Ricketts, T; Bahadori, A; Empl, A

    2017-03-01

    We demonstrate techniques to improve the accuracy of the energy calibration of Timepix pixel detectors, used for the measurement of energetic particles. The typical signal from such particles spreads among many pixels due to charge sharing effects. As a consequence, the deposited energy in each pixel cannot be reconstructed unless the detector is calibrated, limiting the usability of such signals for calibration. To avoid this shortcoming, we calibrate using low energy X-rays. However, charge sharing effects still occur, resulting in part of the energy being deposited in adjacent pixels and possibly lost. This systematic error in the calibration process results in an error of about 5% in the energy measurements of calibrated devices. We use FLUKA simulations to assess the magnitude of charge sharing effects, allowing a corrected energy calibration to be performed on several Timepix pixel detectors and resulting in substantial improvement in energy deposition measurements. Next, we address shortcomings in calibration associated with the huge range (from kiloelectron-volts to megaelectron-volts) of energy deposited per pixel which result in a nonlinear energy response over the full range. We introduce a new method to characterize the non-linear response of the Timepix detectors at high input energies. We demonstrate improvement using a broad range of particle types and energies, showing that the new method reduces the energy measurement errors, in some cases by more than 90%.

  2. Uncertainty Analysis of Inertial Model Attitude Sensor Calibration and Application with a Recommended New Calibration Method

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.

  3. Novel Approach for Prediction of Localized Necking in Case of Nonlinear Strain Paths

    NASA Astrophysics Data System (ADS)

    Drotleff, K.; Liewald, M.

    2017-09-01

    Rising customer expectations regarding design complexity and weight reduction of sheet metal components, along with further reduced time to market, imply increased demand for process validation using numerical forming simulation. Formability prediction, however, is often still based on the forming limit diagram first presented in the 1960s. Despite its many drawbacks for nonlinear strain paths, and despite major research advances in recent years, the forming limit curve (FLC) remains one of the most commonly used criteria for assessing the formability of sheet metal materials. Especially when forming complex part geometries, nonlinear strain paths may occur that cannot be predicted using the conventional FLC concept. In this paper a novel approach for the calculation of FLCs for nonlinear strain paths is presented. By combining an approach for predicting the FLC from tensile test data with the IFU-FLC-Criterion, a model for predicting localized necking under nonlinear strain paths is derived. The model is based purely on experimental tensile test data, making it easy to calibrate for any given material. The resulting prediction of localized necking is validated using an experimental deep-drawn specimen made of AA6014 material with a sheet thickness of 1.04 mm. The results are compared to the IFU-FLC-Criterion based on data from pre-stretched Nakajima specimens.

  4. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
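    A sketch of the bias-corrected, transformed-linear fit: ordinary least squares on log-transformed data, followed by a retransformation bias correction. The record does not state which correction the study used; Duan's nonparametric smearing estimator is one common choice and is assumed here:

```python
import numpy as np

def fit_rating_curve(q, load):
    """Transformed-linear rating curve: OLS on log10-transformed data,
    with Duan's smearing estimator to correct retransformation bias."""
    x, y = np.log10(q), np.log10(load)
    b1, b0 = np.polyfit(x, y, 1)          # slope, intercept in log space
    resid = y - (b0 + b1 * x)
    smear = np.mean(10.0 ** resid)        # Duan (1983) smearing factor
    return b0, b1, smear

def predict_load(q, b0, b1, smear):
    """Bias-corrected back-transformation to original units."""
    return smear * 10.0 ** (b0 + b1 * np.log10(q))
```

Without the smearing factor, the naive back-transform 10^(b0 + b1·log10 q) systematically underestimates the mean load, which is the bias the study's "bias-corrected" model removes.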

  5. Antenna Calibration and Measurement Equipment

    NASA Technical Reports Server (NTRS)

    Rochblatt, David J.; Cortes, Manuel Vazquez

    2012-01-01

    A document describes the Antenna Calibration & Measurement Equipment (ACME) system that will provide the Deep Space Network (DSN) with instrumentation enabling a trained RF engineer at each complex to perform antenna calibration measurements and to generate antenna calibration data. This data includes continuous-scan auto-bore-based data acquisition with all-sky data gathering in support of 4th-order pointing model generation requirements. Other data includes antenna subreflector focus, system noise temperature and tipping curves, antenna efficiency, system linearity reports, and instrument calibration. The ACME system design is based on the on-the-fly (OTF) mapping technique and architecture. ACME has contributed to improving the RF performance of the DSN by approximately a factor of two. It improved the pointing performance of the DSN antennas and the productivity of its personnel and calibration engineers.

  6. Bandwidth increasing mechanism by introducing a curve fixture to the cantilever generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Weiqun, E-mail: weiqunliu@home.swjtu.edu.cn; Liu, Congzhi; Ren, Bingyu

    2016-07-25

    A nonlinear wideband generator architecture is proposed in which a cantilever beam generator is clamped by a curve fixture. Devices with different nonlinear stiffness can be obtained by properly choosing the fixture curve according to the design requirements. Three available generator types are presented and discussed for polynomial curves. Experimental investigations show that the proposed mechanism effectively extends the operation bandwidth with good power performance. In particular, its simplicity and ease of implementation allow the mechanism to be widely applied to vibration generators of different scales and in different environments.

  7. Marine04 Marine radiocarbon age calibration, 26–0 ka BP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughen, K; Baille, M; Bard, E

    2004-11-01

    New radiocarbon calibration curves, IntCal04 and Marine04, have been constructed and internationally ratified to replace the terrestrial and marine components of IntCal98. The new calibration datasets extend an additional 2000 years, from 0-26 ka cal BP (Before Present, 0 cal BP = AD 1950), and provide much higher resolution, greater precision and more detailed structure than IntCal98. For the Marine04 curve, dendrochronologically dated tree-ring samples, converted with a box-diffusion model to marine mixed-layer ages, cover the period from 0-10.5 ka cal BP. Beyond 10.5 ka cal BP, high-resolution marine data become available from foraminifera in varved sediments and U/Th-dated corals. The marine records are corrected with site-specific 14C reservoir age information to provide a single global marine mixed-layer calibration from 10.5-26.0 ka cal BP. A substantial enhancement relative to IntCal98 is the introduction of a random walk model, which takes into account the uncertainty in both the calendar age and the radiocarbon age to calculate the underlying calibration curve. The marine datasets and calibration curve for marine samples from the surface mixed layer (Marine04) are discussed here. The tree-ring datasets, sources of uncertainty, and regional offsets are presented in detail in a companion paper by Reimer et al.

  8. Definition of energy-calibrated spectra for national reachback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kunz, Christopher L.; Hertz, Kristin L.

    2014-01-01

    Accurate energy calibration is critical for the timeliness and accuracy of analysis results of spectra submitted to National Reachback, particularly for the detection of threat items. Many spectra submitted for analysis include either a calibration spectrum using 137Cs or no calibration spectrum at all. The single line provided by 137Cs is insufficient to adequately calibrate nonlinear spectra. A calibration source that provides several lines that are well-spaced, from the low energy cutoff to the full energy range of the detector, is needed for a satisfactory energy calibration. This paper defines the requirements of an energy calibration for the purposes of National Reachback, outlines a method to validate whether a given spectrum meets that definition, discusses general source considerations, and provides a specific operating procedure for calibrating the GR-135.
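    As an illustration of the multi-line requirement, a low-order channel-to-energy calibration from a multi-line source can be sketched as below. The 152Eu gamma energies are standard reference values, but the channel centroids are invented for illustration:

```python
import numpy as np

# Standard 152Eu gamma-ray lines (keV); channel centroids are hypothetical
energies = np.array([121.78, 244.70, 344.28, 778.90, 964.08, 1408.01])
channels = np.array([244.7, 491.0, 690.7, 1564.9, 1938.5, 2836.8])

# A quadratic captures mild gain nonlinearity that a single 137Cs line cannot
coeffs = np.polyfit(channels, energies, 2)
calibrate = np.poly1d(coeffs)

residuals_kev = energies - calibrate(channels)
```

Residuals evaluated across the full range flag an inadequate calibration; a single 137Cs point provides no such check and cannot constrain the quadratic term at all.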

  9. Auto calibration of a cone-beam-CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gross, Daniel; Heil, Ulrich; Schulze, Ralf

    2012-10-15

    Purpose: This paper introduces a novel autocalibration method for cone-beam-CTs (CBCT) or flat-panel CTs, assuming a perfect rotation. The method is based on ellipse-fitting. Autocalibration refers to accurate recovery of the geometric alignment of a CBCT device from projection images alone, without any manual measurements. Methods: The authors use test objects containing small arbitrarily positioned radio-opaque markers. No information regarding the relative positions of the markers is used. In practice, the authors use three to eight metal ball bearings (diameter of 1 mm), e.g., positioned roughly in a vertical line such that their projection image curves on the detector preferably form large ellipses over the circular orbit. From this ellipse-to-curve mapping and also from its inversion the authors derive an explicit formula. Nonlinear optimization based on this mapping enables them to determine the six relevant parameters of the system up to the device rotation angle, which is sufficient to define the geometry of a CBCT-machine assuming a perfect rotational movement. These parameters also include out-of-plane rotations. The authors evaluate their method by simulation based on data used in two similar approaches [L. Smekal, M. Kachelriess, S. E, and K. Wa, 'Geometric misalignment and calibration in cone-beam tomography,' Med. Phys. 31(12), 3242-3266 (2004); K. Yang, A. L. C. Kwan, D. F. Miller, and J. M. Boone, 'A geometric calibration method for cone beam CT systems,' Med. Phys. 33(6), 1695-1706 (2006)]. This allows a direct comparison of accuracy. Furthermore, the authors present real-world 3D reconstructions of a dry human spine segment and an electronic device. The reconstructions were computed from projections taken with a commercial dental CBCT device having two different focus-to-detector distances that were both calibrated with their method. The authors compare their reconstruction with a reconstruction computed by the manufacturer of the CBCT

  10. Disease risk curves.

    PubMed

    Hughes, G; Burnett, F J; Havis, N D

    2013-11-01

    Disease risk curves are simple graphical relationships between the probability of need for treatment and evidence related to risk factors. In the context of the present article, our focus is on factors related to the occurrence of disease in crops. Risk is the probability of adverse consequences; specifically in the present context it denotes the chance that disease will reach a threshold level at which crop protection measures can be justified. This article describes disease risk curves that arise when risk is modeled as a function of more than one risk factor, and when risk is modeled as a function of a single factor (specifically the level of disease at an early disease assessment). In both cases, disease risk curves serve as calibration curves that allow the accumulated evidence related to risk to be expressed on a probability scale. When risk is modeled as a function of the level of disease at an early disease assessment, the resulting disease risk curve provides a crop loss assessment model in which the downside is denominated in terms of risk rather than in terms of yield loss.

  11. The effect of breed and parity on curves of body condition during lactation estimated using a non-linear function.

    PubMed

    Friggens, N C; Badsberg, J H

    2007-05-01

    The objectives of this study were to see if the body condition score curve during lactation could be described using a model amenable to biological interpretation, a non-linear function assuming exponential rates of change in body condition with time, and to quantify the effect of breed and parity on curves of body condition during lactation. Three breeds were represented: Danish Holstein (n = 112), Danish Red (n = 97) and Jerseys (n = 8). Cows entered the experiment at the start of first lactation and were studied during consecutive lactations (average number of lactations 2, minimum 1, maximum 3). They remained on the same dietary treatment throughout. Body condition was scored to the nearest half unit on the Danish scale (see Kristensen (1986); derived from the Lowman et al. (1976) system) from 1 to 5 on days: 2, 14, 28, 42, 56, 84, 112, 168, 224 after calving. Additionally, condition score was recorded on the day of drying off the cow, 35, 21, and 7 days before expected calving and finally on the day of calving. All condition scores were made by trained personnel on the research farm, where the same person made 92% of the scores. The temporal patterns in condition score were modelled as consisting of two underlying processes, one related to days from calving, referred to as lactation only, the other to days from (subsequent) conception, referred to as pregnancy. Both processes were assumed to be exponential functions of time. Each process was modelled separately using exponential functions, i.e. one model for lactation only and one for pregnancy, and then a combined model for both lactation only and pregnancy was fitted. The data set contained 467 lactation periods and 378 pregnancy periods. The temporal patterns in condition score of cows kept under stable and sufficient nutritional conditions were successfully described using a two-component non-linear function. First-lactation cows had shallower curves; they had greater condition scores at the nadir

  12. Preliminary calibration of the ACP safeguards neutron counter

    NASA Astrophysics Data System (ADS)

    Lee, T. H.; Kim, H. D.; Yoon, J. S.; Lee, S. Y.; Swinhoe, M.; Menlove, H. O.

    2007-10-01

    The Advanced Spent Fuel Conditioning Process (ACP), a kind of pyroprocess, has been developed at the Korea Atomic Energy Research Institute (KAERI). Since there are no IAEA safeguards criteria for this process, KAERI has developed a neutron coincidence counter to make it possible to perform material control and accounting (MC&A) for its ACP materials for the purpose of transparency in the peaceful uses of nuclear materials at KAERI. The test results of the ACP Safeguards Neutron Counter (ASNC) show a satisfactory performance for the Doubles count measurement with a low measurement error for its cylindrical sample cavity. The neutron detection efficiency is about 21% with an error of ±1.32% along the axial direction of the cavity. Using two 252Cf neutron sources, we obtained various parameters for the Singles and Doubles rates for the ASNC. The Singles, Doubles, and Triples rates for a 252Cf point source were obtained by using the MCNPX code, and the results for the ft8 cap multiplicity tally option, with the values of ɛ, fd, and ft measured with a strong source, match the measurement results most closely, to within a 1% error. A preliminary calibration curve for the ASNC was generated by using the point model equation relationship between 244Cm and 252Cf, and the calibration coefficient for the non-multiplying sample is 2.78×10⁵ (Doubles counts/s/g 244Cm). The preliminary calibration curves for the ACP samples were also obtained by using an MCNPX simulation. A neutron multiplication influence on an increase of the Doubles rate for a metal ingot and UO2 powder is clearly observed. These calibration curves will be modified and complemented when hot calibration samples become available. To verify the validity of this calibration curve, a measurement of spent fuel standards with a known 244Cm mass will be performed in the near future.
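    For a non-multiplying sample, the point-model calibration curve reduces to a proportionality between the Doubles rate and the 244Cm mass. The sketch below uses the coefficient quoted in the record; the measured rate and its error are invented:

```python
CAL_COEFF = 2.78e5   # Doubles counts/s per g of 244Cm, from the record above

def cm244_mass(doubles_rate, doubles_err):
    """Invert the non-multiplying point-model calibration line;
    the 1-sigma mass error scales with the Doubles-rate error."""
    return doubles_rate / CAL_COEFF, doubles_err / CAL_COEFF

# Hypothetical measurement: 1390 +/- 14 Doubles counts/s -> ~5 mg of 244Cm
mass_g, mass_err_g = cm244_mass(1390.0, 14.0)
```

Multiplying samples (metal ingots, UO2 powder) depart from this straight line, which is why the record relies on MCNPX simulation for those calibration curves.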

  13. Calibration of a stochastic health evolution model using NHIS data

    NASA Astrophysics Data System (ADS)

    Gupta, Aparna; Li, Zhisheng

    2011-10-01

    This paper presents and calibrates an individual's stochastic health evolution model. In this health evolution model, the uncertainty of health incidents is described by a stochastic process with a finite number of possible outcomes. We construct a comprehensive health status index (HSI) to describe an individual's health status, as well as a health risk factor system (RFS) to classify individuals into different risk groups. Based on the maximum likelihood estimation (MLE) method and the method of nonlinear least squares fitting, model calibration is formulated in terms of two mixed-integer nonlinear optimization problems. Using the National Health Interview Survey (NHIS) data, the model is calibrated for specific risk groups. Longitudinal data from the Health and Retirement Study (HRS) is used to validate the calibrated model, which displays good validation properties. The end goal of this paper is to provide a model and methodology, whose output can serve as a crucial component of decision support for strategic planning of health related financing and risk management.

  14. Multi-q pattern classification of polarization curves

    NASA Astrophysics Data System (ADS)

    Fabbri, Ricardo; Bastos, Ivan N.; Neto, Francisco D. Moura; Lopes, Francisco J. P.; Gonçalves, Wesley N.; Bruno, Odemir M.

    2014-02-01

    Several experimental measurements are expressed in the form of one-dimensional profiles, for which there is a scarcity of methodologies able to classify the pertinence of a given result to a specific group. The polarization curves that evaluate the corrosion kinetics of electrodes in corrosive media are applications where the behavior is chiefly analyzed from profiles. Polarization curves are indeed a classic method to determine the global kinetics of metallic electrodes, but the strong nonlinearity from different metals and alloys can overlap, and the discrimination becomes a challenging problem. Moreover, even finding a typical curve from replicated tests requires subjective judgment. In this paper, we used the so-called multi-q approach, based on Tsallis statistics, in a classification engine to separate the multiple polarization curve profiles of two stainless steels. We collected 48 experimental polarization curves in an aqueous chloride medium of two stainless steel types with different resistance against localized corrosion. Multi-q pattern analysis was then carried out on a wide potential range, from the cathodic up to the anodic region. Excellent classification was obtained, with success rates of 90%, 80%, and 83% for the low (cathodic), high (anodic), and combined potential ranges, respectively, using only 2% of the original profile data. These results show the potential of the proposed approach towards efficient, robust, systematic and automatic classification of highly nonlinear profile curves.
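    One plausible reading of a multi-q descriptor (an assumption here, not the authors' exact formulation): treat the normalized profile as a discrete distribution and evaluate the Tsallis entropy at several q values, producing a short feature vector for a standard classifier:

```python
import numpy as np

def tsallis_entropy(p, q):
    """S_q = (1 - sum p_i^q) / (q - 1); recovers Shannon entropy as q -> 1."""
    p = p[p > 0]
    if abs(q - 1.0) < 1e-12:
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def multi_q_features(profile, qs=(0.5, 1.0, 1.5, 2.0, 3.0)):
    """Normalize a 1-D profile to a distribution and evaluate S_q over qs."""
    p = np.abs(profile) / np.sum(np.abs(profile))
    return np.array([tsallis_entropy(p, q) for q in qs])
```

Sweeping q changes how strongly the entropy weights large versus small profile values, which is what lets a handful of scalars summarize a highly nonlinear curve.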

  15. A controlled experiment in ground water flow model calibration

    USGS Publications Warehouse

    Hill, M.C.; Cooley, R.L.; Pollock, D.W.

    1998-01-01

    Nonlinear regression was introduced to ground water modeling in the 1970s, but has been used very little to calibrate numerical models of complicated ground water systems. Apparently, nonlinear regression is thought by many to be incapable of addressing such complex problems. With what we believe to be the most complicated synthetic test case used for such a study, this work investigates using nonlinear regression in ground water model calibration. Results of the study fall into two categories. First, the study demonstrates how systematic use of a well designed nonlinear regression method can indicate the importance of different types of data and can lead to successive improvement of models and their parameterizations. Our method differs from previous methods presented in the ground water literature in that (1) weighting is more closely related to expected data errors than is usually the case; (2) defined diagnostic statistics allow for more effective evaluation of the available data, the model, and their interaction; and (3) prior information is used more cautiously. Second, our results challenge some commonly held beliefs about model calibration. For the test case considered, we show that (1) field measured values of hydraulic conductivity are not as directly applicable to models as their use in some geostatistical methods imply; (2) a unique model does not necessarily need to be identified to obtain accurate predictions; and (3) in the absence of obvious model bias, model error was normally distributed. The complexity of the test case involved implies that the methods used and conclusions drawn are likely to be powerful in practice.

  16. Converting HAZUS capacity curves to seismic hazard-compatible building fragility functions: effect of hysteretic models

    USGS Publications Warehouse

    Ryu, Hyeuk; Luco, Nicolas; Baker, Jack W.; Karaca, Erdem

    2008-01-01

    A methodology was recently proposed for the development of hazard-compatible building fragility models using parameters of capacity curves and damage state thresholds from HAZUS (Karaca and Luco, 2008). In the methodology, HAZUS curvilinear capacity curves were used to define nonlinear dynamic SDOF models that were subjected to the nonlinear time history analysis instead of the capacity spectrum method. In this study, we construct a multilinear capacity curve with negative stiffness after an ultimate (capping) point for the nonlinear time history analysis, as an alternative to the curvilinear model provided in HAZUS. As an illustration, here we propose parameter values of the multilinear capacity curve for a moderate-code low-rise steel moment resisting frame building (labeled S1L in HAZUS). To determine the final parameter values, we perform nonlinear time history analyses of SDOF systems with various parameter values and investigate their effects on resulting fragility functions through sensitivity analysis. The findings improve capacity curves and thereby fragility and/or vulnerability models for generic types of structures.
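    A multilinear backbone with post-capping negative stiffness, of the kind proposed above, can be written as a piecewise-linear force-displacement function. The yield, capping, and residual control points below are invented placeholders, not the HAZUS S1L values:

```python
import numpy as np

# Backbone control points (displacement, normalized force), illustrative only:
# elastic branch to yield, hardening to the capping point, negative stiffness
# down to a residual-strength plateau.
D = np.array([0.0, 1.0, 3.0, 6.0, 10.0])   # displacement (arbitrary units)
F = np.array([0.0, 1.0, 1.2, 0.4, 0.4])    # normalized force

def backbone(d):
    """Piecewise-linear capacity curve; flat at residual strength beyond D[-1]."""
    return np.interp(np.abs(d), D, F)
```

In a nonlinear time-history analysis, an SDOF oscillator with this backbone (plus a hysteresis rule) replaces the curvilinear HAZUS capacity curve, and the capping-point and residual-strength choices directly shape the resulting fragility functions.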

  17. Hydrophone area-averaging correction factors in nonlinearly generated ultrasonic beams

    NASA Astrophysics Data System (ADS)

    Cooling, M. P.; Humphrey, V. F.; Wilkens, V.

    2011-02-01

    The nonlinear propagation of an ultrasonic wave can be used to produce a wavefield rich in higher frequency components that is ideally suited to the calibration, or inter-calibration, of hydrophones. These techniques usually use a tone-burst signal, limiting the measurements to harmonics of the fundamental calibration frequency. Alternatively, using a short pulse enables calibration at a continuous spectrum of frequencies. Such a technique is used at PTB in conjunction with an optical measurement technique to calibrate devices. Experimental findings indicate that the area-averaging correction factor for a hydrophone in such a field demonstrates a complex behaviour, most notably varying periodically between frequencies that are harmonics of the centre frequency of the original pulse and frequencies that lie midway between these harmonics. The beam characteristics of such nonlinearly generated fields have been investigated using a finite difference solution to the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation for a focused field. The simulation results are used to calculate the hydrophone area-averaging correction factors for 0.2 mm and 0.5 mm devices. The results clearly demonstrate a number of significant features observed in the experimental investigations, including the variation with frequency, drive level and hydrophone element size. An explanation for these effects is also proposed.

  18. A Bionic Polarization Navigation Sensor and Its Calibration Method.

    PubMed

    Zhao, Huijie; Xu, Wujian

    2016-08-03

    The polarization patterns of skylight which arise due to the scattering of sunlight in the atmosphere can be used by many insects for deriving compass information. Inspired by insects' polarized light compass, scientists have developed a new kind of navigation method. One of the key techniques in this method is the polarimetric sensor which is used to acquire direction information from skylight. In this paper, a polarization navigation sensor is proposed which imitates the working principles of the polarization vision systems of insects. We introduce the optical design and mathematical model of the sensor. In addition, a calibration method based on variable substitution and non-linear curve fitting is proposed. The results obtained from the outdoor experiments provide support for the feasibility and precision of the sensor. The sensor's signal processing can be well described using our mathematical model. A relatively high degree of accuracy in polarization measurement can be obtained without any error compensation.

  19. A Bionic Polarization Navigation Sensor and Its Calibration Method

    PubMed Central

    Zhao, Huijie; Xu, Wujian

    2016-01-01

    The polarization patterns of skylight which arise due to the scattering of sunlight in the atmosphere can be used by many insects for deriving compass information. Inspired by insects’ polarized light compass, scientists have developed a new kind of navigation method. One of the key techniques in this method is the polarimetric sensor which is used to acquire direction information from skylight. In this paper, a polarization navigation sensor is proposed which imitates the working principles of the polarization vision systems of insects. We introduce the optical design and mathematical model of the sensor. In addition, a calibration method based on variable substitution and non-linear curve fitting is proposed. The results obtained from the outdoor experiments provide support for the feasibility and precision of the sensor. The sensor’s signal processing can be well described using our mathematical model. A relatively high degree of accuracy in polarization measurement can be obtained without any error compensation. PMID:27527171
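    The calibration fit in these two records can be illustrated with a hedged Malus-law sketch: each analyzer channel of a polarization sensor roughly follows I(θ) = a·cos²(θ − φ) + b, and calibration amounts to estimating gain a, installed analyzer angle φ and offset b by nonlinear curve fitting. The channel model and all numbers below are illustrative assumptions, not the authors' sensor model (which also involves a variable substitution).

```python
import numpy as np
from scipy.optimize import curve_fit

def malus(theta, a, phi, b):
    """Malus-law response of one analyzer channel (angles in radians)."""
    return a * np.cos(theta - phi) ** 2 + b

# simulated rotation-stage calibration run (made-up parameters)
theta = np.deg2rad(np.arange(0, 360, 10))
rng = np.random.default_rng(3)
meas = malus(theta, 2.0, np.deg2rad(32.0), 0.1) + rng.normal(0.0, 0.01, theta.size)

# nonlinear least-squares recovery of gain, analyzer angle and offset
popt, _ = curve_fit(malus, theta, meas, p0=[1.0, 0.0, 0.0])
```

    With the per-channel parameters known, the sensor's direction output can be computed from the calibrated channel ratios rather than the nominal design values.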

  20. 40 CFR 90.321 - NDIR analyzer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... the form of the following equation (1) or (2). Include zero as a data point. Compensation for known...
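    A generic least-squares sketch in the spirit of the rule above: fit concentration against analyzer response for each range, including zero as a data point. The quadratic form and all numbers are synthetic illustrations; the CFR itself specifies the allowed equation forms, which are elided in this excerpt.

```python
import numpy as np

# analyzer response (% full scale) and span-gas concentration (ppm);
# zero is included as a data point as the rule requires
resp = np.array([0.0, 6.1, 11.9, 22.8, 42.0, 74.5])
conc = 8.0 * resp + 0.035 * resp**2     # synthetic "true" curve (made up)

# calibration curve: concentration as a polynomial in response
coef = np.polyfit(resp, conc, deg=2)
max_err = np.max(np.abs(np.polyval(coef, resp) - conc))
```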

  1. Nonlinear Acoustical Assessment of Precipitate Nucleation

    NASA Technical Reports Server (NTRS)

    Cantrell, John H.; Yost, William T.

    2004-01-01

    The purpose of the present work is to show that measurements of the acoustic nonlinearity parameter in heat treatable alloys as a function of heat treatment time can provide quantitative information about the kinetics of precipitate nucleation and growth in such alloys. Generally, information on the kinetics of phase transformations is obtained from time-sequenced electron microscopical examination and differential scanning microcalorimetry. The present nonlinear acoustical assessment of precipitation kinetics is based on the development of a multiparameter analytical model of the effects on the nonlinearity parameter of precipitate nucleation and growth in the alloy system. A nonlinear curve fit of the model equation to the experimental data is then used to extract the kinetic parameters related to the nucleation and growth of the targeted precipitate. The analytical model and curve fit are applied to the assessment of S' precipitation in aluminum alloy 2024 during artificial aging from the T4 to the T6 temper.

  2. The nonlinear interaction of Tollmien-Schlichting waves and Taylor-Goertler vortices in curved channel flows

    NASA Technical Reports Server (NTRS)

    Hall, P.; Smith, F. T.

    1988-01-01

    The development of Tollmien-Schlichting waves (TSWs) and Taylor-Goertler vortices (TGVs) in fully developed viscous curved-channel flows is investigated analytically, with a focus on their nonlinear interactions. Two types of interactions are identified, depending on the amplitude of the initial disturbances. In the low-amplitude type, two TSWs and one TGV interact, and the scaled amplitudes go to infinity on a finite time scale; in the higher-amplitude type, which can also occur in a straight channel, the same singularity occurs if the angle between the TSW wavefront and the TGV is greater than 41.6 deg, but the breakdown is exponential and takes an infinite time if the angle is smaller. The implications of these findings for external flow problems such as the design of laminar-flow wings are indicated. It is concluded that longitudinal vortices like those observed in the initial stages of the transition to turbulence can be produced unless the present interaction mechanism is destroyed by boundary-layer growth.

  3. Detecting outliers when fitting data with nonlinear regression – a new method based on robust nonlinear regression and the false discovery rate

    PubMed Central

    Motulsky, Harvey J; Brown, Ronald E

    2006-01-01

    Background Nonlinear regression, like linear regression, assumes that the scatter of data around the ideal curve follows a Gaussian or normal distribution. This assumption leads to the familiar goal of regression: to minimize the sum of the squares of the vertical or Y-value distances between the points and the curve. Outliers can dominate the sum-of-the-squares calculation, and lead to misleading results. However, we know of no practical method for routinely identifying outliers when fitting curves with nonlinear regression. Results We describe a new method for identifying outliers when fitting data with nonlinear regression. We first fit the data using a robust form of nonlinear regression, based on the assumption that scatter follows a Lorentzian distribution. We devised a new adaptive method that gradually becomes more robust as the method proceeds. To define outliers, we adapted the false discovery rate approach to handling multiple comparisons. We then remove the outliers, and analyze the data using ordinary least-squares regression. Because the method combines robust regression and outlier removal, we call it the ROUT method. When analyzing simulated data, where all scatter is Gaussian, our method detects (falsely) one or more outliers in only about 1–3% of experiments. When analyzing data contaminated with one or several outliers, the ROUT method performs well at outlier identification, with an average False Discovery Rate less than 1%. Conclusion Our method, which combines a new method of robust nonlinear regression with a new method of outlier identification, identifies outliers from nonlinear curve fits with reasonable power and few false positives. PMID:16526949
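    The ROUT procedure can be sketched with SciPy: `least_squares` with `loss="cauchy"` plays the role of the Lorentzian-based robust fit, and a Benjamini-Hochberg step flags outliers at a chosen false discovery rate. This is a simplified illustration under those assumptions, not the authors' exact adaptive algorithm.

```python
import numpy as np
from scipy import stats
from scipy.optimize import least_squares

def rout_sketch(x, y, model, p0, q=0.01):
    # 1) Robust fit: the 'cauchy' loss corresponds to a Lorentzian
    #    error model (f_scale sets the expected inlier scatter).
    fit = least_squares(lambda p: model(x, *p) - y, p0,
                        loss="cauchy", f_scale=0.1)
    r = model(x, *fit.x) - y
    scale = np.percentile(np.abs(r), 68.27)   # robust sigma estimate
    n, k = len(y), len(p0)
    pvals = 2 * stats.t.sf(np.abs(r) / scale, df=n - k)
    # 2) Benjamini-Hochberg false discovery rate step at rate q
    order = np.argsort(pvals)
    passed = pvals[order] <= q * np.arange(1, n + 1) / n
    outliers = np.zeros(n, dtype=bool)
    if passed.any():
        outliers[order[: np.where(passed)[0].max() + 1]] = True
    return outliers, fit.x

# usage: exponential decay with one injected gross outlier
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 40)
y = 5.0 * np.exp(-0.5 * x) + rng.normal(0.0, 0.05, x.size)
y[10] += 3.0
outliers, params = rout_sketch(x, y, lambda x, a, k: a * np.exp(-k * x),
                               p0=[4.0, 0.4])
```

    The abstract's final step would then refit the cleaned data with ordinary least squares.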

  4. On the nonlinear interaction of Goertler vortices and Tollmien-Schlichting waves in curved channel flows at finite Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Daudpota, Q. Isa; Zang, Thomas A.; Hall, Philip

    1988-01-01

    The flow in a two-dimensional curved channel driven by an azimuthal pressure gradient can become linearly unstable due to axisymmetric perturbations and/or nonaxisymmetric perturbations depending on the curvature of the channel and the Reynolds number. For a particular small value of curvature, and in the critical neighborhood of this curvature value and the critical Reynolds number, nonlinear interactions occur between these perturbations. The Stuart-Watson approach is used to derive two coupled Landau equations for the amplitudes of these perturbations. The stability of the various possible states of these perturbations is shown through bifurcation diagrams. Emphasis is given to those cases which have relevance to external flows.

  5. On the nonlinear interaction of Gortler vortices and Tollmien-Schlichting waves in curved channel flows at finite Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Daudpota, Q. Isa; Hall, Philip; Zang, Thomas A.

    1987-01-01

    The flow in a two-dimensional curved channel driven by an azimuthal pressure gradient can become linearly unstable due to axisymmetric perturbations and/or nonaxisymmetric perturbations depending on the curvature of the channel and the Reynolds number. For a particular small value of curvature, and in the critical neighborhood of this curvature value and the critical Reynolds number, nonlinear interactions occur between these perturbations. The Stuart-Watson approach is used to derive two coupled Landau equations for the amplitudes of these perturbations. The stability of the various possible states of these perturbations is shown through bifurcation diagrams. Emphasis is given to those cases which have relevance to external flows.

  6. Spectral characterization and calibration of AOTF spectrometers and hyper-spectral imaging system

    NASA Astrophysics Data System (ADS)

    Katrašnik, Jaka; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    The goal of this article is to present a novel method for spectral characterization and calibration of spectrometers and hyper-spectral imaging systems based on non-collinear acousto-optical tunable filters. The method characterizes the spectral tuning curve (frequency-wavelength characteristic) of the AOTF (Acousto-Optic Tunable Filter) filter by matching the acquired and modeled spectra of the HgAr calibration lamp, which emits line spectrum that can be well modeled via AOTF transfer function. In this way, not only tuning curve characterization and corresponding spectral calibration but also spectral resolution assessment is performed. The obtained results indicated that the proposed method is efficient, accurate and feasible for routine calibration of AOTF spectrometers and hyper-spectral imaging systems and thereby a highly competitive alternative to the existing calibration methods.

  7. PV Degradation Curves: Non-Linearities and Failure Modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jordan, Dirk C.; Silverman, Timothy J.; Sekulic, Bill

    Photovoltaic (PV) reliability and durability have seen increased interest in recent years. Historically, and as a preliminarily reasonable approximation, linear degradation rates have been used to quantify long-term module and system performance. The underlying assumption of linearity can be violated at the beginning of life, as has been well documented, especially for thin-film technology. Additionally, non-linearities in the wear-out phase can have significant economic impact and appear to be linked to different failure modes. Associating specific degradation and failure modes with specific time series behavior will also aid in duplicating these degradation modes in accelerated tests and, eventually, in service life prediction. In this paper, we discuss different degradation modes and how some of these may cause approximately linear degradation within the measurement uncertainty (e.g., modules that were mainly affected by encapsulant discoloration) while other degradation modes lead to distinctly non-linear degradation (e.g., hot spots caused by cracked cells or solder bond failures and corrosion). The various behaviors are summarized with the goal of aiding in predictions of what may be seen in other systems.

  8. Calibration of the forward-scattering spectrometer probe - Modeling scattering from a multimode laser beam

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Lock, James A.

    1993-01-01

    Scattering calculations using a detailed model of the multimode laser beam in the forward-scattering spectrometer probe (FSSP) were carried out using a recently developed extension to Mie scattering theory. From this model, new calibration curves for the FSSP were calculated. The difference between the old calibration curves and the new ones is small for droplet diameters less than 10 microns, but the difference increases to approximately 10 percent at diameters of 50 microns. When using glass beads to calibrate the FSSP, calibration errors can be minimized by using glass beads of many different diameters, over the entire range of the FSSP. If the FSSP is calibrated using one-diameter glass beads, then the new formalism is necessary to extrapolate the calibration over the entire range.

  9. Calibration of the Forward-scattering Spectrometer Probe: Modeling Scattering from a Multimode Laser Beam

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Lock, James A.

    1993-01-01

    Scattering calculations using a more detailed model of the multimode laser beam in the forward-scattering spectrometer probe (FSSP) were carried out by using a recently developed extension to Mie scattering theory. From this model, new calibration curves for the FSSP were calculated. The difference between the old calibration curves and the new ones is small for droplet diameters less than 10 micrometers, but the difference increases to approximately 10% at diameters of 50 micrometers. When using glass beads to calibrate the FSSP, calibration errors can be minimized by using glass beads of many different diameters over the entire range of the FSSP. If the FSSP is calibrated using one-diameter glass beads, then the new formalism is necessary to extrapolate the calibration over the entire range.

  10. Characterization of acid functional groups of carbon dots by nonlinear regression data fitting of potentiometric titration curves

    NASA Astrophysics Data System (ADS)

    Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.

    2016-05-01

    The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discuss the fitting of potentiometric titration curve data using a nonlinear regression method based on the Levenberg-Marquardt algorithm. The statistical treatment of the titration curve data showed that the best fit was obtained by considering the presence of five Brønsted-Lowry acids on the surface of the carbon dots, with ionization constants characteristic of carboxylic acid, cyclic ester, phenolic and pyrone-like groups. The total number of oxygenated acid groups obtained was 5 mmol g-1, with approximately 65% (∼2.9 mmol g-1) originating from groups with pKa < 6. The methodology showed good reproducibility and stability, with standard deviations below 5%. The nature of the groups was independent of small variations in experimental conditions, i.e. the mass of carbon dots titrated and the initial concentration of the HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool to characterize the complex acid-base properties of these interesting and intriguing nanoparticles.
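    A minimal sketch of this kind of fit (two acid groups rather than the authors' five, with made-up concentrations and pKa values): the deprotonated fraction of each group follows a Henderson-Hasselbalch term, and the group concentrations and pKa values are estimated by Levenberg-Marquardt nonlinear regression.

```python
import numpy as np
from scipy.optimize import curve_fit

def charge(pH, c1, pK1, c2, pK2):
    """Total titratable charge (mmol/g) from two surface acid groups,
    each with concentration c_i and ionization constant pK_i; the
    deprotonated fraction is 1 / (1 + 10**(pK - pH))."""
    return (c1 / (1 + 10.0 ** (pK1 - pH)) +
            c2 / (1 + 10.0 ** (pK2 - pH)))

# synthetic titration data: carboxylic-like and phenolic-like groups
pH = np.linspace(2.0, 11.0, 60)
rng = np.random.default_rng(1)
data = charge(pH, 2.9, 4.5, 2.1, 8.0) + rng.normal(0.0, 0.02, pH.size)

# curve_fit defaults to Levenberg-Marquardt for unbounded problems
popt, pcov = curve_fit(charge, pH, data, p0=[1.0, 4.0, 1.0, 9.0])
```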

  11. Investigating the discrimination potential of linear and nonlinear spectral multivariate calibrations for analysis of phenolic compounds in their binary and ternary mixtures and calculation pKa values.

    PubMed

    Rasouli, Zolaikha; Ghavami, Raouf

    2016-08-05

    Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives as flavor enhancers. The current study is the first devoted to the application of partial least squares (PLS-1), partial robust M-regression (PRM) and feed-forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous detection of binary and ternary mixtures of VA, VAI and SIA using data extracted directly from UV spectra with overlapped peaks of the individual analytes. Under the optimum experimental conditions, a linear calibration was obtained for each compound in the concentration range of 0.61-20.99 [LOD=0.12], 0.67-23.19 [LOD=0.13] and 0.73-25.12 [LOD=0.15] μg mL(-1) for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combining full and fractional factorial designs, with seven and three levels for each factor for the binary and ternary mixtures, respectively. The results of this study reveal that PLS-1 and PRM are similar in their ability to predict each binary mixture. The resolution of the ternary mixture was accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied to describe the spectra from the acid-base titration systems of each individual compound, i.e. to resolve the complex overlapping spectra as well as to interpret the extracted spectral and concentration profiles of any pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to determine the number of chemical species. Subsequently, their corresponding dissociation constants were derived. Finally, FFNNs were used to detect the active compounds in real and spiked water samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Investigating the discrimination potential of linear and nonlinear spectral multivariate calibrations for analysis of phenolic compounds in their binary and ternary mixtures and calculation pKa values

    NASA Astrophysics Data System (ADS)

    Rasouli, Zolaikha; Ghavami, Raouf

    2016-08-01

    Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives as flavor enhancers. The current study is the first devoted to the application of partial least squares (PLS-1), partial robust M-regression (PRM) and feed-forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous detection of binary and ternary mixtures of VA, VAI and SIA using data extracted directly from UV spectra with overlapped peaks of the individual analytes. Under the optimum experimental conditions, a linear calibration was obtained for each compound in the concentration range of 0.61-20.99 [LOD = 0.12], 0.67-23.19 [LOD = 0.13] and 0.73-25.12 [LOD = 0.15] μg mL-1 for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combining full and fractional factorial designs, with seven and three levels for each factor for the binary and ternary mixtures, respectively. The results of this study reveal that PLS-1 and PRM are similar in their ability to predict each binary mixture. The resolution of the ternary mixture was accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied to describe the spectra from the acid-base titration systems of each individual compound, i.e. to resolve the complex overlapping spectra as well as to interpret the extracted spectral and concentration profiles of any pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to determine the number of chemical species. Subsequently, their corresponding dissociation constants were derived. Finally, FFNNs were used to detect the active compounds in real and spiked water samples.

  13. Calibration-free optical chemical sensors

    DOEpatents

    DeGrandpre, Michael D.

    2006-04-11

    An apparatus and method for taking absorbance-based chemical measurements are described. In a specific embodiment, an indicator-based pCO2 (partial pressure of CO2) sensor displays sensor-to-sensor reproducibility and measurement stability. These qualities are achieved by: 1) renewing the sensing solution, 2) allowing the sensing solution to reach equilibrium with the analyte, and 3) calculating the response from a ratio of the indicator solution absorbances which are determined relative to a blank solution. Careful solution preparation, wavelength calibration, and stray light rejection also contribute to this calibration-free system. Three pCO2 sensors were calibrated and each had response curves which were essentially identical within the uncertainty of the calibration. Long-term laboratory and field studies showed the response had no drift over extended periods (months). The theoretical response, determined from thermodynamic characterization of the indicator solution, also predicted the observed calibration-free performance.
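    The blank-relative absorbance calculation in step 3 can be sketched as follows. The detector counts are made up, and the two channels are labelled generically as acid- and base-form indicator bands; neither is taken from the patent.

```python
import numpy as np

def absorbance(sample, blank, dark):
    """Beer-Lambert absorbance of the indicator solution measured
    relative to a blank, with the dark signal subtracted from both."""
    return -np.log10((sample - dark) / (blank - dark))

# made-up detector counts at two indicator wavelengths
A_acid = absorbance(820.0, 1500.0, 20.0)   # acid-form band
A_base = absorbance(610.0, 1400.0, 20.0)   # base-form band

# the ratio cancels path length and total indicator concentration,
# which is what makes the response calibration-free in principle
R = A_base / A_acid
```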

  14. A counting-weighted calibration method for a field-programmable-gate-array-based time-to-digital converter

    NASA Astrophysics Data System (ADS)

    Chen, Yuan-Ho

    2017-05-01

    In this work, we propose a counting-weighted calibration method for field-programmable-gate-array (FPGA)-based time-to-digital converters (TDCs) to provide non-linearity calibration for use in positron emission tomography (PET) scanners. To deal with the non-linearity in FPGAs, we developed a counting-weighted delay line (CWD) that counts the delay times of the delay cells in the TDC in order to reduce the differential non-linearity (DNL) values based on code density counts. The linearity of the proposed CWD-TDC far exceeds that of a TDC with a traditional tapped delay line (TDL) architecture, without the need for non-linearity calibration. When implemented in a Xilinx Virtex-5 FPGA device, the proposed CWD-TDC achieved a time resolution of 60 ps with integral non-linearity (INL) and DNL of [-0.54, 0.24] and [-0.66, 0.65] least-significant-bits (LSB), respectively. This is a clear indication of the suitability of the proposed FPGA-based CWD-TDC for use in PET scanners.
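    The code-density idea behind the DNL and INL figures quoted above can be sketched generically: drive the converter with hits uniformly distributed in time, histogram the output codes, and convert each bin's hit count into a bin-width error. This is a standard code-density computation, not the CWD architecture itself; the 8-bin histogram is made up.

```python
import numpy as np

def dnl_inl(counts):
    """Code-density estimate of DNL/INL in LSB: under a uniform random
    input, each bin's width is proportional to its hit count."""
    counts = np.asarray(counts, dtype=float)
    ideal = counts.sum() / counts.size   # expected hits per bin
    dnl = counts / ideal - 1.0           # per-bin width error (LSB)
    inl = np.cumsum(dnl)                 # accumulated error (LSB)
    return dnl, inl

# usage: an 8-bin "TDC" where one delay cell is 50% too wide
# and another 50% too narrow
counts = np.array([100, 100, 150, 100, 100, 50, 100, 100])
dnl, inl = dnl_inl(counts)
```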

  15. Nonlinear acoustic techniques for landmine detection.

    PubMed

    Korman, Murray S; Sabatier, James M

    2004-12-01

    Measurements of the top surface vibration of a buried (inert) VS 2.2 anti-tank plastic landmine reveal significant resonances in the frequency range between 80 and 650 Hz. Resonances from measurements of the normal component of the acoustically induced soil surface particle velocity (due to sufficient acoustic-to-seismic coupling) have been used in detection schemes. Since the interface between the top plate and the soil responds nonlinearly to pressure fluctuations, characteristics of landmines, the soil, and the interface are rich in nonlinear physics and allow for a method of buried landmine detection not previously exploited. Tuning curve experiments (revealing "softening" and a back-bone curve linear in particle velocity amplitude versus frequency) help characterize the nonlinear resonant behavior of the soil-landmine oscillator. The results appear to exhibit the characteristics of nonlinear mesoscopic elastic behavior, which is explored. When two primary waves f1 and f2 drive the soil over the mine near resonance, a rich spectrum of nonlinearly generated tones is measured with a geophone on the surface over the buried landmine in agreement with Donskoy [SPIE Proc. 3392, 221-217 (1998); 3710, 239-246 (1999)]. In profiling, particular nonlinear tonals can improve the contrast ratio compared to using either primary tone in the spectrum.

  16. Localized normalization for improved calibration curves of manganese and zinc in laser-induced plasma spectroscopy

    NASA Astrophysics Data System (ADS)

    Sabri, Nursalwanie Mohd; Haider, Zuhaib; Tufail, Kashif; Imran, Muhammad; Ali, Jalil

    2017-03-01

    Laser-induced plasma spectroscopy is performed to determine the elemental compositions of manganese and zinc in a potassium bromide (KBr) matrix. This work utilized the Q-switched Nd:YAG laser installed in the LIBS2500plus system at its fundamental wavelength. The pelletized samples were ablated in air with a maximum laser energy of 650 mJ for different gate delays ranging from 0-18 µs. Spectra containing the preferred spectral lines were obtained for five different compositions. The intensity of each spectral line reaches its maximum at a gate delay of 0.83 µs and subsequently decays exponentially with increasing gate delay. The maximum signal-to-background ratios of Mn and Zn were found at gate delays of 7.92 and 7.50 µs, respectively. The initial calibration curves show poor fits, whereas the locally normalized intensities of both spectral lines are markedly more linear. This study gives a better understanding of plasma emission and spectrum analysis. At the request of all authors of the paper, and with the agreement of the Proceedings Editor, an updated version of this article was published on 24 May 2017.
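    A minimal sketch of the localized-normalization step (line position and background window are illustrative, not the paper's Mn/Zn lines): dividing a line's peak intensity by the mean continuum level in a nearby line-free window cancels shot-to-shot intensity fluctuations, which is what straightens the calibration curve.

```python
import numpy as np

def local_norm(spectrum, wl, line, bg_lo, bg_hi):
    """Peak intensity at `line` divided by the mean background level
    in the line-free window [bg_lo, bg_hi] (wavelengths in nm)."""
    peak = spectrum[np.argmin(np.abs(wl - line))]
    background = spectrum[(wl >= bg_lo) & (wl <= bg_hi)].mean()
    return peak / background

# synthetic spectrum: continuum plus one emission line near 403.1 nm
wl = np.linspace(400.0, 420.0, 2001)
shot1 = 100.0 + 500.0 * np.exp(-0.5 * ((wl - 403.1) / 0.05) ** 2)
shot2 = 1.7 * shot1                 # same plasma, brighter laser shot

r1 = local_norm(shot1, wl, 403.1, 410.0, 415.0)
r2 = local_norm(shot2, wl, 403.1, 410.0, 415.0)   # identical after normalization
```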

  17. Synthesis of nonlinear frequency responses with experimentally extracted nonlinear modes

    NASA Astrophysics Data System (ADS)

    Peter, Simon; Scheel, Maren; Krack, Malte; Leine, Remco I.

    2018-02-01

    Determining frequency response curves is a common task in the vibration analysis of nonlinear systems. Measuring nonlinear frequency responses is often challenging and time consuming due to, e.g., coexisting stable or unstable vibration responses and structure-exciter-interaction. The aim of the current paper is to develop a method for the synthesis of nonlinear frequency responses near an isolated resonance, based on data that can be easily and automatically obtained experimentally. The proposed purely experimental approach relies on (a) a standard linear modal analysis carried out at low vibration levels and (b) a phase-controlled tracking of the backbone curve of the considered forced resonance. From (b), the natural frequency and vibrational deflection shape are directly obtained as a function of the vibration level. Moreover, a damping measure can be extracted by power considerations or from the linear modal analysis. In accordance with the single nonlinear mode assumption, the near-resonant frequency response can then be synthesized using this data. The method is applied to a benchmark structure consisting of a cantilevered beam attached to a leaf spring undergoing large deflections. The results are compared with direct measurements of the frequency response. The proposed approach is fast, robust and provides a good estimate for the frequency response. It is also found that direct frequency response measurement is less robust due to bifurcations and using a sine sweep excitation with a conventional force controller leads to underestimation of maximum vibration response.
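    Under the single-nonlinear-mode assumption, the synthesis step can be sketched as follows: each backbone point (amplitude q, natural frequency ω0(q), damping ζ(q)) is inserted into the resonance relation (ω0² − Ω²)² + (2ζω0Ω)² = (f/q)², which is solved for the excitation frequency Ω at force level f. The hardening backbone below is synthetic, not the paper's measured beam data.

```python
import numpy as np

def synthesize_frf(q, w0, zeta, f):
    """Single-nonlinear-mode synthesis (sketch): solve
    (w0**2 - W**2)**2 + (2*zeta*w0*W)**2 == (f/q)**2 for the
    excitation frequency W at each backbone point (q, w0(q), zeta(q))."""
    b = w0**2 * (1.0 - 2.0 * zeta**2)
    disc = b**2 - w0**4 + (f / q) ** 2       # discriminant in W**2
    valid = disc >= 0.0                      # amplitudes reachable at force f
    root = np.sqrt(np.clip(disc, 0.0, None))
    lo = np.sqrt(np.clip(b - root, 0.0, None))   # left branch
    hi = np.sqrt(b + root)                       # right branch
    return lo[valid], hi[valid], q[valid]

# synthetic hardening backbone (illustrative assumptions throughout)
q = np.linspace(1e-3, 0.05, 200)          # modal amplitude
w0 = 100.0 * (1.0 + 5.0 * q)              # backbone: frequency rises with q
zeta = np.full_like(q, 0.01)              # amplitude-independent damping here
lo, hi, qv = synthesize_frf(q, w0, zeta, f=2.0)
```

    Sweeping q along the backbone traces the left and right branches of the forced response; amplitudes with a negative discriminant are unreachable at that force level, which is how the peak of the synthesized curve emerges.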

  18. Solid laboratory calibration of a nonimaging spectroradiometer.

    PubMed

    Schaepman, M E; Dangel, S

    2000-07-20

    Field-based nonimaging spectroradiometers are often used in vicarious calibration experiments for airborne or spaceborne imaging spectrometers. The calibration uncertainties associated with these ground measurements contribute substantially to the overall modeling error in radiance- or reflectance-based vicarious calibration experiments. Because of limitations in the radiometric stability of compact field spectroradiometers, vicarious calibration experiments are based primarily on reflectance measurements rather than on radiance measurements. To characterize the overall uncertainty of radiance-based approaches and assess the sources of uncertainty, we carried out a full laboratory calibration. This laboratory calibration of a nonimaging spectroradiometer is based on a measurement plan targeted at achieving a full calibration. The individual calibration steps include characterization of the signal-to-noise ratio, the noise equivalent signal, the dark current, the wavelength calibration, the spectral sampling interval, the nonlinearity, directional and positional effects, the spectral scattering, the field of view, the polarization, the size-of-source effects, and the temperature dependence of a particular instrument. The traceability of the radiance calibration is established to a secondary National Institute of Standards and Technology calibration standard by use of a 95% confidence interval and results in an uncertainty of less than ±7.1% for all spectroradiometer bands.

  19. Nonlinearly stacked low noise turbofan stator

    NASA Technical Reports Server (NTRS)

    Schuster, William B. (Inventor); Nolcheff, Nick A. (Inventor); Gunaraj, John A. (Inventor); Kontos, Karen B. (Inventor); Weir, Donald S. (Inventor)

    2009-01-01

    A nonlinearly stacked low noise turbofan stator vane having a characteristic curve that is characterized by a nonlinear sweep and a nonlinear lean is provided. The stator is in an axial fan or compressor turbomachinery stage that is comprised of a collection of vanes whose highly three-dimensional shape is selected to reduce rotor-stator and rotor-strut interaction noise while maintaining the aerodynamic and mechanical performance of the vane. The nonlinearly stacked low noise turbofan stator vane reduces noise associated with the fan stage of turbomachinery to improve environmental compatibility.

  20. Determination of Tafel Constants in Nonlinear Polarization Curves.

    DTIC Science & Technology

    1987-12-01

    resulted in difficulty in determining the Tafel constants from such plots. A FORTRAN based program involving numerical differentiation techniques was... [Master of Science in Mechanical Engineering thesis, Naval Postgraduate School, December 1987.] ABSTRACT: The presence of non-linear

  1. Comment on "Radiocarbon Calibration Curve Spanning 0 to 50,000 Years B.P. Based on Paired 230Th/234U/238U and 14C Dates on Pristine Corals" by R.G. Fairbanks, R. A. Mortlock, T.-C. Chiu, L. Cao, A. Kaplan, T. P. Guilderson, T. W. Fairbanks, A. L. Bloom, P

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reimer, P J; Baillie, M L; Bard, E

    2005-10-02

    Radiocarbon calibration curves are essential for converting radiocarbon dated chronologies to the calendar timescale. Prior to the 1980's numerous differently derived calibration curves based on radiocarbon ages of known age material were in use, resulting in ''apples and oranges'' comparisons between various records (Klein et al., 1982), further complicated by until then unappreciated inter-laboratory variations (International Study Group, 1982). The solution was to produce an internationally-agreed calibration curve based on carefully screened data with updates at 4-6 year intervals (Klein et al., 1982; Stuiver and Reimer, 1986; Stuiver and Reimer, 1993; Stuiver et al., 1998). The IntCal working group has continued this tradition with the active participation of researchers who produced the records that were considered for incorporation into the current, internationally-ratified calibration curves, IntCal04, SHCal04, and Marine04, for Northern Hemisphere terrestrial, Southern Hemisphere terrestrial, and marine samples, respectively (Reimer et al., 2004; Hughen et al., 2004; McCormac et al., 2004). Fairbanks et al. (2005), accompanied by a more technical paper, Chiu et al. (2005), and an introductory comment, Adkins (2005), recently published a ''calibration curve spanning 0-50,000 years''. Fairbanks et al. (2005) and Chiu et al. (2005) have made a significant contribution to the database on which the IntCal04 and Marine04 calibration curves are based. These authors have now taken the further step to derive their own radiocarbon calibration extending to 50,000 cal BP, which they claim is superior to that generated by the IntCal working group. In their papers, these authors are strongly critical of the IntCal calibration efforts for what they claim to be inadequate screening and sample pretreatment methods. While these criticisms may ultimately be helpful in identifying a better set of protocols, we feel that there are also several erroneous and misleading

  2. GIADA: extended calibration activities before the comet encounter

    NASA Astrophysics Data System (ADS)

    Accolla, Mario; Sordini, Roberto; Della Corte, Vincenzo; Ferrari, Marco; Rotundi, Alessandra

    2014-05-01

    The Grain Impact Analyzer and Dust Accumulator - GIADA - is one of the payloads on-board the Rosetta Orbiter. Its three detection sub-systems are able to measure the speed, momentum, mass and optical cross section of single cometary grains and the dust flux ejected by the periodic comet 67P/Churyumov-Gerasimenko. During the hibernation phase of the Rosetta mission, we performed a dedicated extended calibration activity on the GIADA Proto Flight Model (accommodated in a clean room in our laboratory) involving two of the three sub-systems constituting GIADA, i.e. the Grain Detection System (GDS) and the Impact Sensor (IS). Our aim is to derive a new set of response curves for these two sub-systems and to correlate them with the calibration curves obtained in 2002 for the GIADA payload on-board the Rosetta spacecraft, in order to improve the interpretation of the forthcoming scientific data. For the extended calibration we dropped or shot into the GIADA PFM a statistically relevant number of grains (about one hundred) acting as cometary dust analogues. We studied the response of the GDS and IS as a function of grain composition, size and velocity. Different terrestrial materials were selected as cometary analogues according to the more recent knowledge gained through the analyses of Interplanetary Dust Particles and of cometary samples returned from comet 81P/Wild 2 (Stardust mission). For each material we produced grains with sizes ranging from 20-500 μm in diameter, which were characterized by FESEM and micro-IR spectroscopy. The grains were then shot into the GIADA PFM with speeds ranging between 1 and 100 m s-1. According to the estimates reported in Fink & Rubin (2012), this range is representative of the dust particle velocities expected at the comet and lies within the GIADA velocity sensitivity (1-100 m s-1 for GDS and 1-300 m s-1 for GDS+IS). The response curves obtained using the data collected

  3. Fitting Richards' curve to data of diverse origins

    USGS Publications Warehouse

    Johnson, D.H.; Sargeant, A.B.; Allen, S.H.

    1975-01-01

    Published techniques for fitting data to nonlinear growth curves are briefly reviewed; most techniques require knowledge of the shape of the curve. A flexible growth curve developed by Richards (1959) is discussed as an alternative when the shape is unknown. The shape of this curve is governed by a specific parameter which can be estimated from the data. We describe in detail the fitting of a diverse set of longitudinal and cross-sectional data to Richards' growth curve for the purpose of determining the age of red fox (Vulpes vulpes) pups on the basis of right hind foot length. The fitted curve is found suitable for pups less than approximately 80 days old. The curve is extrapolated to pre-natal growth and shown to be appropriate only for about 10 days prior to birth.
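    The shape-parameter idea can be sketched as a nonlinear least-squares fit of one common Richards parameterization. This is a minimal illustration on synthetic data (not the fox dataset), with SciPy's `curve_fit` standing in for the paper's fitting procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def richards(t, A, k, t0, nu):
    """Richards growth curve; nu is the shape parameter
    (nu -> 1 recovers the logistic curve, nu -> 0 the Gompertz limit)."""
    return A * (1.0 + nu * np.exp(-k * (t - t0))) ** (-1.0 / nu)

# synthetic "foot length vs. age" data, illustrative values only
rng = np.random.default_rng(0)
t = np.linspace(0.0, 80.0, 40)
y = richards(t, 140.0, 0.08, 20.0, 0.5) + rng.normal(0.0, 1.0, t.size)

# the shape parameter nu is estimated from the data along with the rest
popt, pcov = curve_fit(richards, t, y, p0=[150.0, 0.1, 15.0, 1.0])
```

    Because the shape is itself a fitted parameter, no prior choice between logistic-like and Gompertz-like growth is needed.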

  4. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
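    Propagating individual measurement uncertainties through a defining functional expression is, to first order, the delta method: the output variance is J Σ Jᵀ for Jacobian J and input covariance Σ. A minimal numerical sketch, where the dynamic-pressure example and its input variances are invented for illustration:

```python
import numpy as np

def propagate(f, x, cov, h=1e-6):
    """First-order (delta-method) propagation of the input covariance
    `cov` at point `x` through a scalar function `f`, using a
    central-difference Jacobian."""
    x = np.asarray(x, dtype=float)
    J = np.empty(x.size)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = h
        J[i] = (f(x + dx) - f(x - dx)) / (2.0 * h)
    return J @ cov @ J  # scalar output variance

# example: dynamic pressure q = 0.5 * rho * v**2
f = lambda p: 0.5 * p[0] * p[1] ** 2
var_q = propagate(f, [1.2, 30.0], np.diag([0.01 ** 2, 0.5 ** 2]))
# a 95 percent expanded uncertainty would be ~1.96 * sqrt(var_q)
```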

  5. Can hydraulic-modelled rating curves reduce uncertainty in high flow data?

    NASA Astrophysics Data System (ADS)

    Westerberg, Ida; Lam, Norris; Lyon, Steve W.

    2017-04-01

    Flood risk assessments rely on accurate discharge data records. Establishing a reliable rating curve for calculating discharge from stage at a gauging station normally takes years of data collection efforts. Estimation of high flows is particularly difficult as high flows occur rarely and are often practically difficult to gauge. Hydraulically-modelled rating curves can be derived based on as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions. This means that a reliable rating curve can, potentially, be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. In this study we compared the uncertainty in discharge data that resulted from these two rating curve modelling approaches. We applied both methods to a Swedish catchment, accounting for uncertainties in the stage-discharge gauging and water-surface slope data for the hydraulic model and in the stage-discharge gauging data and rating-curve parameters for the traditional method. We focused our analyses on high-flow uncertainty and the factors that could reduce this uncertainty. In particular, we investigated which data uncertainties were most important, and at what flow conditions the gaugings should preferably be taken. First results show that the hydraulically-modelled rating curves were more sensitive to uncertainties in the calibration measurements of discharge than in the water-surface slope. The uncertainty of the hydraulically-modelled rating curves was lowest within the range of the three calibration stage-discharge gaugings (i.e. between median and two-times median flow) whereas uncertainties were higher outside of this range. For instance, at the highest observed stage of the 24-year stage record, the 90% uncertainty band was -15% to +40% of the official rating curve. Additional gaugings at high flows (i.e. four to five times median flow) would likely substantially reduce those uncertainties.
These first results show
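    A traditional rating curve of the kind discussed here is usually a power law in stage, Q = a(h - h0)^b, fitted to the available gaugings. A minimal sketch of that fit; the stage-discharge numbers below are invented, not the Swedish catchment data:

```python
import numpy as np
from scipy.optimize import curve_fit

def rating(h, a, h0, b):
    """Power-law rating curve: discharge from stage, with cease-to-flow
    stage h0."""
    return a * (h - h0) ** b

# hypothetical stage (m) / discharge (m3/s) gaugings
h = np.array([0.6, 0.9, 1.3, 1.8, 2.4])
q = np.array([1.6, 4.9, 12.1, 25.6, 48.4])

# bound h0 below the lowest gauged stage so (h - h0) stays positive
popt, _ = curve_fit(rating, h, q, p0=[8.0, 0.1, 1.8],
                    bounds=([0.1, -1.0, 0.5], [100.0, 0.55, 4.0]))
a, h0, b = popt
```

    Extrapolating such a curve beyond the highest gauging is exactly where the uncertainty band widens, which is the motivation for the hydraulic-model alternative.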

  6. Direct Breakthrough Curve Prediction From Statistics of Heterogeneous Conductivity Fields

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Haslauer, Claus P.; Cirpka, Olaf A.; Vesselinov, Velimir V.

    2018-01-01

    This paper presents a methodology to predict the shape of solute breakthrough curves in heterogeneous aquifers at early times and/or under high degrees of heterogeneity, both cases in which the classical macrodispersion theory may not be applicable. The methodology relies on the observation that breakthrough curves in heterogeneous media are generally well described by lognormal distributions, and mean breakthrough times can be predicted analytically. The log-variance of solute arrival is thus sufficient to completely specify the breakthrough curves, and this is calibrated as a function of aquifer heterogeneity and dimensionless distance from a source plane by means of Monte Carlo analysis and statistical regression. Using the ensemble of simulated groundwater flow and solute transport realizations employed to calibrate the predictive regression, reliability estimates for the prediction are also developed. Additional theoretical contributions include heuristics for the time until an effective macrodispersion coefficient becomes applicable, and also an expression for its magnitude that applies in highly heterogeneous systems. It is seen that the results here represent a way to derive continuous time random walk transition distributions from physical considerations rather than from empirical field calibration.
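    Since the mean breakthrough time is predicted analytically and the log-variance comes from the calibrated regression, the full curve follows from a two-parameter lognormal. A minimal sketch of that closure; the mean and log-variance values are arbitrary:

```python
import numpy as np

def lognormal_btc(t, mean_arrival, log_var):
    """Lognormal arrival-time pdf with prescribed mean breakthrough time
    and log-variance of solute arrival."""
    mu = np.log(mean_arrival) - log_var / 2.0  # ensures E[t] = mean_arrival
    return np.exp(-(np.log(t) - mu) ** 2 / (2.0 * log_var)) \
        / (t * np.sqrt(2.0 * np.pi * log_var))

t = np.linspace(0.01, 40.0, 4000)
c = lognormal_btc(t, mean_arrival=2.0, log_var=0.5)

# sanity checks: the pdf integrates to ~1 and recovers the mean arrival time
dt = t[1] - t[0]
mass = np.sum(c) * dt
mean = np.sum(t * c) * dt
```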

  7. A dose-response curve for biodosimetry from a 6 MV electron linear accelerator

    PubMed Central

    Lemos-Pinto, M.M.P.; Cadena, M.; Santos, N.; Fernandes, T.S.; Borges, E.; Amaral, A.

    2015-01-01

    Biological dosimetry (biodosimetry) is based on the investigation of radiation-induced biological effects (biomarkers), mainly dicentric chromosomes, in order to correlate them with radiation dose. To interpret the dicentric score in terms of absorbed dose, a calibration curve is needed. Each curve should be constructed with respect to basic physical parameters, such as the type of ionizing radiation characterized by low or high linear energy transfer (LET) and dose rate. This study was designed to obtain dose calibration curves by scoring of dicentric chromosomes in peripheral blood lymphocytes irradiated in vitro with a 6 MV electron linear accelerator (Mevatron M, Siemens, USA). Two software programs, CABAS (Chromosomal Aberration Calculation Software) and Dose Estimate, were used to generate the curve. The two software programs are discussed; the results obtained were compared with each other and with other published low LET radiation curves. Both software programs resulted in identical linear and quadratic terms for the curve presented here, which was in good agreement with published curves for similar radiation quality and dose rates. PMID:26445334
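    For low-LET radiation the calibration curve is linear-quadratic in dose, Y = C + αD + βD². Once the coefficients are fitted, dose estimation from an observed dicentric yield is the positive root of that quadratic. A minimal sketch; the coefficient values below are invented for illustration, not the fitted values from this study:

```python
import numpy as np

# hypothetical linear-quadratic coefficients (dicentrics per cell)
C, alpha, beta = 0.001, 0.03, 0.06

def dose_from_yield(y):
    """Invert Y = C + alpha*D + beta*D**2 for absorbed dose D (Gy),
    taking the positive root of beta*D**2 + alpha*D + (C - y) = 0."""
    disc = alpha ** 2 - 4.0 * beta * (C - y)
    return (-alpha + np.sqrt(disc)) / (2.0 * beta)

y_obs = 0.35  # observed dicentric yield
d = dose_from_yield(y_obs)
```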

  8. Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods

    NASA Astrophysics Data System (ADS)

    Perumal, Muthiah; Tayfur, Gokmen; Rao, C. Madhusudana; Gurarslan, Gurhan

    2017-03-01

    Two variants of the Muskingum flood routing method formulated for accounting nonlinearity of the channel routing process are investigated in this study. These variant methods are: (1) The three-parameter conceptual Nonlinear Muskingum (NLM) method advocated by Gill in 1978, and (2) The Variable Parameter McCarthy-Muskingum (VPMM) method recently proposed by Perumal and Price in 2013. The VPMM method does not require rigorous calibration and validation procedures as required in the case of NLM method due to established relationships of its parameters with flow and channel characteristics based on hydrodynamic principles. The parameters of the conceptual nonlinear storage equation used in the NLM method were calibrated using the Artificial Intelligence Application (AIA) techniques, such as the Genetic Algorithm (GA), the Differential Evolution (DE), the Particle Swarm Optimization (PSO) and the Harmony Search (HS). The calibration was carried out on a given set of hypothetical flood events obtained by routing a given inflow hydrograph in a set of 40 km length prismatic channel reaches using the Saint-Venant (SV) equations. The validation of the calibrated NLM method was investigated using a different set of hypothetical flood hydrographs obtained in the same set of channel reaches used for calibration studies. Both the sets of solutions obtained in the calibration and validation cases using the NLM method were compared with the corresponding solutions of the VPMM method based on some pertinent evaluation measures. The results of the study reveal that the physically based VPMM method is capable of accounting for nonlinear characteristics of flood wave movement better than the conceptually based NLM method which requires the use of tedious calibration and validation procedures.
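    The NLM storage law is S = K[xI + (1-x)O]^m; given calibrated K, x, m, routing alternates between inverting the storage law for outflow and updating storage by continuity, dS/dt = I - O. A minimal explicit-Euler sketch with illustrative parameter values (the AIA calibration step itself is not shown):

```python
import numpy as np

def nlm_route(inflow, K, x, m, dt=1.0):
    """Route a hydrograph through the nonlinear Muskingum storage law
    S = K*(x*I + (1-x)*O)**m, stepping dS/dt = I - O with explicit Euler."""
    outflow = np.empty(len(inflow))
    S = K * inflow[0] ** m  # start at steady state (I = O)
    for i, I in enumerate(inflow):
        # invert the storage law for the current outflow
        O = ((S / K) ** (1.0 / m) - x * I) / (1.0 - x)
        outflow[i] = O
        if i < len(inflow) - 1:
            S += dt * (I - O)  # continuity update
    return outflow

# steady-state check: constant inflow should pass through unchanged
steady = nlm_route(np.full(8, 5.0), K=0.9, x=0.25, m=1.4)
```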

  9. Traction curves for the decohesion of covalent crystals

    NASA Astrophysics Data System (ADS)

    Enrique, Raúl A.; Van der Ven, Anton

    2017-01-01

    We study, by first principles, the energy versus separation curves for the cleavage of a family of covalent crystals with the diamond and zincblende structure. We find that there is universality in the curves for different materials which is chemistry independent but specific to the geometry of the particular cleavage plane. Since these curves do not strictly follow the universal binding energy relationship (UBER), we present a derivation of an extension to this relationship that includes non-linear force terms. This extended form of UBER allows for a flexible and practical mathematical description of decohesion curves that can be applied to the quantification of cohesive zone models.

  10. Calibration improvements to electronically scanned pressure systems and preliminary statistical assessment

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.

    1996-01-01

    Orifice-to-orifice inconsistencies in data acquired with an electronically-scanned pressure system at the beginning of a wind tunnel experiment forced modifications to the standard, instrument calibration procedures. These modifications included a large increase in the number of calibration points which would allow a critical examination of the calibration curve-fit process, and a subsequent post-test reduction of the pressure data. Evaluation of these data has resulted in an improved functional representation of the pressure-voltage signature for electronically-scanned pressure sensors, which can reduce the errors due to calibration curve fit to under 0.10 percent of reading compared to the manufacturer specified 0.10 percent of full scale. Application of the improved calibration function allows a more rational selection of the calibration set-point pressures. These pressures should be adjusted to achieve a voltage output which matches the physical shape of the pressure-voltage signature of the sensor. This process is conducted in lieu of the more traditional approach where a calibration pressure is specified and the resulting sensor voltage is recorded. The fifteen calibrations acquired over the two-week duration of the wind tunnel test were further used to perform a preliminary, statistical assessment of the variation in the calibration process. The results allowed the estimation of the bias uncertainty for a single instrument calibration; and they form the precursor for more extensive and more controlled studies in the laboratory.
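    In practice the calibration curve fit amounts to expressing pressure as a low-order polynomial in sensor voltage and reporting the residual as a percent of reading. A minimal sketch with an invented cubic pressure-voltage signature (not the actual ESP sensor model):

```python
import numpy as np

# hypothetical sensor signature: pressure as a cubic in output voltage
v = np.linspace(-2.0, 2.0, 15)  # set-point output voltages
p = 50.0 + 25.0 * v + 1.5 * v ** 2 + 0.4 * v ** 3  # applied pressures

# calibration curve fit: cubic polynomial pressure = f(voltage)
coeffs = np.polyfit(v, p, 3)
residual = p - np.polyval(coeffs, v)

# curve-fit error expressed as percent of reading, not percent of full scale
pct = 100.0 * np.max(np.abs(residual / p))
```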

  11. Flight Calibration of the LROC Narrow Angle Camera

    NASA Astrophysics Data System (ADS)

    Humm, D. C.; Tschimmel, M.; Brylow, S. M.; Mahanti, P.; Tran, T. N.; Braden, S. E.; Wiseman, S.; Danton, J.; Eliason, E. M.; Robinson, M. S.

    2016-04-01

    Characterization and calibration are vital for instrument commanding and image interpretation in remote sensing. The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) takes 500 Mpixel greyscale images of lunar scenes at 0.5 meters/pixel. It uses two nominally identical line scan cameras for a larger crosstrack field of view. Stray light, spatial crosstalk, and nonlinearity were characterized using flight images of the Earth and the lunar limb. These are important for imaging shadowed craters, studying ˜1 meter size objects, and photometry respectively. Background, nonlinearity, and flatfield corrections have been implemented in the calibration pipeline. An eight-column pattern in the background is corrected. The detector is linear for DN = 600--2000 but a signal-dependent additive correction is required and applied for DN<600. A predictive model of detector temperature and dark level was developed to command dark level offset. This avoids images with a cutoff at DN=0 and minimizes quantization error in companding. Absolute radiometric calibration is derived from comparison of NAC images with ground-based images taken with the Robotic Lunar Observatory (ROLO) at much lower spatial resolution but with the same photometric angles.

  12. Neuromusculoskeletal model self-calibration for on-line sequential bayesian moment estimation

    NASA Astrophysics Data System (ADS)

    Bueno, Diana R.; Montano, L.

    2017-04-01

    Objective. Neuromusculoskeletal models involve many subject-specific physiological parameters that need to be adjusted to adequately represent muscle properties. Traditionally, neuromusculoskeletal models have been calibrated with a forward-inverse dynamic optimization which is time-consuming and unfeasible for rehabilitation therapy. No self-calibration algorithms have previously been applied to these models. To the best of our knowledge, the algorithm proposed in this work is the first on-line calibration algorithm for muscle models that allows a generic model to be adjusted to different subjects in a few steps. Approach. In this paper we propose a reformulation of the traditional muscle models that is able to sequentially estimate the kinetics (net joint moments), and also its full self-calibration (subject-specific internal parameters of the muscle from a set of arbitrary uncalibrated data), based on the unscented Kalman filter. The nonlinearity of the model as well as its calibration problem have obliged us to adopt the sum of Gaussians filter suitable for nonlinear systems. Main results. This sequential Bayesian self-calibration algorithm achieves a complete muscle model calibration using as input only a dataset of uncalibrated sEMG and kinematics data. The approach is validated experimentally using data from the upper limbs of 21 subjects. Significance. The results show the feasibility of neuromusculoskeletal model self-calibration. This study will contribute to a better understanding of the generalization of muscle models for subject-specific rehabilitation therapies. Moreover, this work is very promising for rehabilitation devices such as electromyography-driven exoskeletons or prostheses.

  13. Perspectives on Geometrodynamics: The Nonlinear Dynamics of Curved Spacetime

    NASA Astrophysics Data System (ADS)

    Thorne, Kip S.

    2012-03-01

    In the 1950s John Archibald Wheeler exhorted his students and colleagues to explore ``Geometrodynamics,'' i.e. the dynamical behavior of curved spacetime, as predicted by Einstein's general relativity theory. Unfortunately, the research tools of that era were inadequate for the task. This has changed over the past ten years and will change further in the coming decade, thanks to two new sets of tools - numerical relativity, and gravitational wave observations, coupled to theory. In this lecture, I will review the progress and prospects for geometrodynamics, focusing especially on: 1. Geometrodynamics near singularities, 2. Geometrodynamics triggered by colliding black holes, 3. Geometrodynamics triggered by black-string instabilities in four space dimensions, and 4. Preparations for observing the dynamics of curved spacetime with interferometric gravitational wave detectors: LIGO and its international partners.

  14. TBA-like integral equations from quantized mirror curves

    NASA Astrophysics Data System (ADS)

    Okuyama, Kazumi; Zakany, Szabolcs

    2016-03-01

    Quantizing the mirror curve of certain toric Calabi-Yau (CY) three-folds leads to a family of trace class operators. The resolvent function of these operators is known to encode topological data of the CY. In this paper, we show that in certain cases, this resolvent function satisfies a system of non-linear integral equations whose structure is very similar to the Thermodynamic Bethe Ansatz (TBA) systems. This can be used to compute spectral traces, both exactly and as a semiclassical expansion. As a main example, we consider the system related to the quantized mirror curve of local P2. According to a recent proposal, the traces of this operator are determined by the refined BPS indices of the underlying CY. We use our non-linear integral equations to test that proposal.

  15. Nonlinear flutter analysis of composite panels

    NASA Astrophysics Data System (ADS)

    An, Xiaomin; Wang, Yan

    2018-05-01

    Nonlinear panel flutter is an interesting subject of fluid-structure interaction. In this paper, nonlinear flutter characteristics of curved composite panels are studied in very low supersonic flow. The composite panel with geometric nonlinearity is modeled by a nonlinear finite element method; and the responses are computed by the nonlinear Newmark algorithm. An unsteady aerodynamic solver, which contains a flux splitting scheme and dual time marching technology, is employed in calculating the unsteady pressure of the motion of the panel. Based on a half-step staggered coupled solution, the aeroelastic responses of two composite panels with different radii of R = 5 and R = 2.5 are computed and compared with each other at different dynamic pressures for Ma = 1.05. The nonlinear flutter characteristics comprising limit cycle oscillations and chaos are analyzed and discussed.

  16. X-ray Diffraction Crystal Calibration and Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael J. Haugh; Richard Stewart; Nathan Kugland

    2009-06-05

    National Security Technologies’ X-ray Laboratory comprises a multi-anode Manson type source and a Henke type source that incorporates a dual goniometer and XYZ translation stage. The first goniometer is used to isolate a particular spectral band. The Manson operates up to 10 kV and the Henke up to 20 kV. The Henke rotation stages and translation stages are automated. Procedures have been developed to characterize and calibrate various NIF diagnostics and their components. The diagnostics include X-ray cameras, gated imagers, streak cameras, and other X-ray imaging systems. Components that have been analyzed include filters, filter arrays, grazing incidence mirrors, and various crystals, both flat and curved. Recent efforts on the Henke system are aimed at characterizing and calibrating imaging crystals and curved crystals used as the major component of an X-ray spectrometer. The presentation will concentrate on these results. The work has been done at energies ranging from 3 keV to 16 keV. The major goal was to evaluate the performance quality of the crystal for its intended application. For the imaging crystals we measured the laser beam reflection offset from the X-ray beam and the reflectivity curves. For the curved spectrometer crystal, which was a natural crystal, resolving power was critical. It was first necessary to find sources of crystals that had sufficiently narrow reflectivity curves. It was then necessary to determine which crystals retained their resolving power after being thinned and glued to a curved substrate.

  17. Bilinear modelling of cellulosic orthotropic nonlinear materials

    Treesearch

    E.P. Saliklis; T. J. Urbanik; B. Tokyay

    2003-01-01

    The proposed method of modelling orthotropic solids that have a nonlinear constitutive material relationship affords several advantages. The first advantage is the application of a simple bilinear stress-strain curve to represent the material response on two orthogonal axes as well as in shear, even for markedly nonlinear materials. The second advantage is that this...

  18. Non-linear behavior of fiber composite laminates

    NASA Technical Reports Server (NTRS)

    Hashin, Z.; Bagchi, D.; Rosen, B. W.

    1974-01-01

    The non-linear behavior of fiber composite laminates which results from lamina non-linear characteristics was examined. The analysis uses a Ramberg-Osgood representation of the lamina transverse and shear stress strain curves in conjunction with deformation theory to describe the resultant laminate non-linear behavior. A laminate having an arbitrary number of oriented layers and subjected to a general state of membrane stress was treated. Parametric results and comparison with experimental data and prior theoretical results are presented.

  19. Calibrating Images from the MINERVA Cameras

    NASA Astrophysics Data System (ADS)

    Mercedes Colón, Ana

    2016-01-01

    The MINiature Exoplanet Radial Velocity Array (MINERVA) consists of an array of robotic telescopes located on Mount Hopkins, Arizona with the purpose of performing transit photometry and spectroscopy to find Earth-like planets around Sun-like stars. In order to make photometric observations, it is necessary to perform calibrations on the CCD cameras of the telescopes to take into account possible instrument error on the data. In this project, we developed a pipeline that takes optical images, calibrates them using sky flats, darks, and biases to generate a transit light curve.
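    The reduction described here is the standard CCD recipe: subtract bias and dark frames, then divide by a normalized flat field. A minimal sketch with a synthetic round-trip check (array shapes and values are invented):

```python
import numpy as np

def calibrate_frame(raw, bias, dark, flat):
    """Standard CCD reduction: remove additive bias and dark signal,
    then divide by the median-normalized flat field."""
    flat_norm = flat / np.median(flat)
    return (raw - bias - dark) / flat_norm

# synthetic 4x4 frame: build a raw image from a known science frame,
# then verify the pipeline recovers it
rng = np.random.default_rng(1)
science = np.full((4, 4), 100.0)
bias = np.full((4, 4), 10.0)
dark = np.full((4, 4), 5.0)
flat = 1.0 + 0.05 * rng.standard_normal((4, 4))  # pixel-to-pixel response

raw = science * (flat / np.median(flat)) + bias + dark
recovered = calibrate_frame(raw, bias, dark, flat)
```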

  20. Analytic reflected light curves for exoplanets

    NASA Astrophysics Data System (ADS)

    Haggard, Hal M.; Cowan, Nicolas B.

    2018-07-01

    The disc-integrated reflected brightness of an exoplanet changes as a function of time due to orbital and rotational motions coupled with an inhomogeneous albedo map. We have previously derived analytic reflected light curves for spherical harmonic albedo maps in the special case of a synchronously rotating planet on an edge-on orbit (Cowan, Fuentes & Haggard). In this paper, we present analytic reflected light curves for the general case of a planet on an inclined orbit, with arbitrary spin period and non-zero obliquity. We do so for two different albedo basis maps: bright points (δ-maps), and spherical harmonics (Y_l^m-maps). In particular, we use Wigner D-matrices to express a harmonic light curve for an arbitrary viewing geometry as a non-linear combination of harmonic light curves for the simpler edge-on, synchronously rotating geometry. These solutions will enable future exploration of the degeneracies and information content of reflected light curves, as well as fast calculation of light curves for mapping exoplanets based on time-resolved photometry. To these ends, we make available Exoplanet Analytic Reflected Lightcurves, a simple open-source code that allows rapid computation of reflected light curves.

  1. Radiometric calibration of the vacuum-ultraviolet spectrograph SUMER on the SOHO spacecraft with the B detector.

    PubMed

    Schühle, U; Curdt, W; Hollandt, J; Feldman, U; Lemaire, P; Wilhelm, K

    2000-01-20

    The Solar Ultraviolet Measurement of Emitted Radiation (SUMER) vacuum-ultraviolet spectrograph was calibrated in the laboratory before the integration of the instrument on the Solar and Heliospheric Observatory (SOHO) spacecraft in 1995. During the scientific operation of the SOHO it has been possible to track the radiometric calibration of the SUMER spectrograph since March 1996 by a strategy that employs various methods to update the calibration status and improve the coverage of the spectral calibration curve. The results for the A Detector were published previously [Appl. Opt. 36, 6416 (1997)]. During three years of operation in space, the B detector was used for two and one-half years. We describe the characteristics of the B detector and present results of the tracking and refinement of the spectral calibration curves with it. Observations of the spectra of the stars alpha and rho Leonis permit an extrapolation of the calibration curves in the range from 125 to 149.0 nm. Using a solar coronal spectrum observed above the solar disk, we can extrapolate the calibration curves by measuring emission line pairs with well-known intensity ratios. The sensitivity ratio of the two photocathode areas can be obtained by registration of many emission lines in the entire spectral range on both KBr-coated and bare parts of the detector's active surface. The results are found to be consistent with the published calibration performed in the laboratory in the wavelength range from 53 to 124 nm. We can extrapolate the calibration outside this range to 147 nm with a relative uncertainty of ±30% (1σ) for wavelengths longer than 125 nm and to 46.5 nm with 50% uncertainty for the short-wavelength range below 53 nm.

  2. Application of Composite Small Calibration Objects in Traffic Accident Scene Photogrammetry

    PubMed Central

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies. PMID:26011052
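    The two-dimensional direct linear transformation at the core of this method estimates a planar homography from point correspondences by linear least squares. A minimal SVD-based sketch of the basic DLT step (the paper's reprojection-error minimization over multiple composite calibration objects is not shown):

```python
import numpy as np

def dlt_homography(src, dst):
    """Two-dimensional DLT: least-squares 3x3 homography mapping
    src -> dst, solved as the null vector of the stacked constraints."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale ambiguity

# synthetic check: recover a known homography from 5 correspondences
H_true = np.array([[1.2, 0.1, 5.0], [0.05, 0.9, -3.0], [1e-3, 2e-3, 1.0]])
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 3.0)]
dst = []
for x, y in src:
    u, v, w = H_true @ np.array([x, y, 1.0])
    dst.append((u / w, v / w))
H = dlt_homography(src, dst)
```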

  3. Application of composite small calibration objects in traffic accident scene photogrammetry.

    PubMed

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.

  4. Holographic Entanglement Entropy, SUSY & Calibrations

    NASA Astrophysics Data System (ADS)

    Colgáin, Eoin Ó.

    2018-01-01

    Holographic calculations of entanglement entropy boil down to identifying minimal surfaces in curved spacetimes. This generically entails solving second-order equations. For higher-dimensional AdS geometries, we demonstrate that supersymmetry and calibrations reduce the problem to first-order equations. We note that minimal surfaces corresponding to disks preserve supersymmetry, whereas strips do not.

  5. Calibrant-Free Analyte Quantitation via a Variable Velocity Flow Cell.

    PubMed

    Beck, Jason G; Skuratovsky, Aleksander; Granger, Michael C; Porter, Marc D

    2017-01-17

    In this paper, we describe a novel method for analyte quantitation that does not rely on calibrants, internal standards, or calibration curves but, rather, leverages the relationship between disparate and predictable surface-directed analyte flux to an array of sensing addresses and a measured resultant signal. To reduce this concept to practice, we fabricated two flow cells such that the mean linear fluid velocity, U, was varied systematically over an array of electrodes positioned along the flow axis. This resulted in a predictable variation of the address-directed flux of a redox analyte, ferrocenedimethanol (FDM). The resultant limiting currents, measured at a series of these electrodes and accurately described by a convective-diffusive transport model, provided a means to calculate an "unknown" concentration without the use of calibrants, internal standards, or a calibration curve. Furthermore, the experiment and concentration calculation only takes minutes to perform. Deviation in calculated FDM concentrations from true values was minimized to less than 0.5% when empirically derived values of U were employed.

  6. Decomposition of mineral absorption bands using nonlinear least squares curve fitting: Application to Martian meteorites and CRISM data

    NASA Astrophysics Data System (ADS)

    Parente, Mario; Makarewicz, Heather D.; Bishop, Janice L.

    2011-04-01

    This study advances curve-fitting modeling of absorption bands of reflectance spectra and applies this new model to spectra of Martian meteorites ALH 84001 and EETA 79001 and data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). This study also details a recently introduced automated parameter initialization technique. We assess the performance of this automated procedure by comparing it to the currently available initialization method and perform a sensitivity analysis of the fit results to variation in initial guesses. We explore the issues related to the removal of the continuum, offer guidelines for continuum removal when modeling the absorptions and explore different continuum-removal techniques. We further evaluate the suitability of curve fitting techniques using Gaussians/Modified Gaussians to decompose spectra into individual end-member bands. We show that nonlinear least squares techniques such as the Levenberg-Marquardt algorithm achieve comparable results to the MGM model (Sunshine and Pieters, 1993; Sunshine et al., 1990) for meteorite spectra. Finally we use Gaussian modeling to fit CRISM spectra of pyroxene and olivine-rich terrains on Mars. Analysis of CRISM spectra of two regions shows that the pyroxene-dominated rock spectra measured at Juventae Chasma were modeled well with low Ca pyroxene, while the pyroxene-rich spectra acquired at Libya Montes required both low-Ca and high-Ca pyroxene for a good fit.
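    The decomposition step is nonlinear least squares over a sum of Gaussian bands on a continuum-removed spectrum. A minimal Levenberg-Marquardt sketch with two invented absorption bands (not a CRISM retrieval; SciPy's `curve_fit` defaults to Levenberg-Marquardt when no bounds are given):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, c1, w1, a2, c2, w2):
    """Sum of two Gaussian bands on a continuum-removed spectrum;
    negative amplitudes represent absorptions."""
    g = lambda a, c, w: a * np.exp(-((x - c) ** 2) / (2.0 * w ** 2))
    return g(a1, c1, w1) + g(a2, c2, w2)

x = np.linspace(1.0, 2.5, 300)  # wavelength, micrometres (illustrative)
truth = (-0.30, 1.9, 0.12, -0.15, 2.3, 0.10)
y = two_gaussians(x, *truth)

# Levenberg-Marquardt fit from a rough initial guess (the paper's
# automated parameter initialization is not reproduced here)
popt, _ = curve_fit(two_gaussians, x, y,
                    p0=(-0.2, 1.85, 0.1, -0.1, 2.25, 0.1))
```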

  7. The estimation of branching curves in the presence of subject-specific random effects.

    PubMed

    Elmi, Angelo; Ratcliffe, Sarah J; Guo, Wensheng

    2014-12-20

    Branching curves are a technique for modeling curves that change trajectory at a change (branching) point. Currently, the estimation framework is limited to independent data, and smoothing splines are used for estimation. This article aims to extend the branching curve framework to the longitudinal data setting where the branching point varies by subject. If the branching point is modeled as a random effect, then the longitudinal branching curve framework is a semiparametric nonlinear mixed effects model. Given existing issues with using random effects within a smoothing spline, we express the model as a B-spline based semiparametric nonlinear mixed effects model. Simple, clever smoothness constraints are enforced on the B-splines at the change point. The method is applied to Women's Health data where we model the shape of the labor curve (cervical dilation measured longitudinally) before and after treatment with oxytocin (a labor stimulant). Copyright © 2014 John Wiley & Sons, Ltd.
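
    The continuity constraint at the change point can be sketched with a linear truncated-line basis (the simplest B-spline case); here the branching point is profiled over a grid for a single subject, so this is only a simplified illustration of the idea, not the mixed-effects model:

```python
import numpy as np

def branching_design(t, tau):
    """Design matrix for a curve that changes slope at branching point tau
    while staying continuous there: f(t) = b0 + b1*t + b2*(t - tau)_+ ."""
    return np.column_stack([np.ones_like(t), t, np.maximum(t - tau, 0.0)])

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
tau_true = 4.0
y = 1.0 + 0.5 * t + 1.5 * np.maximum(t - tau_true, 0.0) + 0.05 * rng.standard_normal(t.size)

# Profile over candidate branching points; keep the one with smallest RSS.
taus = np.linspace(1.0, 9.0, 81)
rss = []
for tau in taus:
    X = branching_design(t, tau)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss.append(np.sum((y - X @ beta) ** 2))
tau_hat = taus[int(np.argmin(rss))]
```

    In the paper's setting tau is a subject-specific random effect rather than a profiled fixed quantity, but the continuity-at-the-knot construction is the same.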

  8. 40 CFR 89.319 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...
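
    The regulation's decision rule (use the least-squares line unless a point deviates too far from it, otherwise switch to a best-fit nonlinear equation) can be sketched as follows; the tolerances follow the excerpt's language, while the cubic fallback is illustrative:

```python
import numpy as np

def calibration_fit(conc, response, full_scale, tol_pct=2.0, zero_tol_pct=0.3):
    """Fit a straight line; if any point deviates by more than tol_pct of its
    value (or zero_tol_pct of full scale, near zero), fall back to a cubic."""
    coef = np.polyfit(conc, response, 1)
    dev = np.abs(np.polyval(coef, conc) - response)
    limit = np.maximum(tol_pct / 100.0 * np.abs(response),
                       zero_tol_pct / 100.0 * full_scale)
    if np.all(dev <= limit):
        return coef, "linear"
    return np.polyfit(conc, response, 3), "cubic"

conc = np.linspace(0.0, 100.0, 6)
_, kind_lin = calibration_fit(conc, 2.0 * conc + 1.0, full_scale=201.0)
_, kind_cur = calibration_fit(conc, 2.0 * conc + 0.02 * conc ** 2, full_scale=401.0)
```

    The linear analyzer response stays with the straight line; the curved one trips the deviation limit and gets the nonlinear equation.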

  9. 40 CFR 89.319 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...

  10. 40 CFR 89.319 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...

  11. Research on a high-precision calibration method for tunable lasers

    NASA Astrophysics Data System (ADS)

    Xiang, Na; Li, Zhengying; Gui, Xin; Wang, Fan; Hou, Yarong; Wang, Honghai

    2018-03-01

    Tunable lasers are widely used in the field of optical fiber sensing, but nonlinear tuning exists even for zero external disturbance and limits the accuracy of the demodulation. In this paper, a high-precision calibration method for tunable lasers is proposed. A comb filter is introduced, and the real-time output wavelength and scanning rate of the laser are calibrated by linearly fitting several time-frequency reference points obtained from it. The beat signal generated by the auxiliary interferometer is interpolated and frequency-multiplied to find more accurate zero-crossing points; these points are used as wavelength counters to resample the comb signal and correct the nonlinear effect, which ensures that the time-frequency reference points of the comb filter are linear. A stability experiment and a strain sensing experiment verify the calibration precision of this method. The experimental results show that the stability and wavelength resolution of the FBG demodulation can reach 0.088 pm and 0.030 pm, respectively, using a tunable laser calibrated by the proposed method. We have also compared the demodulation accuracy in the presence or absence of the comb filter, with the results showing that the introduction of the comb filter yields a 15-fold wavelength resolution enhancement.
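
    The zero-crossing resampling step can be sketched as follows, with a chirped sinusoid standing in for the auxiliary-interferometer beat signal (all signal parameters are illustrative):

```python
import numpy as np

t = np.linspace(0.0, 1e-3, 20000)                 # time axis, s
# Chirped beat signal: the quadratic phase term mimics nonlinear tuning.
beat = np.sin(2 * np.pi * (50e3 * t + 2e7 * t ** 2))
# Comb-filter signal sharing the same (nonlinear) optical-frequency axis.
comb = np.cos(2 * np.pi * (50e3 * t + 2e7 * t ** 2) / 5.0)

# Locate rising zero crossings of the beat by linear interpolation.
s = np.signbit(beat)
idx = np.nonzero(s[:-1] & ~s[1:])[0]              # sign changes - to +
t_cross = t[idx] - beat[idx] * (t[idx + 1] - t[idx]) / (beat[idx + 1] - beat[idx])

# The crossings are equally spaced in optical frequency, so resampling the
# comb signal at these times removes the tuning nonlinearity.
comb_resampled = np.interp(t_cross, t, comb)
```

    In the paper the crossings are further refined by interpolation and frequency multiplication of the beat signal; this sketch shows only the basic resampling idea.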

  12. Aero-Thermal Calibration of the NASA Glenn Icing Research Tunnel (2012 Tests)

    NASA Technical Reports Server (NTRS)

    Pastor-Barsi, Christine; Allen, Arrington E.

    2013-01-01

    A full aero-thermal calibration of the NASA Glenn Icing Research Tunnel (IRT) was completed in 2012 following the major modifications to the facility that included replacement of the refrigeration plant and heat exchanger. The calibration test provided data used to fully document the aero-thermal flow quality in the IRT test section and to construct calibration curves for the operation of the IRT.

  13. Exploring Alternative Characteristic Curve Approaches to Linking Parameter Estimates from the Generalized Partial Credit Model.

    ERIC Educational Resources Information Center

    Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill

    Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…

  14. Objective calibration of regional climate models

    NASA Astrophysics Data System (ADS)

    Bellprat, O.; Kotlarski, S.; Lüthi, D.; SchäR, C.

    2012-12-01

    Climate models are subject to high parametric uncertainty induced by poorly confined model parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often haze model inter-comparisons and hinder the implementation of new model parameterizations. Methods that would allow model parameters to be calibrated systematically are unfortunately often not applicable to state-of-the-art climate models, owing to the computational constraints posed by the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on the model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation and total cloud cover, with an uncertainty of similar magnitude as the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only a limited number of 20-50 simulations is needed to estimate optimal parameter settings. Parameter interactions are small, which allows the number of simulations to be reduced further. In comparison to an ensemble of the same model which has undergone expert tuning, the calibration yields similar optimal model configurations, but with an additional reduction of the model error. The performance range captured is much wider than that sampled with the expert-tuned ensemble and the presented
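
    The metamodel idea can be sketched as follows: fit a quadratic surface in the parameters to a handful of model runs, then optimize the cheap surrogate instead of the model (the error function below is a hypothetical stand-in for an expensive regional-climate simulation):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def model_error(p):
    """Hypothetical stand-in for an expensive model-performance score
    evaluated at parameter vector p (a real evaluation is a full run)."""
    return 1.0 + (p[0] - 0.3) ** 2 + 2.0 * (p[1] + 0.1) ** 2 + 0.5 * p[0] * p[1]

# A small design of perturbed-parameter "runs".
P = rng.uniform(-1.0, 1.0, size=(30, 2))
y = np.array([model_error(p) for p in P])

def features(P2):
    """Quadratic metamodel basis: intercept, linear, interaction, squares."""
    x1, x2 = P2[:, 0], P2[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

coef, *_ = np.linalg.lstsq(features(P), y, rcond=None)

def surrogate(p):
    """Cheap quadratic surrogate of the expensive objective."""
    return (features(np.atleast_2d(p)) @ coef)[0]

opt = minimize(surrogate, x0=[0.0, 0.0])   # optimize the metamodel, not the model
```

    Because the surrogate is quadratic, parameter interactions show up directly as the cross-term coefficients, which is what lets a few tens of runs suffice.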

  15. Stability of Nonlinear Swarms on Flat and Curved Surfaces

    DTIC Science & Technology

    numerical experiments have shown that the system either converges to a rotating circular limit cycle with a fixed center of mass, or the agents clump ... Swarming is a near-universal phenomenon in nature. Many mathematical models of swarms exist, both to model natural processes and to control robotic ... agents. We study a swarm of agents with spring-like attraction and nonlinear self-propulsion. Swarms of this type have been studied numerically, but

  16. Linearization of calibration curves by aerosol carrier effect of CCl4 vapor in electrothermal vaporization inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Kántor, Tibor; de Loos-Vollebregt, Margaretha T. C.

    2005-03-01

    Carbon tetrachloride vapor as a gaseous phase modifier in a graphite furnace electrothermal vaporizer (GFETV) converts heavy volatile analyte forms to volatile and medium volatile chlorides and produces an aerosol carrier effect, the latter being a less generally recognized benefit. However, the possible increase of polyatomic interferences in inductively coupled plasma mass spectrometry (GFETV-ICP-MS) by chlorine- and carbon-containing species due to CCl4 vapor introduction has discouraged its use with low-resolution, quadrupole-type MS equipment. Aware of this possible handicap, we investigated the feasibility of using this halogenating agent in ICP-MS with regard to possible hazards to the instrument, and explored its advantages under these specific conditions. With a sample gas flow (inner gas flow) rate not higher than 900 ml min^-1 Ar in the torch and a 3 ml min^-1 CCl4 vapor flow rate in the furnace, the long-term stability of the instrument was ensured and the following benefits of the halocarbon were observed. The non-linearity error (defined in the text) of the calibration curves (signal versus mass functions) with matrix-free solution standards was 30-70% without and 1-5% with CCl4 vapor introduction at a 1 ng mass of the Cu, Fe, Mn and Pb analytes. The sensitivity for these elements increased 2- to 4-fold with chlorination, while the relative standard deviation (RSD) was essentially the same (2-5%) in both cases. A vaporization temperature of 2650 °C was required for Cr in an Ar atmosphere, while 2200 °C was sufficient in an Ar + CCl4 atmosphere to attain complete vaporization. Improvements in linear response and sensitivity were the highest for this least volatile element. The pyrolytic graphite layer inside the graphite tube was protected by the halocarbon, and tube lifetime was further increased by using traces of hydrocarbon vapor in the external sheath gas of the graphite furnace. Details

  17. Nuclear Gauge Calibration and Testing Guidelines for Hawaii

    DOT National Transportation Integrated Search

    2006-12-15

    Project proposal brief: AASHTO and ASTM nuclear gauge testing procedures can lead to misleading density and moisture readings for certain Hawaiian soils. Calibration curves need to be established for these unique materials, along with clear standard ...

  18. General Nonlinear Ferroelectric Model v. Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Wen; Robbins, Josh

    2017-03-14

    The purpose of this software is to function as a generalized ferroelectric material model. The material model is designed to work with existing finite element packages by providing updated information on material properties that are nonlinear and dependent on loading history. The two major nonlinear phenomena this model captures are domain-switching and phase transformation. The software itself does not contain potentially sensitive material information and instead provides a framework for different physical phenomena observed within ferroelectric materials. The model is calibrated to a specific ferroelectric material through input parameters provided by the user.

  19. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  20. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  1. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  2. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  3. Calibrations between the variables of microbial TTI response and ground pork qualities.

    PubMed

    Kim, Eunji; Choi, Dong Yeol; Kim, Hyun Chul; Kim, Keehyuk; Lee, Seung Ju

    2013-10-01

    A time-temperature indicator (TTI) based on a lactic acid bacterium, Weissella cibaria CIFP009, was applied to ground pork packaging. Calibration curves between TTI response and pork qualities were obtained from storage tests at 2°C, 10°C, and 13°C. The curves of TTI response vs. total cell number at different temperatures coincided most closely, showing the lowest coefficient of variation (CV = 11%) of the quality variables at a given TTI response (titratable acidity) and thus the highest representativeness of calibration, followed by pH (23%), volatile basic nitrogen (VBN) (25%), and thiobarbituric acid-reactive substances (TBARS) (47%). Similarity of the Arrhenius activation energy (Ea) could also reflect the representativeness of calibration. The Ea for total cell number (104.9 kJ/mol) was found to be the most similar to that of the TTI response (106.2 kJ/mol), followed by pH (113.6 kJ/mol), VBN (77.4 kJ/mol), and TBARS (55.0 kJ/mol). Copyright © 2013 Elsevier Ltd. All rights reserved.
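
    The Arrhenius comparison can be sketched as follows, with hypothetical quality-change rate constants at the study's three storage temperatures (the paper's Ea values came from its own measured kinetics; these numbers are illustrative only):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

# Hypothetical rate constants at the storage temperatures 2, 10, and 13 °C.
T = np.array([2.0, 10.0, 13.0]) + 273.15   # K
k = np.array([0.012, 0.055, 0.090])        # 1/day

# Arrhenius: ln k = ln A - Ea/(R*T); the slope of ln k vs 1/T is -Ea/R.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_kJ = -slope * R / 1000.0                # activation energy, kJ/mol
```

    Comparing Ea values fitted this way for the TTI response and for each quality index is how the abstract judges which index the TTI represents best.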

  4. Use of armored RNA as a standard to construct a calibration curve for real-time RT-PCR.

    PubMed

    Donia, D; Divizia, M; Pana', A

    2005-06-01

    Armored Enterovirus RNA was used to standardize a real-time reverse transcription (RT)-PCR assay for environmental testing. Armored technology is a system for producing a robust and stable RNA standard, trapped in phage proteins, to be used as an internal control. The Armored Enterovirus RNA protected sequence includes 263 bp of highly conserved sequence in the 5' UTR region. During these tests, Armored RNA was used to produce a calibration curve, comparing three different fluorogenic chemistries: the TaqMan system, SYBR Green I and Lux primers. Three commercial amplification reagent kits used to carry out real-time RT-PCR and several extraction procedures for the protected viral RNA were also evaluated. The highest Armored RNA recovery was obtained by heat treatment, while chemical extraction may decrease the quantity of RNA. The best sensitivity and specificity were obtained using the SYBR Green I technique, since it is reproducible, easy to use and the cheapest of the three. TaqMan and Lux-primer assays provide good RT-PCR efficiency with the various extraction methods used, although the labelled probe or primer required by these chemistries increases the cost of testing.
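
    A minimal sketch of building a standard (calibration) curve from a dilution series of such an RNA standard and quantifying an unknown from its Ct value (all Ct values are illustrative):

```python
import numpy as np

# Hypothetical Ct values from a 10-fold dilution series of the standard
# (copies per reaction).
copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])
ct = np.array([18.1, 21.5, 24.9, 28.3, 31.7])

# Standard curve: Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(copies), ct, 1)

# Amplification efficiency from the slope; -3.32 corresponds to 100%.
efficiency = 10 ** (-1.0 / slope) - 1.0

def quantify(ct_unknown):
    """Copy number of an unknown estimated from its Ct via the standard curve."""
    return 10 ** ((ct_unknown - intercept) / slope)
```

    The slope and efficiency are the usual figures of merit for comparing chemistries such as TaqMan, SYBR Green I and Lux primers.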

  5. State-variable analysis of non-linear circuits with a desk computer

    NASA Technical Reports Server (NTRS)

    Cohen, E.

    1981-01-01

    State variable analysis was used to analyze the transient performance of non-linear circuits on a desktop computer. The non-linearities considered were not restricted to any particular circuit element. All that is required for the analysis is that the relationship defining each non-linearity be known in terms of points on a curve.

  6. On the absolute calibration of SO2 cameras

    USGS Publications Warehouse

    Lübcke, Peter; Bobrowski, Nicole; Illing, Sebastian; Kern, Christoph; Alvarez Nieves, Jose Manuel; Vogel, Leif; Zielcke, Johannes; Delgado Granados, Hugo; Platt, Ulrich

    2013-01-01

    This work investigates the uncertainty of results obtained through two commonly used, but quite different, calibration methods (DOAS and calibration cells). Measurements with three different instruments, an SO2 camera, an NFOV-DOAS system and an Imaging DOAS (I-DOAS), are presented. We compare the calibration-cell approach with the calibration from the NFOV-DOAS system. The respective results are compared with measurements from an I-DOAS to verify the calibration curve over the spatial extent of the image. The results show that calibration cells, while working well in some cases, can lead to an overestimation of the SO2 CD by up to 60% compared with CDs from the DOAS measurements. Besides these calibration errors, radiative transfer effects (e.g. light dilution, multiple scattering) can significantly influence the results of both instrument types. The measurements presented in this work were taken at Popocatépetl, Mexico, between 1 March 2011 and 4 March 2011. Average SO2 emission rates between 4.00 and 14.34 kg s^-1 were observed.

  7. On the long-term stability of calibration standards in different matrices.

    PubMed

    Kandić, A; Vukanac, I; Djurašević, M; Novković, D; Šešlak, B; Milošević, Z

    2012-09-01

    In order to assure Quality Control in accordance with ISO/IEC 17025, it was important, from a metrological point of view, to examine the long-term stability of previously prepared calibration standards. A comprehensive reconsideration of efficiency curves with respect to the ageing of calibration standards is presented in this paper. The calibration standards were re-used after a period of 5 years, and analysis of the results showed discrepancies in efficiency values. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. REVISITING EVIDENCE OF CHAOS IN X-RAY LIGHT CURVES: THE CASE OF GRS 1915+105

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mannattil, Manu; Gupta, Himanshu; Chakraborty, Sagar, E-mail: mmanu@iitk.ac.in, E-mail: hiugupta@iitk.ac.in, E-mail: sagarc@iitk.ac.in

    2016-12-20

    Nonlinear time series analysis has been widely used to search for signatures of low-dimensional chaos in light curves emanating from astrophysical bodies. A particularly popular example is the microquasar GRS 1915+105, whose irregular but systematic X-ray variability has been well studied using data acquired by the Rossi X-ray Timing Explorer. With a view to building simpler models of X-ray variability, attempts have been made to classify the light curves of GRS 1915+105 as chaotic or stochastic. Contrary to some of the earlier suggestions, after careful analysis, we find no evidence for chaos or determinism in any of the GRS 1915+105 classes. The dearth of long and stationary data sets representing all the different variability classes of GRS 1915+105 makes it a poor candidate for analysis using nonlinear time series techniques. We conclude that either very exhaustive data analysis with sufficiently long and stationary light curves should be performed, keeping all the pitfalls of nonlinear time series analysis in mind, or alternative schemes of classifying the light curves should be adopted. The generic limitations of the techniques that we point out in the context of GRS 1915+105 affect all similar investigations of light curves from other astrophysical sources.

  9. X-ray light curves of active galactic nuclei are phase incoherent

    NASA Technical Reports Server (NTRS)

    Krolik, Julian; Done, Chris; Madejski, Grzegorz

    1993-01-01

    We compute the Fourier phase spectra for the light curves of five low-luminosity active galactic nuclei observed by EXOSAT. There is no statistically significant phase coherence in any of them. This statement is equivalent, subject to a technical caveat, to a demonstration that their fluctuation statistics are Gaussian. Models in which the X-ray output is controlled wholly by a unitary process undergoing a nonlinear limit cycle are therefore ruled out, while models with either a large number of randomly excited independent oscillation modes or nonlinearly interacting spatially dependent oscillations are favored. We also demonstrate how the degree of phase coherence in light curve fluctuations influences the application of causality bounds on internal length scales.

  10. Applications of New Surrogate Global Optimization Algorithms including Efficient Synchronous and Asynchronous Parallelism for Calibration of Expensive Nonlinear Geophysical Simulation Models.

    NASA Astrophysics Data System (ADS)

    Shoemaker, C. A.; Pang, M.; Akhtar, T.; Bindel, D.

    2016-12-01

    New parallel surrogate global optimization algorithms are developed and applied to objective functions that are expensive simulations (possibly with multiple local minima). The algorithms can be applied to most geophysical simulations, including those with nonlinear partial differential equations. The optimization does not require that the simulations themselves be parallelized. Asynchronous (and synchronous) parallel execution is available in the optimization toolbox "pySOT". The parallel algorithms are modified from their serial versions to eliminate fine-grained parallelism. The optimization is computed with the open source software pySOT, a Surrogate Global Optimization Toolbox that allows the user to pick the type of surrogate (or ensembles), the search procedure on the surrogate, and the type of parallelism (synchronous or asynchronous). pySOT also allows the user to develop new algorithms by modifying parts of the code. In the applications here, the objective function takes up to 30 minutes for one simulation, and serial optimization can take over 200 hours. Results from the Yellowstone (NSF) and NCSS (Singapore) supercomputers are given for groundwater contaminant hydrology simulations with applications to model parameter estimation and decontamination management. All results are compared with alternatives. The first results are for optimization of pumping at many wells to reduce the cost of decontaminating groundwater at a superfund site. The optimization runs with up to 128 processors. Superlinear speedup is obtained for up to 16 processors, and efficiency with 64 processors is over 80%. Each evaluation of the objective function requires the solution of nonlinear partial differential equations to describe the impact of spatially distributed pumping and model parameters on model predictions for the spatial and temporal distribution of groundwater contaminants. The second application uses an asynchronous parallel global optimization for groundwater quality model calibration.
The time for a single objective
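
    The general surrogate loop can be sketched generically as follows. This is not the pySOT API: it is a simple RBF-interpolant exploitation loop over random candidate points, with a cheap analytic function standing in for the expensive simulation (all names and parameters are illustrative):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)

def expensive(x):
    """Stand-in for a costly simulation-based objective (hypothetical)."""
    return np.sum((x - 0.6) ** 2) + 0.3 * np.sin(8.0 * x[0])

lo, hi, dim, budget = 0.0, 1.0, 2, 40

# Initial space-filling design, then iterate: fit an RBF surrogate to all
# evaluations so far, evaluate the most promising of many cheap candidate
# points, and refit.
X = rng.uniform(lo, hi, size=(8, dim))
y = np.array([expensive(x) for x in X])
while len(y) < budget:
    surrogate = RBFInterpolator(X, y, smoothing=1e-8)
    cand = rng.uniform(lo, hi, size=(2000, dim))
    x_new = cand[np.argmin(surrogate(cand))]
    X = np.vstack([X, x_new])
    y = np.append(y, expensive(x_new))

best = X[np.argmin(y)]
```

    Real surrogate optimizers such as pySOT balance exploration against exploitation in the candidate search and can run the expensive evaluations synchronously or asynchronously in parallel; this sketch keeps only the serial core of the idea.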

  11. Design and calibration of a scanning tunneling microscope for large machined surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grigg, D.A.; Russell, P.E.; Dow, T.A.

    During the last year, the large-sample STM has been designed, built and used for the observation of several different samples. Calibration of the scanner for proper dimensional interpretation of surface features has been a chief concern, as well as corrections for non-linear effects such as hysteresis during scans. Several procedures used in the calibration and correction of the piezoelectric scanners used in the laboratory's STMs are described.

  12. The Importance of Calibration in Clinical Psychology.

    PubMed

    Lindhiem, Oliver; Petersen, Isaac T; Mentch, Lucas K; Youngstrom, Eric A

    2018-02-01

    Accuracy has several elements, not all of which have received equal attention in the field of clinical psychology. Calibration, the degree to which a probabilistic estimate of an event reflects the true underlying probability of the event, has largely been neglected in the field of clinical psychology in favor of other components of accuracy such as discrimination (e.g., sensitivity, specificity, area under the receiver operating characteristic curve). Although it is frequently overlooked, calibration is a critical component of accuracy with particular relevance for prognostic models and risk-assessment tools. With advances in personalized medicine and the increasing use of probabilistic (0% to 100%) estimates and predictions in mental health research, the need for careful attention to calibration has become increasingly important.
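
    Calibration in this sense can be checked by binning probabilistic predictions and comparing each bin's mean prediction with the observed event frequency; a minimal sketch with synthetic, perfectly calibrated estimates:

```python
import numpy as np

def calibration_table(p, outcome, bins=5):
    """Group predicted probabilities into bins and compare the mean prediction
    in each bin with the observed event frequency; well-calibrated estimates
    have the two close across all bins."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(p, edges) - 1, 0, bins - 1)
    rows = []
    for b in range(bins):
        m = idx == b
        if m.any():
            rows.append((p[m].mean(), outcome[m].mean(), int(m.sum())))
    return rows

rng = np.random.default_rng(4)
p = rng.uniform(0.0, 1.0, 5000)
outcome = (rng.uniform(0.0, 1.0, 5000) < p).astype(float)  # calibrated by construction
table = calibration_table(p, outcome)
```

    Note that a model can discriminate well (high AUC) while being badly calibrated, which is exactly the distinction the abstract draws.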

  13. Radiochromic film calibration for the RQT9 quality beam

    NASA Astrophysics Data System (ADS)

    Costa, K. C.; Gomez, A. M. L.; Alonso, T. C.; Mourao, A. P.

    2017-11-01

    When ionizing radiation interacts with matter, it deposits energy. Radiation dosimetry is important for medical applications of ionizing radiation due to the increasing demand for diagnostic radiology and radiotherapy. Different dosimetry methods are used, and each has its advantages and disadvantages. Film is a dose measurement method that records the energy deposition through the darkening of its emulsion. Radiochromic films have little sensitivity to visible light and respond well to ionizing radiation exposure. The aim of this study is to obtain a calibration curve by irradiating radiochromic film strips, making it possible to relate the darkening of the film to the absorbed dose, in order to measure doses in experiments with a 120 kV X-ray beam in computed tomography (CT). Film strips of GAFCHROMIC XR-QA2 were exposed according to the RQT9 reference radiation, which defines an X-ray beam generated from a voltage of 120 kV. Strips were irradiated at the "Laboratório de Calibração de Dosímetros do Centro de Desenvolvimento da Tecnologia Nuclear" (LCD/CDTN) over a dose range of 5-30 mGy, corresponding to the range of values commonly used in CT scans. Digital images of the irradiated films were analyzed using the ImageJ software. The darkening responses of the film strips as a function of dose were observed, yielding a numeric darkening value for each specific dose. From these numerical values, a calibration curve was obtained that correlates the darkening of the film strip with dose values in mGy. The calibration curve equation provides a simple method for obtaining absorbed dose values from digital images of irradiated radiochromic films. With this calibration curve, radiochromic films may be applied to dosimetry in CT experiments using a 120 kV X-ray beam, in order to improve CT image acquisition processes.
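
    The calibration-curve step can be sketched as follows, fitting dose as a function of net film darkening (the darkening readings are illustrative, not the study's measurements):

```python
import numpy as np

# Delivered doses (mGy) and hypothetical net darkening values read from the
# digitized film images with an ImageJ-style analysis.
dose = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
darkening = np.array([14.5, 28.0, 40.5, 52.0, 62.5, 72.0])

# A second-order polynomial captures the mild saturation of the film response;
# fitting dose as a function of darkening gives a directly usable curve.
coef = np.polyfit(darkening, dose, 2)

def dose_from_darkening(d):
    """Absorbed dose (mGy) estimated from measured net darkening."""
    return np.polyval(coef, d)
```

    Once the coefficients are fixed, any new strip's darkening maps straight to an absorbed dose in mGy.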

  14. A fast combination calibration of foreground and background for pipelined ADCs

    NASA Astrophysics Data System (ADS)

    Kexu, Sun; Lenian, He

    2012-06-01

    This paper describes a fast digital calibration scheme for pipelined analog-to-digital converters (ADCs). The proposed method corrects the nonlinearity caused by finite opamp gain and capacitor mismatch in multiplying digital-to-analog converters (MDACs). The considered calibration technique takes advantage of both foreground and background calibration schemes. In this combination calibration algorithm, a novel parallel background calibration with signal-shifted correlation is proposed, and its calibration cycle is very short. The details of this technique are described using the example of a 14-bit 100 Msample/s pipelined ADC. The high convergence speed of this background calibration is achieved by three means. First, a modified 1.5-bit stage is proposed in order to allow the injection of a large pseudo-random dither without missing codes. Second, before the signal is correlated, it is shifted according to the input signal so that the correlation error converges quickly. Finally, the front pipeline stages are calibrated simultaneously rather than stage by stage to reduce the calibration tracking constants. Simulation results confirm that the combination calibration has a fast startup process and a short background calibration cycle of 2 × 2^21 conversions.

  15. Revised landsat-5 thematic mapper radiometric calibration

    USGS Publications Warehouse

    Chander, G.; Markham, B.L.; Barsi, J.A.

    2007-01-01

    Effective April 2, 2007, the radiometric calibration of Landsat-5 (L5) Thematic Mapper (TM) data that are processed and distributed by the U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS) will be updated. The lifetime gain model that was implemented on May 5, 2003, for the reflective bands (1-5, 7) will be replaced by a new lifetime radiometric-calibration curve that is derived from the instrument's response to pseudoinvariant desert sites and from cross calibration with the Landsat-7 (L7) Enhanced TM Plus (ETM+). Although this calibration update applies to all archived and future L5 TM data, the principal improvements in the calibration are for the data acquired during the first eight years of the mission (1984-1991), where the changes in the instrument-gain values are as much as 15%. The radiometric scaling coefficients for bands 1 and 2 for approximately the first eight years of the mission have also been changed. Users will need to apply these new coefficients to convert the calibrated data product digital numbers to radiance. The scaling coefficients for the other bands have not changed.
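
    The conversion the last sentences refer to is the standard linear rescaling from digital numbers to radiance; a sketch with illustrative coefficients (not the revised L5 TM values):

```python
import numpy as np

def dn_to_radiance(dn, gain, bias):
    """At-sensor spectral radiance from calibrated-product digital numbers:
    L = gain * DN + bias."""
    return gain * np.asarray(dn, dtype=float) + bias

# Hypothetical band coefficients, for illustration only.
radiance = dn_to_radiance([0, 128, 255], gain=0.762824, bias=-1.52)
```

    Applying updated gain/bias pairs to archived scenes is exactly how users pick up a calibration revision like the one described above.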

  16. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
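
    A toy sketch of the back-projection idea under strong simplifying assumptions: a frontal camera at known height imaging points on the plane Z = 0, with only the focal length refined by minimizing the 3D back-projection error (a real calibration also estimates pose, principal point, and distortion):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Known checkerboard points on the plane Z = 0 (world units).
obj = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], float)

# Simulate an image with a simple frontal pinhole camera at height Zc
# (hypothetical geometry chosen so the example is self-contained).
f_true, Zc, c0 = 800.0, 10.0, 512.0
uv = f_true * obj[:, :2] / Zc + c0

def backprojection_error(f):
    """Back-project pixels onto Z = 0 and measure the 3D error against the
    known points -- the quantity the BPP approach minimizes."""
    xy = (uv - c0) * Zc / f
    return np.sum((xy - obj[:, :2]) ** 2)

res = minimize_scalar(backprojection_error, bounds=(500.0, 1200.0), method="bounded")
f_hat = res.x
```

    Minimizing in 3D space rather than in image coordinates is the distinction the abstract draws between BPP and the usual forward-imaging-process refinement.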

  17. Flight calibration tests of a nose-boom-mounted fixed hemispherical flow-direction sensor

    NASA Technical Reports Server (NTRS)

    Armistead, K. H.; Webb, L. D.

    1973-01-01

    Flight calibrations of a fixed hemispherical flow angle-of-attack and angle-of-sideslip sensor were made from Mach numbers of 0.5 to 1.8. Maneuvers were performed by an F-104 airplane at selected altitudes to compare the measurement of flow angle of attack from the fixed hemispherical sensor with that from a standard angle-of-attack vane. The hemispherical flow-direction sensor measured differential pressure at two angle-of-attack ports and two angle-of-sideslip ports in diametrically opposed positions. Stagnation pressure was measured at a center port. The results of these tests showed that the calibration curves for the hemispherical flow-direction sensor were linear for angles of attack up to 13 deg. The overall uncertainty in determining angle of attack from these curves was plus or minus 0.35 deg or less. A Mach number position error calibration curve was also obtained for the hemispherical flow-direction sensor. The hemispherical flow-direction sensor exhibited a much larger position error than a standard uncompensated pitot-static probe.

  18. Calibration of areal surface topography measuring instruments

    NASA Astrophysics Data System (ADS)

    Seewig, J.; Eifler, M.

    2017-06-01

    The ISO standards related to the calibration of areal surface topography measuring instruments are the ISO 25178-6xx series, which defines the relevant metrological characteristics for the calibration of different measuring principles, and the ISO 25178-7xx series, which defines the actual calibration procedures. As the field of areal measurement is, however, not yet fully standardized, there are still open questions to be addressed which are subject to current research. Based on this, selected research results of the authors in this area are presented. This includes the design and fabrication of areal material measures. For this topic, two examples are presented: the direct laser writing of a stepless material measure for the calibration of the height axis, which is based on the Abbott curve, and the manufacturing of a Siemens star for the determination of the lateral resolution limit. Based on these results, a new definition for the resolution criterion, the small-scale fidelity, which is still under discussion, is presented as well. Additionally, a software solution for automated calibration procedures is outlined.

  19. Cross-Calibration between ASTER and MODIS Visible to Near-Infrared Bands for Improvement of ASTER Radiometric Calibration

    PubMed Central

    Tsuchida, Satoshi; Thome, Kurtis

    2017-01-01

    Radiometric cross-calibration between the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Terra-Moderate Resolution Imaging Spectroradiometer (MODIS) has been partially used to derive the ASTER radiometric calibration coefficient (RCC) curve as a function of date on visible to near-infrared bands. However, cross-calibration is not sufficiently accurate, since the effects of the differences in the sensors’ spectral and spatial responses are not fully mitigated. The present study attempts to evaluate radiometric consistency across the two sensors using an improved cross-calibration algorithm to address the spectral and spatial effects and derive cross-calibration-based RCCs, which increases the ASTER calibration accuracy. Overall, radiances measured with ASTER bands 1 and 2 are on average 3.9% and 3.6% greater than the ones measured on the same scene with their MODIS counterparts, and ASTER band 3N (nadir) is 0.6% smaller than its MODIS counterpart in current radiance/reflectance products. The percentage root mean squared errors (%RMSEs) between the radiances of the two sensors are 3.7, 4.2, and 2.3 for ASTER bands 1, 2, and 3N, respectively, which are slightly greater or smaller than the required ASTER radiometric calibration accuracy (4%). The uncertainty of the cross-calibration is analyzed by elaborating the error budget table to evaluate the International System of Units (SI)-traceability of the results. The use of the derived RCCs will allow further reduction of errors in ASTER radiometric calibration and subsequently improve interoperability across sensors for synergistic applications. PMID:28777329
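    The percentage RMSE statistic quoted in this abstract is straightforward to compute for paired radiances from two sensors; a minimal sketch follows. The numbers are invented stand-ins, not the ASTER/MODIS data.

```python
# Percentage root mean squared error (%RMSE) between matched radiance pairs,
# expressed relative to the mean of the reference sensor.  Values are
# illustrative only.

def percent_rmse(measured, reference):
    n = len(measured)
    mse = sum((m - r) ** 2 for m, r in zip(measured, reference)) / n
    mean_ref = sum(reference) / n
    return 100.0 * mse ** 0.5 / mean_ref

aster = [101.0, 98.0, 103.5, 99.5]   # hypothetical band radiances
modis = [100.0, 100.0, 100.0, 100.0]
pr = percent_rmse(aster, modis)
print(round(pr, 2))
```

    Comparing the resulting %RMSE against a requirement (here, the 4% ASTER calibration accuracy) is then a single threshold test.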

  20. Principal Curves on Riemannian Manifolds.

    PubMed

    Hauberg, Soren

    2016-09-01

    Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both be geodesic and pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves of Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent space of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.

  1. Fourier imaging of non-linear structure formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk

    We perform a Fourier space decomposition of the dynamics of non-linear cosmological structure formation in ΛCDM models. From N-body simulations involving only cold dark matter we calculate 3-dimensional non-linear density, velocity divergence and vorticity Fourier realizations, and use these to calculate the fully non-linear mode coupling integrals in the corresponding fluid equations. Our approach allows for a reconstruction of the amount of mode coupling between any two wavenumbers as a function of redshift. With our Fourier decomposition method we identify the transfer of power from larger to smaller scales, the stable clustering regime, the scale where vorticity becomes important, and the suppression of the non-linear divergence power spectrum as compared to linear theory. Our results can be used to improve and calibrate semi-analytical structure formation models.

  2. Calibration of streamflow gauging stations at the Tenderfoot Creek Experimental Forest

    Treesearch

    Scott W. Woods

    2007-01-01

    We used tracer based methods to calibrate eleven streamflow gauging stations at the Tenderfoot Creek Experimental Forest in western Montana. At six of the stations the measured flows were consistent with the existing rating curves. At Lower and Upper Stringer Creek, Upper Sun Creek and Upper Tenderfoot Creek the published flows, based on the existing rating curves,...

  3. FTIR Calibration Methods and Issues

    NASA Astrophysics Data System (ADS)

    Perron, Gaetan

    points complex calibration algorithm, detector non-linearity, pointing errors, pointing jitters, fringe count errors, spikes and ice contamination. They will be discussed and illustrated using real data. Finally, an outlook will be given for the future missions.

  4. Germanium resistance thermometer calibration at superfluid helium temperatures

    NASA Technical Reports Server (NTRS)

    Mason, F. C.

    1985-01-01

    The rapid increase in resistance of high purity semiconducting germanium with decreasing temperature in the superfluid helium range of temperatures makes this material highly adaptable as a very sensitive thermometer. Also, a germanium thermometer exhibits a highly reproducible resistance versus temperature characteristic curve upon cycling between liquid helium temperatures and room temperature. These two factors combine to make germanium thermometers ideally suited for measuring temperatures in many cryogenic studies at superfluid helium temperatures. One disadvantage, however, is the relatively high cost of calibrated germanium thermometers. In space helium cryogenic systems, many such thermometers are often required, leading to a high cost for calibrated thermometers. The construction of a thermometer calibration cryostat and probe which will allow for calibrating six germanium thermometers at one time, thus effecting substantial savings in the purchase of thermometers, is considered.

  5. Determining the Parameters of Fractional Exponential Hereditary Kernels for Nonlinear Viscoelastic Materials

    NASA Astrophysics Data System (ADS)

    Golub, V. P.; Pavlyuk, Ya. V.; Fernati, P. V.

    2013-03-01

    The parameters of fractional-exponential hereditary kernels for nonlinear viscoelastic materials are determined. Methods for determining the parameters used in the third-order theory of viscoelasticity and in nonlinear theories based on the similarity of primary creep curves and the similarity of isochronous creep curves are analyzed. The parameters of fractional-exponential hereditary kernels are determined and tested against experimental data for microplastic and for TC-8/3-250 and SVAM glass-reinforced plastics. The results (tables and plots) are analyzed.

  6. Magnetic-sensor performance evaluated from magneto-conductance curve in magnetic tunnel junctions using in-plane or perpendicularly magnetized synthetic antiferromagnetic reference layers

    NASA Astrophysics Data System (ADS)

    Nakano, T.; Oogane, M.; Furuichi, T.; Ando, Y.

    2018-04-01

    The automotive industry requires magnetic sensors exhibiting highly linear output within a dynamic range as wide as ±1 kOe. A simple model predicts that the magneto-conductance (G-H) curve in a magnetic tunnel junction (MTJ) is perfectly linear, whereas the magneto-resistance (R-H) curve inevitably contains a finite nonlinearity. We prepared two kinds of MTJs using in-plane or perpendicularly magnetized synthetic antiferromagnetic (i-SAF or p-SAF) reference layers and investigated their sensor performance. In the MTJ with the i-SAF reference layer, the G-H curve did not necessarily show smaller nonlinearities than those of the R-H curve with different dynamic ranges. This is because the magnetizations of the i-SAF reference layer start to rotate at a magnetic field even smaller than the switching field (Hsw) measured by a magnetometer, which significantly affects the tunnel magnetoresistance (TMR) effect. In the MTJ with the p-SAF reference layer, the G-H curve showed much smaller nonlinearities than those of the R-H curve, thanks to a large Hsw value of the p-SAF reference layer. We achieved a nonlinearity of 0.08% FS (full scale) in the G-H curve with a dynamic range of ±1 kOe, satisfying our target for automotive applications. This demonstrated that a reference layer exhibiting a large Hsw value is indispensable in order to achieve a highly linear G-H curve.
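    The simple model invoked in this abstract is easy to reproduce numerically: if the conductance of an MTJ varies linearly with field H, then the resistance R = 1/G is necessarily nonlinear. The sketch below quantifies nonlinearity as the maximum deviation from a least-squares line in percent of full scale (%FS), the figure of merit quoted above; all parameter values are invented, not the paper's device data.

```python
# Nonlinearity (%FS) of G-H vs. R-H curves for a toy MTJ model in which
# conductance is exactly linear in field H.  Values are illustrative.

def nonlinearity_pct_fs(xs, ys):
    """Max deviation from the least-squares line, in percent of full scale."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    resid = [y - (my + slope * (x - mx)) for x, y in zip(xs, ys)]
    full_scale = max(ys) - min(ys)
    return 100.0 * max(abs(r) for r in resid) / full_scale

H = [h / 10.0 for h in range(-10, 11)]   # field sweep over +/-1 (kOe units)
G = [1.0 + 0.3 * h for h in H]           # linear G-H curve (toy parameters)
R = [1.0 / g for g in G]                 # corresponding R-H curve

g_nl = nonlinearity_pct_fs(H, G)
r_nl = nonlinearity_pct_fs(H, R)
print(round(g_nl, 3))                    # ~0: G-H is linear by construction
print(round(r_nl, 3))                    # finite: R-H inevitably is not
```

    This mirrors the abstract's point: reading the sensor in conductance rather than resistance removes the nonlinearity that the reciprocal introduces, provided the reference layer stays fixed over the sweep.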

  7. Prospects of second generation artificial intelligence tools in calibration of chemical sensors.

    PubMed

    Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Ramam, Veluri Anantha; Rao, Gollapalli Nageswara; Rao, Vaddadi Venkata Panakala

    2005-05-01

    Multivariate data driven calibration models with neural networks (NNs) are developed for binary (Cu++ and Ca++) and quaternary (K+, Ca++, NO3- and Cl-) ion-selective electrode (ISE) data. The response profiles of ISEs with concentration are non-linear and sub-Nernstian. This task represents function approximation of multivariate, multi-response, correlated, non-linear data with unknown noise structure, i.e., multi-component calibration/prediction in chemometric parlance. Radial basis function (RBF) and Fuzzy-ARTMAP-NN models implemented in the software packages TRAJAN and Professional II are employed for the calibration. The optimum NN models reported are based on residuals in concentration space. Being a data driven information technology, NN does not require a model, a prior or posterior distribution of the data, or a noise structure. Missing information, spikes or newer trends in different concentration ranges can be modeled through novelty detection. Two simulated data sets generated from mathematical functions are modeled as a function of the number of data points and network parameters such as the number of neurons and nearest neighbors. The success of RBF and Fuzzy-ARTMAP-NNs in developing adequate calibration models for experimental data and function approximation models for more complex simulated data sets establishes AI2 (artificial intelligence, 2nd generation) as a promising technology in quantitation.

  8. The Nonlinear Jaynes-Cummings Model for the Multiphoton Transition

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-Jing; Lu, Jing-Bin; Zhang, Si-Qi; Liu, Ji-Ping; Li, Hong; Liang, Yu; Ma, Ji; Weng, Yi-Jiao; Zhang, Qi-Rui; Liu, Han; Zhang, Xiao-Ru; Wu, Xiang-Yao

    2018-01-01

    With the nonlinear Jaynes-Cummings model, we have studied the atom-field quantum entanglement of multiphoton transitions in a nonlinear medium and investigated the effect of the transition photon number N and the nonlinear coefficient χ on the degree of entanglement. We present the time evolution of the entanglement degree and find that when the transition photon number N increases, the entanglement oscillations become faster. When the nonlinear coefficient χ > 0, the oscillations speed up and the nonlinear term is a disadvantage for atom-field entanglement; when χ < 0, the oscillations slow down and the nonlinear term is an advantage for entanglement. These results may find use in quantum communication and quantum information.

  9. NICMOS Cycles 13 and 14 Calibration Plans

    NASA Astrophysics Data System (ADS)

    Arribas, Santiago; Bergeron, Eddie; de Jong, Roelof; Malhotra, Sangeeta; Mobasher, Bahram; Noll, Keith; Schultz, Al; Wiklind, Tommy; Xu, Chun

    2005-11-01

    This document summarizes the NICMOS Calibration Plans for Cycles 13 and 14. These plans complement the SMOV3b, the Cycle 10 (interim), and the Cycles 11 and 12 (regular) calibration programs executed after the installation of the NICMOS Cooling System (NCS). These previous programs have shown that the instrument is very stable, which has motivated a further reduction in the frequency of the monitoring programs for Cycle 13. In addition, for Cycle 14 some of these programs were slightly modified to account for 2-Gyro HST operations. The special calibrations in Cycle 13 were focused on a follow-up of the spectroscopic recalibration initiated in Cycle 12. This program led to the discovery of a possible count rate non-linearity, which triggered a special program for Cycle 13 and a number of subsequent tests and calibrations during Cycle 14. At the time of writing this is a very active area of research. We also briefly comment on other calibrations defined to address specific issues: the autoreset test, the SPAR sequences tests, and the low-frequency flat residual for NIC1. The calibration programs for the 2-Gyro campaigns are not included here, since they have been described elsewhere. Further details and updates on specific programs can be found via the NICMOS web site.

  10. An update on 'dose calibrator' settings for nuclides used in nuclear medicine.

    PubMed

    Bergeron, Denis E; Cessna, Jeffrey T

    2018-06-01

    Most clinical measurements of radioactivity, whether for therapeutic or imaging nuclides, rely on commercial re-entrant ionization chambers ('dose calibrators'). The National Institute of Standards and Technology (NIST) maintains a battery of representative calibrators and works to link calibration settings ('dial settings') to primary radioactivity standards. Here, we provide a summary of NIST-determined dial settings for 22 radionuclides. We collected previously published dial settings and determined some new ones using either the calibration curve method or the dialing-in approach. The dial settings with their uncertainties are collected in a comprehensive table. In general, current manufacturer-provided calibration settings give activities that agree with National Institute of Standards and Technology standards to within a few percent.

  11. Nonlinear optical imaging for sensitive detection of crystals in bulk amorphous powders.

    PubMed

    Kestur, Umesh S; Wanapun, Duangporn; Toth, Scott J; Wegiel, Lindsay A; Simpson, Garth J; Taylor, Lynne S

    2012-11-01

    The primary aim of this study was to evaluate the utility of second-order nonlinear imaging of chiral crystals (SONICC) to quantify crystallinity in drug-polymer blends, including solid dispersions. Second harmonic generation (SHG) can potentially exhibit scaling with crystallinity between linear and quadratic depending on the nature of the source, and thus, it is important to determine the response of pharmaceutical powders. Physical mixtures containing different proportions of crystalline naproxen and hydroxypropyl methylcellulose acetate succinate (HPMCAS) were prepared by blending, and a dispersion was produced by solvent evaporation. A custom-built SONICC instrument was used to characterize the SHG intensity as a function of the crystalline drug fraction in the various samples. Powder X-ray diffraction (PXRD) and Raman spectroscopy were used as complementary methods known to exhibit linear scaling. SONICC was able to detect crystalline drug even in the presence of 99.9 wt % HPMCAS in the binary mixtures. The calibration curve revealed a linear dynamic range with an R² value of 0.99 spanning the range from 0.1 to 100 wt % naproxen, with a root mean square error of prediction of 2.7%. Using the calibration curve, the errors in the validation samples were in the range of 5%-10%. Analysis of a 75 wt % HPMCAS-naproxen solid dispersion with SONICC revealed the presence of crystallites at an earlier time point than could be detected with PXRD and Raman spectroscopy. In addition, results from the crystallization kinetics experiment using SONICC were in good agreement with Raman spectroscopy and PXRD. In conclusion, SONICC has been found to be a sensitive technique for detecting low levels (0.1% or lower) of crystallinity, even in the presence of large quantities of a polymer. Copyright © 2012 Wiley-Liss, Inc.
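    The calibration-curve workflow described above (fit signal vs. crystalline fraction, report R², then score held-out validation samples by root mean square error of prediction) can be sketched generically. All numbers below are invented for illustration and are not the study's data.

```python
# Generic calibration-curve sketch: linear fit, R-squared, and RMSEP on
# validation samples.  Standards and signals are made-up values.

def fit_line(x, y):
    """Least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

def r_squared(x, y, slope, icept):
    my = sum(y) / len(y)
    ss_res = sum((b - (slope * a + icept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1.0 - ss_res / ss_tot

# calibration standards: wt% crystalline drug vs. measured signal
wt = [0.1, 1.0, 10.0, 50.0, 100.0]
sig = [0.012, 0.11, 0.98, 5.1, 9.9]
slope, icept = fit_line(wt, sig)
r2 = r_squared(wt, sig, slope, icept)
print(round(r2, 4))                       # close to 1 -> linear dynamic range

# invert the curve for validation samples and compute RMSEP
val_sig = [0.5, 2.0]
val_true = [5.0, 20.0]
pred = [(s - icept) / slope for s in val_sig]
rmsep = (sum((p - t) ** 2 for p, t in zip(pred, val_true)) / len(pred)) ** 0.5
print(round(rmsep, 2))
```

    The same skeleton works for any technique with linear scaling (PXRD, Raman); only the signal column changes.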

  12. Auto-calibration of GF-1 WFV images using flat terrain

    NASA Astrophysics Data System (ADS)

    Zhang, Guo; Xu, Kai; Huang, Wenchao

    2017-12-01

    Four wide field view (WFV) cameras with 16-m multispectral medium-resolution and a combined swath of 800 km are onboard the Gaofen-1 (GF-1) satellite, which can increase the revisit frequency to less than 4 days and enable large-scale land monitoring. The detection and elimination of WFV camera distortions is key for subsequent applications. Due to the wide swath of WFV images, geometric calibration using either conventional methods based on the ground control field (GCF) or GCF-independent methods is problematic. This is predominantly because current GCFs in China fail to cover the whole WFV image and most GCF-independent methods are used in close-range photogrammetry or computer vision. This study proposes an auto-calibration method using flat terrain to detect nonlinear distortions of GF-1 WFV images. First, a classic geometric calibration model is built for the GF-1 WFV camera, and at least two images with an overlap area that cover flat terrain are collected; then the elevation residuals between the real elevation and that calculated by forward intersection are used to solve nonlinear distortion parameters in WFV images. Experiments demonstrate that the orientation accuracy of the proposed method evaluated by GCF CPs is within 0.6 pixel, and residual errors manifest as random errors. Validation using Google Earth CPs further proves the effectiveness of auto-calibration, and the whole scene is undistorted compared to not using calibration parameters. The orientation accuracy of the proposed method and the GCF method is compared. The maximum difference is approximately 0.3 pixel, and the factors behind this discrepancy are analyzed. Generally, this method can effectively compensate for distortions in the GF-1 WFV camera.

  13. 40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...

  14. 40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...

  15. 40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...

  16. SU-C-204-02: Improved Patient-Specific Optimization of the Stopping Power Calibration for Proton Therapy Planning Using a Single Proton Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rinaldi, I; Ludwig Maximilian University, Garching, DE; Heidelberg University Hospital, Heidelberg, DE

    2015-06-15

    Purpose: We present an improved method to calculate patient-specific calibration curves to convert X-ray computed tomography (CT) Hounsfield Units (HU) to relative stopping powers (RSP) for proton therapy treatment planning. Methods: By optimizing the HU-RSP calibration curve, the difference between a proton radiographic image and a digitally reconstructed X-ray radiography (DRR) is minimized. The feasibility of this approach has previously been demonstrated. This scenario assumes that all discrepancies between proton radiography and DRR originate from uncertainties in the HU-RSP curve. In reality, external factors cause imperfections in the proton radiography, such as misalignment compared to the DRR and unfaithful representation of geometric structures (“blurring”). We analyze these effects based on synthetic datasets of anthropomorphic phantoms and suggest an extended optimization scheme which explicitly accounts for these effects. Performance of the method has been tested for various simulated irradiation parameters. The ultimate purpose of the optimization is to minimize uncertainties in the HU-RSP calibration curve. We therefore suggest and perform a thorough statistical treatment to quantify the accuracy of the optimized HU-RSP curve. Results: We demonstrate that without extending the optimization scheme, spatial blurring (equivalent to FWHM=3mm convolution) in the proton radiographs can cause up to 10% deviation between the optimized and the ground truth HU-RSP calibration curve. Instead, results obtained with our extended method reach 1% or better correspondence. We have further calculated gamma index maps for different acceptance levels. With DTA=0.5mm and RD=0.5%, a passing ratio of 100% is obtained with the extended method, while an optimization neglecting effects of spatial blurring only reaches ∼90%. Conclusion: Our contribution underlines the potential of a single proton radiography to generate a patient-specific calibration curve and to
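    The gamma-index passing ratio quoted in this abstract combines a distance-to-agreement (DTA) criterion with a dose/value difference (RD) criterion. A minimal 1D version is sketched below; this is a generic textbook form of the gamma test, not the authors' implementation, and the profiles are invented.

```python
# Minimal 1D gamma index: for each evaluated point, take the minimum combined
# distance / value-difference metric over all reference points.  Criteria
# follow the abstract (DTA = 0.5 mm, RD = 0.5%); profiles are made up.
import math

def gamma_1d(ref, evl, spacing_mm, dta_mm=0.5, rd_pct=0.5):
    """Per-point gamma for two equally sampled 1D profiles (values in %)."""
    gammas = []
    for i, ve in enumerate(evl):
        best = math.inf
        for j, vr in enumerate(ref):
            dist = abs(i - j) * spacing_mm
            diff = ve - vr
            g = math.sqrt((dist / dta_mm) ** 2 + (diff / rd_pct) ** 2)
            best = min(best, g)
        gammas.append(best)
    return gammas

ref = [100.0, 100.2, 100.1, 99.9, 100.0]   # hypothetical profile (%)
evl = [100.1, 100.2, 100.0, 99.8, 100.1]
g = gamma_1d(ref, evl, spacing_mm=0.25)
passing = 100.0 * sum(x <= 1.0 for x in g) / len(g)
print(passing)                              # passing ratio in percent
```

    A point passes when its gamma is at most 1, i.e., it agrees with some reference point to within the combined DTA/RD tolerance; the passing ratio is the fraction of points that do.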

  17. Symmetries for Galileons and DBI scalars on curved space

    DOE PAGES

    Goon, Garrett; Hinterbichler, Kurt; Trodden, Mark

    2011-07-08

    We introduce a general class of four-dimensional effective field theories which include curved space Galileons and DBI theories possessing nonlinear shift-like symmetries. These effective theories arise from purely gravitational actions and may prove relevant to the cosmology of both the early and late universe.

  18. Artificial Neural Network and application in calibration transfer of AOTF-based NIR spectrometer

    NASA Astrophysics Data System (ADS)

    Wang, Wenbo; Jiang, Chengzhi; Xu, Kexin; Wang, Bin

    2002-09-01

    Chemometrics is widely applied to develop models for quantitative prediction of unknown samples in near-infrared (NIR) spectroscopy. However, calibrated models generally fail when new instruments are introduced or replacement of instrument parts occurs. Therefore, calibration transfer becomes necessary to avoid the costly, time-consuming recalibration of models. Piecewise Direct Standardization (PDS) has been proven to be a reference method for standardization. In this paper, an Artificial Neural Network (ANN) is employed as an alternative for transferring spectra between instruments. Two acousto-optic tunable filter NIR spectrometers are employed in the experiment. Spectra of glucose solutions are collected on the spectrometers in transflectance mode. A backpropagation network with two layers is employed to approximate the function between instruments piecewise. The standardization subset is selected by the Kennard-Stone (K-S) algorithm in the space of the first two scores of a Principal Component Analysis (PCA) of the spectra matrix. In the current experiment, it is noted that obvious nonlinearity exists between instruments, and attempts are made to correct this nonlinear effect. Prediction results before and after successful calibration transfer are compared. Successful transfer can be achieved by adapting the window size and training parameters. Final results reveal that ANN is effective in correcting the nonlinear instrumental difference, and only a 1.5-2 times larger prediction error is expected after successful transfer.
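    The standardization step above can be illustrated with a drastically simplified stand-in: map each channel of the "slave" instrument onto the "master" with its own linear fit (a window size of 1). Real PDS regresses each master channel on a window of neighboring slave channels, and the ANN of this paper replaces the linear map with a nonlinear one; the synthetic spectra below are invented.

```python
# Channel-wise linear calibration transfer (window size 1), a simplified
# stand-in for PDS.  Standardization spectra are synthetic: the slave
# instrument reads 10% high with a 0.5 baseline offset.

def channelwise_transfer(master_std, slave_std):
    """Per-channel (gain, offset) fitted from paired standardization spectra."""
    coeffs = []
    for ch in range(len(master_std[0])):
        xs = [s[ch] for s in slave_std]
        ys = [m[ch] for m in master_std]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        gain = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
               sum((x - mx) ** 2 for x in xs)
        coeffs.append((gain, my - gain * mx))
    return coeffs

def apply_transfer(spectrum, coeffs):
    return [g * v + o for v, (g, o) in zip(spectrum, coeffs)]

master = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [3.0, 6.0, 9.0]]
slave = [[1.1 * v + 0.5 for v in s] for s in master]
coeffs = channelwise_transfer(master, slave)

# a new spectrum measured on the slave is mapped back to the master scale
new_slave = [1.1 * v + 0.5 for v in [1.5, 3.0, 4.5]]
corrected = apply_transfer(new_slave, coeffs)
print([round(v, 3) for v in corrected])
```

    With a purely linear instrumental difference the per-channel fit recovers the master spectrum exactly; the nonlinearity reported in the paper is precisely what this linear map cannot remove, motivating the ANN.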

  19. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Applications

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1998-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.

  20. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Application

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1997-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.

  1. Calibrating Nonlinear Soil Material Properties for Seismic Analysis Using Soil Material Properties Intended for Linear Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spears, Robert Edward; Coleman, Justin Leigh

    2015-08-01

    Seismic analysis of nuclear structures is routinely performed using guidance provided in “Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4, 1998).” This document, which is currently under revision, provides detailed guidance on linear seismic soil-structure-interaction (SSI) analysis of nuclear structures. To accommodate the linear analysis, soil material properties are typically developed as shear modulus and damping ratio versus cyclic shear strain amplitude. A new Appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time domain SSI analysis. To accommodate the nonlinear analysis, a more appropriate form of the soil material properties includes shear stress and energy absorbed per cycle versus shear strain. Ideally, nonlinear soil model material properties would be established with soil testing appropriate for the nonlinear constitutive model being used. However, much of the soil testing done for SSI analysis is performed for use with linear analysis techniques. Consequently, a method is described in this paper that uses soil test data intended for linear analysis to develop nonlinear soil material properties. To produce nonlinear material properties that are equivalent to the linear material properties, the linear and nonlinear model hysteresis loops are considered. For equivalent material properties, the shear stress at peak shear strain and energy absorbed per cycle should match when comparing the linear and nonlinear model hysteresis loops. Consequently, nonlinear material properties are selected based on these criteria.
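    The two matching targets named above can be written down directly: an equivalent-linear soil point at cyclic strain amplitude γ with secant modulus G and damping ratio ξ has peak shear stress τ = Gγ, and, using the standard relation ξ = ΔW/(4πW) with elastic strain energy W = Gγ²/2, dissipates ΔW = 2πξGγ² per cycle. The sketch below just evaluates these targets for illustrative soil values; it is a formula check, not the paper's calibration procedure.

```python
# Targets a candidate nonlinear hysteresis loop must reproduce to be
# "equivalent" to a linear soil point: peak shear stress and energy absorbed
# per cycle.  Input values (G, xi, gamma) are illustrative.
import math

def linear_loop_targets(g_sec_kpa, damping_ratio, gamma):
    """Peak stress (kPa) and energy per cycle (kPa, per unit volume)."""
    tau_peak = g_sec_kpa * gamma
    # xi = W_cycle / (4*pi*W_elastic) with W_elastic = 0.5*G*gamma**2,
    # so W_cycle = 2*pi*xi*G*gamma**2
    w_cycle = 2.0 * math.pi * damping_ratio * g_sec_kpa * gamma ** 2
    return tau_peak, w_cycle

tau, w = linear_loop_targets(50000.0, 0.10, 0.001)  # G=50 MPa, xi=10%, 0.1% strain
print(round(tau, 2), round(w, 6))
```

    A nonlinear backbone-plus-Masing model would then be tuned so that its loop at the same strain amplitude hits both `tau` and `w`.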

  2. Urban stormwater capture curve using three-parameter mixed exponential probability density function and NRCS runoff curve number method.

    PubMed

    Kim, Sangdan; Han, Suhee

    2010-01-01

    Most related literature regarding designing urban non-point-source management systems assumes that precipitation event-depths follow the 1-parameter exponential probability density function to reduce the mathematical complexity of the derivation process. However, the method of expressing the rainfall is the most important factor for analyzing stormwater; thus, a better mathematical expression, which represents the probability distribution of rainfall depths, is suggested in this study. Also, the rainfall-runoff calculation procedure required for deriving a stormwater-capture curve is modified to use the U.S. Natural Resources Conservation Service (Washington, D.C.) (NRCS) runoff curve number method, to consider the nonlinearity of the rainfall-runoff relation and, at the same time, obtain a more verifiable and representative curve for design when applying it to urban drainage areas with complicated land-use characteristics, such as occur in Korea. The result of developing the stormwater-capture curve from the rainfall data in Busan, Korea, confirms that the methodology suggested in this study provides a better solution than the pre-existing one.
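    The NRCS curve-number step referenced above has a compact closed form: potential maximum retention S = 1000/CN − 10 (inches), initial abstraction Ia = 0.2S, and direct runoff Q = (P − Ia)²/(P − Ia + S) for rainfall P > Ia, else zero. A minimal sketch (the CN value and storm depth are illustrative):

```python
# NRCS (SCS) curve number runoff, in inches.  The quadratic numerator over a
# linear denominator is what makes the rainfall-runoff relation nonlinear.

def scs_runoff(p_in, cn):
    """Direct runoff depth (inches) for event rainfall p_in and curve number cn."""
    s = 1000.0 / cn - 10.0      # potential maximum retention
    ia = 0.2 * s                # standard initial abstraction ratio
    if p_in <= ia:
        return 0.0              # all rainfall abstracted, no runoff
    return (p_in - ia) ** 2 / (p_in - ia + s)

# example: a 3-inch storm on an urbanized catchment with CN = 85
q = scs_runoff(3.0, 85)
print(round(q, 2))  # → 1.59
```

    Note the nonlinearity: doubling the rainfall more than doubles the runoff, which is exactly why the study replaces a linear rainfall-runoff assumption with this method.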

  3. A computational model-based validation of Guyton's analysis of cardiac output and venous return curves

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.; Cohen, R. J.; Mark, R. G.

    2002-01-01

    Guyton developed a popular approach for understanding the factors responsible for cardiac output (CO) regulation in which 1) the heart-lung unit and systemic circulation are independently characterized via CO and venous return (VR) curves, and 2) average CO and right atrial pressure (RAP) of the intact circulation are predicted by graphically intersecting the curves. However, this approach is virtually impossible to verify experimentally. We theoretically evaluated the approach with respect to a nonlinear, computational model of the pulsatile heart and circulation. We developed two sets of open circulation models to generate CO and VR curves, differing by the manner in which average RAP was varied. One set applied constant RAPs, while the other set applied pulsatile RAPs. Accurate prediction of intact, average CO and RAP was achieved only by intersecting the CO and VR curves generated with pulsatile RAPs because of the pulsatility and nonlinearity (e.g., systemic venous collapse) of the intact model. The CO and VR curves generated with pulsatile RAPs were also practically independent. This theoretical study therefore supports the validity of Guyton's graphical analysis.
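    Guyton's graphical construction described above amounts to finding where a cardiac output curve and a venous return curve, both functions of right atrial pressure (RAP), cross. The toy sketch below intersects two such curves by bisection; the curve shapes and parameter values are illustrative inventions, not physiological data or the paper's model.

```python
# Toy Guyton analysis: intersect a saturating cardiac output (CO) curve with
# a linearly falling venous return (VR) curve to find the operating point.
# All shapes and numbers are illustrative.
import math

def co_curve(rap):
    """CO (L/min) rises with RAP and saturates (Frank-Starling-like)."""
    return 10.0 * (1.0 - math.exp(-0.5 * (rap + 4.0)))

def vr_curve(rap):
    """VR (L/min) falls linearly from a mean systemic pressure of 7 mmHg."""
    return max(0.0, 2.5 * (7.0 - rap))

def intersect(lo, hi, tol=1e-6):
    """Bisection on the CO-VR difference (assumes one sign change in [lo, hi])."""
    f = lambda r: co_curve(r) - vr_curve(r)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

rap = intersect(0.0, 7.0)
print(round(rap, 2), round(co_curve(rap), 2))  # operating point (RAP, CO)
```

    The paper's point is that for a pulsatile, nonlinear circulation this intersection only predicts the intact operating point correctly when the open-circulation curves are generated with pulsatile RAPs.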

  4. Reflection and Transmission of a Focused Finite Amplitude Sound Beam Incident on a Curved Interface

    NASA Astrophysics Data System (ADS)

    Makin, Inder Raj Singh

    Reflection and transmission of a finite amplitude focused sound beam at a weakly curved interface separating two fluid-like media are investigated. The KZK parabolic wave equation, which accounts for thermoviscous absorption, diffraction, and nonlinearity, is used to describe the high intensity focused beam. The first part of the work deals with the quasilinear analysis of a weakly nonlinear beam after its reflection and transmission from a curved interface. A Green's function approach is used to define the field integrals describing the primary and the nonlinearly generated second harmonic beam. Closed-form solutions are obtained for the primary and second harmonic beams when a Gaussian amplitude distribution at the source is assumed. The second part of the research uses a numerical frequency domain solution of the KZK equation for a fully nonlinear analysis of the reflected and transmitted fields. Both piston and Gaussian sources are considered. Harmonic components generated in the medium due to propagation of the focused beam are evaluated, and formation of shocks in the reflected and transmitted beams is investigated. A finite amplitude focused beam is observed to be modified due to reflection and transmission from a curved interface in a manner distinct from that in the case of a small signal beam. Propagation curves, beam patterns, phase plots and time waveforms for various parameters defining the source and media pairs are presented, highlighting the effect of the interface curvature on the reflected and transmitted beams. Relevance of the current work to biomedical applications of ultrasound is discussed.

  5. p-Curve and p-Hacking in Observational Research.

    PubMed

    Bruns, Stephan B; Ioannidis, John P A

    2016-01-01

    The p-curve, the distribution of statistically significant p-values of published studies, has been used to make inferences about the proportion of true effects and about the presence of p-hacking in the published literature. We analyze the p-curve for observational research in the presence of p-hacking. We show by means of simulations that, even with minimal omitted-variable bias (e.g., unaccounted confounding), p-curves based on true effects and p-curves based on null effects with p-hacking cannot be reliably distinguished. We also demonstrate this problem using, as a practical example, the evaluation of the effect of malaria prevalence on economic growth between 1960 and 1996. These findings call into question recent studies that use the p-curve to infer that most published research findings are based on true effects in the medical literature and in a wide range of disciplines. p-values in observational research may need to be empirically calibrated to be interpretable with respect to the commonly used significance threshold of 0.05. Violations of randomization in experimental studies may also result in situations where the use of p-curves is similarly unreliable.

  6. p-Curve and p-Hacking in Observational Research

    PubMed Central

    Bruns, Stephan B.; Ioannidis, John P. A.

    2016-01-01

    The p-curve, the distribution of statistically significant p-values of published studies, has been used to make inferences about the proportion of true effects and about the presence of p-hacking in the published literature. We analyze the p-curve for observational research in the presence of p-hacking. We show by means of simulations that, even with minimal omitted-variable bias (e.g., unaccounted confounding), p-curves based on true effects and p-curves based on null effects with p-hacking cannot be reliably distinguished. We also demonstrate this problem using, as a practical example, the evaluation of the effect of malaria prevalence on economic growth between 1960 and 1996. These findings call into question recent studies that use the p-curve to infer that most published research findings are based on true effects in the medical literature and in a wide range of disciplines. p-values in observational research may need to be empirically calibrated to be interpretable with respect to the commonly used significance threshold of 0.05. Violations of randomization in experimental studies may also result in situations where the use of p-curves is similarly unreliable. PMID:26886098

  7. 40 CFR 89.321 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3... factor for that range. If the deviation exceeds these limits, the best-fit non-linear equation which... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be...
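
    The decision rule in this regulation, accept a least-squares straight line when every non-zero calibration point deviates from it by no more than 2 percent of the point value, otherwise fall back to a best-fit nonlinear equation, can be sketched as follows. The quadratic fallback degree and the data are illustrative assumptions, not part of the rule text:

```python
import numpy as np

def choose_calibration(conc, resp, tol=0.02):
    """Return ('linear', coeffs) if a straight line fits every non-zero point
    within tol (fraction of the point value), else ('nonlinear', coeffs) for a
    best-fit quadratic."""
    conc, resp = np.asarray(conc, float), np.asarray(resp, float)
    lin = np.polyfit(conc, resp, 1)           # least-squares straight line
    pred = np.polyval(lin, conc)
    nz = conc != 0
    if np.all(np.abs(pred[nz] - resp[nz]) <= tol * np.abs(resp[nz])):
        return "linear", lin
    return "nonlinear", np.polyfit(conc, resp, 2)

conc = [0, 25, 50, 75, 100]                   # calibration-gas concentrations (ppm)
resp = [0.0, 25.2, 50.1, 74.9, 99.8]          # nearly linear analyzer response
kind, coeffs = choose_calibration(conc, resp)
print(kind)                                    # this analyzer passes as linear
```

    A saturating response (e.g., readings of 0, 26, 50, 71, 88 for the same concentrations) fails the 2 percent test and triggers the nonlinear branch.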

  8. 40 CFR 89.321 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3... factor for that range. If the deviation exceeds these limits, the best-fit non-linear equation which... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be...

  9. 40 CFR 89.321 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3... factor for that range. If the deviation exceeds these limits, the best-fit non-linear equation which... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be...

  10. 40 CFR 89.321 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3... factor for that range. If the deviation exceeds these limits, the best-fit non-linear equation which... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be...

  11. Accuracy and efficiency of published film dosimetry techniques using a flat-bed scanner and EBT3 film.

    PubMed

    Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T

    2018-03-01

    Gafchromic EBT3 film is widely used for patient-specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between the dose and the red-channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated to identify whether these methods produce better results than the commonly used non-linear netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy, and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: the Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty, and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results when films were irradiated to standard treatment doses (< 400 cGy); however, none could be recommended as an efficient or accurate replacement for a common film analysis technique which uses transmission scanning, red-colour-channel analysis, netOD, and a non-linear calibration curve for measuring doses up to 4000 cGy with EBT3 film.
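
    The conventional workflow named here computes netOD = log10(PV_unexposed / PV_exposed) from red-channel pixel values and maps it to dose with the widely used nonlinear form dose = a·netOD + b·netOD^n. A minimal sketch follows; the pixel values are illustrative, and the exponent is fixed (n = 2.5) so the fit is linear in a and b, whereas in practice n is usually fitted too:

```python
import numpy as np

def net_od(pv_unexposed, pv_exposed):
    """Red-channel net optical density from scanner pixel values."""
    return np.log10(np.asarray(pv_unexposed, float) / np.asarray(pv_exposed, float))

# Illustrative calibration scan values (not from the cited study).
doses = np.array([50, 100, 200, 400, 800], float)      # cGy
pv0 = 42000.0                                          # unexposed film
pv = np.array([38000, 35500, 31800, 27000, 22000], float)
od = net_od(pv0, pv)

# dose = a*netOD + b*netOD**n with n fixed: an ordinary linear least-squares fit.
n = 2.5
A = np.column_stack([od, od ** n])
(a, b), *_ = np.linalg.lstsq(A, doses, rcond=None)

dose_of = lambda x: a * x + b * x ** n
print(dose_of(net_od(pv0, 31800.0)))   # recovers roughly the 200 cGy point
```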

  12. Optimization, evaluation and calibration of a cross-strip DOI detector

    NASA Astrophysics Data System (ADS)

    Schmidt, F. P.; Kolb, A.; Pichler, B. J.

    2018-02-01

    This study presents the evaluation of a SiPM detector with depth-of-interaction (DOI) capability via a dual-sided readout, suitable for high-resolution positron emission tomography and magnetic resonance (PET/MR) imaging. Two different 12 × 12 pixelated LSO scintillator arrays with a crystal pitch of 1.60 mm are examined. One array is 20 mm long, with crystals separated by the specular reflector Vikuiti enhanced specular reflector (ESR); the other is 18 mm long, with crystals separated by the diffuse reflector Lumirror E60 (E60). An improvement in energy resolution from 22.6% to 15.5% for the scintillator array with the E60 reflector is achieved by taking a nonlinear light-collection correction into account. The results are FWHM energy resolutions of 14.0% and 15.5%, average FWHM DOI resolutions of 2.96 mm and 1.83 mm, and FWHM coincidence resolving times of 1.09 ns and 1.48 ns for the scintillator arrays with the ESR and E60 reflectors, respectively. The measured DOI signal ratios need to be assigned to an interaction depth inside the scintillator crystal. A linear and a nonlinear method, using the intrinsic scintillator radiation from lutetium, are implemented for an easy-to-apply calibration and are compared to the conventional method, which exploits a setup with an externally collimated radiation beam. The deviation between the DOI functions of the linear or nonlinear method and the conventional method is determined. The resulting average of differences in DOI positions is 0.67 mm and 0.45 mm for the nonlinear calibration method for the scintillator arrays with the ESR and E60 reflectors, respectively. The linear calibration method results in 0.51 mm and 0.32 mm for the arrays with the ESR and E60 reflectors, respectively, and, due to its simplicity, is also applicable in assembled detector systems.

  13. Optimization, evaluation and calibration of a cross-strip DOI detector.

    PubMed

    Schmidt, F P; Kolb, A; Pichler, B J

    2018-02-20

    This study presents the evaluation of a SiPM detector with depth-of-interaction (DOI) capability via a dual-sided readout, suitable for high-resolution positron emission tomography and magnetic resonance (PET/MR) imaging. Two different 12 × 12 pixelated LSO scintillator arrays with a crystal pitch of 1.60 mm are examined. One array is 20 mm long, with crystals separated by the specular reflector Vikuiti enhanced specular reflector (ESR); the other is 18 mm long, with crystals separated by the diffuse reflector Lumirror E60 (E60). An improvement in energy resolution from 22.6% to 15.5% for the scintillator array with the E60 reflector is achieved by taking a nonlinear light-collection correction into account. The results are FWHM energy resolutions of 14.0% and 15.5%, average FWHM DOI resolutions of 2.96 mm and 1.83 mm, and FWHM coincidence resolving times of 1.09 ns and 1.48 ns for the scintillator arrays with the ESR and E60 reflectors, respectively. The measured DOI signal ratios need to be assigned to an interaction depth inside the scintillator crystal. A linear and a nonlinear method, using the intrinsic scintillator radiation from lutetium, are implemented for an easy-to-apply calibration and are compared to the conventional method, which exploits a setup with an externally collimated radiation beam. The deviation between the DOI functions of the linear or nonlinear method and the conventional method is determined. The resulting average of differences in DOI positions is 0.67 mm and 0.45 mm for the nonlinear calibration method for the scintillator arrays with the ESR and E60 reflectors, respectively. The linear calibration method results in 0.51 mm and 0.32 mm for the arrays with the ESR and E60 reflectors, respectively, and, due to its simplicity, is also applicable in assembled detector systems.

  14. Nonlinear gamma correction via normed bicoherence minimization in optical fringe projection metrology

    NASA Astrophysics Data System (ADS)

    Kamagara, Abel; Wang, Xiangzhao; Li, Sikun

    2018-03-01

    We propose a method to compensate for the projector intensity nonlinearity induced by the gamma effect in three-dimensional (3-D) fringe projection metrology by extending high-order spectral analysis and bispectral norm minimization to digital sinusoidal fringe pattern analysis. The bispectrum estimate allows extraction of vital signal information features such as spectral component correlation relationships in fringe pattern images. Our approach exploits the fact that gamma introduces high-order harmonic correlations in the affected fringe pattern image. Estimation and compensation of the projector nonlinearity are realized by detecting and minimizing the normed bispectral coherence of these correlations. The proposed technique does not require calibration information or technical knowledge or specification of the fringe projection unit. This is promising for developing a modular and calibration-invariant model for nonlinear gamma compensation in digital fringe pattern projection profilometry. Experimental and numerical simulation results demonstrate this method to be efficient and effective in improving the phase measuring accuracy of phase-shifting fringe pattern projection profilometry.
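
    The underlying effect is easy to demonstrate: applying a projector gamma to an ideal sinusoidal fringe injects higher-order harmonics, which is exactly what spectral methods detect and minimize. A minimal sketch (not the cited bispectrum algorithm; the gamma value is illustrative):

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
ideal = 0.5 + 0.5 * np.cos(x)          # ideal normalized fringe, one period
gamma = 2.2
distorted = ideal ** gamma             # projector gamma nonlinearity

def harmonic_amps(signal, k_max=3):
    """Amplitudes of harmonics 1..k_max from the FFT of one period."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    return spec[1:k_max + 1]

print("ideal    :", harmonic_amps(ideal))       # only the fundamental
print("distorted:", harmonic_amps(distorted))   # energy leaks into harmonics 2, 3
# Pre-compensating with the inverse gamma restores a pure sinusoid:
print("corrected:", harmonic_amps(distorted ** (1 / gamma)))
```

    The spurious harmonics are what show up as periodic phase error in phase-shifting profilometry.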

  15. Nonlinear Circuit Concepts -- An Elementary Experiment.

    ERIC Educational Resources Information Center

    Matolyak, J.; And Others

    1983-01-01

    Describes equipment and procedures for an experiment using diodes to introduce non-linear electronic devices in a freshman physics laboratory. The experiment involves calculation and plotting of the characteristic-curve and load-line to predict the operating point and compare prediction to experimentally determined values. Background information…

  16. Radiance calibration of the High Altitude Observatory white-light coronagraph on Skylab

    NASA Technical Reports Server (NTRS)

    Poland, A. I.; Macqueen, R. M.; Munro, R. H.; Gosling, J. T.

    1977-01-01

    The processing of over 35,000 photographs of the solar corona obtained by the white-light coronagraph on Skylab is described. Calibration of the vast amount of data was complicated by temporal effects of radiation fog and latent image loss. These effects were compensated for by imaging a calibration step wedge on each data frame. Absolute calibration of the wedge was accomplished through comparison with a set of previously calibrated glass opal filters. Analysis employed average characteristic curves derived from measurements of step wedges from many frames within a given camera half-load. The net absolute accuracy of a given radiance measurement is estimated to be 20%.

  17. Aero-Thermal Calibration of the NASA Glenn Icing Research Tunnel (2004 and 2005 Tests)

    NASA Technical Reports Server (NTRS)

    Arrington, E. Allen; Pastor, Christine M.; Gonsalez, Jose C.; Curry, Monroe R., III

    2010-01-01

    A full aero-thermal calibration of the NASA Glenn Icing Research Tunnel was completed in 2004 following the replacement of the inlet guide vanes upstream of the tunnel drive system and improvement to the facility total temperature instrumentation. This calibration test provided data used to fully document the aero-thermal flow quality in the IRT test section and to construct calibration curves for the operation of the IRT. The 2004 test was also the first to use the 2-D RTD array, an improved total temperature calibration measurement platform.

  18. Carbon-14 wiggle-match dating of peat deposits: advantages and limitations

    NASA Astrophysics Data System (ADS)

    Blaauw, Maarten; van Geel, Bas; Mauquoy, Dmitri; van der Plicht, Johannes

    2004-02-01

    Carbon-14 wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a series of closely spaced peat 14C dates with the 14C calibration curve. The method of WMD is discussed, and its advantages and limitations are compared with calibration of individual dates. A numerical approach to WMD is introduced that makes it possible to assess the precision of WMD chronologies. During several intervals of the Holocene, the 14C calibration curve shows less pronounced fluctuations. We assess whether wiggle-matching is also a feasible strategy for these parts of the 14C calibration curve. High-precision chronologies, such as obtainable with WMD, are needed for studies of rapid climate changes and their possible causes during the Holocene.
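
    The numerical core of wiggle-matching is a one-parameter search: a stack of 14C dates with known calendar-age spacing (e.g., from a constant peat accumulation rate) is slid along the calibration curve to the offset minimizing chi-squared. A hedged sketch with a synthetic wiggly "calibration curve" (real work uses IntCal data, and the constants below are invented):

```python
import numpy as np

def cal_curve(cal_yr):
    """Synthetic 14C age (yr BP) vs calendar age (cal yr BP), with wiggles."""
    cal_yr = np.asarray(cal_yr, float)
    return 0.95 * cal_yr + 40.0 * np.sin(cal_yr / 60.0)

def wiggle_match(c14_ages, c14_err, spacings, grid):
    """Calendar age of the top sample minimizing chi-squared over the grid."""
    chi2 = [np.sum(((cal_curve(t0 + spacings) - c14_ages) / c14_err) ** 2)
            for t0 in grid]
    return grid[int(np.argmin(chi2))]

# Simulate five samples 25 calendar years apart, top sample at 3000 cal yr BP.
spacings = np.arange(5) * 25.0
rng = np.random.default_rng(1)
obs = cal_curve(3000.0 + spacings) + rng.normal(0.0, 15.0, 5)

grid = np.arange(2500.0, 3500.0, 1.0)
print(wiggle_match(obs, 15.0, spacings, grid))   # close to 3000
```

    Mapping chi-squared over the whole grid, rather than reporting only the minimum, is what lets the precision of the match be assessed.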

  19. Melting Curve of Molecular Crystal GeI4

    NASA Astrophysics Data System (ADS)

    Fuchizaki, Kazuhiro; Hamaya, Nozomu

    2014-07-01

    In situ synchrotron x-ray diffraction measurements were carried out to determine the melting curve of the molecular crystal GeI4. We found that the melting line rises rapidly with pressure up to about 3 GPa, at which point it abruptly breaks. Such a strongly nonlinear shape of the melting curve can be approximately captured by the Kumari-Dass-Kechin equation. The parameters involved in the equation could be determined from the equation of state for the crystalline phase, which was also established in the present study. The melting curve predicted from the equation approaches the actual melting curve as the degree of approximation involved in obtaining the equation is improved. However, the treatment is justifiable only if the slope of the melting curve is everywhere continuous. We believe that this is not the case for GeI4's melting line at the breakpoint, as inferred from the nature of the breakdown of the Kraut-Kennedy and Magalinskii-Zubov relationships. The breakpoint may then be a triple point among the crystalline phase and two possible liquid phases.

  20. Optical isolation with nonlinear topological photonics

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Wang, You; Leykam, Daniel; Chong, Y. D.

    2017-09-01

    It is shown that the concept of topological phase transitions can be used to design nonlinear photonic structures exhibiting power thresholds and discontinuities in their transmittance. This provides a novel route to devising nonlinear optical isolators. We study three representative designs: (i) a waveguide array implementing a nonlinear 1D Su-Schrieffer-Heeger model, (ii) a waveguide array implementing a nonlinear 2D Haldane model, and (iii) a 2D lattice of coupled-ring waveguides. In the first two cases, we find a correspondence between the topological transition of the underlying linear lattice and the power threshold of the transmittance, and show that the transmission behavior is attributable to the emergence of a self-induced topological soliton. In the third case, we show that the topological transition produces a discontinuity in the transmittance curve, which can be exploited to achieve sharp jumps in the power-dependent isolation ratio.

  1. Development and verification of an innovative photomultiplier calibration system with a 10-fold increase in photometer resolution

    NASA Astrophysics Data System (ADS)

    Jiang, Shyh-Biau; Yeh, Tse-Liang; Chen, Li-Wu; Liu, Jann-Yenq; Yu, Ming-Hsuan; Huang, Yu-Qin; Chiang, Chen-Kiang; Chou, Chung-Jen

    2018-05-01

    In this study, we construct a photomultiplier calibration system. This calibration system can help scientists measure and establish the characteristic curve of photon count versus light intensity. The system uses an innovative 10-fold optical attenuator to enable an optical power meter to calibrate photomultiplier tubes whose resolution is much greater than that of the optical power meter. A simulation is first conducted to validate the feasibility of the system, and then the system construction, including the optical design, circuit design, and software algorithm, is realized. The simulation generally agrees with measurement data from the constructed system, which are further used to establish the characteristic curve of photon count versus light intensity.

  2. Common Envelope Light Curves. I. Grid-code Module Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galaviz, Pablo; Marco, Orsola De; Staff, Jan E.

    The common envelope (CE) binary interaction occurs when a star transfers mass onto a companion that cannot fully accrete it. The interaction can lead to a merger of the two objects or to a close binary. The CE interaction is the gateway of all evolved compact binaries, all stellar mergers, and likely many of the stellar transients witnessed to date. CE simulations are needed to understand this interaction and to interpret stars and binaries thought to be the byproduct of this stage. At this time, simulations are unable to reproduce the few observational data available, and several ideas have been put forward to address their shortcomings. The need for more definitive simulation validation is pressing and is already being fulfilled by observations from time-domain surveys. In this article, we present an initial method and its implementation for post-processing grid-based CE simulations to produce the light curve so as to compare simulations with upcoming observations. Here we implemented a zeroth-order method to calculate the light emitted from CE hydrodynamic simulations carried out with the 3D hydrodynamic code Enzo used in unigrid mode. The code implements an approach for the computation of luminosity in both optically thick and optically thin regimes and is tested using the first 135 days of the CE simulation of Passy et al., where a 0.8 M⊙ red giant branch star interacts with a 0.6 M⊙ companion. This code is used to highlight two large obstacles that need to be overcome before realistic light curves can be calculated. We explain the nature of these problems and the attempted solutions and approximations in full detail to enable the next step to be identified and implemented. We also discuss our simulation in relation to recent data of transients identified as CE interactions.

  3. Color calibration and color-managed medical displays: does the calibration method matter?

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Rehm, Kelly; Silverstein, Louis D.; Dallas, William J.; Fan, Jiahua; Krupinski, Elizabeth A.

    2010-02-01

    Our laboratory has investigated the efficacy of a suite of color calibration and monitor profiling packages which employ a variety of color measurement sensors. Each of the methods computes gamma correction tables for the red, green and blue color channels of a monitor that attempt to: a) match a desired luminance range and tone reproduction curve; and b) maintain a target neutral point across the range of grey values. All of the methods examined here produce International Color Consortium (ICC) profiles that describe the color rendering capabilities of the monitor after calibration. Color profiles incorporate a transfer matrix that establishes the relationship between RGB driving levels and the International Commission on Illumination (CIE) XYZ (tristimulus) values of the resulting on-screen color; the matrix is developed by displaying color patches of known RGB values on the monitor and measuring the tristimulus values with a sensor. The number and chromatic distribution of color patches vary across methods and are usually not under user control. In this work we examine the effect of employing differing calibration and profiling methods on the rendition of color images. A series of color patches encoded in sRGB color space were presented on the monitor using color-management software that utilized the ICC profile produced by each method. The patches were displayed on the calibrated monitor and measured with a Minolta CS200 colorimeter. Differences in intended and achieved luminance and chromaticity were computed using the CIE DE2000 color-difference metric, in which a value of ΔE = 1 is generally considered to be approximately one just noticeable difference (JND) in color. We observed between one and 17 JNDs for individual colors, depending on calibration method and target.
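
    The transfer-matrix step described here is a 3 × 3 linear map applied after channel linearization. A sketch using the standard sRGB (D65) constants, with the much simpler CIE76 ΔE as a stand-in for the CIEDE2000 metric the study actually used:

```python
import numpy as np

# Standard sRGB (D65) RGB->XYZ transfer matrix, the kind an ICC profile encodes.
M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(rgb):
    """Linearize sRGB driving levels (0..1), then apply the transfer matrix."""
    rgb = np.asarray(rgb, float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    return M_SRGB_TO_XYZ @ lin

def xyz_to_lab(xyz, white=(0.95047, 1.0, 1.08883)):
    """CIE XYZ to CIELAB relative to the D65 white point."""
    t = np.asarray(xyz, float) / np.asarray(white, float)
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def delta_e76(rgb1, rgb2):
    """Euclidean Lab distance (CIE76), a simpler stand-in for CIEDE2000."""
    lab1 = xyz_to_lab(srgb_to_xyz(rgb1))
    lab2 = xyz_to_lab(srgb_to_xyz(rgb2))
    return float(np.linalg.norm(lab1 - lab2))

print(delta_e76([0.5, 0.5, 0.5], [0.52, 0.5, 0.5]))   # a small shift, ~2 dE
```

    CIEDE2000 adds hue- and chroma-dependent weighting on top of this Lab distance, which is why it tracks JNDs more closely for saturated colors.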

  4. 40 CFR 86.1323-84 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent of the value at... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (c) The initial and periodic interference, system check...

  5. 40 CFR 86.1323-84 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent of the value at... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (c) The initial and periodic interference, system check...

  6. Numerical study of turbulent secondary flows in curved ducts

    NASA Technical Reports Server (NTRS)

    Hur, N.; Thangam, S.; Speziale, C. G.

    1990-01-01

    The pressure-driven, fully developed turbulent flow of an incompressible viscous fluid in curved ducts of square cross-section is studied numerically by making use of a finite volume method. A nonlinear K-l model is used to represent the turbulence. Results for both straight and curved ducts are presented. For the case of fully developed turbulent flow in a straight duct, the secondary flow is characterized by an eight-vortex structure, for which the computed flowfield is shown to be in good agreement with available experimental data. The introduction of moderate curvature is shown to cause a substantial increase in the strength of the secondary flow and to change the secondary flow pattern to either a double-vortex or a four-vortex configuration.

  7. Estimation of Pulse Transit Time as a Function of Blood Pressure Using a Nonlinear Arterial Tube-Load Model.

    PubMed

    Gao, Mingwu; Cheng, Hao-Min; Sung, Shih-Hsien; Chen, Chen-Huan; Olivier, Nicholas Bari; Mukkamala, Ramakrishna

    2017-07-01

    Pulse transit time (PTT) varies with blood pressure (BP) throughout the cardiac cycle, yet, because of wave reflection, only one PTT value at the diastolic BP level is conventionally estimated from proximal and distal BP waveforms. The objective was to establish a technique to estimate multiple PTT values at different BP levels in the cardiac cycle. A technique was developed for estimating PTT as a function of BP (to indicate the PTT value for every BP level) from proximal and distal BP waveforms. First, a mathematical transformation from one waveform to the other is defined in terms of the parameters of a nonlinear arterial tube-load model accounting for BP-dependent arterial compliance and wave reflection. Then, the parameters are estimated by optimally fitting the waveforms to each other via the model-based transformation. Finally, PTT as a function of BP is specified by the parameters. The technique was assessed in animals and patients in several ways, including the ability of its estimated PTT-BP function to serve as a subject-specific curve for calibrating PTT to BP. The calibration curve derived by the technique during a baseline period yielded bias and precision errors in mean BP of 5.1 ± 0.9 and 6.6 ± 1.0 mmHg, respectively, during hemodynamic interventions that varied mean BP widely. The new technique may permit, for the first time, estimation of PTT values throughout the cardiac cycle from proximal and distal waveforms, and it could potentially be applied to improve arterial stiffness monitoring and help realize cuff-less BP monitoring.
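
    The calibration idea, fit a subject-specific PTT-BP curve during a baseline period, then invert it to read BP from PTT alone, can be sketched with a simple exponential model. The model form and all numbers below are illustrative; they are not the cited tube-load technique:

```python
import numpy as np

# Illustrative simultaneous baseline samples: BP levels within the cardiac
# cycle and the PTT measured at each level.
bp = np.array([70, 80, 90, 100, 110, 120], float)       # mmHg
ptt = np.array([165, 155, 147, 138, 131, 124], float)   # ms

# Fit PTT = a * exp(-b * BP): linear in log space, so an ordinary
# least-squares line through (BP, ln PTT) suffices.
slope, intercept = np.polyfit(bp, np.log(ptt), 1)
a, b = np.exp(intercept), -slope

def bp_from_ptt(ptt_ms):
    """Invert the calibration curve to read BP from a PTT measurement."""
    return (np.log(a) - np.log(ptt_ms)) / b

print(bp_from_ptt(140.0))   # lands between the 90 and 100 mmHg samples
```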

  8. Identification of nonlinear modes using phase-locked-loop experimental continuation and normal form

    NASA Astrophysics Data System (ADS)

    Denis, V.; Jossic, M.; Giraud-Audine, C.; Chomette, B.; Renault, A.; Thomas, O.

    2018-06-01

    In this article, we address the model identification of nonlinear vibratory systems, with a specific focus on systems modeled with distributed nonlinearities, such as geometrically nonlinear mechanical structures. The proposed strategy theoretically relies on the concept of nonlinear modes of the underlying conservative unforced system and the use of normal forms. Within this framework, it is shown that without internal resonance, a valid reduced order model for a nonlinear mode is a single Duffing oscillator. We then propose an efficient experimental strategy to measure the backbone curve of a particular nonlinear mode and we use it to identify the free parameters of the reduced order model. The experimental part relies on a Phase-Locked Loop (PLL) and enables a robust and automatic measurement of backbone curves as well as forced responses. It is theoretically and experimentally shown that the PLL is able to stabilize the unstable part of Duffing-like frequency responses, thus enabling their robust experimental measurement. Finally, the whole procedure is tested on three experimental systems: a circular plate, a Chinese gong, and a piezoelectric cantilever beam. These tests validate the procedure by comparison with available theoretical models as well as with other experimental identification methods.
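
    For the single-Duffing reduced-order model invoked above, x¨ + 2μx˙ + ω0²x + γx³ = 0, first-order averaging gives the classic backbone relation ω(a) ≈ ω0 (1 + 3γa² / (8ω0²)) linking response amplitude to free-vibration frequency. A sketch with illustrative (invented) parameter values:

```python
import numpy as np

def backbone(a, w0=2 * np.pi * 100.0, gamma=5.0e10):
    """Backbone curve: nonlinear-mode frequency (rad/s) at amplitude a (m).
    Hardening for gamma > 0, softening for gamma < 0."""
    return w0 * (1.0 + 3.0 * gamma * a ** 2 / (8.0 * w0 ** 2))

amps = np.linspace(0.0, 2e-3, 5)            # 0 to 2 mm response amplitude
freqs_hz = backbone(amps) / (2 * np.pi)
print(freqs_hz)                              # 100 Hz at rest, rising to ~119 Hz
```

    Measuring (a, ω) pairs along this curve, as the PLL strategy does, is what pins down ω0 and γ of the reduced model.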

  9. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    PubMed Central

    Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua

    2015-01-01

    In structure light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by the curve fitting method. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method is able to avoid most of the disadvantages in traditional methods and achieves a higher accuracy. This proposed method is also practically applicable to evaluate the geometric optical performance of other optical projection system. PMID:26492247

  10. State estimation with incomplete nonlinear constraint

    NASA Astrophysics Data System (ADS)

    Huang, Yuan; Wang, Xueying; An, Wei

    2017-10-01

    A state estimation problem with a new type of constraint, termed an incomplete nonlinear constraint, is considered. Targets often move along curved roads; if the width of the road is neglected, the road can be treated as a constraint, and since the positions of sensors (e.g., radar) are known in advance, this information can be used to enhance the performance of the tracking filter. The problem of how to incorporate this prior knowledge is considered. In this paper, a second-order state constraint is considered. An ellipse-fitting algorithm is adopted to incorporate the prior knowledge by estimating the radius of the trajectory. The fitting problem is transformed into a nonlinear estimation problem. The estimated ellipse function is used to approximate the nonlinear constraint. Then, typical nonlinear constraint methods proposed in recent works can be used to constrain the target state. Monte Carlo simulation results are presented to illustrate the effectiveness of the proposed method in state estimation with an incomplete constraint.

  11. Time-Dependent Behavior of Diabase and a Nonlinear Creep Model

    NASA Astrophysics Data System (ADS)

    Yang, Wendong; Zhang, Qiangyong; Li, Shucai; Wang, Shugang

    2014-07-01

    Triaxial creep tests were performed on diabase specimens from the dam foundation of the Dagangshan hydropower station, and the typical characteristics of creep curves were analyzed. Based on the test results under different stress levels, a new nonlinear visco-elasto-plastic creep model with creep threshold and long-term strength was proposed by connecting an instantaneous elastic Hooke body, a visco-elasto-plastic Schiffman body, and a nonlinear visco-plastic body in series mode. By introducing the nonlinear visco-plastic component, this creep model can describe the typical creep behavior, which includes the primary creep stage, the secondary creep stage, and the tertiary creep stage. Three-dimensional creep equations under constant stress conditions were deduced. The yield approach index (YAI) was used as the criterion for the piecewise creep function to resolve the difficulty in determining the creep threshold value and the long-term strength. The expression of the visco-plastic component was derived in detail and the three-dimensional central difference form was given. An example was used to verify the credibility of the model. The creep parameters were identified, and the calculated curves were in good agreement with the experimental curves, indicating that the model is capable of replicating the physical processes.

  12. A calibration hierarchy for risk models was defined: from utopia to empirical data.

    PubMed

    Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W

    2016-06-01

    Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equal the predicted risk for every covariate pattern, which implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, our results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, as it stimulates the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
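    The weaker calibration levels defined here can be checked numerically. The sketch below, following the common logistic-recalibration convention (not code from the paper), estimates mean calibration (average predicted risk vs. event rate) and weak calibration (intercept and slope of a logistic regression of the outcome on the logit of the predicted risk) on simulated, perfectly calibrated predictions:

```python
import numpy as np

def calibration_summary(p, y, iters=50):
    """Mean calibration: average predicted risk vs. observed event rate.
    Weak calibration: intercept/slope of a logistic regression of the
    outcome on logit(p), fitted here by Newton-Raphson (IRLS)."""
    lp = np.log(p / (1 - p))                  # logit of the predictions
    X = np.column_stack([np.ones_like(lp), lp])
    beta = np.zeros(2)
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        W = mu * (1 - mu)
        beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - mu))
    return {"mean_pred": p.mean(), "event_rate": y.mean(),
            "intercept": beta[0], "slope": beta[1]}

# Perfectly calibrated simulated predictions: slope ~ 1, intercept ~ 0
rng = np.random.default_rng(1)
lp_true = rng.normal(-1.0, 1.0, 20000)
p = 1.0 / (1.0 + np.exp(-lp_true))
y = (rng.random(p.size) < p).astype(float)
s = calibration_summary(p, y)
```

A miscalibrated model (e.g., overfitted, with predictions too extreme) would instead show a slope well below 1.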

  13. Piecewise compensation for the nonlinear error of fiber-optic gyroscope scale factor

    NASA Astrophysics Data System (ADS)

    Zhang, Yonggang; Wu, Xunfeng; Yuan, Shun; Wu, Lei

    2013-08-01

    Fiber-Optic Gyroscope (FOG) scale factor nonlinearity introduces errors into a Strapdown Inertial Navigation System (SINS). To reduce the nonlinear error of the FOG scale factor in SINS, this paper proposes a compensation method based on piecewise curve fitting of the FOG output. First, the sources of FOG scale factor error are introduced and the definition of nonlinearity degree is provided. The output range of the FOG is then divided into several small pieces, and curve fitting is performed over each piece to obtain its scale factor parameters; different parameters are used in different pieces to improve FOG output precision. These parameters are identified using a three-axis turntable, reducing the nonlinear error of the FOG scale factor. Finally, a three-axis swing experiment on the SINS verifies that the proposed method reduces the attitude output errors of the SINS by compensating the nonlinear error of the FOG scale factor, thereby improving navigation precision. The experiments also demonstrate that the compensation scheme is easy to implement and effectively compensates the nonlinear error of the FOG scale factor with only slightly increased computational complexity. This method can be used in FOG-based inertial systems to improve precision.
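    A minimal illustration of the piecewise scale-factor idea, with simulated data and a hypothetical nonlinearity (not the paper's turntable procedure): divide the output range into pieces, fit a linear scale factor per piece, and apply the matching piece's parameters during compensation.

```python
import numpy as np

def fit_piecewise_scale(omega_true, output, edges):
    """Fit a separate linear (gain, bias) pair mapping sensor output
    back to true rate on each piece of the output range."""
    params = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (output >= lo) & (output < hi)
        A = np.column_stack([output[m], np.ones(int(m.sum()))])
        params.append(np.linalg.lstsq(A, omega_true[m], rcond=None)[0])
    return params

def compensate(output, edges, params):
    """Apply the (gain, bias) of whichever piece each sample falls in."""
    seg = np.clip(np.searchsorted(edges, output, side="right") - 1,
                  0, len(params) - 1)
    gb = np.array([params[s] for s in seg])
    return gb[:, 0] * output + gb[:, 1]

# Simulated FOG with a mildly nonlinear scale factor (hypothetical)
omega = np.linspace(-100.0, 100.0, 2001)          # true rate, deg/s
out = 1.0005 * omega + 2e-5 * omega**2            # sensor output
edges = np.linspace(out.min(), out.max() + 1e-9, 9)   # 8 pieces
params = fit_piecewise_scale(omega, out, edges)
err = np.abs(compensate(out, edges, params) - omega).max()
```

With eight pieces the residual nonlinearity drops well below the error of a single global linear fit.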

  14. Larger Optics and Improved Calibration Techniques for Small Satellite Observations with the ERAU OSCOM System

    NASA Astrophysics Data System (ADS)

    Bilardi, S.; Barjatya, A.; Gasdia, F.

    OSCOM, Optical tracking and Spectral characterization of CubeSats for Operational Missions, is a system capable of providing time-resolved satellite photometry using commercial-off-the-shelf (COTS) hardware and custom tracking and analysis software. This system has acquired photometry of objects as small as CubeSats using a Celestron 11” RASA and an inexpensive CMOS machine vision camera. For satellites with known shapes, these light curves can be used to verify a satellite’s attitude and the state of its deployed solar panels or antennae. While the OSCOM system can successfully track satellites and produce light curves, there is ongoing improvement towards increasing its automation while supporting additional mounts and telescopes. A newly acquired Celestron 14” Edge HD can be used with a Starizona Hyperstar to increase the SNR for small objects as well as extend beyond the limiting magnitude of the 11” RASA. OSCOM currently corrects instrumental brightness measurements for satellite range and observatory site average atmospheric extinction, but calibrated absolute brightness is required to determine information about satellites other than their spin rate, such as surface albedo. A calibration method that automatically detects and identifies background stars can use their catalog magnitudes to calibrate the brightness of the satellite in the image. We present a photometric light curve from both the 14” Edge HD and 11” RASA optical systems as well as plans for a calibration method that will perform background star photometry to efficiently determine calibrated satellite brightness in each frame.
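    The background-star calibration described at the end can be sketched as standard differential photometry: each detected field star with a catalog magnitude yields a photometric zero point, and the averaged zero point converts the satellite's instrumental flux into a calibrated magnitude. The fluxes and magnitudes below are hypothetical:

```python
import numpy as np

def satellite_magnitude(flux_sat, flux_stars, mag_stars):
    """Each star gives a zero point ZP = m_cat + 2.5*log10(F_star);
    the satellite magnitude is m = mean(ZP) - 2.5*log10(F_sat)."""
    zp = mag_stars + 2.5 * np.log10(flux_stars)
    return np.mean(zp) - 2.5 * np.log10(flux_sat)

# Hypothetical frame: three identified field stars and a satellite
m = satellite_magnitude(1.0e4,
                        np.array([2.0e4, 5.0e4, 1.0e4]),
                        np.array([9.0, 8.0, 9.75]))
```

Per-frame zero points of this kind also absorb frame-to-frame atmospheric extinction changes, which is the advantage over a site-average extinction correction.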

  15. Modeling of Triangular Lattice Space Structures with Curved Battens

    NASA Technical Reports Server (NTRS)

    Chen, Tzikang; Wang, John T.

    2005-01-01

    Techniques for simulating an assembly process of lattice structures with curved battens were developed. The shape of the curved battens, the tension in the diagonals, and the compression in the battens were predicted for the assembled model. To be able to perform the assembly simulation, a cable-pulley element was implemented, and geometrically nonlinear finite element analyses were performed. Three types of finite element models were created from assembled lattice structures for studying the effects of design and modeling variations on the load carrying capability. Discrepancies in the predictions from these models were discussed. The effects of diagonal constraint failure were also studied.

  16. A simple topography-driven, calibration-free runoff generation model

    NASA Astrophysics Data System (ADS)

    Gao, H.; Birkel, C.; Hrachowitz, M.; Tetzlaff, D.; Soulsby, C.; Savenije, H. H. G.

    2017-12-01

    Determining the amount of runoff generated from rainfall occupies a central place in rainfall-runoff modelling. Moreover, reading landscapes and developing calibration-free runoff generation models that adequately reflect land surface heterogeneity remain the focus of much hydrological research. In this study, we created a new runoff estimation method, the HAND-based Storage Capacity curve (HSC), which uses a topographic index (HAND, Height Above the Nearest Drainage) to identify hydrological similarity and partition the saturated areas of catchments. We then coupled the HSC model with the Mass Curve Technique (MCT) to estimate root zone storage capacity (SuMax), obtaining the calibration-free runoff generation model HSC-MCT. Both models (HSC and HSC-MCT) allow us to estimate runoff generation and simultaneously visualize the spatial dynamics of the saturated area. We tested the two models in the data-rich Bruntland Burn (BB) experimental catchment in Scotland, which has an unusual time series of field-mapped saturation area extent. The models were subsequently tested in 323 MOPEX (Model Parameter Estimation Experiment) catchments in the United States, with HBV and TOPMODEL used as benchmarks. We found that the HSC performed better than TOPMODEL, which is based on the topographic wetness index (TWI), in reproducing the spatio-temporal pattern of the observed saturated areas in the BB catchment. The HSC also outperformed HBV and TOPMODEL in the MOPEX catchments for both calibration and validation. Despite having no calibrated parameters, the HSC-MCT model performed comparably well with the calibrated HBV and TOPMODEL, highlighting both the robustness of the HSC model in describing the spatial distribution of root zone storage capacity and the efficiency of the MCT method in estimating SuMax. Moreover, the HSC-MCT model facilitated effective visualization of the saturated area, which has the potential to be used for broader
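    The HSC belongs to the family of storage-capacity-curve runoff models. As a heavily simplified, generic sketch of that family (a beta-shaped capacity curve rather than the HAND-derived curve of the paper), runoff can be generated as rain falling on the saturated fraction of the catchment:

```python
def runoff_from_capacity_curve(precip, storage, su_max, beta=2.0):
    """Generic storage-capacity-curve runoff generation: the saturated
    area fraction grows as catchment storage fills, and rain on the
    saturated area becomes direct runoff. All numbers hypothetical."""
    sat_frac = 1.0 - (1.0 - storage / su_max) ** beta
    runoff = precip * sat_frac
    storage = min(su_max, storage + precip - runoff)
    return runoff, storage, sat_frac

# Completely dry catchment: no saturated area, no runoff
r0, s0, f0 = runoff_from_capacity_curve(10.0, 0.0, 100.0)
# Fully saturated catchment: all rain becomes runoff
r1, s1, f1 = runoff_from_capacity_curve(10.0, 100.0, 100.0)
```

In the HSC itself, the shape of the saturated-fraction curve is derived from the HAND distribution instead of being a calibrated beta function, which is what removes the free parameters.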

  17. A Comparison of Radiometric Calibration Techniques for Lunar Impact Flashes

    NASA Technical Reports Server (NTRS)

    Suggs, R.

    2016-01-01

    Video observations of lunar impact flashes have been made by a number of researchers since the late 1990's and the problem of determination of the impact energies has been approached in different ways (Bellot Rubio, et al., 2000 [1], Bouley, et al., 2012.[2], Suggs, et al. 2014 [3], Rembold and Ryan 2015 [4], Ortiz, et al. 2015 [5]). The wide spectral response of the unfiltered video cameras in use for all published measurements necessitates color correction for the standard filter magnitudes available for the comparison stars. An estimate of the color of the impact flash is also needed to correct it to the chosen passband. Magnitudes corrected to standard filters are then used to determine the luminous energy in the filter passband according to the stellar atmosphere calibrations of Bessell et al., 1998 [6]. Figure 1 illustrates the problem. The camera pass band is the wide black curve and the blue, green, red, and magenta curves show the band passes of the Johnson-Cousins B, V, R, and I filters for which we have calibration star magnitudes. The blackbody curve of an impact flash of temperature 2800K (Nemtchinov, et al., 1998 [7]) is the dashed line. This paper compares the various photometric calibration techniques and how they address the color corrections necessary for the calculation of luminous energy (radiometry) of impact flashes. This issue has significant implications for determination of luminous efficiency, predictions of impact crater sizes for observed flashes, and the flux of meteoroids in the 10s of grams to kilogram size range.
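    The radiometric step described here, converting a calibrated magnitude into luminous energy in a passband for a roughly 2800 K blackbody, hinges on how much of the blackbody spectrum the band captures. A rough numerical sketch, assuming an ideal top-hat band with hypothetical R-like limits (not the Bessell calibration itself):

```python
import numpy as np

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(lam, T):
    """Blackbody spectral radiance B_lambda in W / (m^2 sr m)."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

def band_fraction(T, lo, hi, n=4000):
    """Fraction of total blackbody radiance emitted between lo and hi
    (meters): trapezoidal integration over an ideal top-hat band,
    normalized by the Stefan-Boltzmann radiance sigma*T^4/pi."""
    lam = np.linspace(lo, hi, n)
    B = planck(lam, T)
    integral = np.sum((B[1:] + B[:-1]) / 2.0) * (lam[1] - lam[0])
    return integral / ((5.670e-8 / np.pi) * T**4)

# ~2800 K flash: fraction of its radiance in a 0.59-0.81 um band
frac = band_fraction(2800.0, 0.59e-6, 0.81e-6)
```

Because only on the order of ten percent of a 2800 K blackbody's output falls in such a band, the choice of passband and color correction directly scales the inferred luminous energy.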

  18. A method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
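    The scheme this abstract describes, Taylor linearization plus iterated linear least squares, is the Gauss-Newton method, and can be sketched for a one-exponential decay model. The data, initial estimates, and iteration count below are illustrative:

```python
import numpy as np

def fit_exponential(t, y, a, b, iters=30):
    """Gauss-Newton fit of y = a*exp(-b*t): linearize the model in a
    first-order Taylor series about the current (a, b), solve the
    linear least-squares problem for the correction, and iterate."""
    for _ in range(iters):
        f = a * np.exp(-b * t)
        J = np.column_stack([np.exp(-b * t), -t * f])  # df/da, df/db
        delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)
        a, b = a + delta[0], b + delta[1]
    return a, b

# Noiseless synthetic decay data; nominal initial estimates (2.0, 0.5)
t = np.linspace(0.0, 5.0, 50)
y = 3.0 * np.exp(-0.7 * t)
a, b = fit_exponential(t, y, a=2.0, b=0.5)
```

In practice the loop would stop on a convergence criterion (e.g., a small correction norm) rather than a fixed iteration count, matching the abstract's "predetermined criterion."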

  19. An extended CFD model to predict the pumping curve in low pressure plasma etch chamber

    NASA Astrophysics Data System (ADS)

    Zhou, Ning; Wu, Yuanhao; Han, Wenbin; Pan, Shaowu

    2014-12-01

    A continuum-based CFD model is extended with a slip-wall approximation and a rarefaction correction to the viscosity, in an attempt to predict the pumping flow characteristics in low-pressure plasma etch chambers. The flow regime inside the chamber ranges from slip flow (Kn ≈ 0.01) up to free molecular flow (Kn = 10). The momentum accommodation coefficient and the parameters of the Kn-modified viscosity are first calibrated against one set of measured pumping curves. The validity of the calibrated CFD model is then demonstrated by comparison with additional pumping curves measured in chambers of different geometric configurations. A more detailed comparison against a DSMC model for flow conductance over slits with contraction and expansion sections is also discussed.

  20. Nonlinear vibration of an axially loaded beam carrying rigid bodies

    NASA Astrophysics Data System (ADS)

    Barry, O.

    2016-12-01

    This paper investigates the nonlinear vibration due to mid-plane stretching of an axially loaded simply supported beam carrying multiple rigid masses. Explicit expressions and closed form solutions of both linear and nonlinear analysis of the present vibration problem are presented for the first time. The validity of the analytical model is demonstrated using finite element analysis and via comparison with the result in the literature. Parametric studies are conducted to examine how the nonlinear frequency and frequency response curve are affected by tension, rotational inertia, and number of intermediate rigid bodies.

  1. Generation of High Frequency Response in a Dynamically Loaded, Nonlinear Soil Column

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spears, Robert Edward; Coleman, Justin Leigh

    2015-08-01

    Detailed guidance on linear seismic analysis of soil columns is provided in “Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4, 1998),” which is currently under revision. A new Appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time domain analysis which includes evaluation of soil columns. When performing linear analysis, a given soil column is typically evaluated with a linear, viscous damped constitutive model. When submitted to a sine wave motion, this constitutive model produces a smooth hysteresis loop. For nonlinear analysis, the soil column can be modelled with an appropriate nonlinear hysteretic soil model. For the model in this paper, the stiffness and energy absorption result from a defined post yielding shear stress versus shear strain curve. This curve is input with tabular data points. When submitted to a sine wave motion, this constitutive model produces a hysteresis loop that looks similar in shape to the input tabular data points on the sides with discontinuous, pointed ends. This paper compares linear and nonlinear soil column results. The results show that the nonlinear analysis produces additional high frequency response. The paper provides additional study to establish what portion of the high frequency response is due to numerical noise associated with the tabular input curve and what portion is accurately caused by the pointed ends of the hysteresis loop. Finally, the paper shows how the results are changed when a significant structural mass is added to the top of the soil column.

  2. First Absolutely Calibrated Localized Measurements of Ion Velocity in the MST in Locked and Rotating Plasmas

    NASA Astrophysics Data System (ADS)

    Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.

    2015-11-01

    An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS at f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central-chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.
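    The velocity extraction underlying such measurements can be sketched with an intensity-weighted centroid standing in for the full curve fit; the synthetic line below uses the nominal C+5 rest wavelength near 343.37 nm and a hypothetical shift:

```python
import numpy as np

C_KMS = 299792.458   # speed of light, km/s

def doppler_velocity(wavelength, intensity, lambda_rest):
    """Estimate the line's Doppler shift from its intensity-weighted
    centroid, then convert to line-of-sight velocity."""
    centroid = np.sum(wavelength * intensity) / np.sum(intensity)
    return C_KMS * (centroid - lambda_rest) / lambda_rest

# Synthetic Gaussian line shifted +0.02 nm from a 343.37 nm rest line
lam = np.linspace(343.2, 343.6, 400)          # nm
line = np.exp(-0.5 * ((lam - 343.39) / 0.03) ** 2)
v = doppler_velocity(lam, line, 343.37)       # ~ +17.5 km/s
```

An uncalibrated wavelength axis biases the centroid directly, which is why the absolute calibration source matters: a 0.01 nm axis error at this wavelength corresponds to roughly 9 km/s.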

  3. Excitation power quantities in phase resonance testing of nonlinear systems with phase-locked-loop excitation

    NASA Astrophysics Data System (ADS)

    Peter, Simon; Leine, Remco I.

    2017-11-01

    Phase resonance testing is one method for the experimental extraction of nonlinear normal modes. This paper proposes a novel method for nonlinear phase resonance testing. Firstly, the issue of appropriate excitation is approached on the basis of excitation power considerations. To that end, power quantities known from nonlinear systems theory in electrical engineering are transferred to nonlinear structural dynamics applications. A new power-based nonlinear mode indicator function is derived, which is generally applicable, reliable, and easy to implement in experiments. Secondly, the tuning of the excitation phase is automated by the use of a Phase-Locked-Loop controller. This method provides a very user-friendly and fast way of obtaining the backbone curve. Furthermore, the method allows one to exploit specific advantages of phase control, such as robustness for lightly damped systems and the stabilization of unstable branches of the frequency response. The reduced tuning time for the excitation makes the commonly used free-decay measurements for the extraction of backbone curves unnecessary. Instead, steady-state measurements are obtained for every point of the curve. In conjunction with the new mode indicator function, the correlation of every measured point with the associated nonlinear normal mode of the underlying conservative system can be evaluated. Moreover, it is shown that the analysis of the excitation power helps to locate sources of inaccuracy in the force appropriation process. The method is illustrated by a numerical example and its functionality in experiments is demonstrated on a benchmark beam structure.

  4. Cloned plasmid DNA fragments as calibrators for controlling GMOs: different real-time duplex quantitative PCR methods.

    PubMed

    Taverniers, Isabel; Van Bockstaele, Erik; De Loose, Marc

    2004-03-01

    Analytical real-time PCR technology is a powerful tool for implementation of the GMO labeling regulations enforced in the EU. The quality of analytical measurement data obtained by quantitative real-time PCR depends on the correct use of calibrator and reference materials (RMs). For GMO methods of analysis, the choice of appropriate RMs is currently under debate. So far, genomic DNA solutions from certified reference materials (CRMs) are most often used as calibrators for GMO quantification by means of real-time PCR. However, due to some intrinsic features of these CRMs, errors may be expected in the estimations of DNA sequence quantities. In this paper, two new real-time PCR methods are presented for Roundup Ready soybean, in which two types of plasmid DNA fragments are used as calibrators. Single-target plasmids (STPs) diluted in a background of genomic DNA were used in the first method. Multiple-target plasmids (MTPs) containing both sequences in one molecule were used as calibrators for the second method. Both methods simultaneously detect a promoter 35S sequence as GMO-specific target and a lectin gene sequence as endogenous reference target in a duplex PCR. For the estimation of relative GMO percentages both "delta C(T)" and "standard curve" approaches are tested. Delta C(T) methods are based on direct comparison of measured C(T) values of both the GMO-specific target and the endogenous target. Standard curve methods measure absolute amounts of target copies or haploid genome equivalents. A duplex delta C(T) method with STP calibrators performed at least as well as a similar method with genomic DNA calibrators from commercial CRMs. Besides this, high quality results were obtained with a standard curve method using MTP calibrators. This paper demonstrates that plasmid DNA molecules containing either one or multiple target sequences form perfect alternative calibrators for GMO quantification and are especially suitable for duplex PCR reactions.
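    The delta-C(T) comparison the paper tests can be sketched in a few lines, assuming ideal PCR efficiency (amplification factor 2 per cycle) and a calibrator of known GMO percentage; the cycle-threshold values below are hypothetical:

```python
def gmo_percentage(ct_gmo, ct_ref, ct_gmo_cal, ct_ref_cal, cal_percent):
    """Delta-CT estimate of GMO content: compare the CT difference
    (GMO-specific target minus endogenous target) in the sample with
    that of a calibrator of known GMO percentage, assuming an
    amplification factor of exactly 2 per cycle."""
    ddct = (ct_gmo - ct_ref) - (ct_gmo_cal - ct_ref_cal)
    return cal_percent * 2.0 ** (-ddct)

# Sample whose GMO target lags one extra cycle vs. a 1% calibrator:
# one extra cycle ~ half the target copies, so ~0.5% GMO
pct = gmo_percentage(ct_gmo=31.0, ct_ref=24.0,
                     ct_gmo_cal=30.0, ct_ref_cal=24.0,
                     cal_percent=1.0)
```

The standard-curve approach instead regresses CT against log(copy number) for each target separately, trading the efficiency-of-2 assumption for a fitted slope.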

  5. Investigation of Heat Transfer in Straight and Curved Rectangular Ducts.

    DTIC Science & Technology

    1980-09-01

    theoretical explanation of the heat transfer effects required that all non-linear terms be retained in the flow equations. R. Kahawita and R...112, February 1370. 2'. Kahawita, R. and Meroney, R., "The Influence of Heating on the Stability of Laminar Boundary Layers Along Concave Curved

  6. Experiments with conjugate gradient algorithms for homotopy curve tracking

    NASA Technical Reports Server (NTRS)

    Irani, Kashmira M.; Ribbens, Calvin J.; Watson, Layne T.; Kamat, Manohar P.; Walker, Homer F.

    1991-01-01

    There are algorithms for finding zeros or fixed points of nonlinear systems of equations that are globally convergent for almost all starting points, i.e., with probability one. The essence of all such algorithms is the construction of an appropriate homotopy map and then tracking some smooth curve in the zero set of this homotopy map. HOMPACK is a mathematical software package implementing globally convergent homotopy algorithms with three different techniques for tracking a homotopy zero curve, and has separate routines for dense and sparse Jacobian matrices. The HOMPACK algorithms for sparse Jacobian matrices use a preconditioned conjugate gradient algorithm for the computation of the kernel of the homotopy Jacobian matrix, a required linear algebra step for homotopy curve tracking. Here, variants of the conjugate gradient algorithm are implemented in the context of homotopy curve tracking and compared with Craig's preconditioned conjugate gradient method used in HOMPACK. The test problems used include actual large scale, sparse structural mechanics problems.
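    The homotopy idea, deforming an easy problem into the target problem while following the zero curve, can be illustrated with a toy scalar version of the standard convex homotopy (HOMPACK's dense and sparse curve trackers are far more sophisticated, and its sparse path uses the preconditioned conjugate gradient step discussed above):

```python
import numpy as np

def homotopy_zero(f, df, x0, steps=100, newton_iters=5):
    """Track the zero curve of H(x, t) = (1-t)*(x - x0) + t*f(x)
    from t = 0 (trivial solution x0) to t = 1 (a zero of f), using a
    naive step-in-t predictor with Newton correction at each t."""
    x = x0
    for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
        for _ in range(newton_iters):      # correct back onto the curve
            Hval = (1 - t) * (x - x0) + t * f(x)
            dH = (1 - t) + t * df(x)
            x -= Hval / dH
    return x

# Find the real cube root of 2 starting far from the solution
root = homotopy_zero(lambda x: x**3 - 2.0,
                     lambda x: 3.0 * x**2, x0=5.0)
```

The warm start at each t is what gives the method its global convergence flavor: every Newton solve begins near the curve traced from the previous step.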

  7. Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit

    NASA Astrophysics Data System (ADS)

    Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie

    2015-09-01

    Previous sensitivity analyses are not sufficiently accurate and offer limited reference value, because their mathematical models are relatively simple, changes in the load and in the initial displacement of the piston are ignored, and experimental verification is not conducted. In view of these deficiencies, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, the nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston, and friction nonlinearity. The transfer function block diagram is built for the closed-loop position control of the hydraulic drive unit, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structural parameters of the hydraulic drive unit, the working parameters, the fluid transmission characteristics, and the measured friction-velocity curves, simulations of the hydraulic drive unit are completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm, and 10 mm. The simulation results indicate that the nonlinear mathematical model is adequate, as shown by comparing the experimental and simulated step-response curves under different constant loads. The sensitivity-function time histories of seventeen parameters are then obtained from the state-vector time histories of the step response. The maximum displacement variation percentage and the sum of the absolute displacement variations over the sampling time are both taken as sensitivity indexes. These index values are calculated and shown in histograms under different working conditions, and their trends are analyzed. Then the sensitivity

  8. Out-of-Focus Projector Calibration Method with Distortion Correction on the Projection Plane in the Structured Light Three-Dimensional Measurement System.

    PubMed

    Zhang, Jiarui; Zhang, Yingjie; Chen, Bo

    2017-12-20

    The three-dimensional measurement system with a binary defocusing technique is widely applied in diverse fields, and its measurement accuracy is mainly determined by the accuracy of the out-of-focus projector calibration. In this paper, a high-precision out-of-focus projector calibration method based on distortion correction on the projection plane and a nonlinear optimization algorithm is proposed. To this end, the paper experimentally establishes the principle that the projector has noticeable distortion outside its focus plane. Based on this principle, the proposed method uses a high-order radial and tangential lens distortion representation on the projection plane to correct the calibration residuals caused by projection distortion. The final parameters of the out-of-focus projector were obtained using a nonlinear optimization algorithm with good initial values, which were provided by coarsely calibrating the parameters of the out-of-focus projector on the focal and projection planes. Finally, the experimental results demonstrated that the proposed method can accurately calibrate an out-of-focus projector, regardless of the amount of defocusing.
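    The radial-plus-tangential distortion representation mentioned here is, in the usual normalized-coordinate convention, a polynomial in the squared radius plus cross terms; a sketch with hypothetical coefficients (not the paper's calibrated values):

```python
def distort(x, y, k, p):
    """Apply radial (k1, k2, k3) plus tangential (p1, p2) lens
    distortion in the standard normalized-coordinate convention."""
    r2 = x * x + y * y
    radial = 1.0 + k[0] * r2 + k[1] * r2**2 + k[2] * r2**3
    xd = x * radial + 2.0 * p[0] * x * y + p[1] * (r2 + 2.0 * x * x)
    yd = y * radial + p[0] * (r2 + 2.0 * y * y) + 2.0 * p[1] * x * y
    return xd, yd

# Hypothetical coefficients: mild barrel distortion, small tangential part
xd, yd = distort(0.5, -0.25, k=(-0.1, 0.01, 0.0), p=(1e-3, -5e-4))
```

Calibration inverts this: the coefficients are the free parameters of the nonlinear optimization, chosen so the model maps ideal projector pixels onto their observed positions.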

  9. Optical Mass Displacement Tracking: A simplified field calibration method for the electro-mechanical seismometer.

    NASA Astrophysics Data System (ADS)

    Burk, D. R.; Mackey, K. G.; Hartse, H. E.

    2016-12-01

    We have developed a simplified field calibration method for use in seismic networks that still employ the classical electro-mechanical seismometer. Smaller networks may not always have the financial capability to purchase and operate modern, state-of-the-art equipment, and therefore generally operate a modern, low-cost digitizer paired with an existing electro-mechanical seismometer. These systems are typically poorly calibrated: station calibration is difficult to estimate because coil loading, digitizer input impedance, and amplifier gain differences vary by station and digitizer model. It is therefore necessary to calibrate the station channel as a complete system, taking into account every component from instrument to amplifier to digitizer. Routine calibrations at smaller networks are not always consistent, because existing calibration techniques require either specialized equipment or significant technical expertise. To improve station data quality at the small network, we developed a calibration method that utilizes open-source software and a commonly available laser position sensor. Using a signal generator and a small excitation coil, we force the mass of the instrument to oscillate at various frequencies across its operating range. We then compare the channel voltage output to the laser-measured mass displacement to determine the instrument voltage sensitivity at each frequency point. Using the standard equations of forced motion, a representation of the calibration curve as a function of voltage per unit of ground velocity is calculated. A computer algorithm optimizes the curve and then translates the instrument response into a Seismic Analysis Code (SAC) poles & zeros format. Results have been demonstrated to fall within a few percent of a standard laboratory calibration.
This method is an effective and affordable option for networks that employ electro-mechanical seismometers, and it is currently being deployed in

  10. Nonlinear dynamic analysis of cantilevered piezoelectric energy harvesters under simultaneous parametric and external excitations

    NASA Astrophysics Data System (ADS)

    Fang, Fei; Xia, Guanghui; Wang, Jianguo

    2018-02-01

    The nonlinear dynamics of cantilevered piezoelectric beams is investigated under simultaneous parametric and external excitations. The beam is composed of a substrate and two piezoelectric layers and assumed as an Euler-Bernoulli model with inextensible deformation. A nonlinear distributed parameter model of cantilevered piezoelectric energy harvesters is proposed using the generalized Hamilton's principle. The proposed model includes geometric and inertia nonlinearity, but neglects the material nonlinearity. Using the Galerkin decomposition method and harmonic balance method, analytical expressions of the frequency-response curves are presented when the first bending mode of the beam plays a dominant role. Using these expressions, we investigate the effects of the damping, load resistance, electromechanical coupling, and excitation amplitude on the frequency-response curves. We also study the difference between the nonlinear lumped-parameter and distributed-parameter model for predicting the performance of the energy harvesting system. Only in the case of parametric excitation, we demonstrate that the energy harvesting system has an initiation excitation threshold below which no energy can be harvested. We also illustrate that the damping and load resistance affect the initiation excitation threshold.

  12. Temperature dependence of nonlinear optical properties in Li doped nano-carbon bowl material

    NASA Astrophysics Data System (ADS)

    Li, Wei-qi; Zhou, Xin; Chang, Ying; Tian, Wei Quan; Sun, Xiu-Dong

    2013-04-01

    A mechanism for the temperature dependence of the nonlinear optical (NLO) properties of an NLO material, a Li-doped curved nano-carbon bowl, is proposed. Four stable conformations of Li-doped corannulene were located and their electronic properties investigated in detail. The NLO response of these conformations varies with the relative position of the dopant on the curved carbon surface of corannulene. Conversion among the conformations, which can be controlled by temperature, changes the NLO response of the bulk material. This temperature-driven conformational change of the alkali-metal-doped carbon nanomaterial thus rationalizes the observed variation of its NLO properties.
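
    The temperature-controlled conversion among conformations amounts to Boltzmann re-weighting of conformer populations. A minimal sketch of that averaging (the conformer energies and property values below are hypothetical, not the paper's computed values):

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_average(energies_ev, properties, temperature_k):
    """Thermal (Boltzmann) average of a per-conformer property.

    energies_ev: conformer energies relative to the lowest conformer (eV)
    properties:  property value for each conformer (e.g. a
                 hyperpolarizability, arbitrary units)
    """
    e0 = min(energies_ev)
    weights = [math.exp(-(e - e0) / (K_B * temperature_k)) for e in energies_ev]
    z = sum(weights)  # partition function over the conformer set
    return sum(w * p for w, p in zip(weights, properties)) / z
```

    At low temperature the average collapses onto the lowest-energy conformer; as temperature rises, higher-energy conformers mix in and shift the bulk response.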

  13. Linking Parameters Estimated with the Generalized Graded Unfolding Model: A Comparison of the Accuracy of Characteristic Curve Methods

    ERIC Educational Resources Information Center

    Anderson Koenig, Judith; Roberts, James S.

    2007-01-01

    Methods for linking item response theory (IRT) parameters are developed for attitude questionnaire responses calibrated with the generalized graded unfolding model (GGUM). One class of IRT linking methods derives the linking coefficients by comparing characteristic curves, and three of these methods---test characteristic curve (TCC), item…

  14. Fitting Nonlinear Curves by use of Optimization Techniques

    NASA Technical Reports Server (NTRS)

    Hill, Scott A.

    2005-01-01

    MULTIVAR is a FORTRAN 77 computer program that fits one of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. Using the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model that minimize the sum of squares of the residuals. The first optimization engine implements a routine, developed in 1982, that uses the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first bounding the solution. The second optimization engine is a faster and more robust commercially available code, the Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
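
    MULTIVAR itself is FORTRAN 77 and not reproduced here; a minimal Levenberg-Marquardt fit of one hypothetical nonlinear model, y = a·exp(b·x), illustrates the kind of residual-sum-of-squares minimization the third optimization engine performs:

```python
import math

def levenberg_marquardt_exp(xs, ys, p0=1.0, p1=0.0, lam=1e-3, iters=200):
    """Fit y = a*exp(b*x) to data by Levenberg-Marquardt (two parameters).

    Pure-Python sketch of the damped least-squares iteration: solve
    (J^T J + lam*diag(J^T J)) * delta = J^T r, accept the step only if it
    lowers the sum of squared residuals, and adapt the damping factor.
    """
    a, b = p0, p1

    def residuals(a, b):
        return [y - a * math.exp(b * x) for x, y in zip(xs, ys)]

    def sse(a, b):
        try:
            return sum(r * r for r in residuals(a, b))
        except OverflowError:  # wildly overshot trial step
            return float("inf")

    for _ in range(iters):
        # Jacobian of the model wrt (a, b): d/da = exp(b*x), d/db = a*x*exp(b*x)
        J = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]
        r = residuals(a, b)
        g00 = sum(j0 * j0 for j0, _ in J)
        g01 = sum(j0 * j1 for j0, j1 in J)
        g11 = sum(j1 * j1 for _, j1 in J)
        b0 = sum(j0 * ri for (j0, _), ri in zip(J, r))
        b1 = sum(j1 * ri for (_, j1), ri in zip(J, r))
        # Marquardt damping scales the diagonal of the normal equations
        A00, A01, A11 = g00 * (1.0 + lam), g01, g11 * (1.0 + lam)
        det = A00 * A11 - A01 * A01
        if det == 0.0:
            break
        da = (b0 * A11 - b1 * A01) / det
        db = (A00 * b1 - A01 * b0) / det
        if sse(a + da, b + db) < sse(a, b):
            a, b, lam = a + da, b + db, lam * 0.5  # accept; relax damping
        else:
            lam *= 2.0  # reject; increase damping and retry
    return a, b
```

    The accept/reject rule interpolates between Gauss-Newton (small damping, fast near the optimum) and gradient descent (large damping, safe far from it).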

  15. A new calibration code for the JET polarimeter.

    PubMed

    Gelfusa, M; Murari, A; Gaudio, P; Boboc, A; Brombin, M; Orsitto, F P; Giovannozzi, E

    2010-05-01

    An equivalent model of the JET polarimeter is presented which overcomes the drawbacks of the fitting procedures previously used to provide calibrated results. First, the signal-processing electronics was simulated to confirm that it still operates within its original specifications. The effective optical path of both the vertical and lateral chords was then implemented to produce the calibration curves. This principled approach to the model yields a single procedure that can be applied at any manual calibration and remains valid until the following one. The optical model of the chords is then applied to derive the plasma measurements. The results are in good agreement with the estimates of the most advanced full-wave propagation code available and have been benchmarked against other diagnostics. The procedure has also proved to work properly for the most recent campaigns and high-current experiments.

  16. Towards a global network of gamma-ray detector calibration facilities

    NASA Astrophysics Data System (ADS)

    Tijs, Marco; Koomans, Ronald; Limburg, Han

    2016-09-01

    Gamma-ray logging tools are applied worldwide. At various locations, calibration facilities are used to calibrate these gamma-ray logging systems. Several attempts have been made to cross-correlate the well-known calibration pits, but this cross-correlation does not include calibration facilities in Europe or private-company calibration facilities. Our aim is to set up a framework that makes it possible to interlink all calibration facilities worldwide by using `tools of opportunity' - tools that have been calibrated in different calibration facilities, whether on a coordinated basis or by coincidence. To compare measurements from different tools, it is important to understand the behaviour of the tools in the different calibration pits. Borehole properties such as diameter, fluid, casing and probe diameter strongly influence the outcome of gamma-ray borehole logging. Logs need to be properly calibrated and compensated for these borehole properties in order to obtain in-situ grades or to perform cross-hole correlation. Some tool providers supply tool-specific correction curves for this purpose; others rely on reference measurements against sources of known radionuclide concentration and geometry. In this article, we present an attempt to set up a framework for transferring `local' calibrations so that they can be applied `globally'. The framework includes corrections for any geometry and detector size to give absolute radionuclide concentrations from borehole measurements. The model is used to compare measurements in the calibration pits of Grand Junction, located in the USA; Adelaide (previously known as AMDEL), located in Adelaide, Australia; and Stonehenge, located at Medusa Explorations BV in the Netherlands.

  17. Effective Desynchronization by Nonlinear Delayed Feedback

    NASA Astrophysics Data System (ADS)

    Popovych, Oleksandr V.; Hauptmann, Christian; Tass, Peter A.

    2005-04-01

    We show that nonlinear delayed feedback opens up novel means for the control of synchronization. In particular, we propose a demand-controlled method for powerful desynchronization, which does not require any time-consuming calibration. Our technique distinguishes itself by its robustness against variations of system parameters, even in strongly coupled ensembles of oscillators. We suggest our method for mild and effective deep brain stimulation in neurological diseases characterized by pathological cerebral synchronization.

  18. Planck 2013 results. VIII. HFI photometric calibration and mapmaking

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bertincourt, B.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chen, X.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Filliard, C.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Le Jeune, M.; Lellouch, E.; Leonardi, R.; Leroy, C.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Maurin, L.; Mazzotta, P.; McGehee, P.; Meinhold, P. 
R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Moreno, R.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rusholme, B.; Santos, D.; Savini, G.; Scott, D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Techene, S.; Terenzi, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-11-01

    This paper describes the methods used to produce photometrically calibrated maps from the Planck High Frequency Instrument (HFI) cleaned, time-ordered information. HFI observes the sky over a broad range of frequencies, from 100 to 857 GHz. To obtain the best calibration accuracy over such a large range, two different photometric calibration schemes have to be used. The 545 and 857 GHz data are calibrated by comparing flux-density measurements of Uranus and Neptune with models of their atmospheric emission. The lower frequencies (below 353 GHz) are calibrated using the solar dipole. A component of this anisotropy is time-variable, owing to the orbital motion of the satellite in the solar system. Photometric calibration is thus tightly linked to mapmaking, which also addresses low-frequency noise removal. By comparing observations taken more than one year apart in the same configuration, we have identified apparent gain variations with time. These variations are induced by non-linearities in the read-out electronics chain. We have developed an effective correction to limit their effect on calibration. We present several methods to estimate the precision of the photometric calibration. We distinguish relative uncertainties (between detectors, or between frequencies) and absolute uncertainties. Absolute uncertainties lie in the range from 0.54% to 10% from 100 to 857 GHz. We describe the pipeline used to produce the maps from the HFI timelines, based on the photometric calibration parameters, and the scheme used to set the zero level of the maps a posteriori. We also discuss the cross-calibration between HFI and the SPIRE instrument on board Herschel. Finally we summarize the basic characteristics of the set of HFI maps included in the 2013 Planck data release.

  19. Development of an efficient calibrated nonlocal plate model for nonlinear axial instability of zirconia nanosheets using molecular dynamics simulation.

    PubMed

    Sahmani, S; Fattahi, A M

    2017-08-01

    New ceramic materials containing nanoscaled crystalline phases are a major object of scientific interest owing to attractive advantages such as biocompatibility. Zirconia, a transparent glass ceramic, is one of the most useful binary oxides across a wide range of applications. In the present study, a new size-dependent plate model is constructed to predict the nonlinear axial instability characteristics of zirconia nanosheets under axial compressive load. To this end, Eringen's nonlocal continuum elasticity is incorporated into a refined exponential shear deformation plate theory. A perturbation-based solution process is used to derive explicit expressions for the nonlocal equilibrium paths of axially loaded nanosheets. Molecular dynamics (MD) simulations are then performed for the axial instability response of square zirconia nanosheets with different side lengths, and the results are matched with those of the developed nonlocal plate model to capture the proper value of the nonlocal parameter. It is demonstrated that the calibrated nonlocal plate model, with a nonlocal parameter of 0.37 nm, predicts the axial instability characteristics of zirconia nanosheets with an accuracy comparable to that of MD simulation. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. S-NPP VIIRS thermal emissive bands on-orbit calibration and performance

    NASA Astrophysics Data System (ADS)

    Efremova, Boryana; McIntire, Jeff; Moyer, David; Wu, Aisheng; Xiong, Xiaoxiong

    2014-09-01

    Presented is an assessment of the on-orbit radiometric performance of the thermal emissive bands (TEB) of the Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) instrument, based on data from the first two years of operations, 20 January 2012 to 20 January 2014. The VIIRS TEB are calibrated on orbit using a V-grooved blackbody (BB) as a radiance source. Performance characteristics trended over the life of the mission include the F factor, a measure of the gain change of the TEB detectors; the noise equivalent differential temperature (NEdT), a measure of the detector noise; and the detector offset and nonlinear terms, trended at the quarterly BB warm-up cool-down cycles. We find that the BB temperature is well controlled and stable within the 30 mK requirement. The F factor trends are very stable, showing little degradation (within 0.8%). The offset and nonlinearity terms likewise show no noticeable drifts, and the NEdT is stable without any trend. Other TEB radiometric calibration activities discussed include the on-orbit assessment of the response-versus-scan-angle functions and an approach to improve the M13 low-gain calibration using onboard lunar measurements. We conclude that all the assessed parameters comply with the requirements and that the TEB provide radiometric measurements with the required accuracy.
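
    A schematic of this style of TEB calibration: detector counts map to radiance through a quadratic response, and the F factor rescales the gain so the blackbody view reproduces its known radiance. The coefficients below are made up for illustration, not actual VIIRS values:

```python
def response(dn, c0, c1, c2):
    """Quadratic detector response: offset, linear and nonlinear terms."""
    return c0 + c1 * dn + c2 * dn * dn

def f_factor(l_blackbody, dn_blackbody, c0, c1, c2):
    """Gain-change factor: known BB radiance over the predicted response."""
    return l_blackbody / response(dn_blackbody, c0, c1, c2)

def calibrated_radiance(dn, f, c0, c1, c2):
    """Scene radiance after applying the F factor to the raw response."""
    return f * response(dn, c0, c1, c2)
```

    Trending `f` over time, as the paper does, exposes slow gain drift of the detectors while the blackbody itself stays fixed.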

  1. Spectral Reconstruction Based on Svm for Cross Calibration

    NASA Astrophysics Data System (ADS)

    Gao, H.; Ma, Y.; Liu, W.; He, H.

    2017-05-01

    The Chinese HY-1C/1D satellites will carry a 5 nm/10 nm-resolution visible-near-infrared (VNIR) hyperspectral sensor with a solar calibrator for cross-calibration with other sensors. The hyperspectral radiance data consist of average radiances over the sensor's passbands and therefore carry a spectral smoothing effect, so a transform from the hyperspectral radiance data to 1-nm-resolution apparent spectral radiance must be implemented by spectral reconstruction. To avoid the noise accumulation and deterioration that iterative algorithms suffer after several iterations, a novel regression method based on support vector machines (SVM) is proposed; it can closely approximate arbitrarily complex nonlinear relationships and offers good generalization capability through learning. From a systems viewpoint, the relationship between the apparent radiance and the equivalent radiance is a nonlinear mapping introduced by the spectral response function (SRF); the SVM transforms this low-dimensional nonlinear problem into a high-dimensional linear one through a kernel function, obtaining the global optimum by virtue of its quadratic form. Experiments were performed using 6S-simulated spectra that account for the SRF and SNR of the hyperspectral sensor, measured reflectance spectra of water bodies, and different atmospheric conditions. The comparison shows that, first, the proposed method reconstructs with higher accuracy, especially for high-frequency signal content; second, as the spectral resolution of the hyperspectral sensor decreases, the proposed method performs better than the iterative method; finally, the root-mean-square relative error (RMSRE), which measures the difference between the reconstructed and real spectra over the whole spectral range, is at least halved by the proposed method.

  2. Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems

    PubMed Central

    Li, Zhining; Zhang, Yingtang; Yin, Gang

    2018-01-01

    The measurement error of a differencing magnetic gradient tensor system (i.e., one using two homogeneous field sensors at a known baseline distance) includes the biases, scale factors, and nonorthogonality of each single magnetic sensor, as well as the misalignment error between the sensor arrays, all of which can severely affect measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. First, linear equations in the error parameters are constructed from the single sensor's system error model to obtain the artificial ideal vector output of the platform, using the total magnetic intensity (TMI) scalar as a reference via two nonlinear conversions, without any mathematical simplification. Second, the Levenberg–Marquardt algorithm is used to compute the integrated model of the 12 error parameters by nonlinear least-squares fitting with the artificial vector output as a reference, and a total of 48 system parameters are estimated simultaneously. The calibrated system output is expressed in the platform's orthogonal reference coordinate system. The analysis shows that the artificial-vector-calibrated output tracks the orientation fluctuations of the TMI accurately, effectively avoiding the “overcalibration” problem. In simulation, the error parameters are estimated with accuracy close to 100%. The experimental root-mean-square errors (RMSE) of the TMI and tensor components are less than 3 nT and 20 nT/m, respectively, and the parameter estimation is highly robust. PMID:29373544

  3. Simple solution for a complex problem: proanthocyanidins, galloyl glucoses and ellagitannins fit on a single calibration curve in high performance-gel permeation chromatography.

    PubMed

    Stringano, Elisabetta; Gea, An; Salminen, Juha-Pekka; Mueller-Harvey, Irene

    2011-10-28

    This study was undertaken to explore gel permeation chromatography (GPC) for estimating molecular weights of proanthocyanidin fractions isolated from sainfoin (Onobrychis viciifolia). The results were compared with data obtained by thiolytic degradation of the same fractions. Polystyrene, polyethylene glycol and polymethyl methacrylate standards were not suitable for estimating the molecular weights of underivatized proanthocyanidins. Therefore, a novel HPLC-GPC method was developed based on two serially connected PolarGel-L columns using DMF containing 5% water, 1% acetic acid and 0.15 M LiBr at 0.7 ml/min and 50 °C. This yielded a single calibration curve for galloyl glucoses (trigalloyl glucose, pentagalloyl glucose), ellagitannins (pedunculagin, vescalagin, punicalagin, oenothein B, gemin A), proanthocyanidins (procyanidin B2, cinnamtannin B1), and several other polyphenols (catechin, epicatechin gallate, epigallocatechin gallate, amentoflavone). The GPC-predicted molecular weights represent a considerable advance over previously reported HPLC-GPC methods for underivatized proanthocyanidins. Copyright © 2011 Elsevier B.V. All rights reserved.
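
    A GPC calibration curve of this kind is conventionally a straight line in log10(molecular weight) versus elution volume. A minimal least-squares sketch with invented standards (not the sainfoin data):

```python
import math

def fit_gpc_calibration(elution_ml, molecular_weights):
    """Least-squares line log10(MW) = intercept + slope * elution_volume."""
    ys = [math.log10(mw) for mw in molecular_weights]
    n = len(elution_ml)
    mx = sum(elution_ml) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in elution_ml)
    sxy = sum((x - mx) * (y - my) for x, y in zip(elution_ml, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

def predict_mw(elution_ml, intercept, slope):
    """Invert the calibration line back to a molecular weight."""
    return 10.0 ** (intercept + slope * elution_ml)
```

    The paper's finding is that one such line serves galloyl glucoses, ellagitannins and proanthocyanidins alike, so a single (intercept, slope) pair covers all three compound classes.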

  4. Satellite Calibration With LED Detectors at Mud Lake

    NASA Technical Reports Server (NTRS)

    Hiller, Jonathan D.

    2005-01-01

    Earth-monitoring instruments in orbit must be routinely calibrated in order to accurately analyze the data they obtain. By comparing radiometric measurements taken on the ground in conjunction with a satellite overpass, calibration curves are derived for an orbiting instrument. A permanent, automated facility is planned for this purpose at Mud Lake, Nevada, a large, homogeneous, dry lakebed. Because some orbiting instruments have low resolution (250 meters per pixel), inexpensive radiometers using LEDs as sensors are being developed to be arrayed widely over the lakebed. LEDs are ideal because they are inexpensive and reliable and sense over a narrow bandwidth. By obtaining and averaging widespread data, errors are reduced and long-term surface changes can be observed more accurately.
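
    The error reduction from averaging many independent LED radiometers follows the usual 1/sqrt(N) behaviour of independent readings. A small simulation (the sensor noise figures are hypothetical):

```python
import random
import statistics

def mean_of_sensors(rng, true_value, sigma, n_sensors):
    """One 'overpass': average n independent noisy radiometer readings."""
    return statistics.fmean(rng.gauss(true_value, sigma)
                            for _ in range(n_sensors))

def empirical_spread(n_sensors, trials=4000, true_value=100.0,
                     sigma=5.0, seed=1):
    """Standard deviation of the averaged reading over many overpasses."""
    rng = random.Random(seed)
    means = [mean_of_sensors(rng, true_value, sigma, n_sensors)
             for _ in range(trials)]
    return statistics.stdev(means)
```

    With 25 sensors the spread of the averaged reading drops by roughly a factor of five relative to a single sensor, which is the statistical rationale for arraying many cheap LED detectors rather than one expensive radiometer.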

  5. CubiCal - Fast radio interferometric calibration suite exploiting complex optimisation

    NASA Astrophysics Data System (ADS)

    Kenyon, J. S.; Smirnov, O. M.; Grobler, T. L.; Perkins, S. J.

    2018-05-01

    It has recently been shown that radio interferometric gain calibration can be expressed succinctly in the language of complex optimisation. In addition to providing an elegant framework for further development, it exposes properties of the calibration problem which can be exploited to accelerate traditional non-linear least squares solvers such as Gauss-Newton and Levenberg-Marquardt. We extend existing derivations to chains of Jones terms: products of several gains which model different aberrant effects. In doing so, we find that the useful properties found in the single term case still hold. We also develop several specialised solvers which deal with complex gains parameterised by real values. The newly developed solvers have been implemented in a Python package called CubiCal, which uses a combination of Cython, multiprocessing and shared memory to leverage the power of modern hardware. We apply CubiCal to both simulated and real data, and perform both direction-independent and direction-dependent self-calibration. Finally, we present the results of some rudimentary profiling to show that CubiCal is competitive with respect to existing calibration tools such as MeqTrees.

  6. Gaussian process based modeling and experimental design for sensor calibration in drifting environments

    PubMed Central

    Geng, Zongyu; Yang, Feng; Chen, Xi; Wu, Nianqiang

    2016-01-01

    It remains a challenge to accurately calibrate a sensor subject to environmental drift. The calibration task for such a sensor is to quantify the relationship between the sensor’s response and its exposure condition, which is specified by not only the analyte concentration but also the environmental factors such as temperature and humidity. This work developed a Gaussian Process (GP)-based procedure for the efficient calibration of sensors in drifting environments. Adopted as the calibration model, GP is not only able to capture the possibly nonlinear relationship between the sensor responses and the various exposure-condition factors, but also able to provide valid statistical inference for uncertainty quantification of the target estimates (e.g., the estimated analyte concentration of an unknown environment). Built on GP’s inference ability, an experimental design method was developed to achieve efficient sampling of calibration data in a batch sequential manner. The resulting calibration procedure, which integrates the GP-based modeling and experimental design, was applied on a simulated chemiresistor sensor to demonstrate its effectiveness and its efficiency over the traditional method. PMID:26924894
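
    A minimal Gaussian-process regression sketch shows the modeling step the procedure is built on (RBF kernel, pure-stdlib linear solve; the kernel choice and hyperparameters are illustrative, not those of the paper):

```python
import math

def rbf(x1, x2, ell=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return math.exp(-0.5 * ((x1 - x2) / ell) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior_mean(xtrain, ytrain, xstar, ell=1.0, noise=1e-8):
    """Posterior mean k_*^T (K + noise*I)^{-1} y at a test input xstar."""
    K = [[rbf(xi, xj, ell) + (noise if i == j else 0.0)
          for j, xj in enumerate(xtrain)] for i, xi in enumerate(xtrain)]
    alpha = solve(K, ytrain)
    return sum(a * rbf(x, xstar, ell) for a, x in zip(alpha, xtrain))
```

    In the paper's setting the inputs would be multi-dimensional exposure conditions (concentration, temperature, humidity) rather than a scalar, and the GP's predictive variance, not shown here, is what drives the batch-sequential experimental design.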

  7. Recovery from nonlinear creep provides a window into physics of polymer glasses

    NASA Astrophysics Data System (ADS)

    Caruthers, James; Medvedev, Grigori

    Creep under constant applied stress is one of the most basic mechanical experiments, and for polymer glasses it exhibits extremely rich relaxation behavior. As many as five distinct stages of nonlinear creep are observed, in which the creep rate dramatically slows down, accelerates, and then slows down again. Modeling efforts to date have focused primarily on predicting the intricacies of the nonlinear creep curve. We argue that as much attention should be paid to the creep recovery response once the stress is removed. The experimental creep recovery curve is smooth: the recovery rate is initially quite rapid and then progressively decreases. In contrast, the majority of traditional constitutive models predict recovery curves that are much too abrupt. A recently developed stochastic constitutive model that accounts for the dynamic heterogeneity of glasses produces a smooth creep recovery response consistent with experiment.

  8. Calibration Technique for Polarization-Sensitive Lidars

    NASA Technical Reports Server (NTRS)

    Alvarez, J. M.; Vaughan, M. A.; Hostetler, C. A.; Hung, W. H.; Winker, D. M.

    2006-01-01

    Polarization-sensitive lidars have proven to be highly effective in discriminating between spherical and non-spherical particles in the atmosphere. These lidars use a linearly polarized laser and are equipped with a receiver that can separately measure the components of the return signal polarized parallel and perpendicular to the outgoing beam. In this work we describe a technique for calibrating polarization-sensitive lidars that was originally developed at NASA's Langley Research Center (LaRC) and has been used continually over the past fifteen years. The procedure uses a rotatable half-wave plate inserted into the optical path of the lidar receiver to introduce controlled amounts of polarization cross-talk into a sequence of atmospheric backscatter measurements. Solving the resulting system of nonlinear equations yields the system calibration constants (gain ratio, G, and offset angle, theta) required for deriving calibrated depolarization-ratio measurements from the lidar signals. The procedure also determines the mean depolarization ratio within the region of the atmosphere that is analyzed. Simulations and error propagation studies show the method to be both reliable and well behaved. Operational details of the technique are illustrated using measurements obtained as part of Langley Research Center's participation in the First ISCCP Regional Experiment (FIRE).

  9. Optical Rotation Curves and Linewidths for Tully-Fisher Applications

    NASA Astrophysics Data System (ADS)

    Courteau, Stephane

    1997-12-01

    We present optical long-slit rotation curves for 304 northern Sb-Sc UGC galaxies from a sample designed for Tully-Fisher (TF) applications. Matching r-band photometry exists for each galaxy. We describe the procedures of rotation curve (RC) extraction and the construction of optical profiles analogous to 21 cm integrated linewidths. More than 20% of the galaxies were observed twice or more, allowing for a proper determination of systematic errors. Various measures of maximum rotational velocity to be used as input in the TF relation are tested on the basis of their repeatability, minimization of TF scatter, and match with 21 cm linewidths. The best measure of TF velocity, V2.2, is taken at the location of the peak rotational velocity of a pure exponential disk. An alternative measure, which makes no assumption about the luminosity profile or the shape of the rotation curve, is Vhist, the 20% width of the velocity histogram, though its match with 21 cm linewidths is not as good. We show that optical TF calibrations yield internal scatter comparable to, if not smaller than, the best calibrations based on single-dish 21 cm radio linewidths. Even though resolved H I RCs are more extended than their optical counterparts, a tight match between optical and radio linewidths exists because the bulk of the H I surface density is enclosed within the optical radius. We model the 304 RCs presented here plus a sample of 958 curves from Mathewson et al. (1992, ApJS, 81, 413) with various fitting functions. An arctan function provides an adequate simple fit (not accounting for non-circular motions and spiral arms). More elaborate, empirical models may yield a better match at the expense of strong covariances. We caution against physical or "universal" parametrizations for TF applications.

  10. Accuracy and Calibration of High Explosive Thermodynamic Equations of State

    NASA Astrophysics Data System (ADS)

    Baker, Ernest L.; Capellos, Christos; Stiel, Leonard I.; Pincay, Jack

    2010-10-01

    The Jones-Wilkins-Lee-Baker (JWLB) equation of state (EOS) was developed to more accurately describe overdriven detonation while maintaining an accurate description of high explosive products expansion work output. The increased mathematical complexity of the JWLB high explosive equations of state provides increased accuracy for practical problems of interest. Increased numbers of parameters are often justified based on improved physics descriptions but can also mean increased calibration complexity. A generalized extent of aluminum reaction Jones-Wilkins-Lee (JWL)-based EOS was developed in order to more accurately describe the observed behavior of aluminized explosives detonation products expansion. A calibration method was developed to describe the unreacted, partially reacted, and completely reacted explosive using nonlinear optimization. A reasonable calibration of a generalized extent of aluminum reaction JWLB EOS as a function of aluminum reaction fraction has not yet been achieved due to the increased mathematical complexity of the JWLB form.

  11. Hydrological modelling of the Mara River Basin, Kenya: Dealing with uncertain data quality and calibrating using river stage

    NASA Astrophysics Data System (ADS)

    Hulsman, P.; Bogaard, T.; Savenije, H. H. G.

    2016-12-01

    In hydrology and water resources management, discharge is the main time series used for model calibration. Rating curves are needed to derive discharge from continuously measured water levels, but assuring their quality is demanding owing to dynamic channel changes and the difficulty of accurately measuring discharge at high flows. This holds everywhere, but even more so in the African socio-economic context. To cope with these uncertainties, this study proposes calibrating on water levels instead of discharge data. Uncertainties in rainfall measurements, especially spatial heterogeneity, must also be considered. In this study, the semi-distributed rainfall-runoff model FLEX-Topo was applied to the Mara River Basin. The model distinguishes seven sub-basins and four hydrological response units, each with a unique model structure based on the expected dominant flow processes. Parameter and process constraints were applied to exclude unrealistic results. To calibrate the model, water levels were back-calculated from modelled discharges using cross-section data and the Strickler formula, calibrating the lumped parameter k·S^(1/2), and compared to measured water levels. The model simulated the water depths well for the entire basin and for the Nyangores sub-basin in the north. However, the calibrated and observed rating curves differed significantly at the basin outlet, probably due to uncertainties in the measured discharge, whereas at Nyangores they were almost identical. To assess the effect of rainfall uncertainties on the hydrological model, the representative rainfall in each sub-basin was estimated with three different methods: 1) a single station, 2) average precipitation, and 3) areal sub-division using Thiessen polygons. All three methods gave similar results on average, but method 1 produced flashier responses, method 2 dampened the water levels by averaging the rainfall, and method 3 was a combination of both. In conclusion, in the case of unreliable rating curves
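
    The back-calculation of stage from modelled discharge can be sketched with the Manning-Strickler relation for a rectangular cross-section (the channel geometry, slope and roughness values below are hypothetical; the study's actual cross-sections are irregular):

```python
def strickler_discharge(h, width, slope, ks):
    """Q = ks * A * R^(2/3) * S^(1/2) for a rectangular channel.

    h: water depth (m); width: channel width (m); slope: bed slope (-);
    ks: Strickler roughness coefficient (m^(1/3)/s).
    """
    area = width * h
    radius = area / (width + 2.0 * h)  # hydraulic radius A / wetted perimeter
    return ks * area * radius ** (2.0 / 3.0) * slope ** 0.5

def stage_from_discharge(q, width, slope, ks, hmax=50.0, tol=1e-8):
    """Invert the rating by bisection (Q is monotonic in depth)."""
    lo, hi = 0.0, hmax
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if strickler_discharge(mid, width, slope, ks) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    Comparing such back-calculated stages directly against observed water levels is what lets the calibration sidestep an unreliable rating curve.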

  12. Out-of-unison resonance in weakly nonlinear coupled oscillators

    PubMed Central

    Hill, T. L.; Cammarano, A.; Neild, S. A.; Wagg, D. J.

    2015-01-01

    Resonance is an important phenomenon in vibrating systems and, in systems of nonlinear coupled oscillators, resonant interactions can occur between constituent parts of the system. In this paper, out-of-unison resonance is defined as a solution in which components of the response are 90° out-of-phase, in contrast to the in-unison responses that are normally considered. A well-known physical example of this is whirling, which can occur in a taut cable. Here, we use a normal form technique to obtain time-independent functions known as backbone curves. Considering a model of a cable, this approach is used to identify out-of-unison resonance and it is demonstrated that this corresponds to whirling. We then show how out-of-unison resonance can occur in other two degree-of-freedom nonlinear oscillators. Specifically, an in-line oscillator consisting of two masses connected by nonlinear springs—a type of system where out-of-unison resonance has not previously been identified—is shown to have specific parameter regions where out-of-unison resonance can occur. Finally, we demonstrate how the backbone curve analysis can be used to predict the responses of forced systems. PMID:25568619

  13. The Workload Curve: Subjective Mental Workload.

    PubMed

    Estes, Steven

    2015-11-01

    In this paper I begin looking for evidence of a subjective workload curve. Results from subjective mental workload assessments are often interpreted linearly. However, I hypothesized that ratings of subjective mental workload increase nonlinearly with unitary increases in working memory load. Two studies were conducted. In the first, participants provided ratings of the mental difficulty of a series of digit span recall tasks. In the second study, participants provided ratings of mental difficulty associated with recall of visual patterns. The results of the second study were then examined using a mathematical model of working memory. An S curve, predicted a priori, was found in the results of both the digit span and visual pattern studies. A mathematical model showed a tight fit between workload ratings and levels of working memory activation. This effort provides good initial evidence for the existence of a workload curve. The results support further study in applied settings and other facets of workload (e.g., temporal workload). Measures of subjective workload are used across a wide variety of domains and applications. These results bear on their interpretation, particularly as they relate to workload thresholds. © 2015, Human Factors and Ergonomics Society.
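    The hypothesized S curve can be illustrated with a logistic model of workload ratings; the ratings, midpoint and steepness below are synthetic illustrations, and a crude grid search stands in for a proper nonlinear fit:

    ```python
    import math

    def s_curve(load, midpoint, steepness, top=100.0):
        """Logistic rating curve: subjective workload saturates at high load."""
        return top / (1.0 + math.exp(-steepness * (load - midpoint)))

    # Synthetic difficulty ratings for digit spans 1..9 (illustrative only).
    loads = list(range(1, 10))
    ratings = [5, 8, 14, 28, 50, 72, 86, 93, 96]

    def sse(midpoint, steepness):
        """Sum of squared errors of the logistic curve against the ratings."""
        return sum((s_curve(x, midpoint, steepness) - y) ** 2
                   for x, y in zip(loads, ratings))

    # Crude grid search stands in for a proper nonlinear least-squares fit.
    best_mid, best_steep = min(
        ((m / 10.0, s / 10.0) for m in range(20, 80) for s in range(5, 30)),
        key=lambda p: sse(*p))
    ```

    A linear fit to such data systematically over-predicts workload at the extremes and under-predicts it near the midpoint, which is the interpretive point the paper makes.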

  14. Sediment calibration strategies of Phase 5 Chesapeake Bay watershed model

    USGS Publications Warehouse

    Wu, J.; Shenk, G.W.; Raffensperger, Jeff P.; Moyer, D.; Linker, L.C.; ,

    2005-01-01

    Sediment is a primary constituent of concern for Chesapeake Bay due to its effect on water clarity. Accurate representation of sediment processes and behavior in the Chesapeake Bay watershed model is critical for developing sound load reduction strategies. Sediment calibration remains one of the most difficult components of watershed-scale assessment. This is especially true for the Chesapeake Bay watershed model, given the size of the watershed being modeled and the complexity involved in land and stream simulation processes. To obtain the best calibration, the Chesapeake Bay Program has developed four different strategies for sediment calibration of the Phase 5 watershed model: 1) comparing observed and simulated sediment rating curves for different parts of the hydrograph; 2) analyzing change of bed depth over time; 3) relating deposition/scour to total annual sediment loads; and 4) calculating "goodness-of-fit" statistics. These strategies allow a more accurate sediment calibration and also provide insightful information on sediment processes and behavior in the Chesapeake Bay watershed.
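    One widely used goodness-of-fit statistic in watershed calibration is the Nash-Sutcliffe efficiency; the abstract does not name its specific statistics, so this is a generic illustration:

    ```python
    def nash_sutcliffe(observed, simulated):
        """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
        predicts no better than the mean of the observations."""
        mean_obs = sum(observed) / len(observed)
        num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
        den = sum((o - mean_obs) ** 2 for o in observed)
        return 1.0 - num / den
    ```

    Computed separately on rising and falling limbs of the hydrograph, such a statistic can expose the kind of rating-curve mismatch that strategy 1 above targets.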

  15. Nonlinear resonance of the rotating circular plate under static loads in magnetic field

    NASA Astrophysics Data System (ADS)

    Hu, Yuda; Wang, Tong

    2015-11-01

    The rotating circular plate is widely used in mechanical engineering; in modern industry such plates often operate in electromagnetic fields under complex loads. In order to study the resonance of a rotating circular plate under static loads in a magnetic field, the nonlinear vibration equation of the spinning circular plate is derived from Hamilton's principle. The algebraic expression for the initial deflection and the magnetoelastic forced-vibration differential equation are obtained by applying the Galerkin integral method. By means of a modified multiple-scale method, the strongly nonlinear amplitude-frequency response equation in the steady state is established. The amplitude-frequency characteristic curve, and the curves of amplitude versus static load and excitation force, are obtained by numerical calculation. The influences of magnetic induction intensity, rotation speed and static loads on the amplitude and the nonlinear characteristics of the spinning plate are analyzed. This research provides a theoretical reference for the study of nonlinear resonance of rotating plates in engineering.

  16. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
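    A minimal sketch of the straight-line estimate of Isc from I-V points near short circuit, with a standard error from ordinary least squares; the paper's Bayesian treatment and evidence-based window selection are not reproduced here, and the data are synthetic:

    ```python
    def line_fit(xs, ys):
        """Ordinary least-squares line y = a + b*x; returns the intercept a
        (the Isc estimate when x is voltage), the slope b, and the standard
        error of the intercept."""
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        a = my - b * mx
        resid = [y - (a + b * x) for x, y in zip(xs, ys)]
        s2 = sum(r * r for r in resid) / (n - 2)  # residual variance
        return a, b, (s2 * (1.0 / n + mx * mx / sxx)) ** 0.5

    # Synthetic I-V points near short circuit (volts, amps).
    volts = [0.00, 0.02, 0.04, 0.06, 0.08]
    amps = [5.001, 4.999, 4.998, 4.995, 4.993]
    isc, slope, isc_err = line_fit(volts, amps)
    ```

    The standard error shrinks as more points are added, which is exactly the trap the paper warns about: it ignores model discrepancy when the I-V curve bends within the window.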

  17. Calibration of the Concorde radiation detection instrument and measurements at SST altitude.

    DOT National Transportation Integrated Search

    1971-06-01

    Performance tests were carried out on a solar cosmic radiation detection instrument developed for the Concorde SST. The instrument calibration curve (log dose-rate vs instrument reading) was reasonably linear from 0.004 to 1 rem/hr for both gamma rad...

  18. The Changing Nonlinear Relationship between Income and Terrorism

    PubMed Central

    Enders, Walter; Hoover, Gary A.

    2014-01-01

    This article reinvestigates the relationship between real per capita gross domestic product (GDP) and terrorism. We devise a terrorism Lorenz curve to show that domestic and transnational terrorist attacks are each more concentrated in middle-income countries, thereby suggesting a nonlinear income–terrorism relationship. Moreover, this point of concentration shifted to lower income countries after the rising influence of the religious fundamentalist and nationalist/separatist terrorists in the early 1990s. For transnational terrorist attacks, this shift characterized not only the attack venue but also the perpetrators’ nationality. The article then uses nonlinear smooth transition regressions to establish the relationship between real per capita GDP and terrorism for eight alternative terrorism samples, accounting for venue, perpetrators’ nationality, terrorism type, and the period. Our nonlinear estimates are shown to be favored over estimates using linear or quadratic income determinants of terrorism. These nonlinear estimates are robust to additional controls. PMID:28579636

  19. Data-Driven Method to Estimate Nonlinear Chemical Equivalence.

    PubMed

    Mayo, Michael; Collier, Zachary A; Winton, Corey; Chappell, Mark A

    2015-01-01

    There is great need to express the impacts of chemicals found in the environment in terms of effects from alternative chemicals of interest. Methods currently employed in fields such as life-cycle assessment, risk assessment, mixtures toxicology, and pharmacology rely mostly on heuristic arguments to justify the use of linear relationships in the construction of "equivalency factors," which aim to model these concentration-concentration correlations. However, the use of linear models, even at low concentrations, oversimplifies the nonlinear nature of the concentration-response curve, therefore introducing error into calculations involving these factors. We address this problem by reporting a method to determine a concentration-concentration relationship between two chemicals based on the full extent of experimentally derived concentration-response curves. Although this method can be easily generalized, we develop and illustrate it from the perspective of toxicology, in which we provide equations relating the sigmoid and non-monotone, or "biphasic," responses typical of the field. The resulting concentration-concentration relationships are manifestly nonlinear for nearly any chemical level, even at the very low concentrations common to environmental measurements. We demonstrate the method using real-world examples of toxicological data which may exhibit sigmoid and biphasic mortality curves. Finally, we use our models to calculate equivalency factors, and show that traditional results are recovered only when the concentration-response curves are "parallel," which has been noted before, but we make formal here by providing mathematical conditions on the validity of this approach.
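    The idea of composing one chemical's response curve with the inverse of another's can be sketched with Hill (sigmoid) curves; the EC50 and Hill-coefficient values are illustrative assumptions, not data from the paper:

    ```python
    def hill(c, ec50, n):
        """Sigmoid (Hill) concentration-response; response runs from 0 to 1."""
        return c ** n / (ec50 ** n + c ** n)

    def hill_inverse(r, ec50, n):
        """Concentration producing response r on the same Hill curve."""
        return ec50 * (r / (1.0 - r)) ** (1.0 / n)

    def equivalent_concentration(c_a, curve_a=(10.0, 1.5), curve_b=(4.0, 0.8)):
        """Concentration of chemical B matching the response of c_a of chemical A:
        c_b = f_B^{-1}(f_A(c_a)). Curve parameters (EC50, Hill n) are invented."""
        return hill_inverse(hill(c_a, *curve_a), *curve_b)
    ```

    Because the composition is nonlinear, the ratio equivalent_concentration(c)/c changes with c, which is exactly why a single linear equivalency factor fails except when the two response curves are parallel.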

  20. [A plane-based hand-eye calibration method for surgical robots].

    PubMed

    Zeng, Bowei; Meng, Fanle; Ding, Hui; Liu, Wenbo; Wu, Di; Wang, Guangzhi

    2017-04-01

    In order to calibrate the hand-eye transformation between a surgical robot and a laser range finder (LRF), a calibration algorithm based on a planar template was designed. A mathematical model of the planar template is given and the approach to solving its equations is derived. To address measurement errors in a practical system, we propose a new algorithm for selecting coplanar data. This algorithm can effectively eliminate data with considerable measurement error and thereby improve calibration accuracy. Furthermore, three orthogonal planes were used to improve the calibration accuracy, with a nonlinear optimization for the hand-eye calibration. To verify the calibration precision, we used the LRF to measure fixed points in different directions and the surfaces of a cuboid. Experimental results indicated that the precision of the single planar template method was (1.37±0.24) mm, and that of the three orthogonal planes method was (0.37±0.05) mm. Moreover, the mean FRE of three-dimensional (3D) points was 0.24 mm and the mean TRE was 0.26 mm. The maximum angle measurement error was 0.4 degrees. Experimental results show that the method presented in this paper is effective, achieves high accuracy, and can meet the requirements of precise localization for surgical robots.

  1. The intrinsic mechanical nonlinearity ³Q₀(ω) of linear homopolymer melts

    NASA Astrophysics Data System (ADS)

    Cziep, Miriam Angela; Abbasi, Mahdi; Wilhelm, Manfred

    2017-05-01

    Medium amplitude oscillatory shear (MAOS) in combination with Fourier transformation of the mechanical stress signal (FT rheology) was utilized to investigate the influence of molecular weight, molecular weight distribution and the monomer on the intrinsic nonlinearity ³Q₀(ω). Nonlinear master curves of ³Q₀(ω) were created by applying the time-temperature superposition (TTS) principle. These master curves showed a characteristic shape, with an increasing slope at small frequencies, a maximum ³Q₀,max, and a decreasing slope at high frequencies. ³Q₀(De) master curves of monodisperse polymers were evaluated and quantified with the help of a semi-empirical equation derived from predictions of the pom-pom and molecular stress function (MSF) models. This resulted in a monomer-independent description of the nonlinear mechanical behavior of linear, monodisperse homopolymer melts, where ³Q₀(ω,Z) is only a function of the frequency ω and the number of entanglements Z. For polydisperse samples, ³Q₀(ω) showed a high sensitivity within the experimental window towards an increasing PDI. At small frequencies, the slope of ³Q₀(ω) decreases until a plateau value of approximately zero is reached, starting at a PDI of around 2 and higher.

  2. Self-calibration of robot-sensor system

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1990-01-01

    The process of finding the coordinate transformation between a robot and an external sensor system is addressed. This calibration is equivalent to solving a nonlinear optimization problem for the parameters that characterize the transformation. A two-step procedure is proposed for solving the problem. The first step involves finding a nominal solution that is a good approximation of the final solution. A variational problem is then generated to replace the original problem in the second step. With the assumption that the variational parameters are small compared to unity, the resulting problem can be solved with relatively small computational effort.

  3. On Light-Like Extremal Surfaces in Curved Spacetimes

    NASA Astrophysics Data System (ADS)

    Huang, Shou-Jun; He, Chun-Lei

    2014-01-01

    In this paper, we are concerned with light-like extremal surfaces in curved spacetimes. It is interesting to find that, under a diffeomorphic transformation of variables, the light-like extremal surfaces can be described by a system of nonlinear geodesic equations. In particular, we investigate the light-like extremal surfaces in Schwarzschild spacetime in detail, and some new special solutions are derived systematically with the aim of comparing with known results and illustrating the method.

  4. A comparison of linear and nonlinear statistical techniques in performance attribution.

    PubMed

    Chan, N H; Genovese, C R

    2001-01-01

    Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks, using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on the standard linear multifactor model and three nonlinear techniques-model selection, additive models, and neural networks-are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes have been developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.

  5. Fluorescence calibration method for single-particle aerosol fluorescence instruments

    NASA Astrophysics Data System (ADS)

    Shipley Robinson, Ellis; Gao, Ru-Shan; Schwarz, Joshua P.; Fahey, David W.; Perring, Anne E.

    2017-05-01

    Real-time, single-particle fluorescence instruments used to detect atmospheric bioaerosol particles are increasingly common, yet no standard fluorescence calibration method exists for this technique. This gap limits the utility of these instruments as quantitative tools and complicates comparisons between different measurement campaigns. To address this need, we have developed a method to produce size-selected particles with a known mass of fluorophore, which we use to calibrate the fluorescence detection of a Wideband Integrated Bioaerosol Sensor (WIBS-4A). We use mixed tryptophan-ammonium sulfate particles to calibrate one detector (FL1; excitation = 280 nm, emission = 310-400 nm) and pure quinine particles to calibrate the other (FL2; excitation = 280 nm, emission = 420-650 nm). The relationship between fluorescence and mass for the mixed tryptophan-ammonium sulfate particles is linear, while that for the pure quinine particles is nonlinear, likely indicating that not all of the quinine mass contributes to the observed fluorescence. Nonetheless, both materials produce a repeatable response between observed fluorescence and particle mass. This procedure allows users to set the detector gains to achieve a known absolute response, calculate the limits of detection for a given instrument, improve the repeatability of the instrumental setup, and facilitate intercomparisons between different instruments. We recommend calibration of single-particle fluorescence instruments using these methods.

  6. X-ray plane-wave diffraction effects in a crystal with third-order nonlinearity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balyan, M. K., E-mail: mbalyan@ysu.am

    The two-wave dynamical diffraction in the Laue geometry has been theoretically considered for a plane X-ray wave in a crystal with a third-order nonlinear response to the external field. An analytical solution to the problem stated is found for certain diffraction conditions. A nonlinear pendulum effect is analyzed. The nonlinear extinction length is found to depend on the incident-wave intensity. A pendulum effect of a new type is revealed: the intensities of the transmitted and diffracted waves periodically depend on the incident-wave intensity at a fixed crystal thickness. The rocking curves and Borrmann nonlinear effect are numerically calculated.

  7. Multivariate curve resolution-assisted determination of pseudoephedrine and methamphetamine by HPLC-DAD in water samples.

    PubMed

    Vosough, Maryam; Mohamedian, Hadi; Salemi, Amir; Baheri, Tahmineh

    2015-02-01

    In the present study, a simple strategy based on solid-phase extraction (SPE) with a cation exchange sorbent (Finisterre SCX) followed by fast high-performance liquid chromatography (HPLC) with diode array detection coupled with chemometrics tools has been proposed for the determination of methamphetamine and pseudoephedrine in ground water and river water. At first, the HPLC and SPE conditions were optimized and the analytical performance of the method was determined. In the case of ground water, determination of analytes was successfully performed through univariate calibration curves. For the river water sample, multivariate curve resolution and alternating least squares was implemented and the second-order advantage was achieved in samples containing uncalibrated interferences and uncorrected background signals. The calibration curves showed good linearity (r² > 0.994). The limits of detection for pseudoephedrine and methamphetamine were 0.06 and 0.08 μg/L and the average recovery values were 104.7 and 102.3% in river water, respectively. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
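    A univariate calibration curve with r² and a limit-of-detection estimate can be sketched as follows; the 3.3·s/slope convention and all data are illustrative assumptions, not the paper's exact method or values:

    ```python
    def calibration(concs, signals):
        """Least-squares calibration line; returns slope, intercept, r^2 and an
        LOD estimated as 3.3 * (residual std) / slope."""
        n = len(concs)
        mx = sum(concs) / n
        my = sum(signals) / n
        sxx = sum((x - mx) ** 2 for x in concs)
        slope = sum((x - mx) * (y - my) for x, y in zip(concs, signals)) / sxx
        intercept = my - slope * mx
        ss_res = sum((y - (intercept + slope * x)) ** 2
                     for x, y in zip(concs, signals))
        ss_tot = sum((y - my) ** 2 for y in signals)
        r2 = 1.0 - ss_res / ss_tot
        lod = 3.3 * (ss_res / (n - 2)) ** 0.5 / slope
        return slope, intercept, r2, lod

    # Synthetic standards: concentration (ug/L) vs. detector signal.
    concs = [0.1, 0.5, 1.0, 2.0, 5.0]
    signals = [0.21, 1.00, 2.02, 3.99, 10.01]
    slope, intercept, r2, lod = calibration(concs, signals)
    ```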

  8. An innovative method for coordinate measuring machine one-dimensional self-calibration with simplified experimental process.

    PubMed

    Fang, Cheng; Butler, David Lee

    2013-05-01

    In this paper, an innovative method for CMM (coordinate measuring machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. Mathematically, the number of samples can be minimized by eliminating redundant equations from those configured by the experimental data array. The section lengths of the artefact are measured at arranged positions, from which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that with the error compensation curve, uncertainty of the measurement can be reduced to 50%.
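    The error-compensation step can be sketched as interpolating calibrated errors over axis position; piecewise-linear interpolation is used here for brevity in place of the paper's spline, and the positions and error values are invented:

    ```python
    import bisect

    # Hypothetical calibrated errors (mm) at arranged positions along one axis.
    positions = [0.0, 100.0, 200.0, 300.0, 400.0]
    errors = [0.0, 0.8e-3, 1.5e-3, 1.1e-3, 0.4e-3]

    def compensate(reading):
        """Subtract the interpolated error from a raw CMM reading (piecewise
        linear here, in place of the paper's spline interpolation)."""
        i = bisect.bisect_right(positions, reading) - 1
        i = max(0, min(i, len(positions) - 2))
        t = (reading - positions[i]) / (positions[i + 1] - positions[i])
        return reading - (errors[i] + t * (errors[i + 1] - errors[i]))
    ```

    A spline would give a smoother compensation curve between the calibrated positions, at the cost of slightly more bookkeeping.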

  9. Automatic exposure control calibration and optimisation for abdomen, pelvis and lumbar spine imaging with an Agfa computed radiography system.

    PubMed

    Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Joshi, H; Saunderson, J R; Beavis, A W

    2016-11-07

    The use of three physical image quality metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm) have recently been examined by our group for their appropriateness in the calibration of an automatic exposure control (AEC) device for chest radiography with an Agfa computed radiography (CR) imaging system. This study uses the same methodology but investigates AEC calibration for abdomen, pelvis and spine CR imaging. AEC calibration curves were derived using a simple uniform phantom (equivalent to 20 cm water) to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated abdomen, pelvis and spine images (created from real patient CT datasets) with appropriate detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated images contained clinically realistic projected anatomy and were scored by experienced image evaluators. Constant DDI and CNR curves did not provide optimized performance but constant eNEQm and SNR did, with the latter being the preferred calibration metric given that it is easier to measure in practice. This result was consistent with the previous investigation for chest imaging with AEC devices. Medical physicists may therefore use a simple and easily accessible uniform water equivalent phantom to measure the SNR image quality metric described here when calibrating AEC devices for abdomen, pelvis and spine imaging with Agfa CR systems, in the confidence that clinical image quality will be sufficient for the required clinical task. However, to ensure appropriate levels of detector air kerma the advice of expert image evaluators must be sought.

  10. Automatic exposure control calibration and optimisation for abdomen, pelvis and lumbar spine imaging with an Agfa computed radiography system

    NASA Astrophysics Data System (ADS)

    Moore, C. S.; Wood, T. J.; Avery, G.; Balcam, S.; Needler, L.; Joshi, H.; Saunderson, J. R.; Beavis, A. W.

    2016-11-01

    The use of three physical image quality metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm) have recently been examined by our group for their appropriateness in the calibration of an automatic exposure control (AEC) device for chest radiography with an Agfa computed radiography (CR) imaging system. This study uses the same methodology but investigates AEC calibration for abdomen, pelvis and spine CR imaging. AEC calibration curves were derived using a simple uniform phantom (equivalent to 20 cm water) to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated abdomen, pelvis and spine images (created from real patient CT datasets) with appropriate detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated images contained clinically realistic projected anatomy and were scored by experienced image evaluators. Constant DDI and CNR curves did not provide optimized performance but constant eNEQm and SNR did, with the latter being the preferred calibration metric given that it is easier to measure in practice. This result was consistent with the previous investigation for chest imaging with AEC devices. Medical physicists may therefore use a simple and easily accessible uniform water equivalent phantom to measure the SNR image quality metric described here when calibrating AEC devices for abdomen, pelvis and spine imaging with Agfa CR systems, in the confidence that clinical image quality will be sufficient for the required clinical task. However, to ensure appropriate levels of detector air kerma the advice of expert image evaluators must be sought.

  11. Assessing and calibrating the ATR-FTIR approach as a carbonate rock characterization tool

    NASA Astrophysics Data System (ADS)

    Henry, Delano G.; Watson, Jonathan S.; John, Cédric M.

    2017-01-01

    ATR-FTIR (attenuated total reflectance Fourier transform infrared) spectroscopy can be used as a rapid and economical tool for qualitative identification of carbonates, calcium sulphates, oxides and silicates, as well as for quantitatively estimating the concentration of minerals. Over 200 powdered samples with known concentrations of two, three, four and five phase mixtures were made, and a suite of calibration curves was derived that can be used to quantify the minerals. The calibration curves in this study have an R² ranging from 0.93-0.99, a RMSE (root mean square error) of 1-5 wt.% and a maximum error of 3-10 wt.%. The calibration curves were used on 35 geological samples that had previously been studied using XRD (X-ray diffraction). The identification of the minerals using ATR-FTIR is comparable with XRD, and the quantitative results have a RMSD (root mean square deviation) of 14% and 12% for calcite and dolomite respectively when compared to XRD results. ATR-FTIR is a rapid technique (identification and quantification take < 5 min) that involves virtually no cost if the machine is available. It is a common tool in most analytical laboratories, and it also has the potential to be deployed on a rig for real-time acquisition of the mineralogy of cores and rock chips at the surface, as it requires no special sample preparation and offers rapid data collection and easy analysis.

  12. Nonlinear model identification and spectral submanifolds for multi-degree-of-freedom mechanical vibrations

    NASA Astrophysics Data System (ADS)

    Szalai, Robert; Ehrhardt, David; Haller, George

    2017-06-01

    In a nonlinear oscillatory system, spectral submanifolds (SSMs) are the smoothest invariant manifolds tangent to linear modal subspaces of an equilibrium. Amplitude-frequency plots of the dynamics on SSMs provide the classic backbone curves sought in experimental nonlinear model identification. We develop here a methodology to compute analytically both the shape of SSMs and their corresponding backbone curves from a data-assimilating model fitted to experimental vibration signals. This model identification utilizes Takens's delay-embedding theorem, as well as a least-squares fit to the Taylor expansion of the sampling map associated with that embedding. The SSMs are then constructed for the sampling map using the parametrization method for invariant manifolds, which assumes that the manifold is an embedding of, rather than a graph over, a spectral subspace. Using examples of both synthetic and real experimental data, we demonstrate that this approach reproduces backbone curves with high accuracy.
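    The delay-embedding step can be sketched as building a matrix of time-shifted copies of a measured signal (a generic Takens-style embedding, not the authors' implementation):

    ```python
    def delay_embed(signal, dim, delay=1):
        """Takens-style delay embedding: each row is
        [x(t), x(t + delay), ..., x(t + (dim - 1) * delay)]."""
        n = len(signal) - (dim - 1) * delay
        return [[signal[t + k * delay] for k in range(dim)] for t in range(n)]
    ```

    The embedded rows serve as the state vectors to which the sampling map's Taylor expansion is then fitted by least squares.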

  13. On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction.

    PubMed

    Crop, F; Van Rompaye, B; Paelinck, L; Vakaet, L; Thierens, H; De Wagter, C

    2008-07-21

    The purpose of this study was both to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, with dose as a function of OD (inverse regression) or OD as a function of dose (inverse prediction). Applying a simple linear regression fit is invalid because the heteroscedasticity of the data is not taken into account. This can lead to erroneous results originating from the calibration process itself, and thus to lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method for creating calibration curves. We found that the OLS inverse regression method can lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in the 0-400 cGy range. We also developed a Monte-Carlo-based process to optimize calibrations depending on the needs of the experiment. This type of thorough analysis can lead to higher accuracy in film dosimetry.
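    The core of the WLS approach, for the simple straight-line case, is a fit with per-point standard deviations; the data below are synthetic and the simple linear form is only one of the models the paper considers:

    ```python
    def wls_line(x, y, sigma):
        """Weighted least-squares line y = a + b*x with weights 1/sigma^2,
        as needed when optical-density readings are heteroscedastic."""
        w = [1.0 / s ** 2 for s in sigma]
        sw = sum(w)
        mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
        my = sum(wi * yi for wi, yi in zip(w, y)) / sw
        b = (sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
             / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)))
        a = my - b * mx
        return a, b
    ```

    With equal sigmas this reduces to OLS; with dose-dependent sigmas the fit is pulled toward the more precise readings, which is what removes the bias of the unweighted fit.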

  14. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system.

    PubMed

    Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Beavis, A W; Saunderson, J R

    2014-05-07

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given that it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that resulting clinical image quality will be adequate for the required clinical task. However, this must be done with close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.

  15. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system

    NASA Astrophysics Data System (ADS)

    Moore, C. S.; Wood, T. J.; Avery, G.; Balcam, S.; Needler, L.; Beavis, A. W.; Saunderson, J. R.

    2014-05-01

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric as it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that the resulting clinical image quality will be adequate for the required clinical task. However, this must be done with the close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.

  16. Integrated calibration sphere and calibration step fixture for improved coordinate measurement machine calibration

    DOEpatents

    Clifford, Harry J [Los Alamos, NM

    2011-03-22

    A method and apparatus for mounting a calibration sphere to a calibration fixture for Coordinate Measurement Machine (CMM) calibration and qualification is described, decreasing the time required for such qualification, thus allowing the CMM to be used more productively. A number of embodiments are disclosed that allow for new and retrofit manufacture to perform as integrated calibration sphere and calibration fixture devices. This invention renders unnecessary the removal of a calibration sphere prior to CMM measurement of calibration features on calibration fixtures, thereby greatly reducing the time spent qualifying a CMM.

  17. Third order nonlinearity in pulsed laser deposited LiNbO{sub 3} thin films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumuluri, Anil; Rapolu, Mounika; Rao, S. Venugopal, E-mail: kcjrsp@uohyd.ernet.in, E-mail: svrsp@uohyd.ernet.in

    2016-05-06

    Lithium niobate (LiNbO{sub 3}) thin films were prepared using the pulsed laser deposition technique. Structural properties were examined by XRD, and the optical band gap of the thin films was measured from transmittance spectra recorded using a UV-Visible spectrophotometer. The nonlinear optical properties of the thin films were measured using the Z-scan technique. The films exhibited third-order nonlinearity, and the corresponding two-photon absorption coefficient, nonlinear refractive index, and real and imaginary parts of the nonlinear susceptibility were calculated from the open-aperture and closed-aperture transmission curves. These studies suggest that the films have potential applications in nonlinear optical devices.

  18. Historical Cost Curves for Hydrogen Masers and Cesium Beam Frequency and Timing Standards

    NASA Technical Reports Server (NTRS)

    Remer, D. S.; Moore, R. C.

    1985-01-01

    Historical cost curves were developed for hydrogen masers and cesium beam standards used for frequency and timing calibration in the Deep Space Network. These curves may be used to calculate the cost of future hydrogen masers or cesium beam standards in either future or current dollars. Relative to the National Aeronautics and Space Administration inflation index, the cost of cesium beam standards has been decreasing by about 2.3% per year since 1966, and that of hydrogen masers by about 0.8% per year since 1978.
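    A constant percentage decline per year is a simple exponential model. The sketch below applies the abstract's decline rates to hypothetical base costs (the dollar figures and years are invented placeholders, not the report's data):

    ```python
    def projected_cost(base_cost, base_year, target_year, annual_decline):
        """Real (inflation-adjusted) cost under a constant fractional decline per year."""
        return base_cost * (1.0 - annual_decline) ** (target_year - base_year)

    # Decline rates (2.3%/yr cesium, 0.8%/yr maser) are from the abstract;
    # the base costs below are invented for illustration.
    print(round(projected_cost(50_000.0, 1985, 1995, 0.023)))   # cesium beam standard
    print(round(projected_cost(200_000.0, 1985, 1995, 0.008)))  # hydrogen maser
    ```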

  19. Rock magnetic properties of dusty olivine: comparison and calibration of non-heating paleointensity methods

    NASA Astrophysics Data System (ADS)

    Lappe, S. L.; Harrison, R. J.; Feinberg, J. M.

    2012-12-01

    The mechanism of chondrule formation is an important outstanding question in cosmochemistry. Magnetic signals recorded by Fe-Ni nanoparticles in chondrules could carry clues to their origin. Recently, research in this area has focused on 'dusty olivine' in ordinary chondrites as a potential carrier of pre-accretionary remanence. Dusty olivine is characterised by the presence of sub-micron Fe-Ni inclusions within the olivine host. These metal particles form via subsolidus reduction of the olivine during chondrule formation and are thought to be protected from subsequent chemical and thermal alteration by the host olivine. Three sets of synthetic dusty olivines have been produced, using natural olivine (average Ni content of 0.3 wt%), synthetic Ni-containing olivine (0.1 wt% Ni) and synthetic Ni-free olivine as starting materials. The starting materials were ground to powders, packed into an 8-27 mm3 graphite crucible, heated to 1350°C under a pure CO gas flow and kept at this temperature for 10 minutes. After this the samples were held in fixed orientation and quenched into water in a range of known magnetic fields from 0.2 mT to 1.5 mT. We present a comparison of all non-heating methods commonly used for paleointensity determination of extraterrestrial material. All samples showed uni-directional, single-component demagnetization behaviour. The saturation REM ratio (NRM/SIRM) and the REMc ratio show non-linear behaviour as a function of applied field and a saturation value < 1. Using the REM' method, the samples showed approximately constant REM' between 100 and 150 mT AF field. Plotting the average values for this field range again shows non-linear behaviour and a saturation value < 1. Another approach we examined to obtain calibration curves for paleointensity determination is based on ARM measurements. We also present an analysis of a new FORC-based method of paleointensity determination applied to metallic Fe-bearing samples [1, 2]. The method uses a first-order reversal
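    A saturating, nonlinear REM calibration curve of the kind described can be fitted and then inverted to estimate a paleofield. The model form R(B) = Rs·(1 − exp(−B/B0)), the data values, and the grid-search fit below are all illustrative assumptions, not the authors' procedure:

    ```python
    import numpy as np

    # Invented REM-ratio calibration data: applied field B (mT) versus NRM/SIRM,
    # showing saturating, nonlinear behaviour with a saturation value < 1.
    B = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.5])
    rem = np.array([0.011, 0.020, 0.027, 0.033, 0.038, 0.047])

    # Fit the saturating model R(B) = Rs*(1 - exp(-B/B0)) by a coarse grid search
    # (kept dependency-free; a real fit would use a nonlinear least-squares routine).
    best = (np.inf, None, None)
    for Rs_try in np.linspace(0.03, 0.10, 141):
        for B0_try in np.linspace(0.3, 3.0, 271):
            sse = np.sum((Rs_try * (1 - np.exp(-B / B0_try)) - rem) ** 2)
            if sse < best[0]:
                best = (sse, Rs_try, B0_try)
    _, Rs, B0 = best

    def paleofield(R):
        """Invert the fitted calibration curve to estimate the ancient field (mT)."""
        return -B0 * np.log(1.0 - R / Rs)

    print(round(Rs, 3), round(B0, 1))
    ```

    Inverting the fitted curve, rather than assuming a linear NRM/SIRM-to-field proportionality, is what the non-linear behaviour in the abstract would require.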

  20. Analysis of Classes of Superlinear Semipositone Problems with Nonlinear Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Morris, Quinn A.

    We study positive radial solutions for classes of steady state reaction diffusion problems on the exterior of a ball with both Dirichlet and nonlinear boundary conditions. We consider p-Laplacian problems (p > 1) with reaction terms which are superlinear at infinity and semipositone. In the case p = 2, using variational methods, we establish the existence of a solution, and via detailed analysis of the Green's function, we prove the positivity of the solution. In the case p ≠ 2, we again use variational methods to establish the existence of a solution, but the positivity of the solution is achieved via sophisticated a priori estimates. In the case p ≠ 2, the Green's function analysis is no longer available. Our results significantly enhance the literature on superlinear semipositone problems. Finally, we provide algorithms for the numerical generation of exact bifurcation curves for one-dimensional problems. In the autonomous case, we extend and analyze a quadrature method, and using nonlinear solvers in Mathematica, generate bifurcation curves. In the nonautonomous case, we employ shooting methods in Mathematica to generate bifurcation curves.

  1. The advantages of absorbed-dose calibration factors.

    PubMed

    Rogers, D W

    1992-01-01

    A formalism for clinical external beam dosimetry based on use of ion chamber absorbed-dose calibration factors is outlined in the context and notation of the AAPM TG-21 protocol. It is shown that basing clinical dosimetry on absorbed-dose calibration factors ND leads to considerable simplification and reduced uncertainty in dose measurement. In keeping with a protocol which is used in Germany, a quantity kQ is defined which relates an absorbed-dose calibration factor in a beam of quality Q0 to that in a beam of quality Q. For 38 cylindrical ion chambers, two sets of values are presented for ND/NX and Ngas/ND and for kQ for photon beams with beam quality specified by the TPR20(10) ratio. One set is based on TG-21's protocol to allow the new formalism to be used while maintaining equivalence to the TG-21 protocol. To demonstrate the magnitude of the overall error in the TG-21 protocol, the other set uses corrected versions of the TG-21 equations and the more consistent physical data of the IAEA Code of Practice. Comparisons are made to procedures based on air-kerma or exposure calibration factors and it is shown that accuracy and simplicity are gained by avoiding the determination of Ngas from NX. It is also shown that the kQ approach simplifies the use of plastic phantoms in photon beams since kQ values change by less than 0.6% compared to those in water, although an overall correction factor of 0.973 is needed to go from absorbed dose in water calibration factors to those in PMMA or polystyrene. Values of kQ calculated using the IAEA Code of Practice are presented but are shown to be anomalous because of the way the effective point of measurement changes for 60Co beams. In photon beams the major difference between the IAEA Code of Practice and the corrected AAPM TG-21 protocol is shown to be the Prepl correction factor. Calculated kQ curves and three-parameter equations for them are presented for each wall material and are shown to accurately represent the kQ curve

  2. Energy calibration of the fly's eye detector

    NASA Technical Reports Server (NTRS)

    Baltrusaitis, R. M.; Cassiday, G. L.; Cooper, R.; Elbert, J. W.; Gerhardy, P. R.; Ko, S.; Loh, E. C.; Mizumoto, Y.; Sokolsky, P.; Steck, D.

    1985-01-01

    The methods used to calibrate the Fly's Eye detector to evaluate the energy of EAS are discussed. The energy of extensive air showers (EAS) as seen by the Fly's Eye detector is obtained from track length integrals of observed shower development curves. The energy of the parent cosmic ray primary is estimated by applying corrections to account for undetected energy in the muon, neutrino and hadronic channels. Absolute values for E depend upon the measurement of shower sizes N_e(x). The following items are necessary to convert apparent optical brightness into intrinsic optical brightness: (1) an assessment of the factors responsible for light production by the relativistic electrons in an EAS and for the transmission of light through the atmosphere, (2) calibration of the optical detection system, and (3) knowledge of the trajectory of the shower.

  3. Stochastic isotropic hyperelastic materials: constitutive calibration and model selection

    NASA Astrophysics Data System (ADS)

    Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain

    2018-03-01

    Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.

  4. Inferring nonlinear gene regulatory networks from gene expression data based on distance correlation.

    PubMed

    Guo, Xiaobo; Zhang, Ye; Hu, Wenhao; Tan, Haizhu; Wang, Xueqin

    2014-01-01

    Nonlinear dependence is general in the regulation mechanisms of gene regulatory networks (GRNs). It is vital to properly measure or test nonlinear dependence from real data for reconstructing GRNs and understanding the complex regulatory mechanisms within the cellular system. A recently developed measure called the distance correlation (DC) has been shown to be powerful and computationally efficient at detecting nonlinear dependence in many situations. In this work, we incorporate the DC into inferring GRNs from gene expression data without any underlying distribution assumptions. We propose three DC-based GRN inference algorithms: CLR-DC, MRNET-DC and REL-DC, and then compare them with the mutual information (MI)-based algorithms by analyzing two simulated data sets: benchmark GRNs from the DREAM challenge and GRNs generated by the SynTReN network generator, and an experimentally determined SOS DNA repair network in Escherichia coli. According to both the receiver operator characteristic (ROC) curve and the precision-recall (PR) curve, our proposed algorithms significantly outperform the MI-based algorithms in GRN inference.
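    Distance correlation itself is straightforward to compute from pairwise distance matrices. The sketch below implements the standard sample statistic and shows it detecting a purely nonlinear dependence that Pearson correlation misses; the data are synthetic, and this is the generic statistic, not the paper's full inference pipeline:

    ```python
    import numpy as np

    def distance_correlation(x, y):
        """Sample distance correlation (Szekely, Rizzo & Bakirov): near zero for
        independent samples and sensitive to nonlinear dependence."""
        x = np.asarray(x, float).reshape(len(x), -1)
        y = np.asarray(y, float).reshape(len(y), -1)

        def doubly_centered_distances(z):
            d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
            return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

        a = doubly_centered_distances(x)
        b = doubly_centered_distances(y)
        dcov2 = (a * b).mean()  # squared sample distance covariance
        return np.sqrt(dcov2 / np.sqrt((a * a).mean() * (b * b).mean()))

    rng = np.random.default_rng(1)
    t = rng.uniform(-1.0, 1.0, 1000)
    u = t**2  # purely nonlinear dependence: Pearson correlation is near zero
    print(f"Pearson = {np.corrcoef(t, u)[0, 1]:.2f}, dCor = {distance_correlation(t, u):.2f}")
    ```

    In a GRN setting, such a statistic would replace the mutual-information score inside scoring schemes like CLR, MRNET or relevance networks, which is how the abstract's CLR-DC, MRNET-DC and REL-DC variants are named.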

  5. Inferring Nonlinear Gene Regulatory Networks from Gene Expression Data Based on Distance Correlation

    PubMed Central

    Guo, Xiaobo; Zhang, Ye; Hu, Wenhao; Tan, Haizhu; Wang, Xueqin

    2014-01-01

    Nonlinear dependence is general in the regulation mechanisms of gene regulatory networks (GRNs). It is vital to properly measure or test nonlinear dependence from real data for reconstructing GRNs and understanding the complex regulatory mechanisms within the cellular system. A recently developed measure called the distance correlation (DC) has been shown to be powerful and computationally efficient at detecting nonlinear dependence in many situations. In this work, we incorporate the DC into inferring GRNs from gene expression data without any underlying distribution assumptions. We propose three DC-based GRN inference algorithms: CLR-DC, MRNET-DC and REL-DC, and then compare them with the mutual information (MI)-based algorithms by analyzing two simulated data sets: benchmark GRNs from the DREAM challenge and GRNs generated by the SynTReN network generator, and an experimentally determined SOS DNA repair network in Escherichia coli. According to both the receiver operator characteristic (ROC) curve and the precision-recall (PR) curve, our proposed algorithms significantly outperform the MI-based algorithms in GRN inference. PMID:24551058

  6. Light curves of flat-spectrum radio sources (Jenness+, 2010)

    NASA Astrophysics Data System (ADS)

    Jenness, T.; Robson, E. I.; Stevens, J. A.

    2010-05-01

    Calibrated data for 143 flat-spectrum extragalactic radio sources are presented at a wavelength of 850um, covering a 5-yr period from 2000 April. The data, obtained at the James Clerk Maxwell Telescope using the Submillimetre Common-User Bolometer Array (SCUBA) camera in pointing mode, were analysed using an automated pipeline process based on the Observatory Reduction and Acquisition Control - Data Reduction (ORAC-DR) system. This paper describes the techniques used to analyse and calibrate the data, and presents the database of results along with a representative sample of the better-sampled light curves. A re-analysis of previously published data from 1997 to 2000 is also presented. The combined catalogue, comprising 10493 flux density measurements, provides a unique and valuable resource for studies of extragalactic radio sources. (2 data files).

  7. Flow of nanofluid by nonlinear stretching velocity

    NASA Astrophysics Data System (ADS)

    Hayat, Tasawar; Rashid, Madiha; Alsaedi, Ahmed; Ahmad, Bashir

    2018-03-01

    The main objective of this article is to model and analyze the nanofluid flow induced by a curved surface with nonlinear stretching velocity. The nanofluid comprises water and silver. The governing problem is solved using the homotopy analysis method (HAM). The induced magnetic field is neglected under the low-magnetic-Reynolds-number assumption. Convergent series solutions for the velocity and the skin friction coefficient are developed. Pressure in the boundary layer flow over a curved stretching surface cannot be ignored. It is found that the magnitude of the pressure distribution increases with the power-law index parameter. The magnitude of the pressure field decreases with increasing radius of curvature, while the opposite trend is observed for the velocity.

  8. Design of airborne imaging spectrometer based on curved prism

    NASA Astrophysics Data System (ADS)

    Nie, Yunfeng; Xiangli, Bin; Zhou, Jinsong; Wei, Xiaoxiao

    2011-11-01

    A novel moderate-resolution imaging spectrometer covering the visible to near-infrared wavelength range with a spectral resolution of 10 nm, which combines curved prisms with the Offner configuration, is introduced. Compared to conventional imaging spectrometers based on dispersive prisms or diffractive gratings, this design possesses the characteristics of small size, compact structure, low mass, and little spectral line curvature (smile) or spectral band curvature (keystone or frown). Besides, the use of compound curved prisms with two or more different materials can greatly reduce the nonlinearity inevitably introduced by prismatic dispersion. The utilization ratio of light radiation is much higher than in imaging spectrometers of the same type based on the combination of a diffractive grating and concentric optics. In this paper, the Seidel aberration theory of curved prisms and the optical principles of the Offner configuration are explained first. Then the optical design layout of the spectrometer is presented, and the performance of this design, including the spot diagram and MTF, is analyzed. Further, several types of telescope matching this system are provided. This work provides an innovative perspective on the optical system design of airborne spectral imagers and can offer theoretical guidance for imaging spectrometers of the same kind.

  9. System calibration method for Fourier ptychographic microscopy

    NASA Astrophysics Data System (ADS)

    Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli

    2017-09-01

    Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic error sources include aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. It is therefore difficult to distinguish the dominant error source from these degraded reconstructions without prior knowledge. In addition, the systematic error is generally a mixture of various error sources in real situations, and they cannot be separated owing to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, the nonlinear regression process, and an adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved in both simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more pragmatic.

  10. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
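    The local straight-line fit near Isc can be illustrated with ordinary least squares on synthetic data (this is the basic regression the abstract starts from, not its objective Bayesian treatment or evidence-based window selection):

    ```python
    import numpy as np

    # Synthetic I-V points near short circuit: I ≈ Isc - G*V plus noise.
    rng = np.random.default_rng(2)
    v = np.linspace(0.0, 0.1, 20)
    i = 5.0 - 0.8 * v + rng.normal(0.0, 0.002, v.size)

    # Straight-line fit I = a + b*V; Isc is the intercept a at V = 0.
    A = np.column_stack([np.ones_like(v), v])
    coef, res, *_ = np.linalg.lstsq(A, i, rcond=None)
    sigma2 = res[0] / (len(v) - 2)          # residual variance estimate
    cov = sigma2 * np.linalg.inv(A.T @ A)   # parameter covariance matrix
    isc, isc_unc = coef[0], np.sqrt(cov[0, 0])
    print(f"Isc = {isc:.4f} +/- {isc_unc:.4f} A")
    ```

    The abstract's caution applies here: widening the data window shrinks this fit uncertainty, but if the straight-line model stops being a valid local description of the I-V curve, the quoted uncertainty no longer covers the model discrepancy.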

  11. Practical application of electromyogram radiotelemetry: the suitability of applying laboratory-acquired calibration data to field data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geist, David R.; Brown, Richard S.; Lepla, Ken

    One of the practical problems with quantifying the amount of energy used by fish implanted with electromyogram (EMG) radio transmitters is that the signals emitted by the transmitter provide only a relative index of activity unless they are calibrated to the swimming speed of the fish. Ideally calibration would be conducted for each fish before it is released, but this is often not possible and calibration curves derived from more than one fish are used to interpret EMG signals from individuals which have not been calibrated. We tested the validity of this approach by comparing EMG data within three groups of three wild juvenile white sturgeon Acipenser transmontanus implanted with the same EMG radio transmitter. We also tested an additional six fish which were implanted with separate EMG transmitters. Within each group, a single EMG radio transmitter usually did not produce similar results in different fish. Grouping EMG signals among fish produced less accurate results than having individual EMG-swim speed relationships for each fish. It is unknown whether these differences were a result of different swimming performances among individual fish or inconsistencies in the placement or function of the EMG transmitters. In either case, our results suggest that caution should be used when applying calibration curves from one group of fish to another group of uncalibrated fish.

  12. Numerical study of turbulent secondary flows in curved ducts

    NASA Technical Reports Server (NTRS)

    Hur, N.; Thangam, S.; Speziale, C. G.

    1989-01-01

    The pressure driven, fully-developed turbulent flow of an incompressible viscous fluid in curved ducts of square cross-section is studied numerically by making use of a finite volume method. A nonlinear K-l model is used to represent the turbulence. The results for both straight and curved ducts are presented. For the case of fully-developed turbulent flow in straight ducts, the secondary flow is characterized by an eight-vortex structure for which the computed flowfield is shown to be in good agreement with available experimental data. The introduction of moderate curvature is shown to cause a substantial increase in the strength of the secondary flow and to change the secondary flow pattern to either a double-vortex or a four-vortex configuration.

  13. Nonlinear dynamics near the stability margin in rotating pipe flow

    NASA Technical Reports Server (NTRS)

    Yang, Z.; Leibovich, S.

    1991-01-01

    The nonlinear evolution of marginally unstable wave packets in rotating pipe flow is studied. These flows depend on two control parameters, which may be taken to be the axial Reynolds number R and a Rossby number, q. Marginal stability is realized on a curve in the (R, q)-plane, and the entire marginal stability boundary is explored. As the flow passes through any point on the marginal stability curve, it undergoes a supercritical Hopf bifurcation and the steady base flow is replaced by a traveling wave. The envelope of the wave system is governed by a complex Ginzburg-Landau equation. The Ginzburg-Landau equation admits Stokes waves, which correspond to standing modulations of the linear traveling wavetrain, as well as traveling wave modulations of the linear wavetrain. Bands of wavenumbers are identified in which the nonlinear modulated waves are subject to a sideband instability.
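    In one standard normalization (the abstract does not give the authors' exact form, and the coefficients below are generic), the complex Ginzburg-Landau equation and its Stokes-wave solutions read:

    ```latex
    \partial_t A = A + (1 + i c_1)\,\partial_x^2 A - (1 + i c_2)\,|A|^2 A,
    \qquad
    A = \sqrt{1 - k^2}\; e^{i(kx - \omega t)},
    \quad
    \omega = c_1 k^2 + c_2\,(1 - k^2).
    ```

    Here c1 and c2 would be fixed by the flow parameters (R, q). Sideband instability restricts the band of modulation wavenumbers k carrying stable waves, and when 1 + c1 c2 < 0 all such plane waves are unstable (the Benjamin-Feir-Newell criterion), consistent with the bands of unstable wavenumbers identified in the abstract.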

  14. Crash prediction modeling for curved segments of rural two-lane two-way highways in Utah.

    DOT National Transportation Integrated Search

    2015-10-01

    This report contains the results of the development of crash prediction models for curved segments of rural : two-lane two-way highways in the state of Utah. The modeling effort included the calibration of the predictive : model found in the Highway ...

  15. Buckling Behavior of Compression-Loaded Quasi-Isotropic Curved Panels with a Circular Cutout

    NASA Technical Reports Server (NTRS)

    Hilburger, Mark W.; Britt, Vicki O.; Nemeth, Michael P.

    1999-01-01

    Results from a numerical and experimental study of the response of compression-loaded quasi-isotropic curved panels with a centrally located circular cutout are presented. The numerical results were obtained by using a geometrically nonlinear finite element analysis code. The effects of cutout size, panel curvature and initial geometric imperfections on the overall response of compression-loaded panels are described. In addition, results are presented from a numerical parametric study that indicate the effects of elastic circumferential edge restraints on the prebuckling and buckling response of a selected panel, and these numerical results are compared to experimentally measured results. These restraints are used to identify the effects of circumferential edge restraints that are introduced by the test fixture used in the present study. It is shown that circumferential edge restraints can introduce substantial nonlinear prebuckling deformations into shallow compression-loaded curved panels that can result in a significant increase in buckling load.

  16. Maximum likelihood estimation in calibrating a stereo camera setup.

    PubMed

    Muijtjens, A M; Roos, J M; Arts, T; Hasman, A

    1999-02-01

    Motion and deformation of the cardiac wall may be measured by following the positions of implanted radiopaque markers in three dimensions, using two x-ray cameras simultaneously. Usually, calibration of the position measurement system is obtained by registering images of a calibration object containing 10-20 radiopaque markers at known positions. Unfortunately, an accidental change of the position of a camera after calibration requires complete recalibration. Alternatively, redundant information in the measured image positions of stereo pairs can be used for calibration, so that a separate calibration procedure can be avoided. In the current study a model is developed that describes the geometry of the camera setup by five dimensionless parameters. Maximum Likelihood (ML) estimates of these parameters were obtained in an error analysis. It is shown that the ML estimates can be found by application of a nonlinear least squares procedure. Compared to the standard unweighted least squares procedure, the ML method resulted in more accurate estimates without noticeable bias. The accuracy of the ML method was investigated in relation to the object aperture. The reconstruction problem appeared well conditioned as long as the object aperture is larger than 0.1 rad. The angle between the two viewing directions appeared to be the parameter most likely to cause major inaccuracies in the reconstruction of the 3-D positions of the markers. Hence, attempts to improve the robustness of the method should primarily focus on reduction of the error in this parameter.
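    The link between maximum likelihood and nonlinear least squares used in the abstract can be shown on a toy one-parameter problem: with i.i.d. Gaussian noise, maximizing the likelihood is exactly minimizing the sum of squared residuals. The exponential model and all numbers below are invented, and a plain Gauss-Newton iteration stands in for the paper's five-parameter camera-geometry fit:

    ```python
    import numpy as np

    # Synthetic measurements of a one-parameter model f(x; theta) = exp(-theta*x)
    # with Gaussian noise, for which the ML estimate is the least-squares solution.
    rng = np.random.default_rng(3)
    theta_true = 1.7
    x = np.linspace(0.1, 2.0, 40)
    y = np.exp(-theta_true * x) + rng.normal(0.0, 0.01, x.size)

    theta = 1.0  # initial guess
    for _ in range(50):  # Gauss-Newton iterations
        f = np.exp(-theta * x)
        J = -x * f                    # Jacobian df/dtheta
        r = y - f                     # residuals
        theta += (J @ r) / (J @ J)    # normal-equations update
    print(round(theta, 2))
    ```

    With several parameters, J becomes a matrix and the update solves (JᵀJ)δ = Jᵀr; weighting the residuals by their measurement variances gives the weighted variant compared against the unweighted fit in the abstract.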

  17. Third-order nonlinear optical properties of organic azo dyes by using strength of nonlinearity parameter and Z-scan technique

    NASA Astrophysics Data System (ADS)

    Motiei, H.; Jafari, A.; Naderali, R.

    2017-02-01

    In this paper, two chemically synthesized organic azo dyes, 2-(2,5-Dichloro-phenyazo)-5,5-dimethyl-cyclohexane-1,3-dione (azo dye (i)) and 5,5-Dimethyl-2-tolylazo-cyclohexane-1,3-dione (azo dye (ii)), have been studied from the optical Kerr nonlinearity point of view. These materials were characterized by ultraviolet-visible spectroscopy. Experiments were performed using a continuous-wave diode-pumped laser at 532 nm wavelength at three intensities of the laser beam. The nonlinear absorption coefficient (β), nonlinear refractive index (n2) and third-order susceptibility (χ(3)) of the dyes were calculated. The nonlinear absorption coefficients were obtained by two methods: 1) using theoretical fits to the experimental data in the Z-scan technique; 2) using the strength-of-nonlinearity curves. The values of β obtained from both methods were approximately the same. The results demonstrated that azo dye (ii) displays better nonlinearity and has a lower two-photon absorption threshold than azo dye (i). The calculated strength-of-nonlinearity parameter for azo dye (ii) was higher than that for azo dye (i), possibly owing to the presence of a methyl group in azo dye (ii) in place of the chlorine in azo dye (i). Furthermore, the measured third-order susceptibilities of the azo dyes were of the order of 10^-9 esu. These azo dyes can be suitable candidates for optical switching devices.
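    The open-aperture Z-scan curve from which β is typically extracted has a standard closed-form shape in the low-absorption limit. The sketch below evaluates it for invented parameter values (not the dyes' measured ones); fitting this shape to a measured transmittance trace is what yields β:

    ```python
    import numpy as np

    def open_aperture_T(z, beta, I0, L_eff, z0):
        """Normalized open-aperture Z-scan transmittance in the low-absorption limit:
        T(z) ≈ 1 - q0 / (2*sqrt(2)*(1 + z^2/z0^2)), with q0 = beta*I0*L_eff."""
        q0 = beta * I0 * L_eff
        return 1.0 - q0 / (2.0 * np.sqrt(2.0) * (1.0 + (z / z0) ** 2))

    # Invented parameters: beta in m/W, on-axis peak intensity I0 in W/m^2,
    # effective sample length L_eff in m, Rayleigh range z0 in m.
    z = np.linspace(-0.02, 0.02, 201)
    T = open_aperture_T(z, beta=5e-11, I0=2e12, L_eff=1e-3, z0=4e-3)
    print(round(T.min(), 3))  # deepest dip occurs at the focus, z = 0
    ```

    For these invented parameters q0 = 0.1, so the transmittance dip at focus is 1 − 0.1/(2√2) ≈ 0.965; a deeper dip at higher intensity is the signature of two-photon absorption.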

  18. A Nonlinearity Minimization-Oriented Resource-Saving Time-to-Digital Converter Implemented in a 28 nm Xilinx FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Liu, Chong

    2015-10-01

    Because large nonlinearity errors exist in current tapped-delay line (TDL) style field programmable gate array (FPGA)-based time-to-digital converters (TDCs), bin-by-bin calibration techniques must be employed to achieve a high measurement resolution. If the TDL in the selected FPGA is significantly affected by changes in ambient temperature, the bin-by-bin calibration table has to be updated as frequently as possible. The on-line calibration and calibration table updating increase the TDC design complexity and limit the system performance to some extent. This paper proposes a method to minimize the nonlinearity errors of TDC bins, so that bin-by-bin calibration may not be needed while a reasonably high time resolution is maintained. The method is a two-pass approach: by a bin realignment, the large number of wasted zero-width bins in the original TDL is reused and the granularity of the bins is improved; by a bin decimation, the bin size and its uniformity are traded off, and the time interpolation by the delay line becomes more precise, so that bin-by-bin calibration is not necessary. Using Xilinx 28 nm FPGAs, in which the TDL properties are not very sensitive to ambient temperature, the proposed TDC achieves approximately 15 ps root-mean-square (RMS) time resolution in dual-channel measurements of time intervals over the operating temperature range. Because calibration is removed and less logic is required for data post-processing, the method has greater multi-channel capability.
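    The bin-by-bin calibration the paper aims to avoid is usually built with a code-density (statistical) test: hits uncorrelated with the clock populate each delay bin in proportion to its true width. A sketch with an invented 64-bin delay line (the bin-width distribution and hit counts are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical TDL: 64 delay bins with non-uniform widths (ps) spanning one period.
    true_widths = rng.gamma(4.0, 2.5, 64)
    period = true_widths.sum()
    edges = np.concatenate([[0.0], np.cumsum(true_widths)])

    # Code-density test: uniformly random hit times fall into each bin
    # in proportion to its width.
    hits = rng.uniform(0.0, period, 200_000)
    counts = np.histogram(hits, bins=edges)[0]

    est_widths = counts / counts.sum() * period            # estimated bin widths
    calib_centers = np.cumsum(est_widths) - est_widths / 2 # bin-by-bin calibration table
    lsb = period / len(est_widths)                          # ideal (uniform) bin size
    dnl = est_widths / lsb - 1.0                            # differential nonlinearity
    print(round(float(np.max(np.abs(est_widths - true_widths))), 2), "ps max width error")
    ```

    The calibration table maps each raw code to the center of its estimated bin; the paper's point is that if the bins can be made uniform enough by realignment and decimation, this table (and its temperature-driven updates) can be dropped.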

  19. First full dynamic range calibration of the JUNGFRAU photon detector

    NASA Astrophysics Data System (ADS)

    Redford, S.; Andrä, M.; Barten, R.; Bergamaschi, A.; Brückner, M.; Dinapoli, R.; Fröjdh, E.; Greiffenberg, D.; Lopez-Cuenca, C.; Mezza, D.; Mozzanica, A.; Ramilli, M.; Ruat, M.; Ruder, C.; Schmitt, B.; Shi, X.; Thattil, D.; Tinti, G.; Vetter, S.; Zhang, J.

    2018-01-01

    The JUNGFRAU detector is a charge-integrating hybrid silicon pixel detector developed at the Paul Scherrer Institut for photon science applications, in particular for the upcoming free electron laser SwissFEL. With a high dynamic range, analogue readout, low noise and three automatically switching gains, JUNGFRAU promises excellent performance not only at XFELs but also at synchrotrons in areas such as protein crystallography, ptychography, pump-probe and time-resolved measurements. To achieve its full potential, the detector must be calibrated on a pixel-by-pixel basis. This contribution presents the current status of the JUNGFRAU calibration project, in which a variety of input charge sources are used to parametrise the energy response of the detector across four orders of magnitude of dynamic range. Building on preliminary studies, the first full calibration procedure of a JUNGFRAU 0.5 Mpixel module is described. The calibration is validated using alternative sources of charge deposition, including laboratory experiments and measurements at ESRF and LCLS. The findings from these measurements are presented. Calibrated modules have already been used in proof-of-principle protein crystallography experiments at the SLS. A first look at selected results is shown. Aspects such as the conversion of charge to number of photons, the treatment of multi-size pixels and the origin of non-linear response are also discussed.

  20. Drift-insensitive distributed calibration of probe microscope scanner in nanometer range: Virtual mode

    NASA Astrophysics Data System (ADS)

    Lapshin, Rostislav V.

    2016-08-01

    A method of distributed calibration of a probe microscope scanner is suggested. The main idea consists in a search for a net of local calibration coefficients (LCCs) during automatic measurement of a standard surface, whereby each point of the scanner's movement space can be characterized by a unique set of scale factors. Feature-oriented scanning (FOS) methodology is used as the basis for implementing the distributed calibration, permitting the negative influence of thermal drift, creep and hysteresis on the obtained results to be excluded in situ. Possessing the calibration database enables correction, in one procedure, of all the spatial systematic distortions caused by nonlinearity, nonorthogonality and spurious crosstalk couplings of the microscope scanner piezomanipulators. To provide high precision of spatial measurements in the nanometer range, the calibration is carried out using natural standards - the constants of a crystal lattice. One useful mode of the developed calibration method is the virtual mode. In the virtual mode, instead of measuring a real surface of the standard, the calibration program makes a surface-image "measurement" of the standard, which was obtained earlier by conventional raster scanning. The virtual mode permits simulation of the calibration process and detailed analysis of the raster distortions occurring in both conventional and counter surface scanning. Moreover, the mode allows estimation of the thermal drift and creep velocities acting during surface scanning. Virtual calibration makes possible automatic characterization of a surface by scanning probe microscopy (SPM).

  1. Structural efficiency studies of corrugated compression panels with curved caps and beaded webs

    NASA Technical Reports Server (NTRS)

    Davis, R. C.; Mills, C. T.; Prabhakaran, R.; Jackson, L. R.

    1984-01-01

    Curved cross-sectional elements are employed in structural concepts for minimum-mass compression panels. Corrugated panel concepts with curved caps and beaded webs are optimized by using a nonlinear mathematical programming procedure and a rigorous buckling analysis. These panel geometries are shown to have superior structural efficiencies compared with known concepts published in the literature. Fabrication of these efficient corrugation concepts became possible through advances made in the art of superplastic forming of metals. Results of the mass optimization studies of the concepts are presented as structural efficiency charts for axial compression.

  2. Quantum spectral curve of the N=6 supersymmetric Chern-Simons theory.

    PubMed

    Cavaglià, Andrea; Fioravanti, Davide; Gromov, Nikolay; Tateo, Roberto

    2014-07-11

    Recently, it was shown that the spectrum of anomalous dimensions and other important observables in planar N=4 supersymmetric Yang-Mills theory are encoded into a simple nonlinear Riemann-Hilbert problem: the Pμ system or quantum spectral curve. In this Letter, we extend this formulation to the N=6 supersymmetric Chern-Simons theory introduced by Aharony, Bergman, Jafferis, and Maldacena. This may be an important step towards the exact determination of the interpolating function h(λ) characterizing the integrability of this model. We also discuss a surprising relation between the quantum spectral curves for the N=4 supersymmetric Yang-Mills theory and the N=6 supersymmetric Chern-Simons theory considered here.

  3. A New Approach to the Internal Calibration of Reverberation-Mapping Spectra

    NASA Astrophysics Data System (ADS)

    Fausnaugh, M. M.

    2017-02-01

    We present a new procedure for the internal (night-to-night) calibration of time-series spectra, with specific applications to optical AGN reverberation mapping data. The traditional calibration technique assumes that the narrow [O iii] λ5007 emission-line profile is constant in time; given a reference [O iii] λ5007 line profile, nightly spectra are aligned by fitting for a wavelength shift, a flux rescaling factor, and a change in the spectroscopic resolution. We propose the following modifications to this procedure: (1) we stipulate a constant spectral resolution for the final calibrated spectra, (2) we employ a more flexible model for changes in the spectral resolution, and (3) we use a Bayesian modeling framework to assess uncertainties in the calibration. In a test case using data for MCG+08-11-011, these modifications result in a calibration precision of ∼1 millimagnitude, which is approximately a factor of five improvement over the traditional technique. At this level, other systematic issues (e.g., the nightly sensitivity functions and Fe ii contamination) limit the final precision of the observed light curves. We implement this procedure as a python package (mapspec), which we make available to the community.
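The traditional alignment step described above (fitting a wavelength shift and a flux rescaling factor against a reference [O III] profile) can be sketched on synthetic data. The Gaussian profile, shift, and scale below are illustrative assumptions, not mapspec itself, and the resolution-change term is omitted for brevity; at each trial shift the flux factor has a closed-form least-squares solution.

```python
import numpy as np

wav = np.linspace(4950.0, 5060.0, 550)
ref = np.exp(-0.5 * ((wav - 5007.0) / 3.0) ** 2)      # reference line profile

true_shift, true_scale = 0.8, 1.3                      # Angstroms, unitless
rng = np.random.default_rng(2)
night = true_scale * np.exp(-0.5 * ((wav - 5007.0 - true_shift) / 3.0) ** 2)
night += rng.normal(0.0, 0.01, wav.size)               # nightly spectrum

best = (np.inf, None, None)
for shift in np.arange(-2.0, 2.0, 0.02):
    model = np.interp(wav, wav + shift, ref)           # shifted reference
    scale = np.dot(model, night) / np.dot(model, model)  # closed-form flux factor
    resid = np.sum((night - scale * model) ** 2)
    if resid < best[0]:
        best = (resid, shift, scale)

_, shift_fit, scale_fit = best
print(shift_fit, scale_fit)
```

The Bayesian framework in the paper replaces this single point estimate with a posterior over the alignment parameters, which is what allows the calibration uncertainty itself to be propagated into the light curves.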

  4. Forming limit strains for non-linear strain path of AA6014 aluminium sheet deformed at room temperature

    NASA Astrophysics Data System (ADS)

    Bressan, José Divo; Liewald, Mathias; Drotleff, Klaus

    2017-10-01

    Forming limit strain curves of conventional aluminium alloy AA6014 sheet after loading along non-linear strain paths are presented and compared with the D-Bressan macroscopic model of sheet metal rupture by a critical shear stress criterion. AA6014 exhibits good formability at room temperature and is therefore mainly employed in car body external parts manufactured at room temperature. Following Weber et al., experimental bi-linear strain paths were realized on specimens of 1 mm thickness by pre-stretching in uniaxial and biaxial directions up to 5%, 10% and 20% strain before performing Nakajima tests to obtain the forming limit strain curves (FLCs). In addition, the FLCs of AA6014 were predicted by employing the D-Bressan critical shear stress criterion for bi-linear strain paths, and comparisons with the experimental FLCs were analyzed and discussed. In order to obtain the material coefficients of plastic anisotropy and of strain and strain-rate hardening, and to calibrate the D-Bressan model, tensile tests at two different strain rates on specimens cut at 0°, 45° and 90° to the rolling direction, as well as bulge tests, were carried out at room temperature. The correlation of the experimental bi-linear strain path FLCs with the limit strains predicted by the D-Bressan model is reasonably good, assuming an equivalent pre-strain calculated by the Hill 1979 yield criterion.

  5. Using Machine Learning To Predict Which Light Curves Will Yield Stellar Rotation Periods

    NASA Astrophysics Data System (ADS)

    Agüeros, Marcel; Teachey, Alexander

    2018-01-01

    Using time-domain photometry to reliably measure a solar-type star's rotation period requires that its light curve have a number of favorable characteristics. The probability of recovering a period will be a non-linear function of these light curve features, which are either astrophysical in nature or set by the observations. We employ standard machine learning algorithms (artificial neural networks and random forests) to predict whether a given light curve will produce a robust rotation period measurement from its Lomb-Scargle periodogram. The algorithms are trained and validated using salient statistics extracted from both simulated light curves and their corresponding periodograms, and we apply these classifiers to the most recent Intermediate Palomar Transient Factory (iPTF) data release. With this pipeline, we anticipate measuring rotation periods for a significant fraction of the ∼4×10^8 stars in the iPTF footprint.
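As a toy illustration of this classification setup: the paper uses neural networks and random forests on statistics extracted from light curves and periodograms, but a plain logistic regression on two made-up features (a normalized periodogram peak power and an epoch count) shows the same train-then-predict structure.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
peak_power = rng.uniform(0.0, 1.0, n)          # stand-in: Lomb-Scargle peak power
n_epochs = rng.uniform(0.0, 1.0, n)            # stand-in: normalized epoch count
X = np.column_stack([peak_power, n_epochs, np.ones(n)])   # bias column

# synthetic ground truth: recovery is likely when both features are high
y = (peak_power + 0.5 * n_epochs + rng.normal(0, 0.1, n) > 0.9).astype(float)

w = np.zeros(3)
for _ in range(2000):                           # batch gradient descent
    p = 1.0 / (1.0 + np.exp(-X @ w))            # predicted recovery probability
    w -= 0.5 * X.T @ (p - y) / n

pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5)
accuracy = np.mean(pred == y)
print(accuracy)
```

In practice the classifier's probability output is the useful quantity: it lets a survey rank targets by how likely a period measurement is to succeed before spending analysis effort on them.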

  6. The Sloan Digital Sky Survey-II: Photometry and Supernova Ia Light Curves from the 2005 Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holtzman, Jon A.; /New Mexico State U.; Marriner, John

    2010-08-26

    We present ugriz light curves for 146 spectroscopically confirmed or spectroscopically probable Type Ia supernovae from the 2005 season of the SDSS-II Supernova survey. The light curves have been constructed using a photometric technique that we call scene modeling, which is described in detail here; the major feature is that supernova brightnesses are extracted from a stack of images without spatial resampling or convolution of the image data. This procedure produces accurate photometry along with accurate estimates of the statistical uncertainty, and can be used to derive photometry taken with multiple telescopes. We discuss various tests of this technique that demonstrate its capabilities. We also describe the methodology used for the calibration of the photometry, and present calibrated magnitudes and fluxes for all of the spectroscopic SNe Ia from the 2005 season.

  7. Psychophysical Calibration of Mobile Touch-Screens for Vision Testing in the Field

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.

    2015-01-01

    The now ubiquitous nature of touch-screen displays in cell phones and tablet computers makes them an attractive option for vision testing outside of the laboratory or clinic. Accurate measurement of parameters such as contrast sensitivity, however, requires precise control of absolute and relative screen luminances. The nonlinearity of the display response (gamma) can be measured or checked using a minimum motion technique similar to that developed by Anstis and Cavanagh (1983) for the determination of isoluminance. While the relative luminances of the color primaries vary between subjects (due to factors such as individual differences in pre-retinal pigment densities), the gamma nonlinearity can be checked in the lab using a photometer. Here we compare results obtained using the psychophysical method with physical measurements for a number of different devices. In addition, we present a novel physical method using the device's built-in front-facing camera in conjunction with a mirror to jointly calibrate the camera and display. A high degree of consistency between devices is found, but some departures from ideal performance are observed. In spite of this, the effects of calibration errors and display artifacts on estimates of contrast sensitivity are found to be small.

  8. ANN-based calibration model of FTIR used in transformer online monitoring

    NASA Astrophysics Data System (ADS)

    Li, Honglei; Liu, Xian-yong; Zhou, Fangjie; Tan, Kexiong

    2005-02-01

    Recently, chromatography columns and gas sensors have been used in devices for online monitoring of gases dissolved in transformer oil. But some disadvantages still exist in these devices: consumption of carrier gas, requirement of calibration, etc. Since FTIR has high accuracy, consumes no carrier gas and requires no calibration, the researchers studied the application of FTIR in such monitoring devices. "Flow gas method" experiments were designed, and spectra of mixtures composed of different gases were collected with a BOMEM MB104 FTIR spectrometer. A key question in the application of FTIR is that the absorbance spectra of three key fault gases, C2H4, CH4 and C2H6, overlap severely at 2700-3400 cm-1. Because the absorbance (Beer-Lambert) law is no longer appropriate, a nonlinear calibration model based on a BP ANN was set up for the quantitative analysis. The peak-height absorbances of C2H4, CH4 and C2H6 were adopted as quantitative features, and all the data were normalized before training the ANN. Computational results show that the calibration model can effectively eliminate cross-disturbance in the measurement.

  9. Radiometric and spectral calibrations of the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) using principle component analysis

    NASA Astrophysics Data System (ADS)

    Tian, Jialin; Smith, William L.; Gazarik, Michael J.

    2008-10-01

    The ultimate remote sensing benefits of high-resolution infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and of pollutant and greenhouse gas constituents to be observed, for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw GIFTS interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. The radiometric calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. The absolute radiometric performance of the instrument is affected by several factors, including the FPA off-axis effect, detector/readout-electronics-induced nonlinearity distortions, and fore-optics offsets. The GIFTS-EDU, being the very first imaging spectrometer to use ultra-high-speed electronics to read out its large-area-format focal plane array detectors, operating at wavelengths as long as 15 microns, possessed non-linearities not easily removable in the initial calibration process. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts remaining after the initial radiometric calibration process, thus further enhancing the absolute calibration accuracy. This method is

  10. Application of Least-Squares Adjustment Technique to Geometric Camera Calibration and Photogrammetric Flow Visualization

    NASA Technical Reports Server (NTRS)

    Chen, Fang-Jenq

    1997-01-01

    Flow visualization produces data in the form of two-dimensional images. If the optical components of a camera system are perfect, the transformation equations between the two-dimensional image and the three-dimensional object space are linear and easy to solve. However, real camera lenses introduce nonlinear distortions that affect the accuracy of the transformation unless proper corrections are applied. An iterative least-squares adjustment algorithm is developed to solve the nonlinear transformation equations incorporating distortion corrections. Experimental applications demonstrate that a relative precision on the order of 1 part in 40,000 is achievable without tedious laboratory calibrations of the camera.
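One piece of such a distortion correction can be sketched in isolation. Assuming the common single-coefficient radial model r_d = r(1 + k1·r²) (the paper's full camera model also includes pose and projection terms, which are what make the adjustment iterative), the residual r_d − r is linear in k1, so k1 has a closed-form least-squares estimate:

```python
import numpy as np

rng = np.random.default_rng(4)
r = rng.uniform(0.0, 1.0, 500)            # undistorted radii (normalized units)
k1_true = -0.12                            # assumed barrel-distortion coefficient
r_d = r * (1.0 + k1_true * r ** 2) + rng.normal(0.0, 1e-4, r.size)

# least squares for (r_d - r) = k1 * r^3
k1_fit = np.sum(r ** 3 * (r_d - r)) / np.sum(r ** 6)
print(k1_fit)
```

In a full bundle adjustment this linear solve becomes one block of the normal equations, re-solved at each Gauss-Newton iteration together with the nonlinear exterior-orientation parameters.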

  11. Observation of nonlinear dissipation in piezoresistive diamond nanomechanical resonators by heterodyne down-mixing.

    PubMed

    Imboden, Matthias; Williams, Oliver A; Mohanty, Pritiraj

    2013-09-11

    We report the observation of nonlinear dissipation in diamond nanomechanical resonators measured by an ultrasensitive heterodyne down-mixing piezoresistive detection technique. The combination of a hybrid structure as well as symmetry-breaking clamps enables sensitive piezoresistive detection of multiple orthogonal modes in a diamond resonator over a wide frequency and temperature range. Using this detection method, we observe the transition from purely linear dissipation at room temperature to strongly nonlinear dissipation at cryogenic temperatures. At high drive powers and below liquid nitrogen temperatures, the resonant structure dynamics follows the van der Pol-Duffing equation of motion. Instead of using the broadening of the full width at half-maximum, we propose a nonlinear dissipation backbone curve as a method to characterize the strength of nonlinear dissipation in devices with a nonlinear spring constant.

  12. AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves

    NASA Astrophysics Data System (ADS)

    Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.

    2017-02-01

    ImageJ is a graphical user interface (GUI)-driven, public domain, Java-based software package for general image processing, traditionally used mainly in the life sciences. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy-specific image display environment and tools for astronomy-specific image calibration and data reduction. Although AIJ maintains the general-purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research-grade image calibration and analysis tools with a GUI-driven approach and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate students, high school students, or amateur astronomers, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.

  13. Energy dispersive X-ray fluorescence (EDXRF) equipment calibration for multielement analysis of soil and rock samples

    NASA Astrophysics Data System (ADS)

    de Moraes, Alex Silva; Tech, Lohane; Melquíades, Fábio Luiz; Bastos, Rodrigo Oliveira

    2014-11-01

    Considering the importance of understanding the behavior of the elements in different natural and/or anthropic processes, this study had as its objective to verify the accuracy of a multielement analysis method for rock characterization using soil standards as the calibration reference. An EDXRF instrument was used. The analyses were made on samples doped with known concentrations of Mn, Zn, Rb, Sr and Zr, to obtain the calibration curves, and on a certified rock sample to check the accuracy of the analytical curves. A set of rock samples from Rio Bonito, located in Figueira city, Paraná State, Brazil, was then analyzed. The concentrations obtained, in ppm, for Mn, Rb, Sr and Zr ranged, respectively, from 175 to 1084, 7.4 to 268, 28 to 2247, and 15 to 761.
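The calibration-curve step can be sketched with hypothetical numbers (not the paper's data): fit a straight line of fluorescence counts against the known concentrations of the doped standards, then invert it to read the concentration of an unknown sample.

```python
import numpy as np

# hypothetical standards: known concentrations and measured fluorescence counts
conc_ppm = np.array([0.0, 200.0, 400.0, 600.0, 800.0, 1000.0])
counts = np.array([52.0, 460.0, 905.0, 1310.0, 1740.0, 2180.0])

# linear calibration curve: counts = slope * concentration + intercept
slope, intercept = np.polyfit(conc_ppm, counts, 1)

def concentration(measured_counts):
    # invert the calibration curve for an unknown sample
    return (measured_counts - intercept) / slope

c_unknown = concentration(1100.0)
print(slope, intercept, c_unknown)
```

The nonzero intercept plays the role of the blank/background signal; checking the fit against an independent certified sample, as the authors do, guards against matrix effects that a soil-standard calibration might not capture for rocks.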

  14. Maximally Informative Stimuli and Tuning Curves for Sigmoidal Rate-Coding Neurons and Populations

    NASA Astrophysics Data System (ADS)

    McDonnell, Mark D.; Stocks, Nigel G.

    2008-08-01

    A general method for deriving maximally informative sigmoidal tuning curves for neural systems with small normalized variability is presented. The optimal tuning curve is a nonlinear function of the cumulative distribution function of the stimulus and depends on the mean-variance relationship of the neural system. The derivation is based on a known relationship between Shannon’s mutual information and Fisher information, and the optimality of Jeffreys' prior. It relies on the existence of closed-form solutions to the converse problem of optimizing the stimulus distribution for a given tuning curve. It is shown that maximum mutual information corresponds to constant Fisher information only if the stimulus is uniformly distributed. As an example, the case of sub-Poisson binomial firing statistics is analyzed in detail.
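A toy numerical illustration of the central construction: the optimal tuning curve is a function of the stimulus CDF. Here we assume the simplest case, a firing rate directly proportional to the CDF (the general form depends on the system's mean-variance relation), built for a Gaussian-distributed stimulus from an empirical CDF.

```python
import numpy as np

rng = np.random.default_rng(6)
stimuli = rng.normal(0.0, 1.0, 100_000)        # samples from the stimulus distribution

s_grid = np.linspace(-3.0, 3.0, 61)
# empirical CDF: fraction of stimulus samples below each grid point
cdf = np.searchsorted(np.sort(stimuli), s_grid) / stimuli.size

r_max = 100.0                                   # peak firing rate (assumed)
tuning = r_max * cdf                            # sigmoidal tuning curve

# the curve is monotone and saturates in the tails, as a sigmoid should
print(tuning[0], tuning[30], tuning[-1])
```

Because the CDF of a Gaussian is itself sigmoidal, this construction concentrates the steep (informative) part of the tuning curve where stimuli are most probable, which is the intuition behind the paper's result.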

  15. From nonlinear Schrödinger hierarchy to some (2+1)-dimensional nonlinear pseudodifferential equations

    NASA Astrophysics Data System (ADS)

    Yang, Xiao; Du, Dianlou

    2010-08-01

    The Poisson structure on C^N×R^N is introduced to give the Hamiltonian system associated with a spectral problem which yields the nonlinear Schrödinger (NLS) hierarchy. The Hamiltonian system is proven to be Liouville integrable. Some (2+1)-dimensional equations, including the NLS equation, the Kadomtsev-Petviashvili I (KPI) equation, the coupled KPI equation, and the modified Kadomtsev-Petviashvili (mKP) equation, are decomposed into Hamiltonian flows via the NLS hierarchy. The algebraic curve, Abel-Jacobi coordinates, and Riemann-Jacobi inversion are used to obtain the algebro-geometric solutions of these equations.

  16. A nonlinear beam model to describe the postbuckling of wide neo-Hookean beams

    NASA Astrophysics Data System (ADS)

    Lubbers, Luuk A.; van Hecke, Martin; Coulais, Corentin

    2017-09-01

    Wide beams can exhibit subcritical buckling, i.e. the slope of the force-displacement curve can become negative in the postbuckling regime. In this paper, we capture this intriguing behaviour by constructing a 1D nonlinear beam model, whose central ingredient is the nonlinearity in the stress-strain relation of the beams' constitutive material. First, we present experimental and numerical evidence of a transition to subcritical buckling for wide neo-Hookean hyperelastic beams, when their width-to-length ratio exceeds a critical value of 12%. Second, we construct an effective 1D energy density by combining the Mindlin-Reissner kinematics with a nonlinearity in the stress-strain relation. Finally, we establish and solve the governing beam equations to analytically determine the slope of the force-displacement curve in the postbuckling regime. We find, without any adjustable parameters, excellent agreement between the 1D theory, experiments and simulations. Our work extends the understanding of the postbuckling of structures made of wide elastic beams and opens up avenues for the reverse-engineering of instabilities in soft materials and metamaterials.

  17. Lower extremity kinematics of athletics curve sprinting.

    PubMed

    Alt, Tobias; Heinrich, Kai; Funken, Johannes; Potthast, Wolfgang

    2015-01-01

    Curve running requires the generation of centripetal force, altering the movement pattern in comparison to running on a straight path. The question arises as to which kinematic modulations emerge during bend sprinting at high velocities. It has been suggested that during curve sprints the legs fulfil different functions. A three-dimensional motion analysis (16 high-speed cameras) was conducted to compare the segmental kinematics of the lower extremity during the stance phases of linear and curve sprints (radius: 36.5 m) of six sprinters of national competitive level. Peak joint angles differed substantially in the frontal and transversal planes, whereas sagittal plane kinematics remained unchanged. During the prolonged left stance phase (left: 107.5 ms, right: 95.7 ms, straight: 104.4 ms) the maximum values of ankle eversion (left: 12.7°, right: 2.6°, straight: 6.6°), hip adduction (left: 13.8°, right: 5.5°, straight: 8.8°) and hip external rotation (left: 21.6°, right: 12.9°, straight: 16.7°) were significantly higher. The inside leg seemed to stabilise the movement in the frontal plane (eversion-adduction strategy), whereas the outside leg provided and controlled the motion in the horizontal plane (rotation strategy). These results extend the principal understanding of the effects of curve sprinting on lower extremity kinematics. This helps to increase the understanding of nonlinear human bipedal locomotion, which in turn might lead to improvements in athletic performance and injury prevention.

  18. System calibration method for Fourier ptychographic microscopy.

    PubMed

    Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli

    2017-09-01

    Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and wide field of view. In current FPM imaging platforms, systematic error sources come from aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. Therefore, it is unlikely that the dominating error could be distinguished from these degraded reconstructions without any prior knowledge. In addition, the systematic error is generally a mixture of various error sources in real situations, and it cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, the nonlinear regression process, and an adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved in both simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more pragmatic. © 2017 Society of Photo-Optical Instrumentation Engineers (SPIE).

  19. Calibration of Solar Radio Spectrometer of the Purple Mountain Observatory

    NASA Astrophysics Data System (ADS)

    Lei, LU; Si-ming, LIU; Qi-wu, SONG; Zong-jun, NING

    2015-10-01

    Calibration is a basic and important task in solar radio spectral observations. It not only yields the solar radio flux, an important physical quantity for solar observations, but also removes the flat field of the radio spectrometer so that the radio spectrogram is displayed clearly. In this paper, we first introduce the basic calibration method based on data from the solar radio spectrometer of the Purple Mountain Observatory. We then analyze the variation of the calibration coefficients and give calibrated results for a few flares. These results are compared with those of the Nobeyama solar radio polarimeter and with the hard X-ray observations of the RHESSI (Reuven Ramaty High Energy Solar Spectroscopic Imager) satellite; the comparison shows that they are consistent with the characteristics of typical solar flare light curves. In particular, analysis of the correlation between the variation of the radio flux and that of the hard X-ray flux in the impulsive phase of a flare indicates that these observations can be used to study the relevant radiation mechanisms, as well as the related energy release and particle acceleration processes.

  20. Seismic rehabilitation of skewed and curved bridges using a new generation of buckling restrained braces : research brief.

    DOT National Transportation Integrated Search

    2016-12-01

    Damage to skewed and curved bridges during strong earthquakes is documented. This project investigates whether such damage could be mitigated by using buckling restrained braces. Nonlinear models show that using buckling restrained braces to mitigate...

  1. Dynamic calibration of a wheelchair dynamometer.

    PubMed

    DiGiovine, C P; Cooper, R A; Boninger, M L

    2001-01-01

    The inertia and resistance of a wheelchair dynamometer must be determined in order to compare the results of one study to another, independent of the type of device used. The purpose of this study was to describe and implement a dynamic calibration test for characterizing the electro-mechanical properties of a dynamometer. The inertia, the viscous friction, the kinetic friction, the motor back-electromotive force constant, and the motor constant were calculated using three different methods. The methodology based on a dynamic calibration test along with a nonlinear regression analysis produced the best results. The coefficient of determination comparing the dynamometer model output to the measured angular velocity and torque was 0.999 for a ramp input and 0.989 for a sinusoidal input. The inertia and resistance were determined for the rollers and the wheelchair wheels. The calculation of the electro-mechanical parameters allows for the complete description of the propulsive torque produced by an individual, given only the angular velocity and acceleration. The measurement of the electro-mechanical properties of the dynamometer as well as the wheelchair/human system provides the information necessary to simulate real-world conditions.
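A sketch of such a dynamic calibration under an assumed torque model (not necessarily the authors' exact formulation): with tau = J·alpha + b·omega + c·sign(omega) for inertia, viscous friction, and kinetic friction, the parameters enter linearly, so a sinusoidal velocity test plus linear least squares recovers them from measured angular velocity, acceleration, and torque.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 10.0, 2000)
omega = 5.0 * np.sin(0.8 * t) + 6.0            # roller angular velocity (rad/s)
alpha = 4.0 * np.cos(0.8 * t)                  # its derivative, angular accel (rad/s^2)

J_true, b_true, c_true = 0.15, 0.02, 0.5       # assumed true parameters
tau = J_true * alpha + b_true * omega + c_true * np.sign(omega)
tau += rng.normal(0.0, 0.01, t.size)           # torque measurement noise

# design matrix: tau is linear in (J, b, c)
A = np.column_stack([alpha, omega, np.sign(omega)])
(J_fit, b_fit, c_fit), *_ = np.linalg.lstsq(A, tau, rcond=None)
print(J_fit, b_fit, c_fit)
```

Once (J, b, c) are known, the propulsive torque applied by the user follows from measured omega and alpha alone, which is the point the abstract makes about simulating real-world conditions.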

  2. Calibration of a conodont apatite-based Ordovician 87Sr/86Sr curve to biostratigraphy and geochronology: Implications for stratigraphic resolution

    USGS Publications Warehouse

    Saltzman, M. R.; Edwards, C. T.; Leslie, S. A.; Dwyer, Gary S.; Bauer, J. A.; Repetski, John E.; Harris, A. G.; Bergstrom, S. M.

    2014-01-01

    The Ordovician 87Sr/86Sr isotope seawater curve is well established and shows a decreasing trend until the mid-Katian. However, uncertainties in calibration of this curve to biostratigraphy and geochronology have made it difficult to determine how the rates of 87Sr/86Sr decrease may have varied, which has implications for both the stratigraphic resolution possible using Sr isotope stratigraphy and efforts to model the effects of Ordovician geologic events. We measured 87Sr/86Sr in conodont apatite in North American Ordovician sections that are well studied for conodont biostratigraphy, primarily in Nevada, Oklahoma, the Appalachian region, and Ohio Valley. Our results indicate that conodont apatite may provide an accurate medium for Sr isotope stratigraphy and strengthen previous reports that point toward a significant increase in the rate of fall in seawater 87Sr/86Sr during the Middle Ordovician Darriwilian Stage. Our 87Sr/86Sr results suggest that Sr isotope stratigraphy will be most useful as a high-resolution tool for global correlation in the mid-Darriwilian to mid-Sandbian, when the maximum rate of fall in 87Sr/86Sr is estimated at ∼5.0-10.0 × 10^-5 per m.y. Variable preservation of conodont elements limits the precision for individual stratigraphic horizons. Replicate conodont analyses from the same sample differ by an average of ∼4.0 × 10^-5 (the 2σ standard deviation is 6.2 × 10^-5), which in the best case scenario allows for subdivision of Ordovician time intervals characterized by the highest rates of fall in 87Sr/86Sr at a maximum resolution of ∼0.5-1.0 m.y. Links between the increased rate of fall in 87Sr/86Sr beginning in the mid-late Darriwilian (Phragmodus polonicus to Pygodus serra conodont zones) and geologic events continue to be investigated, but the coincidence with a long-term rise in sea level (Sauk-Tippecanoe megasequence boundary) and tectonic events (Taconic orogeny) in North America provides a plausible

  3. A formulation of tissue- and water-equivalent materials using the stoichiometric analysis method for CT-number calibration in radiotherapy treatment planning.

    PubMed

    Yohannes, Indra; Kolditz, Daniel; Langner, Oliver; Kalender, Willi A

    2012-03-07

    Tissue- and water-equivalent materials (TEMs) are widely used in quality assurance and calibration procedures, both in radiodiagnostics and radiotherapy. In radiotherapy, particularly, TEMs are often used for computed tomography (CT) number calibration in treatment planning systems. However, currently available TEMs may not be very accurate for determining the calibration curves because of their limitations in mimicking the radiation characteristics of the corresponding real tissues in both the low- and high-energy ranges. Therefore, we propose a new formulation of TEMs, using a stoichiometric analysis method, for calibration purposes. We combined the stoichiometric calibration and the basic data method to compose base materials for TEMs matching standard real tissues from ICRU Reports 44 and 46. First, the CT numbers of six materials with known elemental compositions were measured to obtain constants for the stoichiometric calibration. The results of the stoichiometric calibration were used together with the basic data method to formulate new TEMs. These new TEMs were scanned to validate their CT numbers. The electron density and stopping power calibration curves were also generated. The absolute differences between the measured CT numbers of the new TEMs and those of the ICRU real tissues were less than 4 HU for the soft tissues and less than 22 HU for the bone. Furthermore, the calculated relative electron densities and electron and proton stopping powers of the new TEMs differed by less than 2% from those of the corresponding ICRU real tissues. The new TEMs formulated with the proposed technique simplify the calibration process while preserving the accuracy of the stoichiometric calibration.
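One ingredient of the basic-data/stoichiometric approach described here is the relative electron density of a material, computed from its mass fractions and the Z/A of its elements, normalized to water. A hedged sketch (function names are ours; atomic numbers and standard atomic weights from standard tables):

```python
# Relative electron density (rho_e / rho_e,water) from elemental
# composition: rho * sum(w_i * Z_i / A_i), normalized to water.
# Illustrative helper, not the paper's full formulation.

Z_A = {"H": 1 / 1.008, "O": 8 / 15.999, "C": 6 / 12.011, "N": 7 / 14.007}

def electrons_per_gram(mass_fractions):
    """sum(w_i * Z_i / A_i) over the elemental mass fractions."""
    return sum(w * Z_A[el] for el, w in mass_fractions.items())

def relative_electron_density(density, mass_fractions, water_density=1.0):
    water = {"H": 0.1119, "O": 0.8881}      # mass fractions of H2O
    return (density * electrons_per_gram(mass_fractions)
            / (water_density * electrons_per_gram(water)))

# Sanity check: water relative to itself is exactly 1.
rho_water = relative_electron_density(1.0, {"H": 0.1119, "O": 0.8881})
print(rho_water)
```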

  4. Developmental trajectories of adolescent popularity: a growth curve modelling analysis.

    PubMed

    Cillessen, Antonius H N; Borch, Casey

    2006-12-01

    Growth curve modelling was used to examine developmental trajectories of sociometric and perceived popularity across eight years in adolescence, and the effects of gender, overt aggression, and relational aggression on these trajectories. Participants were 303 initially popular students (167 girls, 136 boys) for whom sociometric data were available in Grades 5-12. The popularity and aggression constructs were stable but non-overlapping developmental dimensions. Growth curve models were run with SAS MIXED in the framework of the multilevel model for change [Singer, J. D., & Willett, J. B. (2003). Applied longitudinal data analysis. Oxford, UK: Oxford University Press]. Sociometric popularity showed a linear change trajectory; perceived popularity showed nonlinear change. Overt aggression predicted low sociometric popularity but an increase in perceived popularity in the second half of the study. Relational aggression predicted a decrease in sociometric popularity, especially for girls, and continued high-perceived popularity for both genders. The effect of relational aggression on perceived popularity was the strongest around the transition from middle to high school. The importance of growth curve models for understanding adolescent social development was discussed, as well as specific issues and challenges of growth curve analyses with sociometric data.

  5. Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata

    USGS Publications Warehouse

    Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.

    2012-01-01

    Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
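The Laplace-domain pole–zero–gain model this record fits can be evaluated at s = i2πf with a few lines of complex arithmetic. A minimal sketch (the function name and the single-pole example are ours, not part of the IRIS/USGS procedure):

```python
import math

# Evaluate a Laplace-domain pole-zero-gain instrument response
# H(s) = gain * prod(s - z_k) / prod(s - p_k) at frequency f (Hz),
# the SEED-style representation described in the abstract.

def response(f, gain, zeros, poles):
    s = 2j * math.pi * f
    num = gain
    for z in zeros:
        num *= (s - z)
    den = 1.0
    for p in poles:
        den *= (s - p)
    return num / den

# Single-pole low-pass sanity check: H(s) = a / (s + a) has magnitude
# 1/sqrt(2) at its corner frequency f_c = a / (2*pi).
a = 2 * math.pi * 1.0                     # corner at 1 Hz
h = response(1.0, gain=a, zeros=[], poles=[-a])
print(abs(h))
```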

  6. Nonlinear deformation of composites with consideration of the effect of couple-stresses

    NASA Astrophysics Data System (ADS)

    Lagzdiņš, A.; Teters, G.; Zilaucs, A.

    1998-09-01

    Nonlinear deformation of spatially reinforced composites under active loading (without unloading) is considered. All the theoretical constructions are based on the experimental data on unidirectional and ±π/4 cross-ply epoxy plastics reinforced with glass fibers. Based on the elastic properties of the fibers and EDT-10 epoxy binder, the linear elastic characteristics of a transversely isotropic unidirectionally reinforced fiberglass plastic are found, whereas the nonlinear characteristics are obtained from experiments. For calculating the deformation properties of the ±π/4 cross-ply plastic, a refined version of the Voigt method is applied taking into account also the couple-stresses arising in the composite due to relative rotation of the reinforcement fibers. In addition, a fourth-rank damage tensor is introduced in order to account for the impact of fracture caused by the couple-stresses. The unknown constants are found from the experimental uniaxial tension curve for the cross-ply composite. The comparison between the computed curves and experimental data for other loading paths shows that the description of the nonlinear behavior of composites can be improved by considering the effect of couple-stresses generated by rotations of the reinforcing fibers.

  7. Uncertainty quantification for constitutive model calibration of brain tissue.

    PubMed

    Brewick, Patrick T; Teferra, Kirubel

    2018-05-31

    The results of a study comparing model calibration techniques for Ogden's constitutive model that describes the hyperelastic behavior of brain tissue are presented. One and two-term Ogden models are fit to two different sets of stress-strain experimental data for brain tissue using both least squares optimization and Bayesian estimation. For the Bayesian estimation, the joint posterior distribution of the constitutive parameters is calculated by employing Hamiltonian Monte Carlo (HMC) sampling, a type of Markov Chain Monte Carlo method. The HMC method is enriched in this work to intrinsically enforce the Drucker stability criterion by formulating a nonlinear parameter constraint function, which ensures the constitutive model produces physically meaningful results. Through application of the nested sampling technique, 95% confidence bounds on the constitutive model parameters are identified, and these bounds are then propagated through the constitutive model to produce the resultant bounds on the stress-strain response. The behavior of the model calibration procedures and the effect of the characteristics of the experimental data are extensively evaluated. It is demonstrated that increasing model complexity (i.e., adding an additional term in the Ogden model) improves the accuracy of the best-fit set of parameters while also increasing the uncertainty via the widening of the confidence bounds of the calibrated parameters. Despite some similarity between the two data sets, the resulting distributions are noticeably different, highlighting the sensitivity of the calibration procedures to the characteristics of the data. For example, the amount of uncertainty reported on the experimental data plays an essential role in how data points are weighted during the calibration, and this significantly affects how the parameters are calibrated when combining experimental data sets from disparate sources. Published by Elsevier Ltd.

  8. Growth curves for ostriches (Struthio camelus) in a Brazilian population.

    PubMed

    Ramos, S B; Caetano, S L; Savegnago, R P; Nunes, B N; Ramos, A A; Munari, D P

    2013-01-01

    The objective of this study was to fit growth curves using nonlinear and linear functions to describe the growth of ostriches in a Brazilian population. The data set consisted of 112 animals with BW measurements from hatching to 383 d of age. Two nonlinear growth functions (Gompertz and logistic) and a third-order polynomial function were applied. The parameters for the models were estimated using the least-squares method and the Gauss-Newton algorithm. The goodness-of-fit of the models was assessed using R² and the Akaike information criterion. The R² calculated for the logistic growth model was 0.945 for hens and 0.928 for cockerels; for the Gompertz growth model, 0.938 for hens and 0.924 for cockerels. The third-order polynomial fit gave R² of 0.938 for hens and 0.924 for cockerels. Among the Akaike information criterion calculations, the logistic growth model presented the lowest values in this study, both for hens and for cockerels. Nonlinear models are more appropriate for describing the sigmoid nature of ostrich growth.
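The workflow in this record (least-squares fit of a logistic growth curve by Gauss-Newton, then model scoring by AIC) can be sketched in a few dozen lines. This is a minimal stand-in with synthetic, noise-free data and starting values of our choosing, not the study's SAS implementation:

```python
import math

# Fit a logistic growth curve W(t) = A / (1 + exp(-k*(t - t0))) by
# Gauss-Newton least squares, then score with AIC, mirroring the
# model-comparison workflow described in the abstract.

def logistic(t, A, k, t0):
    return A / (1.0 + math.exp(-k * (t - t0)))

def solve3(M, b):
    """Gaussian elimination with pivoting for a 3x3 system M x = b."""
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_logistic(ts, ys, p, iters=50):
    A, k, t0 = p
    for _ in range(iters):
        JTJ = [[0.0] * 3 for _ in range(3)]
        JTr = [0.0] * 3
        for t, y in zip(ts, ys):
            e = math.exp(-k * (t - t0))
            f = A / (1.0 + e)
            g = [1.0 / (1.0 + e),                    # df/dA
                 A * (t - t0) * e / (1.0 + e) ** 2,  # df/dk
                 -A * k * e / (1.0 + e) ** 2]        # df/dt0
            r = y - f
            for i in range(3):
                JTr[i] += g[i] * r
                for j in range(3):
                    JTJ[i][j] += g[i] * g[j]
        dA, dk, dt0 = solve3(JTJ, JTr)
        A, k, t0 = A + dA, k + dk, t0 + dt0
    return A, k, t0

def aic(rss, n, n_params):
    """AIC for least squares: n*ln(RSS/n) + 2*p (lower is better)."""
    return n * math.log(rss / n) + 2 * n_params

ts = list(range(0, 51, 2))
ys = [logistic(t, 100.0, 0.2, 25.0) for t in ts]   # noise-free demo
A, k, t0 = fit_logistic(ts, ys, p=[95.0, 0.18, 24.0])
print(round(A, 3), round(k, 3), round(t0, 3))
```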

  9. Special discontinuities in nonlinearly elastic media

    NASA Astrophysics Data System (ADS)

    Chugainova, A. P.

    2017-06-01

    Solutions of a nonlinear hyperbolic system of equations describing weakly nonlinear quasitransverse waves in a weakly anisotropic elastic medium are studied. The influence of small-scale processes of dissipation and dispersion is investigated. The small-scale processes determine the structure of discontinuities (shocks) and a set of discontinuities with a stationary structure. Among the discontinuities with a stationary structure, there are special ones that, in addition to relations following from conservation laws, satisfy additional relations required for the existence of their structure. In the phase plane, the structure of such discontinuities is represented by an integral curve joining two saddles. Special discontinuities lead to nonunique self-similar solutions of the Riemann problem. Asymptotics of non-self-similar problems for equations with dissipation and dispersion are found numerically. These asymptotics correspond to self-similar solutions of the problems.

  10. Identifying influential data points in hydrological model calibration and their impact on streamflow predictions

    NASA Astrophysics Data System (ADS)

    Wright, David; Thyer, Mark; Westra, Seth

    2015-04-01

    Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostics tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically orientated diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences in mean flow predictions of up to 6% for the rating curve model, and differences in mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression, with inherent assumptions about the data, and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this

  11. Comparative study on ATR-FTIR calibration models for monitoring solution concentration in cooling crystallization

    NASA Astrophysics Data System (ADS)

    Zhang, Fangkun; Liu, Tao; Wang, Xue Z.; Liu, Jingxiang; Jiang, Xiaobin

    2017-02-01

    In this paper, calibration model building using ATR-FTIR spectroscopy is investigated for in-situ measurement of the solution concentration during a cooling crystallization process. The cooling crystallization of L-glutamic acid (LGA) is studied as a case. It was found that using metastable zone (MSZ) data for model calibration can guarantee the prediction accuracy needed for monitoring the operating window of cooling crystallization, compared to the traditional practice of using undersaturated zone (USZ) spectra for model building. Calibration experiments were made for LGA solutions at different concentrations. Four candidate calibration models were established from different zone data for comparison, by applying a multivariate partial least-squares (PLS) regression algorithm to the collected spectra together with the corresponding temperature values. Experiments under different process conditions, including changes of solution concentration and operating temperature, were conducted. The results indicate that using the MSZ spectra for model calibration gives more accurate prediction of the solution concentration during the crystallization process, while maintaining accuracy under changes of the operating temperature. The primary source of prediction error was identified as the spectral nonlinearity between USZ and MSZ for in-situ measurement. In addition, an LGA cooling crystallization experiment was performed to verify the sensitivity of these calibration models for monitoring the crystal growth process.
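The spectra-to-concentration calibration step described here rests on PLS regression. A minimal one-component PLS1 (single NIPALS step) in pure Python gives the flavor; the data, dimensions, and names are synthetic and ours, and the study's full multi-component PLS with temperature is not reproduced:

```python
# Minimal one-component PLS1 regression, sketching the
# spectra -> concentration calibration described in the abstract.

def mean(v):
    return sum(v) / len(v)

def pls1_one_component(X, y):
    """Fit y ~ X with a single PLS latent variable."""
    xbar = [mean(col) for col in zip(*X)]
    ybar = mean(y)
    Xc = [[x - m for x, m in zip(row, xbar)] for row in X]
    yc = [v - ybar for v in y]
    # Weight vector w = X'y, normalized.
    w = [sum(Xc[i][j] * yc[i] for i in range(len(Xc)))
         for j in range(len(xbar))]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # Scores t = Xw and regression coefficient q = t'y / t't.
    t = [sum(xi * wi for xi, wi in zip(row, w)) for row in Xc]
    q = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)

    def predict(row):
        score = sum((x - m) * wi for x, m, wi in zip(row, xbar, w))
        return ybar + q * score
    return predict

# Synthetic "spectra": absorbance at 4 wavenumbers scales linearly
# with concentration, which one PLS component recovers exactly.
spectrum = [0.8, 0.5, 0.3, 0.1]
concs = [1.0, 2.0, 3.0, 4.0, 5.0]
X = [[c * s for s in spectrum] for c in concs]
predict = pls1_one_component(X, concs)
print(round(predict([2.5 * s for s in spectrum]), 3))
```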

  12. Determining Parameters of Fractional-Exponential Heredity Kernels of Nonlinear Viscoelastic Materials

    NASA Astrophysics Data System (ADS)

    Golub, V. P.; Pavlyuk, Ya. V.; Fernati, P. V.

    2017-07-01

    The problem of determining the parameters of fractional-exponential heredity kernels of nonlinear viscoelastic materials is solved. The methods for determining the parameters that are used in the cubic theory of viscoelasticity and the nonlinear theories based on the conditions of similarity of primary creep curves and isochronous creep diagrams are analyzed. The parameters of fractional-exponential heredity kernels are determined and experimentally validated for the oriented polypropylene, FM3001 and FM10001 nylon fibers, microplastics, TC 8/3-250 glass-reinforced plastic, SWAM glass-reinforced plastic, and contact molding glass-reinforced plastic.

  13. Mixed models and reduced/selective integration displacement models for nonlinear analysis of curved beams

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Peters, J. M.

    1981-01-01

    Simple mixed models are developed for use in the geometrically nonlinear analysis of deep arches. A total Lagrangian description of the arch deformation is used, the analytical formulation being based on a form of the nonlinear deep arch theory with the effects of transverse shear deformation included. The fundamental unknowns comprise the six internal forces and generalized displacements of the arch, and the element characteristic arrays are obtained by using Hellinger-Reissner mixed variational principle. The polynomial interpolation functions employed in approximating the forces are one degree lower than those used in approximating the displacements, and the forces are discontinuous at the interelement boundaries. Attention is given to the equivalence between the mixed models developed herein and displacement models based on reduced integration of both the transverse shear and extensional energy terms. The advantages of mixed models over equivalent displacement models are summarized. Numerical results are presented to demonstrate the high accuracy and effectiveness of the mixed models developed and to permit a comparison of their performance with that of other mixed models reported in the literature.

  14. Calibration and validation of a general infiltration model

    NASA Astrophysics Data System (ADS)

    Mishra, Surendra Kumar; Ranjan Kumar, Shashi; Singh, Vijay P.

    1999-08-01

    A general infiltration model proposed by Singh and Yu (1990) was calibrated and validated using a split sampling approach for 191 sets of infiltration data observed in the states of Minnesota and Georgia in the USA. Of the five model parameters, fc (the final infiltration rate), So (the available storage space) and exponent n were found to be more predictable than the other two parameters: m (exponent) and a (proportionality factor). A critical examination of the general model revealed that it is related to the Soil Conservation Service (1956) curve number (SCS-CN) method and its parameter So is equivalent to the potential maximum retention of the SCS-CN method and is, in turn, found to be a function of soil sorptivity and hydraulic conductivity. The general model was found to describe infiltration rate with time varying curve number.
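The SCS-CN method that the general model is shown to be related to has a compact closed form: potential maximum retention S = 1000/CN − 10 (inches) and direct runoff Q = (P − Iₐ)²/(P − Iₐ + S) with initial abstraction Iₐ = 0.2S. A sketch of the standard formulas (the example watershed is ours):

```python
# The SCS curve number method that the general infiltration model is
# related to: potential maximum retention S from the curve number CN,
# and direct runoff Q from rainfall P (both in inches).

def retention(cn):
    """Potential maximum retention S (inches) for curve number CN."""
    return 1000.0 / cn - 10.0

def runoff(p, cn, ia_ratio=0.2):
    """SCS-CN direct runoff (inches) for rainfall p (inches)."""
    s = retention(cn)
    ia = ia_ratio * s                  # initial abstraction
    if p <= ia:
        return 0.0
    return (p - ia) ** 2 / (p - ia + s)

# Example: 4 inches of rain on a CN = 80 watershed (S = 2.5 in).
print(round(runoff(4.0, 80), 3))
```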

  15. Results of the 1996 JPL Balloon Flight Solar Cell Calibration Program

    NASA Technical Reports Server (NTRS)

    Anspaugh, B. E.; Weiss, R. S.

    1996-01-01

    The 1996 solar cell calibration balloon flight campaign was completed, with the first flight on June 30, 1996 and a second flight on August 8, 1996. All objectives of the flight program were met. Sixty-four modules were carried to an altitude of 120,000 ft (36.6 km). Full I–V curves were measured on 22 of these modules, and output at a fixed load was measured on 42 modules. These data were corrected to 28 °C and to 1 AU (1.496 × 10⁸ km). The calibrated cells have been returned to the participants and can now be used as reference standards in simulator testing of cells and arrays.

  16. Electrodynamic soil plate oscillator: Modeling nonlinear mesoscopic elastic behavior and hysteresis in nonlinear acoustic landmine detection

    NASA Astrophysics Data System (ADS)

    Korman, M. S.; Duong, D. V.; Kalsbeck, A. E.

    2015-10-01

    An apparatus (SPO), designed to study flexural vibrations of a soil loaded plate, consists of a thin circular elastic clamped plate (and cylindrical wall) supporting a vertical soil column. A small magnet attached to the center of the plate is driven by a rigid AC coil (located coaxially below the plate) to complete the electrodynamic soil plate oscillator (SPO) design. The frequency dependent mechanical impedance Zmech (force / particle velocity, at the plate's center) is inversely proportional to the electrical motional impedance Zmot. Measurements of Zmot are made using the complex output to input response of a Wheatstone bridge that has an identical coil element in one of its legs. Near resonance, measurements of Zmot (with no soil) before and after a slight point mass loading at the center help determine effective mass, spring, damping and coupling constant parameters of the system. "Tuning curve" behavior of Re{Zmot} and Im{Zmot} at successively higher vibration amplitudes of dry sifted masonry sand is measured. The curves exhibit a "softening" decrease in resonance frequency along with a decrease in the quality factor Q. In soil surface vibration measurements a bilinear hysteresis model predicts the tuning curve shape for this nonlinear mesoscopic elastic SPO behavior - which also models the soil vibration over an actual plastic "inert" VS 1.6 buried landmine. Experiments are performed where a buried 1 m cube concrete block supports a 12 inch deep by 30 inch by 30 inch concrete soil box for burying a VS 1.6 in dry sifted masonry sand for on-the-mine and off-the-mine soil vibration experiments. The backbone curve (a plot of the peak amplitude vs. corresponding resonant frequency from a family of tuning curves) exhibits mostly linear behavior for "on target" soil surface vibration measurements of the buried VS 1.6 or drum-like mine simulants at relatively low particle velocities of the soil. Backbone curves for "on target" measurements exhibit

  17. Prediction of nonlinear soil effects

    USGS Publications Warehouse

    Hartzell, S.; Bonilla, L.F.; Williams, R.A.

    2004-01-01

    average amplification curves from a nonlinear effective stress formulation compare favorably with observed spectral amplification at class D and E sites in the Seattle area for the 2001 Nisqually earthquake.

  18. A multimethod Global Sensitivity Analysis to aid the calibration of geomechanical models via time-lapse seismic data

    NASA Astrophysics Data System (ADS)

    Price, D. C.; Angus, D. A.; Garcia, A.; Fisher, Q. J.; Parsons, S.; Kato, J.

    2018-03-01

    Time-lapse seismic attributes are used extensively in the history matching of production simulator models. However, although proven to contain information regarding production induced stress change, they are typically only loosely (i.e. qualitatively) used to calibrate geomechanical models. In this study we conduct a multimethod Global Sensitivity Analysis (GSA) to assess the feasibility of, and aid, the quantitative calibration of geomechanical models via near-offset time-lapse seismic data, specifically the calibration of the mechanical properties of the overburden. Via the GSA, we analyse the near-offset overburden seismic traveltimes from over 4000 perturbations of a Finite Element (FE) geomechanical model of a typical High Pressure High Temperature (HPHT) reservoir in the North Sea. We find that, out of an initially large set of material properties, the near-offset overburden traveltimes are primarily affected by Young's modulus and the effective stress (i.e. Biot) coefficient. The unexpected significance of the Biot coefficient highlights the importance of modelling fluid flow and pore pressure outside of the reservoir. The FE model is complex and highly nonlinear. Multiple combinations of model parameters can yield equally possible model realizations. Consequently, numerical calibration via a large number of random model perturbations is unfeasible. However, the significant differences in traveltime results suggest that more sophisticated calibration methods could potentially be feasible for finding numerous suitable solutions. The results of the time-varying GSA demonstrate how acquiring multiple vintages of time-lapse seismic data can be advantageous. However, they also suggest that significant overburden near-offset seismic time-shifts, useful for model calibration, may take up to 3 yrs after the start of production to manifest. Due to the nonlinearity of the model behaviour, similar uncertainty in the reservoir mechanical properties appears to influence overburden

  19. Comparing Single-Point and Multi-point Calibration Methods in Modulated DSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Buskirk, Caleb Griffith

    2017-06-14

    Heat capacity measurements for High Density Polyethylene (HDPE) and Ultra-high Molecular Weight Polyethylene (UHMWPE) were performed using Modulated Differential Scanning Calorimetry (mDSC) over a wide temperature range, -70 to 115 °C, with a TA Instruments Q2000 mDSC. The default calibration method for this instrument involves measuring the heat capacity of a sapphire standard at a single temperature near the middle of the temperature range of interest. However, this method often fails for temperature ranges that exceed a 50 °C interval, likely because of drift or non-linearity in the instrument's heat capacity readings over time or over the temperature range. Therefore, in this study a method was developed to calibrate the instrument using multiple temperatures and the same sapphire standard.

  20. Detecting influential observations in nonlinear regression modeling of groundwater flow

    USGS Publications Warehouse

    Yager, Richard M.

    1998-01-01

    Nonlinear regression is used to estimate optimal parameter values in models of groundwater flow to ensure that differences between predicted and observed heads and flows do not result from nonoptimal parameter values. Parameter estimates can be affected, however, by observations that disproportionately influence the regression, such as outliers that exert undue leverage on the objective function. Certain statistics developed for linear regression can be used to detect influential observations in nonlinear regression if the models are approximately linear. This paper discusses the application of Cook's D, which measures the effect of omitting a single observation on a set of estimated parameter values, and the statistical parameter DFBETAS, which quantifies the influence of an observation on each parameter. The influence statistics were used to (1) identify the influential observations in the calibration of a three-dimensional, groundwater flow model of a fractured-rock aquifer through nonlinear regression, and (2) quantify the effect of omitting influential observations on the set of estimated parameter values. Comparison of the spatial distribution of Cook's D with plots of model sensitivity shows that influential observations correspond to areas where the model heads are most sensitive to certain parameters, and where predicted groundwater flow rates are largest. Five of the six discharge observations were identified as influential, indicating that reliable measurements of groundwater flow rates are valuable data in model calibration. DFBETAS are computed and examined for an alternative model of the aquifer system to identify a parameterization error in the model design that resulted in overestimation of the effect of anisotropy on horizontal hydraulic conductivity.
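Cook's D, as applied in this record to a linearized nonlinear regression, is easiest to see in the simple linear case. A self-contained sketch with synthetic data and one planted outlier (all names and numbers are ours):

```python
# Cook's D for simple linear regression y = b0 + b1*x:
# D_i = e_i^2 / (p * s^2) * h_ii / (1 - h_ii)^2, where h_ii is the
# leverage of point i. Points with large D disproportionately
# influence the fitted parameters, as described in the abstract.

def cooks_distance(xs, ys):
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    b0 = ybar - b1 * xbar
    resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
    p = 2                                      # number of parameters
    s2 = sum(e * e for e in resid) / (n - p)   # residual variance
    ds = []
    for x, e in zip(xs, resid):
        h = 1.0 / n + (x - xbar) ** 2 / sxx    # leverage
        ds.append(e * e / (p * s2) * h / (1.0 - h) ** 2)
    return ds

xs = [float(x) for x in range(10)]
ys = [2.0 * x + 1.0 for x in xs]
ys[9] += 5.0                       # plant an influential outlier
d = cooks_distance(xs, ys)
print(max(range(10), key=lambda i: d[i]))  # index of the outlier: 9
```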

  1. The plastic scintillator detector calibration circuit for DAMPE

    NASA Astrophysics Data System (ADS)

    Yang, Haibo; Kong, Jie; Zhao, Hongyun; Su, Hong

    2016-07-01

    The Dark Matter Particle Explorer (DAMPE) is being constructed as a scientific satellite to observe high energy cosmic rays in space. The plastic scintillator detector array (PSD), developed by the Institute of Modern Physics, Chinese Academy of Sciences (IMPCAS), is one of the most important parts of the DAMPE payload, which is mainly used for the study of dark matter. Serving as both an anti-coincidence detector and a charged-particle identification detector, the PSD has a total of 360 electronic readout channels, distributed on the four sides of the PSD across four identical front end electronics (FEE) boards. Each FEE reads out 90 charge signals output by the detector. A special calibration circuit is designed into the FEE: an FPGA provides on-line control, enabling the calibration circuit to generate pulse signals with known charge. The generated signal is then sent to the FEE for calibration and self-test. This circuit mainly consists of a DAC, an operational amplifier, an analog switch, capacitors and resistors. By using a controllable step pulse, the charge can be coupled to the charge measuring chip through a small capacitance. To fulfill the system's objective of a large dynamic range, the FEE is required to have good linearity, so the charge-controllable signal is used to sweep-test all channels and obtain the non-linear parameters for off-line correction. Moreover, the FEE will run on the satellite for three years; changes in the operational environment and the aging of devices will lead to parameter variation in the FEE, highlighting the need for regular calibration. The calibration signal generation circuit also has a compact structure and works reliably, with the voltage resolution of the PSD system being better than 0.6%.
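The sweep test described in this record (inject known charges, map the channel's deviation from linearity for off-line correction) can be sketched with an endpoint-line integral nonlinearity (INL) measure. The response model and all numbers below are illustrative assumptions, not DAMPE's actual electronics:

```python
# Sweep-test flavor of the FEE calibration described in the abstract:
# inject known charges, record the readout codes, and measure
# integral nonlinearity (INL) against the straight line through the
# endpoints of the sweep.

def endpoint_line(qs, codes):
    """Straight line through the first and last sweep points."""
    slope = (codes[-1] - codes[0]) / (qs[-1] - qs[0])
    return lambda q: codes[0] + slope * (q - qs[0])

def max_inl(qs, codes):
    line = endpoint_line(qs, codes)
    return max(abs(c - line(q)) for q, c in zip(qs, codes))

# Simulated response with a small quadratic term (the kind of
# deviation a sweep is meant to map for off-line correction).
qs = [float(q) for q in range(0, 101, 5)]
codes = [10.0 + 40.0 * q - 0.002 * q * q for q in qs]
print(round(max_inl(qs, codes), 3))
```

For a pure quadratic deviation the worst point sits at mid-range, which is why sweep tests cover the full dynamic range rather than only the endpoints.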

  2. A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection

    NASA Technical Reports Server (NTRS)

    Samtaney, Ravi

    1999-01-01

    An iterative method is presented to solve the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain a guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is analytically derived and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
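The structure of the solve in this record (Newton-Raphson on an overdetermined nonlinear system, least-squares sense, analytic Jacobian) can be shown on a toy problem. This sketch recovers a 2-D point from range measurements and solves the 2×2 normal equations directly; the paper itself uses the SVD pseudo-inverse of the full camera-model Jacobian, and the beacon geometry here is our invention:

```python
import math

# Gauss-Newton iteration on an overdetermined nonlinear system, the
# same least-squares structure as the camera-resection solve in the
# abstract. Toy problem: recover a 2-D point from distances to four
# known beacons.

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
truth = (3.0, 7.0)
ranges = [math.hypot(truth[0] - bx, truth[1] - by) for bx, by in beacons]

def step(x, y):
    # Accumulate J'J and J'r over the residuals r_i = d_i - pred_i.
    JTJ = [[0.0, 0.0], [0.0, 0.0]]
    JTr = [0.0, 0.0]
    for (bx, by), d in zip(beacons, ranges):
        pred = math.hypot(x - bx, y - by)
        r = d - pred
        g = [(x - bx) / pred, (y - by) / pred]   # d(pred)/d(x, y)
        for i in range(2):
            JTr[i] += g[i] * r
            for j in range(2):
                JTJ[i][j] += g[i] * g[j]
    det = JTJ[0][0] * JTJ[1][1] - JTJ[0][1] * JTJ[1][0]
    dx = (JTJ[1][1] * JTr[0] - JTJ[0][1] * JTr[1]) / det
    dy = (JTJ[0][0] * JTr[1] - JTJ[1][0] * JTr[0]) / det
    return x + dx, y + dy

x, y = 5.0, 5.0                    # crude initial guess
for _ in range(20):
    x, y = step(x, y)
print(round(x, 6), round(y, 6))
```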

  3. A Common Calibration Source Framework for Fully-Polarimetric and Interferometric Radiometers

    NASA Technical Reports Server (NTRS)

    Kim, Edward J.; Davis, Brynmor; Piepmeier, Jeff; Zukor, Dorothy J. (Technical Monitor)

    2000-01-01

    Two types of microwave radiometry--synthetic thinned array radiometry (STAR) and fully-polarimetric (FP) radiometry--have received increasing attention during the last several years. STAR radiometers offer a technological solution to achieving high spatial resolution imaging from orbit without requiring a filled aperture or a moving antenna, and FP radiometers measure extra polarization state information upon which entirely new or more robust geophysical retrieval algorithms can be based. Radiometer configurations used for both STAR and FP instruments share one fundamental feature that distinguishes them from more 'standard' radiometers, namely, they measure correlations between pairs of microwave signals. The calibration requirements for correlation radiometers are broader than those for standard radiometers. Quantities of interest include total powers, complex correlation coefficients, various offsets, and possible nonlinearities. A candidate for an ideal calibration source would be one that injects test signals with precisely controllable correlation coefficients and absolute powers simultaneously into a pair of receivers, permitting all of these calibration quantities to be measured. The complex nature of correlation radiometer calibration, coupled with certain inherent similarities between STAR and FP instruments, suggests significant leverage in addressing both problems together. Recognizing this, a project was recently begun at NASA Goddard Space Flight Center to develop a compact low-power subsystem for spaceflight STAR or FP receiver calibration. We present a common theoretical framework for the design of signals for a controlled correlation calibration source. A statistical model is described, along with temporal and spectral constraints on such signals. Finally, a method for realizing these signals is demonstrated using a Matlab-based implementation.

  4. A numerical approach to 14C wiggle-match dating of organic deposits: best fits and confidence intervals

    NASA Astrophysics Data System (ADS)

    Blaauw, Maarten; Heuvelink, Gerard B. M.; Mauquoy, Dmitri; van der Plicht, Johannes; van Geel, Bas

    2003-06-01

    14C wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a sequence of closely spaced peat 14C dates with the 14C calibration curve. A numerical approach to WMD enables the quantitative assessment of various possible wiggle-match solutions and of calendar year confidence intervals for sequences of 14C dates. We assess the assumptions, advantages, and limitations of the method. Several case-studies show that WMD results in more precise chronologies than when individual 14C dates are calibrated. WMD is most successful during periods with major excursions in the 14C calibration curve (e.g., in one case WMD could narrow down confidence intervals from 230 to 36 yr).
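    The matching step of WMD can be sketched numerically: slide a sequence of dates with known calendar-year separations along the calibration curve and keep the calendar age that minimizes chi-squared. The curve and the dated sequence below are toy stand-ins, not IntCal data:

```python
import numpy as np

# Toy calibration curve: calendar age (cal BP) -> 14C age (BP),
# with artificial "wiggles" (NOT real IntCal data).
cal_age = np.arange(0, 3000)
c14_curve = cal_age + 100.0 * np.sin(cal_age / 80.0)

def wiggle_match(c14_dates, offsets, c14_err):
    """Calendar age of the top sample that best matches the dated
    sequence to the curve (least squares). offsets: known calendar-year
    separations of the samples, e.g. from the deposition rate."""
    best_t0, best_chi2 = None, np.inf
    for t0 in range(len(cal_age) - int(offsets[-1]) - 1):
        model = np.interp(t0 + offsets, cal_age, c14_curve)
        chi2 = np.sum(((c14_dates - model) / c14_err) ** 2)
        if chi2 < best_chi2:
            best_t0, best_chi2 = t0, chi2
    return best_t0, best_chi2

# Simulate a sequence deposited starting at 1200 cal BP, one date per 20 yr
offsets = np.arange(0, 120, 20)
obs = np.interp(1200 + offsets, cal_age, c14_curve)
t0_hat, chi2 = wiggle_match(obs, offsets, c14_err=15.0)
```

    Mapping chi-squared over all candidate ages (rather than keeping only the minimum) is what yields the calendar-year confidence intervals discussed in the abstract.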

  5. Soybean Physiology Calibration in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Drewniak, B. A.; Bilionis, I.; Constantinescu, E. M.

    2014-12-01

    With the large influence of agricultural land use on biophysical and biogeochemical cycles, integrating cultivation into Earth System Models (ESMs) is increasingly important. The Community Land Model (CLM) was augmented with a CLM-Crop extension that simulates the development of three crop types: maize, soybean, and spring wheat. The CLM-Crop model is a complex system that relies on a suite of parametric inputs that govern plant growth under a given atmospheric forcing and available resources. However, the strong nonlinearity of ESMs makes parameter fitting a difficult task. In this study, our goal is to calibrate ten of the CLM-Crop parameters for one crop type, soybean, in order to improve model projection of plant development and carbon fluxes. We used measurements of gross primary productivity, net ecosystem exchange, and plant biomass from AmeriFlux sites to choose parameter values that optimize crop productivity in the model. Calibration is performed in a Bayesian framework by developing a scalable and adaptive scheme based on sequential Monte Carlo (SMC). Our scheme can perform model calibration using very few evaluations and, by exploiting parallelism, at a fraction of the time required by plain vanilla Markov Chain Monte Carlo (MCMC). We present the results from a twin experiment (self-validation) and calibration results and validation using real observations from an AmeriFlux tower site in the Midwestern United States, for the soybean crop type. The improved model will help researchers understand how climate affects crop production and resulting carbon fluxes, and additionally, how cultivation impacts climate.
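    The calibration scheme itself is described only at a high level above; as a hedged illustration of the general idea, here is a minimal likelihood-tempered SMC sampler for a toy one-parameter model. The crude jitter stands in for a proper MCMC rejuvenation step, and nothing here reproduces the authors' CLM-Crop setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "land model": y = a * x; we calibrate the single parameter a
# against noisy observations (the real CLM-Crop problem has ten).
x = np.linspace(0.0, 1.0, 50)
a_true = 2.5
y_obs = a_true * x + rng.normal(0.0, 0.1, x.size)

def loglik(a):
    resid = y_obs - a * x
    return -0.5 * np.sum((resid / 0.1) ** 2)

# Likelihood-tempered SMC: raise the likelihood to a power beta that
# steps from 0 (prior) to 1 (posterior), reweighting, resampling, and
# jittering the particle population at each step.
n = 2000
betas = np.linspace(0.0, 1.0, 11)
particles = rng.uniform(0.0, 5.0, n)        # prior: a ~ U(0, 5)
ll = np.array([loglik(a) for a in particles])
for b0, b1 in zip(betas[:-1], betas[1:]):
    w = np.exp((b1 - b0) * (ll - ll.max())) # incremental weights
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)        # multinomial resampling
    particles = particles[idx] + rng.normal(0.0, 0.05, n)  # crude jitter
    ll = np.array([loglik(a) for a in particles])

a_hat = particles.mean()                    # posterior-mean estimate of a
```

    Because each tempering stage evaluates all particles independently, the model runs parallelize trivially, which is the source of the speedup over plain MCMC claimed in the abstract.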

  6. Use of Two-Part Regression Calibration Model to Correct for Measurement Error in Episodically Consumed Foods in a Single-Replicate Study Design: EPIC Case Study

    PubMed Central

    Agogo, George O.; van der Voet, Hilko; Veer, Pieter van’t; Ferrari, Pietro; Leenders, Max; Muller, David C.; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A.; Boshuizen, Hendriek

    2014-01-01

    In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted a two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and the empirical logit approach, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with that of its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study. In EPIC, reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjusting for error is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to response distribution, nonlinearity, and covariate inclusion in specifying the calibration model. PMID:25402487

  7. A calibration method for patient specific IMRT QA using a single therapy verification film

    PubMed Central

    Shukla, Arvind Kumar; Oinam, Arun S.; Kumar, Sanjeev; Sandhu, I.S.; Sharma, S.C.

    2013-01-01

    Aim The aim of the present study is to develop and verify a single-film calibration procedure for use in intensity-modulated radiation therapy (IMRT) quality assurance. Background Radiographic films have been regularly used in routine commissioning of treatment modalities and verification of treatment planning systems (TPS). Radiation dosimetry based on radiographic films can give absolute two-dimensional dose distributions and is preferred for IMRT quality assurance. The single therapy verification film provides a quick and reliable method for IMRT verification. Materials and methods A single extended dose range (EDR 2) film was used to generate the sensitometric curve of film optical density versus radiation dose. The EDR 2 film was exposed with nine 6 cm × 6 cm fields of a 6 MV photon beam obtained from a medical linear accelerator at 5-cm depth in a solid water phantom. The nine regions of the single film were exposed with radiation doses ranging from 10 to 362 cGy. The actual dose measurements inside the field regions were performed using a 0.6 cm3 ionization chamber. The exposed film was processed after irradiation, scanned with a VIDAR film scanner, and the optical density value was noted for each region. Ten IMRT plans of head and neck carcinoma were used for verification using a dynamic IMRT technique, and evaluated using the gamma index method against the TPS-calculated dose distribution. Results A sensitometric curve was generated using a single film exposed at nine field regions to check quantitative dose verification of IMRT treatments. The scattered radiation factor was observed to decrease exponentially with increasing distance from the centre of each field region. The IMRT plans based on the calibration curve were verified using the gamma index method and found to be within acceptable criteria. Conclusion The single-film method proved to be superior to the traditional calibration method and produces fast daily film calibration for highly
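    The sensitometric-curve idea (fit dose as a function of optical density from a handful of exposed regions, then invert measured densities to dose) can be sketched as follows. The film response below is a toy square-root model, not EDR 2 data, and the third-order polynomial is simply a common fitting choice:

```python
import numpy as np

# Nine delivered doses (cGy), spanning roughly the range in the study
dose = np.array([10, 25, 50, 90, 140, 200, 260, 310, 362], float)
od = np.sqrt(dose) / 20.0           # toy film response (illustrative only)

# Fit dose as a third-order polynomial in optical density
coeffs = np.polyfit(od, dose, 3)

def od_to_dose(d):
    """Convert a measured optical density to dose via the fitted curve."""
    return np.polyval(coeffs, d)

recovered = od_to_dose(od)          # should reproduce the delivered doses
```

    In practice each of the nine regions would be cross-checked against the ionization-chamber dose, and the fitted curve then applied to the verification films.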

  8. Gaussian decomposition of high-resolution melt curve derivatives for measuring genome-editing efficiency

    PubMed Central

    Zaboikin, Michail; Freter, Carl

    2018-01-01

    We describe a method for measuring genome-editing efficiency from in silico analysis of high-resolution melt curve data. The melt curve data derived from amplicons of genome-edited or unmodified target sites were processed to remove the background fluorescent signal emanating from free fluorophore and then corrected for temperature-dependent quenching of the fluorescence of double-stranded DNA-bound fluorophore. Corrected data were normalized and numerically differentiated to obtain the first derivatives of the melt curves. These were then mathematically modeled as a sum, or superposition, of a minimal number of Gaussian components. Using Gaussian parameters determined by modeling the melt curve derivatives of unedited samples, we were able to model melt curve derivatives from genetically altered target sites, where the mutant population could be accommodated using an additional Gaussian component. From this, the proportion contributed by the mutant component in the target region amplicon could be accurately determined. Mutant component computations compared well with mutant frequency determination from next-generation sequencing data. The results were also consistent with our earlier studies that used difference curve areas from high-resolution melt curves for determining the efficiency of genome-editing reagents. The advantage of the described method is that it does not require calibration curves to estimate the proportion of mutants in amplicons of genome-edited target sites. PMID:29300734
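    When the component shapes (means and widths) are taken as known from unedited controls, the mutant proportion reduces to a linear least-squares problem for the component amplitudes. The sketch below uses made-up melt temperatures and a noiseless curve; the published method additionally fits the Gaussian parameters themselves:

```python
import numpy as np

def gaussian(t, mu, sigma):
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# Hypothetical melt-derivative components: wild-type duplex melting at
# 84 C, the edited (mutant) heteroduplex at 80 C (made-up values).
t = np.linspace(70.0, 95.0, 500)
wt = gaussian(t, 84.0, 1.2)
mut = gaussian(t, 80.0, 1.5)

# Simulated derivative curve of an edited sample containing 30% mutant
signal = 0.7 * wt + 0.3 * mut

# With the component shapes fixed (from unedited controls), the
# amplitudes are linear parameters -> ordinary least squares.
A = np.column_stack([wt, mut])
amps, *_ = np.linalg.lstsq(A, signal, rcond=None)
mutant_fraction = amps[1] / amps.sum()   # recovers 0.3
```

    Linearity in the amplitudes is what makes the estimate calibration-free: no standard curve of known mutant fractions is needed.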

  9. Analytical Problems and Suggestions in the Analysis of Behavioral Economic Demand Curves.

    PubMed

    Yu, Jihnhee; Liu, Liu; Collins, R Lorraine; Vincent, Paula C; Epstein, Leonard H

    2014-01-01

    Behavioral economic demand curves (Hursh, Raslear, Shurtleff, Bauman, & Simmons, 1988) are innovative approaches to characterize the relationships between consumption of a substance and its price. In this article, we investigate common analytical issues in the use of behavioral economic demand curves, which can cause inconsistent interpretations of demand curves, and then we provide methodological suggestions to address those analytical issues. We first demonstrate that log transformation with different added values for handling zeros changes model parameter estimates dramatically. Second, demand curves are often analyzed using an overparameterized model that results in an inefficient use of the available data and a lack of assessment of the variability among individuals. To address these issues, we apply a nonlinear mixed effects model based on multivariate error structures that has not been used previously to analyze behavioral economic demand curves in the literature. We also propose analytical formulas for the relevant standard errors of derived values such as Pmax, Omax, and elasticity. The proposed model stabilizes the derived values regardless of using different added increments and provides substantially smaller standard errors. We illustrate the data analysis procedure using data from a relative reinforcement efficacy study of simulated marijuana purchasing.
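    The first issue noted above (the added constant in the log transform changing parameter estimates dramatically) is easy to demonstrate. The numbers below are invented demand data with a zero at the highest price, and the slope of log consumption versus log price is used as a crude stand-in for the fitted demand parameter:

```python
import numpy as np

# Invented demand data: consumption falls with price, with a zero at
# the highest price (the problematic case for log transforms).
price = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
consumption = np.array([12.0, 9.0, 6.0, 3.0, 1.0, 0.0])

def log_log_slope(added):
    """Slope of log10(consumption + added) vs log10(price): a crude
    stand-in for the fitted demand-curve sensitivity parameter."""
    return np.polyfit(np.log10(price), np.log10(consumption + added), 1)[0]

s_small = log_log_slope(0.01)   # added value 0.01
s_large = log_log_slope(1.0)    # added value 1.0
# The two choices give dramatically different slopes.
```

    The divergence is driven entirely by the zero observation, which is exactly why the authors argue for a model-based treatment rather than an arbitrary added increment.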

  10. Stationary waves on nonlinear quantum graphs. II. Application of canonical perturbation theory in basic graph structures.

    PubMed

    Gnutzmann, Sven; Waltner, Daniel

    2016-12-01

    We consider exact and asymptotic solutions of the stationary cubic nonlinear Schrödinger equation on metric graphs. We focus on some basic example graphs. The asymptotic solutions are obtained using the canonical perturbation formalism developed in our earlier paper [S. Gnutzmann and D. Waltner, Phys. Rev. E 93, 032204 (2016)2470-004510.1103/PhysRevE.93.032204]. For closed example graphs (interval, ring, star graph, tadpole graph), we calculate spectral curves and show how the description of spectra reduces to known characteristic functions of linear quantum graphs in the low-intensity limit. Analogously for open examples, we show how nonlinear scattering of stationary waves arises and how it reduces to known linear scattering amplitudes at low intensities. In the short-wavelength asymptotics we discuss how genuine nonlinear effects may be described using the leading order of canonical perturbation theory: bifurcation of spectral curves (and the corresponding solutions) in closed graphs and multistability in open graphs.

  11. Calibration of z-axis linearity for arbitrary optical topography measuring instruments

    NASA Astrophysics Data System (ADS)

    Eifler, Matthias; Seewig, Jörg; Hering, Julian; von Freymann, Georg

    2015-05-01

    The calibration of the height axis of optical topography measurement instruments is essential for reliable topography measurements. A state-of-the-art technology for the calibration of the linearity and amplification of the z-axis is the use of step height artefacts. However, a proper calibration requires numerous step heights at different positions within the measurement range. The procedure is extensive and uses artificial surface structures that are not related to real measurement tasks. Given these limitations, approaches should be developed that work for arbitrary topography measurement devices and require little effort. Hence, we propose calibration artefacts that are based on the 3D-Abbott-Curve and represent desired surface characteristics. Further, real geometric structures are used as the initial point of the calibration artefact. Based on these considerations, an algorithm is introduced that transforms an arbitrarily measured surface into a measurement artefact for z-axis linearity. The method works both for profiles and topographies. To account for the effects of manufacturing, measurement, and evaluation, an iterative approach is chosen. The mathematical impact of these processes can be calculated with morphological signal processing. The artefact is manufactured with 3D laser lithography and characterized with different optical measurement devices. The introduced calibration routine can calibrate the entire z-axis range within one measurement and minimizes the required effort. With the results it is possible to locate potential linearity deviations and to adjust the z-axis. Results of different optical measurement principles are compared in order to evaluate the capabilities of the new artefact.

  12. Importance of Calibration Method in Central Blood Pressure for Cardiac Structural Abnormalities.

    PubMed

    Negishi, Kazuaki; Yang, Hong; Wang, Ying; Nolan, Mark T; Negishi, Tomoko; Pathan, Faraz; Marwick, Thomas H; Sharman, James E

    2016-09-01

    Central blood pressure (CBP) independently predicts cardiovascular risk, but calibration methods may affect the accuracy of central systolic blood pressure (CSBP). Standard central systolic blood pressure (Stan-CSBP) from peripheral waveforms is usually derived with calibration using brachial SBP and diastolic BP (DBP). However, calibration using oscillometric mean arterial pressure (MAP) and DBP (MAP-CSBP) is purported to provide a more accurate representation of true invasive CSBP. This study sought to determine which derived CSBP could more accurately discriminate cardiac structural abnormalities. A total of 349 community-based patients with risk factors (71±5 years, 161 males) had CSBP measured by brachial oscillometry (Mobil-O-Graph, IEM GmbH, Stolberg, Germany) using 2 calibration methods: MAP-CSBP and Stan-CSBP. Left ventricular hypertrophy (LVH) and left atrial dilatation (LAD) were measured based on standard guidelines. MAP-CSBP was higher than Stan-CSBP (149±20 vs. 128±15 mm Hg, P < 0.0001). Although they were modestly correlated (rho = 0.74, P < 0.001), the Bland-Altman plot demonstrated a large bias (21 mm Hg) and wide limits of agreement (24 mm Hg). In receiver operating characteristic (ROC) curve analyses, MAP-CSBP significantly better discriminated LVH compared with Stan-CSBP (area under the curve (AUC) 0.66 vs. 0.59, P = 0.0063) and brachial SBP (0.62, P = 0.027). Continuous net reclassification improvement (NRI) (P < 0.001) and integrated discrimination improvement (IDI) (P < 0.001) corroborated the superior discrimination of LVH by MAP-CSBP. Similarly, MAP-CSBP better distinguished LAD than Stan-CSBP (AUC 0.63 vs. 0.56, P = 0.005) and conventional brachial SBP (0.58, P = 0.006), whereas Stan-CSBP provided no better discrimination than conventional brachial BP (P = 0.09). CSBP is calibration dependent, and when oscillometric MAP and DBP are used, the derived CSBP is a better discriminator for cardiac structural abnormalities. © American Journal of Hypertension

  13. Piezoelectric trace vapor calibrator

    NASA Astrophysics Data System (ADS)

    Verkouteren, R. Michael; Gillen, Greg; Taylor, David W.

    2006-08-01

    The design and performance of a vapor generator for calibration and testing of trace chemical sensors are described. The device utilizes piezoelectric ink-jet nozzles to dispense and vaporize precisely known amounts of analyte solutions as monodisperse droplets onto a hot ceramic surface, where the generated vapors are mixed with air before exiting the device. Injected droplets are monitored by microscope with strobed illumination, and the reproducibility of droplet volumes is optimized by adjustment of the piezoelectric waveform parameters. Complete vaporization of the droplets occurs only across a 10°C window within the transition boiling regime of the solvent, and the minimum and maximum rates of trace analyte that may be injected and evaporated are determined by thermodynamic principles and empirical observations of droplet formation and stability. By varying solution concentrations, droplet injection rates, air flow, and the number of active nozzles, the system is designed to deliver—on demand—continuous vapor concentrations across more than six orders of magnitude (nominally 290 fg/l to 1.05 μg/l). Vapor pulses containing femtogram to microgram quantities of analyte may also be generated. Calibrated ranges of three explosive vapors at ng/l levels were generated by the device and directly measured by ion mobility spectrometry (IMS). These data demonstrate expected linear trends within the limited working range of the IMS detector and also exhibit subtle nonlinear behavior from the IMS measurement process.
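    The delivered vapor concentration in such a generator follows from a simple mass balance: droplet volume × solution concentration × injection rate, diluted by the air flow. A hedged sketch with hypothetical unit choices (pL droplets, ng/μl solutions, l/min air flow); the function name and example numbers are ours, not from the paper:

```python
def vapor_concentration(droplet_pl, conc_ng_per_ul, droplets_per_s,
                        air_flow_l_per_min):
    """Steady-state vapor concentration (ng/l) from an ink-jet vapor
    generator: analyte mass rate divided by the dilution air flow.
    Units: droplet volume in pL, solution concentration in ng/ul."""
    ng_per_droplet = droplet_pl * 1e-6 * conc_ng_per_ul  # 1 pL = 1e-6 ul
    ng_per_min = ng_per_droplet * droplets_per_s * 60.0
    return ng_per_min / air_flow_l_per_min

# 50 pL droplets of a 100 ng/ul solution at 100 drops/s into 1 l/min air
c = vapor_concentration(50.0, 100.0, 100.0, 1.0)   # -> 30.0 ng/l
```

    Sweeping any one of the four factors, as the abstract describes, is what spans the six-orders-of-magnitude output range.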

  14. Wavelength calibration with PMAS at 3.5 m Calar Alto Telescope using a tunable astro-comb

    NASA Astrophysics Data System (ADS)

    Chavez Boggio, J. M.; Fremberg, T.; Bodenmüller, D.; Sandin, C.; Zajnulina, M.; Kelz, A.; Giannone, D.; Rutowska, M.; Moralejo, B.; Roth, M. M.; Wysmolek, M.; Sayinc, H.

    2018-05-01

    On-sky tests conducted with an astro-comb using the Potsdam Multi-Aperture Spectrograph (PMAS) at the 3.5 m Calar Alto Telescope are reported. The proposed astro-comb approach is based on cascaded four-wave mixing between two lasers propagating through dispersion-optimized nonlinear fibers. This approach allows for a line spacing that can be continuously tuned over a broad range (from tens of GHz to beyond 1 THz), making it suitable for calibration of low-, medium-, and high-resolution spectrographs. The astro-comb provides 300 calibration lines, and its line spacing is tracked with a wavemeter having 0.3 pm absolute accuracy. First, we assess the accuracy of the Neon calibration by measuring the astro-comb lines with (Neon-calibrated) PMAS. The results are compared with the expected line positions from the wavemeter measurement, showing an offset of ∼5-20 pm (4%-16% of one resolution element). This might be the footprint of the accuracy limits of the actual Neon calibration. Then, the astro-comb performance as a calibrator is assessed through measurements of the Ca triplet from the stellar objects HD3765 and HD219538 as well as of the sky line spectrum, showing the advantage of the proposed astro-comb for wavelength calibration at any resolution.

  15. The simple procedure for the fluxgate magnetometers calibration

    NASA Astrophysics Data System (ADS)

    Marusenkov, Andriy

    2014-05-01

    The fluxgate magnetometers are widely used in geophysical investigations, including geomagnetic field monitoring at the global network of geomagnetic observatories as well as electromagnetic sounding of the Earth's crust conductivity. For these tasks the magnetometers have to be calibrated with an appropriate level of accuracy. As a particular case, ways to satisfy the recent requirements on the scaling and orientation errors of 1-second INTERMAGNET magnetometers are considered in this work. The goal of the present study was to choose a simple and reliable calibration method for estimating the scale factors and angular errors of three-axis magnetometers in the field. There are a large number of scalar calibration methods that use a free rotation of the sensor in the calibration field followed by complicated data processing procedures for the numerical solution of a set of high-order equations. The chosen approach also exploits the Earth's magnetic field as a calibrating signal but, in contrast to other methods, the sensor has to be oriented in particular positions with respect to the total field vector instead of being rotated freely. This allows the use of very simple and straightforward linear computation formulas and, as a result, yields more reliable estimates of the calibrated parameters. The scale factors are estimated by sequentially aligning each component of the sensor in two positions: parallel and anti-parallel to the Earth's magnetic field vector. The non-orthogonality angles between each pair of components are estimated after sequentially aligning the components at angles of +/- 45 and +/- 135 degrees of arc with respect to the total field vector. Owing to this four-position approach, the estimates of the non-orthogonality angles are invariant to the zero offsets and the nonlinearity of the transfer functions of the components.
    The experimental justification of the proposed method by means of the
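    The two-position scale-factor estimate described above can be written down directly: with readings taken parallel and anti-parallel to a field of known magnitude, the difference isolates the scale factor and the sum isolates the zero offset. A minimal sketch (the variable names and example numbers are ours, not the author's):

```python
def scale_and_offset(r_parallel, r_antiparallel, field_total):
    """Scale factor and zero offset of one fluxgate component from two
    readings taken with the axis parallel and anti-parallel to the
    Earth's field vector of known magnitude field_total (nT)."""
    scale = (r_parallel - r_antiparallel) / (2.0 * field_total)
    offset = (r_parallel + r_antiparallel) / 2.0
    return scale, offset

# Simulated readings for a component with scale 1.02 and offset 15 nT
# in a 49000 nT field: r+ = 1.02*49000 + 15, r- = -1.02*49000 + 15
scale, offset = scale_and_offset(49995.0, -49965.0, 49000.0)
```

    The four-position estimate of the non-orthogonality angles follows the same linear-combination logic, which is why it is insensitive to offsets and transfer-function nonlinearity.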

  16. Colorimetric calibration of wound photography with off-the-shelf devices

    NASA Astrophysics Data System (ADS)

    Bala, Subhankar; Sirazitdinova, Ekaterina; Deserno, Thomas M.

    2017-03-01

    Digital cameras are nowadays often used for photographic documentation in the medical sciences. However, the color reproducibility of the same objects suffers under different illuminations and lighting conditions. This variation in color representation is problematic when the images are used for segmentation and measurements based on color thresholds. In this paper, motivated by photographic follow-up of chronic wounds, we assess the impact of (i) gamma correction, (ii) white balancing, (iii) background unification, and (iv) reference card-based color correction. Automatic gamma correction and white balancing are applied to support the calibration procedure, where gamma correction is a nonlinear color transform. For unevenly illuminated images, non-uniform illumination correction is applied. In the last step, we apply colorimetric calibration using a reference color card of 24 patches with known colors. A lattice detection algorithm is used for locating the card. The least squares algorithm is applied for affine color calibration in the RGB model. We have tested the algorithm on images with seven different types of illumination, with and without flash, using three different off-the-shelf cameras including smartphones. We analyzed the spread of the resulting color values of selected color patches before and after applying the calibration. Additionally, we checked the individual contributions of the different steps of the whole calibration process. Using all steps, we were able to achieve a maximum of 81% reduction in the standard deviation of color patch values in the resulting images compared to the original images. This supports manual as well as automatic quantitative wound assessment with off-the-shelf devices.
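    The reference-card step (least-squares affine color calibration in RGB) can be sketched in a few lines. The 24-patch values and the simulated camera distortion below are synthetic, and the paper's lattice detection and illumination-correction steps are omitted:

```python
import numpy as np

def fit_affine_color(measured, reference):
    """Least-squares affine map: reference ~ [measured, 1] @ X.
    measured, reference: (n, 3) RGB arrays (n patches, e.g. 24)."""
    A = np.hstack([measured, np.ones((measured.shape[0], 1))])
    X, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return X                                   # shape (4, 3)

def apply_affine_color(X, rgb):
    return np.hstack([rgb, np.ones((rgb.shape[0], 1))]) @ X

# Synthetic 24-patch card and a simulated camera with channel
# crosstalk plus a per-channel offset (all values invented).
rng = np.random.default_rng(1)
reference = rng.uniform(0.0, 255.0, (24, 3))
M = np.array([[0.9, 0.05, 0.0], [0.1, 0.8, 0.05], [0.0, 0.1, 0.85]])
measured = reference @ M + np.array([5.0, -3.0, 8.0])

X = fit_affine_color(measured, reference)
corrected = apply_affine_color(X, measured)    # matches reference
```

    An affine map (12 parameters) can absorb crosstalk and offsets but not a nonlinear gamma, which is why gamma correction precedes this step in the pipeline above.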

  17. Response analysis of curved bridge with unseating failure control system under near-fault ground motions

    NASA Astrophysics Data System (ADS)

    Zuo, Ye; Sun, Guangjun; Li, Hongjing

    2018-01-01

    Under the action of near-fault ground motions, curved bridges are prone to pounding, local damage of bridge components, and even unseating. A multi-scale fine finite element model of a typical three-span curved bridge is established by considering the elastic-plastic behavior of the piers and the pounding effect of adjacent girders. The nonlinear time-history method is used to study the seismic response of the curved bridge equipped with an unseating failure control system under near-fault ground motion. An in-depth analysis is carried out to evaluate the control effect of the proposed unseating failure control system. The results indicate that under near-fault ground motion, the seismic response of the curved bridge is strong. The unseating failure control system performs effectively in reducing the pounding force between adjacent girders and the probability of deck unseating.

  18. Calibration of GafChromic XR-RV3 radiochromic film for skin dose measurement using standardized x-ray spectra and a commercial flatbed scanner

    PubMed Central

    McCabe, Bradley P.; Speidel, Michael A.; Pike, Tina L.; Van Lysel, Michael S.

    2011-01-01

    Purpose: In this study, newly formulated XR-RV3 GafChromic® film was calibrated with National Institute of Standards and Technology (NIST) traceability for measurement of patient skin dose during fluoroscopically guided interventional procedures. Methods: The film was calibrated free-in-air to air kerma levels between 15 and 1100 cGy using four moderately filtered x-ray beam qualities (60, 80, 100, and 120 kVp). The calibration films were scanned with a commercial flatbed document scanner. Film reflective density-to-air kerma calibration curves were constructed for each beam quality, with both the orange and white sides facing the x-ray source. A method to correct for nonuniformity in scanner response (up to 25% depending on position) was developed to enable dose measurement with large films. The response of XR-RV3 film under patient backscattering conditions was examined using on-phantom film exposures and Monte Carlo simulations. Results: The response of XR-RV3 film to a given air kerma depended on kVp and film orientation. For a 200 cGy air kerma exposure with the orange side of the film facing the source, the film response increased by 20% from 60 to 120 kVp. At 500 cGy, the increase was 12%. When 500 cGy exposures were performed with the white side facing the x-ray source, the film response increased by 4.0% (60 kVp) to 9.9% (120 kVp) compared to the orange-facing orientation. On-phantom film measurements and Monte Carlo simulations show that using a NIST-traceable free-in-air calibration curve to determine air kerma in the presence of backscatter results in an error from 2% up to 8% depending on beam quality. The combined uncertainty in the air kerma measurement from the calibration curves and scanner nonuniformity correction was ±7.1% (95% C.I.). The film showed notable stability. Calibrations of film and scanner separated by 1 yr differed by 1.0%. 
Conclusions: XR-RV3 radiochromic film response to a given air kerma shows dependence on beam quality and film

  19. Calibration of GafChromic XR-RV3 radiochromic film for skin dose measurement using standardized x-ray spectra and a commercial flatbed scanner.

    PubMed

    McCabe, Bradley P; Speidel, Michael A; Pike, Tina L; Van Lysel, Michael S

    2011-04-01

    In this study, newly formulated XR-RV3 GafChromic film was calibrated with National Institute of Standards and Technology (NIST) traceability for measurement of patient skin dose during fluoroscopically guided interventional procedures. The film was calibrated free-in-air to air kerma levels between 15 and 1100 cGy using four moderately filtered x-ray beam qualities (60, 80, 100, and 120 kVp). The calibration films were scanned with a commercial flatbed document scanner. Film reflective density-to-air kerma calibration curves were constructed for each beam quality, with both the orange and white sides facing the x-ray source. A method to correct for nonuniformity in scanner response (up to 25% depending on position) was developed to enable dose measurement with large films. The response of XR-RV3 film under patient backscattering conditions was examined using on-phantom film exposures and Monte Carlo simulations. The response of XR-RV3 film to a given air kerma depended on kVp and film orientation. For a 200 cGy air kerma exposure with the orange side of the film facing the source, the film response increased by 20% from 60 to 120 kVp. At 500 cGy, the increase was 12%. When 500 cGy exposures were performed with the white side facing the x-ray source, the film response increased by 4.0% (60 kVp) to 9.9% (120 kVp) compared to the orange-facing orientation. On-phantom film measurements and Monte Carlo simulations show that using a NIST-traceable free-in-air calibration curve to determine air kerma in the presence of backscatter results in an error from 2% up to 8% depending on beam quality. The combined uncertainty in the air kerma measurement from the calibration curves and scanner nonuniformity correction was +/- 7.1% (95% C.I.). The film showed notable stability. Calibrations of film and scanner separated by 1 yr differed by 1.0%. XR-RV3 radiochromic film response to a given air kerma shows dependence on beam quality and film orientation. The presence of

  20. Implicit multiplane 3D camera calibration matrices for stereo image processing

    NASA Astrophysics Data System (ADS)

    McKee, James W.; Burgett, Sherrie J.

    1997-12-01

    By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple, parallel calibration planes (usually between four and seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping of points in 3D space to points in image (camera) space and vice versa that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate the parameters for the matrices. Two of the matrices are used to map 3D points in object space to 2D points on the CCD camera image plane. The other two matrices are used to map 2D points on the image plane to points on user-defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to calculate and use as many parameters as needed to obtain a user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters, which can limit the obtainable accuracy, and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera to make 2D measurements or to calibrate stereo cameras to make 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and are implemented in MATLAB® (a registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high-speed payout (unwinding) of optical fiber off a bobbin. This requires recording and analyzing high speed (5 microsecond exposure time
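    The matrix-multiplication flavor of the mapping can be illustrated with the simplest possible variant: a least-squares affine camera matrix fitted from matched 3D-2D points. This toy omits the paper's multiplane structure and its lens-distortion compensation:

```python
import numpy as np

def fit_projection(points3d, points2d):
    """Least-squares affine camera matrix P (shape (4, 2)) such that
    [u, v] ~ [x, y, z, 1] @ P for matched 3D-2D point pairs."""
    n = points3d.shape[0]
    A = np.hstack([points3d, np.ones((n, 1))])   # homogeneous 3D points
    P, *_ = np.linalg.lstsq(A, points2d, rcond=None)
    return P

rng = np.random.default_rng(2)
pts3 = rng.uniform(-1.0, 1.0, (20, 3))           # synthetic object points
P_true = rng.uniform(-1.0, 1.0, (4, 2))          # hypothetical true mapping
pts2 = np.hstack([pts3, np.ones((20, 1))]) @ P_true
P = fit_projection(pts3, pts2)                   # recovers P_true
```

    Because the model is linear in its parameters, adding more parameters (as the paper advocates for higher accuracy) keeps the fit a linear least-squares problem rather than a nonlinear one.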

  1. Unidirectional growth, rocking curve, linear and nonlinear optical properties of LPHCl single crystals

    NASA Astrophysics Data System (ADS)

    Kumar, P. Ramesh; Gunaseelan, R.; Raj, A. Antony; Selvakumar, S.; Sagayaraj, P.

    2012-06-01

    A nonlinear optical amino-acid single crystal of L-phenylalanine hydrochloride (LPHCl) was successfully grown by the unidirectional Sankaranarayanan-Ramasamy (SR) method under ambient conditions for the first time. The grown single crystal was subjected to different characterization analyses in order to assess its suitability for device fabrication. The crystalline perfection was evaluated using high-resolution X-ray diffractometry. It is evident from the optical absorption study that the crystal has excellent transmission in the entire visible region, with a lower cut-off wavelength around 290 nm.

  2. Polarimetric SAR calibration experiment using active radar calibrators

    NASA Astrophysics Data System (ADS)

    Freeman, Anthony; Shen, Yuhsyen; Werner, Charles L.

    1990-03-01

    Active radar calibrators are used to derive both the amplitude and phase characteristics of a multichannel polarimetric SAR from the complex image data. Results are presented from an experiment carried out using the NASA/JPL DC-8 aircraft SAR over a calibration site at Goldstone, California. As part of the experiment, polarimetric active radar calibrators (PARCs) with adjustable polarization signatures were deployed. Experimental results demonstrate that the PARCs can be used to calibrate polarimetric SAR images successfully. Restrictions on the application of the PARC calibration procedure are discussed.

  3. Polarimetric SAR calibration experiment using active radar calibrators

    NASA Technical Reports Server (NTRS)

    Freeman, Anthony; Shen, Yuhsyen; Werner, Charles L.

    1990-01-01

    Active radar calibrators are used to derive both the amplitude and phase characteristics of a multichannel polarimetric SAR from the complex image data. Results are presented from an experiment carried out using the NASA/JPL DC-8 aircraft SAR over a calibration site at Goldstone, California. As part of the experiment, polarimetric active radar calibrators (PARCs) with adjustable polarization signatures were deployed. Experimental results demonstrate that the PARCs can be used to calibrate polarimetric SAR images successfully. Restrictions on the application of the PARC calibration procedure are discussed.

  4. Radiometric calibration of hyper-spectral imaging spectrometer based on optimizing multi-spectral band selection

    NASA Astrophysics Data System (ADS)

    Sun, Li-wei; Ye, Xin; Fang, Wei; He, Zhen-lei; Yi, Xiao-long; Wang, Yu-peng

    2017-11-01

    Hyper-spectral imaging spectrometers have high spatial and spectral resolution, so their radiometric calibration requires knowledge of the calibration sources at correspondingly high spectral resolution. To satisfy this requirement, an on-orbit radiometric calibration chain is designed in this paper, based on high-accuracy spectral inversion of the calibration light source. A genetic algorithm program is used to optimize the channel design of the transfer radiometer while accounting for degradation of the halogen lamp, thus realizing high-accuracy inversion of the spectral curve over the whole working time. The experimental results show that the average root mean squared error is 0.396%, the maximum root mean squared error is 0.448%, and the relative errors at all wavelengths are within 1% in the spectral range from 500 nm to 900 nm during 100 h of operating time. The design lays a foundation for the high-accuracy calibration of imaging spectrometers.

  5. Medical color displays and their color calibration: investigations of various calibration methods, tools, and potential improvement in color difference ΔE

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Hashmi, Syed F.; Dallas, William J.; Krupinski, Elizabeth A.; Rehm, Kelly; Fan, Jiahua

    2010-08-01

    Our laboratory has investigated the efficacy of a suite of color calibration and monitor profiling packages which employ a variety of color measurement sensors. Each of the methods computes gamma correction tables for the red, green, and blue color channels of a monitor that attempt to: a) match a desired luminance range and tone reproduction curve; and b) maintain a target neutral point across the range of grey values. All of the methods examined here produce International Color Consortium (ICC) profiles that describe the color rendering capabilities of the monitor after calibration. Color profiles incorporate a transfer matrix that establishes the relationship between RGB driving levels and the International Commission on Illumination (CIE) XYZ (tristimulus) values of the resulting on-screen color; the matrix is developed by displaying color patches of known RGB values on the monitor and measuring the tristimulus values with a sensor. The number and chromatic distribution of color patches varies across methods and is usually not under user control. In this work we examine the effect of employing differing calibration and profiling methods on the rendition of color images. A series of color patches encoded in sRGB color space were presented on the monitor using color-management software that utilized the ICC profile produced by each method. The patches were displayed on the calibrated monitor and measured with a Minolta CS200 colorimeter. Differences in intended and achieved luminance and chromaticity were computed using the CIE DE2000 color-difference metric, in which a value of ΔE = 1 is generally considered to be approximately one just noticeable difference (JND) in color. We observed between one and 17 JNDs for individual colors, depending on calibration method and target. As an extension of this fundamental work [1], we further improved our calibration method by defining concrete calibration parameters for the display, using the NEC wide-gamut puck, and making sure
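    The color-difference scoring used above can be sketched as follows. The study uses CIE DE2000, whose full formula is lengthy; as a simplified, hypothetical stand-in this example computes the older CIE76 metric (Euclidean distance in CIELAB), which for small differences is on the same order. The patch values are invented.

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in (L*, a*, b*) space.
    Roughly 1 unit corresponds to one just-noticeable difference (JND)."""
    return math.dist(lab1, lab2)

intended = (53.2, 80.1, 67.2)   # L*, a*, b* of an intended red patch
measured = (52.8, 79.0, 66.5)   # what the colorimeter actually read
print(f"dE76 = {delta_e_76(intended, measured):.2f}")
```

A calibrated display that reproduces each patch to within about one unit would be at the threshold of visual indistinguishability; the 1-to-17 JND spread reported above corresponds to differences well above that threshold for some methods.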

  6. Non-isothermal elastoviscoplastic analysis of planar curved beams

    NASA Technical Reports Server (NTRS)

    Simitses, G. J.; Carlson, R. L.; Riff, R.

    1988-01-01

    The development of a general mathematical model and solution methodologies to examine the behavior of thin structural elements such as beams, rings, and arches, subjected to large nonisothermal elastoviscoplastic deformations, is presented. Thus, geometric as well as material-type nonlinearities of higher order are present in the analysis. For this purpose a complete, true ab initio rate theory of kinematics and kinetics for thin bodies, without any restriction on the magnitude of the transformation, is presented. A previously formulated elasto-thermo-viscoplastic material constitutive law is employed in the analysis. The methodology is demonstrated through three different straight and curved beam problems.

  7. A non-linear steady state characteristic performance curve for medium temperature solar energy collectors

    NASA Astrophysics Data System (ADS)

    Eames, P. C.; Norton, B.

    A numerical simulation model was employed to investigate the effects of ambient temperature and insolation on the efficiency of compound parabolic concentrating solar energy collectors. The limitations of presently-used collector performance characterization curves were investigated and a new approach proposed.
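    The limitation of linear characterization curves can be illustrated with the standard steady-state efficiency model. This is a hypothetical sketch: the widely used linear curve is eta = eta0 - a1*(T_m - T_a)/G, and a second-order term captures temperature-dependent heat loss, which matters at the medium temperatures reached by concentrating collectors. The coefficients below are invented for demonstration, not taken from the paper.

```python
def eta_linear(dT, G, eta0=0.72, a1=2.5):
    """Linear characteristic curve: efficiency vs. reduced temperature."""
    return eta0 - a1 * dT / G

def eta_quadratic(dT, G, eta0=0.72, a1=2.5, a2=0.02):
    """Second-order curve: adds a dT**2 heat-loss term to the same model."""
    return eta0 - (a1 * dT + a2 * dT**2) / G

G = 800.0                      # insolation, W/m^2
for dT in (20, 60, 100):       # collector-minus-ambient temperature, K
    print(dT, eta_linear(dT, G), eta_quadratic(dT, G))
```

At small temperature differences the two curves nearly coincide, but at 100 K the quadratic term removes a further 0.25 from the predicted efficiency, which is why a linear fit calibrated at low temperature can badly overpredict medium-temperature performance.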

  8. Data analysis and calibration for a bulk-refractive-index-compensated surface plasmon resonance affinity sensor

    NASA Astrophysics Data System (ADS)

    Chinowsky, Timothy M.; Yee, Sinclair S.

    2002-02-01

    Surface plasmon resonance (SPR) affinity sensing, the problem of bulk refractive index (RI) interference in SPR sensing, and a sensor developed to overcome this problem are briefly reviewed. The sensor uses a design based on Texas Instruments' Spreeta SPR sensor to simultaneously measure both bulk and surface RI. The bulk RI measurement is then used to compensate the surface measurement and remove the effects of bulk RI interference. To achieve accurate compensation, robust data analysis and calibration techniques are necessary. Simple linear data analysis techniques derived from measurements of the sensor response were found to provide a versatile, low noise method for extracting measurements of bulk and surface refractive index from the raw sensor data. Automatic calibration using RI gradients was used to correct the linear estimates, enabling the sensor to produce accurate data even when the sensor has a complicated nonlinear response which varies with time. The calibration procedure is described, and the factors influencing calibration accuracy are discussed. Data analysis and calibration principles are illustrated with an experiment in which sucrose and detergent solutions are used to produce changes in bulk and surface RI, respectively.

  9. Results of the 1999 JPL Balloon Flight Solar Cell Calibration Program

    NASA Technical Reports Server (NTRS)

    Anspaugh, B. E.; Mueller, R. L.; Weiss, R. S.

    2000-01-01

    The 1999 solar cell calibration balloon flight campaign consisted of two flights, which occurred on June 14, 1999, and July 6, 1999. All objectives of the flight program were met. Fifty-seven modules were carried to an altitude of approximately 120,000 ft (36.6 km). Full I-V curves were measured on five of these modules, and output at a fixed load was measured on forty-three modules (forty-five cells), with some modules repeated on the second flight. These data were corrected to 28 °C and to 1 AU (1.496 × 10^8 km). The calibrated cells have been returned to their owners and can now be used as reference standards in simulator testing of cells and arrays.

  10. Fractional differential equations based modeling of microbial survival and growth curves: model development and experimental validation.

    PubMed

    Kaur, A; Takhar, P S; Smith, D M; Mann, J E; Brashears, M M

    2008-10-01

    A fractional differential equations (FDEs)-based theory involving 1- and 2-term equations was developed to predict the nonlinear survival and growth curves of foodborne pathogens. It is interesting to note that the solution of the 1-term FDE leads to the Weibull model. Nonlinear regression (Gauss-Newton method) was performed to calculate the parameters of the 1-term and 2-term FDEs. The experimental inactivation data of a Salmonella cocktail in ground turkey breast, ground turkey thigh, and pork shoulder, and of a cocktail of Salmonella, E. coli, and Listeria monocytogenes in ground beef exposed to isothermal cooking conditions of 50 to 66 °C, were used for validation. To evaluate the performance of the 2-term FDE in predicting growth curves, growth of Salmonella Typhimurium, Salmonella Enteritidis, and background flora in ground pork and boneless pork chops, and of E. coli O157:H7 in ground beef, in the temperature range of 22.2 to 4.4 °C was chosen. A program was written in MATLAB to predict the model parameters and survival and growth curves. The 2-term FDE was more successful in describing the complex shapes of microbial survival and growth curves as compared to the linear and Weibull models. Predicted curves of the 2-term FDE had higher magnitudes of R² (0.89 to 0.99) and lower magnitudes of root mean square error (0.0182 to 0.5461) for all experimental cases in comparison to the linear and Weibull models. This model was capable of predicting the tails in survival curves, which was not possible using the Weibull and linear models. The developed model can be used for other foodborne pathogens in a variety of food products to study destruction and growth behavior.
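    The 1-term case above reduces to the Weibull survival model, log10(N/N0) = -(t/delta)**p, fitted by Gauss-Newton nonlinear regression. As a hypothetical sketch (the paper's program is in MATLAB, and its data are not reproduced here), this example implements a minimal Gauss-Newton loop on synthetic, noiseless log-reduction data.

```python
import numpy as np

def weibull(t, delta, p):
    """Weibull survival model: log10 reduction = -(t/delta)**p."""
    return -(t / delta) ** p

def jacobian(t, delta, p):
    base = (t / delta) ** p
    d_delta = base * p / delta            # d/d(delta) of -(t/delta)**p
    d_p = -base * np.log(t / delta)       # d/dp of -(t/delta)**p
    return np.column_stack([d_delta, d_p])

def gauss_newton(t, y, delta, p, iters=20):
    """Minimal Gauss-Newton iteration for the two Weibull parameters."""
    for _ in range(iters):
        r = y - weibull(t, delta, p)      # residuals
        J = jacobian(t, delta, p)
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        delta, p = delta + step[0], p + step[1]
    return delta, p

t = np.array([1.0, 2, 4, 6, 8, 10])      # heating times, minutes (invented)
y = weibull(t, 3.0, 1.4)                  # synthetic log-reductions
d_hat, p_hat = gauss_newton(t, y, delta=2.0, p=1.0)
print(f"delta = {d_hat:.3f}, p = {p_hat:.3f}")
```

On this noiseless data the iteration converges back to the generating parameters; with real inactivation data the same loop yields least-squares estimates, and the 2-term FDE adds a second such term to capture tails.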

  11. Bayesian model calibration of ramp compression experiments on Z

    NASA Astrophysics Data System (ADS)

    Brown, Justin; Hund, Lauren

    2017-06-01

    Bayesian model calibration (BMC) is a statistical framework to estimate inputs for a computational model in the presence of multiple uncertainties, making it well suited to dynamic experiments which must be coupled with numerical simulations to interpret the results. Often, dynamic experiments are diagnosed using velocimetry and this output can be modeled using a hydrocode. Several calibration issues unique to this type of scenario including the functional nature of the output, uncertainty of nuisance parameters within the simulation, and model discrepancy identifiability are addressed, and a novel BMC process is proposed. As a proof of concept, we examine experiments conducted on Sandia National Laboratories' Z-machine which ramp compressed tantalum to peak stresses of 250 GPa. The proposed BMC framework is used to calibrate the cold curve of Ta (with uncertainty), and we conclude that the procedure results in simple, fast, and valid inferences. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  12. Spectro-spatial analysis of wave packet propagation in nonlinear acoustic metamaterials

    NASA Astrophysics Data System (ADS)

    Zhou, W. J.; Li, X. P.; Wang, Y. S.; Chen, W. Q.; Huang, G. L.

    2018-01-01

    The objective of this work is to analyze wave packet propagation in weakly nonlinear acoustic metamaterials and reveal the interior nonlinear wave mechanism through spectro-spatial analysis. The spectro-spatial analysis is based on full-scale transient analysis of the finite system, by which dispersion curves are generated from the transmitted waves and also verified by the perturbation method (the L-P method). We found that the spectro-spatial analysis can provide detailed information about the solitary wave in short-wavelength region which cannot be captured by the L-P method. It is also found that the optical wave modes in the nonlinear metamaterial are sensitive to the parameters of the nonlinear constitutive relation. Specifically, a significant frequency shift phenomenon is found in the middle-wavelength region of the optical wave branch, which makes this frequency region behave like a band gap for transient waves. This special frequency shift is then used to design a direction-biased waveguide device, and its efficiency is shown by numerical simulations.

  13. Dose Calibration of the ISS-RAD Fast Neutron Detector

    NASA Technical Reports Server (NTRS)

    Zeitlin, C.

    2015-01-01

    The ISS-RAD instrument has been fabricated by Southwest Research Institute and delivered to NASA for flight to the ISS in late 2015 or early 2016. ISS-RAD is essentially two instruments that share a common interface to ISS. The two instruments are the Charged Particle Detector (CPD), which is very similar to the MSL-RAD detector on Mars, and the Fast Neutron Detector (FND), which is a boron-loaded plastic scintillator with readout optimized for the 0.5 to 10 MeV energy range. As the FND is completely new, it has been necessary to develop methodology to allow it to be used to measure the neutron dose and dose equivalent. This talk will focus on the methods developed and their implementation using calibration data obtained in quasi-monoenergetic (QMN) neutron fields at the PTB facility in Braunschweig, Germany. The QMN data allow us to determine an approximate response function, from which we estimate dose and dose equivalent contributions per detected neutron as a function of the pulse height. We refer to these as the "pSv per count" curves for dose equivalent and the "pGy per count" curves for dose. The FND is required to provide a dose equivalent measurement with an accuracy of ±10% of the known value in a calibrated AmBe field. Four variants of the analysis method were developed, corresponding to two different approximations of the pSv per count curve, and two different implementations, one for real-time analysis onboard ISS and one for ground analysis. We will show that the preferred method, when applied in either real-time or ground analysis, yields good accuracy for the AmBe field. We find that the real-time algorithm is more susceptible to chance-coincidence background than is the algorithm used in ground analysis, so that the best estimates will come from the latter.
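    The "pSv per count" idea reduces on-orbit processing to a weighted sum: once calibration assigns a dose-equivalent contribution per detected neutron as a function of pulse height, the measured dose equivalent is the histogram of events weighted by that curve. All numbers in this sketch are invented, not FND calibration values.

```python
# Hypothetical pulse-height bins and per-count weights (invented values).
pulse_height_bins = [0.5, 1.0, 2.0, 4.0, 8.0]   # bin centers, arbitrary units
psv_per_count =     [1.2, 2.0, 3.5, 6.0, 9.0]   # pSv per detected neutron
counts =            [400, 250, 120,  40,  10]   # events recorded in each bin

# Dose equivalent is the count-weighted sum over the pulse-height histogram.
dose_equiv_psv = sum(c * w for c, w in zip(counts, psv_per_count))
print(f"dose equivalent: {dose_equiv_psv:.1f} pSv")
```

The same structure with a "pGy per count" curve yields absorbed dose; the four method variants described above differ only in how the weight curve is approximated and where (onboard or on the ground) the sum is evaluated.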

  14. A Comparative Investigation of the Combined Effects of Pre-Processing, Wavelength Selection, and Regression Methods on Near-Infrared Calibration Model Performance.

    PubMed

    Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N

    2017-07-01

    Near-infrared (NIR) spectroscopy is being widely used in various fields ranging from pharmaceutics to the food industry for analyzing chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include available physical interpretation of spectral data, nondestructive nature and high speed of measurements, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others; selection of those wavelengths that contribute useful information; and identification of suitable calibration models using linear/nonlinear regression. Several methods have been developed for each of these three aspects and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies of the interactions among these three aspects, which can shed light on what role each aspect plays in the calibration and how to combine the various methods of each aspect to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely, orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC) and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely, stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); and four popular regression methods, namely, partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR).
The comparative study indicates that, in general, pre-processing of spectral data can play a significant

  15. Calibration methods influence quantitative material decomposition in photon-counting spectral CT

    NASA Astrophysics Data System (ADS)

    Curtis, Tyler E.; Roeder, Ryan K.

    2017-03-01

    Photon-counting detectors and nanoparticle contrast agents can potentially enable molecular imaging and material decomposition in computed tomography (CT). Material decomposition has been investigated using both simulated and acquired data sets. However, the effect of calibration methods on material decomposition has not been systematically investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on quantitative material decomposition. A commercially available photon-counting spectral micro-CT (MARS Bioimaging) was used to acquire images with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material basis matrix values were determined using multiple linear regression models and material decomposition was performed using a maximum a posteriori estimator. The accuracy of quantitative material decomposition was evaluated by the root mean squared error (RMSE), specificity, sensitivity, and area under the curve (AUC). An increased maximum concentration (range) in the calibration significantly improved RMSE, specificity and AUC. The effects of an increased number of concentrations in the calibration were not statistically significant for the conditions in this study. The overall results demonstrated that the accuracy of quantitative material decomposition in spectral CT is significantly influenced by calibration methods, which must therefore be carefully considered for the intended diagnostic imaging application.
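    The calibration-then-decomposition chain can be sketched in its simplest linear form: attenuation in each energy bin is modeled as a linear mix of basis materials, with the basis matrix obtained by regression on phantom scans at known concentrations. This is a hypothetical noiseless illustration (the study uses a maximum a posteriori estimator and real phantom data); all numbers are invented.

```python
import numpy as np

# Basis matrix M: attenuation per unit concentration.
# Rows = five energy bins, columns = (iodine-like contrast agent, water).
M = np.array([[0.90, 0.20],
              [0.55, 0.19],
              [1.40, 0.18],   # bin straddling the contrast agent k-edge
              [0.80, 0.17],
              [0.60, 0.16]])

true_conc = np.array([4.0, 1.0])     # mg/mL agent, relative water fraction
measured = M @ true_conc             # noiseless five-bin measurement

# Decomposition: least-squares inversion of the calibrated basis matrix.
est_conc, *_ = np.linalg.lstsq(M, measured, rcond=None)
rmse = np.sqrt(np.mean((est_conc - true_conc) ** 2))
print(est_conc.round(3), f"RMSE = {rmse:.2e}")
```

In this noiseless case the inversion is exact; with noise, the conditioning of M (set by the calibration concentrations and bin placement around the k-edge) governs the RMSE, which is why the calibration range matters in the study above.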

  16. Wavelength calibration of dispersive near-infrared spectrometer using relative k-space distribution with low coherence interferometer

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2016-05-01

    The commonly employed calibration methods for laboratory-made spectrometers have several disadvantages, including poor calibration when the number of characteristic spectral peaks is low. Therefore, we present a wavelength calibration method using relative k-space distribution with low coherence interferometer. The proposed method utilizes an interferogram with a perfect sinusoidal pattern in k-space for calibration. Zero-crossing detection extracts the k-space distribution of a spectrometer from the interferogram in the wavelength domain, and a calibration lamp provides information about absolute wavenumbers. To assign wavenumbers, wavelength-to-k-space conversion is required for the characteristic spectrum of the calibration lamp with the extracted k-space distribution. Then, the wavelength calibration is completed by inverse conversion of the k-space into wavelength domain. The calibration performance of the proposed method was demonstrated with two experimental conditions of four and eight characteristic spectral peaks. The proposed method elicited reliable calibration results in both cases, whereas the conventional method of third-order polynomial curve fitting failed to determine wavelengths in the case of four characteristic peaks. Moreover, for optical coherence tomography imaging, the proposed method could improve axial resolution due to higher suppression of sidelobes in point spread function than the conventional method. We believe that our findings can improve not only wavelength calibration accuracy but also resolution for optical coherence tomography.
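    The core of the relative k-space method can be sketched as follows: a low-coherence interferogram is a pure sinusoid in wavenumber k, so its zero crossings are equally spaced in k even when they are unevenly spaced in pixel index; detecting the crossings therefore recovers the spectrometer's pixel-to-k distribution up to scale and offset. This is a hypothetical simulation with an invented dispersion curve, not the authors' hardware.

```python
import numpy as np

pixels = np.arange(2048)
# Simulated nonlinear pixel-to-k mapping of the spectrometer (unknown to us).
k_true = 6.0 + 1.0e-3 * pixels + 8.0e-8 * pixels**2
signal = np.cos(200.0 * k_true)        # interferogram: sinusoidal in k

# Zero-crossing detection with sub-pixel linear interpolation.
s = np.sign(signal)
idx = np.nonzero(s[:-1] * s[1:] < 0)[0]
frac = signal[idx] / (signal[idx] - signal[idx + 1])
crossings = idx + frac                 # pixel positions of the crossings

# Crossings are uniform in k, so assign them consecutive relative-k values.
rel_k = np.arange(len(crossings), dtype=float)
k_est = np.interp(pixels, crossings, rel_k)   # relative k for every pixel

# The recovered distribution should be affine in the true k between the
# first and last crossing, i.e. near-perfectly correlated with it.
lo, hi = int(crossings[0]) + 1, int(crossings[-1])
corr = np.corrcoef(k_est[lo:hi], k_true[lo:hi])[0, 1]
print(f"correlation with true k: {corr:.6f}")
```

A calibration lamp then pins the absolute wavenumbers of a few pixels, fixing the scale and offset of the relative distribution; note the sketch needs no characteristic spectral peaks at all, which is the advantage claimed above for peak-poor lamps.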

  17. Long-Term Stability Assessment of Sonoran Desert for Vicarious Calibration of GOES-R

    NASA Astrophysics Data System (ADS)

    Kim, W.; Liang, S.; Cao, C.

    2012-12-01

    Vicarious calibration refers to calibration techniques that do not depend on onboard calibration devices. Although sensors and onboard calibration devices undergo rigorous validation processes before launch, the performance of sensors often degrades after launch due to exposure to the harsh space environment and the aging of devices. Such in-flight changes can be identified and adjusted through vicarious calibration activities, where the sensor degradation is measured in reference to exterior calibration sources such as the Sun, the Moon, and the Earth's surface. The Sonoran desert is one of the best calibration sites in North America available for vicarious calibration of the GOES-R satellite. To accurately calibrate sensors onboard the GOES-R satellite (e.g., the advanced baseline imager (ABI)), the temporal stability of the Sonoran desert needs to be assessed precisely. However, short-/mid-term variations in top-of-atmosphere (TOA) reflectance caused by meteorological variables such as water vapor amount and aerosol loading are often difficult to remove, complicating the use of the TOA reflectance time series for stability assessment of the site. In this paper, we address this issue of normalization of the TOA reflectance time series using a time series analysis algorithm: the seasonal trend decomposition procedure based on LOESS (STL) (Cleveland et al., 1990). The algorithm is basically a collection of smoothing filters which leads to decomposition of a time series into three additive components: seasonal, trend, and remainder. Since this non-linear technique is capable of extracting seasonal patterns in the presence of trend changes, the seasonal variation can be effectively identified in a time series of remote sensing data subject to various environmental changes.
The experiment results performed with Landsat 5 TM data show that the decomposition results acquired for the Sonoran Desert area produce normalized series that have much less uncertainty than those
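    The trend + seasonal + remainder idea can be sketched with a simple additive decomposition. STL itself uses iterated LOESS smoothing; as a hypothetical, self-contained stand-in this example uses a centered moving-average trend and phase-averaged seasonal component on an invented reflectance series, not Landsat data.

```python
import numpy as np

period = 12                                    # monthly observations
t = np.arange(8 * period)
rng = np.random.default_rng(1)
series = (0.30                                 # mean TOA reflectance
          + 0.0002 * t                         # slow drift (trend)
          + 0.03 * np.sin(2 * np.pi * t / period)   # seasonal cycle
          + rng.normal(0, 0.003, t.size))      # remainder (noise)

# Trend: centered moving average spanning one full period (period is even,
# so the two end weights are halved to keep the window symmetric).
kernel = np.ones(period + 1)
kernel[[0, -1]] = 0.5
kernel /= period
trend = np.convolve(series, kernel, mode="valid")  # drops period/2 each end

half = period // 2
core = slice(half, t.size - half)
detrended = series[core] - trend

# Seasonal: average the detrended values at each phase of the cycle.
phase = t[core] % period
seasonal = np.array([detrended[phase == p].mean() for p in range(period)])
remainder = detrended - seasonal[phase]
print(f"series std {series[core].std():.4f} -> "
      f"remainder std {remainder.std():.4f}")
```

After removing trend and seasonal components, the remainder carries far less variance than the raw series, which is the sense in which the decomposition yields a normalized series with much less uncertainty for site-stability assessment.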

  18. Camera Calibration with Radial Variance Component Estimation

    NASA Astrophysics Data System (ADS)

    Mélykuti, B.; Kruck, E. J.

    2014-11-01

    Camera calibration plays an increasingly important role. Besides true digital aerial survey cameras, the photogrammetric market is dominated by a large number of non-metric digital cameras mounted on UAVs or other low-weight flying platforms. The in-flight calibration of those systems has a significant role in considerably enhancing the geometric accuracy of survey photos. Photo measurements are expected to be more precise in the center of images than along the edges or in the corners. With statistical methods, the accuracy of photo measurements has been analyzed in dependence on the distance of points from the image center. This test provides a curve of measurement precision as a function of the photo radius. A large number of camera types have been tested with well-distributed point measurements in image space. The tests revealed a functional connection between accuracy and radial distance, and yielded a method to check and enhance the geometric capability of the cameras with respect to these results.

  19. Developing new extension of GafChromic RTQA2 film to patient quality assurance field using a plan-based calibration method

    NASA Astrophysics Data System (ADS)

    Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Chen, Junchao; Hu, Weigang

    2015-10-01

    GafChromic RTQA2 film is a type of radiochromic film designed for light field and radiation field alignment. The aim of this study is to extend the application of RTQA2 film to the measurement of patient-specific quality assurance (QA) fields as a 2D relative dosimeter. Pre-irradiated and post-irradiated RTQA2 films were scanned in reflection mode using a flatbed scanner. A plan-based calibration (PBC) method utilized the mapping information of the calculated dose image and the film grayscale image to create a dose-versus-pixel-value calibration model. This model was used to calibrate the film grayscale image to a film relative dose image. The dose agreement between calculated and film dose images was analyzed by gamma analysis. To evaluate the feasibility of this method, eight clinically approved RapidArc cases (one abdominal cancer and seven head-and-neck cancer patients) were tested using this method. Moreover, three MLC gap errors and two MLC transmission errors were introduced into the eight RapidArc cases, respectively, to test the robustness of this method. The PBC method could overcome the film lot and post-exposure time variations of RTQA2 film to obtain a good 2D relative dose calibration result. The mean gamma passing rate of the eight patients was 97.90% ± 1.7%, which showed good dose consistency between calculated and film dose images. In the error test, the PBC method could over-calibrate the film, which means some dose errors in the film would be falsely corrected to keep the dose in the film consistent with the dose in the calculated dose image. This would then lead to a false negative result in the gamma analysis. In these cases, the derivative of the dose calibration curve would be non-monotonic, which would expose the dose abnormality. By using the PBC method, we extended the application of the more economical RTQA2 film to patient-specific QA. The robustness of the PBC method has been improved by analyzing the monotonicity of the derivative of the
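    The monotonicity safeguard described above can be sketched directly: a physically plausible dose-versus-pixel-value calibration curve should have a monotonic derivative, and a wavy derivative flags an over-calibrated (falsely corrected) fit. This is a hypothetical illustration on invented data, not the authors' film measurements.

```python
import numpy as np

def derivative_is_monotonic(pixel, dose):
    """Empirically differentiate the dose(pixel) curve and test whether the
    derivative values form a monotonic sequence."""
    d = np.gradient(dose, pixel)      # empirical derivative of the curve
    dd = np.diff(d)
    return bool(np.all(dd >= 0) or np.all(dd <= 0))

pix = np.linspace(0.2, 0.8, 25)               # normalized film grayscale
good = 2.0 - 2.2 * pix + 0.5 * pix**2         # smooth calibration curve
bad = good + 0.15 * np.sin(12 * pix)          # wavy over-calibrated curve
print(derivative_is_monotonic(pix, good), derivative_is_monotonic(pix, bad))
```

The smooth quadratic passes the check while the wavy curve fails it, mirroring how a non-monotonic derivative exposes a dose abnormality that the gamma passing rate alone would miss.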

  20. Linear Calibration of Radiographic Mineral Density Using Video-Digitizing Methods

    NASA Technical Reports Server (NTRS)

    Martin, R. Bruce; Papamichos, Thomas; Dannucci, Greg A.

    1990-01-01

    Radiographic images can provide quantitative as well as qualitative information if they are subjected to densitometric analysis. Using modern video-digitizing techniques, such densitometry can be readily accomplished using relatively inexpensive computer systems. However, such analyses are made more difficult by the fact that the density values read from the radiograph have a complex, nonlinear relationship to bone mineral content. This article derives the relationship between these variables from the nature of the intermediate physical processes, and presents a simple mathematical method for obtaining a linear calibration function using a step wedge or other standard.

  1. Linear Calibration of Radiographic Mineral Density Using Video-Digitizing Methods

    NASA Technical Reports Server (NTRS)

    Martin, R. Bruce; Papamichos, Thomas; Dannucci, Greg A.

    1990-01-01

    Radiographic images can provide quantitative as well as qualitative information if they are subjected to densitometric analysis. Using modern video-digitizing techniques, such densitometry can be readily accomplished using relatively inexpensive computer systems. However, such analyses are made more difficult by the fact that the density values read from the radiograph have a complex, nonlinear relationship to bone mineral content. This article derives the relationship between these variables from the nature of the intermediate physical processes, and presents a simple mathematical method for obtaining a linear calibration function using a step wedge or other standard.

  2. Calibration of groundwater vulnerability mapping using the generalized reduced gradient method

    NASA Astrophysics Data System (ADS)

    Elçi, Alper

    2017-12-01

    Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping groundwater vulnerability; however, these methods suffer mainly from the subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem using an objective function based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a numerical optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter on the calculated vulnerability index is assessed by performing a single-parameter sensitivity analysis. Results of the sensitivity analysis show that all factors affect the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can easily be applied to other overlay-and-index methods.
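    The calibration objective above (weights of an overlay-and-index sum chosen to maximize correlation with observed contamination) can be sketched as follows. The paper uses the GRG method; as a hypothetical stand-in this sketch uses a simple random local search, and all ratings and nitrate values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
ratings = rng.integers(1, 11, size=(60, 7)).astype(float)  # 7 index layers
true_w = np.array([5, 4, 3, 5, 1, 2, 3], dtype=float)      # hidden weights
nitrate = ratings @ true_w + rng.normal(0, 5.0, 60)        # observed NO3

def corr(w):
    """Objective: correlation of the weighted index with observations."""
    return np.corrcoef(ratings @ w, nitrate)[0, 1]

w = np.ones(7)                      # uncalibrated (equal) weights
best = corr(w)
for _ in range(2000):               # random local search over the weights
    cand = np.clip(w + rng.normal(0, 0.3, 7), 0.1, None)
    c = corr(cand)
    if c > best:
        w, best = cand, c
print(f"correlation before {corr(np.ones(7)):.3f} -> after {best:.3f}")
```

As in the study, calibration raises the correlation between index values and measured concentrations relative to the subjectively chosen (here, uniform) weights; a gradient-based solver such as GRG reaches the same optimum far more efficiently on this smooth objective.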

  3. Noise-shaping gradient descent-based online adaptation algorithms for digital calibration of analog circuits.

    PubMed

    Chakrabartty, Shantanu; Shaga, Ravi K; Aono, Kenji

    2013-04-01

    Analog circuits that are calibrated using digital-to-analog converters (DACs) use a digital signal processor-based algorithm for real-time adaptation and programming of system parameters. In this paper, we first show that this conventional framework for adaptation yields suboptimal calibration properties because of artifacts introduced by quantization noise. We then propose a novel online stochastic optimization algorithm called noise-shaping or ΣΔ gradient descent, which can shape the quantization noise out of the frequency regions spanning the parameter adaptation trajectories. As a result, the proposed algorithms demonstrate superior parameter search properties compared to floating-point gradient methods and better convergence properties than conventional quantized gradient methods. In the second part of this paper, we apply the ΣΔ gradient descent algorithm to two examples of real-time digital calibration: 1) balancing and tracking of bias currents, and 2) frequency calibration of a band-pass Gm-C biquad filter biased in weak inversion. For each of these examples, the circuits have been prototyped in a 0.5-μm complementary metal-oxide-semiconductor process, and we demonstrate that the proposed algorithm is able to find the optimal solution even in the presence of spurious local minima, which are introduced by the nonlinear and non-monotonic response of calibration DACs.
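    The noise-shaping idea can be illustrated on a toy problem: parameter updates must land on DAC quantization levels, so plain quantized gradient descent stalls in a dead zone around the optimum, while a first-order sigma-delta loop feeds the quantization residue back into the next update so that it averages out. The cost function, step size, and DAC resolution below are invented, not taken from the paper.

```python
def grad(w):                        # gradient of the toy cost (w - 0.37)**2
    return 2.0 * (w - 0.37)

LSB = 1.0 / 64                      # DAC least-significant-bit step

def quantize(x):
    return round(x / LSB) * LSB

def quantized_descent(steps=200, lr=0.05):
    """Conventional quantized update: stalls once |step| < LSB/2."""
    w = 0.0
    for _ in range(steps):
        w = quantize(w - lr * grad(w))
    return w

def sigma_delta_descent(steps=400, lr=0.05, tail=100):
    """First-order noise-shaping update with error feedback."""
    w, err, history = 0.0, 0.0, []
    for _ in range(steps):
        target = w - lr * grad(w) + err  # err carries the past residue
        w = quantize(target)
        err = target - w                 # sigma-delta feedback
        history.append(w)
    return sum(history[-tail:]) / tail   # average over the limit cycle

print(f"plain: {quantized_descent():.4f}  "
      f"sigma-delta: {sigma_delta_descent():.4f}")
```

The plain quantized iteration freezes several LSBs away from the optimum at 0.37, while the sigma-delta iteration dithers between adjacent DAC levels whose average sits much closer to it, which is the sense in which the quantization noise is shaped out of the slow adaptation band.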

  4. Nonlinear acoustic landmine detection: Comparison of ``off target'' soil background and ``on target'' soil-mine nonlinear effects

    NASA Astrophysics Data System (ADS)

    Korman, Murray S.

    2005-09-01

    When airborne sound at two primary tones, f1, f2 (closely spaced near a resonance) excites the soil surface over a buried landmine, soil wave motion interacts with the landmine generating a scattered surface profile which can be measured over the ``target.'' Profiles at f1, f2, and f1-(f2-f1), f2+(f2-f1), 2f1-(f2-f1), f1+f2 and 2f2+(f2-f1) (among others) are measured for a VS 1.6 plastic, inert, anti-tank landmine, buried at 3.6 cm in sifted loess soil. It is observed that the ``on target'' to ``off target'' contrast ratio for the sum frequency component can be ~20 dB higher than for either primary. The vibration interaction between the top-plate interface of a buried plastic landmine and the soil above it appears to exhibit many characteristics of the mesoscopic/nanoscale nonlinear effects that are observed in geomaterials like sandstone. Near resonance, the bending (softening) of a family of increasing amplitude tuning curves, involving the vibration over the landmine, exhibits a linear relationship between the peak particle velocity and corresponding frequency. Tuning curve experiments along with two-tone tests are performed both on and off the mine in an effort to understand the nonlinearities in each case. [Work supported by U.S. Army RDECOM CERDEC, NVESD.]

  5. The Calibration of the Slotted Section for Precision Microwave Measurements

    DTIC Science & Technology

    1952-03-01

    Calibration Curve for Lossless Structures; B. The Correction Relations for Dissipative Structures; C. The Effect of an Error in the Variable Short...a discussion of probe effects and a method of correction for large insertion depths are given in the literature. This report is concerned...solely with error source (c). The presence of the slot in the slotted section introduces effects: (a) the slot loads the waveguide

  6. Research on Nonlinear Time Series Forecasting of Time-Delay NN Embedded with Bayesian Regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Weijin; Xu, Yusheng; Xu, Yuhui; Wang, Jianmin

    Based on the idea of nonlinear prediction via phase space reconstruction, this paper presented a time-delay BP neural network model whose generalization capability was improved by Bayesian regularization. Furthermore, the model was applied to forecast the import and export trade of one industry. The results showed that the improved model has excellent generalization capability: it not only learned the historical curve, but also efficiently predicted the trend of the business. Compared with common forecast evaluation, we conclude that nonlinear forecasting can not only focus on data fitting and precision improvement, but can also vividly reflect the nonlinear characteristics of the forecasting system. In analyzing the forecasting precision of the model, we assess the model by calculating the nonlinear characteristic values of the predicted and original series, and show that the forecasting model can reasonably 'catch' the dynamic characteristics of the nonlinear system that produced the original series.

  7. Social Contagion, Adolescent Sexual Behavior, and Pregnancy: A Nonlinear Dynamic EMOSA Model.

    ERIC Educational Resources Information Center

    Rodgers, Joseph Lee; Rowe, David C.; Buster, Maury

    1998-01-01

    Expands an existing nonlinear dynamic epidemic model of onset of social activities (EMOSA), motivated by social contagion theory, to quantify the likelihood of pregnancy for adolescent girls of different sexuality statuses. Compares five sexuality/pregnancy models to explain variance in national prevalence curves. Finds that adolescent girls have…

  8. When high working memory capacity is and is not beneficial for predicting nonlinear processes.

    PubMed

    Fischer, Helen; Holt, Daniel V

    2017-04-01

    Predicting the development of dynamic processes is vital in many areas of life. Previous findings are inconclusive as to whether higher working memory capacity (WMC) is always associated with using more accurate prediction strategies, or whether higher WMC can also be associated with using overly complex strategies that do not improve accuracy. In this study, participants predicted a range of systematically varied nonlinear processes based on exponential functions where prediction accuracy could or could not be enhanced using well-calibrated rules. Results indicate that higher WMC participants seem to rely more on well-calibrated strategies, leading to more accurate predictions for processes with highly nonlinear trajectories in the prediction region. Predictions of lower WMC participants, in contrast, point toward an increased use of simple exemplar-based prediction strategies, which perform just as well as more complex strategies when the prediction region is approximately linear. These results imply that with respect to predicting dynamic processes, working memory capacity limits are not generally a strength or a weakness, but that this depends on the process to be predicted.

  9. A nonlinear model for analysis of slug-test data

    USGS Publications Warehouse

    McElwee, C.D.; Zenner, M.A.

    1998-01-01

    While doing slug tests in high-permeability aquifers, we have consistently seen deviations from the expected response of linear theoretical models. Normalized curves do not coincide for various initial heads, as would be predicted by linear theories, and are shifted to larger times for higher initial heads. We have developed a general nonlinear model based on the Navier-Stokes equation, nonlinear frictional loss, non-Darcian flow, acceleration effects, radius changes in the well bore, and a Hvorslev model for the aquifer, which explains these data features. The model produces a very good fit for both oscillatory and nonoscillatory field data, using a single set of physical parameters to predict the field data for various initial displacements at a given well. This is in contrast to linear models which have a systematic lack of fit and indicate that hydraulic conductivity varies with the initial displacement. We recommend multiple slug tests with a considerable variation in initial head displacement to evaluate the possible presence of nonlinear effects. Our conclusion is that the nonlinear model presented here is an excellent tool to analyze slug tests, covering the range from the underdamped region to the overdamped region.

  10. Reconstruction of Complex Directional Networks with Group Lasso Nonlinear Conditional Granger Causality.

    PubMed

    Yang, Guanxue; Wang, Lin; Wang, Xiaofan

    2017-06-07

    Reconstruction of the networks underlying complex systems is one of the most crucial problems in many areas of engineering and science. In this paper, rather than identifying parameters of complex systems governed by pre-defined models or taking polynomial and rational functions as prior information for subsequent model selection, we put forward a general framework for nonlinear causal network reconstruction from time series with limited observations. Obtaining multi-source datasets through a data-fusion strategy, we propose a novel method to handle the nonlinearity and directionality of complex networked systems, namely group lasso nonlinear conditional Granger causality. Specifically, our method exploits different sets of radial basis functions to approximate the nonlinear interactions between each pair of nodes and integrates sparsity into grouped variable selection. The performance of our approach is first assessed with two types of simulated datasets from nonlinear vector autoregressive models and nonlinear dynamic models, and then verified on the benchmark datasets from DREAM3 Challenge4. Effects of data size and noise intensity are also discussed. All of the results demonstrate that the proposed method performs better in terms of a higher area under the precision-recall curve.

  11. Nonlinear Pattern Selection in Bi-Modal Interfacial Instabilities

    NASA Astrophysics Data System (ADS)

    Picardo, Jason; Narayanan, Ranga

    2016-11-01

    We study the evolution of two interacting unstable interfaces, with the aim of understanding the role of nonlinearity in pattern selection. Specifically, we consider two superposed thin films on a heated surface that are susceptible to thermocapillary and Rayleigh-Taylor instabilities. Due to the presence of two unstable interfaces, the dispersion curve (linear growth rate plotted as a function of the perturbation wavelength) exhibits two peaks. If these peaks have equal heights, then the two corresponding disturbance patterns will grow with the same linear growth rate; any selection between the two must therefore occur via nonlinear effects. The two-interface problem under consideration provides a variety of such bi-modal situations, in which the role of nonlinearity in pattern selection is unveiled. We use a combination of long-wave asymptotics, numerical simulations and amplitude expansions to understand the subtle nonlinear interactions between the two peak modes. Our results offer a counter-example to Rayleigh's principle of pattern formation, namely that the fastest growing linear mode will dominate the final pattern. Far from being governed by any such general rule, the final selected pattern varies considerably from case to case. The authors acknowledge funding from NSF (0968313) and the Fulbright-Nehru fellowship.

  12. Articulated Arm Coordinate Measuring Machine Calibration by Laser Tracker Multilateration

    PubMed Central

    Majarena, Ana C.; Brau, Agustín; Velázquez, Jesús

    2014-01-01

    A new procedure for the calibration of an articulated arm coordinate measuring machine (AACMM) is presented in this paper. First, a self-calibration algorithm of four laser trackers (LTs) is developed. The spatial localization of a retroreflector target, placed in different positions within the workspace, is determined by means of a geometric multilateration system constructed from the four LTs. Next, a nonlinear optimization algorithm for the identification procedure of the AACMM is explained. An objective function based on Euclidean distances and standard deviations is developed. This function is obtained from the captured nominal data (given by the LTs used as a gauge instrument) and the data obtained by the AACMM, and compares the measured and calculated coordinates of the target to obtain the identified model parameters that minimize this difference. Finally, results show that the procedure presented, using the measurements of the LTs as a gauge instrument, is very effective in improving the AACMM precision. PMID:24688418
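    The multilateration step described above, locating the retroreflector from four tracker distances, reduces to a linear system once the squared sphere equations are differenced pairwise. A minimal sketch with hypothetical tracker positions and a noise-free target (the real procedure additionally weights residuals and self-calibrates the tracker positions):

```python
import math

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def multilaterate(anchors, dists):
    """Subtracting the first sphere equation from the others linearizes the
    problem: 2*(p_i - p_0) . x = d_0^2 - d_i^2 + |p_i|^2 - |p_0|^2."""
    p0, d0 = anchors[0], dists[0]
    A, b = [], []
    for p, d in zip(anchors[1:], dists[1:]):
        A.append([2.0 * (p[k] - p0[k]) for k in range(3)])
        b.append(d0 ** 2 - d ** 2 + sum(v * v for v in p) - sum(v * v for v in p0))
    return solve3(A, b)

# Hypothetical laser-tracker positions (metres) and a known target point
trackers = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]
target = (3.0, 4.0, 5.0)
dists = [math.dist(t, target) for t in trackers]
est = multilaterate(trackers, dists)
```

    With four non-coplanar trackers the differenced system is exactly determined; with more trackers or noisy distances the same equations would be solved in a least-squares sense.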

  13. Calibration Uncertainties in the Droplet Measurement Technologies Cloud Condensation Nuclei Counter

    NASA Astrophysics Data System (ADS)

    Hibert, Kurt James

    average surface pressure at Grand Forks, ND. The supersaturation calibration uncertainty is 2.3, 3.1, and 4.4 % for calibrations done at 700, 840, and 980 hPa respectively. The supersaturation calibration change with pressure is on average 0.047 % supersaturation per 100 hPa. The supersaturation calibrations done at UND are 42-45 % lower than supersaturation calibrations done at DMT approximately 1 year previously. Performance checks confirmed that all major leaks developed during shipping were fixed before conducting the supersaturation calibrations. Multiply-charged particles passing through the Electrostatic Classifier may have influenced DMT's activation curves, which is likely part of the supersaturation calibration difference. Furthermore, the fitting method used to calculate the activation size and the limited calibration points are likely significant sources of error in DMT's supersaturation calibration. While the DMT CCN counter's calibration uncertainties are relatively small, and the pressure dependence is easily accounted for, the calibration methodology used by different groups can be very important. The insights gained from the careful calibration of the DMT CCN counter indicate that calibration of scientific instruments using complex methodology is not trivial.

  14. Optimal Energy Measurement in Nonlinear Systems: An Application of Differential Geometry

    NASA Technical Reports Server (NTRS)

    Fixsen, Dale J.; Moseley, S. H.; Gerrits, T.; Lita, A.; Nam, S. W.

    2014-01-01

    Design of TES microcalorimeters requires a tradeoff between resolution and dynamic range. Often, experimenters will require linearity for the highest energy signals, which requires additional heat capacity be added to the detector. This results in a reduction of low energy resolution in the detector. We derive and demonstrate an algorithm that allows operation far into the nonlinear regime with little loss in spectral resolution. We use a least squares optimal filter that varies with photon energy to accommodate the nonlinearity of the detector and the non-stationarity of the noise. The fitting process we use can be seen as an application of differential geometry. This recognition provides a set of well-developed tools to extend our work to more complex situations. The proper calibration of a nonlinear microcalorimeter requires a source with densely spaced narrow lines. A pulsed laser multi-photon source is used here, and is seen to be a powerful tool for allowing us to develop practical systems with significant detector nonlinearity. The combination of our analysis techniques and the multi-photon laser source create a powerful tool for increasing the performance of future TES microcalorimeters.

  15. Nonlinear dynamics of motor learning.

    PubMed

    Mayer-Kress, Gottfried; Newell, Karl M; Liu, Yeou-Teh

    2009-01-01

    In this paper we review recent work from our studies of a nonlinear dynamics of motor learning that is grounded in the construct of an evolving attractor landscape. With the assumption that learning is goal-directed, we can quantify the observed performance as a score or measure of the distance to the learning goal. The structure of the dynamics of how the goal is approached has been traditionally studied through an analysis of learning curves. Recent years have seen a gradual paradigm shift from a 'universal power law of practice' to an analysis of performance dynamics that reveals multiple processes that include adaption and learning as well as changes in performance due to factors such as fatigue. Evidence has also been found for nonlinear phenomena such as bifurcations, hysteresis and even a form of self-organized criticality. Finally, we present a quantitative measure for the dual concepts of skill and difficulty that allows us to unfold a learning process in order to study universal properties of learning transitions.

  16. Discrete Regularization for Calibration of Geologic Facies Against Dynamic Flow Data

    NASA Astrophysics Data System (ADS)

    Khaninezhad, Mohammad-Reza; Golmohammadi, Azarang; Jafarpour, Behnam

    2018-04-01

    Subsurface flow model calibration involves many more unknowns than measurements, leading to ill-posed problems with nonunique solutions. To alleviate nonuniqueness, the problem is regularized by constraining the solution space using prior knowledge. In certain sedimentary environments, such as fluvial systems, the contrast in hydraulic properties of different facies types tends to dominate the flow and transport behavior, making the effect of within-facies heterogeneity less significant. Hence, flow model calibration in those formations reduces to delineating the spatial structure and connectivity of different lithofacies types and their boundaries. A major difficulty in calibrating such models is honoring the discrete, or piecewise constant, nature of the facies distribution. The problem becomes more challenging when complex spatial connectivity patterns with higher-order statistics are involved. This paper introduces a novel formulation for calibration of complex geologic facies by imposing appropriate constraints to recover plausible solutions that honor the spatial connectivity and discreteness of facies models. To incorporate prior connectivity patterns, plausible geologic features are learned from available training models, e.g., by k-SVD sparse learning or traditional Principal Component Analysis. Discrete regularization is introduced as a penalty function to impose solution discreteness while minimizing the mismatch between observed and predicted data. An efficient gradient-based alternating-directions algorithm is combined with variable splitting to minimize the resulting regularized nonlinear least squares objective function. Numerical results show that imposing learned facies connectivity and discreteness as regularization functions leads to geologically consistent solutions that improve facies calibration quality.

  17. Design of multiplex calibrant plasmids, their use in GMO detection and the limit of their applicability for quantitative purposes owing to competition effects.

    PubMed

    Debode, Frédéric; Marien, Aline; Janssen, Eric; Berben, Gilbert

    2010-03-01

    Five double-target multiplex plasmids to be used as calibrants for GMO quantification were constructed. They were composed of two modified targets associated in tandem in the same plasmid: (1) a part of the soybean lectin gene and (2) a part of the transgenic construction of the GTS40-3-2 event. Modifications were performed in such a way that each target could be amplified with the same primers as those for the original target from which they were derived but such that each was specifically detected with an appropriate probe. Sequence modifications were done to keep the parameters of the new target as similar as possible to those of its original sequence. The plasmids were designed to be used either in separate reactions or in multiplex reactions. Evidence is given that with each of the five different plasmids used in separate wells as a calibrant for a different copy number, a calibration curve can be built. When the targets were amplified together (in multiplex) and at different concentrations inside the same well, the calibration curves showed that there was a competition effect between the targets and this limits the range of copy numbers for calibration over a maximum of 2 orders of magnitude. Another possible application of multiplex plasmids is discussed.
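    A calibration curve of the kind described, built from the plasmid copy-number standards run in separate wells, is in qPCR practice a linear fit of quantification cycle (Cq) against log10 copy number, with amplification efficiency derived from the slope. A sketch on synthetic, idealized data (a perfectly efficient reaction loses about 3.32 cycles per decade; the intercept value is invented):

```python
import math

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def copies_from_cq(cq_value, slope, intercept):
    """Invert the standard curve to estimate copies in an unknown sample."""
    return 10.0 ** ((cq_value - intercept) / slope)

# Synthetic standards: 10^2 .. 10^6 copies, ideal 100%-efficient reaction,
# so Cq drops by log2(10) ~ 3.32 cycles per tenfold increase in template.
log_copies = [float(k) for k in range(2, 7)]
cq = [38.0 - math.log2(10) * lc for lc in log_copies]

slope, intercept = linear_fit(log_copies, cq)
efficiency = 10.0 ** (-1.0 / slope) - 1.0  # 1.0 means 100% (doubling)
```

    The competition effect the abstract describes would show up here as Cq values for the multiplexed standards drifting off this line at high combined copy numbers, which is why the usable calibration range shrinks to about two orders of magnitude.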

  18. Generic element processor (application to nonlinear analysis)

    NASA Technical Reports Server (NTRS)

    Stanley, Gary

    1989-01-01

    The focus here is on one aspect of the Computational Structural Mechanics (CSM) Testbed: finite element technology. The approach involves a Generic Element Processor: a command-driven, database-oriented software shell that facilitates introduction of new elements into the testbed. This shell features an element-independent corotational capability that upgrades linear elements to geometrically nonlinear analysis, and corrects the rigid-body errors that plague many contemporary plate and shell elements. Specific elements that have been implemented in the Testbed via this mechanism include the Assumed Natural-Coordinate Strain (ANS) shell elements, developed with Professor K. C. Park (University of Colorado, Boulder), a new class of curved hybrid shell elements, developed by Dr. David Kang of LPARL (formerly a student of Professor T. Pian), other shell and solid hybrid elements developed by NASA personnel, and recently a repackaged version of the workhorse shell element used in the traditional STAGS nonlinear shell analysis code. The presentation covers: (1) user and developer interfaces to the generic element processor, (2) an explanation of the built-in corotational option, (3) a description of some of the shell-elements currently implemented, and (4) application to sample nonlinear shell postbuckling problems.

  19. Film calibration for soft x-ray wavelengths

    NASA Astrophysics Data System (ADS)

    Tallents, Gregory J.; Krishnan, J.; Dwivedi, L.; Neely, David; Turcu, I. C. Edmond

    1997-10-01

    The response of photographic film to x-rays from laser-produced plasmas is of practical interest. Film is often used for the ultimate detection of x-rays in crystal and grating spectrometers and in imaging instruments such as pinhole cameras, largely because of its high spatial resolution (approximately 1-10 microns). Characteristic curves for wavelengths of 3 nm and 23 nm are presented for eight x-ray films (Kodak 101-01, 101-07, 104-02, Kodak Industrex CX, Russian UF-SH4, UF-VR2, Ilford Q plates and Shanghai 5F film). The calibrations were obtained from the emission of laser-produced carbon plasmas and a Ne-like Ge x-ray laser.

  20. Waveguide Calibrator for Multi-Element Probe Calibration

    NASA Technical Reports Server (NTRS)

    Sommerfeldt, Scott D.; Blotter, Jonathan D.

    2007-01-01

    A calibrator, referred to as the spider design, can be used to calibrate probes incorporating multiple acoustic sensing elements. The application is an acoustic energy density probe, although the calibrator can be used for other types of acoustic probes. The calibrator relies on acoustic waveguide technology to produce the same acoustic field at each of the sensing elements. As a result, the sensing elements can be separated from each other, but still calibrated through use of the acoustic waveguides. Standard calibration techniques involve placing an individual microphone into a small cavity with a known, uniform pressure. If a cavity is manufactured with sufficient size to insert the energy density probe, it has been found that a uniform pressure field can only be created at very low frequencies: the size of the probe prevents one from having the same pressure at each microphone in a cavity, due to wave effects. The spider design is effective in calibrating multiple microphones separated from each other, because it ensures that the same wave effects exist for each microphone, each with an individual sound path. The calibrator's speaker is mounted at one end of a 14-cm-long, 4.1-cm-diameter plane-wave tube. This length was chosen so that the first evanescent cross mode of the plane-wave tube would be attenuated by about 90 dB, leaving just the plane wave at the termination plane of the tube. The tube terminates with a small acrylic plate with five holes placed symmetrically about the axis of the speaker. Four ports are included for the four microphones on the probe; the fifth port is included for the pre-calibrated reference microphone. The ports in the acrylic plate are in turn connected to the probe sensing elements via flexible PVC tubes. These five tubes are the same length, so the acoustic wave effects are the same in each tube. The

  1. Results of the 2001 JPL Balloon Flight Solar Cell Calibration Program

    NASA Technical Reports Server (NTRS)

    Anspaugh, B. E.; Mueller, R. L.

    2002-01-01

    The 2001 solar cell calibration balloon flight campaign consisted of two flights, which occurred on June 26, 2001, and July 4, 2001. Fifty-nine modules were carried to an altitude of approximately 120,000 ft (36.6 km). Full I-V curves were measured on nineteen of these modules, and output at a fixed load was measured on thirty-two modules (forty-six cells), with some modules repeated on the second flight. Nine modules were flown for temperature measurement only. The data from the fixed-load cells on the first flight was not usable. The temperature dependence of the first-flight data was erratic and we were unable to find a way to extract accurate calibration values. The I-V data from the first flight was good, however, and all data from the second flight was also good. The data was corrected to 28 C and to 1 AU (1.496 × 10^8 km). The calibrated cells have been returned to their owners and can now be used as reference standards in simulator testing of cells and arrays.
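    The correction "to 28 C and to 1 AU" mentioned here combines an inverse-square scaling for the Earth-Sun distance on the flight date with a linear temperature coefficient for the cell. A hedged sketch; the coefficient value, the measured current, and the distance are hypothetical illustrations, not JPL's numbers:

```python
def correct_isc(isc_measured, sun_distance_au, cell_temp_c,
                temp_coeff_per_c=0.0005, ref_temp_c=28.0):
    """Scale a measured short-circuit current to 1 AU (inverse-square law)
    and to the 28 C reference temperature (linear coefficient; the value
    here is a placeholder, real cells are characterized individually)."""
    at_1au = isc_measured * sun_distance_au ** 2   # dimmer sun farther out
    return at_1au * (1.0 + temp_coeff_per_c * (ref_temp_c - cell_temp_c))

# Example: a cell read at 1.0167 AU (early July) with the cell at 35 C
corrected = correct_isc(150.0, 1.0167, 35.0)
```

    The distance factor is why a late-June and an early-July flight need slightly different corrections even for identical cells.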

  2. Imaging of human tooth using ultrasound based chirp-coded nonlinear time reversal acoustics.

    PubMed

    Dos Santos, Serge; Prevorovsky, Zdenek

    2011-08-01

    Human tooth imaging sonography is investigated experimentally with an acousto-optic noncoupling set-up based on the chirp-coded nonlinear time reversal acoustic concept. The complexity of the tooth's internal structure (enamel-dentine interface, cracks between internal tubules) is analyzed by adapting nonlinear elastic wave spectroscopy (NEWS) with the objective of damage tomography. Optimization of excitations using intrinsic symmetries, such as time reversal (TR) invariance, reciprocity, and correlation properties, is then proposed and implemented experimentally. The proposed medical application of this TR-NEWS approach is implemented on a third molar human tooth and constitutes an alternative to noncoupling echodentography techniques. A 10 MHz bandwidth ultrasonic instrumentation has been developed, including a laser vibrometer and a 20 MHz contact piezoelectric transducer. The calibrated chirp-coded TR-NEWS imaging of the tooth is obtained using symmetrized excitations, pre- and post-signal processing, and the highly sensitive 14-bit resolution TR-NEWS instrumentation previously calibrated. A nonlinear signature coming from the symmetry properties is observed experimentally in the tooth using this bi-modal TR-NEWS imaging before and after the focusing induced by the time-compression process. The TR-NEWS polar B-scan of the tooth is described and suggested as a potential application for modern echodentography. It constitutes the basis of self-consistent harmonic imaging sonography for monitoring crack propagation in the dentine, which governs human tooth structural health. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Results of the 2000 JPL Balloon Flight Solar Cell Calibration Program

    NASA Technical Reports Server (NTRS)

    Anspaugh, B. E.; Mueller, R. L.; Weiss, R. S.

    2001-01-01

    The 2000 solar cell calibration balloon flight campaign consisted of two flights, which occurred on June 27, 2000, and July 5, 2000. All objectives of the flight program were met. Sixty-two modules were carried to an altitude of approximately 120,000 ft (36.6 km). Full I-V curves were measured on sixteen of these modules, and output at a fixed load was measured on thirty-seven modules (forty-six cells), with some modules repeated on the second flight. Nine modules were flown for temperature measurement only. This data was corrected to 28 C and to 1 AU (1.496 × 10^8 km). The calibrated cells have been returned to their owners and can now be used as reference standards in simulator testing of cells and arrays.

  4. Nonlinear dynamics induced in a structure by seismic and environmental loading

    DOE PAGES

    Gueguen, Philippe; Johnson, Paul Allan; Roux, Philippe

    2016-07-26

    In this study, we show that under very weak dynamic and quasi-static deformation, that is, orders of magnitude below the yield deformation of the equivalent stress-strain curve (around 10^-3), the elastic parameters of a civil engineering structure (resonance frequency and damping) exhibit nonlinear softening and recovery. These observations bridge the gap between laboratory and seismic scales, where elastic nonlinear behavior has been previously observed. Under weak seismic or atmospheric loading, modal frequencies are modified by around 1% and damping by more than 100% for strain levels between 10^-7 and 10^-4. These observations support the concept of universal nonlinear elastic behavior in diverse systems, including granular materials and damaged solids, scaling from millimeter dimensions to the scale of structures to fault dimensions in the Earth.

  6. Effect of Using Extreme Years in Hydrologic Model Calibration Performance

    NASA Astrophysics Data System (ADS)

    Goktas, R. K.; Tezel, U.; Kargi, P. G.; Ayvaz, T.; Tezyapar, I.; Mesta, B.; Kentel, E.

    2017-12-01

    Hydrological models are useful in predicting and developing management strategies for controlling system behaviour. Specifically, they can be used for evaluating streamflow at ungaged catchments, the effects of climate change or best management practices on water resources, or the identification of pollution sources in a watershed. This study is part of a TUBITAK project named "Development of a geographical information system based decision-making tool for water quality management of Ergene Watershed using pollutant fingerprints". Within the scope of this project, water resources in the Ergene Watershed are studied first. Streamgages in the basin are identified and daily streamflow measurements are obtained from the State Hydraulic Works of Turkey. Streamflow data are analysed using box-whisker plots, hydrographs and flow-duration curves, focusing on the identification of extreme periods, dry or wet. Then a hydrological model is developed for the Ergene Watershed using HEC-HMS in the Watershed Modeling System (WMS) environment. The model is calibrated for various time periods, including dry and wet ones, and the calibration performance is evaluated using Nash-Sutcliffe Efficiency (NSE), the correlation coefficient, percent bias (PBIAS) and root mean square error. It is observed that the calibration period affects model performance, and the main purpose for which the hydrological model is developed should guide the selection of the calibration period. Acknowledgement: This study is funded by The Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 115Y064.
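    The calibration-performance metrics named in this abstract (NSE, PBIAS, RMSE) have standard textbook definitions and can be computed directly from paired observed/simulated series. The flow values below are made up for illustration:

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model is
    no better than predicting the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def pbias(obs, sim):
    """Percent bias: positive values mean the model underestimates overall."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def rmse(obs, sim):
    """Root mean square error, in the units of the flow series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

obs = [1.0, 2.0, 3.0, 4.0]   # hypothetical daily flows, m^3/s
sim = [1.1, 1.9, 3.2, 3.8]
```

    Because NSE normalizes by the variance of the observed series, a calibration period dominated by extreme (high-variance) flows can score a deceptively high NSE, which is one reason the abstract finds the choice of calibration period so influential.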

  7. A study of Lusitano mare lactation curve with Wood's model.

    PubMed

    Santos, A S; Silvestre, A M

    2008-02-01

    Milk yield and composition data from 7 nursing Lusitano mares (450 to 580 kg of body weight and 2 to 9 parities) were used in this study (5 measurements per mare for milk yield and 8 measurements for composition). Wood's lactation model was used to describe milk fat, protein, and lactose lactation curves. Mean values for the concentration of major milk components across the lactation period (180 d) were 5.9 g/kg of fat, 18.4 g/kg of protein, and 60.8 g/kg of lactose. Milk fat and protein (g/kg) decreased and lactose (g/kg) increased during the 180 d of lactation. Curves for milk protein and lactose yields (g) were similar in shape to the milk yield curve; protein yield peaked at 307 g on d 10 and lactose peaked at 816 g on d 45. The fat (g) curve was different in shape compared with milk, protein, and lactose yields. Total production of the major milk constituents throughout the 180 d of lactation was estimated to be 12.0, 36.1, and 124 kg for fat, protein, and lactose, respectively. The algebraic model fitted by a nonlinear regression procedure to the data resulted in reasonable prediction curves for milk yield (adjusted R^2 of 0.89) and the major constituents (adjusted R^2 ranging from 0.89 to 0.95). The lactation curves of major milk constituents in Lusitano mares were similar, both in shape and values, to those found in other horse breeds. The established curves facilitate the estimation of milk yield and variation of milk constituents at different stages of lactation for both nursing and dairy mares, providing important information relative to weaning time and foal supplementation.
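    Wood's model has the form y(t) = a * t^b * exp(-c*t), and one common way to fit it is to take logs, which makes it linear in (1, ln t, t). A sketch on synthetic data; the parameter values are invented, not the Lusitano estimates, and the study itself used a nonlinear regression procedure rather than this log-linearization:

```python
import math

def solve3(A, b):
    """Tiny Gaussian elimination (partial pivoting) for 3x3 normal equations."""
    M = [row[:] + [v] for row, v in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= f * M[c][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def fit_wood(ts, ys):
    """ln y = ln a + b ln t - c t  ->  linear least squares in (1, ln t, t)."""
    rows = [(1.0, math.log(t), t) for t in ts]
    z = [math.log(y) for y in ys]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    rhs = [sum(r[i] * zi for r, zi in zip(rows, z)) for i in range(3)]
    p0, p1, p2 = solve3(A, rhs)
    return math.exp(p0), p1, -p2  # a, b, c

# Synthetic lactation curve with known parameters a=2.0, b=0.3, c=0.05
ts = list(range(1, 181, 10))
ys = [2.0 * t ** 0.3 * math.exp(-0.05 * t) for t in ts]
a_hat, b_hat, c_hat = fit_wood(ts, ys)
```

    The log-linear fit implicitly weights small yields more heavily than a direct nonlinear least-squares fit would, which is a known trade-off when fitting Wood's curve this way.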

  8. Curved-flow, rolling-flow, and oscillatory pure-yawing wind-tunnel test methods for determination of dynamic stability derivatives

    NASA Technical Reports Server (NTRS)

    Chambers, J. R.; Grafton, S. B.; Lutze, F. H.

    1981-01-01

    The test capabilities of the Stability Wind Tunnel of the Virginia Polytechnic Institute and State University are described, and calibrations for curved and rolling flow techniques are given. Oscillatory snaking tests to determine pure yawing derivatives are considered. Representative aerodynamic data obtained for a current fighter configuration using the curved and rolling flow techniques are presented. The application of dynamic derivatives obtained in such tests to the analysis of airplane motions in general, and to high angle of attack flight conditions in particular, is discussed.

  9. Enhanced nonlinear current-voltage behavior in Au nanoparticle dispersed CaCu3Ti4O12 composite films

    NASA Astrophysics Data System (ADS)

    Chen, Cong; Wang, Can; Ning, Tingyin; Lu, Heng; Zhou, Yueliang; Ming, Hai; Wang, Pei; Zhang, Dongxiang; Yang, Guozhen

    2011-10-01

    An enhanced nonlinear current-voltage behavior has been observed in Au nanoparticle dispersed CaCu3Ti4O12 (CCTO) composite films. The double Schottky barrier model is used to explain the enhanced nonlinearity in the I-V curves. According to the energy-band model and the fitting result, the nonlinearity in the Au:CCTO film is mainly governed by thermionic emission across the reverse-biased Schottky barrier. This result not only supports the double Schottky barrier mechanism in CCTO, but also indicates that the nonlinearity of the current-voltage behavior can be improved in nanometal composite films, which is of great significance for resistance switching devices.
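    Nonlinear I-V behavior of this kind is commonly summarized by the coefficient α in the empirical varistor law I = k·V^α, estimated from two points on the measured curve; a sketch under that assumption (the abstract itself fits a thermionic-emission model rather than this power law):

```python
import math

def nonlinearity_coefficient(v1, i1, v2, i2):
    """Nonlinearity coefficient alpha from two (V, I) points on the curve,
    assuming the empirical law I = k * V**alpha; alpha = 1 is ohmic, and
    larger alpha means a stronger nonlinear (varistor-like) response."""
    return math.log(i2 / i1) / math.log(v2 / v1)
```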

  10. Seismic fragility curves of bridge piers accounting for ground motions in Korea

    NASA Astrophysics Data System (ADS)

    Nguyen, Duy-Duan; Lee, Tae-Hyung

    2018-04-01

    Korea is located in a slight-to-moderate seismic zone. Nevertheless, several studies have indicated that the peak earthquake magnitude in the region can reach approximately 6.5. Accordingly, a seismic vulnerability evaluation of existing structures that accounts for ground motions in Korea is important. The purpose of this paper is to develop seismic fragility curves for the piers of a steel box girder bridge, with and without base isolators, based on a set of ground motions recorded in Korea. A finite element simulation platform, OpenSees, is utilized to perform nonlinear time history analyses of the bridges. A series of damage states is defined based on a damage index expressed in terms of the column displacement ductility ratio. The fragility curves based on Korean motions were then compared with fragility curves generated using worldwide earthquakes to assess the effect of the two ground motion groups on the seismic fragility curves of the bridge piers. The results reveal that both non-isolated and base-isolated bridge piers are less vulnerable under the Korean ground motions than under the worldwide earthquakes.
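    Fragility curves of this kind are conventionally expressed as a lognormal CDF of the ground-motion intensity measure; a minimal sketch (θ is the median capacity and β the lognormal dispersion, both illustrative values, not those derived in the paper):

```python
import math

def fragility(im, theta, beta):
    """P(damage state exceeded | intensity measure im), as a lognormal CDF:
    P = Phi(ln(im / theta) / beta), evaluated via the error function."""
    z = math.log(im / theta) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

    By construction the curve passes through 0.5 at im = θ and steepens as β decreases.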

  11. Astrometric Calibration and Performance of the Dark Energy Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, G. M.; Armstrong, R.; Plazas, A. A.

    2017-05-30

    We characterize the variation in photometric response of the Dark Energy Camera (DECam) across its 520 Mpix science array during 4 years of operation. These variations are measured using high signal-to-noise aperture photometry of >10^7 stellar images in thousands of exposures of a few selected fields, with the telescope dithered to move the sources around the array. A calibration procedure based on these results brings the RMS variation in aperture magnitudes of bright stars on cloudless nights down to 2-3 mmag, with <1 mmag of correlated photometric errors for stars separated by ≥20". On cloudless nights, any departures of the exposure zeropoints from a secant airmass law exceeding 1 mmag are plausibly attributable to spatial/temporal variations in aperture corrections. These variations can be inferred and corrected by measuring the fraction of stellar light in an annulus between 6" and 8" diameter. Key elements of this calibration include: correction of amplifier nonlinearities; distinguishing pixel-area variations and stray light from quantum-efficiency variations in the flat fields; field-dependent color corrections; and the use of an aperture-correction proxy. The DECam response pattern across the 2-degree field drifts over months by up to ±7 mmag, in a nearly wavelength-independent low-order pattern. We find no fundamental barriers to pushing global photometric calibrations toward mmag accuracy.

  12. The Role of Nonlinear Gradients in Parallel Imaging: A k-Space Based Analysis.

    PubMed

    Galiana, Gigi; Stockmann, Jason P; Tam, Leo; Peters, Dana; Tagare, Hemant; Constable, R Todd

    2012-09-01

    Sequences that encode the spatial information of an object using nonlinear gradient fields are a new frontier in MRI, with potential to provide lower peripheral nerve stimulation, windowed fields of view, tailored spatially-varying resolution, curved slices that mirror physiological geometry, and, most importantly, very fast parallel imaging with multichannel coils. The acceleration for multichannel images is generally explained by the fact that curvilinear gradient isocontours better complement the azimuthal spatial encoding provided by typical receiver arrays. However, the details of this complementarity have been more difficult to specify. We present a simple and intuitive framework for describing the mechanics of image formation with nonlinear gradients, and we use this framework to review some of the main classes of nonlinear encoding schemes.

  13. Use of the Airborne Visible/Infrared Imaging Spectrometer to calibrate the optical sensor on board the Japanese Earth Resources Satellite-1

    NASA Technical Reports Server (NTRS)

    Green, Robert O.; Conel, James E.; Vandenbosch, Jeannette; Shimada, Masanobu

    1993-01-01

    We describe an experiment to calibrate the optical sensor (OPS) on board the Japanese Earth Resources Satellite-1 with data acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). On 27 Aug. 1992 both the OPS and AVIRIS acquired data concurrently over a calibration target on the surface of Rogers Dry Lake, California. The high spectral resolution measurements of AVIRIS have been convolved to the spectral response curves of the OPS. These data in conjunction with the corresponding OPS digitized numbers have been used to generate the radiometric calibration coefficients for the eight OPS bands. This experiment establishes the suitability of AVIRIS for the calibration of spaceborne sensors in the 400 to 2500 nm spectral region.
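    The band-convolution step described here, which collapses a high-resolution spectrum onto a broad sensor band, is a response-weighted average; the sketch below assumes a linear sensor model DN = gain·L with zero offset (both the function names and the zero-offset assumption are illustrative, not from the experiment):

```python
def band_radiance(radiance, response):
    """Band-equivalent radiance: the high-resolution radiance samples
    averaged with the band's relative spectral response as weights."""
    return sum(L * r for L, r in zip(radiance, response)) / sum(response)

def gain_coefficient(band_rad, mean_dn):
    """Radiometric gain under the assumed linear model DN = gain * L,
    from the band-equivalent radiance and the sensor's mean digital number."""
    return mean_dn / band_rad
```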

  14. Unsteady density-current equations for highly curved terrain

    NASA Technical Reports Server (NTRS)

    Sivakumaran, N. S.; Dressler, R. F.

    1989-01-01

    New nonlinear partial differential equations containing terrain curvature and its rate of change are derived that describe the flow of an atmospheric density current. Unlike the classical hydraulic-type equations for density currents, the new equations are valid for two-dimensional, gradually varied flow over highly curved terrain, hence suitable for computing unsteady (or steady) flows over arbitrary mountain/valley profiles. The model assumes the atmosphere above the density current exerts a known arbitrary variable pressure upon the unknown interface. Later this is specialized to the varying hydrostatic pressure of the atmosphere above. The new equations yield the variable velocity distribution, the interface position, and the pressure distribution, which contains a centrifugal component often significantly larger than its hydrostatic component. These partial differential equations are hyperbolic, and the characteristic equations and characteristic directions are derived. Using these to form a characteristic mesh, a hypothetical unsteady curved-flow problem, not based upon observed data, is calculated merely as an example to illustrate the simplicity of applying the equations to unsteady flows over mountains.

  15. Swarm Optimization-Based Magnetometer Calibration for Personal Handheld Devices

    PubMed Central

    Ali, Abdelrahman; Siddharth, Siddharth; Syed, Zainab; El-Sheimy, Naser

    2012-01-01

    Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a processor that generates position and orientation solutions by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low cost sensors are usually corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO)-based calibration algorithm is presented to estimate the bias and scale factor values of low cost magnetometers. The main advantage of this technique is its use of artificial intelligence, which requires no error modeling or awareness of the nonlinearity. Furthermore, the proposed algorithm can help in the development of Pedestrian Navigation Devices (PNDs) when combined with inertial sensors and GPS/Wi-Fi for indoor navigation and Location Based Services (LBS) applications.
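    A minimal PSO of the kind described fits in a few dozen lines; here it recovers a single-axis bias and scale factor from synthetic readings against a known reference (the swarm parameters, cost function, and single-axis simplification are all illustrative, not the paper's formulation):

```python
import random

def pso(cost, bounds, n_particles=30, iters=200, seed=1):
    """Minimal particle swarm optimizer: returns the best parameter vector
    found for the given cost function within the given bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.7, 1.5, 1.5      # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest

# Synthetic single-axis readings distorted by a known bias and scale factor.
true_bias, true_scale = 0.3, 1.2
ref = [-1.0, -0.5, 0.0, 0.5, 1.0]                  # reference field values
raw = [r / true_scale + true_bias for r in ref]    # distorted sensor output

def cost(p):
    """Squared error between corrected readings and the reference."""
    b, s = p
    return sum((s * (x - b) - r) ** 2 for x, r in zip(raw, ref))

best = pso(cost, [(-1.0, 1.0), (0.5, 2.0)])
```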

  16. Calibration of the Minolta SPAD-502 leaf chlorophyll meter.

    PubMed

    Markwell, J; Osterman, J C; Mitchell, J L

    1995-01-01

    Use of leaf meters to provide an instantaneous assessment of leaf chlorophyll has become common, but calibration of meter output into direct units of leaf chlorophyll concentration has been difficult and an understanding of the relationship between these two parameters has remained elusive. We examined the correlation of soybean (Glycine max) and maize (Zea mays L.) leaf chlorophyll concentration, as measured by organic extraction and spectrophotometric analysis, with output (M) of the Minolta SPAD-502 leaf chlorophyll meter. The relationship is non-linear and can be described by the equation chlorophyll (μmol m^-2) = 10^(M^0.265), r^2 = 0.94. Use of such an exponential equation is theoretically justified and forces a more appropriate fit to a limited data set than polynomial equations. The exact relationship will vary from meter to meter, but will be similar and can be readily determined by empirical methods. The ability to rapidly determine leaf chlorophyll concentrations by use of the calibration method reported herein should be useful in studies on photosynthesis and crop physiology.
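    The reported exponential calibration is trivial to apply in code; a sketch of the published equation (the authors note the exact relationship varies from meter to meter, so the exponent should be re-determined empirically for a given instrument):

```python
def spad_to_chlorophyll(m):
    """Leaf chlorophyll (umol m^-2) from a SPAD-502 reading M, using the
    published fit: chlorophyll = 10 ** (M ** 0.265)."""
    return 10.0 ** (m ** 0.265)
```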

  17. Calibration of groundwater vulnerability mapping using the generalized reduced gradient method.

    PubMed

    Elçi, Alper

    2017-12-01

    Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping groundwater vulnerability; however, these methods suffer mainly from the subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem, using an objective function based on the correlation between actual groundwater contamination and vulnerability index values. The nonlinear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a numerical gradient-based optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter on the calculated vulnerability index is assessed by performing a single-parameter sensitivity analysis. Results of the sensitivity analysis show that all factors affect the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. The spatial distribution and the proportions of vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can easily be applied to other overlay-and-index methods. Copyright © 2017 Elsevier B.V. All rights reserved.
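    The objective being maximized, the correlation between weighted index values and observed nitrate, can be sketched as follows (the GRG solver itself is not reproduced; function and variable names are illustrative):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def index_values(ratings, weights):
    """DRASTIC-style index: weighted sum of the factor ratings at each site."""
    return [sum(w * r for w, r in zip(weights, row)) for row in ratings]

def objective(weights, ratings, nitrate):
    """The calibration target: correlation between computed index values
    and the observed nitrate concentrations at the same sites."""
    return pearson(index_values(ratings, weights), nitrate)
```

    A GRG (or any other gradient-based) solver would then adjust the weights to maximize this objective subject to the method's constraints.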

  18. VizieR Online Data Catalog: SNLS and SDSS SN surveys photometric calibration (Betoule+, 2013)

    NASA Astrophysics Data System (ADS)

    Betoule, M.; Marriner, J.; Regnault, N.; Cuillandre, J.-C.; Astier, P.; Guy, J.; Balland, C.; El Hage, P.; Hardin, D.; Kessler, R.; Le Guillou, L.; Mosher, J.; Pain, R.; Rocci, P.-F.; Sako, M.; Schahmaneche, K.

    2012-11-01

    We present a joint photometric calibration for the SNLS and SDSS supernova surveys. Our main deliverables are catalogs of natural AB magnitudes for a large set of selected tertiary standard stars covering the science fields of both surveys. Those catalogs are calibrated to the AB flux scale through observations of 5 primary spectrophotometric standard stars, for which HST-STIS spectra are available in the CALSPEC database. The estimates of the uncertainties associated with this calibration are delivered as a single covariance matrix. We also provide a model of the transmission efficiency of the SNLS photometric instrument MegaCam. Those transmission functions are required for the interpretation of MegaCam natural magnitudes in terms of physical fluxes. Similar curves for the SDSS photometric instrument have been published in Doi et al. (2010AJ....139.1628D). Lastly, we release the measured magnitudes of the five CALSPEC standard stars in the magnitude system of the tertiary catalogs. This makes it possible to update the calibration of the tertiary catalogs if the CALSPEC spectra of the primary standards are revised. (11 data files).

  19. A mathematical model to describe the nonlinear elastic properties of the gastrocnemius tendon of chickens.

    PubMed

    Foutz, T L

    1991-03-01

    A phenomenological model was developed to describe the nonlinear elastic behavior of the avian gastrocnemius tendon. Quasistatic uniaxial tensile tests were used to apply a deformation and resulting load on the tendon at a deformation rate of 5 mm/min. Plots of deformation versus load indicated a nonlinear loading response. By calculating engineering stress and engineering strain, the experimental data were normalized for tendon shape. The elastic response was determined from stress-strain curves and was found to vary with engineering strain. The response to the applied engineering strain could best be described by a mathematical model that combined a linear function and a nonlinear function. Three parameters in the model were developed to represent the nonlinear elastic behavior of the tendon, thereby allowing analysis of elasticity without prior knowledge of engineering strain. This procedure reduced the amount of data needed for the statistical analysis of nonlinear elasticity.
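    The abstract does not give the model's exact form; one plausible three-parameter combination of a linear term and a nonlinear (exponential) term is sketched below (k1, k2, k3 are illustrative stand-ins for the paper's parameters):

```python
import math

def tendon_stress(strain, k1, k2, k3):
    """Assumed combined stress-strain model:
    sigma = k1 * e + k2 * (exp(k3 * e) - 1), with engineering strain e."""
    return k1 * strain + k2 * (math.exp(k3 * strain) - 1.0)

def tangent_modulus(strain, k1, k2, k3):
    """Elastic response d(sigma)/d(e), which increases with strain for
    positive k2 and k3, matching the nonlinear loading behavior described."""
    return k1 + k2 * k3 * math.exp(k3 * strain)
```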

  20. Linear and nonlinear analysis of fluid slosh dampers

    NASA Astrophysics Data System (ADS)

    Sayar, B. A.; Baumgarten, J. R.

    1982-11-01

    A vibrating structure and a container partially filled with fluid are considered coupled in a free vibration mode. To simplify the mathematical analysis, a pendulum model to duplicate the fluid motion and a mass-spring dashpot representing the vibrating structure are used. The equations of motion are derived by Lagrange's energy approach and expressed in parametric form. For a wide range of parametric values the logarithmic decrements of the main system are calculated from theoretical and experimental response curves in the linear analysis. However, for the nonlinear analysis the theoretical and experimental response curves of the main system are compared. Theoretical predictions are justified by experimental observations with excellent agreement. It is concluded finally that for a proper selection of design parameters, containers partially filled with viscous fluids serve as good vibration dampers.
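    The logarithmic decrements extracted from the response curves have a standard definition for a lightly damped oscillator; a minimal sketch (amplitude values in the test are illustrative):

```python
import math

def log_decrement(x1, x2):
    """Logarithmic decrement from two successive peak amplitudes of a
    free-vibration response: delta = ln(x1 / x2)."""
    return math.log(x1 / x2)

def damping_ratio(delta):
    """Equivalent viscous damping ratio from the logarithmic decrement:
    zeta = delta / sqrt(4*pi^2 + delta^2)."""
    return delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)
```

    For small delta this reduces to the familiar approximation zeta ≈ delta / (2π).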