Sample records for calibration curve nonlinearity

  1. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated non-linear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. It treats the mass values of the gravimetric standards as parameters to be fitted along with the normal calibration curve parameters, and the fitting procedure weights the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the chi-squared matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg of freeze-dried UNO/sub 3/ can have an accuracy of 0.2% in 1000 s.
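
    The errors-in-variables scheme this abstract describes can be sketched with a modern least-squares routine standing in for VA02A. The linear response model, the error magnitudes, and all names below are illustrative assumptions, not values from the paper; the point is only that the standard masses enter the fit as free parameters, with mass residuals and system residuals weighted by their respective uncertainties.

```python
# Sketch: fit calibration parameters AND the standard masses together,
# weighting system errors and mass errors consistently (errors-in-variables).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
true_masses = np.linspace(0.1, 1.0, 8)        # mg, nominal standards (assumed)
sigma_m = 0.002 * true_masses                 # ~0.2% mass uncertainty
sigma_y = 5.0                                 # counts, system error (assumed)
a_true, b_true = 1000.0, 20.0                 # assumed linear response
obs_masses = true_masses + rng.normal(0.0, sigma_m)
counts = a_true * true_masses + b_true + rng.normal(0.0, sigma_y, 8)

def residuals(p):
    a, b = p[:2]
    m = p[2:]                                 # masses are fit parameters too
    r_sys = (counts - (a * m + b)) / sigma_y  # system-error-weighted residuals
    r_mass = (m - obs_masses) / sigma_m       # mass-error-weighted residuals
    return np.concatenate([r_sys, r_mass])

fit = least_squares(residuals, x0=np.concatenate([[900.0, 0.0], obs_masses]))
a_fit, b_fit = fit.x[:2]
```

    Because the mass residuals are penalized by their own (small) uncertainties, the fitted masses stay close to the gravimetric values while the curve parameters absorb the system noise.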

  2. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO/sub 3/ standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. It treats the mass values of the gravimetric standards as parameters to be fitted along with the normal calibration curve parameters, and the fitting procedure weights the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the "chi-squared matrix" or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg of freeze-dried UNO/sub 3/ can have an accuracy of 0.2% in 1000 s.

  3. Effect of nonideal square-law detection on static calibration in noise-injection radiometers

    NASA Technical Reports Server (NTRS)

    Hearn, C. P.

    1984-01-01

    The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.

  4. A Robust Bayesian Random Effects Model for Nonlinear Calibration Problems

    PubMed Central

    Fong, Y.; Wakefield, J.; De Rosa, S.; Frahm, N.

    2013-01-01

    Summary: In the context of a bioassay or an immunoassay, calibration means fitting a curve, usually nonlinear, through the observations collected on a set of samples containing known concentrations of a target substance, and then using the fitted curve and observations collected on samples of interest to predict the concentrations of the target substance in these samples. Recent technological advances have greatly improved our ability to quantify minute amounts of substance from a tiny volume of biological sample. This has in turn led to a need to improve statistical methods for calibration. In this paper, we focus on developing calibration methods robust to dependent outliers. We introduce a novel normal mixture model with dependent error terms to model the experimental noise. In addition, we propose a re-parameterization of the five parameter logistic nonlinear regression model that allows us to better incorporate prior information. We examine the performance of our methods with simulation studies and show that they lead to a substantial increase in performance measured in terms of mean squared error of estimation and a measure of the average prediction accuracy. A real data example from the HIV Vaccine Trials Network Laboratory is used to illustrate the methods. PMID:22551415
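
    The five-parameter logistic (5PL) curve the authors re-parameterize can be fitted in its conventional form as follows. This sketch uses the standard 5PL parameterization with scipy rather than the authors' re-parameterization or their Bayesian random effects model; all concentrations, parameter values, and noise levels are made-up.

```python
# Sketch: fit the conventional 5PL immunoassay calibration curve to
# synthetic data (the paper's reparameterized Bayesian model is not shown).
import numpy as np
from scipy.optimize import curve_fit

def fivepl(x, a, d, c, b, g):
    # a: lower asymptote, d: upper asymptote, c: mid-concentration,
    # b: slope factor, g: asymmetry factor
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

rng = np.random.default_rng(1)
conc = np.logspace(-2, 2, 12)                 # known standard concentrations
true_p = (0.1, 2.0, 1.0, 1.5, 0.8)            # assumed "true" parameters
signal = fivepl(conc, *true_p) + rng.normal(0.0, 0.02, conc.size)

popt, pcov = curve_fit(
    fivepl, conc, signal, p0=(0.0, 2.0, 1.0, 1.0, 1.0),
    bounds=([-1.0, 0.5, 1e-3, 0.1, 0.1], [1.0, 5.0, 100.0, 5.0, 5.0]),
    maxfev=20000,
)
residual = np.max(np.abs(fivepl(conc, *popt) - signal))
```

    In use, the fitted curve is inverted numerically to predict unknown concentrations from measured signals; the asymmetry parameter g is what distinguishes the 5PL from the symmetric 4PL.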

  5. Accuracy and efficiency of published film dosimetry techniques using a flat-bed scanner and EBT3 film.

    PubMed

    Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T

    2018-03-01

    Gafchromic EBT3 film is widely used for patient specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between the dose and red channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated, to identify whether these methods produce better results than the commonly used non-linear netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: the Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results at standard treatment doses (< 400 cGy); however, none could be recommended as an efficient or accurate replacement for a common film analysis technique which uses transmission scanning, red colour channel analysis, netOD and a non-linear calibration curve for measuring doses up to 4000 cGy when using EBT3 film.
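
    The conventional netOD technique treated as the reference here can be sketched as follows, under assumed pixel values: netOD = log10(PV_unexposed / PV_exposed) from the red channel, with a non-linear dose fit of the commonly used form D = a·netOD + b·netOD^n. The exponent n is fixed in this sketch so the fit is linear in a and b; none of the numbers come from the paper.

```python
# Sketch: red-channel netOD calibration with the common two-term
# non-linear dose model D = a*netOD + b*netOD**n (n fixed, assumed).
import numpy as np

pv_unexposed = 42000.0                                   # assumed pixel value
pv_exposed = np.array([42000, 37000, 33000, 28000, 24000, 21000], float)
dose_cGy = np.array([0, 100, 200, 400, 800, 1600], float)  # delivered doses

net_od = np.log10(pv_unexposed / pv_exposed)             # net optical density

n = 3.0                                                  # assumed exponent
A = np.column_stack([net_od, net_od ** n])               # linear in (a, b)
coef, *_ = np.linalg.lstsq(A, dose_cGy, rcond=None)
a_fit, b_fit = coef

def dose_model(od):
    return a_fit * od + b_fit * od ** n
```

    The model passes exactly through zero dose at zero netOD by construction, which is one reason this form is preferred over a plain polynomial.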

  6. Carbon-14 wiggle-match dating of peat deposits: advantages and limitations

    NASA Astrophysics Data System (ADS)

    Blaauw, Maarten; van Geel, Bas; Mauquoy, Dmitri; van der Plicht, Johannes

    2004-02-01

    Carbon-14 wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a series of closely spaced peat 14C dates with the 14C calibration curve. The method of WMD is discussed, and its advantages and limitations are compared with calibration of individual dates. A numerical approach to WMD is introduced that makes it possible to assess the precision of WMD chronologies. During several intervals of the Holocene, the 14C calibration curve shows less pronounced fluctuations. We assess whether wiggle-matching is also a feasible strategy for these parts of the 14C calibration curve. High-precision chronologies, such as those obtainable with WMD, are needed for studies of rapid climate changes and their possible causes during the Holocene.

  7. Accounting For Nonlinearity In A Microwave Radiometer

    NASA Technical Reports Server (NTRS)

    Stelzried, Charles T.

    1991-01-01

    Simple mathematical technique found to account adequately for nonlinear component of response of microwave radiometer. Five prescribed temperatures measured to obtain quadratic calibration curve. Temperature assumed to vary quadratically with reading. Concept not limited to radiometric application; applicable to other measuring systems in which relationships between quantities to be determined and readings of instruments differ slightly from linearity.
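
    The technique described amounts to fitting a second-order polynomial through a handful of known calibration points. A minimal sketch, with made-up temperatures and an assumed mildly nonlinear detector response:

```python
# Sketch: quadratic calibration of a radiometer from five known temperatures.
# Temperature is modeled as varying quadratically with the raw reading.
import numpy as np

temps_K = np.array([80.0, 150.0, 220.0, 290.0, 360.0])   # five known temps
# Assumed detector response: linear term plus a small quadratic nonlinearity.
readings = 0.01 * temps_K + 2.0e-6 * temps_K ** 2

coeffs = np.polyfit(readings, temps_K, deg=2)   # reading -> temperature
recovered = np.polyval(coeffs, readings)        # apply calibration curve
max_err_K = np.max(np.abs(recovered - temps_K))
```

    As the abstract notes, nothing here is specific to radiometry: the same quadratic correction applies to any instrument whose response departs slightly from linearity.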

  8. Nonlinear price impact from linear models

    NASA Astrophysics Data System (ADS)

    Patzelt, Felix; Bouchaud, Jean-Philippe

    2017-12-01

    The impact of trades on asset prices is a crucial aspect of market dynamics for academics, regulators, and practitioners alike. Recently, universal and highly nonlinear master curves were observed for price impacts aggregated on all intra-day scales (Patzelt and Bouchaud 2017 arXiv:1706.04163). Here we investigate how well these curves, their scaling, and the underlying return dynamics are captured by linear ‘propagator’ models. We find that the classification of trades as price-changing versus non-price-changing can explain the price impact nonlinearities and short-term return dynamics to a very high degree. The explanatory power provided by the change indicator in addition to the order sign history increases with increasing tick size. To obtain these results, several long-standing technical issues for model calibration and testing are addressed. We present new spectral estimators for two- and three-point cross-correlations, removing the need for previously used approximations. We also show when calibration is unbiased and how to accurately reveal previously overlooked biases. Therefore, our results contribute significantly to understanding both recent empirical results and the properties of a popular class of impact models.

  9. A numerical approach to 14C wiggle-match dating of organic deposits: best fits and confidence intervals

    NASA Astrophysics Data System (ADS)

    Blaauw, Maarten; Heuvelink, Gerard B. M.; Mauquoy, Dmitri; van der Plicht, Johannes; van Geel, Bas

    2003-06-01

    14C wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a sequence of closely spaced peat 14C dates with the 14C calibration curve. A numerical approach to WMD enables the quantitative assessment of various possible wiggle-match solutions and of calendar year confidence intervals for sequences of 14C dates. We assess the assumptions, advantages, and limitations of the method. Several case-studies show that WMD results in more precise chronologies than when individual 14C dates are calibrated. WMD is most successful during periods with major excursions in the 14C calibration curve (e.g., in one case WMD could narrow down confidence intervals from 230 to 36 yr).
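
    The numerical matching step can be illustrated with a toy model: a sequence of 14C dates with known relative (calendar-year) spacing is slid along the calibration curve, and the calendar position minimizing the chi-squared misfit is the best match. The "wiggly" calibration curve below is synthetic, not IntCal, and all noise values are assumptions.

```python
# Sketch: slide a dated sequence along a synthetic calibration curve and
# pick the calendar start year with minimum chi-squared (wiggle-matching).
import numpy as np

cal_age = np.arange(0, 3000)                         # calendar years BP
curve_14c = cal_age + 40.0 * np.sin(cal_age / 60.0)  # synthetic wiggles

offsets = np.array([0, 50, 100, 150, 200])           # known calendar spacing
true_start = 1200                                    # "unknown" to recover
obs_14c = curve_14c[true_start + offsets] + np.array([5., -8., 3., -2., 6.])
sigma = 15.0                                         # assumed 14C errors

starts = np.arange(0, 2700)
chi2 = np.array([
    np.sum(((obs_14c - curve_14c[s + offsets]) / sigma) ** 2)
    for s in starts
])
best = int(starts[np.argmin(chi2)])
```

    The profile of chi2 over candidate start years is also what yields the confidence intervals the abstract mentions: the interval is the set of start years whose chi-squared lies within a chosen threshold of the minimum.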

  10. Measuring the nonlinear elastic properties of tissue-like phantoms.

    PubMed

    Erkamp, Ramon Q; Skovoroda, Andrei R; Emelianov, Stanislav Y; O'Donnell, Matthew

    2004-04-01

    A direct mechanical system simultaneously measuring external force and deformation of samples over a wide dynamic range is used to obtain force-displacement curves of tissue-like phantoms under plane-strain deformation. These measurements, covering a wide deformation range, are then used to characterize the nonlinear elastic properties of the phantom materials. The model assumes incompressible media, in which several strain energy potentials are considered. Finite-element analysis is used to evaluate the performance of this material characterization procedure. The procedures developed allow calibration of nonlinear elastic phantoms for elasticity imaging experiments and finite-element simulations.

  11. Linearization of Positional Response Curve of a Fiber-optic Displacement Sensor

    NASA Astrophysics Data System (ADS)

    Babaev, O. G.; Matyunin, S. A.; Paranin, V. D.

    2018-01-01

    Currently, the creation of optical measuring instruments and sensors for measuring linear displacement is one of the most relevant problems in the area of instrumentation. Fiber-optic contactless sensors based on the magneto-optical effect are of special interest. They are essentially contactless, non-electrical and have a closed optical channel not subject to contamination. The main problem of this type of sensors is the non-linearity of their positional response curve due to the hyperbolic nature of the magnetic field intensity variation induced by moving the magnetic source mounted on the controlled object relative to the sensing element. This paper discusses an algorithmic method of linearizing the positional response curve of fiber-optic displacement sensors in any selected range of the displacements to be measured. The method is divided into two stages: 1 - definition of the calibration function, 2 - measurement and linearization of the positional response curve (including its temperature stabilization). The algorithm under consideration significantly reduces the number of points of the calibration function, which is essential for the calibration of temperature dependence, due to the use of the points that randomly deviate from the grid points with uniform spacing. Subsequent interpolation of the deviating points and piecewise linear-plane approximation of the calibration function reduces the microcontroller storage capacity for storing the calibration function and the time required to process the measurement results. The paper also presents experimental results of testing real samples of fiber-optic displacement sensors.
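
    The two-stage scheme described (record a sparse, non-uniformly spaced calibration function, then linearize readings against it) can be sketched with simple piecewise-linear interpolation. The hyperbolic response model and all displacement values are assumptions for illustration.

```python
# Sketch: linearize a hyperbolic sensor response by piecewise-linear
# interpolation of the inverse calibration function (sparse, non-uniform grid).
import numpy as np

displacement_mm = np.array([0.5, 0.8, 1.3, 2.1, 3.4, 5.0])  # calibration grid
response = 1.0 / displacement_mm          # assumed hyperbolic positional curve

# np.interp needs ascending x, so sort the calibration pairs by response.
order = np.argsort(response)

def linearize(r):
    """Map a raw sensor reading back to displacement (mm)."""
    return np.interp(r, response[order], displacement_mm[order])

recovered = linearize(1.0 / 2.0)          # reading from an object at 2.0 mm
```

    Keeping the grid sparse and interpolating between points is exactly what lets a microcontroller store a small calibration table, at the cost of a small piecewise-linear approximation error between grid points.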

  12. Instrumentation and signal processing for the detection of heavy water using off axis-integrated cavity output spectroscopy technique

    NASA Astrophysics Data System (ADS)

    Gupta, A.; Singh, P. J.; Gaikwad, D. Y.; Udupa, D. V.; Topkar, A.; Sahoo, N. K.

    2018-02-01

    An experimental setup is developed for the trace level detection of heavy water (HDO) using the off axis-integrated cavity output spectroscopy technique. The absorption spectrum of water samples is recorded in the spectral range 7190.7 cm-1 to 7191.5 cm-1 with a diode laser as the light source. From the recorded water vapor absorption spectrum, the heavy water concentration is determined from the HDO and water lines. The effect of cavity gain nonlinearity with per-pass absorption is studied. The signal processing and data fitting procedure is devised to obtain linear calibration curves by including nonlinear cavity gain effects in the calculation. Initial calibration of mirror reflectivity is performed by measurements on a natural water sample. The signal processing and data fitting method has been validated by the measurement of the HDO concentration in water samples over a wide range from 20 ppm to 2280 ppm, showing a linear calibration curve. The average measurement time is about 30 s. The experimental technique presented in this paper could be applied to the development of a portable instrument for the fast measurement of water isotopic composition in heavy water plants and for the detection of heavy water leaks in pressurized heavy water reactors.

  13. THE USE OF QUENCHING IN A LIQUID SCINTILLATION COUNTER FOR QUANTITATIVE ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, G.V.

    1963-01-01

    Quenching was used to quantitatively determine the amount of quenching agent present. A sealed promethium-147 source was prepared to be used for the count rate determinations. Two methods to determine the amount of quenching agent present in a sample were developed. One method related the count rate of a sample containing a quenching agent to the amount of quenching agent present. Calibration curves were plotted using both color and chemical quenchers. The quenching agents used were: F.D.C. Orange No. 2, F.D.C. Yellow No. 3, F.D.C. Yellow No. 4, Scarlet Red, acetone, benzaldehyde, and carbon tetrachloride. The color quenchers gave a linear relationship, while the chemical quenchers gave a non-linear relationship. Quantities of the color quenchers between about 0.008 mg and 0.100 mg can be determined with an error less than 5%. The calibration curves were found to be usable over a long period of time. The other method related the change in the ratio of the count rates in two voltage windows to the amount of quenching agent present. The quenchers mentioned above were used. Calibration curves were plotted for both the color and chemical quenchers. The relationships of ratio versus amount of quencher were non-linear in each case. It was shown that the reproducibility of the count rate and the ratio was independent of the amount of quencher present but was dependent on the count rate. At count rates above 10,000 counts per minute the reproducibility was better than 1%. (TCO)

  14. Estimation of the limit of detection in semiconductor gas sensors through linearized calibration models.

    PubMed

    Burgués, Javier; Jiménez-Soto, Juan Manuel; Marco, Santiago

    2018-07-12

    The limit of detection (LOD) is a key figure of merit in chemical sensing. However, the estimation of this figure of merit is hindered by the non-linear calibration curve characteristic of semiconductor gas sensor technologies such as metal oxide (MOX) sensors, gasFETs, or thermoelectric sensors. Additionally, chemical sensors suffer from cross-sensitivities and temporal stability problems. The application of the International Union of Pure and Applied Chemistry (IUPAC) recommendations for univariate LOD estimation in non-linear semiconductor gas sensors is not straightforward due to the strong statistical requirements of the IUPAC methodology (linearity, homoscedasticity, normality). Here, we propose a methodological approach to LOD estimation through linearized calibration models. As an example, the methodology is applied to the detection of low concentrations of carbon monoxide using MOX gas sensors in a scenario where the main source of error is the presence of uncontrolled levels of humidity.
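
    The general idea of a linearized calibration model can be sketched as follows: a power-law sensor response becomes linear under a log-log transformation, is fitted by ordinary least squares, and an IUPAC-style decision level (here 3.3 times the blank noise) is inverted through the fitted curve to a concentration LOD. The power-law exponent, response values, and blank noise are illustrative assumptions, not the paper's data or exact procedure.

```python
# Sketch: LOD estimation via a linearized (log-log) calibration model
# for an assumed power-law gas-sensor response.
import numpy as np

conc_ppm = np.array([2.0, 5.0, 10.0, 20.0, 50.0])  # CO calibration points
resp = 30.0 * conc_ppm ** 0.6                      # assumed power-law response
sigma_blank = 1.5                                  # assumed blank-signal noise

# Linearized model: log(resp) = log(k) + n * log(conc), fitted by OLS.
slope, intercept = np.polyfit(np.log(conc_ppm), np.log(resp), 1)

# IUPAC-style decision level in signal space, inverted to concentration.
y_crit = 3.3 * sigma_blank
lod_ppm = np.exp((np.log(y_crit) - intercept) / slope)
```

    Working in the transformed space is what restores the linearity (and, approximately, the homoscedasticity) that the IUPAC formulas assume.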

  15. A wide-frequency-range air-jet shaker

    NASA Technical Reports Server (NTRS)

    Herr, Robert W

    1957-01-01

    This paper presents a description of a simple air-jet shaker. Its force can be calibrated statically and appears to be constant with frequency. It is relatively easy to use, and it has essentially massless characteristics. This shaker is applied to define the unstable branch of a frequency-response curve obtained for a nonlinear spring with a single degree of freedom.

  16. Errors introduced by dose scaling for relative dosimetry

    PubMed Central

    Watanabe, Yoichi; Hayashi, Naoki

    2012-01-01

    Some dosimeters require a relationship between detector signal and delivered dose. The relationship (characteristic curve or calibration equation) usually depends on the environment under which the dosimeters are manufactured or stored. To compensate for the difference in radiation response among different batches of dosimeters, the measured dose can be scaled by normalizing the measured dose to a specific dose. Such a procedure, often called “relative dosimetry”, allows us to skip the time‐consuming production of a calibration curve for each irradiation. In this study, the magnitudes of errors due to the dose scaling procedure were evaluated by using the characteristic curves of BANG3 polymer gel dosimeter, radiographic EDR2 films, and GAFCHROMIC EBT2 films. Several sets of calibration data were obtained for each type of dosimeters, and a calibration equation of one set of data was used to estimate doses of the other dosimeters from different batches. The scaled doses were then compared with expected doses, which were obtained by using the true calibration equation specific to each batch. In general, the magnitude of errors increased with increasing deviation of the dose scaling factor from unity. Also, the errors strongly depended on the difference in the shape of the true and reference calibration curves. For example, for the BANG3 polymer gel, of which the characteristic curve can be approximated with a linear equation, the error for a batch requiring a dose scaling factor of 0.87 was larger than the errors for other batches requiring smaller magnitudes of dose scaling, or scaling factors of 0.93 or 1.02. The characteristic curves of EDR2 and EBT2 films required nonlinear equations. With those dosimeters, errors larger than 5% were commonly observed in the dose ranges of below 50% and above 150% of the normalization dose. 
In conclusion, the dose scaling for relative dosimetry introduces large errors in the measured doses when a large dose scaling is applied, and this procedure should be applied with special care. PACS numbers: 87.56.Da, 06.20.Dk, 06.20.fb PMID:22955658

  17. Computational simulation of matrix micro-slip bands in SiC/Ti-15 composite

    NASA Technical Reports Server (NTRS)

    Mital, S. K.; Lee, H.-J.; Murthy, P. L. N.; Chamis, C. C.

    1992-01-01

    Computational simulation procedures are used to identify the key deformation mechanisms for (0)/sub 8/ and (90)/sub 8/ SiC/Ti-15 metal matrix composites. The computational simulation procedures employed consist of a three-dimensional finite-element analysis and a micromechanics based computer code METCAN. The interphase properties used in the analysis have been calibrated using the METCAN computer code with the (90)/sub 8/ experimental stress-strain curve. Results of simulation show that although shear stresses are sufficiently high to cause the formation of some slip bands in the matrix concentrated mostly near the fibers, the nonlinearity in the composite stress-strain curve in the case of the (90)/sub 8/ composite is dominated by interfacial damage, such as microcracks and debonding, rather than microplasticity. The stress-strain curve for the (0)/sub 8/ composite is largely controlled by the fibers and shows only slight nonlinearity at higher strain levels that could be the result of matrix microplasticity.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrison, H; Menon, G; Sloboda, R

    The purpose of this study was to investigate the accuracy of radiochromic film calibration procedures used in external beam radiotherapy when applied to I-125 brachytherapy sources delivering higher doses, and to determine any necessary modifications to achieve similar accuracy in absolute dose measurements. GafChromic EBT3 film was used to measure radiation doses upwards of 35 Gy from 6 MV, 75 kVp, and I-125 (∼28 keV) photon sources. A custom phantom was used for the I-125 irradiations to obtain a larger film area with nearly constant dose to reduce the effects of film heterogeneities on the optical density (OD) measurements. RGB transmission images were obtained with an Epson 10000XL flatbed scanner, and calibration curves relating OD and dose using a rational function were determined for each colour channel and at each energy using a non-linear least-squares minimization method. Differences found between the 6 MV calibration curve and those for the lower energy sources are large enough that 6 MV beams should not be used to calibrate film for low-energy sources. However, differences between the 75 kVp and I-125 calibration curves were quite small, indicating that 75 kVp is a good choice. Compared with I-125 irradiation, this gives the advantages of lower type B uncertainties and markedly reduced irradiation time. To obtain high-accuracy calibration for the dose range up to 35 Gy, two-segment piece-wise fitting was required. This yielded absolute dose measurement accuracy above 1 Gy of ∼2% for 75 kVp and ∼5% for I-125 seed exposures.

  19. SU-G-BRB-14: Uncertainty of Radiochromic Film Based Relative Dose Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devic, S; Tomic, N; DeBlois, F

    2016-06-15

    Purpose: Due to its inherently non-linear dose response, measurement of a relative dose distribution with radiochromic film requires measurement of absolute dose using a calibration curve following a previously established reference dosimetry protocol. On the other hand, a functional form that converts the inherently non-linear dose response curve of the radiochromic film dosimetry system into a linear one has been proposed recently [Devic et al, Med. Phys. 39 4850-4857 (2012)]. However, there is a question as to what the uncertainty of such a measured relative dose would be. Methods: If the relative dose distribution is determined going through the reference dosimetry system (conversion of the response into absolute dose by using the calibration curve), the total uncertainty of the relative dose so determined is calculated by summing in quadrature the total uncertainties of the doses measured at a given point and at the reference point. On the other hand, if the relative dose is determined using the linearization method, the new response variable is calculated as ζ = a(netOD)^n/ln(netOD). In this case, the total uncertainty in relative dose is calculated by summing in quadrature the uncertainties of the new response function (σζ) at a given point and at the reference point. Results: Except at very low doses, where the measurement uncertainty dominates, the total relative dose uncertainty is less than 1% for the linear response method as compared to almost 2% uncertainty level for the reference dosimetry method. The result is not surprising, bearing in mind that the total uncertainty of the reference dose method is dominated by the fitting uncertainty, which is mitigated in the case of the linearization method. Conclusion: Linearization of the radiochromic film dose response provides a convenient and a more precise method for relative dose measurements, as it does not require reference dosimetry and the creation of a calibration curve. However, the linearity of the newly introduced function must be verified. 
Dave Lewis is an inventor and runs a consulting company for radiochromic films.
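
    The quadrature bookkeeping of the reference-dosimetry route can be sketched in two lines: a relative dose R = D / D_ref formed from two absolute dose measurements carries a total uncertainty obtained by summing the two relative uncertainties in quadrature. All numbers below are illustrative assumptions.

```python
# Sketch: total uncertainty of a relative dose from two absolute
# measurements, combined in quadrature.
import numpy as np

dose, sigma_dose = 150.0, 2.7        # cGy, measurement at point of interest
ref_dose, sigma_ref = 200.0, 3.2     # cGy, measurement at reference point

rel = dose / ref_dose
sigma_rel = rel * np.hypot(sigma_dose / dose, sigma_ref / ref_dose)
```

    The abstract's point is that when each absolute dose already carries the calibration-fit uncertainty, this quadrature sum is larger than what the linearized response variable ζ yields directly.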

  20. Combined influence of CT random noise and HU-RSP calibration curve nonlinearities on proton range systematic errors

    NASA Astrophysics Data System (ADS)

    Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.

    2017-11-01

    Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and a better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly-correlated stochastic noise are more insidious, and less attention is paid to them, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors thanks to some averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and the range continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.

  1. Wavelength selection-based nonlinear calibration for transcutaneous blood glucose sensing using Raman spectroscopy

    PubMed Central

    Dingari, Narahara Chari; Barman, Ishan; Kang, Jeon Woong; Kong, Chae-Ryon; Dasari, Ramachandra R.; Feld, Michael S.

    2011-01-01

    While Raman spectroscopy provides a powerful tool for noninvasive and real time diagnostics of biological samples, its translation to the clinical setting has been impeded by the lack of robustness of spectroscopic calibration models and the size and cumbersome nature of conventional laboratory Raman systems. Linear multivariate calibration models employing full spectrum analysis are often misled by spurious correlations, such as system drift and covariations among constituents. In addition, such calibration schemes are prone to overfitting, especially in the presence of external interferences that may create nonlinearities in the spectra-concentration relationship. To address both of these issues, we incorporate residue error plot-based wavelength selection and nonlinear support vector regression (SVR). Wavelength selection is used to eliminate uninformative regions of the spectrum, while SVR is used to model curved effects such as those created by tissue turbidity and temperature fluctuations. Using glucose detection in tissue phantoms as a representative example, we show that even a substantial reduction in the number of wavelengths analyzed using SVR leads to calibration models with prediction accuracy equivalent to that of linear full-spectrum analysis. Further, with clinical datasets obtained from human subject studies, we also demonstrate the prospective applicability of the selected wavelength subsets without sacrificing prediction accuracy, which has extensive implications for calibration maintenance and transfer. Additionally, such wavelength selection could substantially reduce the collection time of serial Raman acquisition systems. Given the reduced footprint of serial Raman systems in relation to conventional dispersive Raman spectrometers, we anticipate that the incorporation of wavelength selection in such hardware designs will enhance the possibility of miniaturized clinical systems for disease diagnosis in the near future. PMID:21895336

  2. A Nonlinear Calibration Algorithm Based on Harmonic Decomposition for Two-Axis Fluxgate Sensors

    PubMed Central

    Liu, Shibin

    2018-01-01

    Nonlinearity is a prominent limitation on the calibration performance of two-axis fluxgate sensors. In this paper, a novel nonlinear calibration algorithm taking into account the nonlinearity of errors is proposed. In order to establish the nonlinear calibration model, the combined effect of all time-invariant errors is analyzed in detail, and then the harmonic decomposition method is utilized to estimate the compensation coefficients. Meanwhile, the proposed nonlinear calibration algorithm is validated and compared with a classical calibration algorithm by experiments. The experimental results show that, after the nonlinear calibration, the maximum deviation of magnetic field magnitude is decreased from 1302 nT to 30 nT, which is smaller than the 81 nT achieved by the classical calibration. Furthermore, for the two-axis fluxgate sensor used as a magnetic compass, the maximum heading error is corrected from 1.86° to 0.07°, approximately 11% of the 0.62° residual error obtained after the classical calibration. The results suggest an effective way to improve the calibration performance of two-axis fluxgate sensors. PMID:29789448
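
    The classical two-axis calibration that the paper uses as its baseline can be sketched as an ellipse fit: in a uniform field, rotating the sensor traces an ellipse whose offsets, scale factors, and shape encode the time-invariant errors, and a least-squares fit maps readings back toward a circle. The error model below (scale errors plus offsets, normalized field units) is an assumption for illustration; the paper's harmonic-decomposition method is not shown.

```python
# Sketch: classical two-axis magnetometer calibration via least-squares
# ellipse fitting (offsets and scale errors assumed; normalized units).
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
bx = 1.05 * np.cos(theta) + 0.05     # x axis: scale error + offset (assumed)
by = 0.97 * np.sin(theta) - 0.03     # y axis: scale error + offset (assumed)

# Fit x^2 + C1*y^2 + C2*x + C3*y + C4 = 0 by linear least squares.
A = np.column_stack([by ** 2, bx, by, np.ones_like(bx)])
c, *_ = np.linalg.lstsq(A, -bx ** 2, rcond=None)

x0 = -c[1] / 2.0                     # recovered x offset
y0 = -c[2] / (2.0 * c[0])            # recovered y offset
axis_ratio_sq = c[0]                 # (a/b)^2, the scale-factor mismatch
```

    With the offsets and axis ratio recovered, readings are re-centered and re-scaled before computing heading; the residual errors of exactly this kind of model are what the paper's nonlinear algorithm then attacks.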

  3. Apollo 16/AS-511/LM-11 operational calibration curves. Volume 1: Calibration curves for command service module CSM 113

    NASA Technical Reports Server (NTRS)

    Demoss, J. F. (Compiler)

    1971-01-01

    Calibration curves for the Apollo 16 command service module pulse code modulation downlink and onboard display are presented. Subjects discussed are: (1) measurement calibration curve format, (2) measurement identification, (3) multi-mode calibration data summary, (4) pulse code modulation bilevel events listing, and (5) calibration curves for instrumentation downlink and meter link.

  4. Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna Roberts

    Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to a greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors such as exposure time, temperature, and amplifier choice affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the nonlinear response. This often leaves unacceptable levels of residual non-uniformity. Calibration techniques often have to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal of this dissertation is to determine and compare nonlinear non-uniformity correction algorithms. Ideally the results will provide better NUC performance, resulting in less residual non-uniformity, as well as reduce the need for recalibration. This dissertation will consider new approaches to nonlinear NUC such as higher-order polynomials and exponentials. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms will be compared with common linear non-uniformity correction algorithms. Performance will be compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance will be improved by identifying and replacing bad pixels prior to correction. Two bad pixel identification and replacement techniques will be investigated and compared. Performance will be presented in the form of simulation results as well as before and after images taken with short wave infrared cameras. The initial results show, using a third-order polynomial with 16-bit precision, significant improvement over the one- and two-point correction algorithms. All algorithms have been implemented in software with satisfactory results, and the third-order gain equalization non-uniformity correction algorithm has been implemented in hardware.

  5. A novel data reduction technique for single slanted hot-wire measurements used to study incompressible compressor tip leakage flows

    NASA Astrophysics Data System (ADS)

    Berdanier, Reid A.; Key, Nicole L.

    2016-03-01

    The single slanted hot-wire technique has been used extensively as a method for measuring three velocity components in turbomachinery applications. The cross-flow orientation of probes with respect to the mean flow in rotating machinery results in detrimental prong interference effects when using multi-wire probes. As a result, the single slanted hot-wire technique is often preferred. Typical data reduction techniques solve a set of nonlinear equations determined by curve fits to calibration data. A new method is proposed which utilizes a look-up table method applied to a simulated triple-wire sensor with application to turbomachinery environments having subsonic, incompressible flows. Specific discussion regarding corrections for temperature and density changes present in a multistage compressor application is included, and additional consideration is given to the experimental error which accompanies each data reduction process. Hot-wire data collected from a three-stage research compressor with two rotor tip clearances are used to compare the look-up table technique with the traditional nonlinear equation method. The look-up table approach yields velocity errors of less than 5 % for test conditions deviating by more than 20 °C from calibration conditions (on par with the nonlinear solver method), while requiring less than 10 % of the computational processing time.
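The look-up table idea can be sketched generically: tabulate a simulated sensor response over a (speed, flow-angle) calibration grid, then invert a measurement by nearest-neighbour search instead of solving nonlinear equations. The Jorgensen effective-velocity form and the King's-law constants below are illustrative assumptions, not the authors' calibration, and the example is reduced to two flow parameters:

```python
import numpy as np

# Effective cooling velocity (Jorgensen form): U_eff^2 = U^2(cos^2 a + k^2 sin^2 a),
# where a is the flow angle relative to each wire and k is the yaw factor.
K_YAW = 0.2
WIRE_ANGLES = np.deg2rad([-45.0, 0.0, 45.0])     # simulated triple-wire orientations

def voltages(speed, flow_angle):
    """King's law E^2 = A + B*U_eff^n for each simulated wire (A, B, n assumed)."""
    a = flow_angle - WIRE_ANGLES
    u_eff = speed * np.sqrt(np.cos(a) ** 2 + K_YAW ** 2 * np.sin(a) ** 2)
    return np.sqrt(1.5 + 0.8 * u_eff ** 0.45)

# Build the look-up table over a (speed, angle) calibration grid.
speeds = np.linspace(5.0, 50.0, 181)
angles = np.deg2rad(np.linspace(-30.0, 30.0, 121))
S, G = np.meshgrid(speeds, angles, indexing="ij")
table = np.stack([voltages(s, g) for s, g in zip(S.ravel(), G.ravel())])

def invert(measured):
    """Return the (speed, angle) grid entry whose voltages best match."""
    idx = np.argmin(np.sum((table - measured) ** 2, axis=1))
    return S.ravel()[idx], G.ravel()[idx]

u_hat, a_hat = invert(voltages(23.7, np.deg2rad(12.0)))
```

The trade-off matches the abstract: the table costs memory and a search, but avoids iterating a nonlinear solver for every sample of a long hot-wire record.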

  6. WE-E-18A-04: Precision In-Vivo Dosimetry Using Optically Stimulated Luminescence Dosimeters and a Pulsed-Stimulating Dose Reader

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Q; Herrick, A; Hoke, S

    Purpose: A new readout technology based on pulsed optically stimulated luminescence is introduced (microSTARii, Landauer, Inc, Glenwood, IL 60425). This investigation searches for approaches that maximize the dosimetry accuracy in clinical applications. Methods: The sensitivity of each optically stimulated luminescence dosimeter (OSLD) was initially characterized by exposing it to a given radiation beam. After readout, the luminescence signal stored in the OSLD was erased by exposing its sensing area to a 21 W white LED light for 24 hours. A set of OSLDs with consistent sensitivities was selected to calibrate the dose reader. Higher-order nonlinear curves were also derived from the calibration readings. OSLDs with cumulative doses below 15 Gy were reused. Before each in-vivo dosimetry measurement, the OSLD luminescence signal was erased with the white LED light. Results: For a set of 68 manufacturer-screened OSLDs, the measured sensitivities vary over a range of 17.3%. A sub-set of the OSLDs with sensitivities within ±1% was selected for the reader calibration. Three OSLDs in a group were exposed to a given radiation dose. Nine groups were exposed to radiation doses ranging from 0 to 13 Gy. Additional verifications demonstrated that the reader uncertainty is about 3%. With an external calibration function derived by fitting the OSLD readings to a 3rd-order polynomial, the dosimetry uncertainty dropped to 0.5%. The dose-luminescence response curves of individual OSLDs were characterized. All curves converge within 1% after the sensitivity correction. With all uncertainties considered, the systematic uncertainty is about 2%. Additional tests emulating in-vivo dosimetry by exposing the OSLDs under different radiation sources confirmed the claim. Conclusion: The sensitivity of each individual OSLD should be characterized initially. A 3rd-order polynomial function is a more accurate representation of the dose-luminescence response curve. The dosimetry uncertainty specified by the manufacturer is 4%. Following the proposed approach, it can be controlled to 2%.
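A hedged sketch of the reader-calibration step described above: average the sensitivity-corrected readings of each dose group, fit a 3rd-order polynomial from counts to dose, and use it as the external calibration function. The counts model, sensitivities, and noise levels are all invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
doses = np.linspace(0.0, 13.0, 9)            # Gy, nine calibration groups

def true_counts(d):
    """Assumed mildly supralinear dose-luminescence response (illustrative)."""
    return 1e4 * (d + 0.008 * d ** 2)

# Three OSLDs per group, each with a known relative sensitivity and ~1% noise.
sens = rng.normal(1.0, 0.01, (doses.size, 3))
readings = true_counts(doses)[:, None] * sens * rng.normal(1.0, 0.01, (doses.size, 3))

# Sensitivity correction, then a 3rd-order polynomial calibration (counts -> dose).
corrected = (readings / sens).mean(axis=1)
scale = corrected.max()                      # normalize for a well-conditioned fit
cal = np.polyfit(corrected / scale, doses, deg=3)

dose_hat = np.polyval(cal, true_counts(6.5) / scale)     # dose estimate at 6.5 Gy
```

Correcting each dosimeter by its own sensitivity before the polynomial fit is what lets the group-averaged curve represent every OSLD to within ~1%, as the abstract reports.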

  7. Nonlinear bias analysis and correction of microwave temperature sounder observations for FY-3C meteorological satellite

    NASA Astrophysics Data System (ADS)

    Hu, Taiyang; Lv, Rongchuan; Jin, Xu; Li, Hao; Chen, Wenxin

    2018-01-01

    The nonlinear bias analysis and correction of receiving channels in the Chinese FY-3C meteorological satellite Microwave Temperature Sounder (MWTS) is a key technology of data assimilation for satellite radiance data. The thermal-vacuum chamber calibration data acquired from the MWTS can be analyzed to evaluate the instrument performance, including radiometric temperature sensitivity, channel nonlinearity and calibration accuracy. In particular, the nonlinearity parameters due to imperfect square-law detectors are calculated from calibration data and further used to correct the nonlinear bias contributions of microwave receiving channels. Based upon the operational principles and thermal-vacuum chamber calibration procedures of the MWTS, this paper mainly focuses on the nonlinear bias analysis and correction methods for improving the calibration accuracy of this important instrument onboard the FY-3C meteorological satellite, from the perspective of theoretical and experimental studies. Furthermore, a series of original results are presented to demonstrate the feasibility and significance of the methods.

  8. Axial calibration methods of piezoelectric load sharing dynamometer

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Chang, Qingbing; Ren, Zongjin; Shao, Jun; Wang, Xinlei; Tian, Yu

    2018-06-01

    The relationship between the input and output of a load sharing dynamometer is strongly nonlinear across different loading points in a plane, so precise calibration of this nonlinear relationship is essential for accurate force measurement. In this paper, calibration experiments at different loading points in a plane are first performed on a piezoelectric load sharing dynamometer. The load sharing testing system is then calibrated using both the BP (backpropagation) and ELM (Extreme Learning Machine) algorithms. Finally, the results show that ELM calibrates the nonlinear input-output relationship at the different loading points better than BP, which verifies that the ELM algorithm is feasible for solving this nonlinear force measurement problem.
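ELM calibration reduces to a random hidden layer plus a single linear least-squares solve, which is why it trains much faster than backpropagation. A toy version for a four-channel dynamometer follows; the synthetic sensor response model and all constants are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic calibration data: four piezoelectric channel outputs respond
# nonlinearly to the applied force and the load point (px, py) in the plane.
n = 400
F = rng.uniform(100, 1000, n)                            # applied force, N
px, py = rng.uniform(-0.1, 0.1, (2, n))                  # load-point coordinates
X = np.column_stack([
    F * (1 + 0.3 * px + 0.2 * py + 0.1 * px * py),
    F * (1 - 0.3 * px + 0.2 * py),
    F * (1 + 0.3 * px - 0.2 * py) ** 1.02,
    F * (1 - 0.3 * px - 0.2 * py),
]) + rng.normal(0, 1.0, (n, 4))

# Extreme Learning Machine: random fixed hidden weights, output weights by
# ordinary least squares (no iterative training).
n_hidden = 60
Xs = (X - X.mean(0)) / X.std(0)
W = rng.normal(0, 1, (4, n_hidden))
b = rng.normal(0, 1, n_hidden)
H = np.tanh(Xs @ W + b)
beta, *_ = np.linalg.lstsq(H[:300], F[:300], rcond=None)

pred = H[300:] @ beta                                    # held-out predictions
rmse = np.sqrt(np.mean((pred - F[300:]) ** 2))
```

The only trained quantities are the output weights `beta`; the hidden layer stays random, which is the defining design choice of ELM.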

  9. Residual mode correction in calibrating nonlinear damper for vibration control of flexible structures

    NASA Astrophysics Data System (ADS)

    Sun, Limin; Chen, Lin

    2017-10-01

    Residual mode correction is found to be crucial in calibrating linear resonant absorbers for flexible structures. The classic modal representation, augmented with stiffness and inertia correction terms accounting for non-resonant modes, improves the calibration accuracy and meanwhile avoids complex modal analysis of the full system. This paper explores the augmented modal representation in calibrating control devices with nonlinearity, by studying a taut cable attached with a general viscous damper and its Equivalent Dynamic Systems (EDSs), i.e. the augmented modal representations connected to the same damper. Because nonlinearity is involved, Frequency Response Functions (FRFs) of the EDSs are investigated in detail for parameter calibration, using the harmonic balance method in combination with numerical continuation. The FRFs of the EDSs and the corresponding calibration results are then compared with those of the full system documented in the literature for varied structural modes, damper locations and nonlinearity. General agreement is found, and in particular the EDS with both stiffness and inertia corrections (quasi-dynamic correction) performs best among available approximate methods. This indicates that the augmented modal representation, although derived from linear cases, is applicable to a relatively wide range of damper nonlinearity. Calibration of nonlinear devices by this means still requires numerical analysis, but the efficiency is largely improved owing to the system order reduction.

  10. Flight Model Discharge System

    DTIC Science & Technology

    1989-09-01

    Figures include: interconnection wiring diagram for the ESA; typical gain versus total count curve for the CEM; calibration curve for energy bin 12 of the ion ESA; Flight ESA S/N001; calibration curves for SPM S/N001 and SPM S/N002.

  11. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film.

    PubMed

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process of creating a density-absorbed dose calibration curve is time-consuming. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film, which has a low energy dependence, and a step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were -32.336 and -33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.
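In the low (diagnostic) dose range the density-dose relation is treated as approximately linear, so the calibration reduces to a first-degree fit and its inversion. The step-filter doses, density scale, and sign convention below are invented numbers for the sketch, not the paper's data:

```python
import numpy as np

# Illustrative step-filter readings: one exposure through a step-shaped Al
# filter delivers several known doses at once; film response ~linear at low dose.
doses = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])        # mGy (diagnostic range)
density = 0.05 + 0.0337 * doses                          # assumed net optical density

# Calibration line: density -> absorbed dose.
slope, intercept = np.polyfit(density, doses, deg=1)

# Unknown film region: measured density -> dose via the calibration line.
measured_density = 0.05 + 0.0337 * 5.0
dose_est = slope * measured_density + intercept
```

The practical gain of the step-filter approach is that all calibration points come from a single exposure, which is what shortens the procedure.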

  13. Estimation of Pulse Transit Time as a Function of Blood Pressure Using a Nonlinear Arterial Tube-Load Model.

    PubMed

    Gao, Mingwu; Cheng, Hao-Min; Sung, Shih-Hsien; Chen, Chen-Huan; Olivier, Nicholas Bari; Mukkamala, Ramakrishna

    2017-07-01

    Pulse transit time (PTT) varies with blood pressure (BP) throughout the cardiac cycle, yet, because of wave reflection, only one PTT value at the diastolic BP level is conventionally estimated from proximal and distal BP waveforms. The objective was to establish a technique to estimate multiple PTT values at different BP levels in the cardiac cycle. A technique was developed for estimating PTT as a function of BP (to indicate the PTT value for every BP level) from proximal and distal BP waveforms. First, a mathematical transformation from one waveform to the other is defined in terms of the parameters of a nonlinear arterial tube-load model accounting for BP-dependent arterial compliance and wave reflection. Then, the parameters are estimated by optimally fitting the waveforms to each other via the model-based transformation. Finally, PTT as a function of BP is specified by the parameters. The technique was assessed in animals and patients in several ways, including the ability of its estimated PTT-BP function to serve as a subject-specific curve for calibrating PTT to BP. The calibration curve derived by the technique during a baseline period yielded bias and precision errors in mean BP of 5.1 ± 0.9 and 6.6 ± 1.0 mmHg, respectively, during hemodynamic interventions that varied mean BP widely. The new technique may permit, for the first time, estimation of PTT values throughout the cardiac cycle from proximal and distal waveforms. The technique could potentially be applied to improve arterial stiffness monitoring and help realize cuff-less BP monitoring.
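The calibration use-case can be illustrated with a simple stand-in for the tube-load model: assume a subject-specific exponential PTT-BP relation, fit it during a baseline period, then invert it to turn later PTT measurements into BP estimates. The functional form and every constant below are assumptions for the sketch, not the paper's model:

```python
import numpy as np
from scipy.optimize import curve_fit

def ptt_of_bp(bp, a, b, c):
    """Assumed subject-specific curve: PTT falls ~exponentially as BP rises."""
    return a * np.exp(-b * bp) + c

# Baseline "measurements": PTT values at several BP levels in the cardiac cycle.
bp_levels = np.linspace(60, 140, 9)                      # mmHg
ptt_obs = ptt_of_bp(bp_levels, 120.0, 0.012, 20.0)       # ms, noiseless here

# Fit the subject-specific calibration curve.
(a, b, c), _ = curve_fit(ptt_of_bp, bp_levels, ptt_obs, p0=(100.0, 0.01, 10.0))

def bp_of_ptt(ptt):
    """Invert the calibrated curve: PTT measurement -> BP estimate."""
    return -np.log((ptt - c) / a) / b

bp_est = bp_of_ptt(ptt_of_bp(95.0, 120.0, 0.012, 20.0))
```

Once the curve is calibrated per subject, cuff-less monitoring only needs new PTT values, which is the clinical motivation stated in the abstract.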

  14. Effects of dilution rates, animal species and instruments on the spectrophotometric determination of sperm counts.

    PubMed

    Rondeau, M; Rouleau, M

    1981-06-01

    Using semen from bull, boar and stallion as well as different spectrophotometers, we established the calibration curves relating the optical density of a sperm sample to the sperm count obtained with a hemacytometer. The results show that, for a given spectrophotometer, the calibration curve is not characteristic of the animal species studied. The differences in size of the spermatozoa are probably too small to account for the anticipated species specificity of the calibration curve. Furthermore, the fact that different dilution rates must be used, because of the vastly different sperm concentrations characteristic of these species, has no effect on the calibration curves, since the apparent effect of the dilution rate is shown to be artefactual. On the other hand, for a given semen sample, the calibration curve varies depending upon the spectrophotometer used. However, if two instruments have the same characteristics in terms of spectral bandwidth, their calibration curves are not statistically different.

  15. Curve Number Application in Continuous Runoff Models: An Exercise in Futility?

    NASA Astrophysics Data System (ADS)

    Lamont, S. J.; Eli, R. N.

    2006-12-01

    The suitability of applying the NRCS (Natural Resource Conservation Service) Curve Number (CN) to continuous runoff prediction is examined by studying the dependence of CN on several hydrologic variables in the context of a complex nonlinear hydrologic model. The continuous watershed model Hydrologic Simulation Program-FORTRAN (HSPF) was employed using a simple theoretical watershed in two numerical procedures designed to investigate the influence of soil type, soil depth, storm depth, storm distribution, and initial abstraction ratio value on the calculated CN value. This study stems from a concurrent project involving the design of a hydrologic modeling system to support the Cumulative Hydrologic Impact Assessments (CHIA) of over 230 coal-mined watersheds throughout West Virginia. Because of the large number of watersheds and limited availability of data necessary for HSPF calibration, it was initially proposed that predetermined CN values be used as a surrogate for those HSPF parameters controlling direct runoff. A soil physics model was developed to relate CN values to those HSPF parameters governing soil moisture content and infiltration behavior, with the remaining HSPF parameters being adopted from previous calibrations on real watersheds. A numerical procedure was then adopted to back-calculate CN values from the theoretical watershed using antecedent moisture conditions equivalent to the NRCS Antecedent Runoff Condition (ARC) II. This procedure used the direct runoff produced from a cyclic synthetic storm event time series input to HSPF. A second numerical method of CN determination, using real time series rainfall data, was used to provide a comparison to those CN values determined using the synthetic storm event time series. It was determined that the calculated CN values resulting from both numerical methods demonstrated a nonlinear dependence on all of the computational variables listed above. 
It was concluded that the use of the Curve Number as a surrogate for the selected subset of HSPF parameters could not be justified. These results suggest that use of the Curve Number in other complex continuous time series hydrologic models may not be appropriate, given the limitations inherent in the definition of the NRCS CN method.

  16. Influence of Ultrasonic Nonlinear Propagation on Hydrophone Calibration Using Two-Transducer Reciprocity Method

    NASA Astrophysics Data System (ADS)

    Yoshioka, Masahiro; Sato, Sojun; Kikuchi, Tsuneo; Matsuda, Yoichi

    2006-05-01

    In this study, the influence of ultrasonic nonlinear propagation on hydrophone calibration by the two-transducer reciprocity method is investigated quantitatively using the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation. It is proposed that the correction for the diffraction and attenuation of ultrasonic waves used in two-transducer reciprocity calibration can be derived using the KZK equation to remove the influence of nonlinear propagation. The validity of the correction is confirmed by comparing the sensitivities calibrated by the two-transducer reciprocity method and laser interferometry.

  17. Nonlinear Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.; Badavi, Forooz F.

    1989-01-01

    Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve-fitting routine based on a description of the quadratic expansion of the chi-squared (χ²) statistic. It utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function such that χ² is minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.
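The same weighted chi-squared machinery is available today in SciPy. A minimal equivalent of such a fit, with per-point errors, parameter uncertainties taken from the covariance matrix, and the minimized χ² as a goodness-of-fit measure (the model and data here are synthetic, not NLINEAR's own test case):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

def model(x, amp, tau):
    """Fitting function: a simple exponential decay."""
    return amp * np.exp(-x / tau)

# Synthetic experimental data with known per-point measurement errors.
x = np.linspace(0, 5, 40)
sigma = 0.02 + 0.01 * np.sqrt(model(x, 3.0, 1.5))
y = model(x, 3.0, 1.5) + rng.normal(0, sigma)

# Statistically weighted nonlinear fit; pcov yields parameter uncertainties.
p, pcov = curve_fit(model, x, y, p0=(1.0, 1.0), sigma=sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))

# Minimized chi-squared; for a good fit, expect roughly n - n_params (= 38).
chi2 = np.sum(((y - model(x, *p)) / sigma) ** 2)
```

Comparing χ² against the number of degrees of freedom is the standard goodness-of-fit check the NLINEAR abstract alludes to.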

  18. Evaluation of statistical and rainfall-runoff models for predicting historical daily streamflow time series in the Des Moines and Iowa River watersheds

    USGS Publications Warehouse

    Farmer, William H.; Knight, Rodney R.; Eash, David A.; Kasey J. Hutchinson,; Linhart, S. Mike; Christiansen, Daniel E.; Archfield, Stacey A.; Over, Thomas M.; Kiang, Julie E.

    2015-08-24

    Daily records of streamflow are essential to understanding hydrologic systems and managing the interactions between human and natural systems. Many watersheds and locations lack streamgages to provide accurate and reliable records of daily streamflow. In such ungaged watersheds, statistical tools and rainfall-runoff models are used to estimate daily streamflow. Previous work compared 19 different techniques for predicting daily streamflow records in the southeastern United States. Here, five of the better-performing methods are compared in a different hydroclimatic region of the United States, in Iowa. The methods fall into three classes: (1) drainage-area ratio methods, (2) nonlinear spatial interpolations using flow duration curves, and (3) mechanistic rainfall-runoff models. The first two classes are each applied with nearest-neighbor and map-correlated index streamgages. Using a threefold validation and robust rank-based evaluation, the methods are assessed for overall goodness of fit of the hydrograph of daily streamflow, the ability to reproduce a daily, no-fail storage-yield curve, and the ability to reproduce key streamflow statistics. As in the Southeast study, a nonlinear spatial interpolation of daily streamflow using flow duration curves is found to be a method with the best predictive accuracy. Comparisons with previous work in Iowa show that the accuracy of mechanistic models with at-site calibration is substantially degraded in the ungaged framework.
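The flow-duration-curve interpolation (often called QPPQ) maps each daily flow at an index streamgage to its exceedance probability and then reads the flow at that probability off the ungaged site's flow-duration curve. A numpy sketch with a synthetic index record and an assumed target FDC:

```python
import numpy as np

rng = np.random.default_rng(6)

# Daily flows at a gaged index site (synthetic, lognormal-like record).
index_flow = np.exp(rng.normal(2.0, 1.0, 1000))

# Flow-duration curves: flow versus exceedance probability (decreasing).
probs = np.linspace(0.01, 0.99, 99)
index_fdc = np.quantile(index_flow, 1 - probs)
target_fdc = 2.5 * index_fdc ** 0.9          # assumed FDC for the ungaged site

def transfer(q):
    """QPPQ: index flow -> exceedance probability -> flow at the ungaged site."""
    p = np.interp(q, index_fdc[::-1], probs[::-1])   # np.interp needs ascending x
    return np.interp(p, probs, target_fdc)

estimated = transfer(index_flow)             # estimated daily hydrograph
```

The method transfers the *timing* of the index hydrograph while imposing the target site's own flow distribution, which is why it preserves duration statistics well.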

  19. Refinement of moisture calibration curves for nuclear gage : interim report no. 1.

    DOT National Transportation Integrated Search

    1972-01-01

    This study was initiated to determine the correct moisture calibration curves for different nuclear gages. It was found that the Troxler Model 227 had a linear response between count ratio and moisture content. Also, the two calibration curves for th...

  20. The validation of the Z-Scan technique for the determination of plasma glucose

    NASA Astrophysics Data System (ADS)

    Alves, Sarah I.; Silva, Elaine A. O.; Costa, Simone S.; Sonego, Denise R. N.; Hallack, Maira L.; Coppini, Ornela L.; Rowies, Fernanda; Azzalis, Ligia A.; Junqueira, Virginia B. C.; Pereira, Edimar C.; Rocha, Katya C.; Fonseca, Fernando L. A.

    2013-11-01

    Glucose is the main energy source for the human body. The concentration of blood glucose is regulated by several hormones, including the two antagonists insulin and glucagon. The quantification of glucose in the blood is used for diagnosing metabolic disorders of carbohydrates, such as diabetes, idiopathic hypoglycemia and pancreatic diseases. Currently, the methodology used for this determination is an enzymatic colorimetric assay with spectrophotometric detection. This study aimed to validate the use of measurements of the nonlinear optical properties of plasma glucose via the Z-Scan technique. For this we used calibrator samples that simulate commercial patient samples (ELITech©). Besides the calibrators, serum samples with glucose levels within acceptable reference values (normal control serum - Brazilian Society of Clinical Pathology and Laboratory Medicine) and with overestimated levels (pathological control serum - Brazilian Society of Clinical Pathology and Laboratory Medicine) were used in the proposed methodology. Calibrator dilutions were performed and measured by the Z-Scan technique for the preparation of the calibration curve. In conclusion, the Z-Scan method can be used to determine glucose levels in biological samples alongside the enzymatic colorimetric reaction, and the same quality control parameters used in clinical biochemistry can be applied to it.

  1. Evaluating the relationship between leaf chlorophyll concentration and SPAD-502 chlorophyll meter readings.

    PubMed

    Uddling, J; Gelang-Alfredsson, J; Piikki, K; Pleijel, H

    2007-01-01

    Relationships between chlorophyll concentration ([chl]) and SPAD values were determined for birch, wheat, and potato. For all three species, the relationships were non-linear with an increasing slope with increasing SPAD. The relationships for birch and wheat were strong (r² ≈ 0.9), while the potato relationship was comparatively weak (r² ≈ 0.5). Birch and wheat had very similar relationships when the chlorophyll concentration was expressed per unit leaf area, but diverged when it was expressed per unit fresh weight. Furthermore, wheat showed similar SPAD-[chl] relationships for two different cultivars and during two different growing seasons. The curvilinear shape of the SPAD-[chl] relationships agreed well with the simulated effects of non-uniform chlorophyll distribution across the leaf surface and multiple scattering, causing deviations from linearity in the high and low SPAD range, respectively. The effect of non-uniformly distributed chlorophyll is likely to be more important in explaining the non-linearity in the empirical relationships, since the effect of scattering was predicted to be comparatively weak. The simulations were based on the algorithm for the calculation of SPAD-502 output values. We suggest that SPAD calibration curves should generally be parameterised as non-linear equations, and we hope that the relationships between [chl] and SPAD and the simulations of the present study can facilitate the interpretation of chlorophyll meter calibrations in relation to optical properties of leaves in future studies.
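A non-linear SPAD calibration of the kind the authors recommend can be parameterised, for example, as an exponential with increasing slope. The functional form and every coefficient below are invented for illustration, not fitted to the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def chl_model(spad, a, b):
    """Assumed convex calibration: slope increases with SPAD reading."""
    return a * (np.exp(b * spad) - 1.0)

# Illustrative paired readings (SPAD units vs chlorophyll per unit leaf area).
spad = np.array([10, 15, 20, 25, 30, 35, 40, 45, 50], dtype=float)
chl = chl_model(spad, 80.0, 0.045)           # mg m^-2, synthetic

p, _ = curve_fit(chl_model, spad, chl, p0=(50.0, 0.03))

# The fitted curve should steepen at high SPAD, as reported for birch and wheat.
slope_lo = chl_model(21, *p) - chl_model(20, *p)
slope_hi = chl_model(41, *p) - chl_model(40, *p)
```

Fitting a non-linear equation instead of a straight line is exactly the parameterisation choice the abstract argues for.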

  2. Novel Approach for Prediction of Localized Necking in Case of Nonlinear Strain Paths

    NASA Astrophysics Data System (ADS)

    Drotleff, K.; Liewald, M.

    2017-09-01

    Rising customer expectations regarding design complexity and weight reduction of sheet metal components, together with further reduced time to market, imply an increased demand for process validation using numerical forming simulation. Formability prediction, though, is often still based on the forming limit diagram first presented in the 1960s. Despite its many drawbacks in the case of nonlinear strain paths, and despite major advances in research in recent years, the forming limit curve (FLC) is still one of the most commonly used criteria for assessing the formability of sheet metal materials. Especially when forming complex part geometries, nonlinear strain paths may occur which cannot be predicted using the conventional FLC concept. In this paper a novel approach for the calculation of FLCs for nonlinear strain paths is presented. Combining an approach for prediction of the FLC from tensile test data with the IFU-FLC-Criterion, a model for the prediction of localized necking under nonlinear strain paths can be derived. The presented model is based purely on experimental tensile test data, making it easy to calibrate for any given material. The resulting prediction of localized necking is validated using an experimental deep drawing specimen made of AA6014 material with a sheet thickness of 1.04 mm. The results are compared to the IFU-FLC-Criterion based on data from pre-stretched Nakajima specimens.

  3. A Bionic Polarization Navigation Sensor and Its Calibration Method.

    PubMed

    Zhao, Huijie; Xu, Wujian

    2016-08-03

    The polarization patterns of skylight which arise due to the scattering of sunlight in the atmosphere can be used by many insects for deriving compass information. Inspired by insects' polarized light compass, scientists have developed a new kind of navigation method. One of the key techniques in this method is the polarimetric sensor which is used to acquire direction information from skylight. In this paper, a polarization navigation sensor is proposed which imitates the working principles of the polarization vision systems of insects. We introduce the optical design and mathematical model of the sensor. In addition, a calibration method based on variable substitution and non-linear curve fitting is proposed. The results obtained from the outdoor experiments provide support for the feasibility and precision of the sensor. The sensor's signal processing can be well described using our mathematical model. A relatively high degree of accuracy in polarization measurement can be obtained without any error compensation.

  5. Nonlinear optical imaging for sensitive detection of crystals in bulk amorphous powders.

    PubMed

    Kestur, Umesh S; Wanapun, Duangporn; Toth, Scott J; Wegiel, Lindsay A; Simpson, Garth J; Taylor, Lynne S

    2012-11-01

    The primary aim of this study was to evaluate the utility of second-order nonlinear imaging of chiral crystals (SONICC) to quantify crystallinity in drug-polymer blends, including solid dispersions. Second harmonic generation (SHG) can potentially exhibit scaling with crystallinity between linear and quadratic depending on the nature of the source, and thus, it is important to determine the response of pharmaceutical powders. Physical mixtures containing different proportions of crystalline naproxen and hydroxyl propyl methyl cellulose acetate succinate (HPMCAS) were prepared by blending and a dispersion was produced by solvent evaporation. A custom-built SONICC instrument was used to characterize the SHG intensity as a function of the crystalline drug fraction in the various samples. Powder X-ray diffraction (PXRD) and Raman spectroscopy were used as complementary methods known to exhibit linear scaling. SONICC was able to detect crystalline drug even in the presence of 99.9 wt % HPMCAS in the binary mixtures. The calibration curve revealed a linear dynamic range with an R² value of 0.99 spanning the range from 0.1 to 100 wt % naproxen with a root mean square error of prediction of 2.7%. Using the calibration curve, the errors in the validation samples were in the range of 5-10%. Analysis of a 75 wt % HPMCAS-naproxen solid dispersion with SONICC revealed the presence of crystallites at an earlier time point than could be detected with PXRD and Raman spectroscopy. In addition, results from the crystallization kinetics experiment using SONICC were in good agreement with Raman spectroscopy and PXRD. In conclusion, SONICC has been found to be a sensitive technique for detecting low levels (0.1% or lower) of crystallinity, even in the presence of large quantities of a polymer. Copyright © 2012 Wiley-Liss, Inc.

  6. Linear and nonlinear trending and prediction for AVHRR time series data

    NASA Technical Reports Server (NTRS)

    Smid, J.; Volf, P.; Slama, M.; Palus, M.

    1995-01-01

    The variability of AVHRR calibration coefficient in time was analyzed using algorithms of linear and non-linear time series analysis. Specifically we have used the spline trend modeling, autoregressive process analysis, incremental neural network learning algorithm and redundancy functional testing. The analysis performed on available AVHRR data sets revealed that (1) the calibration data have nonlinear dependencies, (2) the calibration data depend strongly on the target temperature, (3) both calibration coefficients and the temperature time series can be modeled, in the first approximation, as autonomous dynamical systems, (4) the high frequency residuals of the analyzed data sets can be best modeled as an autoregressive process of the 10th degree. We have dealt with a nonlinear identification problem and the problem of noise filtering (data smoothing). The system identification and filtering are significant problems for AVHRR data sets. The algorithms outlined in this study can be used for the future EOS missions. Prediction and smoothing algorithms for time series of calibration data provide a functional characterization of the data. Those algorithms can be particularly useful when calibration data are incomplete or sparse.
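
    The abstract fits a 10th-order autoregressive process to the high-frequency residuals. As a low-order sketch of the same idea (an AR(2) fit via the Yule-Walker equations rather than the authors' 10th-order model), the coefficients of a synthetic series can be recovered from its sample autocorrelations:

```python
import random

def ar2_yule_walker(x):
    """Estimate AR(2) coefficients from sample autocovariances using
    the Yule-Walker equations (a low-order stand-in for the
    10th-order fit mentioned in the abstract)."""
    n = len(x)
    m = sum(x) / n
    def acov(k):
        return sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / n
    r0, r1, r2 = acov(0), acov(1), acov(2)
    rho1, rho2 = r1 / r0, r2 / r0
    det = 1.0 - rho1 ** 2
    phi1 = rho1 * (1.0 - rho2) / det
    phi2 = (rho2 - rho1 ** 2) / det
    return phi1, phi2

# Synthetic AR(2) series with known coefficients 0.6 and -0.3
random.seed(42)
x = [0.0, 0.0]
for _ in range(5000):
    x.append(0.6 * x[-1] - 0.3 * x[-2] + random.gauss(0.0, 1.0))

phi1, phi2 = ar2_yule_walker(x)
```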

  7. Nonlinear Growth Models in Mplus and SAS

    ERIC Educational Resources Information Center

    Grimm, Kevin J.; Ram, Nilam

    2009-01-01

    Nonlinear growth curves or growth curves that follow a specified nonlinear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this article we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the nonlinear…
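
    A minimal sketch of one such sigmoid form: the logistic growth curve, whose three parameters map directly to interpretable quantities (asymptote, inflection time, steepness). This is a generic illustration, not the Mplus/SAS syntax from the article.

```python
import math

def logistic_growth(t, upper, inflection, rate):
    """Logistic growth curve: rises toward the asymptote `upper`,
    passes through upper / 2 at t = inflection; `rate` sets the
    steepness of the rise."""
    return upper / (1.0 + math.exp(-rate * (t - inflection)))
```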

  8. Investigation of Light-Emitting Diode (LED) Point Light Source Color Visibility against Complex Multicolored Backgrounds

    DTIC Science & Technology

    2017-11-01

    …sent from light-emitting diodes (LEDs) of 5 colors (green, red, white, amber, and blue). Experiment 1 involved controlled laboratory measurements of… [Appendix figure/table residue: red and green LED calibration measurements and calibration curves with quadratic curve fits and R2 values.]

  9. Cyclic tensile response of a pre-tensioned polyurethane

    NASA Astrophysics Data System (ADS)

    Nie, Yizhou; Liao, Hangjie; Chen, Weinong W.

    2018-05-01

    In the research reported in this paper, we subject a polyurethane to uniaxial tensile loading at a quasi-static strain rate, a high strain rate and a jumping strain rate where the specimen is under quasi-static pre-tension and is further subjected to a dynamic cyclic loading using a modified Kolsky tension bar. The results obtained at the quasi-static and high strain rate clearly show that the mechanical response of this material is significantly rate sensitive. The rate-jumping experimental results show that the response of the material behavior is consistent before jumping. After jumping the stress-strain response of the material does not jump to the corresponding high-rate curve. Rather it approaches the high-rate curve asymptotically. A non-linear hyper-viscoelastic (NLHV) model, after having been calibrated by monotonic quasi-static and high-rate experimental results, was found to be capable of describing the material tensile behavior under such rate jumping conditions.

  10. A new form of the calibration curve in radiochromic dosimetry. Properties and results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamponi, Matteo, E-mail: mtamponi@aslsassari.it; Bona, Rossana; Poggiu, Angela; Marini, Piergiorgio

    Purpose: This work describes a new form of the calibration curve for radiochromic dosimetry that depends on one fit parameter. Some results are reported to show that the new curve performs as well as those previously used and, more importantly, significantly reduces the dependence on the lot of films, the film orientation on the scanner, and the time after exposure. Methods: The form of the response curve makes use of the net optical densities ratio against the dose and has been studied by means of the Beer–Lambert law and a simple modeling of the film. The new calibration curve has been applied to EBT3 films exposed at 6 and 15 MV energy beams of linear accelerators and read-out in transmission mode by means of a flatbed color scanner. Its performance has been compared to that of two established forms of the calibration curve, which use the optical density and the net optical density against the dose. Four series of measurements with four lots of EBT3 films were used to evaluate the precision, accuracy, and dependence on the time after exposure, orientation on the scanner and lot of films. Results: The new calibration curve is roughly subject to the same dose uncertainty, about 2% (1 standard deviation), and has the same accuracy, about 1.5% (dose values between 50 and 450 cGy), as the other calibration curves when films of the same lot are used. Moreover, the new calibration curve, albeit obtained from only one lot of film, shows a good agreement with experimental data from all other lots of EBT3 films used, with an accuracy of about 2% and a relative dose precision of 2.4% (1 standard deviation). The agreement also holds for changes of the film orientation and of the time after exposure. Conclusions: The dose accuracy of this new form of the calibration curve is always equal to or better than those obtained from the two types of curves previously used. The use of the net optical densities ratio considerably reduces the dependence on the lot of films, the landscape/portrait orientation, and the time after exposure. This form of the calibration curve could become even more useful with new optical digital devices using monochromatic light.

  11. A new form of the calibration curve in radiochromic dosimetry. Properties and results.

    PubMed

    Tamponi, Matteo; Bona, Rossana; Poggiu, Angela; Marini, Piergiorgio

    2016-07-01

    This work describes a new form of the calibration curve for radiochromic dosimetry that depends on one fit parameter. Some results are reported to show that the new curve performs as well as those previously used and, more importantly, significantly reduces the dependence on the lot of films, the film orientation on the scanner, and the time after exposure. The form of the response curve makes use of the net optical densities ratio against the dose and has been studied by means of the Beer-Lambert law and a simple modeling of the film. The new calibration curve has been applied to EBT3 films exposed at 6 and 15 MV energy beams of linear accelerators and read-out in transmission mode by means of a flatbed color scanner. Its performance has been compared to that of two established forms of the calibration curve, which use the optical density and the net optical density against the dose. Four series of measurements with four lots of EBT3 films were used to evaluate the precision, accuracy, and dependence on the time after exposure, orientation on the scanner and lot of films. The new calibration curve is roughly subject to the same dose uncertainty, about 2% (1 standard deviation), and has the same accuracy, about 1.5% (dose values between 50 and 450 cGy), as the other calibration curves when films of the same lot are used. Moreover, the new calibration curve, albeit obtained from only one lot of film, shows a good agreement with experimental data from all other lots of EBT3 films used, with an accuracy of about 2% and a relative dose precision of 2.4% (1 standard deviation). The agreement also holds for changes of the film orientation and of the time after exposure. The dose accuracy of this new form of the calibration curve is always equal to or better than those obtained from the two types of curves previously used. 
The use of the net optical densities ratio considerably reduces the dependence on the lot of films, the landscape/portrait orientation, and the time after exposure. This form of the calibration curve could become even more useful with new optical digital devices using monochromatic light.
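
    A sketch of the quantities the curve is built from. Net optical density follows from the Beer-Lambert law applied to scanner pixel values; the paper's calibration maps the ratio of two such densities to dose through a single-parameter curve whose functional form is not given in the abstract, so only the inputs are shown here. The channel pairing is an assumption for illustration.

```python
import math

def net_od(pv_unexposed, pv_exposed):
    """Net optical density of a film from scanner pixel values,
    following the Beer-Lambert law."""
    return math.log10(pv_unexposed / pv_exposed)

def net_od_ratio(pv0_a, pv_a, pv0_b, pv_b):
    """Ratio of the net optical densities of two scanner color
    channels; the paper maps this ratio to dose through a curve
    with a single fit parameter (form not specified here)."""
    return net_od(pv0_a, pv_a) / net_od(pv0_b, pv_b)
```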

  12. Imaging technique for real-time temperature monitoring during cryotherapy of lesions.

    PubMed

    Petrova, Elena; Liopo, Anton; Nadvoretskiy, Vyacheslav; Ermilov, Sergey

    2016-11-01

    Noninvasive real-time temperature imaging during thermal therapies is able to significantly improve clinical outcomes. An optoacoustic (OA) temperature monitoring method is proposed for noninvasive real-time thermometry of vascularized tissue during cryotherapy. The universal temperature-dependent optoacoustic response (ThOR) of red blood cells (RBCs) is employed to convert reconstructed OA images to temperature maps. To obtain the temperature calibration curve for intensity-normalized OA images, we measured ThOR of 10 porcine blood samples in the range of temperatures from 40°C to −16°C and analyzed the data for single measurement variations. The nonlinearity (ΔTmax) and the temperature of zero OA response (T0) of the calibration curve were found equal to 11.4±0.1°C and −13.8±0.1°C, respectively. The morphology of RBCs was examined before and after the data collection confirming cellular integrity and intracellular compartmentalization of hemoglobin. For temperatures below 0°C, which are of particular interest for cryotherapy, the accuracy of a single temperature measurement was ±1°C, which is consistent with the clinical requirements. Validation of the proposed OA temperature imaging technique was performed for slow and fast cooling of blood samples embedded in tissue-mimicking phantoms.

  13. Imaging technique for real-time temperature monitoring during cryotherapy of lesions

    PubMed Central

    Petrova, Elena; Liopo, Anton; Nadvoretskiy, Vyacheslav; Ermilov, Sergey

    2016-01-01

    Noninvasive real-time temperature imaging during thermal therapies is able to significantly improve clinical outcomes. An optoacoustic (OA) temperature monitoring method is proposed for noninvasive real-time thermometry of vascularized tissue during cryotherapy. The universal temperature-dependent optoacoustic response (ThOR) of red blood cells (RBCs) is employed to convert reconstructed OA images to temperature maps. To obtain the temperature calibration curve for intensity-normalized OA images, we measured ThOR of 10 porcine blood samples in the range of temperatures from 40°C to −16°C and analyzed the data for single measurement variations. The nonlinearity (ΔTmax) and the temperature of zero OA response (T0) of the calibration curve were found equal to 11.4±0.1°C and −13.8±0.1°C, respectively. The morphology of RBCs was examined before and after the data collection confirming cellular integrity and intracellular compartmentalization of hemoglobin. For temperatures below 0°C, which are of particular interest for cryotherapy, the accuracy of a single temperature measurement was ±1°C, which is consistent with the clinical requirements. Validation of the proposed OA temperature imaging technique was performed for slow and fast cooling of blood samples embedded in tissue-mimicking phantoms. PMID:27822579
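
    Once the calibration curve is measured, applying it per pixel reduces to interpolating a table. The sketch below assumes a simple piecewise-linear lookup; only the zero-response temperature of −13.8°C comes from the abstract, the other table entries are made up for illustration.

```python
def temp_from_response(r, table):
    """Map an intensity-normalized optoacoustic response to a
    temperature by piecewise-linear interpolation in a measured
    calibration table of (response, temperature) pairs."""
    pts = sorted(table)
    for (r0, t0), (r1, t1) in zip(pts, pts[1:]):
        if r0 <= r <= r1:
            return t0 + (t1 - t0) * (r - r0) / (r1 - r0)
    raise ValueError("response outside calibrated range")

# Hypothetical table; only the zero-response point at -13.8 C is
# taken from the abstract, the other entries are invented.
CALIB = [(-0.2, -16.0), (0.0, -13.8), (1.0, 10.0), (2.0, 40.0)]
```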

  14. Discussion of band selection and methodologies for the estimation of precipitable water vapour from AVIRIS data

    NASA Technical Reports Server (NTRS)

    Schanzer, Dena; Staenz, Karl

    1992-01-01

    An Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data set acquired over Canal Flats, B.C., on 14 Aug. 1990, was used for the purpose of developing methodologies for surface reflectance retrieval using the 5S atmospheric code. A scene of Rogers Dry Lake, California (23 Jul. 1990), acquired within three weeks of the Canal Flats scene, was used as a potential reference for radiometric calibration purposes and for comparison with other studies using primarily LOWTRAN7. Previous attempts at surface reflectance retrieval indicated that reflectance values in the gaseous absorption bands had the poorest accuracy. Modifications to 5S to use 1 nm step size, in order to make fuller use of the 20 cm(sup -1) resolution of the gaseous absorption data, resulted in some improvement in the accuracy of the retrieved surface reflectance. Estimates of precipitable water vapor using non-linear least squares regression and simple ratioing techniques such as the CIBR (Continuum Interpolated Band Ratio) technique or the narrow/wide technique, which relate ratios of combinations of bands to precipitable water vapor through calibration curves, were found to vary widely. The estimates depended on the bands used for the estimation; none provided entirely satisfactory surface reflectance curves.
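
    The CIBR technique mentioned above is, in essence, a ratio of the radiance in a water-vapour absorption band to a continuum level interpolated from neighbouring window bands; that ratio is then related to precipitable water vapour through a calibration curve. A minimal sketch (the equal weights are an assumption; actual weights depend on the chosen band positions):

```python
def cibr(l_absorption, l_left, l_right, w_left=0.5, w_right=0.5):
    """Continuum Interpolated Band Ratio: radiance in an absorption
    band divided by a weighted interpolation of the continuum
    radiance from two flanking window bands."""
    return l_absorption / (w_left * l_left + w_right * l_right)
```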

  15. Calibration of a modified temperature-light intensity logger for quantifying water electrical conductivity

    NASA Astrophysics Data System (ADS)

    Gillman, M. A.; Lamoureux, S. F.; Lafrenière, M. J.

    2017-09-01

    The Stream Temperature, Intermittency, and Conductivity (STIC) electrical conductivity (EC) logger as presented by Chapin et al. (2014) serves as an inexpensive (˜50 USD) means to assess relative EC in freshwater environments. This communication demonstrates the calibration of the STIC logger for quantifying EC, and provides examples from a month-long field deployment in the High Arctic. Calibration models followed multiple nonlinear regression and produced calibration curves with high coefficient of determination values (R2 = 0.995 - 0.998; n = 5). Percent error of mean predicted specific conductance at 25°C (SpC) relative to known SpC ranged from -0.6% to 13% (mean = -1.4%), and mean absolute percent error (MAPE) ranged from 2.1% to 13% (mean = 5.3%). Across all tested loggers we found good accuracy and precision, with both error metrics increasing with increasing SpC values. During 10 month-long field deployments, there were no logger failures and full data recovery was achieved. Point SpC measurements at the location of STIC loggers recorded via a more expensive commercial electrical conductivity logger followed similar trends to STIC SpC records, with 1:1.05 and 1:1.08 relationships between the STIC and commercial logger SpC values. These results demonstrate that STIC loggers calibrated to quantify EC are an economical means to increase the spatiotemporal resolution of water quality investigations.
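
    The two error metrics quoted above are standard and easy to reproduce; a minimal sketch with hypothetical readings:

```python
def percent_error(predicted, known):
    """Signed percent error of a predicted SpC value."""
    return 100.0 * (predicted - known) / known

def mape(predicted, known):
    """Mean absolute percent error over paired readings."""
    return sum(abs(percent_error(p, k))
               for p, k in zip(predicted, known)) / len(known)
```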

  16. From conservative to reactive transport under diffusion-controlled conditions

    NASA Astrophysics Data System (ADS)

    Babey, Tristan; de Dreuzy, Jean-Raynald; Ginn, Timothy R.

    2016-05-01

    We assess the possibility of using conservative transport information, such as that contained in transit time distributions, breakthrough curves and tracer tests, to predict nonlinear fluid-rock interactions in fracture/matrix or mobile/immobile conditions. Reference simulated data are given by conservative and reactive transport simulations in several diffusive porosity structures differing by their topological organization. Reactions include nonlinear kinetically controlled dissolution and desorption. Effective Multi-Rate Mass Transfer models (MRMT) are calibrated solely on conservative transport information without pore topology information and provide concentration distributions on which effective reaction rates are estimated. Reference simulated reaction rates and effective reaction rates evaluated by MRMT are compared, as well as characteristic desorption and dissolution times. Although not exactly equal, these indicators remain very close whatever the porous structure, differing at most by 0.6% and 10% for desorption and dissolution, respectively. At early times, this close agreement arises from the fine characterization of the diffusive porosity close to the mobile zone that controls fast mobile-diffusive exchanges. At intermediate to late times, concentration gradients are strongly reduced by diffusion, and reactivity can be captured by a very limited number of rates. We conclude that effective models calibrated solely on conservative transport information, like MRMT, can accurately estimate monocomponent kinetically controlled nonlinear fluid-rock interactions. Their relevance might extend to more advanced biogeochemical reactions because of the good characterization of conservative concentration distributions, even by parsimonious models (e.g., MRMT with 3-5 rates). We propose a methodology to estimate reactive transport from conservative transport in mobile-immobile conditions.

  17. Thickness Gauging of Single-Layer Conductive Materials with Two-Point Non Linear Calibration Algorithm

    NASA Technical Reports Server (NTRS)

    Fulton, James P. (Inventor); Namkung, Min (Inventor); Simpson, John W. (Inventor); Wincheski, Russell A. (Inventor); Nath, Shridhar C. (Inventor)

    1998-01-01

    A thickness gauging instrument uses a flux focusing eddy current probe and two-point nonlinear calibration algorithm. The instrument is small and portable due to the simple interpretation and operational characteristics of the probe. A nonlinear interpolation scheme incorporated into the instrument enables a user to make highly accurate thickness measurements over a fairly wide calibration range from a single side of nonferromagnetic conductive metals. The instrument is very easy to use and can be calibrated quickly.
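
    The patent abstract does not give the functional form of the probe response, so the sketch below assumes an exponential decay of signal with thickness purely for illustration; the point is the two-point idea itself: two known-thickness readings pin down both model parameters, after which the model is inverted to gauge unknowns.

```python
import math

def two_point_calibrate(t1, v1, t2, v2):
    """Fit the assumed model V = A * exp(-t / tau) through two
    calibration readings (t = known thickness, V = probe output)."""
    tau = (t2 - t1) / math.log(v1 / v2)
    return v1 * math.exp(t1 / tau), tau

def thickness(v, A, tau):
    """Invert the calibrated model to gauge an unknown thickness."""
    return -tau * math.log(v / A)
```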

  18. A counting-weighted calibration method for a field-programmable-gate-array-based time-to-digital converter

    NASA Astrophysics Data System (ADS)

    Chen, Yuan-Ho

    2017-05-01

    In this work, we propose a counting-weighted calibration method for field-programmable-gate-array (FPGA)-based time-to-digital converter (TDC) to provide non-linearity calibration for use in positron emission tomography (PET) scanners. To deal with the non-linearity in FPGA, we developed a counting-weighted delay line (CWD) to count the delay time of the delay cells in the TDC in order to reduce the differential non-linearity (DNL) values based on code density counts. The performance of the proposed CWD-TDC with regard to linearity far exceeds that of TDC with a traditional tapped delay line (TDL) architecture, without the need for nonlinearity calibration. When implemented in a Xilinx Vertix-5 FPGA device, the proposed CWD-TDC achieved time resolution of 60 ps with integral non-linearity (INL) and DNL of [-0.54, 0.24] and [-0.66, 0.65] least-significant-bit (LSB), respectively. This is a clear indication of the suitability of the proposed FPGA-based CWD-TDC for use in PET scanners.
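
    The DNL and INL figures quoted above come from code-density counts: with hits uniformly distributed in time, each TDC bin should receive an equal share, and deviations from that share give the nonlinearity in LSB. A minimal sketch of that standard computation (not the authors' FPGA implementation):

```python
def dnl_inl(hist):
    """DNL and INL (in LSB) from a code-density histogram collected
    by driving the converter with hits uniform in time. DNL is the
    per-bin deviation from the ideal count; INL is its running sum."""
    ideal = sum(hist) / len(hist)
    dnl = [h / ideal - 1.0 for h in hist]
    inl, acc = [], 0.0
    for d in dnl:
        acc += d
        inl.append(acc)
    return dnl, inl
```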

  19. New approach to calibrating bed load samplers

    USGS Publications Warehouse

    Hubbell, D.W.; Stevens, H.H.; Skinner, J.V.; Beverage, J.P.

    1985-01-01

    Cyclic variations in bed load discharge at a point, which are an inherent part of the process of bed load movement, complicate calibration of bed load samplers and preclude the use of average rates to define sampling efficiencies. Calibration curves, rather than efficiencies, are derived by two independent methods using data collected with prototype versions of the Helley‐Smith sampler in a large calibration facility capable of continuously measuring transport rates across a 9 ft (2.7 m) width. Results from both methods agree. Composite calibration curves, based on matching probability distribution functions of samples and measured rates from different hydraulic conditions (runs), are obtained for six different versions of the sampler. Sampled rates corrected by the calibration curves agree with measured rates for individual runs.
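
    The matching of probability distribution functions described above can be sketched, in its simplest empirical form, as pairing equal quantiles of the sampled and measured rate distributions; the resulting pairs trace out a calibration curve rather than a single efficiency. This is an illustrative reduction of the method, not the authors' procedure in detail.

```python
def quantile_matching_curve(sampled, measured):
    """Pair equal empirical quantiles of sampled and measured bed
    load rates (i.e., match their distribution functions); the
    (sampled, measured) pairs define a calibration curve."""
    return list(zip(sorted(sampled), sorted(measured)))
```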

  20. Development of theoretical oxygen saturation calibration curve based on optical density ratio and optical simulation approach

    NASA Astrophysics Data System (ADS)

    Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia

    2017-09-01

    The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either from empirical studies using animals as the subject of experiment or is derived from mathematical equations. However, the determination of the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to exhibit real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratios (ODR) associated with SaO2 during systole and diastole is used as the basis of obtaining the theoretical calibration curve. The optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for extended calibration curve studies using other wavelength pairs.
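
    The OD/ODR relationship underlying such a calibration curve can be sketched directly: each wavelength's pulsatile optical density comes from the diastole-to-systole intensity change, and the ratio across a wavelength pair is the quantity mapped to SaO2. The red/infrared pairing below is an assumption for illustration.

```python
import math

def pulsatile_od(i_diastole, i_systole):
    """Optical density change produced by the arterial pulse at one
    wavelength (detected intensity at diastole vs. systole)."""
    return math.log10(i_diastole / i_systole)

def od_ratio(red_dia, red_sys, ir_dia, ir_sys):
    """Optical density ratio (ODR) between two wavelengths; a
    calibration curve maps ODR values to SaO2."""
    return pulsatile_od(red_dia, red_sys) / pulsatile_od(ir_dia, ir_sys)
```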

  1. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ t max and to a lesser extent maximum tensile strength σ n max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear) E t largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  2. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE PAGES

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ t max and to a lesser extent maximum tensile strength σ n max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear) E t largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
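
    The core of probabilistic calibration — weighting candidate parameter settings by how well their simulated output matches the experiment — can be sketched in a toy, grid-based form. This stand-in assumes a Gaussian measurement model and a flat prior, and omits the statistical emulator the paper builds; all numbers are hypothetical.

```python
import math

def posterior_weights(sim_outputs, observed, sigma):
    """Posterior weights over candidate parameter settings under a
    Gaussian measurement model with a flat prior -- a toy,
    grid-based stand-in for emulator-based calibration."""
    logliks = [-0.5 * ((s - observed) / sigma) ** 2 for s in sim_outputs]
    m = max(logliks)                      # subtract max for stability
    unnorm = [math.exp(ll - m) for ll in logliks]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Hypothetical "training" runs: parameter grid and simulator outputs
grid = [0.5, 1.0, 1.5]
sims = [12.0, 20.0, 28.0]
weights = posterior_weights(sims, observed=21.0, sigma=4.0)
best = grid[weights.index(max(weights))]
```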

  3. Methods for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry

    DOEpatents

    Chan, George C. Y. [Bloomington, IN; Hieftje, Gary M [Bloomington, IN

    2010-08-03

    A method for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry (ICP-AES). ICP-AES analysis is performed across a plurality of selected locations in the plasma on an unknown sample, collecting the light intensity at one or more selected wavelengths of one or more sought-for analytes, creating a first dataset. The first dataset is then calibrated with a calibration dataset creating a calibrated first dataset curve. If the calibrated first dataset curve has a variability along the location within the plasma for a selected wavelength, errors are present. Plasma-related errors are then corrected by diluting the unknown sample and performing the same ICP-AES analysis on the diluted unknown sample creating a calibrated second dataset curve (accounting for the dilution) for the one or more sought-for analytes. The cross-over point of the calibrated dataset curves yields the corrected value (free from plasma related errors) for each sought-for analyte.
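
    The correction step above hinges on finding where the calibrated curve from the original sample crosses the dilution-corrected curve from the diluted sample. Assuming for illustration that each calibrated dataset curve is summarized by a straight-line fit against plasma location, the crossover reduces to intersecting two lines:

```python
def crossover(a1, b1, a2, b2):
    """Intersection of two straight calibrated-result curves
    y = a + b * x (x = observation location in the plasma); the y
    value at the crossing is taken as the analyte result free of
    plasma-related errors."""
    if b1 == b2:
        raise ValueError("curves are parallel; no crossover")
    x = (a2 - a1) / (b1 - b2)
    return x, a1 + b1 * x
```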

  4. Nonlinear normal modes modal interactions and isolated resonance curves

    DOE PAGES

    Kuether, Robert J.; Renson, L.; Detroux, T.; ...

    2015-05-21

    The objective of the present study is to explore the connection between the nonlinear normal modes of an undamped and unforced nonlinear system and the isolated resonance curves that may appear in the damped response of the forced system. To this end, an energy balance technique is used to predict the amplitude of the harmonic forcing that is necessary to excite a specific nonlinear normal mode. A cantilever beam with a nonlinear spring at its tip serves to illustrate the developments. Furthermore, the practical implications of isolated resonance curves are also discussed by computing the beam response to sine sweep excitations of increasing amplitudes.

  5. Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods

    NASA Astrophysics Data System (ADS)

    Perumal, Muthiah; Tayfur, Gokmen; Rao, C. Madhusudana; Gurarslan, Gurhan

    2017-03-01

    Two variants of the Muskingum flood routing method formulated to account for the nonlinearity of the channel routing process are investigated in this study. These variant methods are: (1) the three-parameter conceptual Nonlinear Muskingum (NLM) method advocated by Gill in 1978, and (2) the Variable Parameter McCarthy-Muskingum (VPMM) method recently proposed by Perumal and Price in 2013. The VPMM method does not require rigorous calibration and validation procedures as required in the case of the NLM method, due to established relationships of its parameters with flow and channel characteristics based on hydrodynamic principles. The parameters of the conceptual nonlinear storage equation used in the NLM method were calibrated using Artificial Intelligence Application (AIA) techniques, such as the Genetic Algorithm (GA), Differential Evolution (DE), Particle Swarm Optimization (PSO) and Harmony Search (HS). The calibration was carried out on a given set of hypothetical flood events obtained by routing a given inflow hydrograph in a set of 40 km long prismatic channel reaches using the Saint-Venant (SV) equations. The validation of the calibrated NLM method was investigated using a different set of hypothetical flood hydrographs obtained in the same set of channel reaches used for the calibration studies. Both sets of solutions obtained in the calibration and validation cases using the NLM method were compared with the corresponding solutions of the VPMM method based on some pertinent evaluation measures. The results of the study reveal that the physically based VPMM method is capable of accounting for the nonlinear characteristics of flood wave movement better than the conceptually based NLM method, which requires the use of tedious calibration and validation procedures.
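
    For background, both variants generalize the classical linear Muskingum routing step, which can be sketched with the standard coefficient expressions (the nonlinear storage equation of the NLM method and the variable parameters of VPMM are beyond this sketch):

```python
def muskingum_route(inflow, K, x, dt):
    """Linear Muskingum routing, O2 = C0*I2 + C1*I1 + C2*O1, with
    storage constant K, weighting factor x and time step dt in
    consistent time units. Initial outflow is set to the initial
    inflow (steady-state assumption)."""
    denom = 2.0 * K * (1.0 - x) + dt
    c0 = (dt - 2.0 * K * x) / denom
    c1 = (dt + 2.0 * K * x) / denom
    c2 = (2.0 * K * (1.0 - x) - dt) / denom
    outflow = [inflow[0]]
    for i_prev, i_next in zip(inflow, inflow[1:]):
        outflow.append(c0 * i_next + c1 * i_prev + c2 * outflow[-1])
    return outflow
```

    The three coefficients sum to one, so a steady inflow is routed unchanged; a rising hydrograph is attenuated and lagged.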

  6. Nonlinear bulging factor based on R-curve data

    NASA Technical Reports Server (NTRS)

    Jeong, David Y.; Tong, Pin

    1994-01-01

    In this paper, a nonlinear bulging factor is derived using a strain energy approach combined with dimensional analysis. The functional form of the bulging factor contains an empirical constant that is determined using R-curve data from unstiffened flat and curved panel tests. The determination of this empirical constant is based on the assumption that the R-curve is the same for both flat and curved panels.

  7. Influence of Errors in Tactile Sensors on Some High Level Parameters Used for Manipulation with Robotic Hands.

    PubMed

    Sánchez-Durán, José A; Hidalgo-López, José A; Castellanos-Ramos, Julián; Oballe-Peinado, Óscar; Vidal-Verdú, Fernando

    2015-08-19

    Tactile sensors suffer from many types of interference and error, such as crosstalk, non-linearity, drift, and hysteresis, so calibration should be carried out to compensate for these deviations. However, this procedure is difficult for sensors mounted on artificial hands for robots or prosthetics, for instance, where the sensor usually bends to cover a curved surface. Moreover, the calibration procedure should be repeated often because the correction parameters are easily altered by time and surrounding conditions. Furthermore, this intensive and complex calibration could be less decisive, or at least simpler. This is because manipulation algorithms do not commonly use the whole data set from the tactile image, but only a few parameters such as the moments of the tactile image. These parameters may be less affected by common errors and interferences, or at least their variations could be on the order of those caused by accepted limitations, such as reduced spatial resolution. This paper shows results from experiments that support this idea. The experiments are carried out with a high-performance commercial sensor as well as with a low-cost error-prone sensor built with a common procedure in robotics.
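
    The image moments referred to above are the standard raw moments of the pressure map; a minimal sketch of the zeroth moment (total load) and the centroid derived from the first moments:

```python
def tactile_moments(img):
    """Zeroth moment (total load) and centroid (first moments
    divided by m00) of a tactile image given as a 2-D list of
    pressure readings."""
    m00 = m10 = m01 = 0.0
    for row_idx, row in enumerate(img):
        for col_idx, p in enumerate(row):
            m00 += p
            m10 += col_idx * p
            m01 += row_idx * p
    return m00, m10 / m00, m01 / m00
```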

  8. Nonlinear elasticity in resonance experiments

    NASA Astrophysics Data System (ADS)

    Li, Xun; Sens-Schönfelder, Christoph; Snieder, Roel

    2018-04-01

    Resonant bar experiments have revealed that dynamic deformation induces nonlinearity in rocks. These experiments produce resonance curves that represent the response amplitude as a function of the driving frequency. We propose a model to reproduce the resonance curves with observed features that include (a) the log-time recovery of the resonant frequency after the deformation ends (slow dynamics), (b) the asymmetry in the direction of the driving frequency, (c) the difference between resonance curves with the driving frequency that is swept upward and downward, and (d) the presence of a "cliff" segment to the left of the resonant peak under the condition of strong nonlinearity. The model is based on a feedback cycle where the effect of softening (nonlinearity) feeds back to the deformation. This model provides a unified interpretation of both the nonlinearity and slow dynamics in resonance experiments. We further show that the asymmetry of the resonance curve is caused by the softening, which is documented by the decrease of the resonant frequency during the deformation; the cliff segment of the resonance curve is linked to a bifurcation that involves a steep change of the response amplitude when the driving frequency is changed. With weak nonlinearity, the difference between the upward- and downward-sweeping curves depends on slow dynamics; a sufficiently slow frequency sweep eliminates this up-down difference. With strong nonlinearity, the up-down difference results from both the slow dynamics and bifurcation; however, the presence of the bifurcation maintains the respective part of the up-down difference, regardless of the sweep rate.

  9. Imperfection Sensitivity of Nonlinear Vibration of Curved Single-Walled Carbon Nanotubes Based on Nonlocal Timoshenko Beam Theory

    PubMed Central

    Eshraghi, Iman; Jalali, Seyed K.; Pugno, Nicola Maria

    2016-01-01

    Imperfection sensitivity of large amplitude vibration of curved single-walled carbon nanotubes (SWCNTs) is considered in this study. The SWCNT is modeled as a Timoshenko nano-beam and its curved shape is included as an initial geometric imperfection term in the displacement field. Geometric nonlinearities of von Kármán type and nonlocal elasticity theory of Eringen are employed to derive governing equations of motion. Spatial discretization of governing equations and associated boundary conditions is performed using differential quadrature (DQ) method and the corresponding nonlinear eigenvalue problem is iteratively solved. Effects of amplitude and location of the geometric imperfection, and the nonlocal small-scale parameter on the nonlinear frequency for various boundary conditions are investigated. The results show that the geometric imperfection and non-locality play a significant role in the nonlinear vibration characteristics of curved SWCNTs. PMID:28773911

  10. SU-C-204-02: Improved Patient-Specific Optimization of the Stopping Power Calibration for Proton Therapy Planning Using a Single Proton Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rinaldi, I; Ludwig Maximilian University, Garching, DE; Heidelberg University Hospital, Heidelberg, DE

    2015-06-15

    Purpose: We present an improved method to calculate patient-specific calibration curves to convert X-ray computed tomography (CT) Hounsfield Unit (HU) to relative stopping powers (RSP) for proton therapy treatment planning. Methods: By optimizing the HU-RSP calibration curve, the difference between a proton radiographic image and a digitally reconstructed X-ray radiography (DRR) is minimized. The feasibility of this approach has previously been demonstrated. This scenario assumes that all discrepancies between proton radiography and DRR originate from uncertainties in the HU-RSP curve. In reality, external factors cause imperfections in the proton radiography, such as misalignment compared to the DRR and unfaithful representation of geometric structures ("blurring"). We analyze these effects based on synthetic datasets of anthropomorphic phantoms and suggest an extended optimization scheme which explicitly accounts for these effects. Performance of the method has been tested for various simulated irradiation parameters. The ultimate purpose of the optimization is to minimize uncertainties in the HU-RSP calibration curve. We therefore suggest and perform a thorough statistical treatment to quantify the accuracy of the optimized HU-RSP curve. Results: We demonstrate that without extending the optimization scheme, spatial blurring (equivalent to FWHM = 3 mm convolution) in the proton radiographies can cause up to 10% deviation between the optimized and the ground truth HU-RSP calibration curve. In contrast, results obtained with our extended method reach 1% or better correspondence. We have further calculated gamma index maps for different acceptance levels. With DTA=0.5mm and RD=0.5%, a passing ratio of 100% is obtained with the extended method, while an optimization neglecting effects of spatial blurring reaches only ∼90%.
Conclusion: Our contribution underlines the potential of a single proton radiography to generate a patient-specific calibration curve and to improve dose delivery by optimizing the HU-RSP calibration curve, as long as all sources of systematic incongruence are properly modeled.

  11. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). 
To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
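The core step, tabulating the system response curve (measured signal versus known input brightness) and inverting it frame by frame, can be sketched as follows. This is a minimal illustration assuming a monotonic response; the helper names and the square-root-like demo response are assumptions, not the actual NASA software.

```python
import math

def build_response_curve(brightness, signal):
    """Tabulated system response (measured signal vs. known input brightness)
    from the artificial-variable-star frames, sorted by brightness."""
    pairs = sorted(zip(brightness, signal))
    return [p[0] for p in pairs], [p[1] for p in pairs]

def invert_response(sig, b_tab, s_tab):
    """Recover input brightness from a measured signal by piecewise-linear
    interpolation of the (nonlinear, assumed monotonic) response curve."""
    if sig <= s_tab[0]:
        return b_tab[0]
    for i in range(1, len(s_tab)):
        if sig <= s_tab[i]:
            t = (sig - s_tab[i - 1]) / (s_tab[i] - s_tab[i - 1])
            return b_tab[i - 1] + t * (b_tab[i] - b_tab[i - 1])
    return b_tab[-1]

# demo: a saturating, square-root-like nonlinear response
b_tab, s_tab = build_response_curve(list(range(11)),
                                    [math.sqrt(v) for v in range(11)])
```

Inverting the tabulated curve then recovers the input brightness of an analyzed frame to within the interpolation error of the table.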

  12. Dried blood spot analysis of creatinine with LC-MS/MS in addition to immunosuppressants analysis.

    PubMed

    Koster, Remco A; Greijdanus, Ben; Alffenaar, Jan-Willem C; Touw, Daan J

    2015-02-01

    In order to monitor creatinine levels or to adjust the dosage of renally excreted or nephrotoxic drugs, the analysis of creatinine in dried blood spots (DBS) could be a useful addition to DBS analysis. We developed an LC-MS/MS method for the analysis of creatinine in the same DBS extract that was used for the analysis of tacrolimus, sirolimus, everolimus, and cyclosporine A in transplant patients with the use of Whatman FTA DMPK-C cards. The method was validated using three different strategies: a seven-point calibration curve using the intercept of the calibration to correct for the natural presence of creatinine in reference samples, a one-point calibration curve at an extremely high concentration in order to diminish the contribution of the natural presence of creatinine, and the use of creatinine-[(2)H3] with an eight-point calibration curve. The validated range for creatinine was 120 to 480 μmol/L (seven-point calibration curve), 116 to 7000 μmol/L (one-point calibration curve), and 1.00 to 400.0 μmol/L for creatinine-[(2)H3] (eight-point calibration curve). The precision and accuracy results for all three validations showed a maximum CV of 14.0% and a maximum bias of -5.9%. Creatinine in DBS was found to be stable at ambient temperature and at 32 °C for 1 week, and at -20 °C for 29 weeks. Good correlations were observed between patient DBS samples and routine enzymatic plasma analysis, demonstrating the capability of the DBS method to serve as an alternative to creatinine plasma measurement.
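The first validation strategy, using the calibration intercept to correct for the creatinine naturally present in the reference samples, can be sketched as follows. The numbers and function names are illustrative assumptions, not values from the paper: if responses follow slope × (added + endogenous), the fitted intercept equals slope × endogenous.

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def endogenous_calibration(added_conc, response):
    """Calibration for an endogenous analyte: the intercept reflects the
    analyte already present in the reference matrix, so the endogenous level
    is intercept/slope and unknowns are quantified as response/slope."""
    slope, intercept = fit_line(added_conc, response)
    endogenous = intercept / slope
    return endogenous, lambda resp: resp / slope

# synthetic check: 50 umol/L endogenous background, detector slope 2
added = [0.0, 80.0, 160.0, 240.0, 320.0, 400.0, 480.0]
endog, quantify = endogenous_calibration(added, [2.0 * (a + 50.0) for a in added])
```

A seven-point design as in the abstract gives enough points to estimate the intercept reliably; a one-point curve at a very high concentration instead makes the endogenous contribution negligible.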

  13. Calibration of thermocouple psychrometers and moisture measurements in porous materials

    NASA Astrophysics Data System (ADS)

    Guz, Łukasz; Sobczuk, Henryk; Połednik, Bernard; Guz, Ewa

    2016-07-01

    The paper presents an in situ calibration method for Peltier psychrometric sensors that allows the water potential to be determined. Water potential can easily be recalculated into the moisture content of a porous material. In order to obtain correct water potential results, each probe should be calibrated. NaCl solutions with molar concentrations of 0.4 M, 0.7 M, 1.0 M, and 1.4 M were used for calibration, yielding osmotic potentials in the range of -1791 kPa to -6487 kPa. Traditionally, the voltage generated on the thermocouples during wet-bulb temperature depression is used to determine the calibration function for in situ psychrometric sensors. In the new calibration method, the area under the psychrometric curve, together with the Peltier cooling current and its duration, is taken into consideration. During calibration, different cooling currents (3, 5, and 8 mA) were applied for each salt solution, as well as different cooling durations for each current (from 2 to 100 s in 2 s steps). Afterwards, the shape of each psychrometric curve was thoroughly examined and the area under it was computed. The results of the experiment indicate a robust correlation between the area under the psychrometric curve and the water potential. Calibration formulas were derived on the basis of these features.
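The key quantity, the area ("field") under the psychrometric voltage-time curve, and a calibration of water potential against it can be sketched as follows. The linear calibration form and all numbers are illustrative assumptions, not the paper's fitted formulas.

```python
def curve_area(times, voltages):
    """Trapezoidal area under the psychrometric (voltage vs. time) curve."""
    return sum(0.5 * (voltages[i] + voltages[i + 1]) * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

def fit_potential_vs_area(areas, potentials):
    """Least-squares line mapping curve area to water potential (kPa),
    calibrated on the salt-solution reference points."""
    n = len(areas)
    ma, mp = sum(areas) / n, sum(potentials) / n
    slope = (sum((a - ma) * (p - mp) for a, p in zip(areas, potentials))
             / sum((a - ma) ** 2 for a in areas))
    intercept = mp - slope * ma
    return lambda area: slope * area + intercept
```

In use, each NaCl solution of known osmotic potential supplies one (area, potential) pair, and the fitted line converts areas measured in the porous material into water potentials.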

  14. 40 CFR 89.323 - NDIR analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...

  15. 40 CFR 89.323 - NDIR analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...

  16. 40 CFR 89.323 - NDIR analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...

  17. 40 CFR 89.323 - NDIR analyzer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...

  18. Quantitative spatial distribution of sirolimus and polymers in drug-eluting stents using confocal Raman microscopy.

    PubMed

    Balss, K M; Llanos, G; Papandreou, G; Maryanoff, C A

    2008-04-01

    Raman spectroscopy was used to differentiate each component found in the CYPHER Sirolimus-eluting Coronary Stent. The unique spectral features identified for each component were then used to develop three separate calibration curves to describe the solid phase distribution found on drug-polymer coated stents. The calibration curves were obtained by analyzing confocal Raman spectral depth profiles from a set of 16 unique formulations of drug-polymer coatings sprayed onto stents and planar substrates. The sirolimus model was linear from 0 to 100 wt % of drug. The individual polymer calibration curves for poly(ethylene-co-vinyl acetate) [PEVA] and poly(n-butyl methacrylate) [PBMA] were also linear from 0 to 100 wt %. The calibration curves were tested on three independent drug-polymer coated stents. The sirolimus calibration predicted the drug content within 1 wt % of the laboratory assay value. The polymer calibrations predicted the content within 7 wt % of the formulation solution content. Attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectra from five formulations confirmed a linear response to changes in sirolimus and polymer content.
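The idea of turning component-specific spectral features into composition estimates can be illustrated with a classical least-squares sketch. This is not the paper's calibration procedure: it assumes, for illustration, that each measured spectrum is a linear combination of pure-component reference spectra (e.g. drug plus two polymers), and the synthetic "spectra" below are made up.

```python
def unmix_wt_percent(spectrum, refs):
    """Express a measured spectrum as a least-squares linear combination of
    pure-component reference spectra, then convert the weights to wt %."""
    k, m = len(refs), len(spectrum)
    # normal equations (A^T A) w = A^T b
    ata = [[sum(refs[i][p] * refs[j][p] for p in range(m)) for j in range(k)]
           for i in range(k)]
    atb = [sum(refs[i][p] * spectrum[p] for p in range(m)) for i in range(k)]
    for col in range(k):                      # Gaussian elimination, partial pivoting
        piv = max(range(col, k), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, k):
            f = ata[r][col] / ata[col][col]
            for c in range(col, k):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    w = [0.0] * k
    for r in range(k - 1, -1, -1):            # back-substitution
        w[r] = (atb[r] - sum(ata[r][c] * w[c] for c in range(r + 1, k))) / ata[r][r]
    total = sum(w)
    return [100.0 * wi / total for wi in w]
```

With three references and a known mixture, the recovered weights reproduce the formulation composition exactly in this noise-free sketch; real calibration curves absorb instrument response and matrix effects as well.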

  19. The Value of Hydrograph Partitioning Curves for Calibrating Hydrological Models in Glacierized Basins

    NASA Astrophysics Data System (ADS)

    He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno

    2018-03-01

    This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas are used to identify the starting and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparably to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where other calibration data sets than discharge are often not available or very costly to obtain.

  20. Forming limit strains for non-linear strain path of AA6014 aluminium sheet deformed at room temperature

    NASA Astrophysics Data System (ADS)

    Bressan, José Divo; Liewald, Mathias; Drotleff, Klaus

    2017-10-01

    Forming limit strain curves of conventional aluminium alloy AA6014 sheets after loading along non-linear strain paths are presented and compared with the D-Bressan macroscopic model of sheet metal rupture by a critical shear stress criterion. AA6014 exhibits good formability at room temperature and is therefore mainly employed in car body external parts manufactured at room temperature. Following Weber et al., experimental bi-linear strain paths were realised in specimens of 1 mm thickness by pre-stretching in the uniaxial and biaxial directions up to strain levels of 5%, 10% and 20% before performing Nakajima tests to obtain the forming limit strain curves (FLCs). In addition, FLCs of AA6014 were predicted by employing the D-Bressan critical shear stress criterion for bi-linear strain paths, and comparisons with the experimental FLCs were analyzed and discussed. In order to obtain the material coefficients of plastic anisotropy and of strain and strain-rate hardening, and to calibrate the D-Bressan model, tensile tests at two different strain rates on specimens cut at 0°, 45° and 90° to the rolling direction, as well as bulge tests, were carried out at room temperature. The correlation of the experimental bi-linear strain path FLCs with the limit strains predicted by the D-Bressan model is reasonably good, assuming equivalent pre-strain calculated by the Hill 1979 yield criterion.

  1. Numerical simulations of flow fields through conventionally controlled wind turbines & wind farms

    NASA Astrophysics Data System (ADS)

    Emre Yilmaz, Ali; Meyers, Johan

    2014-06-01

    In the current study, an Actuator-Line Model (ALM) is implemented in our in-house pseudo-spectral LES solver SP-WIND, including a turbine controller. Below rated wind speed, turbines are controlled by a standard-torque-controller aiming at maximum power extraction from the wind. Above rated wind speed, the extracted power is limited by a blade pitch controller which is based on a proportional-integral type control algorithm. This model is used to perform a series of single turbine and wind farm simulations using the NREL 5MW turbine. First of all, we focus on below-rated wind speed, and investigate the effect of the farm layout on the controller calibration curves. These calibration curves are expressed in terms of nondimensional torque and rotational speed, using the mean turbine-disk velocity as reference. We show that this normalization leads to calibration curves that are independent of wind speed, but the calibration curves do depend on the farm layout, in particular for tightly spaced farms. Compared to turbines in a lone-standing set-up, turbines in a farm experience a different wind distribution over the rotor due to the farm boundary-layer interaction. We demonstrate this for fully developed wind-farm boundary layers with aligned turbine arrangements at different spacings (5D, 7D, 9D). Further we also compare calibration curves obtained from full farm simulations with calibration curves that can be obtained at a much lower cost using a minimal flow unit.

  2. Multi-parameter Nonlinear Gain Correction of X-ray Transition Edge Sensors for the X-ray Integral Field Unit

    NASA Astrophysics Data System (ADS)

    Cucchetti, E.; Eckart, M. E.; Peille, P.; Porter, F. S.; Pajot, F.; Pointecouteau, E.

    2018-04-01

    With its array of 3840 Transition Edge Sensors (TESs), the Athena X-ray Integral Field Unit (X-IFU) will provide spatially resolved high-resolution spectroscopy (2.5 eV up to 7 keV) from 0.2 to 12 keV, with an absolute energy scale accuracy of 0.4 eV. Slight changes in the TES operating environment can cause significant variations in its energy response function, which may result in systematic errors in the absolute energy scale. We plan to monitor such changes at pixel level via onboard X-ray calibration sources and correct the energy scale accordingly using a linear or quadratic interpolation of gain curves obtained during ground calibration. However, this may not be sufficient to meet the 0.4 eV accuracy required for the X-IFU. In this contribution, we introduce a new two-parameter gain correction technique, based on both the pulse-height estimate of a fiducial line and the baseline value of the pixels. Using gain functions that simulate ground calibration data, we show that this technique can accurately correct deviations in detector gain due to changes in TES operating conditions such as heat sink temperature, bias voltage, thermal radiation loading and linear amplifier gain. We also address potential optimisations of the onboard calibration source and compare the performance of this new technique with those previously used.

  3. Limits to lichenometry

    NASA Astrophysics Data System (ADS)

    Rosenwinkel, Swenja; Korup, Oliver; Landgraf, Angela; Dzhumabaeva, Atyrgul

    2015-12-01

    Lichenometry is a straightforward and inexpensive method for dating Holocene rock surfaces. The rationale is that the diameter of the largest lichen scales with the age of the originally fresh rock surface that it colonised. The success of the method depends on finding the largest lichen diameters, a suitable lichen-growth model, and a robust calibration curve. Recent critique of the method motivates us to revisit the accuracy and uncertainties of lichenometry. Specifically, we test how well lichenometry is capable of resolving the ages of different lobes of large active rock glaciers in the Kyrgyz Tien Shan. We use a bootstrapped quantile regression to calibrate local growth curves of Xanthoria elegans, Aspicilia tianshanica, and Rhizocarpon geographicum, and report a nonlinear decrease in dating accuracy with increasing lichen diameter. A Bayesian type of analysis of variance demonstrates that our calibration allows discriminating credibly between rock-glacier lobes of different ages, despite the uncertainties tied to sample size and to correctly identifying the largest lichen thalli. Our results also show that calibration error grows with lichen size, so that the separability of rock-glacier lobes of different ages decreases, while the tendency to assign coeval ages increases. The abundant young (<200 yr) specimens of fast-growing X. elegans contrast with the fewer, slower-growing but older (200-1500 yr) R. geographicum and A. tianshanica, and record either a regional reactivation of lobes in the past 200 years, or simply a censoring effect of lichen mortality during early phases of colonisation. The high variance of lichen sizes captures the activity of rock-glacier lobes, which is difficult to explain by regional climatic cooling or earthquake triggers alone. Therefore, we caution against inferring palaeoclimatic conditions from the topographic position of rock-glacier lobes. 
We conclude that lichenometry works better as a tool for establishing a relative, rather than an absolute, chronology of rock-glacier lobes in the northern Tien Shan.
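The bootstrap part of such a calibration can be sketched as follows. For brevity this sketch fits an ordinary least-squares growth line rather than the quantile regression used in the study, and all data are synthetic; it illustrates how resampling yields age confidence intervals that widen with lichen size, the effect the authors describe.

```python
import random

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def bootstrap_age_ci(diams, ages, d_query, n_boot=1000, seed=42):
    """Percentile-bootstrap 95% interval for the age predicted at a given
    lichen diameter from a (diameter, age) calibration data set."""
    rng = random.Random(seed)
    n, preds = len(diams), []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        slope, icpt = fit_line([diams[i] for i in idx], [ages[i] for i in idx])
        preds.append(icpt + slope * d_query)
    preds.sort()
    return preds[int(0.025 * n_boot)], preds[int(0.975 * n_boot)]
```

Comparing intervals computed for two lobes (or two diameters) then gives a direct, if crude, separability test of the kind discussed above.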

  4. Hydrophone area-averaging correction factors in nonlinearly generated ultrasonic beams

    NASA Astrophysics Data System (ADS)

    Cooling, M. P.; Humphrey, V. F.; Wilkens, V.

    2011-02-01

    The nonlinear propagation of an ultrasonic wave can be used to produce a wavefield rich in higher frequency components that is ideally suited to the calibration, or inter-calibration, of hydrophones. These techniques usually use a tone-burst signal, limiting the measurements to harmonics of the fundamental calibration frequency. Alternatively, using a short pulse enables calibration at a continuous spectrum of frequencies. Such a technique is used at PTB in conjunction with an optical measurement technique to calibrate devices. Experimental findings indicate that the area-averaging correction factor for a hydrophone in such a field demonstrates a complex behaviour, most notably varying periodically between frequencies that are harmonics of the centre frequency of the original pulse and frequencies that lie midway between these harmonics. The beam characteristics of such nonlinearly generated fields have been investigated using a finite difference solution to the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation for a focused field. The simulation results are used to calculate the hydrophone area-averaging correction factors for 0.2 mm and 0.5 mm devices. The results clearly demonstrate a number of significant features observed in the experimental investigations, including the variation with frequency, drive level and hydrophone element size. An explanation for these effects is also proposed.

  5. Comment on "Radiocarbon Calibration Curve Spanning 0 to 50,000 Years B.P. Based on Paired 230Th/234U/238U and 14C Dates on Pristine Corals" by R.G. Fairbanks, R. A. Mortlock, T.-C. Chiu, L. Cao, A. Kaplan, T. P. Guilderson, T. W. Fairbanks, A. L. Bloom, P

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reimer, P J; Baillie, M L; Bard, E

    2005-10-02

    Radiocarbon calibration curves are essential for converting radiocarbon dated chronologies to the calendar timescale. Prior to the 1980s numerous differently derived calibration curves based on radiocarbon ages of known age material were in use, resulting in ''apples and oranges'' comparisons between various records (Klein et al., 1982), further complicated by until then unappreciated inter-laboratory variations (International Study Group, 1982). The solution was to produce an internationally-agreed calibration curve based on carefully screened data with updates at 4-6 year intervals (Klein et al., 1982; Stuiver and Reimer, 1986; Stuiver and Reimer, 1993; Stuiver et al., 1998). The IntCal working group has continued this tradition with the active participation of researchers who produced the records that were considered for incorporation into the current, internationally-ratified calibration curves, IntCal04, SHCal04, and Marine04, for Northern Hemisphere terrestrial, Southern Hemisphere terrestrial, and marine samples, respectively (Reimer et al., 2004; Hughen et al., 2004; McCormac et al., 2004). Fairbanks et al. (2005), accompanied by a more technical paper, Chiu et al. (2005), and an introductory comment, Adkins (2005), recently published a ''calibration curve spanning 0-50,000 years''. Fairbanks et al. (2005) and Chiu et al. (2005) have made a significant contribution to the database on which the IntCal04 and Marine04 calibration curves are based. These authors have now taken the further step to derive their own radiocarbon calibration extending to 50,000 cal BP, which they claim is superior to that generated by the IntCal working group. In their papers, these authors are strongly critical of the IntCal calibration efforts for what they claim to be inadequate screening and sample pretreatment methods. 
While these criticisms may ultimately be helpful in identifying a better set of protocols, we feel that there are also several erroneous and misleading statements made by these authors which require a response by the IntCal working group. Furthermore, we would like to comment on the sample selection criteria, pretreatment methods, and statistical methods utilized by Fairbanks et al. in derivation of their own radiocarbon calibration.

  6. Curved Displacement Transfer Functions for Geometric Nonlinear Large Deformation Structure Shape Predictions

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Fleischer, Van Tran; Lung, Shun-Fat

    2017-01-01

    For shape predictions of structures under large geometrically nonlinear deformations, Curved Displacement Transfer Functions were formulated based on a curved displacement, traced by a material point from the undeformed position to the deformed position. The embedded beam (depth-wise cross section of a structure along a surface strain-sensing line) was discretized into multiple small domains, with domain junctures matching the strain-sensing stations. Thus, the surface strain distribution could be described with a piecewise linear or a piecewise nonlinear function. The discretization approach enabled piecewise integrations of the embedded-beam curvature equations to yield the Curved Displacement Transfer Functions, expressed in terms of embedded-beam geometrical parameters and surface strains. By entering the surface strain data into the Displacement Transfer Functions, deflections along each embedded beam can be calculated at multiple points for mapping the overall structural deformed shapes. Finite-element linear and nonlinear analyses of a tapered cantilever tubular beam were performed to generate linear and nonlinear surface strains and the associated deflections to be used for validation. The shape prediction accuracies were then determined by comparing the theoretical deflections with the finite-element-generated deflections. The results show that the newly developed Curved Displacement Transfer Functions are very accurate for shape predictions of structures under large geometrically nonlinear deformations.
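The core idea, piecewise integration of strain-derived curvature to recover deflections, can be sketched in its simplest small-deflection linear form; the curved transfer functions of the paper add geometric-nonlinear corrections on top of this. The station locations, half-depths, and strain values below are illustrative assumptions.

```python
def deflections(x, strain, half_depth):
    """Double trapezoidal integration of the bending curvature
    kappa = strain / c along the strain-sensing line of an embedded beam,
    with clamped-root conditions (zero slope and deflection at x[0])."""
    kappa = [e / c for e, c in zip(strain, half_depth)]
    slope = [0.0]
    for i in range(1, len(x)):
        slope.append(slope[-1] + 0.5 * (kappa[i - 1] + kappa[i]) * (x[i] - x[i - 1]))
    defl = [0.0]
    for i in range(1, len(x)):
        defl.append(defl[-1] + 0.5 * (slope[i - 1] + slope[i]) * (x[i] - x[i - 1]))
    return defl
```

For a uniform curvature the scheme reproduces the analytic parabola kappa·x²/2 exactly, since trapezoidal integration is exact for linear integrands.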

  7. Quantitative methods for compensation of matrix effects and self-absorption in Laser Induced Breakdown Spectroscopy signals of solids

    NASA Astrophysics Data System (ADS)

    Takahashi, Tomoko; Thornton, Blair

    2017-12-01

    This paper reviews methods to compensate for matrix effects and self-absorption during quantitative analysis of compositions of solids measured using Laser Induced Breakdown Spectroscopy (LIBS) and their applications to in-situ analysis. Methods to reduce matrix and self-absorption effects on calibration curves are first introduced. The conditions where calibration curves are applicable to quantification of compositions of solid samples and their limitations are discussed. While calibration-free LIBS (CF-LIBS), which corrects matrix effects theoretically based on the Boltzmann distribution law and Saha equation, has been applied in a number of studies, requirements need to be satisfied for the calculation of chemical compositions to be valid. Also, peaks of all elements contained in the target need to be detected, which is a bottleneck for in-situ analysis of unknown materials. Multivariate analysis techniques are gaining momentum in LIBS analysis. Among the available techniques, principal component regression (PCR) analysis and partial least squares (PLS) regression analysis, which can extract related information to compositions from all spectral data, are widely established methods and have been applied to various fields including in-situ applications in air and for planetary explorations. Artificial neural networks (ANNs), where non-linear effects can be modelled, have also been investigated as a quantitative method and their applications are introduced. The ability to make quantitative estimates based on LIBS signals is seen as a key element for the technique to gain wider acceptance as an analytical method, especially in in-situ applications. In order to accelerate this process, it is recommended that the accuracy should be described using common figures of merit which express the overall normalised accuracy, such as the normalised root mean square errors (NRMSEs), when comparing the accuracy obtained from different setups and analytical methods.
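The recommended figure of merit can be sketched as follows. Normalisation conventions for the NRMSE vary (range, mean, or reference maximum); this sketch normalises by the range of the reference values, one common choice, and is an illustration rather than the review's specific definition.

```python
def nrmse(predicted, reference):
    """Normalised root-mean-square error between predicted and reference
    compositions, normalised here by the range of the reference values."""
    n = len(reference)
    rmse = (sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n) ** 0.5
    return rmse / (max(reference) - min(reference))
```

Reporting a figure like this alongside calibration-curve, CF-LIBS, or multivariate results makes accuracies obtained from different setups directly comparable.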

  8. Converting HAZUS capacity curves to seismic hazard-compatible building fragility functions: effect of hysteretic models

    USGS Publications Warehouse

    Ryu, Hyeuk; Luco, Nicolas; Baker, Jack W.; Karaca, Erdem

    2008-01-01

    A methodology was recently proposed for the development of hazard-compatible building fragility models using parameters of capacity curves and damage state thresholds from HAZUS (Karaca and Luco, 2008). In that methodology, HAZUS curvilinear capacity curves were used to define nonlinear dynamic SDOF models that were subjected to nonlinear time history analysis rather than the capacity spectrum method. In this study, we construct a multilinear capacity curve with negative stiffness after an ultimate (capping) point for nonlinear time history analysis, as an alternative to the curvilinear model provided in HAZUS. As an illustration, here we propose parameter values of the multilinear capacity curve for a moderate-code low-rise steel moment resisting frame building (labeled S1L in HAZUS). To determine the final parameter values, we perform nonlinear time history analyses of SDOF systems with various parameter values and investigate their effects on resulting fragility functions through sensitivity analysis. The findings improve capacity curves and thereby fragility and/or vulnerability models for generic types of structures.
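The proposed backbone, an elastic branch, a hardening branch up to the ultimate (capping) point, then negative stiffness down to a residual plateau, can be sketched as a force-displacement function. All parameter names, and the residual plateau itself, are illustrative assumptions, not the values proposed for the HAZUS S1L type.

```python
def multilinear_force(d, d_yield, f_yield, d_cap, f_cap, k_neg, f_res=0.0):
    """Multilinear capacity curve: linear-elastic up to yield, hardening up
    to the capping point, then a negative-stiffness branch (k_neg < 0)
    truncated at a residual strength plateau f_res."""
    if d <= d_yield:
        return f_yield * d / d_yield                      # elastic branch
    if d <= d_cap:
        return f_yield + (f_cap - f_yield) * (d - d_yield) / (d_cap - d_yield)
    return max(f_cap + k_neg * (d - d_cap), f_res)        # post-capping branch
```

A backbone like this, evaluated inside a hysteretic rule, is what the SDOF time history analyses of the study exercise.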

  9. Calibration of the advanced microwave sounding unit-A for NOAA-K

    NASA Technical Reports Server (NTRS)

    Mo, Tsan

    1995-01-01

    The thermal-vacuum chamber calibration data from the Advanced Microwave Sounding Unit-A (AMSU-A) for NOAA-K, which will be launched in 1996, were analyzed to evaluate the instrument performance, including calibration accuracy, nonlinearity, and temperature sensitivity. The AMSU-A on NOAA-K consists of the AMSU-A2 Protoflight Model and the AMSU-A1 Flight Model 1. The results show that both models meet the instrument specifications, except the AMSU-A1 antenna beamwidths, which exceed the requirement of 3.3 +/- 10%. We also studied the instrument's radiometric characterizations, which will be incorporated into the operational calibration algorithm for processing the in-orbit AMSU-A data from space. In particular, the nonlinearity parameters which will be used for correcting the nonlinear contributions from an imperfect square-law detector were determined from this data analysis. It was found that the calibration accuracies (differences between the measured scene radiances and those calculated from a linear two-point calibration formula) are polarization-dependent. Channels with vertical polarization show little cold bias at the lowest scene target temperature of 84K, while those with horizontal polarization all have appreciable cold biases, which can be up to 0.6K. It is unknown where these polarization-dependent cold biases originate, but it is suspected that hot radiance from chamber contamination leaked into the cold scene target area. Further investigation into this matter is required. The existence and magnitude of nonlinearity in each channel were established, and a quadratic formula for modeling these nonlinear contributions was developed. The model is characterized by a single parameter u, values of which were obtained for each channel via least-squares fits to the data. Using the best-fit u values, we performed a series of simulations of the quadratic corrections which would be expected from the space data after the launch of AMSU-A on NOAA-K.
In these simulations, the cosmic background radiance corresponding to a cold space temperature 2.73K was adopted as one of the two reference points of calibration. The largest simulated nonlinear correction is about 0.3K, which occurs at channel 15 when the instrument temperature is at 38.09 deg C. Others are less than 0.2K in the remaining channels. Possible improvement for future instrument calibration is also discussed.

  10. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; and (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load, calculated by the flow-duration, rating-curve method, that are more accurate and precise than those obtained for the non-linear model.
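    The bias-corrected, transformed-linear rating-curve fit discussed above can be sketched as follows. The power-law coefficients and scatter level are invented, and the parametric exp(s²/2) back-transformation factor shown is one common bias correction for lognormal residuals, used here as an illustrative assumption rather than as this study's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic discharge (Q) and suspended-sediment concentration (C) obeying
# a power-law rating curve C = a * Q**b with lognormal scatter.
a_true, b_true, sigma = 0.5, 1.5, 0.3
Q = rng.uniform(5.0, 500.0, 200)
C = a_true * Q ** b_true * np.exp(sigma * rng.standard_normal(Q.size))

# Transformed-linear fit: ln(C) = ln(a) + b*ln(Q), by ordinary least squares.
A = np.column_stack([np.ones(Q.size), np.log(Q)])
coef, *_ = np.linalg.lstsq(A, np.log(C), rcond=None)
ln_a, b_hat = coef
resid = np.log(C) - A @ coef
s2 = resid @ resid / (Q.size - 2)        # residual variance in log space

# A naive back-transform underestimates the mean concentration; the
# parametric bias-correction factor exp(s^2 / 2) compensates for that.
C_naive = np.exp(ln_a) * Q ** b_hat
C_corrected = C_naive * np.exp(s2 / 2.0)
```

    The corrected predictions are then combined with a flow-duration curve to estimate mean load, as in the rating-curve method the abstract describes.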

  11. Isogeometric analysis of free-form Timoshenko curved beams including the nonlinear effects of large deformations

    NASA Astrophysics Data System (ADS)

    Hosseini, Seyed Farhad; Hashemian, Ali; Moetakef-Imani, Behnam; Hadidimoud, Saied

    2018-03-01

    In the present paper, the isogeometric analysis (IGA) of free-form planar curved beams is formulated based on the nonlinear Timoshenko beam theory to investigate the large deformation of beams with variable curvature. Based on the isoparametric concept, the shape functions of the field variables (displacement and rotation) in a finite element analysis are considered to be the same as the non-uniform rational basis spline (NURBS) basis functions defining the geometry. The validity of the presented formulation is tested in five case studies covering a wide range of curved engineering structures, from straight and constant-curvature beams to variable-curvature ones. The nonlinear deformation results obtained by the presented method are compared to well-established benchmark examples and also to the results of linear and nonlinear finite element analyses. As the nonlinear load-deflection behavior of Timoshenko beams is the main topic of this article, the results strongly show the applicability of the IGA method to the large deformation analysis of free-form curved beams. Finally, it is worth noting that, until very recently, the large deformation analysis of free-form Timoshenko curved beams had not been considered in IGA by researchers.

  12. Non-linear Growth Models in Mplus and SAS

    PubMed Central

    Grimm, Kevin J.; Ram, Nilam

    2013-01-01

    Non-linear growth curves or growth curves that follow a specified non-linear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
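    A minimal example of fitting one of the sigmoid functions named above (the Gompertz curve) by nonlinear least squares. It uses SciPy's curve_fit on synthetic data rather than Mplus or SAS NLMIXED, and all parameter values and starting values are invented; a mixed-effects fit would additionally place random effects on these parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

# Gompertz growth: y = asymptote * exp(-b * exp(-c * t)), where the
# asymptote, displacement b, and rate c are directly interpretable.
def gompertz(t, asym, b, c):
    return asym * np.exp(-b * np.exp(-c * t))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 50)
y = gompertz(t, 100.0, 3.0, 0.6) + rng.normal(0.0, 1.0, t.size)

# Nonlinear least squares, analogous to the fixed-effects part of a
# NLMIXED-style fit; rough starting values help convergence.
p0 = [90.0, 2.0, 0.5]
popt, pcov = curve_fit(gompertz, t, y, p0=p0)
asym_hat, b_hat, c_hat = popt
```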

  13. Construction of dose response calibration curves for dicentrics and micronuclei for X radiation in a Serbian population.

    PubMed

    Pajic, J; Rakic, B; Jovicic, D; Milovanovic, A

    2014-10-01

    Biological dosimetry using chromosome damage biomarkers is a valuable dose assessment method in cases of radiation overexposure with or without physical dosimetry data. In order to estimate dose by biodosimetry, any biological dosimetry service has to have its own dose response calibration curve. This paper presents the results obtained after irradiation of blood samples from fourteen healthy male and female volunteers, carried out in order to establish biodosimetry in Serbia and produce dose response calibration curves for dicentrics and micronuclei. Taking into account pooled data from all the donors, the resultant fitted curve for dicentrics is: Ydic = 0.0009 (±0.0003) + 0.0421 (±0.0042)×D + 0.0602 (±0.0022)×D²; and for micronuclei: Ymn = 0.0104 (±0.0015) + 0.0824 (±0.0050)×D + 0.0189 (±0.0017)×D². Following establishment of the dose response curves, a validation experiment was carried out with four blood samples. Applied and estimated doses were in good agreement. On this basis, the results reported here give us confidence to apply both calibration curves to future biological dosimetry requirements in Serbia. Copyright © 2014 Elsevier B.V. All rights reserved.
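    The linear-quadratic dose-response form reported above can be inverted to estimate dose from an observed aberration yield. The sketch below uses the fitted dicentric coefficients quoted in the abstract; inverting via the quadratic formula is the standard step in biodosimetry, not something specific to this paper, and uncertainty propagation is omitted.

```python
import math

# Dicentric dose-response coefficients from the fitted curve in the text:
# Y = c + alpha*D + beta*D^2
c, alpha, beta = 0.0009, 0.0421, 0.0602

def yield_at(dose):
    """Expected dicentric yield at a given dose (Gy assumed)."""
    return c + alpha * dose + beta * dose ** 2

def dose_from_yield(y):
    """Solve beta*D^2 + alpha*D + (c - y) = 0 for the positive root."""
    disc = alpha ** 2 - 4.0 * beta * (c - y)
    return (-alpha + math.sqrt(disc)) / (2.0 * beta)

# Round-trip check: a 2 Gy exposure and back.
y2 = yield_at(2.0)
d_est = dose_from_yield(y2)
```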

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devic, Slobodan; Tomic, Nada; Aldelaijan, Saad

    Purpose: Despite the numerous advantages of radiochromic film dosimeters (high spatial resolution, near tissue equivalence, low energy dependence), to measure a relative dose distribution with film one needs to first measure an absolute dose (following a previously established reference dosimetry protocol) and then convert the measured absolute dose values into relative doses. In this work, we present the results of our efforts to obtain a functional form that linearizes the inherently nonlinear dose-response curve of the radiochromic film dosimetry system. Methods: The functional form [ζ = −netOD^(2/3)/ln(netOD)] was derived from calibration curves of various previously established radiochromic film dosimetry systems. In order to test the invariance of the proposed functional form with respect to the film model used, we tested it with three different GAFCHROMIC™ film models (EBT, EBT2, and EBT3) irradiated to various doses and scanned on the same scanner. For one of the film models (EBT2), we tested the invariance of the functional form to the scanner model used by scanning irradiated film pieces with three different flatbed scanner models (Epson V700, 1680, and 10000XL). To test our hypothesis that the proposed functional argument linearizes the response of the radiochromic film dosimetry system, verification tests were performed in clinical applications: percent depth dose measurements, IMRT quality assurance (QA), and brachytherapy QA. Results: The obtained R² values indicate that the chosen functional form of the new argument appropriately linearizes the dose response of the radiochromic film dosimetry system we used. The linear behavior was insensitive to both the film model and the flatbed scanner model used. Measured PDD values using the green channel response of the GAFCHROMIC™ EBT3 film model are well within a ±2% window of the local relative dose value when compared to the tabulated Cobalt-60 data. It was also found that criteria of 3%/3 mm for an IMRT QA plan and 3%/2 mm for a brachytherapy QA plan give gamma passing rates of 95%. Conclusions: In this paper, we demonstrate the use of a functional argument to linearize the inherently nonlinear response of a radiochromic film based reference dosimetry system. In this way, relative dosimetry can be conveniently performed with a radiochromic film dosimetry system without the need to establish a calibration curve.
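    A short sketch of the linearizing argument ζ = −netOD^(2/3)/ln(netOD) described above, showing that it is well defined and monotonic for net optical densities between 0 and 1; the netOD grid is arbitrary, and no claim is made here about its linearity against dose for any particular film.

```python
import numpy as np

# Linearizing argument from the paper: zeta = -netOD**(2/3) / ln(netOD),
# defined for 0 < netOD < 1 (where ln(netOD) < 0, so zeta > 0).
def zeta(net_od):
    net_od = np.asarray(net_od, dtype=float)
    return -(net_od ** (2.0 / 3.0)) / np.log(net_od)

net_od = np.linspace(0.05, 0.9, 50)
z = zeta(net_od)

# zeta grows monotonically with netOD on this interval, so it preserves
# the ordering of doses while reshaping the response.
monotone = bool(np.all(np.diff(z) > 0))
```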

  15. Detection and quantification of a toxic salt substitute (LiCl) by using laser induced breakdown spectroscopy (LIBS).

    PubMed

    Sezer, Banu; Velioglu, Hasan Murat; Bilge, Gonca; Berkkan, Aysel; Ozdinc, Nese; Tamer, Ugur; Boyaci, Ismail Hakkı

    2018-01-01

    The use of Li salts in foods has been prohibited due to their negative effects on the central nervous system; however, they might still be used, especially in meat products, as Na substitutes. Lithium can be toxic and even lethal at higher concentrations and is not approved in foods. The present study focuses on Li analysis in meatballs by using laser induced breakdown spectroscopy (LIBS). Meatball samples were analyzed using LIBS and flame atomic absorption spectroscopy. Calibration curves were obtained by utilizing the Li emission lines at 610 nm and 670 nm for univariate calibration. While the calibration curve obtained using the emission line at 610 nm gave an R² of 0.991 and a limit of detection (LOD) of 22.6 ppm, the calibration curve obtained at 670 nm (for concentrations below 1300 ppm) gave an R² of 0.965 and an LOD of 4.64 ppm, providing the more sensitive determination of Li. Copyright © 2017. Published by Elsevier Ltd.
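    Univariate calibration with an R² and LOD of the kind reported above can be sketched as follows. The concentrations, intensities, and the 3σ/slope detection-limit convention are illustrative assumptions, not the study's actual data or exact LOD definition.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic univariate calibration: emission intensity vs Li concentration.
conc = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 800.0])   # ppm (illustrative)
intensity = 12.0 + 0.8 * conc + rng.normal(0.0, 4.0, conc.size)

# Ordinary least-squares calibration line and its R^2.
slope, intercept = np.polyfit(conc, intensity, 1)
fit = slope * conc + intercept
ss_res = np.sum((intensity - fit) ** 2)
ss_tot = np.sum((intensity - intensity.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# A common 3*sigma detection-limit estimate from the residual scatter.
sigma = np.sqrt(ss_res / (conc.size - 2))
lod = 3.0 * sigma / slope
```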

  16. A new method for automated dynamic calibration of tipping-bucket rain gauges

    USGS Publications Warehouse

    Humphrey, M.D.; Istok, J.D.; Lee, J.Y.; Hevesi, J.A.; Flint, A.L.

    1997-01-01

    Existing methods for dynamic calibration of tipping-bucket rain gauges (TBRs) can be time consuming and labor intensive. A new automated dynamic calibration system has been developed to calibrate TBRs with minimal effort. The system consists of a programmable pump, datalogger, digital balance, and computer. Calibration is performed in two steps: 1) pump calibration and 2) rain gauge calibration. Pump calibration ensures precise control of water flow rates delivered to the rain gauge funnel; rain gauge calibration ensures precise conversion of bucket tip times to actual rainfall rates. Calibration of the pump and one rain gauge for 10 selected pump rates typically requires about 8 h. Data files generated during rain gauge calibration are used to compute rainfall intensities and amounts from a record of bucket tip times collected in the field. The system was tested using 5 types of commercial TBRs (15.2-, 20.3-, and 30.5-cm diameters; 0.1-, 0.2-, and 1.0-mm resolutions) and using 14 TBRs of a single type (20.3-cm diameter; 0.1-mm resolution). Ten pump rates ranging from 3 to 154 mL min-1 were used to calibrate the TBRs and represented rainfall rates between 6 and 254 mm h-1 depending on the rain gauge diameter. All pump calibration results were very linear with R2 values greater than 0.99. All rain gauges exhibited large nonlinear underestimation errors (between 5% and 29%) that decreased with increasing rain gauge resolution and increased with increasing rainfall rate, especially for rates greater than 50 mm h-1. Calibration curves of bucket tip time against the reciprocal of the true pump rate for all rain gauges also were linear with R2 values of 0.99. Calibration data for the 14 rain gauges of the same type were very similar, as indicated by slope values that were within 14% of each other and ranged from about 367 to 417 s mm h-1. 
The developed system can calibrate TBRs efficiently, accurately, and virtually unattended and could be modified for use with other rain gauge designs. The system is now in routine use to calibrate TBRs in a large rainfall collection network at Yucca Mountain, Nevada.
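    The calibration relation described above, bucket tip time against the reciprocal of the true rain rate, can be sketched with idealized numbers. The 0.1-mm resolution matches one of the tested gauges, but the perfect linearity assumed here is an idealization; the study's real gauges showed rate-dependent underestimation that this calibration is designed to correct.

```python
import numpy as np

# Tipping-bucket gauge: one tip per fixed depth of water, so the interval
# between tips is inversely proportional to the true rainfall rate.
resolution_mm = 0.1                                         # mm per bucket tip
rates = np.array([6.0, 12.0, 25.0, 50.0, 100.0, 254.0])     # mm/h

# Ideal tip interval in seconds: 3600 * resolution / rate.
tip_time_s = 3600.0 * resolution_mm / rates

# The paper's calibration: tip time vs reciprocal rate is linear; the
# slope has units of s mm h^-1, comparable to the 367-417 range reported.
slope, intercept = np.polyfit(1.0 / rates, tip_time_s, 1)

def rate_from_tip_time(t_s):
    """Field use: convert a measured tip interval back to a rain rate."""
    return slope / (t_s - intercept)

r = rate_from_tip_time(3600.0 * resolution_mm / 50.0)
```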

  17. Radiometric calibration of the vacuum-ultraviolet spectrograph SUMER on the SOHO spacecraft with the B detector.

    PubMed

    Schühle, U; Curdt, W; Hollandt, J; Feldman, U; Lemaire, P; Wilhelm, K

    2000-01-20

    The Solar Ultraviolet Measurement of Emitted Radiation (SUMER) vacuum-ultraviolet spectrograph was calibrated in the laboratory before the integration of the instrument on the Solar and Heliospheric Observatory (SOHO) spacecraft in 1995. During the scientific operation of the SOHO it has been possible to track the radiometric calibration of the SUMER spectrograph since March 1996 by a strategy that employs various methods to update the calibration status and improve the coverage of the spectral calibration curve. The results for the A detector were published previously [Appl. Opt. 36, 6416 (1997)]. During three years of operation in space, the B detector was used for two and one-half years. We describe the characteristics of the B detector and present results of the tracking and refinement of the spectral calibration curves with it. Observations of the spectra of the stars alpha and rho Leonis permit an extrapolation of the calibration curves in the range from 125 to 149.0 nm. Using a solar coronal spectrum observed above the solar disk, we can extrapolate the calibration curves by measuring emission line pairs with well-known intensity ratios. The sensitivity ratio of the two photocathode areas can be obtained by registration of many emission lines in the entire spectral range on both KBr-coated and bare parts of the detector's active surface. The results are found to be consistent with the published calibration performed in the laboratory in the wavelength range from 53 to 124 nm. We can extrapolate the calibration outside this range to 147 nm with a relative uncertainty of ±30% (1σ) for wavelengths longer than 125 nm and to 46.5 nm with 50% uncertainty for the short-wavelength range below 53 nm.

  18. Magnetic-sensor performance evaluated from magneto-conductance curve in magnetic tunnel junctions using in-plane or perpendicularly magnetized synthetic antiferromagnetic reference layers

    NASA Astrophysics Data System (ADS)

    Nakano, T.; Oogane, M.; Furuichi, T.; Ando, Y.

    2018-04-01

    The automotive industry requires magnetic sensors exhibiting highly linear output within a dynamic range as wide as ±1 kOe. A simple model predicts that the magneto-conductance (G-H) curve in a magnetic tunnel junction (MTJ) is perfectly linear, whereas the magneto-resistance (R-H) curve inevitably contains a finite nonlinearity. We prepared two kinds of MTJs using in-plane or perpendicularly magnetized synthetic antiferromagnetic (i-SAF or p-SAF) reference layers and investigated their sensor performance. In the MTJ with the i-SAF reference layer, the G-H curve did not necessarily show smaller nonlinearities than those of the R-H curve with different dynamic ranges. This is because the magnetizations of the i-SAF reference layer start to rotate at a magnetic field even smaller than the switching field (Hsw) measured by a magnetometer, which significantly affects the tunnel magnetoresistance (TMR) effect. In the MTJ with the p-SAF reference layer, the G-H curve showed much smaller nonlinearities than those of the R-H curve, thanks to a large Hsw value of the p-SAF reference layer. We achieved a nonlinearity of 0.08% FS (full scale) in the G-H curve with a dynamic range of ±1 kOe, satisfying our target for automotive applications. This demonstrated that a reference layer exhibiting a large Hsw value is indispensable in order to achieve a highly linear G-H curve.
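    The model prediction discussed above, that the conductance (G-H) curve can be linear while the resistance (R-H) curve inevitably carries curvature, can be illustrated with a toy sensor whose conductance responds exactly linearly to field. The response parameters and this particular %FS (full scale) nonlinearity metric are assumptions for illustration.

```python
import numpy as np

# Toy sensor model: conductance G responds linearly to field H, so the
# resistance R = 1/G necessarily picks up curvature.
H = np.linspace(-1000.0, 1000.0, 201)          # Oe, +/-1 kOe dynamic range
G = 1.0 + 0.2 * (H / 1000.0)                   # arbitrary units, exactly linear
R = 1.0 / G

def nonlinearity_pct_fs(x, y):
    """Max deviation from the best-fit line, as percent of full scale."""
    slope, intercept = np.polyfit(x, y, 1)
    dev = y - (slope * x + intercept)
    return 100.0 * np.max(np.abs(dev)) / (y.max() - y.min())

nl_G = nonlinearity_pct_fs(H, G)   # ~0: the G-H curve is linear
nl_R = nonlinearity_pct_fs(H, R)   # several %FS from the 1/G curvature
```

    In the real device the reference-layer rotation below Hsw distorts even the G-H curve, which is why the paper's p-SAF reference layer (large Hsw) is needed to realize the ideal case sketched here.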

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J; Li, X; Liu, G

    Purpose: We compare and investigate the dosimetric impact on pencil beam scanning (PBS) proton treatment plans generated with CT calibration curves from four different CT scanners and one averaged ‘global’ CT calibration curve. Methods: The four CT scanners are located at three different hospital locations within the same health system. CT density calibration curves were collected from these scanners using the same CT calibration phantom and acquisition parameters. Mass density to HU value tables were then commissioned in a commercial treatment planning system. Five disease sites were chosen for dosimetric comparison: brain, lung, head and neck, adrenal, and prostate. Three types of PBS plans were generated at each treatment site using SFUD, IMPT, and robustness-optimized IMPT (RO-IMPT) techniques. 3D dose differences were investigated using 3D gamma analysis. Results: The CT calibration curves for all four scanners display very similar shapes. Large HU differences were observed in both the high-HU and low-HU regions of the curves. Large dose differences were generally observed at the distal edges of the beams, and they are beam angle dependent. Of the five treatment sites, lung plans exhibit the largest overall range uncertainties and prostate plans have the greatest dose discrepancy. There are no significant differences between the SFUD, IMPT, and RO-IMPT methods. 3D gamma analysis with 3%, 3 mm criteria showed all plans with greater than a 95% passing rate. Two of the scanners with close HU values have negligible dose differences except for lung. Conclusion: Our study shows that there are more than 5% dosimetric differences between different CT calibration curves. PBS treatment plans generated with SFUD, IMPT, and robustness-optimized IMPT have similar sensitivity to the CT density uncertainty. More patient data and tighter gamma criteria based on structure location and size will be used for further investigation.

  20. A microfluidic approach for hemoglobin detection in whole blood

    NASA Astrophysics Data System (ADS)

    Taparia, Nikita; Platten, Kimsey C.; Anderson, Kristin B.; Sniadecki, Nathan J.

    2017-10-01

    Diagnosis of anemia relies on the detection of hemoglobin levels in a blood sample. Conventional blood analyzers are not readily available in most low-resource regions where anemia is prevalent, so detection methods that are low-cost and point-of-care are needed. Here, we present a microfluidic approach to measure hemoglobin concentration in a sample of whole blood. Unlike conventional approaches, our microfluidic approach does not require hemolysis. We detect the level of hemoglobin in a blood sample optically by illuminating the blood in a microfluidic channel at a peak wavelength of 540 nm and measuring its absorbance using a CMOS sensor coupled with a lens to magnify the image onto the detector. We compare measurements in microchannels with channel heights of 50 and 115 μm and found the channel with the 50 μm height provided a better range of detection. Since we use whole blood and not lysed blood, we fit our data to an absorption model that includes optical scattering in order to obtain a calibration curve for our system. Based on this calibration curve and data collected, we can measure hemoglobin concentration within 1 g/dL for severe cases of anemia. In addition, we measured optical density for blood flowing at a shear rate of 500 s-1 and observed it did not affect the nonlinear model. With this method, we provide an approach that uses microfluidic detection of hemoglobin levels that can be integrated with other microfluidic approaches for blood analysis.

  1. Calibration of the forward-scattering spectrometer probe - Modeling scattering from a multimode laser beam

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Lock, James A.

    1993-01-01

    Scattering calculations using a detailed model of the multimode laser beam in the forward-scattering spectrometer probe (FSSP) were carried out using a recently developed extension to Mie scattering theory. From this model, new calibration curves for the FSSP were calculated. The difference between the old calibration curves and the new ones is small for droplet diameters less than 10 microns, but the difference increases to approximately 10 percent at diameters of 50 microns. When using glass beads to calibrate the FSSP, calibration errors can be minimized by using glass beads of many different diameters, over the entire range of the FSSP. If the FSSP is calibrated using one-diameter glass beads, then the new formalism is necessary to extrapolate the calibration over the entire range.

  2. Calibration of the Forward-scattering Spectrometer Probe: Modeling Scattering from a Multimode Laser Beam

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Lock, James A.

    1993-01-01

    Scattering calculations using a more detailed model of the multimode laser beam in the forward-scattering spectrometer probe (FSSP) were carried out by using a recently developed extension to Mie scattering theory. From this model, new calibration curves for the FSSP were calculated. The difference between the old calibration curves and the new ones is small for droplet diameters less than 10 micrometers, but the difference increases to approximately 10% at diameters of 50 micrometers. When using glass beads to calibrate the FSSP, calibration errors can be minimized, by using glass beads of many different diameters, over the entire range of the FSSP. If the FSSP is calibrated using one-diameter glass beads, then the new formalism is necessary to extrapolate the calibration over the entire range.

  3. GIADA: extended calibration activity: . the Electrostatic Micromanipulator

    NASA Astrophysics Data System (ADS)

    Sordini, R.; Accolla, M.; Della Corte, V.; Rotundi, A.

    GIADA (Grain Impact Analyser and Dust Accumulator), one of the scientific instruments onboard the Rosetta/ESA space mission, is devoted to studying the dynamical properties of dust particles ejected by the short-period comet 67P/Churyumov-Gerasimenko. In preparation for the scientific phase of the mission, we are performing laboratory calibration activities on the GIADA Proto Flight Model (PFM), housed in a clean room in our laboratory. The aim of the calibration activity is to characterize the response curves of the GIADA measurement sub-systems. These curves are then correlated with the calibration curves obtained for the GIADA payload onboard the Rosetta S/C. The calibration activity involves two of the three sub-systems constituting GIADA: the Grain Detection System (GDS) and the Impact Sensor (IS). To get reliable calibration curves, a statistically relevant number of grains has to be dropped or shot into the GIADA instrument. Particle composition, structure, size, optical properties, and porosity were selected in order to obtain realistic cometary dust analogues. For each selected type of grain, we estimated that at least one hundred shots are needed to obtain a calibration curve. In order to manipulate such a large number of particles, we have designed and developed an innovative electrostatic system able to capture, manipulate, and shoot particles with sizes in the range 20 - 500 μm. The Electrostatic Micromanipulator (EM) is installed on a manual handling system composed of X-Y-Z micrometric slides with a 360° rotational stage along Z, mounted on an optical bench. In the present work, we present tests of the EM using ten different materials with dimensions in the range 50 - 500 μm: the experimental results are in compliance with the requirements.

  4. A weakly nonlinear theory for wave-vortex interactions in curved channel flow

    NASA Technical Reports Server (NTRS)

    Singer, Bart A.; Erlebacher, Gordon; Zang, Thomas A.

    1992-01-01

    A weakly nonlinear theory is developed to study the interaction of Tollmien-Schlichting (TS) waves and Dean vortices in curved channel flow. The predictions obtained from the theory agree well with results obtained from direct numerical simulations of curved channel flow, especially for low amplitude disturbances. Some discrepancies in the results of a previous theory with direct numerical simulations are resolved.

  5. Analysis of variation in calibration curves for Kodak XV radiographic film using model-based parameters.

    PubMed

    Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L

    2010-08-05

    Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1σ), respectively, using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes and increases with increasing depth above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
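    The single-target single-hit response model used above can be sketched as a saturating exponential fitted by nonlinear least squares. The specific functional form shown, background + saturation·(1 − e^(−slope·D)), and all numerical values are illustrative assumptions consistent with a three-parameter (background, saturation, slope) fit, not the study's fitted parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

# Single-target single-hit film response: optical density saturates
# exponentially with dose D. Parameter names follow the abstract.
def single_hit(dose, background, saturation, slope):
    return background + saturation * (1.0 - np.exp(-slope * dose))

rng = np.random.default_rng(4)
dose = np.array([0.0, 16.0, 32.0, 48.0, 64.0, 96.0, 128.0])   # cGy
od = single_hit(dose, 0.2, 2.5, 0.01) + rng.normal(0.0, 0.01, dose.size)

# Nonlinear least-squares fit from rough starting values.
popt, _ = curve_fit(single_hit, dose, od, p0=[0.1, 2.0, 0.02])
bg_hat, sat_hat, slope_hat = popt
```

    With only a few dose points, the saturation and slope estimates are strongly correlated, which is why the study constrains the model when fitting one- to three-point calibration curves.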

  6. Effects of experimental design on calibration curve precision in routine analysis

    PubMed Central

    Pimentel, Maria Fernanda; Neto, Benício de Barros; Saldanha, Teresa Cristina B.

    1998-01-01

    A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence interval plots for a calibration curve and provides information about the number of standard solutions, concentration levels and suitable concentration ranges to achieve an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data. PMID:18924816

  7. Selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays and impacts of using incorrect weighting factors on curve stability, data quality, and assay performance.

    PubMed

    Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E

    2014-09-16

    A simple procedure for selecting the correct weighting factors for linear and quadratic calibration curves with the least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of instrument responses (σ) and the concentrations (x). The weighting factor of 1, 1/x, or 1/x² should be selected if, over the entire concentration range, σ is a constant, σ² is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x² should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. It was found that the most stable curve was obtained when the correct weighting factor was used, whereas curves using incorrect weighting factors were unstable. It was also found that there was only an insignificant impact on the concentrations reported with calibration curves using incorrect weighting factors, as the concentrations were always reported with the passing curves, which overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact the assay performance significantly. Finally, the difference between the weighting factors of 1/x² and 1/y² is discussed. All of the findings can be generalized and applied to other quantitative analysis techniques using calibration curves with a weighted least-squares regression algorithm.
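    The weighting-factor rule above maps directly onto weighted least squares. In numpy's polyfit the weight w multiplies the residual before squaring, so passing w = 1/x implements the 1/x² weighting of squared residuals discussed in the abstract; the calibration data below are synthetic, with σ proportional to x (constant relative error).

```python
import numpy as np

rng = np.random.default_rng(5)

# Bioanalytical-style calibration: response y = 2*x with relative noise,
# i.e. sigma proportional to x -> the 1/x^2 weighting case in the text.
x = np.array([1.0, 2.0, 5.0, 10.0, 50.0, 100.0, 500.0, 1000.0])
y = 2.0 * x * (1.0 + 0.05 * rng.standard_normal(x.size))

# numpy.polyfit minimizes sum((w * (y - fit))**2), so w = 1/x gives the
# 1/x^2 weighting; the unweighted fit is kept for comparison.
coef_w = np.polyfit(x, y, 1, w=1.0 / x)
coef_u = np.polyfit(x, y, 1)

slope_w = coef_w[0]
```

    The weighted fit keeps the low-concentration standards from being swamped by the large absolute residuals at the top of the range, which is the practical motivation for 1/x² weighting.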

  8. INFLUENCE OF IRON CHELATION ON R1 AND R2 CALIBRATION CURVES IN GERBIL LIVER AND HEART

    PubMed Central

    Wood, John C.; Aguilar, Michelle; Otto-Duessel, Maya; Nick, Hanspeter; Nelson, Marvin D.; Moats, Rex

    2008-01-01

    MRI is gaining increasing importance for the noninvasive quantification of organ iron burden. Since transverse relaxation rates depend on iron distribution as well as iron concentration, physiologic and pharmacologic processes that alter iron distribution could change MRI calibration curves. This paper compares the effects of three iron chelators, deferoxamine, deferiprone, and deferasirox, on R1 and R2 calibration curves according to two iron loading and chelation strategies. 33 Mongolian gerbils underwent iron loading (iron dextran 500 mg/kg/wk) for 3 weeks followed by 4 weeks of chelation. An additional 56 animals received less aggressive loading (200 mg/kg/week) for 10 weeks, followed by 12 weeks of chelation. R1 and R2 calibration curves were compared to results from 23 iron-loaded animals that had not received chelation. Acute iron loading and chelation biased R1 and R2 from the unchelated reference calibration curves, but chelator-specific changes were not observed, suggesting physiologic rather than pharmacologic differences in iron distribution. Long-term deferiprone treatment increased liver R1 by 50% (p<0.01), while long-term deferasirox lowered liver R2 by 30.9% (p<0.0001). The relationship between R1 and R2 and organ iron concentration may depend upon the acuity of iron loading and unloading as well as the iron chelator administered. PMID:18581418

  9. Quantification of calcium using localized normalization on laser-induced breakdown spectroscopy data

    NASA Astrophysics Data System (ADS)

    Sabri, Nursalwanie Mohd; Haider, Zuhaib; Tufail, Kashif; Aziz, Safwan; Ali, Jalil; Wahab, Zaidan Abdul; Abbas, Zulkifly

    2017-03-01

    This paper focuses on localized normalization for improved calibration curves in laser-induced breakdown spectroscopy (LIBS) measurements. The calibration curves were obtained using five samples with different concentrations of calcium (Ca) in a potassium bromide (KBr) matrix. The work utilized a Q-switched Nd:YAG laser installed in a LIBS2500plus system, operated at its fundamental wavelength with a laser energy of 650 mJ. The gate delay was optimized from the signal-to-background ratio (SBR) of the Ca II 315.9 and 317.9 nm lines, the optimum conditions being those giving high spectral intensity and SBR. The ionic emission lines of Ca were most intense at a gate delay of 0.83 µs, while the SBR-optimized gate delay was 5.42 µs for both Ca II spectral lines. Calibration curves were constructed in three ways: from the original intensities measured in the LIBS experiments, from normalized intensities, and from locally normalized intensities of the spectral lines. The R2 values of the calibration curves plotted using locally normalized intensities of the Ca I 610.3, 612.2, and 616.2 nm spectral lines are 0.96329, 0.97042, and 0.96131, respectively. The improvement in the regression coefficients of the calibration curves allows more accurate analysis in LIBS. At the request of all authors of the paper, and with the agreement of the Proceedings Editor, an updated version of this article was published on 24 May 2017.

  10. Quantification of sunscreen ethylhexyl triazone in topical skin-care products by normal-phase TLC/densitometry.

    PubMed

    Sobanska, Anna W; Pyzowski, Jaroslaw

    2012-01-01

    Ethylhexyl triazone (ET) was separated from other sunscreens such as avobenzone, octocrylene, octyl methoxycinnamate, and diethylamino hydroxybenzoyl hexyl benzoate and from parabens by normal-phase HPTLC on silica gel 60 as stationary phase. Two mobile phases were particularly effective: (A) cyclohexane-diethyl ether 1 : 1 (v/v) and (B) cyclohexane-diethyl ether-acetone 15 : 1 : 2 (v/v/v), since apart from ET analysis they facilitated separation and quantification of the other sunscreens present in the formulations. Densitometric scanning was performed at 300 nm. Calibration curves for ET were nonlinear (second-degree polynomials), with R > 0.998. For both mobile phases, the limit of detection (LOD) was 0.03 μg spot(-1) and the limit of quantification (LOQ) was 0.1 μg spot(-1). Both methods were validated.
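
    A second-degree-polynomial calibration like the one described can be fitted and inverted in a few lines; a sketch with made-up densitometric responses (numpy only, all values hypothetical):

```python
import numpy as np

# Hypothetical densitometric calibration: peak area vs amount per spot (µg)
amount = np.array([0.1, 0.2, 0.4, 0.8, 1.2, 1.6])
response = np.array([120, 232, 430, 770, 1050, 1290.0])  # invented areas

coef = np.polyfit(amount, response, 2)  # second-degree polynomial [c2, c1, c0]
r = np.corrcoef(np.polyval(coef, amount), response)[0, 1]  # correlation R

def amount_from_response(y, coef, lo=0.0, hi=2.0):
    # Invert the quadratic: keep the real root inside the calibrated range
    c2, c1, c0 = coef
    roots = np.roots([c2, c1, c0 - y])
    real = roots[np.isreal(roots)].real
    inside = real[(real >= lo) & (real <= hi)]
    return inside[0]

x_est = amount_from_response(500.0, coef)  # unknown sample, area 500
```

Restricting the inverse to the calibrated range resolves the ambiguity of the two quadratic roots.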

  11. Implications of Nonlinear Concentration Response Curve for Ozone related Mortality on Health Burden Assessment

    EPA Science Inventory

    We characterize the sensitivity of the ozone attributable health burden assessment with respect to different modeling strategies of concentration-response function. For this purpose, we develop a flexible Bayesian hierarchical model allowing for a nonlinear ozone risk curve with ...

  12. Calibration and accuracy analysis of a focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2014-08-01

    In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and of how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression for the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve-fitting approach based on a Taylor series approximation. Both model-based methods show significant advantages over the curve-fitting method: they need fewer reference points for calibration and, moreover, supply a function that remains valid beyond the calibration range. In addition, the depth-map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.

  13. A BAYESIAN METHOD FOR CALCULATING REAL-TIME QUANTITATIVE PCR CALIBRATION CURVES USING ABSOLUTE PLASMID DNA STANDARDS

    EPA Science Inventory

    In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignore...

  14. Increasing the sensitivity of the Jaffe reaction for creatinine

    NASA Technical Reports Server (NTRS)

    Tom, H. Y.

    1973-01-01

    Study of analytical procedure has revealed that linearity of creatinine calibration curve can be extended by using 0.03 molar picric acid solution made up in 70 percent ethanol instead of water. Three to five times more creatinine concentration can be encompassed within linear portion of calibration curve.

  15. Marine04 Marine radiocarbon age calibration, 26–0 ka BP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughen, K; Baille, M; Bard, E

    2004-11-01

    New radiocarbon calibration curves, IntCal04 and Marine04, have been constructed and internationally ratified to replace the terrestrial and marine components of IntCal98. The new calibration datasets extend an additional 2000 years, from 0-26 ka cal BP (Before Present, 0 cal BP = AD 1950), and provide much higher resolution, greater precision and more detailed structure than IntCal98. For the Marine04 curve, dendrochronologically dated tree-ring samples, converted with a box-diffusion model to marine mixed-layer ages, cover the period from 0-10.5 ka cal BP. Beyond 10.5 ka cal BP, high-resolution marine data become available from foraminifera in varved sediments and U/Th-dated corals. The marine records are corrected with site-specific ¹⁴C reservoir age information to provide a single global marine mixed-layer calibration from 10.5-26.0 ka cal BP. A substantial enhancement relative to IntCal98 is the introduction of a random walk model, which takes into account the uncertainty in both the calendar age and the radiocarbon age to calculate the underlying calibration curve. The marine datasets and calibration curve for marine samples from the surface mixed layer (Marine04) are discussed here. The tree-ring datasets, sources of uncertainty, and regional offsets are presented in detail in a companion paper by Reimer et al.

  16. Nonlinear Growth Curves in Developmental Research

    ERIC Educational Resources Information Center

    Grimm, Kevin J.; Ram, Nilam; Hamagami, Fumiaki

    2011-01-01

    Developmentalists are often interested in understanding change processes, and growth models are the most common analytic tool for examining such processes. Nonlinear growth curves are especially valuable to developmentalists because the defining characteristics of the growth process such as initial levels, rates of change during growth spurts, and…

  17. Bandwidth increasing mechanism by introducing a curve fixture to the cantilever generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Weiqun, E-mail: weiqunliu@home.swjtu.edu.cn; Liu, Congzhi; Ren, Bingyu

    2016-07-25

    A nonlinear wideband generator architecture by clamping the cantilever beam generator with a curve fixture is proposed. Devices with different nonlinear stiffness can be obtained by properly choosing the fixture curve according to the design requirements. Three available generator types are presented and discussed for polynomial curves. Experimental investigations show that the proposed mechanism effectively extends the operation bandwidth with good power performance. Especially, the simplicity and easy feasibility allow the mechanism to be widely applied for vibration generators in different scales and environments.

  18. Spectral characterization and calibration of AOTF spectrometers and hyper-spectral imaging system

    NASA Astrophysics Data System (ADS)

    Katrašnik, Jaka; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    The goal of this article is to present a novel method for spectral characterization and calibration of spectrometers and hyper-spectral imaging systems based on non-collinear acousto-optical tunable filters. The method characterizes the spectral tuning curve (frequency-wavelength characteristic) of the AOTF (Acousto-Optic Tunable Filter) filter by matching the acquired and modeled spectra of the HgAr calibration lamp, which emits line spectrum that can be well modeled via AOTF transfer function. In this way, not only tuning curve characterization and corresponding spectral calibration but also spectral resolution assessment is performed. The obtained results indicated that the proposed method is efficient, accurate and feasible for routine calibration of AOTF spectrometers and hyper-spectral imaging systems and thereby a highly competitive alternative to the existing calibration methods.

  19. Assessment of opacimeter calibration according to International Standard Organization 10155.

    PubMed

    Gomes, J F

    2001-01-01

    This paper compares the calibration method for opacimeters issued by the International Standard Organization (ISO) 10155 with the manual reference method for determination of dust content in stack gases. ISO 10155 requires at least nine operational measurements, corresponding to three operational measurements per each dust emission range within the stack. The procedure is assessed by comparison with previous calibration methods for opacimeters using only two operational measurements from a set of measurements made at stacks from pulp mills. The results show that even if the international standard for opacimeter calibration requires that the calibration curve is to be obtained using 3 x 3 points, a calibration curve derived using 3 points could be, at times, acceptable in statistical terms, provided that the amplitude of individual measurements is low.

  20. Improving Risk Adjustment for Mortality After Pediatric Cardiac Surgery: The UK PRAiS2 Model.

    PubMed

    Rogers, Libby; Brown, Katherine L; Franklin, Rodney C; Ambler, Gareth; Anderson, David; Barron, David J; Crowe, Sonya; English, Kate; Stickley, John; Tibby, Shane; Tsang, Victor; Utley, Martin; Witter, Thomas; Pagel, Christina

    2017-07-01

    Partial Risk Adjustment in Surgery (PRAiS), a risk model for 30-day mortality after children's heart surgery, has been used by the UK National Congenital Heart Disease Audit to report expected risk-adjusted survival since 2013. This study aimed to improve the model by incorporating additional comorbidity and diagnostic information. The model development dataset was all procedures performed between 2009 and 2014 in all UK and Ireland congenital cardiac centers. The outcome measure was death within each 30-day surgical episode. Model development followed an iterative process of clinical discussion and development and assessment of models using logistic regression under 25 × 5 cross-validation. Performance was measured using Akaike information criterion, the area under the receiver-operating characteristic curve (AUC), and calibration. The final model was assessed in an external 2014 to 2015 validation dataset. The development dataset comprised 21,838 30-day surgical episodes, with 539 deaths (mortality, 2.5%). The validation dataset comprised 4,207 episodes, with 97 deaths (mortality, 2.3%). The updated risk model included 15 procedural, 11 diagnostic, and 4 comorbidity groupings, and nonlinear functions of age and weight. Performance under cross-validation was: median AUC of 0.83 (range, 0.82 to 0.83), median calibration slope and intercept of 0.92 (range, 0.64 to 1.25) and -0.23 (range, -1.08 to 0.85) respectively. In the validation dataset, the AUC was 0.86 (95% confidence interval [CI], 0.82 to 0.89), and the calibration slope and intercept were 1.01 (95% CI, 0.83 to 1.18) and 0.11 (95% CI, -0.45 to 0.67), respectively, showing excellent performance. A more sophisticated PRAiS2 risk model for UK use was developed with additional comorbidity and diagnostic information, alongside age and weight as nonlinear variables. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
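
    The AUC figures quoted above have a direct rank interpretation: the probability that a randomly chosen death received a higher predicted risk than a randomly chosen survivor. A toy sketch with invented risks (not the PRAiS2 data; numpy only):

```python
import numpy as np

def auc_mann_whitney(y_true, y_score):
    # AUC as the Mann-Whitney statistic: the probability that a randomly
    # chosen positive case (death) has a higher predicted risk than a
    # randomly chosen negative case (survivor), ties counted as 0.5.
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Invented episode-level data: predicted 30-day mortality risk and outcome
risk = np.array([0.01, 0.02, 0.03, 0.25, 0.05, 0.20, 0.40, 0.60])
outcome = np.array([0, 0, 0, 0, 0, 1, 1, 1])
auc = auc_mann_whitney(outcome, risk)  # 14 of the 15 pos/neg pairs are ordered correctly
```

The calibration slope and intercept also reported above are a separate check, of whether predicted risks match observed event rates; AUC alone only measures ranking.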

  1. Investigating the discrimination potential of linear and nonlinear spectral multivariate calibrations for analysis of phenolic compounds in their binary and ternary mixtures and calculation pKa values.

    PubMed

    Rasouli, Zolaikha; Ghavami, Raouf

    2016-08-05

    Vanillin (VA), vanillic acid (VAI), and syringaldehyde (SIA) are important food additives used as flavor enhancers. The current study is the first devoted to the application of partial least squares (PLS-1), partial robust M-regression (PRM), and feed-forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous determination of binary and ternary mixtures of VA, VAI, and SIA, using data extracted directly from UV spectra with overlapping peaks of the individual analytes. Under the optimum experimental conditions, a linear calibration was obtained for each compound in the concentration ranges 0.61-20.99 [LOD=0.12], 0.67-23.19 [LOD=0.13], and 0.73-25.12 [LOD=0.15] μg mL(-1) for VA, VAI, and SIA, respectively. Four calibration sets of standard samples were designed by combining full and fractional factorial designs, with seven and three levels per factor for the binary and ternary mixtures, respectively. The results reveal that PLS-1 and PRM are similar in their ability to predict each binary mixture; resolution of the ternary mixture was accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied to describe the spectra from the acid-base titration of each individual compound, i.e., to resolve the complex overlapping spectra and to interpret the extracted spectral and concentration profiles of the pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to determine the number of chemical species, and the corresponding dissociation constants were then derived. Finally, FFNNs were used to detect the active compounds in real and spiked water samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Investigating the discrimination potential of linear and nonlinear spectral multivariate calibrations for analysis of phenolic compounds in their binary and ternary mixtures and calculation pKa values

    NASA Astrophysics Data System (ADS)

    Rasouli, Zolaikha; Ghavami, Raouf

    2016-08-01

    Vanillin (VA), vanillic acid (VAI), and syringaldehyde (SIA) are important food additives used as flavor enhancers. The current study is the first devoted to the application of partial least squares (PLS-1), partial robust M-regression (PRM), and feed-forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous determination of binary and ternary mixtures of VA, VAI, and SIA, using data extracted directly from UV spectra with overlapping peaks of the individual analytes. Under the optimum experimental conditions, a linear calibration was obtained for each compound in the concentration ranges 0.61-20.99 [LOD = 0.12], 0.67-23.19 [LOD = 0.13], and 0.73-25.12 [LOD = 0.15] μg mL-1 for VA, VAI, and SIA, respectively. Four calibration sets of standard samples were designed by combining full and fractional factorial designs, with seven and three levels per factor for the binary and ternary mixtures, respectively. The results reveal that PLS-1 and PRM are similar in their ability to predict each binary mixture; resolution of the ternary mixture was accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied to describe the spectra from the acid-base titration of each individual compound, i.e., to resolve the complex overlapping spectra and to interpret the extracted spectral and concentration profiles of the pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to determine the number of chemical species, and the corresponding dissociation constants were then derived. Finally, FFNNs were used to detect the active compounds in real and spiked water samples.

  3. Nonlinear propagation model for ultrasound hydrophones calibration in the frequency range up to 100 MHz.

    PubMed

    Radulescu, E G; Wójcik, J; Lewin, P A; Nowicki, A

    2003-06-01

    To facilitate the implementation and verification of the new ultrasound hydrophone calibration techniques described in the companion paper (elsewhere in this issue), a nonlinear propagation model was developed. A brief outline of the theoretical considerations is presented, and the model's advantages and disadvantages are discussed. The results of simulations yielding the spatial and temporal acoustic pressure amplitude are also presented and compared with those obtained using the KZK and Field II models. Excellent agreement between all models is demonstrated. The applicability of the model to discrete wideband calibration of hydrophones is documented in the companion paper elsewhere in this volume.

  4. LDV measurement of small nonlinearities in flat and curved membranes. A model for eardrum nonlinear acoustic behaviour

    NASA Astrophysics Data System (ADS)

    Gladiné, Kilian; Muyshondt, Pieter; Dirckx, Joris

    2016-06-01

    Laser Doppler vibrometry is an intrinsically highly linear measurement technique, which makes it a great tool for measuring extremely small nonlinearities in the vibration response of a system. Although the measurement technique itself is highly linear, other components in the experimental setup may introduce nonlinearities. An important source of artificially introduced nonlinearity is the speaker that generates the stimulus. In this work, two correction methods to remove the effects of stimulus nonlinearity are investigated; both were found to give similar results but have different pros and cons. The aim of this work is to investigate the importance of the conical shape of the eardrum as a source of nonlinearity in hearing. We present measurements on flat and indented membranes. The data show that the curved membrane exhibits slightly higher levels of nonlinearity than the flat membrane.

  5. 40 CFR 90.321 - NDIR analyzer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... the form of the following equation (1) or (2). Include zero as a data point. Compensation for known...

  6. Nonlinear Constitutive Modeling of Piezoelectric Ceramics

    NASA Astrophysics Data System (ADS)

    Xu, Jia; Li, Chao; Wang, Haibo; Zhu, Zhiwen

    2017-12-01

    Nonlinear constitutive modeling of piezoelectric ceramics is discussed in this paper. A Van der Pol term is introduced to explain the simple hysteretic curve, and improved nonlinear difference terms are used to interpret the hysteresis phenomena of piezoelectric ceramics. The fit of the model to experimental data is verified by the partial least-squares regression method. The results show that this method describes the measured curve well, and they are helpful for constitutive modeling of piezoelectric ceramics.

  7. Influence of Individual Differences on the Calculation Method for FBG-Type Blood Pressure Sensors

    PubMed Central

    Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun; Kobayashi, Yuka

    2016-01-01

    In this paper, we propose a blood pressure calculation and measurement method that uses a fiber Bragg grating (FBG) sensor. There are several points on the surface of the human body at which the pulse can be measured, and when an FBG sensor is located at any of these points, the pulse wave signal can be measured. The measured waveform is similar to the acceleration pulse wave. The pulse wave signal changes depending on several factors, including whether or not the individual is healthy and/or elderly. The measured pulse wave signal can be used to calculate the blood pressure via a calibration curve, which is constructed by partial least squares (PLS) regression of a reference blood pressure on the pulse wave signal. Here we focus on the influence of individual differences on the blood pressure calculated from each calibration curve. The blood pressure calculated from individual and overall calibration curves was compared, and the results show that calculation based on the overall calibration curve had lower measurement accuracy than calculation based on an individual calibration curve. We also found that the influence of individual differences on the blood pressure calculated using the FBG sensor method was very low. Therefore, the FBG sensor method we developed for measuring blood pressure was found to be suitable for use by many people. PMID:28036015

  8. Influence of Individual Differences on the Calculation Method for FBG-Type Blood Pressure Sensors.

    PubMed

    Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun; Kobayashi, Yuka

    2016-12-28

    In this paper, we propose a blood pressure calculation and measurement method that uses a fiber Bragg grating (FBG) sensor. There are several points on the surface of the human body at which the pulse can be measured, and when an FBG sensor is located at any of these points, the pulse wave signal can be measured. The measured waveform is similar to the acceleration pulse wave. The pulse wave signal changes depending on several factors, including whether or not the individual is healthy and/or elderly. The measured pulse wave signal can be used to calculate the blood pressure via a calibration curve, which is constructed by partial least squares (PLS) regression of a reference blood pressure on the pulse wave signal. Here we focus on the influence of individual differences on the blood pressure calculated from each calibration curve. The blood pressure calculated from individual and overall calibration curves was compared, and the results show that calculation based on the overall calibration curve had lower measurement accuracy than calculation based on an individual calibration curve. We also found that the influence of individual differences on the blood pressure calculated using the FBG sensor method was very low. Therefore, the FBG sensor method we developed for measuring blood pressure was found to be suitable for use by many people.

  9. The strong nonlinear interaction of Tollmien-Schlichting waves and Taylor-Goertler vortices in curved channel flow

    NASA Technical Reports Server (NTRS)

    Bennett, J.; Hall, P.; Smith, F. T.

    1988-01-01

    Viscous fluid flows with curved streamlines can support both centrifugal and viscous traveling-wave instabilities. Here the interaction of these instabilities is discussed in the context of fully developed flow in a curved channel. The viscous (Tollmien-Schlichting) instability is described asymptotically at high Reynolds numbers, and it is found that it can induce a Taylor-Goertler flow even at extremely small amplitudes. In this interaction, the Tollmien-Schlichting wave can drive a vortex state with wavelength comparable either with the channel width or with the wavelength of lower-branch viscous modes. The nonlinear equations which describe these interactions are solved for nonlinear equilibrium states.

  10. Quantitative evaluation method for nonlinear characteristics of piezoelectric transducers under high stress with complex nonlinear elastic constant

    NASA Astrophysics Data System (ADS)

    Miyake, Susumu; Kasashima, Takashi; Yamazaki, Masato; Okimura, Yasuyuki; Nagata, Hajime; Hosaka, Hiroshi; Morita, Takeshi

    2018-07-01

    The high power properties of piezoelectric transducers were evaluated considering a complex nonlinear elastic constant. The piezoelectric LCR equivalent circuit with nonlinear circuit parameters was utilized to measure them. The deformed admittance curve of piezoelectric transducers was measured under a high stress and the complex nonlinear elastic constant was calculated by curve fitting. Transducers with various piezoelectric materials, Pb(Zr,Ti)O3, (K,Na)NbO3, and Ba(Zr,Ti)O3–(Ba,Ca)TiO3, were investigated by the proposed method. The measured complex nonlinear elastic constant strongly depends on the linear elastic and piezoelectric constants. This relationship indicates that piezoelectric high power properties can be controlled by modifying the linear elastic and piezoelectric constants.

  11. Test of prototype ITER vacuum ultraviolet spectrometer and its application to impurity study in KSTAR plasmas.

    PubMed

    Seon, C R; Hong, J H; Jang, J; Lee, S H; Choe, W; Lee, H H; Cheon, M S; Pak, S; Lee, H G; Biel, W; Barnsley, R

    2014-11-01

    To optimize the design of the ITER vacuum ultraviolet (VUV) spectrometer, a prototype VUV spectrometer was developed. The sensitivity calibration curve of the spectrometer was calculated from the mirror reflectivity, the grating efficiency, and the detector efficiency, and was consistent with the calibration points derived experimentally using a calibrated hollow cathode lamp. As an application, the prototype ITER VUV spectrometer was installed at KSTAR, where various impurity emission lines could be measured. By analyzing about 100 shots, a strong positive correlation between the O VI and C IV emission intensities was found.

  12. Financial model calibration using consistency hints.

    PubMed

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency-hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese yen swaps market and the US dollar yield market.

  13. On the minimum quantum requirement of photosynthesis.

    PubMed

    Zeinalov, Yuzeir

    2009-01-01

    An analysis of the shape of photosynthetic light curves is presented, and the existence of the initial non-linear part is shown to be a consequence of the operation of the non-cooperative (Kok's) mechanism of oxygen evolution or of the effect of dark respiration. The effect of this nonlinearity on the quantum efficiency (yield) and quantum requirement is reconsidered. The essential conclusions are: 1) The non-linearity of the light curves cannot be compensated for by using suspensions of algae or chloroplasts with high (>1.0) optical density or absorbance. 2) The maxima of the quantum efficiency curves or the minima of the quantum requirement curves cannot be used to estimate the exact values of the maximum quantum efficiency and the minimum quantum requirement. The maximum quantum efficiency or the minimum quantum requirement should be estimated only after extrapolating the linear part of the quantum requirement curves at higher light intensities to "0" light intensity.
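
    The recommended extrapolation can be sketched numerically: fit only the linear, high-intensity part of the quantum-requirement curve and read off the intercept at zero intensity (all values below are invented for illustration; numpy only):

```python
import numpy as np

# Hypothetical quantum-requirement curve: nonlinear at low light
# (Kok effect / dark respiration), linear at higher intensities.
intensity = np.array([5, 10, 20, 40, 60, 80, 100.0])       # arbitrary units
qr        = np.array([14.0, 11.5, 10.2, 9.6, 9.8, 10.0, 10.2])

# Use only the high-intensity, linear part (here: I >= 40) and
# extrapolate back to zero intensity to estimate the minimum QR.
mask = intensity >= 40
slope, intercept = np.polyfit(intensity[mask], qr[mask], 1)
min_qr = intercept   # estimated minimum quantum requirement
```

Reading the minimum of the raw curve instead (9.6 at I = 40 here) would overestimate the minimum quantum requirement, which is the paper's point.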

  14. Linking Item Parameters to a Base Scale. ACT Research Report Series, 2009-2

    ERIC Educational Resources Information Center

    Kang, Taehoon; Petersen, Nancy S.

    2009-01-01

    This paper compares three methods of item calibration--concurrent calibration, separate calibration with linking, and fixed item parameter calibration--that are frequently used for linking item parameters to a base scale. Concurrent and separate calibrations were implemented using BILOG-MG. The Stocking and Lord (1983) characteristic curve method…

  15. Nonlinear Gompertz Curve Models of Achievement Gaps in Mathematics and Reading

    ERIC Educational Resources Information Center

    Cameron, Claire E.; Grimm, Kevin J.; Steele, Joel S.; Castro-Schilo, Laura; Grissmer, David W.

    2015-01-01

    This study examined achievement trajectories in mathematics and reading from school entry through the end of middle school with linear and nonlinear growth curves in 2 large longitudinal data sets (National Longitudinal Study of Youth--Children and Young Adults and Early Childhood Longitudinal Study--Kindergarten Cohort [ECLS-K]). The S-shaped…

  16. Detecting outliers when fitting data with nonlinear regression – a new method based on robust nonlinear regression and the false discovery rate

    PubMed Central

    Motulsky, Harvey J; Brown, Ronald E

    2006-01-01

    Background Nonlinear regression, like linear regression, assumes that the scatter of data around the ideal curve follows a Gaussian or normal distribution. This assumption leads to the familiar goal of regression: to minimize the sum of the squares of the vertical or Y-value distances between the points and the curve. Outliers can dominate the sum-of-the-squares calculation, and lead to misleading results. However, we know of no practical method for routinely identifying outliers when fitting curves with nonlinear regression. Results We describe a new method for identifying outliers when fitting data with nonlinear regression. We first fit the data using a robust form of nonlinear regression, based on the assumption that scatter follows a Lorentzian distribution. We devised a new adaptive method that gradually becomes more robust as the method proceeds. To define outliers, we adapted the false discovery rate approach to handling multiple comparisons. We then remove the outliers, and analyze the data using ordinary least-squares regression. Because the method combines robust regression and outlier removal, we call it the ROUT method. When analyzing simulated data, where all scatter is Gaussian, our method detects (falsely) one or more outlier in only about 1–3% of experiments. When analyzing data contaminated with one or several outliers, the ROUT method performs well at outlier identification, with an average False Discovery Rate less than 1%. Conclusion Our method, which combines a new method of robust nonlinear regression with a new method of outlier identification, identifies outliers from nonlinear curve fits with reasonable power and few false positives. PMID:16526949
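
    A condensed sketch of the ROUT idea (robust fit, outlier flagging, then ordinary least squares) can be written with scipy; here a MAD-based cutoff stands in for the paper's false-discovery-rate test, and the model and data are invented:

```python
import numpy as np
from scipy.optimize import least_squares, curve_fit

def model(x, a, k):
    # Simple one-phase exponential decay
    return a * np.exp(-k * x)

rng = np.random.default_rng(1)
x = np.linspace(0, 5, 40)
y = model(x, 10.0, 0.8) + rng.normal(0, 0.2, x.size)
y[[5, 25]] += np.array([4.0, -3.0])      # inject two outliers

# Step 1: robust fit with a heavy-tailed (Cauchy/Lorentzian-like) loss
res = least_squares(lambda p: model(x, *p) - y, x0=[y.max(), 1.0], loss="cauchy")

# Step 2: flag outliers from the robust residuals (MAD-based cutoff here,
# standing in for the paper's false-discovery-rate test)
r = model(x, *res.x) - y
mad = np.median(np.abs(r - np.median(r)))
keep = np.abs(r) < 5 * 1.4826 * mad

# Step 3: ordinary least squares on the cleaned data
popt, _ = curve_fit(model, x[keep], y[keep], p0=res.x)
```

The robust pass keeps the outliers from dragging the fit; the final ordinary fit then enjoys the usual Gaussian-error statistics on the cleaned data.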

  17. A Nonlinearity Minimization-Oriented Resource-Saving Time-to-Digital Converter Implemented in a 28 nm Xilinx FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Liu, Chong

    2015-10-01

    Because large nonlinearity errors exist in current tapped-delay-line (TDL) field-programmable gate array (FPGA)-based time-to-digital converters (TDCs), bin-by-bin calibration techniques must be employed to achieve high measurement resolution. If the TDL in the selected FPGA is significantly affected by changes in ambient temperature, the bin-by-bin calibration table must be updated frequently. On-line calibration and calibration-table updating increase the TDC design complexity and limit system performance to some extent. This paper proposes a method to minimize the nonlinearity errors of TDC bins, so that bin-by-bin calibration may not be needed while a reasonably high time resolution is maintained. The method is a two-pass approach: by bin realignment, the large number of wasted zero-width bins in the original TDL is reused and the granularity of the bins is improved; by bin decimation, bin size is traded off against uniformity, and the time interpolation by the delay line becomes precise enough that bin-by-bin calibration is unnecessary. Using Xilinx 28 nm FPGAs, in which the TDL properties are not very sensitive to ambient temperature, the proposed TDC achieves approximately 15 ps root-mean-square (RMS) time resolution in dual-channel measurements of time intervals over the operating temperature range. Because calibration is removed and fewer logic resources are required for data post-processing, the method offers greater multi-channel capability.
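
    For contrast with the calibration-free approach above, the conventional bin-by-bin calibration that such designs aim to avoid is usually built from a code-density test: feed the TDC events uncorrelated with its clock, and estimate each bin's width from its share of the hits. A minimal sketch (the hit counts are hypothetical):

```python
def code_density_calibration(hits, period=1.0):
    # hits[i] = number of random-arrival events recorded in bin i.
    # With arrivals uniform over one clock period, each bin's width is
    # proportional to its hit count (the code-density test).
    total = sum(hits)
    widths = [period * h / total for h in hits]
    # Calibrated timestamp for a hit in bin i: cumulative width of all
    # earlier bins plus half of this bin's own width (bin centre).
    centres, acc = [], 0.0
    for w in widths:
        centres.append(acc + w / 2)
        acc += w
    return widths, centres
```

    A real implementation would store `centres` as the lookup table that each raw TDC code is mapped through; the point of the paper is to make the bins uniform enough that this table is unnecessary.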

  18. Efficient gradient calibration based on diffusion MRI.

    PubMed

    Teh, Irvin; Maguire, Mahon L; Schneider, Jürgen E

    2017-01-01

    To propose a method for calibrating gradient systems and correcting gradient nonlinearities based on diffusion MRI measurements. The gradient scaling in x, y, and z was first offset by up to 5% from precalibrated values to simulate a poorly calibrated system. Diffusion MRI data were acquired in a phantom filled with cyclooctane, and corrections for gradient scaling errors and nonlinearity were determined. The calibration was assessed with diffusion tensor imaging and independently validated with high resolution anatomical MRI of a second structured phantom. The errors in apparent diffusion coefficients along orthogonal axes ranged from -9.2% ± 0.4% to +8.8% ± 0.7% before calibration and -0.5% ± 0.4% to +0.8% ± 0.3% after calibration. Concurrently, fractional anisotropy decreased from 0.14 ± 0.03 to 0.03 ± 0.01. Errors in geometric measurements in x, y and z ranged from -5.5% to +4.5% precalibration and were likewise reduced to -0.97% to +0.23% postcalibration. Image distortions from gradient nonlinearity were markedly reduced. Periodic gradient calibration is an integral part of quality assurance in MRI. The proposed approach is both accurate and efficient, can be set up with readily available materials, and improves accuracy in both anatomical and diffusion MRI to within ±1%. Magn Reson Med 77:170-179, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. © 2016 Wiley Periodicals, Inc.
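
    The relation exploited by diffusion-based gradient calibration can be illustrated with the Stejskal-Tanner signal equation, S = S0·exp(-b·ADC), together with the fact that the b-value scales with the square of the gradient amplitude. The sketch below (not the authors' implementation) estimates an ADC from a two-point measurement and derives a gradient-scaling correction from the ratio of measured to reference diffusivity:

```python
import math

def adc(s0, s, b):
    # Apparent diffusion coefficient from a two-point fit of
    # S = S0 * exp(-b * ADC).
    return math.log(s0 / s) / b

def gradient_scale_correction(adc_measured, adc_reference):
    # b is proportional to g^2, so a gradient amplitude off by factor k
    # gives ADC_meas = k^2 * ADC_ref; the scaling should be corrected
    # by the factor 1/k.
    k = math.sqrt(adc_measured / adc_reference)
    return 1.0 / k
```

    With a phantom liquid of known diffusivity at a controlled temperature (the role cyclooctane plays here), `adc_reference` is known, and the returned factor corrects the gradient amplitude along each axis.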

  19. Efficient gradient calibration based on diffusion MRI

    PubMed Central

    Teh, Irvin; Maguire, Mahon L.

    2016-01-01

    Purpose To propose a method for calibrating gradient systems and correcting gradient nonlinearities based on diffusion MRI measurements. Methods The gradient scaling in x, y, and z was first offset by up to 5% from precalibrated values to simulate a poorly calibrated system. Diffusion MRI data were acquired in a phantom filled with cyclooctane, and corrections for gradient scaling errors and nonlinearity were determined. The calibration was assessed with diffusion tensor imaging and independently validated with high resolution anatomical MRI of a second structured phantom. Results The errors in apparent diffusion coefficients along orthogonal axes ranged from −9.2% ± 0.4% to +8.8% ± 0.7% before calibration and −0.5% ± 0.4% to +0.8% ± 0.3% after calibration. Concurrently, fractional anisotropy decreased from 0.14 ± 0.03 to 0.03 ± 0.01. Errors in geometric measurements in x, y and z ranged from −5.5% to +4.5% precalibration and were likewise reduced to −0.97% to +0.23% postcalibration. Image distortions from gradient nonlinearity were markedly reduced. Conclusion Periodic gradient calibration is an integral part of quality assurance in MRI. The proposed approach is both accurate and efficient, can be set up with readily available materials, and improves accuracy in both anatomical and diffusion MRI to within ±1%. Magn Reson Med 77:170–179, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. PMID:26749277

  20. Molar mass characterization of sodium carboxymethyl cellulose by SEC-MALLS.

    PubMed

    Shakun, Maryia; Maier, Helena; Heinze, Thomas; Kilz, Peter; Radke, Wolfgang

    2013-06-05

    Two series of sodium carboxymethyl celluloses (NaCMCs) derived from microcrystalline cellulose (Avicel samples) and cotton linters (BWL samples) with average degrees of substitution (DS) ranging from DS=0.45 to DS=1.55 were characterized by size exclusion chromatography with multi-angle laser light scattering detection (SEC-MALLS) in 100 mmol/L aqueous ammonium acetate (NH4OAc) as a vaporizable eluent system. The use of vaporizable NH4OAc allows future application of the eluent system in two-dimensional separations employing evaporative light scattering detection (ELSD). The losses of samples during filtration and during the chromatographic experiment were determined. The scaling exponent a_s of the relation [Formula: see text] was approximately 0.61, showing that NaCMCs exhibit an expanded coil conformation in solution. No systematic dependence of a_s on DS was observed. The dependence of molar mass on SEC elution volume for samples of different DS can be well described by a common calibration curve, which is advantageous, as it allows the molar masses of unknown samples to be determined using the same calibration curve irrespective of the DS of the NaCMC sample. Since no commercial NaCMC standards are available, correction factors were determined that allow a pullulan-based calibration curve to be converted into an NaCMC calibration using the broad calibration approach. The weight-average molar masses derived from the calibration curve so established closely agree with those determined by light scattering, confirming the accuracy of the correction factors. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Determining the Parameters of Fractional Exponential Hereditary Kernels for Nonlinear Viscoelastic Materials

    NASA Astrophysics Data System (ADS)

    Golub, V. P.; Pavlyuk, Ya. V.; Fernati, P. V.

    2013-03-01

    The parameters of fractional-exponential hereditary kernels for nonlinear viscoelastic materials are determined. Methods for determining the parameters used in the third-order theory of viscoelasticity and in nonlinear theories based on the similarity of primary creep curves and the similarity of isochronous creep curves are analyzed. The parameters of fractional-exponential hereditary kernels are determined and tested against experimental data for microplastic and for TC-8/3-250 and SVAM glass-reinforced plastics. The results (tables and plots) are analyzed.

  2. A rapid tool for determination of titanium dioxide content in white chickpea samples.

    PubMed

    Sezer, Banu; Bilge, Gonca; Berkkan, Aysel; Tamer, Ugur; Hakki Boyaci, Ismail

    2018-02-01

    Titanium dioxide (TiO₂) is a widely used additive in foods. However, there is an ongoing debate in the scientific community about health concerns over TiO₂. The main goal of this study is to determine TiO₂ content by using laser-induced breakdown spectroscopy (LIBS). To this end, different amounts of TiO₂ were added to white chickpeas and analyzed using LIBS. A univariate calibration curve was obtained by following Ti emissions at 390.11 nm, and a partial least squares (PLS) calibration curve was obtained by evaluating the whole spectra. The results showed that the Ti calibration curve at 390.11 nm provides successful determination of the Ti level, with an R² of 0.985 and a limit of detection (LOD) of 33.9 ppm, while PLS gave an R² of 0.989 and an LOD of 60.9 ppm. Furthermore, commercial white chickpea samples were used to validate the method; the validation R² values for the simple calibration and PLS were calculated as 0.989 and 0.951, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
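
    A univariate calibration like the Ti 390.11 nm curve is typically summarized by its slope and a detection limit of the form 3.3·σ/slope (the ICH convention; the record does not state which LOD definition was used here). A sketch with hypothetical intensity data:

```python
def linear_fit(xs, ys):
    # ordinary least squares for y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def lod_3sigma(xs, ys):
    # ICH-style detection limit: 3.3 * (residual std. dev.) / slope
    a, b = linear_fit(xs, ys)
    n = len(xs)
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    s = (sse / (n - 2)) ** 0.5
    return 3.3 * s / b
```

    The residual standard deviation of the calibration line stands in for the blank standard deviation; with real LIBS spectra one would use replicate blank measurements where available.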

  3. Measurement and fitting techniques for the assessment of material nonlinearity using nonlinear Rayleigh waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torello, David; Kim, Jin-Yeon; Qu, Jianmin

    2015-03-31

    This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β₁₁ is proposed based on a nonlinear least squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens, and a β₁₁(7075)/β₁₁(2024) ratio of 1.363 agrees well with previous literature and earlier work.

  4. Myoglobin structure and function: A multiweek biochemistry laboratory project.

    PubMed

    Silverstein, Todd P; Kirk, Sarah R; Meyer, Scott C; Holman, Karen L McFarlane

    2015-01-01

    We have developed a multiweek laboratory project in which students isolate myoglobin and characterize its structure, function, and redox state. The important laboratory techniques covered in this project include size-exclusion chromatography, electrophoresis, spectrophotometric titration, and FTIR spectroscopy. Regarding protein structure, students work with computer modeling and visualization of myoglobin and its homologues, after which they spectroscopically characterize its thermal denaturation. Students also study protein function (ligand binding equilibrium) and are instructed on topics in data analysis (calibration curves, nonlinear vs. linear regression). This upper division biochemistry laboratory project is a challenging and rewarding one that not only exposes students to a wide variety of important biochemical laboratory techniques but also ties those techniques together to work with a single readily available and easily characterized protein, myoglobin. © 2015 International Union of Biochemistry and Molecular Biology.
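
    The "nonlinear vs. linear regression" data-analysis topic mentioned above can be made concrete with a one-site ligand-binding model, Y = Bmax·L/(Kd + L). The sketch below fits it directly by nonlinear least squares, using a simple grid search over Kd with Bmax obtained in closed form at each trial Kd; the binding data are hypothetical:

```python
def fit_binding(L, Y, kd_grid):
    # One-site model: Y = Bmax * L / (Kd + L).
    # For a fixed Kd the model is linear in Bmax, so Bmax has a closed
    # form; scan Kd over a grid and keep the least-squares best pair.
    best = None
    for kd in kd_grid:
        f = [x / (kd + x) for x in L]
        bmax = sum(y * fi for y, fi in zip(Y, f)) / sum(fi * fi for fi in f)
        sse = sum((y - bmax * fi) ** 2 for y, fi in zip(Y, f))
        if best is None or sse < best[0]:
            best = (sse, kd, bmax)
    return best[1], best[2]
```

    Linearizations such as the double-reciprocal plot (1/Y vs. 1/L) distort the error structure of the data, which is the usual argument, and a natural teaching point here, for fitting the hyperbola directly.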

  5. The CHARIS Integral Field Spectrograph with SCExAO: Data Reduction and Performance

    NASA Astrophysics Data System (ADS)

    Kasdin, N. Jeremy; Groff, Tyler; Brandt, Timothy; Currie, Thayne; Rizzo, Maxime; Chilcote, Jeffrey K.; Guyon, Olivier; Jovanovic, Nemanja; Lozi, Julien; Norris, Barnaby; Tamura, Motohide

    2018-01-01

    We summarize the data reduction pipeline and on-sky performance of the CHARIS Integral Field Spectrograph behind the SCExAO Adaptive Optics system on the Subaru Telescope. The open-source pipeline produces data cubes from raw detector reads using a χ²-based spectral extraction technique. It implements a number of advances, including a fit to the full nonlinear pixel response, suppression of up to a factor of ~2 in read noise, and deconvolution of the spectra with the line-spread function. The CHARIS team is currently developing the calibration and postprocessing software that will comprise the second component of the data reduction pipeline. Here, we show a range of CHARIS images, spectra, and contrast curves produced using provisional routines. CHARIS is now characterizing exoplanets simultaneously across the J, H, and K bands.

  6. Quantification of Sunscreen Ethylhexyl Triazone in Topical Skin-Care Products by Normal-Phase TLC/Densitometry

    PubMed Central

    Sobanska, Anna W.; Pyzowski, Jaroslaw

    2012-01-01

    Ethylhexyl triazone (ET) was separated from other sunscreens such as avobenzone, octocrylene, octyl methoxycinnamate, and diethylamino hydroxybenzoyl hexyl benzoate and from parabens by normal-phase HPTLC on silica gel 60 as stationary phase. Two mobile phases were particularly effective: (A) cyclohexane-diethyl ether 1 : 1 (v/v) and (B) cyclohexane-diethyl ether-acetone 15 : 1 : 2 (v/v/v), since apart from ET analysis they facilitated separation and quantification of other sunscreens present in the formulations. Densitometric scanning was performed at 300 nm. Calibration curves for ET were nonlinear (second-degree polynomials), with R > 0.998. For both mobile phases, limits of detection (LOD) were 0.03 μg spot⁻¹ and limits of quantification (LOQ) 0.1 μg spot⁻¹. Both methods were validated. PMID:22629203
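
    A second-degree polynomial calibration of the kind reported here amounts to fitting y = c0 + c1·x + c2·x² and then inverting the quadratic to read a concentration off a measured response. A self-contained sketch with hypothetical data; which root of the quadratic is physical depends on the curve's orientation, so the chosen root must lie in the calibrated range:

```python
def polyfit2(xs, ys):
    # least-squares quadratic y = c0 + c1*x + c2*x^2 via the 3x3
    # normal equations, solved by Gaussian elimination with pivoting
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            m = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c  # (c0, c1, c2)

def invert_quadratic(c, y):
    # inverse prediction: solve c0 + c1*x + c2*x^2 = y for x;
    # this branch is the one in the working range when c2 < 0 and the
    # responses lie on the rising part of the curve
    c0, c1, c2 = c
    disc = c1 * c1 - 4 * c2 * (c0 - y)
    return (-c1 + disc ** 0.5) / (2 * c2)
```

    In validated methods the inverse prediction is only applied between the lowest and highest calibration standards, which also resolves the root ambiguity.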

  7. On the role of flood wave celerity-discharge relationship and its applications on hydrological studies

    NASA Astrophysics Data System (ADS)

    Fleischmann, Ayan; Collischonn, Walter; Jardim, Pedro; Meyer, Aline; Paiva, Rodrigo

    2017-04-01

    The non-linear relationship between flood wave celerity (C) and discharge (Q) plays an important role in defining how flood waves are routed through the river network. The behavior of this curve is driven by cross-section geometry, which leads to celerity increasing with discharge in rivers without floodplains; in reaches with floodplain storage, C may decrease after bankfull Q. In a set of studies we therefore investigate the effects of the C×Q relationship on basin hydrological response. (i) We studied these curves for several Brazilian river reaches and analyzed to what extent they are related to river channel geometry and other characteristics (e.g., slope, width, drainage area and sinuosity). (ii) We show through empirical, analytical and numerical experiments how the C×Q relation affects hydrograph skewness, and how the decreasing relationship found in rivers with important floodplain storage leads to negatively skewed hydrographs, as in the Amazon and Pantanal regions; this could be used to infer important floodplain processes (e.g., the presence of overbank-flow wetlands, which feature negatively skewed hydrographs, or of interfluvial wetlands not directly connected to rivers). (iii) Finally, we found that these concepts can be used to calibrate the effective bathymetry of a hydrodynamic model by fitting the C×Q relationship with the SCE-UA optimization method. Our results show how important it is to investigate the non-linear hydraulic processes occurring throughout river basins to understand the overall hydrological response, and we propose new frameworks to assist such studies, including the evaluation of hydrograph skewness and the estimation of hydraulic geometry.

  8. Validation of a C2-C7 cervical spine finite element model using specimen-specific flexibility data.

    PubMed

    Kallemeyn, Nicole; Gandhi, Anup; Kode, Swathi; Shivanna, Kiran; Smucker, Joseph; Grosland, Nicole

    2010-06-01

    This study presents a specimen-specific C2-C7 cervical spine finite element model that was developed using multiblock meshing techniques. The model was validated using in-house experimental flexibility data obtained from the cadaveric specimen used for mesh development. The C2-C7 specimen was subjected to pure continuous moments up to +/-1.0 N m in flexion, extension, lateral bending, and axial rotation, and the motions at each level were obtained. Additionally, the specimen was divided into C2-C3, C4-C5, and C6-C7 functional spinal units (FSUs), which were tested in the intact state as well as after sequential removal of the interspinous, ligamentum flavum, and capsular ligaments. The finite element model was initially assigned baseline material properties from the literature, then calibrated using the in-house experimental motion data while keeping the material property values within the ranges reported in the literature. The calibrated model provided good agreement with the nonlinear experimental loading curves and can be used to further study the response of the cervical spine in various biomechanical investigations. Copyright 2010 IPEM. Published by Elsevier Ltd. All rights reserved.

  9. Detecting time-specific differences between temporal nonlinear curves: Analyzing data from the visual world paradigm

    PubMed Central

    Oleson, Jacob J; Cavanaugh, Joseph E; McMurray, Bob; Brown, Grant

    2015-01-01

    In multiple fields of study, time series measured at high frequencies are used to estimate population curves that describe the temporal evolution of some characteristic of interest. These curves are typically nonlinear, and the deviations of each series from the corresponding curve are highly autocorrelated. In this scenario, we propose a procedure to compare the response curves for different groups at specific points in time. The method involves fitting the curves, performing potentially hundreds of serially correlated tests, and appropriately adjusting the overall alpha level of the tests. Our motivating application comes from psycholinguistics and the visual world paradigm. We describe how the proposed technique can be adapted to compare fixation curves within subjects as well as between groups. Our results lead to conclusions beyond the scope of previous analyses. PMID:26400088

  10. A controlled experiment in ground water flow model calibration

    USGS Publications Warehouse

    Hill, M.C.; Cooley, R.L.; Pollock, D.W.

    1998-01-01

    Nonlinear regression was introduced to ground water modeling in the 1970s, but has been used very little to calibrate numerical models of complicated ground water systems. Apparently, nonlinear regression is thought by many to be incapable of addressing such complex problems. With what we believe to be the most complicated synthetic test case used for such a study, this work investigates using nonlinear regression in ground water model calibration. Results of the study fall into two categories. First, the study demonstrates how systematic use of a well designed nonlinear regression method can indicate the importance of different types of data and can lead to successive improvement of models and their parameterizations. Our method differs from previous methods presented in the ground water literature in that (1) weighting is more closely related to expected data errors than is usually the case; (2) defined diagnostic statistics allow for more effective evaluation of the available data, the model, and their interaction; and (3) prior information is used more cautiously. Second, our results challenge some commonly held beliefs about model calibration. For the test case considered, we show that (1) field measured values of hydraulic conductivity are not as directly applicable to models as their use in some geostatistical methods imply; (2) a unique model does not necessarily need to be identified to obtain accurate predictions; and (3) in the absence of obvious model bias, model error was normally distributed. The complexity of the test case involved implies that the methods used and conclusions drawn are likely to be powerful in practice.
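
    The weighting idea in point (1), dividing each residual by its expected measurement error so that observations in different units can share one dimensionless objective function, can be sketched as follows. The one-parameter "model" and the numbers are purely illustrative, not the study's test case:

```python
def weighted_sse(observed, simulated, sigmas):
    # Sum of squared weighted residuals. Weighting each residual by the
    # reciprocal of its expected measurement standard deviation makes the
    # residuals dimensionless, so heads (m) and flows (m^3/s) can be
    # combined in a single objective function.
    return sum(((o - s) / sig) ** 2 for o, s, sig in zip(observed, simulated, sigmas))

def calibrate_1d(model, param_grid, observed, sigmas):
    # brute-force search over a single parameter (e.g. a hydraulic
    # conductivity) minimizing the weighted objective; real regression
    # codes use gradient-based nonlinear least squares instead
    return min(param_grid, key=lambda p: weighted_sse(observed, model(p), sigmas))
```

    The diagnostic statistics mentioned in point (2) are then computed from these same weighted residuals (e.g., checking that they are roughly normally distributed, as the study found for its test case).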

  11. Pulsed lasers versus continuous light sources in capillary electrophoresis and fluorescence detection studies: Photodegradation pathways and models.

    PubMed

    Boutonnet, Audrey; Morin, Arnaud; Petit, Pierre; Vicendo, Patricia; Poinsot, Véréna; Couderc, François

    2016-03-17

    Pulsed lasers are widely used in capillary electrophoresis (CE) studies to provide laser-induced fluorescence (LIF) detection. Unfortunately, pulsed lasers do not give linear calibration curves over a wide range of concentrations. While this does not prevent their use in CE/LIF studies, the non-linear behavior must be understood. Using 7-hydroxycoumarin (7-HC) (10-5000 nM), Tamra (10-5000 nM) and tryptophan (1-200 μM) as dyes, we observe that continuous lasers and LEDs result in linear calibration curves, while pulsed lasers give polynomial ones. The effect is seen both with visible light (530 nm) and with UV light (355 nm, 266 nm). In this work we point out the formation of byproducts induced by pulsed-laser irradiation of 7-HC. Their separation by CE using two Zeta LIF detectors clearly shows that this process is related to the first laser detection. All of these photodegradation products can be identified by an ESI-/MS investigation and correspond to at least two 7-HC dimers. By using the photodegradation model proposed by Heywood and Farnsworth (2010), and taking into account the 7-HC results and the fact that in our system the fluorophore concentration is not constant, it is possible to propose a new photochemical model of fluorescence in LIF detection. The model, like the experiment, shows that it is difficult to obtain linear quantitation curves with pulsed lasers, while UV-LEDs used in continuous mode have this advantage; they are a good alternative to UV pulsed lasers. An application involving the separation and linear quantification of oligosaccharides labeled with 2-aminobenzoic acid is presented using HILIC and LED (365 nm) induced fluorescence. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Neural network modeling for surgical decisions on traumatic brain injury patients.

    PubMed

    Li, Y C; Liu, L; Chiu, W T; Jian, W S

    2000-01-01

    Computerized medical decision support systems have been a major research topic in recent years. Intelligent computer programs have been implemented to aid physicians and other medical professionals in making difficult medical decisions. This report compares three different mathematical models for building a traumatic brain injury (TBI) medical decision support system (MDSS), developed from a large TBI patient database. The MDSS accepts a set of patient data, such as the type of skull fracture, Glasgow Coma Scale (GCS), and episodes of convulsion, and returns the probability that a neurosurgeon would recommend open-skull surgery for that patient. The three mathematical models described in this report are a logistic regression model, a multi-layer perceptron (MLP) neural network, and a radial-basis-function (RBF) neural network. Of the 12,640 patients selected from the database, a randomly drawn 9,480 cases were used as the training group to develop and train the models; the other 3,160 cases formed the validation group used to evaluate model performance. We used sensitivity, specificity, area under the receiver-operating characteristic (ROC) curve, and calibration curves as indicators of how accurately these models predict a neurosurgeon's decision on open-skull surgery. The results showed that, assuming equal importance of sensitivity and specificity, the logistic regression model had a (sensitivity, specificity) of (73%, 68%), compared to (80%, 80%) for the RBF model and (88%, 80%) for the MLP model. The resulting areas under the ROC curve for the logistic regression, RBF and MLP models are 0.761, 0.880 and 0.897, respectively (P < 0.05). Among these models, the logistic regression has noticeably poorer calibration. This study demonstrated the feasibility of applying neural networks as the mechanism for TBI decision support systems based on clinical databases. The results also suggest that neural networks may be a better solution for complex, non-linear medical decision support systems than conventional statistical techniques such as logistic regression.
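
    The performance measures used above are easy to state precisely: sensitivity and specificity follow from a 2×2 confusion table at a chosen threshold, and the area under the ROC curve equals the Mann-Whitney probability that a randomly chosen positive case scores above a randomly chosen negative one. A sketch with toy labels and scores:

```python
def sens_spec(y_true, y_score, threshold=0.5):
    # sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)
    tp = sum(1 for t, s in zip(y_true, y_score) if t == 1 and s >= threshold)
    fn = sum(1 for t, s in zip(y_true, y_score) if t == 1 and s < threshold)
    tn = sum(1 for t, s in zip(y_true, y_score) if t == 0 and s < threshold)
    fp = sum(1 for t, s in zip(y_true, y_score) if t == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, y_score):
    # Area under the ROC curve via the Mann-Whitney U statistic:
    # the probability that a positive outranks a negative (ties = 1/2).
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

    Calibration, the third criterion in the report, is a separate property: it compares predicted probabilities with observed event frequencies, so a model can rank well (high AUC) and still be poorly calibrated, as the logistic model was here.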

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Z; Reyhan, M; Huang, Q

    Purpose: The calibration of Hounsfield units (HU) to relative proton stopping powers (RSP) is a crucial component in assuring the accurate delivery of proton therapy dose distributions to patients. The purpose of this work is to assess the uncertainty of CT calibration considering the impact of CT slice thickness, the position of the plug within the phantom, and phantom size. Methods: The stoichiometric calibration method was employed to develop the CT calibration curve. A Gammex 467 tissue characterization phantom was scanned in a Tomotherapy Cheese phantom and a Gammex 451 phantom using a GE CT scanner. Each plug was individually inserted into the same position of the inner and outer rings of the phantoms, respectively. Slice thicknesses of 1.25 mm and 2.5 mm were used; other parameters were kept the same. Results: HU of selected human tissues were calculated based on the fitted coefficients (Kph, Kcoh and KKN), and RSP were calculated according to the Bethe-Bloch equation. The calibration curve was obtained by fitting the Cheese phantom data at 1.25 mm slice thickness. There is no significant difference in soft tissue if the slice thickness, phantom size, or plug position is changed. For bony structures, RSP increases by up to 1% if the phantom size and plug position change while the slice thickness is kept the same. However, if the slice thickness differs from the one used for the calibration curve, a 0.5%-3% deviation would be expected depending on the plug position; the inner position shows the largest deviation (about 2.5% on average). Conclusion: RSP shows a clinically insignificant deviation in the soft tissue region. Special attention may be required when using a slice thickness different from that of the calibration curve for bony structures. It is clinically practical to address a 3% deviation due to different slice thickness in the definition of clinical margins.
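
    In treatment-planning practice, an HU-to-RSP calibration curve of this kind is typically applied as a piecewise-linear lookup through the fitted calibration points. A sketch with hypothetical (HU, RSP) pairs, not the values from this work:

```python
import bisect

def make_hu_to_rsp(hu_points, rsp_points):
    # Piecewise-linear interpolation through calibration points sorted
    # by HU, clamped at both ends of the calibrated range.
    def lookup(hu):
        if hu <= hu_points[0]:
            return rsp_points[0]
        if hu >= hu_points[-1]:
            return rsp_points[-1]
        i = bisect.bisect_right(hu_points, hu)
        x0, x1 = hu_points[i - 1], hu_points[i]
        y0, y1 = rsp_points[i - 1], rsp_points[i]
        return y0 + (y1 - y0) * (hu - x0) / (x1 - x0)
    return lookup
```

    The slice-thickness sensitivity reported above would show up as a shift in the bony-structure segment of such a table; clamping keeps out-of-range HU values (e.g. air) from being extrapolated.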

  14. Nonlinear dynamical modes of climate variability: from curves to manifolds

    NASA Astrophysics Data System (ADS)

    Gavrilov, Andrey; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander

    2016-04-01

    The necessity of efficient dimensionality reduction methods that capture dynamical properties of a system from observed data is evident. A recent study shows that nonlinear dynamical mode (NDM) expansion is able to solve this problem and provide adequate phase variables in climate data analysis [1]. A single NDM is a logical extension of a linear spatio-temporal structure (such as an empirical orthogonal function pattern): it is constructed as a nonlinear transformation of a hidden scalar time series to the space of observed variables, i.e., a projection of the observed dataset onto a nonlinear curve. Both the hidden time series and the parameters of the curve are learned simultaneously using a Bayesian approach. The only prior information about the hidden signal is the assumption of its smoothness. The optimal nonlinearity degree and smoothness are found using the Bayesian evidence technique. In this work we extend the approach further and look for vector hidden signals instead of scalar ones, with the same smoothness restriction. As a result we resolve multidimensional manifolds instead of sums of curves. The dimension of the hidden manifold is likewise optimized using Bayesian evidence. The efficiency of the extension is demonstrated on model examples, and results of application to climate data are demonstrated and discussed. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510

  15. Improving near-infrared prediction model robustness with support vector machine regression: a pharmaceutical tablet assay example.

    PubMed

    Igne, Benoît; Drennen, James K; Anderson, Carl A

    2014-01-01

    Changes in raw materials and process wear and tear can have significant effects on the prediction error of near-infrared calibration models. When the variability that is present during routine manufacturing is not included in the calibration, test, and validation sets, the long-term performance and robustness of the model will be limited. Nonlinearity is a major source of interference. In near-infrared spectroscopy, nonlinearity can arise from light path-length differences that can come from differences in particle size or density. The usefulness of support vector machine (SVM) regression for handling nonlinearity and improving the robustness of calibration models was evaluated in scenarios where the calibration set did not include all the variability present in the test set. Compared to partial least squares (PLS) regression, SVM regression was less affected by physical (particle size) and chemical (moisture) differences. The linearity of the SVM-predicted values was also improved. Nevertheless, although visualization and interpretation tools have been developed to enhance the usability of SVM-based methods, work is yet to be done to provide chemometricians in the pharmaceutical industry with a regression method that can supplement PLS-based methods.

  16. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  17. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  18. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  19. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  20. Out-of-Focus Projector Calibration Method with Distortion Correction on the Projection Plane in the Structured Light Three-Dimensional Measurement System.

    PubMed

    Zhang, Jiarui; Zhang, Yingjie; Chen, Bo

    2017-12-20

    The three-dimensional measurement system with a binary defocusing technique is widely applied in diverse fields, and its measurement accuracy is mainly determined by the calibration accuracy of the out-of-focus projector. In this paper, a high-precision out-of-focus projector calibration method based on distortion correction on the projection plane and a nonlinear optimization algorithm is proposed. To this end, the paper experimentally demonstrates that the projector has noticeable distortions outside its focus plane. Based on this observation, the proposed method uses a high-order radial and tangential lens distortion representation on the projection plane to correct the calibration residuals caused by projection distortion. The final parameters of the out-of-focus projector were obtained using a nonlinear optimization algorithm with good initial values, which were provided by coarsely calibrating the parameters of the out-of-focus projector on the focal and projection planes. Finally, the experimental results demonstrated that the proposed method can accurately calibrate an out-of-focus projector, regardless of the amount of defocus.

  1. Calibration of streamflow gauging stations at the Tenderfoot Creek Experimental Forest

    Treesearch

    Scott W. Woods

    2007-01-01

    We used tracer based methods to calibrate eleven streamflow gauging stations at the Tenderfoot Creek Experimental Forest in western Montana. At six of the stations the measured flows were consistent with the existing rating curves. At Lower and Upper Stringer Creek, Upper Sun Creek and Upper Tenderfoot Creek the published flows, based on the existing rating curves,...

  2. Calibration of a stochastic health evolution model using NHIS data

    NASA Astrophysics Data System (ADS)

    Gupta, Aparna; Li, Zhisheng

    2011-10-01

    This paper presents and calibrates an individual's stochastic health evolution model. In this health evolution model, the uncertainty of health incidents is described by a stochastic process with a finite number of possible outcomes. We construct a comprehensive health status index (HSI) to describe an individual's health status, as well as a health risk factor system (RFS) to classify individuals into different risk groups. Based on the maximum likelihood estimation (MLE) method and the method of nonlinear least squares fitting, model calibration is formulated in terms of two mixed-integer nonlinear optimization problems. Using the National Health Interview Survey (NHIS) data, the model is calibrated for specific risk groups. Longitudinal data from the Health and Retirement Study (HRS) is used to validate the calibrated model, which displays good validation properties. The end goal of this paper is to provide a model and methodology, whose output can serve as a crucial component of decision support for strategic planning of health related financing and risk management.

  3. Rock magnetic properties of dusty olivine: comparison and calibration of non-heating paleointensity methods

    NASA Astrophysics Data System (ADS)

    Lappe, S. L.; Harrison, R. J.; Feinberg, J. M.

    2012-12-01

    The mechanism of chondrule formation is an important outstanding question in cosmochemistry. Magnetic signals recorded by Fe-Ni nanoparticles in chondrules could carry clues to their origin. Recently, research in this area has focused on 'dusty olivine' in ordinary chondrites as potential carriers of pre-accretionary remanence. Dusty olivine is characterised by the presence of sub-micron Fe-Ni inclusions within the olivine host. These metal particles form via subsolidus reduction of the olivine during chondrule formation and are thought to be protected from subsequent chemical and thermal alteration by the host olivine. Three sets of synthetic dusty olivines have been produced, using natural olivine (average Ni content of 0.3 wt%), synthetic Ni-containing olivine (0.1 wt% Ni) and synthetic Ni-free olivine as starting materials. The starting materials were ground to powders, packed into an 8-27 mm³ graphite crucible, heated to 1350 °C under a pure CO gas flow and kept at this temperature for 10 minutes. After this, the samples were held in fixed orientation and quenched into water in a range of known magnetic fields from 0.2 mT to 1.5 mT. We present a comparison of all non-heating methods commonly used for paleointensity determination of extraterrestrial material. All samples showed uni-directional, single-component demagnetization behaviour. The saturation REM ratio (NRM/SIRM) and REMc ratio show non-linear behaviour as a function of applied field and a saturation value < 1. Using the REM' method, the samples showed approximately constant REM' between 100 and 150 mT AF field. Plotting the average values for this field range again shows non-linear behaviour and a saturation value < 1. Another approach we examined to obtain calibration curves for paleointensity determination is based on ARM measurements. We also present an analysis of a new FORC-based method of paleointensity determination applied to metallic Fe-bearing samples [1, 2]. The method uses a first-order reversal curve (FORC) diagram to generate a Preisach distribution of coercivities and interaction fields within the sample and then physically models the acquisition of TRM as a function of magnetic field, temperature and time using thermal relaxation theory. The comparison of observed and calculated NRM demagnetisation spectra is adversely affected by a large population of particles in the single-vortex state. Comparison of observed and calculated REM' curves, however, yields much closer agreement in the high-coercivity SD-dominated range. Calculated values of the average REM' ratio show excellent agreement with the experimental values - including the observed non-linearity of the remanence acquisition curve - suggesting that this method has the potential to reduce the uncertainties in non-heating paleointensity methods for extraterrestrial samples. [1] AR Muxworthy and D Heslop (2011) A Preisach method for estimating absolute paleofield intensity under the constraint of using only isothermal measurements: 1. Theoretical framework. Journal of Geophysical Research, 116, B04102, doi:10.1029/2010JB007843. [2] AR Muxworthy, D Heslop, GA Paterson, and D Michalk. A Preisach method for estimating absolute paleofield intensity under the constraint of using only isothermal measurements: 2. Experimental testing. Journal of Geophysical Research, 116, B04103, doi:10.1029/2010JB007844.

  4. Self-calibrating multiplexer circuit

    DOEpatents

    Wahl, Chris P.

    1997-01-01

    A time domain multiplexer system with automatic determination of acceptable multiplexer output limits, error determination, or correction is comprised of a time domain multiplexer, a computer, a constant current source capable of at least three distinct current levels, and two series resistances employed for calibration and testing. A two point linear calibration curve defining acceptable multiplexer voltage limits may be defined by the computer by determining the voltage output of the multiplexer to very accurately known input signals developed from predetermined current levels across the series resistances. Drift in the multiplexer may be detected by the computer when the output voltage limits, expected during normal operation, are exceeded, or the relationship defined by the calibration curve is invalidated.
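The two-point linear calibration idea in this patent abstract is simple enough to sketch directly: two precisely known inputs define a line, and later readings falling outside a tolerance band around that line flag drift. The current levels, voltages, and tolerance below are invented for illustration, not taken from the patent.

```python
# Two known calibration points (current level -> multiplexer output voltage)
def two_point_line(x1, y1, x2, y2):
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

def drifted(reading, x, slope, offset, tol=0.05):
    """True if a reading deviates from the calibration line by more than tol."""
    return abs(reading - (slope * x + offset)) > tol

# Mock values: 1 mA -> 0.10 V and 9 mA -> 0.90 V across the series resistance
slope, offset = two_point_line(1.0e-3, 0.10, 9.0e-3, 0.90)
print(slope, offset)                            # about 100 V/A and 0 V
print(drifted(0.52, 5.0e-3, slope, offset))     # 0.52 V at 5 mA, within 50 mV
```

A reading of 0.52 V at 5 mA is within the 50 mV band around the expected 0.50 V, so no drift is flagged; a reading of 0.70 V at the same current would be.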

  5. Inverse models: A necessary next step in ground-water modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1997-01-01

    Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best-fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
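The nonlinear least-squares machinery such inverse models rely on, including confidence limits on parameter estimates, can be sketched with scipy on a toy exponential model (invented data, not a ground-water model): the parameter covariance is approximated from the residual Jacobian at the optimum.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 10.0, 40)
rng = np.random.default_rng(1)
# Mock "observations": amplitude 2.0, decay rate 0.3, small noise
obs = 2.0 * np.exp(-0.3 * t) + rng.normal(0.0, 0.01, t.size)

def residuals(p):
    return p[0] * np.exp(-p[1] * t) - obs

fit = least_squares(residuals, x0=[1.0, 1.0])

# Linearized covariance: s^2 (J^T J)^{-1}; note fit.cost = 0.5 * sum(r^2)
dof = t.size - fit.x.size
s2 = 2.0 * fit.cost / dof
cov = s2 * np.linalg.inv(fit.jac.T @ fit.jac)
stderr = np.sqrt(np.diag(cov))
print(fit.x, stderr)
```

The standard errors quantify the calibration quality in the same spirit as the confidence limits the paper lists among the benefits of inverse modeling.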

  6. ESR/Alanine gamma-dosimetry in the 10-30 Gy range.

    PubMed

    Fainstein, C; Winkler, E; Saravi, M

    2000-05-01

    We report alanine dosimeter preparation, procedures for using the ESR dosimetry method, and the resulting calibration curve for gamma irradiation in the 10-30 Gy range. We use the calibration curve to measure the dose delivered during gamma irradiation of human blood, as required in blood transfusion therapy. The ESR/alanine results are compared against those obtained using the thermoluminescent dosimetry (TLD) method.

  7. Techniques for precise energy calibration of particle pixel detectors

    NASA Astrophysics Data System (ADS)

    Kroupa, M.; Campbell-Ricketts, T.; Bahadori, A.; Empl, A.

    2017-03-01

    We demonstrate techniques to improve the accuracy of the energy calibration of Timepix pixel detectors, used for the measurement of energetic particles. The typical signal from such particles spreads among many pixels due to charge sharing effects. As a consequence, the deposited energy in each pixel cannot be reconstructed unless the detector is calibrated, limiting the usability of such signals for calibration. To avoid this shortcoming, we calibrate using low energy X-rays. However, charge sharing effects still occur, resulting in part of the energy being deposited in adjacent pixels and possibly lost. This systematic error in the calibration process results in an error of about 5% in the energy measurements of calibrated devices. We use FLUKA simulations to assess the magnitude of charge sharing effects, allowing a corrected energy calibration to be performed on several Timepix pixel detectors and resulting in substantial improvement in energy deposition measurements. Next, we address shortcomings in calibration associated with the huge range (from kiloelectron-volts to megaelectron-volts) of energy deposited per pixel which result in a nonlinear energy response over the full range. We introduce a new method to characterize the non-linear response of the Timepix detectors at high input energies. We demonstrate improvement using a broad range of particle types and energies, showing that the new method reduces the energy measurement errors, in some cases by more than 90%.
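The abstract does not state which functional form the authors use for the nonlinear high-energy response, so purely as an illustration the sketch below fits a surrogate often seen in the Timepix literature, f(E) = aE + b - c/(E - t) (linear at high energy, strongly nonlinear near the threshold t), to mock per-pixel data:

```python
import numpy as np
from scipy.optimize import curve_fit

def response(E, a, b, c, t):
    # Linear term plus a threshold pole: nonlinear at low E, linear at high E.
    return a * E + b - c / (E - t)

E = np.linspace(6.0, 60.0, 30)                 # mock test energies, keV
true = (2.0, 20.0, 100.0, 3.0)                 # invented "true" pixel parameters
rng = np.random.default_rng(2)
meas = response(E, *true) + rng.normal(0.0, 0.5, E.size)

# Bounds keep the pole position t below the lowest test energy during fitting
popt, pcov = curve_fit(response, E, meas, p0=(1.5, 10.0, 50.0, 1.0),
                       bounds=([0, 0, 0, 0], [5, 50, 500, 5]))
print(popt)
```

Once such a per-pixel response is characterized, inverting it maps the measured signal back to deposited energy across the full kiloelectron-volt-to-megaelectron-volt range.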

  8. Techniques for precise energy calibration of particle pixel detectors.

    PubMed

    Kroupa, M; Campbell-Ricketts, T; Bahadori, A; Empl, A

    2017-03-01

    We demonstrate techniques to improve the accuracy of the energy calibration of Timepix pixel detectors, used for the measurement of energetic particles. The typical signal from such particles spreads among many pixels due to charge sharing effects. As a consequence, the deposited energy in each pixel cannot be reconstructed unless the detector is calibrated, limiting the usability of such signals for calibration. To avoid this shortcoming, we calibrate using low energy X-rays. However, charge sharing effects still occur, resulting in part of the energy being deposited in adjacent pixels and possibly lost. This systematic error in the calibration process results in an error of about 5% in the energy measurements of calibrated devices. We use FLUKA simulations to assess the magnitude of charge sharing effects, allowing a corrected energy calibration to be performed on several Timepix pixel detectors and resulting in substantial improvement in energy deposition measurements. Next, we address shortcomings in calibration associated with the huge range (from kiloelectron-volts to megaelectron-volts) of energy deposited per pixel which result in a nonlinear energy response over the full range. We introduce a new method to characterize the non-linear response of the Timepix detectors at high input energies. We demonstrate improvement using a broad range of particle types and energies, showing that the new method reduces the energy measurement errors, in some cases by more than 90%.

  9. Development and verification of an innovative photomultiplier calibration system with a 10-fold increase in photometer resolution

    NASA Astrophysics Data System (ADS)

    Jiang, Shyh-Biau; Yeh, Tse-Liang; Chen, Li-Wu; Liu, Jann-Yenq; Yu, Ming-Hsuan; Huang, Yu-Qin; Chiang, Chen-Kiang; Chou, Chung-Jen

    2018-05-01

    In this study, we construct a photomultiplier calibration system. This calibration system can help scientists measure and establish the characteristic curve of photon count versus light intensity. The system uses an innovative 10-fold optical attenuator to enable an optical power meter to calibrate photomultiplier tubes whose resolution is much greater than that of the optical power meter. A simulation is first conducted to validate the feasibility of the system, and then the system construction, including optical design, circuit design, and software algorithm, is realized. The simulation generally agrees with measurement data from the constructed system, which are further used to establish the characteristic curve of photon count versus light intensity.

  10. [Application of AOTF in spectral analysis. 2. Application of self-constructed visible AOTF spectrophotometer].

    PubMed

    Peng, Rong-fei; He, Jia-yao; Zhang, Zhan-xia

    2002-02-01

    The performance of a self-constructed visible AOTF spectrophotometer is presented. The wavelength calibration of AOTF1 and AOTF2 is performed with a didymium glass using a fourth-order polynomial curve-fitting method. The absolute error of the peak position is usually less than 0.7 nm. Compared with the commercial UV1100 spectrophotometer, the scanning speed of the AOTF spectrophotometer is much faster, but the resolution depends on the quality of the AOTF. The absorption spectra and the calibration curves of copper sulfate and alizarin red obtained with AOTF1 (Institute for Silicate, Shanghai, China) and AOTF2 (Brimrose, U.S.A.), respectively, are presented. The corresponding correlation coefficients of the calibration curves are 0.9991 and 0.9990, respectively. Preliminary results show that the self-constructed AOTF spectrophotometer is feasible.
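A fourth-order polynomial wavelength calibration of the kind described is a one-liner with numpy. The frequency-to-wavelength values below are invented (an AOTF's tuning relation is roughly inverse in drive frequency); the sketch fits the polynomial to noisy "peak positions" and checks the residual against the true relation:

```python
import numpy as np

rng = np.random.default_rng(3)
f = np.linspace(60.0, 120.0, 15)                   # mock drive frequencies, MHz
wl = 30000.0 / f + 5.0 * np.sin(f / 15.0)          # mock tuning relation, nm
obs = wl + rng.normal(0.0, 0.2, f.size)            # measured peak positions

coef = np.polyfit(f, obs, deg=4)                   # fourth-order calibration
resid = np.polyval(coef, f) - wl
print(f"max calibration error: {np.abs(resid).max():.2f} nm")
```

With a smooth tuning relation and sub-nanometre measurement noise, the quartic keeps the peak-position error well under 1 nm, consistent with the < 0.7 nm figure the record quotes.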

  11. Geometrodynamics: the nonlinear dynamics of curved spacetime

    NASA Astrophysics Data System (ADS)

    Scheel, M. A.; Thorne, K. S.

    2014-04-01

    We review discoveries in the nonlinear dynamics of curved spacetime, largely made possible by numerical solutions of Einstein's equations. We discuss critical phenomena and self-similarity in gravitational collapse, the behavior of spacetime curvature near singularities, the instability of black strings in five spacetime dimensions, and the collision of four-dimensional black holes. We also discuss the prospects for further discoveries in geometrodynamics via observations of gravitational waves.

  12. A monolithic 3D-0D coupled closed-loop model of the heart and the vascular system: Experiment-based parameter estimation for patient-specific cardiac mechanics.

    PubMed

    Hirschvogel, Marc; Bassilious, Marina; Jagschies, Lasse; Wildhirt, Stephen M; Gee, Michael W

    2016-10-15

    A model for patient-specific cardiac mechanics simulation is introduced, incorporating a 3-dimensional finite element model of the ventricular part of the heart, which is coupled to a reduced-order 0-dimensional closed-loop vascular system, heart valve, and atrial chamber model. The ventricles are modeled by a nonlinear orthotropic passive material law. The electrical activation is mimicked by a prescribed parameterized active stress acting along a generic muscle fiber orientation. Our activation function is constructed such that the start of ventricular contraction and relaxation as well as the active stress curve's slope are parameterized. The imaging-based patient-specific ventricular model is prestressed to low end-diastolic pressure to account for the imaged, stressed configuration. Visco-elastic Robin boundary conditions are applied to the heart base and the epicardium to account for the embedding surrounding. We treat the 3D solid-0D fluid interaction as a strongly coupled monolithic problem, which is consistently linearized with respect to 3D solid and 0D fluid model variables to allow for a Newton-type solution procedure. The resulting coupled linear system of equations is solved iteratively in every Newton step using 2  ×  2 physics-based block preconditioning. Furthermore, we present novel efficient strategies for calibrating active contractile and vascular resistance parameters to experimental left ventricular pressure and stroke volume data gained in porcine experiments. Two exemplary states of cardiovascular condition are considered, namely, after application of vasodilatory beta blockers (BETA) and after injection of vasoconstrictive phenylephrine (PHEN). The parameter calibration to the specific individual and cardiovascular state at hand is performed using a 2-stage nonlinear multilevel method that uses a low-fidelity heart model to compute a parameter correction for the high-fidelity model optimization problem. 
We discuss 2 different low-fidelity model choices with respect to their ability to augment the parameter optimization. Because the periodic state conditions on the model (active stress, vascular pressures, and fluxes) are a priori unknown and also dependent on the parameters to be calibrated (and vice versa), we perform parameter calibration and periodic state condition estimation simultaneously. After a couple of heart beats, the calibration algorithm converges to a settled, periodic state because of conservation of blood volume within the closed-loop circulatory system. The proposed model and multilevel calibration method are cost-efficient and allow for an efficient determination of a patient-specific in silico heart model that reproduces physiological observations very well. Such an individual and state accurate model is an important predictive tool in intervention planning, assist device engineering and other medical applications. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Material Model Evaluation of a Composite Honeycomb Energy Absorber

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Annett, Martin S.; Fasanella, Edwin L.; Polanco, Michael A.

    2012-01-01

    A study was conducted to evaluate four different material models in predicting the dynamic crushing response of solid-element-based models of a composite honeycomb energy absorber, designated the Deployable Energy Absorber (DEA). Dynamic crush tests of three DEA components were simulated using the nonlinear, explicit transient dynamic code, LS-DYNA. In addition, a full-scale crash test of an MD-500 helicopter, retrofitted with DEA blocks, was simulated. The four material models used to represent the DEA included: *MAT_CRUSHABLE_FOAM (Mat 63), *MAT_HONEYCOMB (Mat 26), *MAT_SIMPLIFIED_RUBBER/FOAM (Mat 181), and *MAT_TRANSVERSELY_ANISOTROPIC_CRUSHABLE_FOAM (Mat 142). Test-analysis calibration metrics included simple percentage error comparisons of initial peak acceleration, sustained crush stress, and peak compaction acceleration of the DEA components. In addition, the Roadside Safety Verification and Validation Program (RSVVP) was used to assess similarities and differences between the experimental and analytical curves for the full-scale crash test.

  14. Biomechanics of North Atlantic Right Whale Bone: Mandibular Fracture as a Fatal Endpoint for Blunt Vessel-Whale Collision Modeling

    DTIC Science & Technology

    2007-09-01

    ... Calibration curves for CT number (Hounsfield units) vs. mineral density (g/cc) ... Figure 3.4. Calibration curves for CT number (Hounsfield units) vs. apparent density (g/cc) ... named Hounsfield units (HU) after Sir Godfrey Hounsfield. The CT number is K([μ - μw]/μw), where K = a magnifying constant, which depends on the make of CT

  15. Application of standard addition for the determination of carboxypeptidase activity in Actinomucor elegans bran koji.

    PubMed

    Fu, J; Li, L; Yang, X Q; Zhu, M J

    2011-01-01

    Leucine carboxypeptidase (EC 3.4.16) activity in Actinomucor elegans bran koji was investigated via absorbance at 507 nm after staining with Cd-ninhydrin solution, using calibration curve A, made from a set of standard leucine solutions of known concentration; calibration curve B, made from three sets of standard leucine solutions of known concentration with the addition of three concentrations of inactivated crude enzyme extract; and calibration curve C, made from three sets of standard leucine solutions of known concentration with the addition of three concentrations of active crude enzyme extract. The results indicated that a pure amino acid standard curve is not a suitable way to determine carboxypeptidase activity in a complicated mixture, and it probably leads to overestimated carboxypeptidase activity. Addition of crude extract to the pure amino acid standard curve gave results significantly different from the pure amino acid standard curve method (p < 0.05). There was no significant difference in enzyme activity (p > 0.05) between addition of active crude extract and addition of inactivated crude extract, when the proper dilution factor was used. It was concluded that addition of crude enzyme extract to the calibration is needed to eliminate the interference of free amino acids and related compounds present in the crude enzyme extract.
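The standard-addition principle underlying this record can be sketched in a few lines: spike the sample with known analyte amounts, regress signal on amount added, and read the original concentration off the x-intercept. The spike levels and absorbances below are invented.

```python
import numpy as np

added = np.array([0.0, 1.0, 2.0, 3.0])           # mock spiked leucine, mmol/L
signal = np.array([0.30, 0.50, 0.70, 0.90])      # mock absorbance at 507 nm

slope, intercept = np.polyfit(added, signal, 1)
c0 = intercept / slope                            # magnitude of the x-intercept
print(f"estimated sample concentration: {c0:.2f} mmol/L")
```

Because the spikes experience the same matrix as the analyte, matrix interferences (here, the free amino acids in the crude extract) are cancelled rather than inflating the result, which is the failure mode the record reports for a pure standard curve.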

  16. State-variable analysis of non-linear circuits with a desk computer

    NASA Technical Reports Server (NTRS)

    Cohen, E.

    1981-01-01

    State-variable analysis was used to analyze the transient performance of non-linear circuits on a desk-top computer. The non-linearities considered were not restricted to any particular circuit element. All that is required for analysis is that the relationship defining each non-linearity be known in terms of points on a curve.

  17. Artificial Neural Network and application in calibration transfer of AOTF-based NIR spectrometer

    NASA Astrophysics Data System (ADS)

    Wang, Wenbo; Jiang, Chengzhi; Xu, Kexin; Wang, Bin

    2002-09-01

    Chemometrics is widely applied to develop models for quantitative prediction of unknown samples in near-infrared (NIR) spectroscopy. However, calibrated models generally fail when new instruments are introduced or instrument parts are replaced. Therefore, calibration transfer becomes necessary to avoid costly, time-consuming recalibration of models. Piecewise direct standardization (PDS) has been proven to be a reference method for standardization. In this paper, an artificial neural network (ANN) is employed as an alternative to transfer spectra between instruments. Two acousto-optic tunable filter NIR spectrometers are employed in the experiment. Spectra of glucose solutions are collected on the spectrometers in transflectance mode. A two-layer backpropagation network is employed to model, piecewise, the function relating the instruments. The standardization subset is selected by the Kennard-Stone (K-S) algorithm in the space of the first two principal component analysis (PCA) scores of the spectra matrix. In the current experiment, obvious nonlinearity is noted between the instruments, and attempts are made to correct this nonlinear effect. Prediction results before and after successful calibration transfer are compared. Successful transfer can be achieved by adapting the window size and training parameters. The final results reveal that the ANN is effective in correcting the nonlinear instrumental difference, and only a 1.5-2 times larger prediction error is expected after successful transfer.

  18. A robust in-situ warp-correction algorithm for VISAR streak camera data at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.

    2015-02-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high energy density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for the use in a production environment.
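Thin-plate-spline warping of the kind this record describes is available in SciPy as an RBF kernel. As a sketch (with a made-up smooth distortion, not the NIF VISAR optics or algorithm), one can fit the TPS map from distorted comb positions back to their known ideal positions and then apply it to data coordinates:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(4)
ideal = rng.uniform(0.0, 100.0, size=(60, 2))    # known comb positions
warp = lambda p: p + 0.0005 * p ** 2             # mock smooth camera distortion
observed = warp(ideal)                            # where the comb lands on the image

# Fit a thin-plate-spline map from observed (distorted) to ideal coordinates
tps = RBFInterpolator(observed, ideal, kernel="thin_plate_spline")

pts = np.array([[25.0, 60.0], [70.0, 10.0]])     # mock data points to correct
corrected = tps(warp(pts))
err_before = np.abs(warp(pts) - pts).max()
err_after = np.abs(corrected - pts).max()
print(err_before, err_after)
```

Fitting the correction from a comb of fiducials, as the NIF in-situ fiber array provides, is what lets the warp model track slow drift in the streak-camera nonlinearities.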

  19. Parameter estimation procedure for complex non-linear systems: calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch.

    PubMed

    Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K

    2001-01-01

    When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and often hampered by local minima problems. In this paper a new straightforward and automatic procedure, which is based on the response surface method (RSM) for selecting the best identifiable parameters, is proposed. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. However, in this paper RSM is used for selecting the dominant parameters, by evaluating parameters sensitivity in a predefined region. Good results obtained in calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch proved that the proposed procedure is successful and reliable.

  20. The estimation of branching curves in the presence of subject-specific random effects.

    PubMed

    Elmi, Angelo; Ratcliffe, Sarah J; Guo, Wensheng

    2014-12-20

    Branching curves are a technique for modeling curves that change trajectory at a change (branching) point. Currently, the estimation framework is limited to independent data, and smoothing splines are used for estimation. This article aims to extend the branching curve framework to the longitudinal data setting where the branching point varies by subject. If the branching point is modeled as a random effect, then the longitudinal branching curve framework is a semiparametric nonlinear mixed effects model. Given existing issues with using random effects within a smoothing spline, we express the model as a B-spline based semiparametric nonlinear mixed effects model. Simple, clever smoothness constraints are enforced on the B-splines at the change point. The method is applied to Women's Health data where we model the shape of the labor curve (cervical dilation measured longitudinally) before and after treatment with oxytocin (a labor stimulant). Copyright © 2014 John Wiley & Sons, Ltd.

  1. Analysis of PVC plasticizers in medical devices and infused solutions by GC-MS.

    PubMed

    Bourdeaux, Daniel; Yessaad, Mouloud; Chennell, Philip; Larbre, Virginie; Eljezi, Teuta; Bernard, Lise; Sautou, Valerie

    2016-01-25

    In 2008, di-(2-ethylhexyl) phthalate (DEHP) was categorized as CMR 1B under the CLP regulations, and its use in PVC medical devices (MDs) was called into question by the European authorities. This resulted in the commercialization of PVC MDs plasticized with the DEHP alternative plasticizers tri-octyl trimellitate (TOTM), di-(2-ethylhexyl) terephthalate (DEHT), di-isononyl cyclohexane-1,2-dicarboxylate (DINCH), di-isononyl phthalate (DINP), di-(2-ethylhexyl) adipate (DEHA), and acetyl tri-n-butyl citrate (ATBC). The data available on the migration of these plasticizers from MDs are too limited to ensure their safe use. We therefore developed a versatile GC-MS method to identify and quantify both these newly used plasticizers and DEHP in MDs and to assess their migration in a simulant solution. The use of cubic calibration curves and the optimization of the analytical method with an experimental design allowed us to lower the limit of plasticizer quantification. It also allowed wide calibration curves to be established, adapted to quantification in MDs during migration tests irrespective of the amount present, while maintaining good precision and accuracy. We then tested the developed method on 32 PVC MDs used in our hospital and evaluated the plasticizer release from a PVC MD into a simulant solution during a 24 h migration test. The results showed a predominance of TOTM in PVC MDs, accompanied by DEHP (<0.1% w/w), DEHT, and sometimes DEHA. The migration tests showed differences in migration ability between the plasticizers and non-linear release kinetics. Copyright © 2015 Elsevier B.V. All rights reserved.
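A cubic calibration curve requires solving a cubic to go from measured response back to concentration. The sketch below shows the idea with numpy on invented standards and a made-up detector response (the record's actual curves and limits are in the paper): fit the cubic, then pick the real root inside the calibrated range.

```python
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])    # mock standards, ug/mL
area = 120.0 * conc + 0.8 * conc ** 2 - 0.004 * conc ** 3  # mock detector response

coef = np.polyfit(conc, area, deg=3)            # cubic calibration curve

def invert(measured_area):
    """Concentration whose fitted response equals measured_area."""
    roots = np.roots(np.polysub(coef, [measured_area]))
    real = roots.real[np.abs(roots.imag) < 1e-8]
    return real[(real >= 0.0) & (real <= conc.max())][0]

print(invert(np.polyval(coef, 7.5)))
```

Restricting the root to the calibrated range is what makes the inversion unambiguous; a cubic that is monotonic over the standards, as here, has exactly one such root.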

  2. INFLUENCE OF MATERIAL MODELS ON PREDICTING THE FIRE BEHAVIOR OF STEEL COLUMNS.

    PubMed

    Choe, Lisa; Zhang, Chao; Luecke, William E; Gross, John L; Varma, Amit H

    2017-01-01

    Finite-element (FE) analysis was used to compare the high-temperature responses of steel columns with two different stress-strain models: the Eurocode 3 model and the model proposed by the National Institute of Standards and Technology (NIST). The comparisons were made in three phases. The first phase compared the critical buckling temperatures predicted using forty-seven column data sets from five different laboratories. The slenderness ratios varied from 34 to 137, and the applied axial load was 20-60% of the room-temperature capacity. The results showed that the NIST model predicted the buckling temperature as or more accurately than the Eurocode 3 model for four of the five data sets. In the second phase, thirty unique FE models were developed to analyze the W8×35 and W14×53 column specimens with a slenderness ratio of about 70. The column specimens were tested under steady-heating conditions with target temperatures in the range of 300-600 °C. The models were developed by combining the material model, the temperature distributions in the specimens, and the numerical scheme for nonlinear analyses. Overall, the models with the NIST material properties and the measured temperature variations produced results comparable to the test data. The deviations between the two numerical approaches (modified Newton-Raphson vs. arc-length) were negligible. The Eurocode 3 model made conservative predictions of the behavior of the column specimens because its retained elastic moduli are smaller than those of the NIST model at elevated temperatures. In the third phase, the column curves calibrated using the NIST model were compared with those prescribed in ANSI/AISC-360 Appendix 4. The calibrated curve deviated significantly from the current design equation with increasing temperature, especially for slenderness ratios from 50 to 100.

  3. REVISITING EVIDENCE OF CHAOS IN X-RAY LIGHT CURVES: THE CASE OF GRS 1915+105

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mannattil, Manu; Gupta, Himanshu; Chakraborty, Sagar, E-mail: mmanu@iitk.ac.in, E-mail: hiugupta@iitk.ac.in, E-mail: sagarc@iitk.ac.in

    2016-12-20

    Nonlinear time series analysis has been widely used to search for signatures of low-dimensional chaos in light curves emanating from astrophysical bodies. A particularly popular example is the microquasar GRS 1915+105, whose irregular but systematic X-ray variability has been well studied using data acquired by the Rossi X-ray Timing Explorer. With a view to building simpler models of X-ray variability, attempts have been made to classify the light curves of GRS 1915+105 as chaotic or stochastic. Contrary to some of the earlier suggestions, after careful analysis, we find no evidence for chaos or determinism in any of the GRS 1915+105 classes. The dearth of long and stationary data sets representing all the different variability classes of GRS 1915+105 makes it a poor candidate for analysis using nonlinear time series techniques. We conclude that either very exhaustive data analysis with sufficiently long and stationary light curves should be performed, keeping all the pitfalls of nonlinear time series analysis in mind, or alternative schemes of classifying the light curves should be adopted. The generic limitations of the techniques that we point out in the context of GRS 1915+105 affect all similar investigations of light curves from other astrophysical sources.

  4. Nonlinear Growth Curves in Developmental Research

    PubMed Central

    Grimm, Kevin J.; Ram, Nilam; Hamagami, Fumiaki

    2011-01-01

    Developmentalists are often interested in understanding change processes and growth models are the most common analytic tool for examining such processes. Nonlinear growth curves are especially valuable to developmentalists because the defining characteristics of the growth process such as initial levels, rates of change during growth spurts, and asymptotic levels can be estimated. A variety of growth models are described beginning with the linear growth model and moving to nonlinear models of varying complexity. A detailed discussion of nonlinear models is provided, highlighting the added insights into complex developmental processes associated with their use. A collection of growth models are fit to repeated measures of height from participants of the Berkeley Growth and Guidance Studies from early childhood through adulthood. PMID:21824131
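    A minimal example of the interpretable parameters that make nonlinear growth curves attractive is the logistic curve, with asymptote K, growth rate r, and inflection time t0. The sketch below fits synthetic "height-like" data (not the Berkeley data) by a logit linearization plus a one-dimensional search over K, rather than full nonlinear least squares.

```python
import numpy as np

# Logistic growth sketch: parameters are directly interpretable
# (asymptote K, rate r, inflection time t0). Fitted via a logit
# transform, which is linear in t when K is correct, so we search K
# on a grid and linearly fit the rest. Data are synthetic.
def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.linspace(0.0, 20.0, 81)
y = logistic(t, K=170.0, r=0.4, t0=10.0)        # noise-free toy trajectory

def fit_logistic(t, y):
    best = None
    for K in np.linspace(y.max() + 0.5, 1.3 * y.max(), 400):
        z = np.log(y / (K - y))                  # logit transform
        r, c = np.polyfit(t, z, 1)               # z = r*t - r*t0
        resid = np.sum((y - logistic(t, K, r, -c / r)) ** 2)
        if best is None or resid < best[0]:
            best = (resid, K, r, -c / r)
    return best[1:]

K_hat, r_hat, t0_hat = fit_logistic(t, y)
```

    In practice one would use a nonlinear least-squares routine with standard errors; the grid-plus-linearization here only illustrates why each parameter maps onto a feature of the growth process.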

  5. A calibration method of infrared LVF based spectroradiometer

    NASA Astrophysics Data System (ADS)

    Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin

    2017-10-01

    In this paper, a calibration method for an LVF-based spectroradiometer is summarized, covering both spectral calibration and radiometric calibration. The spectral calibration proceeds as follows: first, the relationship between the stepping motor's step number and the transmitted wavelength is derived by theoretical calculation, including a nonlinearity correction for the LVF; second, a line-to-line method is used to correct the theoretical wavelength; finally, 3.39 μm and 10.69 μm lasers are used to validate the spectral calibration, showing that the sought accuracy of 0.1% or better is achieved. A new sub-region multi-point calibration method is used for the radiometric calibration to improve accuracy; results show that the sought accuracy of 1% or better is achieved.

  6. Calibrations between the variables of microbial TTI response and ground pork qualities.

    PubMed

    Kim, Eunji; Choi, Dong Yeol; Kim, Hyun Chul; Kim, Keehyuk; Lee, Seung Ju

    2013-10-01

    A time-temperature indicator (TTI) based on a lactic acid bacterium, Weissella cibaria CIFP009, was applied to ground pork packaging. Calibration curves between TTI response and pork qualities were obtained from storage tests at 2°C, 10°C, and 13°C. The curves of the TTI vs. total cell number at different temperatures coincided to the greatest extent, indicating the highest representativeness of calibration, by showing the least coefficient of variation (CV = 11%) of the quality variables at a given TTI response (titratable acidity) on the curves, followed by pH (23%), volatile basic nitrogen (VBN) (25%), and thiobarbituric acid-reactive substances (TBARS) (47%). Similarity of Arrhenius activation energy (Ea) could also reflect the representativeness of calibration. The total cell number (104.9 kJ/mol) was found to be the most similar to that of the TTI response (106.2 kJ/mol), followed by pH (113.6 kJ/mol), VBN (77.4 kJ/mol), and TBARS (55.0 kJ/mol). Copyright © 2013 Elsevier Ltd. All rights reserved.
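    The activation energies compared above come from the Arrhenius relation ln k = ln A - Ea/(RT), so Ea is the slope of ln k against 1/T times -R. The sketch below uses the three storage temperatures from the study but synthetic rate constants chosen to give Ea near the quoted TTI value; it is not the paper's data.

```python
import numpy as np

# Arrhenius sketch: estimate activation energy Ea from rate constants
# at the storage temperatures used above (2, 10, 13 °C). The
# pre-exponential factor and rates are illustrative.
R = 8.314                                    # J/(mol K)
T = np.array([2.0, 10.0, 13.0]) + 273.15     # K
Ea_true = 106.2e3                            # J/mol, matches the TTI value
k = 1e12 * np.exp(-Ea_true / (R * T))        # toy rate constants

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)  # ln k vs 1/T
Ea_est = -slope * R                          # J/mol
```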

  7. Flight calibration tests of a nose-boom-mounted fixed hemispherical flow-direction sensor

    NASA Technical Reports Server (NTRS)

    Armistead, K. H.; Webb, L. D.

    1973-01-01

    Flight calibrations of a fixed hemispherical flow angle-of-attack and angle-of-sideslip sensor were made from Mach numbers of 0.5 to 1.8. Maneuvers were performed by an F-104 airplane at selected altitudes to compare the measurement of flow angle of attack from the fixed hemispherical sensor with that from a standard angle-of-attack vane. The hemispherical flow-direction sensor measured differential pressure at two angle-of-attack ports and two angle-of-sideslip ports in diametrically opposed positions. Stagnation pressure was measured at a center port. The results of these tests showed that the calibration curves for the hemispherical flow-direction sensor were linear for angles of attack up to 13 deg. The overall uncertainty in determining angle of attack from these curves was plus or minus 0.35 deg or less. A Mach number position error calibration curve was also obtained for the hemispherical flow-direction sensor. The hemispherical flow-direction sensor exhibited a much larger position error than a standard uncompensated pitot-static probe.

  8. Preliminary calibration of the ACP safeguards neutron counter

    NASA Astrophysics Data System (ADS)

    Lee, T. H.; Kim, H. D.; Yoon, J. S.; Lee, S. Y.; Swinhoe, M.; Menlove, H. O.

    2007-10-01

    The Advanced Spent Fuel Conditioning Process (ACP), a kind of pyroprocess, has been developed at the Korea Atomic Energy Research Institute (KAERI). Since there are no IAEA safeguards criteria for this process, KAERI has developed a neutron coincidence counter to make it possible to perform material control and accounting (MC&A) for its ACP materials, for the purpose of transparency in the peaceful uses of nuclear materials at KAERI. The test results of the ACP Safeguards Neutron Counter (ASNC) show a satisfactory performance for the Doubles count measurement, with a low measurement error for its cylindrical sample cavity. The neutron detection efficiency is about 21%, with an error of ±1.32% along the axial direction of the cavity. Using two 252Cf neutron sources, we obtained various Singles and Doubles rate parameters for the ASNC. The Singles, Doubles, and Triples rates for a 252Cf point source were obtained using the MCNPX code, and the results for the ft8 cap multiplicity tally option, with the values of ɛ, fd, and ft measured with a strong source, most closely match the measurement results, to within a 1% error. A preliminary calibration curve for the ASNC was generated by using the point-model relationship between 244Cm and 252Cf; the calibration coefficient for a non-multiplying sample is 2.78×10⁵ Doubles counts/s per g of 244Cm. Preliminary calibration curves for the ACP samples were also obtained from an MCNPX simulation. A neutron multiplication influence on the increase of the Doubles rate for a metal ingot and UO2 powder is clearly observed. These calibration curves will be modified and complemented when hot calibration samples become available. To verify the validity of the calibration curve, a measurement of spent fuel standards with a known 244Cm mass will be performed in the near future.
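    For a non-multiplying sample, the calibration coefficient quoted above turns a measured Doubles rate directly into a 244Cm mass. The Doubles rate below is hypothetical, chosen only to show the arithmetic.

```python
# Point-model calibration sketch for a non-multiplying sample:
# mass = Doubles rate / calibration coefficient. The coefficient is
# the value quoted in the abstract; the measured rate is hypothetical.
coeff = 2.78e5                        # Doubles counts/s per g of 244Cm
doubles_rate = 1.39e3                 # measured Doubles counts/s (hypothetical)
mass_cm244 = doubles_rate / coeff     # g of 244Cm
```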

  9. Excitation power quantities in phase resonance testing of nonlinear systems with phase-locked-loop excitation

    NASA Astrophysics Data System (ADS)

    Peter, Simon; Leine, Remco I.

    2017-11-01

    Phase resonance testing is one method for the experimental extraction of nonlinear normal modes. This paper proposes a novel method for nonlinear phase resonance testing. First, the issue of appropriate excitation is approached on the basis of excitation power considerations: power quantities known from nonlinear systems theory in electrical engineering are transferred to nonlinear structural dynamics applications, and a new power-based nonlinear mode indicator function is derived, which is generally applicable, reliable, and easy to implement in experiments. Second, the tuning of the excitation phase is automated by the use of a phase-locked-loop controller. This provides a very user-friendly and fast way of obtaining the backbone curve. Furthermore, the method makes it possible to exploit specific advantages of phase control, such as robustness for lightly damped systems and the stabilization of unstable branches of the frequency response. The reduced tuning time for the excitation makes the commonly used free-decay measurements for the extraction of backbone curves unnecessary; instead, steady-state measurements are obtained for every point of the curve. In conjunction with the new mode indicator function, the correlation of every measured point with the associated nonlinear normal mode of the underlying conservative system can be evaluated. Moreover, it is shown that analysis of the excitation power helps to locate sources of inaccuracy in the force appropriation process. The method is illustrated by a numerical example, and its functionality in experiments is demonstrated on a benchmark beam structure.

  10. Neuromusculoskeletal model self-calibration for on-line sequential bayesian moment estimation

    NASA Astrophysics Data System (ADS)

    Bueno, Diana R.; Montano, L.

    2017-04-01

    Objective. Neuromusculoskeletal models involve many subject-specific physiological parameters that need to be adjusted to adequately represent muscle properties. Traditionally, neuromusculoskeletal models have been calibrated with a forward-inverse dynamic optimization, which is time-consuming and infeasible for rehabilitation therapy. No self-calibration algorithms have been applied to these models. To the best of our knowledge, the algorithm proposed in this work is the first on-line calibration algorithm for muscle models that allows a generic model to be adjusted to different subjects in a few steps. Approach. In this paper we propose a reformulation of the traditional muscle models that is able to sequentially estimate the kinetics (net joint moments) and also perform a full self-calibration (of the subject-specific internal parameters of the muscle, from a set of arbitrary uncalibrated data), based on the unscented Kalman filter. The nonlinearity of the model, as well as its calibration problem, has obliged us to adopt a sum-of-Gaussians filter suitable for nonlinear systems. Main results. This sequential Bayesian self-calibration algorithm achieves a complete muscle model calibration using as input only a dataset of uncalibrated sEMG and kinematics data. The approach is validated experimentally using data from the upper limbs of 21 subjects. Significance. The results show the feasibility of neuromusculoskeletal model self-calibration. This study will contribute to a better understanding of the generalization of muscle models for subject-specific rehabilitation therapies. Moreover, this work is very promising for rehabilitation devices such as electromyography-driven exoskeletons or prostheses.

  11. Exploring Alternative Characteristic Curve Approaches to Linking Parameter Estimates from the Generalized Partial Credit Model.

    ERIC Educational Resources Information Center

    Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill

    Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…

  12. A dose-response curve for biodosimetry from a 6 MV electron linear accelerator

    PubMed Central

    Lemos-Pinto, M.M.P.; Cadena, M.; Santos, N.; Fernandes, T.S.; Borges, E.; Amaral, A.

    2015-01-01

    Biological dosimetry (biodosimetry) is based on the investigation of radiation-induced biological effects (biomarkers), mainly dicentric chromosomes, in order to correlate them with radiation dose. To interpret the dicentric score in terms of absorbed dose, a calibration curve is needed. Each curve should be constructed with respect to basic physical parameters, such as the type of ionizing radiation characterized by low or high linear energy transfer (LET) and dose rate. This study was designed to obtain dose calibration curves by scoring of dicentric chromosomes in peripheral blood lymphocytes irradiated in vitro with a 6 MV electron linear accelerator (Mevatron M, Siemens, USA). Two software programs, CABAS (Chromosomal Aberration Calculation Software) and Dose Estimate, were used to generate the curve. The two software programs are discussed; the results obtained were compared with each other and with other published low LET radiation curves. Both software programs resulted in identical linear and quadratic terms for the curve presented here, which was in good agreement with published curves for similar radiation quality and dose rates. PMID:26445334
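    The linear and quadratic terms mentioned above refer to the standard low-LET dose-response model Y = C + αD + βD², where Y is the dicentric yield per cell and D the absorbed dose. The sketch below inverts that relation for dose; the coefficients are illustrative, not the study's fitted values.

```python
import numpy as np

# Linear-quadratic sketch for dicentric biodosimetry:
# Y = C + alpha*D + beta*D**2 (Y in dicentrics per cell, D in Gy).
# Coefficients are illustrative, not the fitted values from the study.
C, alpha, beta = 0.001, 0.02, 0.06     # per cell; Gy^-1; Gy^-2

def dose_from_yield(Y):
    """Invert Y = C + alpha*D + beta*D**2 for the dose D >= 0 (Gy)."""
    disc = alpha ** 2 + 4.0 * beta * (Y - C)
    return (-alpha + np.sqrt(disc)) / (2.0 * beta)

D_true = 1.5                            # Gy
Y_obs = C + alpha * D_true + beta * D_true ** 2
D_est = dose_from_yield(Y_obs)          # recovers D_true
```

    Taking the positive root of the quadratic is what guarantees a physically meaningful (non-negative) dose estimate.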

  13. Direct Breakthrough Curve Prediction From Statistics of Heterogeneous Conductivity Fields

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Haslauer, Claus P.; Cirpka, Olaf A.; Vesselinov, Velimir V.

    2018-01-01

    This paper presents a methodology to predict the shape of solute breakthrough curves in heterogeneous aquifers at early times and/or under high degrees of heterogeneity, both cases in which the classical macrodispersion theory may not be applicable. The methodology relies on the observation that breakthrough curves in heterogeneous media are generally well described by lognormal distributions, and mean breakthrough times can be predicted analytically. The log-variance of solute arrival is thus sufficient to completely specify the breakthrough curves, and this is calibrated as a function of aquifer heterogeneity and dimensionless distance from a source plane by means of Monte Carlo analysis and statistical regression. Using the ensemble of simulated groundwater flow and solute transport realizations employed to calibrate the predictive regression, reliability estimates for the prediction are also developed. Additional theoretical contributions include heuristics for the time until an effective macrodispersion coefficient becomes applicable, and also an expression for its magnitude that applies in highly heterogeneous systems. It is seen that the results here represent a way to derive continuous time random walk transition distributions from physical considerations rather than from empirical field calibration.
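    The claim that the mean breakthrough time plus the log-variance fully specify a lognormal breakthrough curve can be sketched directly. The mean arrival time and log-variance below are illustrative, not values from the paper's regression.

```python
import numpy as np

# Lognormal breakthrough-curve (BTC) sketch: mean arrival time t_mean
# and log-variance s2 fully specify the curve. Values are illustrative.
t_mean, s2 = 100.0, 0.25                  # mean arrival (days), log-variance
mu = np.log(t_mean) - s2 / 2.0            # lognormal mu so that E[t] = t_mean

def btc(t):
    """Lognormal arrival-time density."""
    return np.exp(-(np.log(t) - mu) ** 2 / (2 * s2)) / (t * np.sqrt(2 * np.pi * s2))

t = np.linspace(0.5, 600.0, 12000)
dt = t[1] - t[0]
mass = btc(t).sum() * dt                  # total recovered mass, ~1
mean_est = (t * btc(t)).sum() * dt        # recovered mean arrival, ~t_mean
```

    With the mean fixed analytically, calibrating the single remaining parameter s2 against heterogeneity statistics is what the Monte Carlo regression in the paper provides.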

  14. Nonlinear Acoustical Assessment of Precipitate Nucleation

    NASA Technical Reports Server (NTRS)

    Cantrell, John H.; Yost, William T.

    2004-01-01

    The purpose of the present work is to show that measurements of the acoustic nonlinearity parameter in heat treatable alloys as a function of heat treatment time can provide quantitative information about the kinetics of precipitate nucleation and growth in such alloys. Generally, information on the kinetics of phase transformations is obtained from time-sequenced electron microscopical examination and differential scanning microcalorimetry. The present nonlinear acoustical assessment of precipitation kinetics is based on the development of a multiparameter analytical model of the effects on the nonlinearity parameter of precipitate nucleation and growth in the alloy system. A nonlinear curve fit of the model equation to the experimental data is then used to extract the kinetic parameters related to the nucleation and growth of the targeted precipitate. The analytical model and curve fit is applied to the assessment of S' precipitation in aluminum alloy 2024 during artificial aging from the T4 to the T6 temper.

  15. Calibration improvements to electronically scanned pressure systems and preliminary statistical assessment

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.

    1996-01-01

    Orifice-to-orifice inconsistencies in data acquired with an electronically scanned pressure system at the beginning of a wind tunnel experiment forced modifications to the standard instrument calibration procedures. These modifications included a large increase in the number of calibration points, which allowed a critical examination of the calibration curve-fit process and a subsequent post-test reduction of the pressure data. Evaluation of these data has resulted in an improved functional representation of the pressure-voltage signature of electronically scanned pressure sensors, which can reduce the errors due to calibration curve fit to under 0.10 percent of reading, compared with the manufacturer-specified 0.10 percent of full scale. Application of the improved calibration function allows a more rational selection of the calibration set-point pressures: these pressures should be adjusted to achieve voltage outputs that match the physical shape of the pressure-voltage signature of the sensor, in lieu of the more traditional approach in which a calibration pressure is specified and the resulting sensor voltage is recorded. The fifteen calibrations acquired over the two-week duration of the wind tunnel test were further used to perform a preliminary statistical assessment of the variation in the calibration process. The results allowed estimation of the bias uncertainty for a single instrument calibration, and they form the precursor for more extensive and more controlled studies in the laboratory.
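    The practical difference between a percent-of-reading and a percent-of-full-scale error specification is easiest to see at low pressures. The numbers below are illustrative, using only the 0.10 percent figure quoted above.

```python
# Percent-of-full-scale vs percent-of-reading sketch: the same 0.10%
# specification implies very different error bounds at low pressure.
# Range and reading values are illustrative.
full_scale = 100.0                      # sensor range, e.g. kPa
reading = 5.0                           # a low-pressure measurement, kPa

err_full_scale = 0.0010 * full_scale    # 0.10% FS -> 0.100 kPa everywhere
err_of_reading = 0.0010 * reading       # 0.10% rdg -> 0.005 kPa here
improvement = err_full_scale / err_of_reading   # 20x tighter bound at 5 kPa
```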

  16. Fast and robust curve skeletonization for real-world elongated objects

    USDA-ARS?s Scientific Manuscript database

    These datasets were generated for calibrating robot-camera systems. In an extension, we also considered the problem of calibrating robots with more than one camera. These datasets are provided as a companion to the paper, "Solving the Robot-World Hand-Eye(s) Calibration Problem with Iterative Meth...

  17. Practical calibration curve of small-type optically stimulated luminescence (OSL) dosimeter for evaluation of entrance skin dose in the diagnostic X-ray region.

    PubMed

    Takegami, Kazuki; Hayashi, Hiroaki; Okino, Hiroki; Kimoto, Natsumi; Maehata, Itsumi; Kanazawa, Yuki; Okazaki, Tohru; Kobayashi, Ikuo

    2015-07-01

    For X-ray diagnosis, the proper management of the entrance skin dose (ESD) is important. Recently, a small-type optically stimulated luminescence dosimeter (nanoDot OSL dosimeter) was made commercially available by Landauer, and it is hoped that it will be used for ESD measurements in clinical settings. Our objectives in the present study were to propose a method for calibrating the ESD measured with the nanoDot OSL dosimeter and to evaluate its accuracy. The reference ESD is assumed to be based on an air kerma with consideration of a well-known backscatter factor. We examined the characteristics of the nanoDot OSL dosimeter under two experimental conditions: free air irradiation to derive the air kerma, and a phantom experiment to determine the ESD. To evaluate the ability to measure the ESD, a calibration curve for the nanoDot OSL dosimeter was determined in which the air kerma and/or the ESD measured with an ionization chamber were used as references. As a result, we found that the calibration curve for the air kerma was determined with an accuracy of 5%. Furthermore, the calibration curve was applied to the ESD estimation. The accuracy of the ESD obtained was estimated to be 15%. The origin of these uncertainties was examined based on published papers and Monte Carlo simulation. Most of the uncertainty was caused by the systematic uncertainty of the reading system and the differences in efficiency corresponding to different X-ray energies.

  18. Nonlinear gamma correction via normed bicoherence minimization in optical fringe projection metrology

    NASA Astrophysics Data System (ADS)

    Kamagara, Abel; Wang, Xiangzhao; Li, Sikun

    2018-03-01

    We propose a method to compensate for the projector intensity nonlinearity induced by the gamma effect in three-dimensional (3-D) fringe projection metrology by extending high-order spectral analysis and bispectral norm minimization to digital sinusoidal fringe pattern analysis. The bispectrum estimate allows extraction of vital signal information features, such as spectral component correlation relationships, in fringe pattern images. Our approach exploits the fact that gamma introduces high-order harmonic correlations in the affected fringe pattern image. Estimation and compensation of the projector nonlinearity are realized by detecting and minimizing the normed bispectral coherence of these correlations. The proposed technique does not require calibration information or technical knowledge of the fringe projection unit. This is promising for developing a modular, calibration-invariant model for nonlinear intensity (gamma) compensation in digital fringe pattern projection profilometry. Experimental and numerical simulation results demonstrate this method to be efficient and effective in improving the phase-measurement accuracy of phase-shifting fringe pattern projection profilometry.
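    The fact that gamma distortion creates higher-order harmonics in an ideally sinusoidal fringe, which is what the bispectral method above detects and minimizes, can be shown with a one-line spectrum check. The gamma value is illustrative; this is not the paper's bispectral estimator.

```python
import numpy as np

# Gamma-harmonics sketch: raising an ideal sinusoidal fringe to a gamma
# power introduces higher-order harmonics. We compare the 2nd-harmonic
# amplitude of the ideal and gamma-distorted fringes via the FFT.
N = 1024
phi = 2 * np.pi * np.arange(N) / N
fringe = 0.5 + 0.5 * np.cos(phi)          # ideal fringe: one harmonic
distorted = fringe ** 2.2                  # projector gamma distortion (illustrative)

spec_ideal = np.abs(np.fft.rfft(fringe)) / N
spec_gamma = np.abs(np.fft.rfft(distorted)) / N
h2_ideal, h2_gamma = spec_ideal[2], spec_gamma[2]   # 2nd-harmonic amplitudes
```

    The ideal fringe has essentially no second harmonic, while the distorted one does; correlations among such harmonics are precisely what the bispectrum quantifies.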

  19. Nonlinear model identification and spectral submanifolds for multi-degree-of-freedom mechanical vibrations

    NASA Astrophysics Data System (ADS)

    Szalai, Robert; Ehrhardt, David; Haller, George

    2017-06-01

    In a nonlinear oscillatory system, spectral submanifolds (SSMs) are the smoothest invariant manifolds tangent to linear modal subspaces of an equilibrium. Amplitude-frequency plots of the dynamics on SSMs provide the classic backbone curves sought in experimental nonlinear model identification. We develop here a methodology to compute analytically both the shape of SSMs and their corresponding backbone curves from a data-assimilating model fitted to experimental vibration signals. This model identification utilizes Takens's delay-embedding theorem, as well as a least-squares fit to the Taylor expansion of the sampling map associated with that embedding. The SSMs are then constructed for the sampling map using the parametrization method for invariant manifolds, which assumes that the manifold is an embedding of, rather than a graph over, a spectral subspace. Using examples of both synthetic and real experimental data, we demonstrate that this approach reproduces backbone curves with high accuracy.

  20. Recovery from nonlinear creep provides a window into physics of polymer glasses

    NASA Astrophysics Data System (ADS)

    Caruthers, James; Medvedev, Grigori

    Creep under constant applied stress is one of the most basic mechanical experiments, and in polymer glasses it exhibits extremely rich relaxation behavior. As many as five distinct stages of nonlinear creep are observed, in which the rate of creep dramatically slows down, accelerates, and then slows down again. Modeling efforts to date have primarily focused on predicting the intricacies of the nonlinear creep curve. We argue that as much attention should be paid to the creep recovery response, when the stress is removed. The experimental creep recovery curve is smooth: the rate of recovery is initially quite rapid and then progressively decreases. In contrast, the majority of traditional constitutive models predict recovery curves that are much too abrupt. A recently developed stochastic constitutive model that takes into account the dynamic heterogeneity of glasses produces a smooth creep recovery response consistent with experiment.

  1. Temperature dependence of nonlinear optical properties in Li doped nano-carbon bowl material

    NASA Astrophysics Data System (ADS)

    Li, Wei-qi; Zhou, Xin; Chang, Ying; Quan Tian, Wei; Sun, Xiu-Dong

    2013-04-01

    A mechanism for the change of nonlinear optical (NLO) properties with temperature is proposed for a nonlinear optical material, a Li-doped curved nano-carbon bowl. Four stable conformations of Li-doped corannulene were located, and their electronic properties were investigated in detail. The NLO response of these Li-doped conformations varies with the relative position of the dopant on the curved carbon surface of corannulene. Conversion among the conformations, which can be controlled by temperature, changes the NLO response of the bulk material. Thus, the conformational change of an alkali-metal-doped carbon nanomaterial with temperature rationalizes the variation of the NLO properties of these materials.

  2. Calibration curves for commercial copper and aluminum alloys using handheld laser-induced breakdown spectroscopy

    DOE PAGES

    Bennett, B. N.; Martin, M. Z.; Leonard, D. N.; ...

    2018-02-13

    Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a SciAps Z-500, commercially available, that contains a class-1 solid-state laser with an output wavelength of 1532 nm, a laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys, and data were collected from the samples' surface at three different locations, employing a 12-point grid pattern for each data set. All three data sets of spectra were averaged, and the intensity, corrected by background subtraction, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument's ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements at the subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.

  3. Calibration curves for commercial copper and aluminum alloys using handheld laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Bennett, B. N.; Martin, M. Z.; Leonard, D. N.; Garlea, E.

    2018-03-01

    Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a SciAps Z-500, commercially available, that contains a class-1 solid-state laser with an output wavelength of 1532 nm, laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys and data were collected from the samples' surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument's ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements related to subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.

  4. Calibration curves for commercial copper and aluminum alloys using handheld laser-induced breakdown spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, B. N.; Martin, M. Z.; Leonard, D. N.

    Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a SciAps Z-500, commercially available, that contains a class-1 solid-state laser with an output wavelength of 1532 nm, a laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys, and data were collected from the samples' surface at three different locations, employing a 12-point grid pattern for each data set. All three data sets of spectra were averaged, and the intensity, corrected by background subtraction, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument's ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements at the subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.

  5. Validation of an assay for quantification of free normetanephrine, metanephrine and methoxytyramine in plasma by high performance liquid chromatography with coulometric detection: Comparison of peak-area vs. peak-height measurements.

    PubMed

    Nieć, Dawid; Kunicki, Paweł K

    2015-10-01

    Measurements of plasma concentrations of free normetanephrine (NMN), metanephrine (MN) and methoxytyramine (MTY) constitute the most diagnostically accurate screening test for pheochromocytomas and paragangliomas. The aim of this article is to present the results from a validation of an analytical method utilizing high performance liquid chromatography with coulometric detection (HPLC-CD) for quantifying plasma free NMN, MN and MTY. Additionally, peak integration by height and by area, and the use of a single calibration curve for all batches versus an individual calibration curve for each batch of samples, were explored to determine the optimal approach with regard to accuracy and precision. The method was validated using charcoal-stripped plasma spiked with solutions of NMN, MN, MTY and internal standard (4-hydroxy-3-methoxybenzylamine), with the exception of selectivity, which was evaluated by analysis of real plasma samples. Calibration curve performance, accuracy, precision and recovery were determined following both peak-area and peak-height measurements, and the obtained results were compared. The most accurate and precise method of calibration was evaluated by analyzing quality control samples at three concentration levels in 30 analytical runs. The detector response was linear over the entire tested concentration range from 10 to 2000 pg/mL with R² ≥ 0.9988. The LLOQ was 10 pg/mL for each analyte of interest. To improve accuracy for measurements at low concentrations, a weighted (1/amount) linear regression model was employed, which resulted in inaccuracies of -2.48 to 9.78% and 0.22 to 7.81% following peak-area and peak-height integration, respectively. The imprecisions ranged from 1.07 to 15.45% and from 0.70 to 11.65% for peak-area and peak-height measurements, respectively. The optimal approach to calibration was the one utilizing an individual calibration curve for each batch of samples and peak-height measurements.
It was characterized by inaccuracies ranging from -3.39 to +3.27% and imprecisions from 2.17 to 13.57%. The established HPLC-CD method enables accurate and precise measurements of plasma free NMN, MN and MTY with reasonable selectivity. Preparing a calibration curve based on peak-height measurements for each batch of samples yields optimal accuracy and precision. Copyright © 2015. Published by Elsevier B.V.
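
    The weighted (1/amount) linear regression described above can be sketched as follows. The concentration levels and responses are invented for illustration; note that `np.polyfit`'s `w` argument weights the unsquared residuals, so passing 1/sqrt(x) gives squared-residual weights proportional to 1/x:

```python
import numpy as np

# Hypothetical calibration levels (pg/mL) spanning 10-2000, with a response
# whose noise grows with concentration -- the situation where 1/x weighting
# improves accuracy at the low end. Values are illustrative only.
conc = np.array([10.0, 50.0, 100.0, 500.0, 1000.0, 2000.0])
resp = np.array([0.52, 2.48, 5.10, 24.7, 50.5, 99.0])

# np.polyfit applies w to the unsquared residuals, so w = 1/sqrt(x) yields
# squared-residual weights of 1/x ("1/amount" weighting).
w = 1.0 / np.sqrt(conc)
slope, intercept = np.polyfit(conc, resp, 1, w=w)

def back_calc(signal):
    # Back-calculate concentration from a measured signal.
    return (signal - intercept) / slope

# Check low-end accuracy by back-calculating the lowest calibrator.
recovered = back_calc(resp[0])
```

    An unweighted fit would let the large high-concentration residuals dominate and bias the low-end back-calculations, which is exactly the inaccuracy the 1/amount weighting is meant to suppress.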

  6. Auto calibration of a cone-beam-CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gross, Daniel; Heil, Ulrich; Schulze, Ralf

    2012-10-15

    Purpose: This paper introduces a novel autocalibration method for cone-beam-CTs (CBCT) or flat-panel CTs, assuming a perfect rotation. The method is based on ellipse-fitting. Autocalibration refers to accurate recovery of the geometric alignment of a CBCT device from projection images alone, without any manual measurements. Methods: The authors use test objects containing small arbitrarily positioned radio-opaque markers. No information regarding the relative positions of the markers is used. In practice, the authors use three to eight metal ball bearings (diameter of 1 mm), e.g., positioned roughly in a vertical line such that their projection image curves on the detector preferably form large ellipses over the circular orbit. From this ellipse-to-curve mapping and also from its inversion the authors derive an explicit formula. Nonlinear optimization based on this mapping enables them to determine the six relevant parameters of the system up to the device rotation angle, which is sufficient to define the geometry of a CBCT-machine assuming a perfect rotational movement. These parameters also include out-of-plane rotations. The authors evaluate their method by simulation based on data used in two similar approaches [L. von Smekal, M. Kachelriess, E. Stepina, and W. A. Kalender, 'Geometric misalignment and calibration in cone-beam tomography,' Med. Phys. 31(12), 3242-3266 (2004); K. Yang, A. L. C. Kwan, D. F. Miller, and J. M. Boone, 'A geometric calibration method for cone beam CT systems,' Med. Phys. 33(6), 1695-1706 (2006)]. This allows a direct comparison of accuracy. Furthermore, the authors present real-world 3D reconstructions of a dry human spine segment and an electronic device. The reconstructions were computed from projections taken with a commercial dental CBCT device having two different focus-to-detector distances that were both calibrated with their method.
The authors compare their reconstruction with a reconstruction computed by the manufacturer of the CBCT device to demonstrate the achievable spatial resolution of their calibration procedure. Results: Compared to the results published in the most closely related work [K. Yang, A. L. C. Kwan, D. F. Miller, and J. M. Boone, 'A geometric calibration method for cone beam CT systems,' Med. Phys. 33(6), 1695-1706 (2006)], the simulation demonstrated the greater accuracy of their method, as well as a lower standard deviation of roughly 1 order of magnitude. When compared to another similar approach [L. von Smekal, M. Kachelriess, E. Stepina, and W. A. Kalender, 'Geometric misalignment and calibration in cone-beam tomography,' Med. Phys. 31(12), 3242-3266 (2004)], their results were roughly of the same order of accuracy. Their analysis revealed that the method is capable of sufficiently calibrating out-of-plane angles in cases of larger cone angles, where neglecting these angles negatively affects the reconstruction. Fine details in the 3D reconstruction of the spine segment and an electronic device indicate a high geometric calibration accuracy and the capability to produce state-of-the-art reconstructions. Conclusions: The method introduced here makes no requirements on the accuracy of the test object. In contrast to many previous autocalibration methods, their approach also includes out-of-plane rotations of the detector. Although assuming a perfect rotation, the method seems to be sufficiently accurate for a commercial CBCT scanner. For devices which require higher-dimensional geometry models, the method could be used as an initial calibration procedure.
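
    The core geometric step — least-squares fitting of an ellipse (a conic) to the projected marker trace — can be sketched as follows. This is a generic direct conic fit on synthetic data, not the authors' full calibration pipeline:

```python
import numpy as np

def fit_ellipse(x, y):
    """Fit a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 by least squares.
    The coefficient vector is the right singular vector of the design matrix
    with the smallest singular value (a standard direct conic fit; the
    ellipse-specific constraint is omitted here for brevity)."""
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]

# Synthetic projected marker trace over a circular orbit: an ellipse centered
# at (320, 240) px with semi-axes 200 and 50 px (illustrative geometry).
t = np.linspace(0.0, 2.0 * np.pi, 100)
x = 320.0 + 200.0 * np.cos(t)
y = 240.0 + 50.0 * np.sin(t)
coef = fit_ellipse(x, y)

# Every sampled point should satisfy the fitted conic to numerical precision.
D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
residual = np.abs(D @ coef).max()
```

    In the paper, the parameters of such fitted ellipses (for several markers) feed the nonlinear optimization that recovers the scanner geometry.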

  7. Can hydraulic-modelled rating curves reduce uncertainty in high flow data?

    NASA Astrophysics Data System (ADS)

    Westerberg, Ida; Lam, Norris; Lyon, Steve W.

    2017-04-01

    Flood risk assessments rely on accurate discharge data records. Establishing a reliable rating curve for calculating discharge from stage at a gauging station normally takes years of data collection efforts. Estimation of high flows is particularly difficult as high flows occur rarely and are often practically difficult to gauge. Hydraulically-modelled rating curves can be derived based on as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions. This means that a reliable rating curve can, potentially, be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. In this study we compared the uncertainty in discharge data that resulted from these two rating curve modelling approaches. We applied both methods to a Swedish catchment, accounting for uncertainties in the stage-discharge gauging and water-surface slope data for the hydraulic model and in the stage-discharge gauging data and rating-curve parameters for the traditional method. We focused our analyses on high-flow uncertainty and the factors that could reduce this uncertainty. In particular, we investigated which data uncertainties were most important, and at what flow conditions the gaugings should preferably be taken. First results show that the hydraulically-modelled rating curves were more sensitive to uncertainties in the calibration measurements of discharge than to those in water-surface slope. The uncertainty of the hydraulically-modelled rating curves was lowest within the range of the three calibration stage-discharge gaugings (i.e. between median and two-times median flow), whereas uncertainties were higher outside of this range. For instance, at the highest observed stage of the 24-year stage record, the 90% uncertainty band was -15% to +40% of the official rating curve. Additional gaugings at high flows (i.e. four to five times median flow) would likely substantially reduce those uncertainties.
These first results show the potential of the hydraulically-modelled curves, particularly where the calibration gaugings are of high quality and cover a wide range of flow conditions.
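
    The traditional approach referred to above fits a power-law rating curve Q = a(h - h0)^b to a handful of stage-discharge gaugings. A minimal sketch with synthetic gaugings (the parameter values and noise level are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def rating(h, a, h0, b):
    # Standard power-law rating curve: Q = a * (h - h0)^b,
    # with h0 the stage of zero flow.
    return a * (h - h0) ** b

# Synthetic stage-discharge gaugings: generated from known parameters
# (a=12, h0=0.2, b=1.8) with 2% multiplicative gauging noise.
rng = np.random.default_rng(0)
h = np.linspace(0.4, 1.5, 8)                       # stage (m)
Q_true = 12.0 * (h - 0.2) ** 1.8                   # discharge (m^3/s)
Q = Q_true * (1.0 + 0.02 * rng.standard_normal(h.size))

# Bounded fit keeps h0 below the lowest gauged stage so (h - h0) stays positive.
popt, _ = curve_fit(rating, h, Q, p0=[10.0, 0.1, 2.0],
                    bounds=([1.0, 0.0, 1.0], [100.0, 0.39, 4.0]))
```

    The study's point is that extrapolating such a fitted curve beyond the gauged range (to flood flows) carries large uncertainty, which either more high-flow gaugings or the hydraulic-model alternative can reduce.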

  8. A Robust In-Situ Warp-Correction Algorithm For VISAR Streak Camera Data at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.

    2015-01-12

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
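
    The thin-plate-spline warp idea can be illustrated with SciPy's `RBFInterpolator`, which supports a thin-plate-spline kernel: fiducials with known ideal positions (here a comb-like grid) are located in the distorted image, and the TPS mapping from observed to ideal coordinates undoes the distortion. The fiducial layout and distortion below are invented for illustration, not the NIF geometry:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Ideal (undistorted) fiducial positions on a 6x6 grid in normalized coords.
gx, gy = np.meshgrid(np.linspace(0.0, 1.0, 6), np.linspace(0.0, 1.0, 6))
ideal = np.column_stack([gx.ravel(), gy.ravel()])

def distort(p):
    # A smooth, invented nonlinear distortion standing in for camera warp.
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([x + 0.03 * y**2, y + 0.02 * np.sin(3.0 * x)])

observed = distort(ideal)

# TPS model mapping observed -> ideal coordinates; applying it to data
# coordinates removes the warp.
warp = RBFInterpolator(observed, ideal, kernel='thin_plate_spline')

# Correct two distorted test points that are not fiducials themselves.
test_pts = distort(np.array([[0.50, 0.50], [0.25, 0.75]]))
corrected = warp(test_pts)
```

    Because the TPS interpolant is exact at the fiducials and smooth between them, a sufficiently dense comb constrains the warp everywhere in the image.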

  9. Analysis of calibration data for the uranium active neutron coincidence counting collar with attention to errors in the measured neutron coincidence rate

    DOE PAGES

    Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...

    2015-12-10

    We report that the declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative approaches (linear) to the same experimental and corresponding simulated representative datasets. Lastly, we find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
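
    The two fitting routes compared here can be sketched with a generic Padé-type response y = ax/(1 + bx). The data below are synthetic; the linearized route (1/y is linear in 1/x) illustrates the paper's caution that the transform also transforms the error structure:

```python
import numpy as np
from scipy.optimize import curve_fit

def pade(x, a, b):
    # Generic saturating Pade-type calibration model: y = a*x / (1 + b*x).
    return a * x / (1.0 + b * x)

# Synthetic calibration data with additive noise on the response
# (true parameters a=4, b=0.02; values are illustrative only).
rng = np.random.default_rng(1)
x = np.linspace(5.0, 60.0, 10)
y_true = pade(x, 4.0, 0.02)
y = y_true + 0.5 * rng.standard_normal(x.size)

# Route 1: direct nonlinear fit (the traditional approach).
(a_nl, b_nl), _ = curve_fit(pade, x, y, p0=[1.0, 0.01])

# Route 2: linearized fit -- 1/y = (1/a)*(1/x) + b/a is linear in 1/x,
# but taking reciprocals re-weights the noise, inflating low-x errors.
slope, intercept = np.polyfit(1.0 / x, 1.0 / y, 1)
a_lin = 1.0 / slope
b_lin = intercept * a_lin
```

    With well-behaved noise both routes recover similar parameters; the paper's finding is that with realistic errors in the predictor, the untransformed nonlinear fit is preferable.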

  10. Stability of Gradient Field Corrections for Quantitative Diffusion MRI.

    PubMed

    Rogers, Baxter P; Blaber, Justin; Welch, E Brian; Ding, Zhaohua; Anderson, Adam W; Landman, Bennett A

    2017-02-11

    In magnetic resonance diffusion imaging, gradient nonlinearity causes significant bias in the estimation of quantitative diffusion parameters such as diffusivity, anisotropy, and diffusion direction in areas away from the magnet isocenter. This bias can be substantially reduced if the scanner- and coil-specific gradient field nonlinearities are known. Using a set of field map calibration scans on a large (29 cm diameter) phantom combined with a solid harmonic approximation of the gradient fields, we predicted the obtained b-values and applied gradient directions throughout a typical field of view for brain imaging for a typical 32-direction diffusion imaging sequence. We measured the stability of these predictions over time. At 80 mm from scanner isocenter, predicted b-values differed from the intended values by 1-6% due to gradient nonlinearity, and predicted gradient directions were in error by up to 1 degree. Over the course of one month the change in these quantities due to calibration-related factors such as scanner drift and variation in phantom placement was <0.5% for b-values, and <0.5 degrees for angular deviation. The proposed calibration procedure allows the estimation of gradient nonlinearity to correct b-values and gradient directions ahead of advanced diffusion image processing for high angular resolution data, and requires only a five-minute phantom scan that can be included in a weekly or monthly quality assurance protocol.
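
    The b-value and direction corrections follow from applying the local gradient-coil tensor to the nominal gradient: the actual gradient is L(r) g, so the b-value scales by |L g|² and the direction rotates toward L g. A minimal sketch with an invented tensor (the real L(r) comes from the solid-harmonic field model):

```python
import numpy as np

# Illustrative gradient-coil tensor L(r) at a voxel away from isocenter
# (identity at isocenter); the entries here are invented, not measured.
L = np.array([[1.03, 0.01, 0.00],
              [0.00, 0.98, 0.02],
              [0.01, 0.00, 1.01]])

b_nominal = 1000.0                                  # s/mm^2
g_nominal = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)  # unit gradient direction

# Actual gradient, corrected b-value, and corrected (unit) direction.
g_actual = L @ g_nominal
scale = float(np.dot(g_actual, g_actual))           # |L g|^2 for a unit g
b_actual = b_nominal * scale
g_dir = g_actual / np.linalg.norm(g_actual)

# Angular deviation between nominal and actual gradient directions.
cosang = np.clip(np.dot(g_dir, g_nominal), -1.0, 1.0)
angle_deg = float(np.degrees(np.arccos(cosang)))
```

    Applying this per-voxel correction to every diffusion direction before tensor or HARDI fitting removes the spatial bias the abstract quantifies.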

  11. Photogrammetric Modeling and Image-Based Rendering for Rapid Virtual Environment Creation

    DTIC Science & Technology

    2004-12-01

    area and different methods have been proposed. Pertinent methods include: Camera Calibration, Structure from Motion, Stereo Correspondence, and Image...Based Rendering 1.1.1 Camera Calibration Determining the 3D structure of a model from multiple views becomes simpler if the intrinsic (or internal...can introduce significant nonlinearities into the image. We have found that camera calibration is a straightforward process which can simplify the

  12. Design and calibration of a scanning tunneling microscope for large machined surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grigg, D.A.; Russell, P.E.; Dow, T.A.

    During the last year the large-sample STM has been designed, built and used for the observation of several different samples. Calibration of the scanner for proper dimensional interpretation of surface features has been a chief concern, as well as corrections for nonlinear effects such as hysteresis during scans. Several procedures used in the calibration and correction of the piezoelectric scanners used in the laboratory's STMs are described.

  13. Classical black holes: the nonlinear dynamics of curved spacetime.

    PubMed

    Thorne, Kip S

    2012-08-03

    Numerical simulations have revealed two types of physical structures, made from curved spacetime, that are attached to black holes: tendexes, which stretch or squeeze anything they encounter, and vortexes, which twist adjacent inertial frames relative to each other. When black holes collide, their tendexes and vortexes interact and oscillate (a form of nonlinear dynamics of curved spacetime). These oscillations generate gravitational waves, which can give kicks up to 4000 kilometers per second to the merged black hole. The gravitational waves encode details of the spacetime dynamics and will soon be observed and studied by the Laser Interferometer Gravitational Wave Observatory and its international partners.

  14. Classical Black Holes: The Nonlinear Dynamics of Curved Spacetime

    NASA Astrophysics Data System (ADS)

    Thorne, Kip S.

    2012-08-01

    Numerical simulations have revealed two types of physical structures, made from curved spacetime, that are attached to black holes: tendexes, which stretch or squeeze anything they encounter, and vortexes, which twist adjacent inertial frames relative to each other. When black holes collide, their tendexes and vortexes interact and oscillate (a form of nonlinear dynamics of curved spacetime). These oscillations generate gravitational waves, which can give kicks up to 4000 kilometers per second to the merged black hole. The gravitational waves encode details of the spacetime dynamics and will soon be observed and studied by the Laser Interferometer Gravitational Wave Observatory and its international partners.

  15. Calibrant-Free Analyte Quantitation via a Variable Velocity Flow Cell.

    PubMed

    Beck, Jason G; Skuratovsky, Aleksander; Granger, Michael C; Porter, Marc D

    2017-01-17

    In this paper, we describe a novel method for analyte quantitation that does not rely on calibrants, internal standards, or calibration curves but, rather, leverages the relationship between disparate and predictable surface-directed analyte flux to an array of sensing addresses and a measured resultant signal. To reduce this concept to practice, we fabricated two flow cells such that the mean linear fluid velocity, U, was varied systematically over an array of electrodes positioned along the flow axis. This resulted in a predictable variation of the address-directed flux of a redox analyte, ferrocenedimethanol (FDM). The resultant limiting currents measured at a series of these electrodes, and accurately described by a convective-diffusive transport model, provided a means to calculate an "unknown" concentration without the use of calibrants, internal standards, or a calibration curve. Furthermore, the experiment and concentration calculation only takes minutes to perform. Deviation in calculated FDM concentrations from true values was minimized to less than 0.5% when empirically derived values of U were employed.
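
    The calibrant-free idea can be sketched for a Levich-type channel electrode, where the limiting current scales with the cube root of the mean linear velocity and the proportionality constant is computable from transport theory rather than from calibrants. All numbers below — currents, velocities, and the transport constant K — are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical limiting currents (uA) measured for one analyte concentration
# at several mean linear velocities U (cm/s). A Levich-type transport model
# predicts i_lim = K * C * U**(1/3), with K computable from cell geometry
# and the diffusion coefficient -- no calibrant or calibration curve needed.
U = np.array([0.5, 1.0, 2.0, 4.0])
i_lim = np.array([2.52, 3.18, 4.00, 5.05])

# Assumed first-principles transport constant, uA per (mM * (cm/s)^(1/3)).
K = 4.0

# Through-origin least-squares slope of i_lim vs U^(1/3) gives K*C;
# dividing out K yields the concentration directly.
xs = U ** (1.0 / 3.0)
slope = float(np.dot(xs, i_lim) / np.dot(xs, xs))
C = slope / K
```

    The essential point matches the abstract: because the velocity-dependence of the flux is predictable, the set of currents over-determines the single unknown C, so no internal standard or calibration curve is required.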

  16. 40 CFR 89.319 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...

  17. 40 CFR 89.319 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...

  18. 40 CFR 89.319 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...

  19. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  20. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  1. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  2. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  3. Reflection and Transmission of a Focused Finite Amplitude Sound Beam Incident on a Curved Interface

    NASA Astrophysics Data System (ADS)

    Makin, Inder Raj Singh

    Reflection and transmission of a finite amplitude focused sound beam at a weakly curved interface separating two fluid-like media are investigated. The KZK parabolic wave equation, which accounts for thermoviscous absorption, diffraction, and nonlinearity, is used to describe the high intensity focused beam. The first part of the work deals with the quasilinear analysis of a weakly nonlinear beam after its reflection and transmission from a curved interface. A Green's function approach is used to define the field integrals describing the primary and the nonlinearly generated second harmonic beam. Closed-form solutions are obtained for the primary and second harmonic beams when a Gaussian amplitude distribution at the source is assumed. The second part of the research uses a numerical frequency domain solution of the KZK equation for a fully nonlinear analysis of the reflected and transmitted fields. Both piston and Gaussian sources are considered. Harmonic components generated in the medium due to propagation of the focused beam are evaluated, and formation of shocks in the reflected and transmitted beams is investigated. A finite amplitude focused beam is observed to be modified due to reflection and transmission from a curved interface in a manner distinct from that in the case of a small signal beam. Propagation curves, beam patterns, phase plots and time waveforms for various parameters defining the source and media pairs are presented, highlighting the effect of the interface curvature on the reflected and transmitted beams. Relevance of the current work to biomedical applications of ultrasound is discussed.

  4. Cloned plasmid DNA fragments as calibrators for controlling GMOs: different real-time duplex quantitative PCR methods.

    PubMed

    Taverniers, Isabel; Van Bockstaele, Erik; De Loose, Marc

    2004-03-01

    Analytical real-time PCR technology is a powerful tool for implementation of the GMO labeling regulations enforced in the EU. The quality of analytical measurement data obtained by quantitative real-time PCR depends on the correct use of calibrator and reference materials (RMs). For GMO methods of analysis, the choice of appropriate RMs is currently under debate. So far, genomic DNA solutions from certified reference materials (CRMs) are most often used as calibrators for GMO quantification by means of real-time PCR. However, due to some intrinsic features of these CRMs, errors may be expected in the estimations of DNA sequence quantities. In this paper, two new real-time PCR methods are presented for Roundup Ready soybean, in which two types of plasmid DNA fragments are used as calibrators. Single-target plasmids (STPs) diluted in a background of genomic DNA were used in the first method. Multiple-target plasmids (MTPs) containing both sequences in one molecule were used as calibrators for the second method. Both methods simultaneously detect a promoter 35S sequence as GMO-specific target and a lectin gene sequence as endogenous reference target in a duplex PCR. For the estimation of relative GMO percentages both "delta C(T)" and "standard curve" approaches are tested. Delta C(T) methods are based on direct comparison of measured C(T) values of both the GMO-specific target and the endogenous target. Standard curve methods measure absolute amounts of target copies or haploid genome equivalents. A duplex delta C(T) method with STP calibrators performed at least as well as a similar method with genomic DNA calibrators from commercial CRMs. Besides this, high quality results were obtained with a standard curve method using MTP calibrators. This paper demonstrates that plasmid DNA molecules containing either one or multiple target sequences form perfect alternative calibrators for GMO quantification and are especially suitable for duplex PCR reactions.
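
    A delta-Ct style computation of relative GMO content against a calibrator of known content can be sketched as follows, assuming equal (100%) amplification efficiencies for both targets. The Ct values and calibrator content are invented for illustration:

```python
# Hypothetical duplex qPCR Ct values for the p35S (GMO-specific) and lectin
# (endogenous reference) targets, for a calibrator of certified GMO content
# and an unknown sample. Assumes amplification efficiency E = 2 (100%).
ct = {
    "calibrator": {"p35S": 28.6, "lectin": 24.1, "gmo_pct": 1.0},
    "unknown":    {"p35S": 26.3, "lectin": 24.0},
}

def delta_ct(sample):
    # Normalize the GMO-specific target against the endogenous target.
    return sample["p35S"] - sample["lectin"]

# Delta-delta-Ct: the unknown's GMO ratio relative to the calibrator's,
# scaled by the calibrator's certified GMO percentage.
ddct = delta_ct(ct["unknown"]) - delta_ct(ct["calibrator"])
gmo_pct = ct["calibrator"]["gmo_pct"] * 2.0 ** (-ddct)
```

    The standard-curve alternative described in the abstract instead converts each Ct to absolute copy numbers via dilution-series curves for both targets, then takes their ratio.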

  5. Diameter Growth Models for Inventory Applications

    Treesearch

    Ronald E. McRoberts; Christopher W. Woodall; Veronica C. Lessard; Margaret R. Holdaway

    2002-01-01

    Distance-independent, individual-tree diameter growth models were constructed to update information for forest inventory plots measured in previous years. The models are nonlinear in the parameters and were calibrated using weighted nonlinear least squares techniques and forest inventory plot data. Analyses of residuals indicated that model predictions compare favorably to...

  6. Aero-Thermal Calibration of the NASA Glenn Icing Research Tunnel (2012 Tests)

    NASA Technical Reports Server (NTRS)

    Pastor-Barsi, Christine; Allen, Arrington E.

    2013-01-01

    A full aero-thermal calibration of the NASA Glenn Icing Research Tunnel (IRT) was completed in 2012 following the major modifications to the facility that included replacement of the refrigeration plant and heat exchanger. The calibration test provided data used to fully document the aero-thermal flow quality in the IRT test section and to construct calibration curves for the operation of the IRT.

  7. Precise positioning method for multi-process connecting based on binocular vision

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Ding, Lichao; Zhao, Kai; Li, Xiao; Wang, Ling; Jia, Zhenyuan

    2016-01-01

    With the rapid development of aviation and aerospace, the demand for metal-coated parts such as antenna reflectors, eddy-current sensors and signal transmitters is increasingly urgent. Such parts with varied feature dimensions, complex three-dimensional structures, and high geometric accuracy are generally fabricated by the combination of different manufacturing technologies. However, it is difficult to ensure the machining precision because of the connection error between different processing methods. Therefore, a precise positioning method is proposed based on binocular micro stereo vision in this paper. Firstly, a novel and efficient camera calibration method for stereoscopic microscopes is presented to address the problems of a narrow field of view, small depth of focus and strong nonlinear distortions. Secondly, the extraction algorithms for law curves and free curves are given, and the spatial position relationship between the micro vision system and the machining system is determined accurately. Thirdly, a precise positioning system based on micro stereo vision is set up and then embedded in a CNC machining experiment platform. Finally, the verification experiment of the positioning accuracy is conducted, and the experimental results indicate that the average errors of the proposed method in the X and Y directions are 2.250 μm and 1.777 μm, respectively.

  8. Calibration of amino acid racemization (AAR) kinetics in United States mid-Atlantic Coastal Plain Quaternary mollusks using 87Sr/ 86Sr analyses: Evaluation of kinetic models and estimation of regional Late Pleistocene temperature history

    USGS Publications Warehouse

    Wehmiller, J.F.; Harris, W.B.; Boutin, B.S.; Farrell, K.M.

    2012-01-01

    The use of amino acid racemization (AAR) for estimating ages of Quaternary fossils usually requires a combination of kinetic and effective temperature modeling or independent age calibration of analyzed samples. Because of limited availability of calibration samples, age estimates are often based on model extrapolations from single calibration points over wide ranges of D/L values. Here we present paired AAR and 87Sr/86Sr results for Pleistocene mollusks from the North Carolina Coastal Plain, USA. 87Sr/86Sr age estimates, derived from the lookup table of McArthur et al. [McArthur, J.M., Howarth, R.J., Bailey, T.R., 2001. Strontium isotopic stratigraphy: LOWESS version 3: best fit to the marine Sr-isotopic curve for 0-509 Ma and accompanying Look-up table for deriving numerical age. Journal of Geology 109, 155-169], provide independent age calibration over the full range of amino acid D/L values, thereby allowing comparisons of alternative kinetic models for seven amino acids. The often-used parabolic kinetic model is found to be insufficient to explain the pattern of racemization, although the kinetic pathways for valine racemization and isoleucine epimerization can be closely approximated with this function. Logarithmic and power law regressions more accurately represent the racemization pathways for all amino acids. The reliability of a non-linear model for leucine racemization, developed and refined over the past 20 years, is confirmed by the 87Sr/86Sr age results. This age model indicates that the subsurface record (up to 80 m thick) of the North Carolina Coastal Plain spans the entire Quaternary, back to ~2.5 Ma. The calibrated kinetics derived from this age model yield an estimate of the effective temperature for the study region of 11 ± 2 °C, from which we estimate full glacial (Last Glacial Maximum - LGM) temperatures for the region on the order of 7-10 °C cooler than present.
These temperatures compare favorably with independent paleoclimate information for the region. © 2011 Elsevier B.V.
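
    The often-used parabolic kinetic model mentioned above takes D/L proportional to the square root of time, so a single independently dated sample calibrates the rate constant and any other D/L value can be converted to an age. The D/L values and calibration age below are hypothetical:

```python
# Parabolic kinetic model sketch: D/L = sqrt(k * t), hence t = (D/L)^2 / k.
# Calibrate k from one independently (e.g. Sr-isotope) dated sample, then
# date other samples from their D/L values. Numbers are hypothetical.
dl_cal = 0.35       # D/L of the calibration sample
t_cal = 120e3       # its independent age, in years

k = dl_cal**2 / t_cal   # rate constant implied by the calibration point

def age_parabolic(dl):
    # Invert the parabolic model to turn a measured D/L into an age.
    return dl**2 / k

t_est = age_parabolic(0.50)
```

    The abstract's finding is precisely that this single-point extrapolation is inadequate for most amino acids, which is why the Sr-calibrated logarithmic and power-law fits are preferred.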

  9. Water content determination of superdisintegrants by means of ATR-FTIR spectroscopy.

    PubMed

    Szakonyi, G; Zelkó, R

    2012-04-07

    Water contents of superdisintegrant pharmaceutical excipients were determined by attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy using simple linear regression. Water contents of the three investigated common superdisintegrants (crospovidone, croscarmellose sodium, sodium starch glycolate) varied over a wide range (0-24%, w/w). In the case of crospovidone, three different samples from two manufacturers were examined in order to study the effects of different grades on the calibration curves. Water content determinations were based on the strong absorption of water between 3700 and 2800 cm⁻¹; other spectral changes, associated with the different compaction of samples on the ATR crystal under the same pressure, were followed in the infrared region between 1510 and 1050 cm⁻¹. The calibration curves were constructed using the ratio of absorbance intensities in the two investigated regions. Using appropriate baseline correction, the linearity of the calibration curves was maintained over the entire investigated water content range, and the effect of particle size on the calibration was not significant in the case of crospovidones from the same manufacturer. The described method enables the water content determination of powdered hygroscopic materials containing homogeneously distributed water. Copyright © 2012 Elsevier B.V. All rights reserved.
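
    A calibration of the kind described — regressing the water-band/reference-band absorbance ratio on gravimetric water content, then inverting the line for unknowns — can be sketched as follows with invented values:

```python
import numpy as np

# Hypothetical ATR-FTIR calibration: ratio of integrated water-band absorbance
# (3700-2800 cm^-1) to a compaction-sensitive reference band (1510-1050 cm^-1),
# versus gravimetric water content (% w/w). All values are illustrative.
water_pct = np.array([0.0, 4.0, 8.0, 12.0, 16.0, 20.0, 24.0])
ratio = np.array([0.05, 0.31, 0.58, 0.83, 1.10, 1.37, 1.62])

# Simple linear regression of the band ratio on water content.
slope, intercept = np.polyfit(water_pct, ratio, 1)

def predict_water(r):
    # Invert the calibration line to estimate water content from a spectrum.
    return (r - intercept) / slope

estimate = predict_water(0.70)
```

    Ratioing against the reference band is what cancels the compaction-dependent contact variation on the ATR crystal, which is why the calibration stays linear across the range.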

  10. Monitoring of toxic elements present in sludge of industrial waste using CF-LIBS.

    PubMed

    Kumar, Rohit; Rai, Awadhesh K; Alamelu, Devanathan; Aggarwal, Suresh K

    2013-01-01

    Industrial waste is one of the main causes of environmental pollution. Laser-induced breakdown spectroscopy (LIBS) was applied to detect toxic metals in the sludge of industrial waste water. Sludge on filter paper was obtained after filtering waste water samples collected from different sections of a water treatment plant situated in an industrial area of Kanpur City. The LIBS spectra of the sludge samples were recorded in the spectral range of 200 to 500 nm by focusing the laser light on the sludge. The calibration-free LIBS (CF-LIBS) technique was used for the quantitative measurement of toxic elements such as Cr and Pb present in the sample. We also used the traditional calibration curve approach to quantify these elements. The results obtained from CF-LIBS are in good agreement with the results from the calibration curve approach. Thus, our results demonstrate that CF-LIBS is an appropriate technique for quantitative analysis where reference/standard samples are not available to construct a calibration curve. The results of the present experiment are alarming for people living in areas near such industrial activity, as the concentrations of toxic elements are quite high compared to the admissible limits of these substances.
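
At the core of CF-LIBS is the Boltzmann plot: for emission lines of one species, ln(Iλ/(gA)) falls on a straight line whose slope gives the plasma temperature. A self-contained sketch with synthetic line data (the energies and temperature are illustrative):

```python
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

# Synthetic lines of one species: upper-level energies E (eV) and the
# corresponding intensity term I*lambda/(g*A), generated at an assumed
# plasma temperature of 10 000 K.
E = np.array([2.0, 3.1, 4.4, 5.2])
T_true = 10000.0
I_term = 5.0 * np.exp(-E / (K_B_EV * T_true))

# Boltzmann plot: ln(I*lambda/(g*A)) versus E is a line of slope -1/(k_B*T).
slope, intercept = np.polyfit(E, np.log(I_term), 1)
T_est = -1.0 / (K_B_EV * slope)
```

In a full CF-LIBS analysis, the intercepts of such plots for every detected species, combined with a closure relation, yield relative element concentrations without reference standards.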

  11. Effects of Uncertainties in Hydrological Modelling. A Case Study of a Mountainous Catchment in Southern Norway

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjorn; Steinsland, Ingelin

    2016-04-01

    The aim of this study is to investigate how the inclusion of uncertainties in inputs and observed streamflow influences the parameter estimation, streamflow predictions and model evaluation. In particular, we wanted to answer the following research questions: • What is the effect of including a random error in the precipitation and temperature inputs? • What is the effect of decreased information about precipitation by excluding the nearest precipitation station? • What is the effect of the uncertainty in streamflow observations? • What is the effect of reduced information about the true streamflow by using a rating curve where the measurements of the highest and lowest streamflows are excluded when estimating the rating curve? To answer these questions, we designed a set of calibration experiments and evaluation strategies. We used the elevation distributed HBV model operating on daily time steps, combined with a Bayesian formulation and the MCMC routine Dream for parameter inference. The uncertainties in inputs were represented by creating ensembles of precipitation and temperature. The precipitation ensembles were created using a meta-Gaussian random field approach. The temperature ensembles were created using 3D Bayesian kriging with random sampling of the temperature lapse rate. The streamflow ensembles were generated by a Bayesian multi-segment rating curve model. Precipitation and temperatures were randomly sampled for every day, whereas the streamflow ensembles were generated from rating curve ensembles, and the same rating curve was always used for the whole time series in a calibration or evaluation run. We chose a catchment with a meteorological station measuring precipitation and temperature, and a rating curve of relatively high quality. This allowed us to investigate and further test the effect of having less information on precipitation and streamflow during model calibration, predictions and evaluation. 
The results showed that including uncertainty in the precipitation and temperature input has a negligible effect on the posterior distribution of parameters and on the Nash-Sutcliffe (NS) efficiency of the predicted flows, while the reliability and the continuous rank probability score (CRPS) improve. Reduced information in the precipitation input resulted in a shift in the water balance parameter Pcorr and a model producing smoother streamflow predictions, giving poorer NS and CRPS but higher reliability. The effect of calibrating the hydrological model using wrong rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions obtained using a wrong rating curve, the evaluation scores vary depending on the true rating curve. Generally, the best evaluation scores were achieved not for the rating curve used for calibration, but for rating curves giving low variance in the streamflow observations. Reduced information in streamflow influenced the water balance parameter Pcorr and increased the spread in evaluation scores, giving both better and worse scores. This case study shows that estimating the water balance is challenging, since both precipitation inputs and streamflow observations have pronounced systematic components in their uncertainties.

  12. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Applications

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1998-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.
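
The report's key point, that expanding the unknown map on fixed basis functions makes the parameter-estimation step linear, can be illustrated with a 1-D curve fit. The two-scale Gaussian basis below is an assumption chosen for illustration; the report does not prescribe this particular basis:

```python
import numpy as np

# Unknown 1-D nonlinear system sampled on a grid.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x) + 0.5 * x**2

# Two-resolution Gaussian basis: 5 coarse bumps plus 17 fine bumps.
centers = np.concatenate([np.linspace(0, 1, 5), np.linspace(0, 1, 17)])
widths = np.concatenate([np.full(5, 0.25), np.full(17, 0.06)])
Phi = np.exp(-(((x[:, None] - centers[None, :]) / widths[None, :]) ** 2))

# Because the basis is fixed, the coefficients follow from LINEAR least squares.
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ theta

# Less-significant basis functions can then be identified and pruned,
# here by coefficient magnitude, to obtain a compact model.
keep = np.abs(theta) > 1e-3 * np.abs(theta).max()
```

The coarse resolution captures the broad shape and the fine resolution the detail, while the estimation problem stays a single linear solve.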

  13. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Application

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1997-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.

  14. An extended CFD model to predict the pumping curve in low pressure plasma etch chamber

    NASA Astrophysics Data System (ADS)

    Zhou, Ning; Wu, Yuanhao; Han, Wenbin; Pan, Shaowu

    2014-12-01

    A continuum-based CFD model is extended with a slip-wall approximation and a rarefaction correction to viscosity, in an attempt to predict the pumping flow characteristics in low pressure plasma etch chambers. The flow regime inside the chamber ranges from slip flow (Kn ≈ 0.01) up to free molecular flow (Kn ≈ 10). The momentum accommodation coefficient and the parameters of the Kn-modified viscosity are first calibrated against one measured pumping curve. The validity of this calibrated CFD model is then demonstrated by comparison with additional pumping curves measured in chambers of different geometry configurations. A more detailed comparison against a DSMC model for flow conductance over slits with contraction and expansion sections is also discussed.
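
A minimal sketch of the two quantities named in the abstract, the Knudsen number and a Kn-modified viscosity; the damping form and the constant `a` are illustrative assumptions, not the paper's calibrated values:

```python
def knudsen(mean_free_path, length_scale):
    """Knudsen number: ratio of molecular mean free path to a characteristic
    chamber dimension; Kn ~ 0.01 marks the onset of slip flow."""
    return mean_free_path / length_scale

def effective_viscosity(mu0, kn, a=2.0):
    """Rarefaction-corrected viscosity using the common damping form
    mu_eff = mu0 / (1 + a*Kn); the constant a stands in for the parameter
    the paper calibrates against a measured pumping curve."""
    return mu0 / (1.0 + a * kn)
```

As Kn grows toward the free-molecular regime, the effective viscosity is damped, which is what lets a continuum solver mimic reduced momentum transfer at low pressure.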

  15. [Investigation of quantitative detection of water quality using spectral fluorescence signature].

    PubMed

    He, Jun-hua; Cheng, Yong-jin; Han, Yan-ling; Zhang, Hao; Yang, Tao

    2008-08-01

    A method of spectral analysis, which can simultaneously detect dissolved organic matter (DOM) and chlorophyll a (Chl-a) in natural water, was developed in the present paper with the intention of monitoring water quality fast and quantitatively. Firstly, the total luminescence spectra (TLS) of water samples from East Lake in Wuhan city were measured by the use of laser (532 nm) induced fluorescence (LIF). There were obvious peaks of relative intensity at wavelengths of 580, 651 and 687 nm in the TLS of the sample, which correspond respectively to the spectrum of DOM, the Raman scattering of water, and Chl-a in the water. Then the spectral fluorescence signature (SFS) technique was adopted to analyze and distinguish the spectral characteristics of DOM and Chl-a in natural water. The calibration curves and function expressions, which relate the normalized fluorescence intensities of DOM and Chl-a in water to their concentrations, were obtained under the condition of low concentration (< 40 mg L⁻¹) by using normalization to the Raman scattering spectrum of water. The curves show high linearity. When the concentration of the humic acid solution is large (> 40 mg L⁻¹), the Raman scattering signal is totally absorbed by ground-state humic acid molecules, so the normalization technique cannot be adopted. However, the function expression between the concentration of the humic acid solution and its relative fluorescence peak intensity can be acquired directly from the fluorescence spectrum experiment. It is concluded that although the expression is nonlinear as a whole, there is an excellent linear relation between the fluorescence intensity and the concentration of DOM when the concentration is less than 200 mg L⁻¹. The method of measurement based on the spectral fluorescence signature technique and the calibration curves obtained has broad application prospects. 
It can rapidly identify pollutants and quantitatively determine their contents in water, making real-time, dynamic monitoring of natural water quality over large areas feasible.

  16. Application of Least-Squares Adjustment Technique to Geometric Camera Calibration and Photogrammetric Flow Visualization

    NASA Technical Reports Server (NTRS)

    Chen, Fang-Jenq

    1997-01-01

    Flow visualization produces data in the form of two-dimensional images. If the optical components of a camera system are perfect, the transformation equations between the two-dimensional image and the three-dimensional object space are linear and easy to solve. However, real camera lenses introduce nonlinear distortions that affect the accuracy of the transformation unless proper corrections are applied. An iterative least-squares adjustment algorithm is developed to solve the nonlinear transformation equations incorporating distortion corrections. Experimental applications demonstrate that a relative precision on the order of one part in 40,000 is achievable without tedious laboratory calibrations of the camera.
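
The distortion-correction step can be sketched with the common radial model r_d = r(1 + k1·r²), inverted by fixed-point iteration. This particular model and coefficient are an illustration, not the paper's full set of transformation equations:

```python
def undistort_radius(r_d, k1, iterations=20):
    """Invert the radial distortion model r_d = r * (1 + k1 * r**2) by
    fixed-point iteration; k1 is an illustrative distortion coefficient."""
    r = r_d  # initial guess: the distorted radius itself
    for _ in range(iterations):
        r = r_d / (1.0 + k1 * r * r)
    return r
```

Round-tripping a radius through the forward model and this inverse is a quick check that the iteration has converged for the distortion level at hand.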

  17. Correction for isotopic interferences between analyte and internal standard in quantitative mass spectrometry by a nonlinear calibration function.

    PubMed

    Rule, Geoffrey S; Clark, Zlatuse D; Yue, Bingfang; Rockwood, Alan L

    2013-04-16

    Stable isotope-labeled internal standards are of great utility in providing accurate quantitation in mass spectrometry (MS). An implicit assumption has been that there is no "cross talk" between signals of the internal standard and the target analyte. In some cases, however, naturally occurring isotopes of the analyte do contribute to the signal of the internal standard. This phenomenon becomes more pronounced for isotopically rich compounds, such as those containing sulfur, chlorine, or bromine, higher molecular weight compounds, and those at high analyte/internal standard concentration ratio. This can create nonlinear calibration behavior that may bias quantitative results. Here, we propose the use of a nonlinear but more accurate fitting of data for these situations that incorporates one or two constants determined experimentally for each analyte/internal standard combination and an adjustable calibration parameter. This fitting provides more accurate quantitation in MS-based assays where contributions from analyte to stable labeled internal standard signal exist. It can also correct for the reverse situation where an analyte is present in the internal standard as an impurity. The practical utility of this approach is described, and by using experimental data, the approach is compared to alternative fits.
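
The nonlinearity described here can be sketched with a simple cross-talk model: if a fraction f of the analyte signal leaks into the internal-standard channel, the measured response ratio becomes R = a·x/(1 + f·x) in the true concentration ratio x. This form and its constants are an illustration, not the authors' exact fitting function:

```python
import numpy as np

# Synthetic data from the cross-talk model R = a*x / (1 + f*x).
a_true, f_true = 2.0, 0.05
x = np.array([0.1, 0.5, 1.0, 5.0, 10.0])
R = a_true * x / (1.0 + f_true * x)

# The model linearizes: 1/R = (1/a)*(1/x) + f/a, so an ordinary linear
# regression of 1/R on 1/x recovers both constants.
slope, intercept = np.polyfit(1.0 / x, 1.0 / R, 1)
a_est = 1.0 / slope
f_est = intercept * a_est
```

The hyperbolic flattening at high analyte/internal-standard ratios is exactly the calibration curvature the abstract warns would bias results if fit with a straight line.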

  18. External validation and clinical utility of a prediction model for 6-month mortality in patients undergoing hemodialysis for end-stage kidney disease.

    PubMed

    Forzley, Brian; Er, Lee; Chiu, Helen Hl; Djurdjev, Ognjenka; Martinusen, Dan; Carson, Rachel C; Hargrove, Gaylene; Levin, Adeera; Karim, Mohamud

    2018-02-01

    End-stage kidney disease is associated with poor prognosis. Health care professionals must be prepared to address end-of-life issues and identify those at high risk for dying. A 6-month mortality prediction model for patients on dialysis derived in the United States is used but has not been externally validated. We aimed to assess the external validity and clinical utility in an independent cohort in Canada. We examined the performance of the published 6-month mortality prediction model, using discrimination, calibration, and decision curve analyses. Data were derived from a cohort of 374 prevalent dialysis patients in two regions of British Columbia, Canada, which included serum albumin, age, peripheral vascular disease, dementia, and answers to the "surprise question" ("Would I be surprised if this patient died within the next year?"). The observed mortality in the validation cohort was 11.5% at 6 months. The prediction model had reasonable discrimination (c-stat = 0.70) but poor calibration (calibration-in-the-large = -0.53 (95% confidence interval: -0.88, -0.18); calibration slope = 0.57 (95% confidence interval: 0.31, 0.83)) in our data. Decision curve analysis showed the model only has added value in guiding clinical decisions in a small range of threshold probabilities: 8%-20%. Despite reasonable discrimination, the prediction model has poor calibration in this external study cohort; thus, it may have limited clinical utility in settings outside of where it was derived. Decision curve analysis clarifies limitations in clinical utility not apparent by receiver operating characteristic curve analysis. This study highlights the importance of external validation of prediction models prior to routine use in clinical practice.
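
Calibration-in-the-large and the calibration slope reported above are both coefficients of logistic models on the logit of the predicted risk. A sketch with synthetic, well-calibrated data (so the slope should come out near 1 and both intercepts near 0):

```python
import numpy as np

def fit_logistic(X, y, offset=None, iters=50):
    """Newton-Raphson fit of a logistic regression with an optional fixed offset."""
    off = np.zeros(len(y)) if offset is None else offset
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ beta + off)))
        w = p * (1.0 - p)
        beta = beta + np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y - p))
    return beta

# Synthetic predicted risks and outcomes drawn so the model IS well calibrated.
rng = np.random.default_rng(1)
p_pred = rng.uniform(0.02, 0.6, 2000)
y = (rng.uniform(size=2000) < p_pred).astype(float)
lp = np.log(p_pred / (1.0 - p_pred))  # logit of predicted risk

# Calibration slope: coefficient of lp in a refitted logistic model.
intercept, slope = fit_logistic(np.column_stack([np.ones_like(lp), lp]), y)
# Calibration-in-the-large: intercept with the slope fixed at 1 (lp as offset).
citl = fit_logistic(np.ones((len(y), 1)), y, offset=lp)[0]
```

A slope well below 1 (0.57 in the study) means the model's predictions are too extreme for the new population, and a negative calibration-in-the-large means it overestimates average risk.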

  19. Automatic exposure control calibration and optimisation for abdomen, pelvis and lumbar spine imaging with an Agfa computed radiography system.

    PubMed

    Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Joshi, H; Saunderson, J R; Beavis, A W

    2016-11-07

    The use of three physical image quality metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm) have recently been examined by our group for their appropriateness in the calibration of an automatic exposure control (AEC) device for chest radiography with an Agfa computed radiography (CR) imaging system. This study uses the same methodology but investigates AEC calibration for abdomen, pelvis and spine CR imaging. AEC calibration curves were derived using a simple uniform phantom (equivalent to 20 cm water) to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated abdomen, pelvis and spine images (created from real patient CT datasets) with appropriate detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated images contained clinically realistic projected anatomy and were scored by experienced image evaluators. Constant DDI and CNR curves did not provide optimized performance but constant eNEQm and SNR did, with the latter being the preferred calibration metric given that it is easier to measure in practice. This result was consistent with the previous investigation for chest imaging with AEC devices. Medical physicists may therefore use a simple and easily accessible uniform water equivalent phantom to measure the SNR image quality metric described here when calibrating AEC devices for abdomen, pelvis and spine imaging with Agfa CR systems, in the confidence that clinical image quality will be sufficient for the required clinical task. However, to ensure appropriate levels of detector air kerma the advice of expert image evaluators must be sought.
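
The SNR metric used for this AEC calibration can be measured from a uniform-phantom image as the mean pixel value over its standard deviation in a region of interest. A minimal sketch (real AEC work uses linearized, corrected pixel data):

```python
import numpy as np

def roi_snr(image, r0, r1, c0, c1):
    """SNR of a uniform-phantom region of interest: mean pixel value divided
    by its standard deviation (a sketch; clinical AEC calibration uses
    linearized, flat-field-corrected pixel data)."""
    roi = np.asarray(image, dtype=float)[r0:r1, c0:c1]
    return roi.mean() / roi.std()
```

For a Poisson-limited flat field of mean N counts, the measured SNR should approach sqrt(N), which is what makes a constant-SNR AEC curve a proxy for constant quantum noise across tube voltages.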

  20. Automatic exposure control calibration and optimisation for abdomen, pelvis and lumbar spine imaging with an Agfa computed radiography system

    NASA Astrophysics Data System (ADS)

    Moore, C. S.; Wood, T. J.; Avery, G.; Balcam, S.; Needler, L.; Joshi, H.; Saunderson, J. R.; Beavis, A. W.

    2016-11-01

    The use of three physical image quality metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm) have recently been examined by our group for their appropriateness in the calibration of an automatic exposure control (AEC) device for chest radiography with an Agfa computed radiography (CR) imaging system. This study uses the same methodology but investigates AEC calibration for abdomen, pelvis and spine CR imaging. AEC calibration curves were derived using a simple uniform phantom (equivalent to 20 cm water) to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated abdomen, pelvis and spine images (created from real patient CT datasets) with appropriate detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated images contained clinically realistic projected anatomy and were scored by experienced image evaluators. Constant DDI and CNR curves did not provide optimized performance but constant eNEQm and SNR did, with the latter being the preferred calibration metric given that it is easier to measure in practice. This result was consistent with the previous investigation for chest imaging with AEC devices. Medical physicists may therefore use a simple and easily accessible uniform water equivalent phantom to measure the SNR image quality metric described here when calibrating AEC devices for abdomen, pelvis and spine imaging with Agfa CR systems, in the confidence that clinical image quality will be sufficient for the required clinical task. However, to ensure appropriate levels of detector air kerma the advice of expert image evaluators must be sought.

  1. Calibration of GafChromic XR-RV3 radiochromic film for skin dose measurement using standardized x-ray spectra and a commercial flatbed scanner.

    PubMed

    McCabe, Bradley P; Speidel, Michael A; Pike, Tina L; Van Lysel, Michael S

    2011-04-01

    In this study, newly formulated XR-RV3 GafChromic film was calibrated with National Institute of Standards and Technology (NIST) traceability for measurement of patient skin dose during fluoroscopically guided interventional procedures. The film was calibrated free-in-air to air kerma levels between 15 and 1100 cGy using four moderately filtered x-ray beam qualities (60, 80, 100, and 120 kVp). The calibration films were scanned with a commercial flatbed document scanner. Film reflective density-to-air kerma calibration curves were constructed for each beam quality, with both the orange and white sides facing the x-ray source. A method to correct for nonuniformity in scanner response (up to 25% depending on position) was developed to enable dose measurement with large films. The response of XR-RV3 film under patient backscattering conditions was examined using on-phantom film exposures and Monte Carlo simulations. The response of XR-RV3 film to a given air kerma depended on kVp and film orientation. For a 200 cGy air kerma exposure with the orange side of the film facing the source, the film response increased by 20% from 60 to 120 kVp. At 500 cGy, the increase was 12%. When 500 cGy exposures were performed with the white side facing the x-ray source, the film response increased by 4.0% (60 kVp) to 9.9% (120 kVp) compared to the orange-facing orientation. On-phantom film measurements and Monte Carlo simulations show that using a NIST-traceable free-in-air calibration curve to determine air kerma in the presence of backscatter results in an error from 2% up to 8% depending on beam quality. The combined uncertainty in the air kerma measurement from the calibration curves and scanner nonuniformity correction was ±7.1% (95% C.I.). The film showed notable stability. Calibrations of film and scanner separated by 1 yr differed by 1.0%. XR-RV3 radiochromic film response to a given air kerma shows dependence on beam quality and film orientation. 
The presence of backscatter slightly modifies the x-ray energy spectrum; however, the increase in film response can be attributed primarily to the increase in total photon fluence at the sensitive layer. Film calibration curves created under free-in-air conditions may be used to measure dose from fluoroscopic quality x-ray beams, including patient backscatter with an error less than the uncertainty of the calibration in most cases.
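
A film calibration curve of the kind described maps the scanned film response back to air kerma, one curve per beam quality. The power-law form and numbers below are illustrative assumptions, not the paper's fitted curves:

```python
import numpy as np

# Synthetic calibration films: net optical density at known air kermas (cGy),
# generated from an assumed power-law response netOD = 0.04 * K**0.62.
kerma = np.array([15.0, 50.0, 120.0, 300.0, 600.0, 1100.0])
net_od = 0.04 * kerma ** 0.62

# Fit the inverse curve K = a * netOD**b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(net_od), np.log(kerma), 1)
a = np.exp(log_a)

def kerma_from_od(od):
    """Air kerma (cGy) predicted from a measured net optical density."""
    return a * od ** b
```

In practice one such curve would be built per kVp and film orientation, since the record shows the response shifts by up to 20% across beam qualities.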

  2. Application of composite small calibration objects in traffic accident scene photogrammetry.

    PubMed

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.
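
The two-dimensional direct linear transformation underlying the method estimates a plane-to-plane homography from point correspondences by linear least squares; a minimal sketch of that baseline step (the paper's contribution is to refine it over the calibration points of all composite objects jointly):

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst points with the
    two-dimensional direct linear transformation (DLT), solved as a
    linear least-squares problem via SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1.0, 0.0, 0.0, 0.0, -u * x, -u * y, -u])
        A.append([0.0, 0.0, 0.0, x, y, 1.0, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(A))
    return vt[-1].reshape(3, 3)  # right singular vector of smallest value

def apply_homography(H, pt):
    """Map a 2-D point through a homography (homogeneous normalization)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

With at least four non-degenerate correspondences the SVD gives the least-squares homography, which can then be used to rectify the accident-scene image.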

  3. X-ray Diffraction Crystal Calibration and Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael J. Haugh; Richard Stewart; Nathan Kugland

    2009-06-05

    National Security Technologies’ X-ray Laboratory is comprised of a multi-anode Manson type source and a Henke type source that incorporates a dual goniometer and XYZ translation stage. The first goniometer is used to isolate a particular spectral band. The Manson operates up to 10 kV and the Henke up to 20 kV. The Henke rotation stages and translation stages are automated. Procedures have been developed to characterize and calibrate various NIF diagnostics and their components. The diagnostics include X-ray cameras, gated imagers, streak cameras, and other X-ray imaging systems. Components that have been analyzed include filters, filter arrays, grazing incidence mirrors, and various crystals, both flat and curved. Recent efforts on the Henke system are aimed at characterizing and calibrating imaging crystals and curved crystals used as the major component of an X-ray spectrometer. The presentation will concentrate on these results. The work has been done at energies ranging from 3 keV to 16 keV. The major goal was to evaluate the performance quality of the crystal for its intended application. For the imaging crystals we measured the laser beam reflection offset from the X-ray beam and the reflectivity curves. For the curved spectrometer crystal, which was a natural crystal, resolving power was critical. It was first necessary to find sources of crystals that had sufficiently narrow reflectivity curves. It was then necessary to determine which crystals retained their resolving power after being thinned and glued to a curved substrate.

  4. Bilinear modelling of cellulosic orthotropic nonlinear materials

    Treesearch

    E.P. Saliklis; T. J. Urbanik; B. Tokyay

    2003-01-01

    The proposed method of modelling orthotropic solids that have a nonlinear constitutive material relationship affords several advantages. The first advantage is the application of a simple bilinear stress-strain curve to represent the material response on two orthogonal axes as well as in shear, even for markedly nonlinear materials. The second advantage is that this...

  5. Nonlinear dynamic analysis of cantilevered piezoelectric energy harvesters under simultaneous parametric and external excitations

    NASA Astrophysics Data System (ADS)

    Fang, Fei; Xia, Guanghui; Wang, Jianguo

    2018-02-01

    The nonlinear dynamics of cantilevered piezoelectric beams is investigated under simultaneous parametric and external excitations. The beam is composed of a substrate and two piezoelectric layers and assumed as an Euler-Bernoulli model with inextensible deformation. A nonlinear distributed parameter model of cantilevered piezoelectric energy harvesters is proposed using the generalized Hamilton's principle. The proposed model includes geometric and inertia nonlinearity, but neglects the material nonlinearity. Using the Galerkin decomposition method and harmonic balance method, analytical expressions of the frequency-response curves are presented when the first bending mode of the beam plays a dominant role. Using these expressions, we investigate the effects of the damping, load resistance, electromechanical coupling, and excitation amplitude on the frequency-response curves. We also study the difference between the nonlinear lumped-parameter and distributed-parameter model for predicting the performance of the energy harvesting system. Only in the case of parametric excitation, we demonstrate that the energy harvesting system has an initiation excitation threshold below which no energy can be harvested. We also illustrate that the damping and load resistance affect the initiation excitation threshold.
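
The harmonic-balance step yields the steady-state amplitude as a root of a cubic in a². A sketch for the classical Duffing oscillator with linear damping (parameter values are illustrative; the paper's model also carries electromechanical coupling terms):

```python
import numpy as np

def duffing_amplitudes(omega, omega0=1.0, zeta=0.01, gamma=0.1, force=0.02):
    """Steady-state amplitude(s), by first-harmonic balance, of
    x'' + 2*zeta*omega0*x' + omega0**2*x + gamma*x**3 = force*cos(omega*t).
    The amplitude a solves a cubic in u = a**2:
    (d + 0.75*gamma*u)**2 * u + (2*zeta*omega0*omega)**2 * u = force**2,
    with detuning d = omega0**2 - omega**2."""
    d = omega0**2 - omega**2
    c = 0.75 * gamma
    poly = [c**2, 2.0 * c * d, d**2 + (2.0 * zeta * omega0 * omega) ** 2, -force**2]
    roots = np.roots(poly)
    u = roots[(np.abs(roots.imag) < 1e-9) & (roots.real > 0.0)].real
    return np.sort(np.sqrt(u))
```

Scanning omega and keeping all positive real roots traces the bent frequency-response curve, including the multivalued region responsible for jump phenomena.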

  6. Nonlinear dynamic analysis of cantilevered piezoelectric energy harvesters under simultaneous parametric and external excitations

    NASA Astrophysics Data System (ADS)

    Fang, Fei; Xia, Guanghui; Wang, Jianguo

    2018-06-01

    The nonlinear dynamics of cantilevered piezoelectric beams is investigated under simultaneous parametric and external excitations. The beam is composed of a substrate and two piezoelectric layers and assumed as an Euler-Bernoulli model with inextensible deformation. A nonlinear distributed parameter model of cantilevered piezoelectric energy harvesters is proposed using the generalized Hamilton's principle. The proposed model includes geometric and inertia nonlinearity, but neglects the material nonlinearity. Using the Galerkin decomposition method and harmonic balance method, analytical expressions of the frequency-response curves are presented when the first bending mode of the beam plays a dominant role. Using these expressions, we investigate the effects of the damping, load resistance, electromechanical coupling, and excitation amplitude on the frequency-response curves. We also study the difference between the nonlinear lumped-parameter and distributed-parameter model for predicting the performance of the energy harvesting system. Only in the case of parametric excitation, we demonstrate that the energy harvesting system has an initiation excitation threshold below which no energy can be harvested. We also illustrate that the damping and load resistance affect the initiation excitation threshold.

  7. Nuclear Gauge Calibration and Testing Guidelines for Hawaii

    DOT National Transportation Integrated Search

    2006-12-15

    Project proposal brief: AASHTO and ASTM nuclear gauge testing procedures can lead to misleading density and moisture readings for certain Hawaiian soils. Calibration curves need to be established for these unique materials, along with clear standard ...

  8. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    NASA Astrophysics Data System (ADS)

    Winkler, A. W.; Zagar, B. G.

    2013-08-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.

  9. Nonlinear acoustic techniques for landmine detection.

    PubMed

    Korman, Murray S; Sabatier, James M

    2004-12-01

    Measurements of the top surface vibration of a buried (inert) VS 2.2 anti-tank plastic landmine reveal significant resonances in the frequency range between 80 and 650 Hz. Resonances from measurements of the normal component of the acoustically induced soil surface particle velocity (due to sufficient acoustic-to-seismic coupling) have been used in detection schemes. Since the interface between the top plate and the soil responds nonlinearly to pressure fluctuations, characteristics of landmines, the soil, and the interface are rich in nonlinear physics and allow for a method of buried landmine detection not previously exploited. Tuning curve experiments (revealing "softening" and a back-bone curve linear in particle velocity amplitude versus frequency) help characterize the nonlinear resonant behavior of the soil-landmine oscillator. The results appear to exhibit the characteristics of nonlinear mesoscopic elastic behavior, which is explored. When two primary waves f1 and f2 drive the soil over the mine near resonance, a rich spectrum of nonlinearly generated tones is measured with a geophone on the surface over the buried landmine in agreement with Donskoy [SPIE Proc. 3392, 221-217 (1998); 3710, 239-246 (1999)]. In profiling, particular nonlinear tonals can improve the contrast ratio compared to using either primary tone in the spectrum.

  10. Identification of nonlinear modes using phase-locked-loop experimental continuation and normal form

    NASA Astrophysics Data System (ADS)

    Denis, V.; Jossic, M.; Giraud-Audine, C.; Chomette, B.; Renault, A.; Thomas, O.

    2018-06-01

    In this article, we address the model identification of nonlinear vibratory systems, with a specific focus on systems modeled with distributed nonlinearities, such as geometrically nonlinear mechanical structures. The proposed strategy theoretically relies on the concept of nonlinear modes of the underlying conservative unforced system and the use of normal forms. Within this framework, it is shown that without internal resonance, a valid reduced order model for a nonlinear mode is a single Duffing oscillator. We then propose an efficient experimental strategy to measure the backbone curve of a particular nonlinear mode and we use it to identify the free parameters of the reduced order model. The experimental part relies on a Phase-Locked Loop (PLL) and enables a robust and automatic measurement of backbone curves as well as forced responses. It is theoretically and experimentally shown that the PLL is able to stabilize the unstable part of Duffing-like frequency responses, thus enabling its robust experimental measurement. Finally, the whole procedure is tested on three experimental systems: a circular plate, a Chinese gong and a piezoelectric cantilever beam. This enables the procedure to be validated by comparison with available theoretical models as well as with other experimental identification methods.
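
    The reduced order model named above, a single Duffing oscillator, has a well-known first-order backbone relation linking resonance frequency to modal amplitude. The sketch below evaluates that relation for illustrative parameter values; w0 and gamma are assumptions, not parameters identified in the paper.

    ```python
    import numpy as np

    # First-order backbone of a conservative Duffing oscillator
    #   q'' + w0^2 q + gamma q^3 = 0,
    # whose amplitude-dependent resonance frequency is approximately
    #   w(a) = w0 * (1 + 3*gamma*a^2 / (8*w0^2)).
    w0, gamma = 2 * np.pi * 100.0, 1.0e7   # rad/s and rad^2/(s^2 m^2), assumed

    a = np.linspace(0.0, 0.05, 6)          # modal amplitudes
    backbone = w0 * (1 + 3 * gamma * a**2 / (8 * w0**2))
    print(np.round(backbone / (2 * np.pi), 3))  # backbone frequencies in Hz
    ```

    A positive gamma gives the hardening backbone (frequency increasing with amplitude); a negative gamma would give the softening behavior seen, for example, in the gong.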

  11. Hydrological modelling of the Mara River Basin, Kenya: Dealing with uncertain data quality and calibrating using river stage

    NASA Astrophysics Data System (ADS)

    Hulsman, P.; Bogaard, T.; Savenije, H. H. G.

    2016-12-01

    In hydrology and water resources management, discharge is the main time series for model calibration. Rating curves are needed to derive discharge from continuously measured water levels. However, assuring their quality is demanding due to dynamic changes and problems in accurately deriving discharge at high flows. This is valid everywhere, but even more so in the African socio-economic context. To cope with these uncertainties, this study proposes to use water levels instead of discharge data for calibration. Uncertainties in rainfall measurements, especially their spatial heterogeneity, also need to be considered. In this study, the semi-distributed rainfall-runoff model FLEX-Topo was applied to the Mara River Basin. In this model, seven sub-basins were distinguished, along with four hydrological response units, each with a unique model structure based on the expected dominant flow processes. Parameter and process constraints were applied to exclude unrealistic results. To calibrate the model, water levels were back-calculated from modelled discharges, using cross-section data and the Strickler formula with the lumped calibration parameter k·S^(1/2), and compared to measured water levels. The model simulated the water depths well for the entire basin and the Nyangores sub-basin in the north. However, the calibrated and observed rating curves differed significantly at the basin outlet, probably due to uncertainties in the measured discharge, whereas at Nyangores they were almost identical. To assess the effect of rainfall uncertainties on the hydrological model, the representative rainfall in each sub-basin was estimated with three different methods: 1) a single station, 2) average precipitation, and 3) areal sub-division using Thiessen polygons. All three methods gave on average similar results, but method 1 resulted in more flashy responses, method 2 dampened the water levels by averaging the rainfall, and method 3 was a combination of both.
In conclusion, in the case of unreliable rating curves, water level data can be used instead and a new rating curve can be calibrated. The effect of rainfall uncertainties on the hydrological model was insignificant.
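
    The back-calculation of water levels from modelled discharge can be illustrated with the Strickler (Manning) formula Q = k·A·R^(2/3)·S^(1/2). The sketch below assumes a hypothetical rectangular cross-section and a lumped parameter k·S^(1/2), and solves for the depth by root finding; the numbers are invented, not Mara Basin data.

    ```python
    from scipy.optimize import brentq

    # Strickler formula for a hypothetical rectangular channel:
    #   Q = (k*sqrt(S)) * A * R^(2/3),
    # with A = width*h and hydraulic radius R = A / (width + 2h).
    def discharge(h, width, k_sqrtS):
        area = width * h
        radius = area / (width + 2 * h)
        return k_sqrtS * area * radius ** (2.0 / 3.0)

    def stage_from_discharge(q, width=20.0, k_sqrtS=2.0):
        # invert discharge(h) = q for the water depth h (m)
        return brentq(lambda h: discharge(h, width, k_sqrtS) - q, 1e-6, 50.0)

    h = stage_from_discharge(60.0)  # depth for a 60 m^3/s modelled discharge
    print(round(h, 3))
    ```

    In calibration, k·S^(1/2) is the free parameter tuned so that the back-calculated stages match the observed water levels.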

  12. Definition of energy-calibrated spectra for national reachback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kunz, Christopher L.; Hertz, Kristin L.

    2014-01-01

    Accurate energy calibration is critical for the timeliness and accuracy of analysis results of spectra submitted to National Reachback, particularly for the detection of threat items. Many spectra submitted for analysis include either a calibration spectrum using 137Cs or no calibration spectrum at all. The single line provided by 137Cs is insufficient to adequately calibrate nonlinear spectra. A calibration source that provides several lines that are well-spaced, from the low energy cutoff to the full energy range of the detector, is needed for a satisfactory energy calibration. This paper defines the requirements of an energy calibration for the purposes of National Reachback, outlines a method to validate whether a given spectrum meets that definition, discusses general source considerations, and provides a specific operating procedure for calibrating the GR-135.
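
    A multi-line energy calibration of the kind called for above can be sketched as a low-order polynomial fit of energy against peak channel. The peak channel positions below are made up for illustration; the line energies are the familiar 241Am, 137Cs and 60Co gamma energies.

    ```python
    import numpy as np

    # Quadratic channel-to-energy calibration from several well-spaced
    # lines. Channel centroids are hypothetical; energies are standard.
    energies = np.array([59.5, 661.7, 1173.2, 1332.5])     # keV
    channels = np.array([120.0, 1335.0, 2370.0, 2695.0])   # assumed centroids

    coeffs = np.polyfit(channels, energies, deg=2)  # E = a*ch^2 + b*ch + c
    cal = np.poly1d(coeffs)
    resid = energies - cal(channels)
    print(np.round(resid, 3))  # fit residuals, keV
    ```

    With a single 137Cs line this fit is underdetermined, which is exactly why the paper requires several well-spaced lines for nonlinear spectra.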

  13. Toward a Micro-Scale Acoustic Direction-Finding Sensor with Integrated Electronic Readout

    DTIC Science & Technology

    2013-06-01

    Excerpts from the report's list of figures and tables: measurements with curve fits; failure testing (Figure 2.10); sensor parameters (Table 2.1); curve fit parameters (Table 2.2). From the text: "...elastic, the quantity of interest is the elastic stiffness. In a typical nanoindentation test, the loading curve is nonlinear due to combined plastic..."

  14. Time-Dependent Behavior of Diabase and a Nonlinear Creep Model

    NASA Astrophysics Data System (ADS)

    Yang, Wendong; Zhang, Qiangyong; Li, Shucai; Wang, Shugang

    2014-07-01

    Triaxial creep tests were performed on diabase specimens from the dam foundation of the Dagangshan hydropower station, and the typical characteristics of the creep curves were analyzed. Based on the test results under different stress levels, a new nonlinear visco-elasto-plastic creep model with a creep threshold and long-term strength was proposed by connecting an instantaneous elastic Hooke body, a visco-elasto-plastic Schiffman body, and a nonlinear visco-plastic body in series. By introducing the nonlinear visco-plastic component, this creep model can describe the typical creep behavior, which includes the primary, secondary, and tertiary creep stages. Three-dimensional creep equations under constant stress conditions were deduced. The yield approach index (YAI) was used as the criterion for the piecewise creep function, to resolve the difficulty in determining the creep threshold value and the long-term strength. The expression of the visco-plastic component was derived in detail and its three-dimensional central difference form was given. An example was used to verify the credibility of the model. The creep parameters were identified, and the calculated curves were in good agreement with the experimental curves, indicating that the model is capable of replicating the physical processes.

  15. The intrinsic mechanical nonlinearity 3Q0(ω) of linear homopolymer melts

    NASA Astrophysics Data System (ADS)

    Cziep, Miriam Angela; Abbasi, Mahdi; Wilhelm, Manfred

    2017-05-01

    Medium amplitude oscillatory shear (MAOS) in combination with Fourier transformation of the mechanical stress signal (FT rheology) was utilized to investigate the influence of molecular weight, molecular weight distribution and the monomer on the intrinsic nonlinearity 3Q0(ω). Nonlinear master curves of 3Q0(ω) were created by applying the time-temperature superposition (TTS) principle. These master curves show a characteristic shape, with an increasing slope at small frequencies, a maximum 3Q0,max, and a decreasing slope at high frequencies. 3Q0(De) master curves of monodisperse polymers were evaluated and quantified with the help of a semi-empirical equation derived from predictions of the pom-pom and molecular stress function (MSF) models. This resulted in a monomer-independent description of the nonlinear mechanical behavior of linear, monodisperse homopolymer melts, where 3Q0(ω,Z) is only a function of the frequency ω and the number of entanglements Z. For polydisperse samples, 3Q0(ω) showed a high sensitivity within the experimental window towards an increasing PDI. At small frequencies, the slope of 3Q0(ω) decreases to approximately zero as a plateau value is reached, starting at PDIs of around 2 and higher.

  16. Decision curve analysis and external validation of the postoperative Karakiewicz nomogram for renal cell carcinoma based on a large single-center study cohort.

    PubMed

    Zastrow, Stefan; Brookman-May, Sabine; Cong, Thi Anh Phuong; Jurk, Stanislaw; von Bar, Immanuel; Novotny, Vladimir; Wirth, Manfred

    2015-03-01

    To predict the outcome of patients with renal cell carcinoma (RCC) who undergo surgical therapy, risk models and nomograms are valuable tools. External validation on independent datasets is crucial for evaluating the accuracy and generalizability of these models. The objective of the present study was to externally validate the postoperative nomogram developed by Karakiewicz et al. for prediction of cancer-specific survival. A total of 1,480 consecutive patients with a median follow-up of 82 months (IQR 46-128) were included in this analysis, with 268 RCC-specific deaths. Nomogram-estimated survival probabilities were compared with survival probabilities of the actual cohort, and concordance indices were calculated. Calibration plots and decision curve analyses were used to evaluate calibration and the clinical net benefit of the nomogram. Concordance between predictions of the nomogram and survival rates of the cohort was 0.911 after 12 months, 0.909 after 24 months and 0.896 after 60 months. Comparison of predicted probabilities and actual survival estimates with calibration plots showed an overestimation of tumor-specific survival by the nomogram in high-risk patients, although calibration was reasonable over the probability ranges of interest. Decision curve analysis showed a positive net benefit of nomogram predictions for our patient cohort. The postoperative Karakiewicz nomogram provides good concordance in this external cohort and is reasonably calibrated. It may overestimate tumor-specific survival in high-risk patients, which should be kept in mind when counseling patients.

  17. Larger Optics and Improved Calibration Techniques for Small Satellite Observations with the ERAU OSCOM System

    NASA Astrophysics Data System (ADS)

    Bilardi, S.; Barjatya, A.; Gasdia, F.

    OSCOM, Optical tracking and Spectral characterization of CubeSats for Operational Missions, is a system capable of providing time-resolved satellite photometry using commercial-off-the-shelf (COTS) hardware and custom tracking and analysis software. This system has acquired photometry of objects as small as CubeSats using a Celestron 11” RASA and an inexpensive CMOS machine vision camera. For satellites with known shapes, these light curves can be used to verify a satellite’s attitude and the state of its deployed solar panels or antennae. While the OSCOM system can successfully track satellites and produce light curves, there is ongoing improvement towards increasing its automation while supporting additional mounts and telescopes. A newly acquired Celestron 14” Edge HD can be used with a Starizona Hyperstar to increase the SNR for small objects as well as extend beyond the limiting magnitude of the 11” RASA. OSCOM currently corrects instrumental brightness measurements for satellite range and observatory site average atmospheric extinction, but calibrated absolute brightness is required to determine information about satellites other than their spin rate, such as surface albedo. A calibration method that automatically detects and identifies background stars can use their catalog magnitudes to calibrate the brightness of the satellite in the image. We present a photometric light curve from both the 14” Edge HD and 11” RASA optical systems as well as plans for a calibration method that will perform background star photometry to efficiently determine calibrated satellite brightness in each frame.
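
    The proposed calibration against background stars reduces, per star, to a differential magnitude computation. A minimal sketch with hypothetical aperture fluxes (the flux values and catalog magnitude below are invented):

    ```python
    import numpy as np

    # Differential photometric calibration against a field star of known
    # catalog magnitude in the same frame:
    #   m_sat = m_star - 2.5 * log10(flux_sat / flux_star).
    flux_star, m_star = 48000.0, 9.2   # star aperture flux (ADU), catalog mag
    flux_sat = 12000.0                 # satellite aperture flux, same frame

    m_sat = m_star - 2.5 * np.log10(flux_sat / flux_star)
    print(round(m_sat, 3))
    ```

    In practice one would average over several detected catalog stars per frame, which also absorbs frame-to-frame extinction variations.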

  18. X-ray light curves of active galactic nuclei are phase incoherent

    NASA Technical Reports Server (NTRS)

    Krolik, Julian; Done, Chris; Madejski, Grzegorz

    1993-01-01

    We compute the Fourier phase spectra for the light curves of five low-luminosity active galactic nuclei observed by EXOSAT. There is no statistically significant phase coherence in any of them. This statement is equivalent, subject to a technical caveat, to a demonstration that their fluctuation statistics are Gaussian. Models in which the X-ray output is controlled wholly by a unitary process undergoing a nonlinear limit cycle are therefore ruled out, while models with either a large number of randomly excited independent oscillation modes or nonlinearly interacting spatially dependent oscillations are favored. We also demonstrate how the degree of phase coherence in light curve fluctuations influences the application of causality bounds on internal length scales.
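
    The phase-coherence test described above can be illustrated on a synthetic Gaussian light curve, whose Fourier phases should be uniformly scattered. The resultant-length statistic R below is a simple coherence measure chosen for illustration, not necessarily the statistic used in the paper.

    ```python
    import numpy as np

    # Phase spectrum of a light curve: the arguments of its Fourier
    # coefficients. A linear (Gaussian) process has incoherent phases;
    # phase coherence would point to a nonlinear limit cycle.
    rng = np.random.default_rng(42)
    lc = rng.normal(size=4096)            # synthetic Gaussian light curve

    coeffs = np.fft.rfft(lc)[1:]          # drop the DC term
    phases = np.angle(coeffs)             # phase spectrum in (-pi, pi]

    # resultant length of the unit phasors: near 0 for incoherent
    # (uniform) phases, near 1 for fully coherent ones
    R = np.abs(np.mean(np.exp(1j * phases)))
    print(round(R, 4))
    ```

    For N independent uniform phases, R is of order 1/sqrt(N), so a value far above that would indicate significant coherence.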

  19. 40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...
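
    The acceptance logic of this provision (accept a least-squares straight line if every point is within ±2 percent, otherwise fall back to a best-fit nonlinear equation) can be sketched as follows. The calibration points are invented, and the tolerance handling is a simplified reading of the rule.

    ```python
    import numpy as np

    # Simplified sketch of the 40 CFR 86.1324-84 acceptance logic:
    # fit a least-squares straight line to calibration points; if any
    # point deviates by more than 2% of its reading, use a best-fit
    # polynomial instead. Gas-divider points are hypothetical.
    conc = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 80.0, 100.0])   # % of span
    resp = np.array([0.2, 10.4, 20.5, 40.3, 59.2, 77.5, 95.0])    # readings

    line = np.poly1d(np.polyfit(conc, resp, 1))
    dev = resp - line(conc)
    # tolerance taken here as 2% of the reading at each nonzero point
    linear_ok = np.all(np.abs(dev[resp > 0]) <= 0.02 * resp[resp > 0])

    curve = line if linear_ok else np.poly1d(np.polyfit(conc, resp, 3))
    print(bool(linear_ok), curve.order)
    ```

    With the slightly compressed readings above, the straight line fails the 2% criterion and a cubic is used to determine concentration values.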

  20. 40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...

  1. 40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...

  2. 40 CFR 86.1323-84 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent of the value at... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (c) The initial and periodic interference, system check...

  3. 40 CFR 86.1323-84 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent of the value at... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (c) The initial and periodic interference, system check...

  4. Flight Calibration of the LROC Narrow Angle Camera

    NASA Astrophysics Data System (ADS)

    Humm, D. C.; Tschimmel, M.; Brylow, S. M.; Mahanti, P.; Tran, T. N.; Braden, S. E.; Wiseman, S.; Danton, J.; Eliason, E. M.; Robinson, M. S.

    2016-04-01

    Characterization and calibration are vital for instrument commanding and image interpretation in remote sensing. The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) takes 500 Mpixel greyscale images of lunar scenes at 0.5 meters/pixel. It uses two nominally identical line scan cameras for a larger crosstrack field of view. Stray light, spatial crosstalk, and nonlinearity were characterized using flight images of the Earth and the lunar limb; these are important for imaging shadowed craters, studying ~1 meter size objects, and photometry, respectively. Background, nonlinearity, and flatfield corrections have been implemented in the calibration pipeline. An eight-column pattern in the background is corrected. The detector is linear for DN = 600-2000, but a signal-dependent additive correction is required and applied for DN < 600. A predictive model of detector temperature and dark level was developed to command the dark level offset. This avoids images with a cutoff at DN = 0 and minimizes quantization error in companding. Absolute radiometric calibration is derived from comparison of NAC images with ground-based images taken with the Robotic Lunar Observatory (ROLO) at much lower spatial resolution but with the same photometric angles.

  5. [Application of AOTF in spectral analysis. 1. Hardware and software designs for the self-constructed visible AOTF spectrophotometer].

    PubMed

    He, Jia-yao; Peng, Rong-fei; Zhang, Zhan-xia

    2002-02-01

    A self-constructed visible spectrophotometer using an acousto-optic tunable filter (AOTF) as the dispersing element is described. Two different AOTFs (one from The Institute for Silicate (Shanghai, China) and the other from Brimrose (USA)) are tested. The software, written in Visual C++ and run on a Windows 98 platform, is an application with a dual database and multiple windows. Four independent windows, namely scanning, quantitative, calibration and result, are incorporated. A Fourier self-deconvolution algorithm is also incorporated to improve the spectral resolution. The wavelengths are calibrated using polynomial curve fitting. The spectra and calibration curves of soluble aniline blue and phenol red are presented to show the feasibility of the constructed spectrophotometer.

  6. Use of Two-Part Regression Calibration Model to Correct for Measurement Error in Episodically Consumed Foods in a Single-Replicate Study Design: EPIC Case Study

    PubMed Central

    Agogo, George O.; van der Voet, Hilko; Veer, Pieter van’t; Ferrari, Pietro; Leenders, Max; Muller, David C.; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A.; Boshuizen, Hendriek

    2014-01-01

    In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted a two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and empirical logit approaches, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of the association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of the error adjustment is influenced by the number and forms of the covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model. PMID:25402487
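
    A two-part model of the general kind described can be sketched as a logistic part for the probability of consumption times a regression part for the log amount when consumed. The snippet below uses simulated data and plain scikit-learn estimators, not the paper's EPIC calibration model; all variable names and parameters are invented.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    # Part 1: P(recall > 0 | covariate); Part 2: E[log amount | recall > 0].
    # Expected intake is the product of the two parts. Simulated data.
    rng = np.random.default_rng(1)
    n = 2000
    ffq = rng.gamma(2.0, 50.0, n)                     # questionnaire intake
    p_consume = 1 / (1 + np.exp(-(ffq - 100) / 40))   # true P(consumed)
    consumed = rng.random(n) < p_consume
    amount = np.where(consumed,
                      np.exp(3 + 0.004 * ffq + rng.normal(0, 0.3, n)), 0.0)

    X = ffq.reshape(-1, 1)
    part1 = LogisticRegression(max_iter=1000).fit(X, consumed)
    part2 = LinearRegression().fit(X[consumed], np.log(amount[consumed]))

    p_hat = part1.predict_proba(X)[:, 1]
    amt_hat = np.exp(part2.predict(X))   # ignores the lognormal bias term
    expected = p_hat * amt_hat
    print(round(float(np.corrcoef(expected, amount)[0, 1]), 2))
    ```

    The excess zeroes never enter the amount model, which is the point of the two-part decomposition.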

  7. Nonlinear Magnus-induced dynamics and Shapiro spikes for ac and dc driven skyrmions on periodic quasi-one-dimensional substrates

    NASA Astrophysics Data System (ADS)

    Reichhardt, Charles; Reichhardt, Cynthia J. Olson

    We numerically examine skyrmions interacting with a periodic quasi-one-dimensional substrate. When we drive the skyrmions perpendicular to the substrate periodicity direction, a rich variety of nonlinear Magnus-induced effects arise, in contrast to an overdamped system that shows only a linear velocity-force curve for this geometry. The skyrmion velocity-force curve is strongly nonlinear and we observe a Magnus-induced speed-up effect when the pinning causes the Magnus velocity response to align with the dissipative response. At higher applied drives these components decouple, resulting in strong negative differential conductivity. For skyrmions under combined ac and dc driving, we find a new class of phase locking phenomena in which the velocity-force curves contain a series of what we call Shapiro spikes, distinct from the Shapiro steps observed in overdamped systems. There are also regimes in which the skyrmion moves in the direction opposite to the applied dc drive to give negative mobility.

  8. A Comparative Investigation of the Combined Effects of Pre-Processing, Wavelength Selection, and Regression Methods on Near-Infrared Calibration Model Performance.

    PubMed

    Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N

    2017-07-01

    Near-infrared (NIR) spectroscopy is widely used in fields ranging from pharmaceutics to the food industry for analyzing chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include physical interpretability of spectral data, nondestructive and fast measurement, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects, and other factors; selection of those wavelengths that contribute useful information; and identification of suitable calibration models using linear/nonlinear regression. Several methods have been developed for each of these three aspects, and many comparative studies exist for an individual aspect or some combinations. However, there is still a lack of comparative studies of the interactions among these three aspects, which could shed light on what role each aspect plays in the calibration and how to combine the various methods to obtain the best calibration model. This paper provides such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC), and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); and four popular regression methods, namely partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR).
The comparative study indicates that, in general, pre-processing of spectral data can play a significant role in the calibration while wavelength selection plays a marginal role and the combination of certain pre-processing, wavelength selection, and nonlinear regression methods can achieve superior performance over traditional linear regression-based calibration.
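
    The interaction between pre-processing and regression can be illustrated on synthetic spectra with multiplicative scatter. The sketch below uses SNV as a simple stand-in for the scatter-correction methods compared in the paper (EMSC, OSC, OPLEC) and PLS as the regression method; data and parameters are invented.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic "spectra": a scatter-prone baseline plus an analyte band,
    # corrupted by a per-sample multiplicative gain.
    rng = np.random.default_rng(7)
    n, p = 200, 50
    y = rng.uniform(0, 1, n)                           # analyte concentration
    k = np.arange(p)
    background = np.cos(2 * np.pi * k / p)             # baseline structure
    peak = np.exp(-(k - 25.0) ** 2 / 8.0)              # analyte band
    X = background + 0.3 * np.outer(y, peak) + rng.normal(0, 0.02, (n, p))
    X *= rng.uniform(0.5, 1.5, n)[:, None]             # multiplicative scatter

    def snv(X):  # standard normal variate: row-wise centre and scale
        return (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)

    raw = cross_val_score(PLSRegression(n_components=4), X, y, cv=5).mean()
    pre = cross_val_score(PLSRegression(n_components=4), snv(X), y, cv=5).mean()
    print(round(raw, 2), round(pre, 2))  # cross-validated R^2, raw vs SNV
    ```

    The cross-validated R^2 improves after scatter correction, mirroring the paper's finding that pre-processing can dominate the calibration outcome.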

  9. SUMS calibration test report

    NASA Technical Reports Server (NTRS)

    Robertson, G.

    1982-01-01

    Calibration was performed on the shuttle upper atmosphere mass spectrometer (SUMS). The results of the calibration and the as-run test procedures are presented. The output data are described, and engineering data conversion factors, tables and curves, and calibration of instrument gauges are included. Static calibration results are given, including: instrument sensitivity versus external pressure for N2 and O2, data from each calibration scan, data plots for N2 and O2, sensitivity of SUMS at the inlet for N2 and O2, and ratios of 14/28 for nitrogen and 16/32 for oxygen.

  10. Learning curve of single port laparoscopic cholecystectomy determined using the non-linear ordinary least squares method based on a non-linear regression model: An analysis of 150 consecutive patients.

    PubMed

    Han, Hyung Joon; Choi, Sae Byeol; Park, Man Sik; Lee, Jin Suk; Kim, Wan Bae; Song, Tae Jin; Choi, Sang Yong

    2011-07-01

    Single port laparoscopic surgery has come to the forefront of minimally invasive surgery. For those familiar with conventional techniques, however, this type of operation demands a different type of eye/hand coordination and involves unfamiliar working instruments. Herein, the authors describe the learning curve and the clinical outcomes of single port laparoscopic cholecystectomy for 150 consecutive patients with benign gallbladder disease. All patients underwent single port laparoscopic cholecystectomy using a homemade glove port, performed by one of five operators with different levels of experience in laparoscopic surgery. The learning curve for each operator was fitted using the non-linear ordinary least squares method based on a non-linear regression model. Mean operating time was 77.6 ± 28.5 min. Fourteen patients (6.0%) were converted to conventional laparoscopic cholecystectomy. Complications occurred in 15 patients (10.0%), as follows: bile duct injury (n = 2), surgical site infection (n = 8), seroma (n = 2), and wound pain (n = 3). One operator achieved a learning curve plateau at 61.4 min per procedure after 8.5 cases, and his operating time improved by 95.3 min compared with his initial operation time. Younger surgeons showed significant decreases in mean operation time and achieved stable mean operation times; in particular, younger surgeons showed significant decreases in operation times after 20 cases. Experienced laparoscopic surgeons can safely perform single port laparoscopic cholecystectomy using conventional or angled laparoscopic instruments. The present study shows that an operator can overcome the single port laparoscopic cholecystectomy learning curve in about eight cases.
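
    The learning-curve fit is ordinary nonlinear least squares on a plateau model. The sketch below simulates per-case operating times, reusing the abstract's 61.4 min plateau and 95.3 min improvement as the simulation truth (the decay rate and noise level are assumptions), and recovers the parameters with scipy.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Learning curve as exponential decay toward a plateau:
    #   t(n) = plateau + drop * exp(-n / rate).
    # Simulated operating times, not patient data.
    rng = np.random.default_rng(3)
    cases = np.arange(1, 31)
    true_plateau, true_drop, true_rate = 61.4, 95.3, 4.0   # min, min, cases
    times = true_plateau + true_drop * np.exp(-cases / true_rate)
    times += rng.normal(0, 5.0, cases.size)                # case variability

    def model(n, plateau, drop, rate):
        return plateau + drop * np.exp(-n / rate)

    params, _ = curve_fit(model, cases, times, p0=(60.0, 90.0, 5.0))
    plateau, drop, rate = params
    print(round(plateau, 1), round(drop, 1), round(rate, 1))
    ```

    The fitted plateau and the case count at which the curve flattens are the quantities reported per operator in the study.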

  11. Evaluating the predictive accuracy and the clinical benefit of a nomogram aimed to predict survival in node-positive prostate cancer patients: External validation on a multi-institutional database.

    PubMed

    Bianchi, Lorenzo; Schiavina, Riccardo; Borghesi, Marco; Bianchi, Federico Mineo; Briganti, Alberto; Carini, Marco; Terrone, Carlo; Mottrie, Alex; Gacci, Mauro; Gontero, Paolo; Imbimbo, Ciro; Marchioro, Giansilvio; Milanese, Giulio; Mirone, Vincenzo; Montorsi, Francesco; Morgia, Giuseppe; Novara, Giacomo; Porreca, Angelo; Volpe, Alessandro; Brunocilla, Eugenio

    2018-04-06

    To assess the predictive accuracy and the clinical value of a recent nomogram predicting cancer-specific mortality-free survival after surgery in pN1 prostate cancer patients through an external validation. We evaluated 518 prostate cancer patients treated with radical prostatectomy and pelvic lymph node dissection, with evidence of nodal metastases at final pathology, at 10 tertiary centers. External validation was carried out using the regression coefficients of the previously published nomogram. The performance characteristics of the model were assessed by quantifying predictive accuracy, according to the area under the receiver operating characteristic curve, and model calibration. Furthermore, we systematically analyzed the specificity, sensitivity, positive predictive value and negative predictive value for each nomogram-derived probability cut-off. Finally, we implemented decision curve analysis in order to quantify the nomogram's clinical value in routine practice. External validation showed inferior predictive accuracy compared with the internal validation (65.8% vs 83.3%, respectively). The discrimination (area under the curve) of the multivariable model was 66.7% (95% CI 60.1-73.0%) in receiver operating characteristic curve analysis. The calibration plot showed an overestimation throughout the range of predicted cancer-specific mortality-free survival probabilities. However, in decision curve analysis, the nomogram's use showed a net benefit when compared with the scenarios of treating all patients or none. In an external setting, the nomogram showed inferior predictive accuracy and suboptimal calibration compared with those reported in the original population. However, decision curve analysis showed a clinical net benefit, suggesting a role for the nomogram in correctly managing pN1 prostate cancer patients after surgery. © 2018 The Japanese Urological Association.

  12. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    PubMed

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
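
    Wood's model, one of the seven compared, can be fitted to test-day records by nonlinear least squares, after which an information criterion such as AIC ranks the competing models. The data below are simulated for illustration, not the Iranian buffalo records, and the parameter values are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Wood's incomplete-gamma lactation curve: y(t) = a * t^b * exp(-c*t).
    def wood(t, a, b, c):
        return a * t ** b * np.exp(-c * t)

    rng = np.random.default_rng(5)
    t = np.arange(1.0, 11.0)                 # test month in lactation
    y = wood(t, 1.2, -0.15, 0.02) + rng.normal(0, 0.02, t.size)  # simulated FPR

    params, _ = curve_fit(wood, t, y, p0=(1.0, -0.1, 0.01))
    rss = np.sum((y - wood(t, *params)) ** 2)
    n, k = t.size, 3
    aic = n * np.log(rss / n) + 2 * k        # Akaike information criterion
    print(np.round(params, 3), round(float(aic), 1))
    ```

    Fitting each candidate model the same way and comparing AIC/BIC is the model-selection procedure the study applies per parity.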

  13. GIADA: extended calibration activities before the comet encounter

    NASA Astrophysics Data System (ADS)

    Accolla, Mario; Sordini, Roberto; Della Corte, Vincenzo; Ferrari, Marco; Rotundi, Alessandra

    2014-05-01

    The Grain Impact Analyzer and Dust Accumulator - GIADA - is one of the payloads on-board Rosetta Orbiter. Its three detection sub-systems are able to measure the speed, the momentum, the mass, the optical cross section of single cometary grains and the dust flux ejected by the periodic comet 67P Churyumov-Gerasimenko. During the Hibernation phase of the Rosetta mission, we have performed a dedicated extended calibration activity on the GIADA Proto Flight Model (accommodated in a clean room in our laboratory) involving two of three sub-systems constituting GIADA, i.e. the Grain Detection System (GDS) and the Impact Sensor (IS). Our aim is to carry out a new set of response curves for these two subsystems and to correlate them with the calibration curves obtained in 2002 for the GIADA payload onboard the Rosetta spacecraft, in order to improve the interpretation of the forthcoming scientific data. For the extended calibration we have dropped or shot into GIADA PFM a statistically relevant number of grains (i.e. about 1 hundred), acting as cometary dust analogues. We have studied the response of the GDS and IS as a function of grain composition, size and velocity. Different terrestrial materials were selected as cometary analogues according to the more recent knowledge gained through the analyses of Interplanetary Dust Particles and cometary samples returned from comet 81P/Wild 2 (Stardust mission). Therefore, for each material, we have produced grains with sizes ranging from 20-500 μm in diameter, that were characterized by FESEM and micro IR spectroscopy. Therefore, the grains were shot into GIADA PFM with speed ranging between 1 and 100 ms-1. Indeed, according to the estimation reported in Fink & Rubin (2012), this range is representative of the dust particle velocity expected at the comet scenario and lies within the GIADA velocity sensitivity (i.e. 1-100 ms-1 for GDSand 1-300 ms-1for GDS+IS 1-300 ms-1). 
The response curves obtained using the data collected during the GIADA PFM extended calibration will be linked to the on-ground calibration data collected during the instrument qualification campaign (performed on both the Flight and Spare Models in 2002). The final aim is to rescale the extended calibration data obtained with the GIADA PFM to the GIADA instrument presently onboard the Rosetta spacecraft. In this work we present the experimental procedures and the setup used for the calibration activities, focusing in particular on the new response curves of the GDS and IS sub-systems obtained for the different cometary dust analogues. These curves will be critical for the future interpretation of scientific data. Fink, U. & Rubin, M. (2012), The calculation of Afρ and mass loss rate for comets, Icarus 221(2), 721-734.

  14. Transducer Workshop (17th) Held in San Diego, California on June 22-24, 1993

    DTIC Science & Technology

    1993-06-01

    weight in a drop tower, such as the primer tester shown in figure 1. The calibration procedure must be repeated for each lot of copper inserts, and small...force vs. time curve (i.e., impulse = area under the curve). The FPyF can be used in the primer tester (shown in figure 1) as well as in a weapon...microphones. Pistonphone output 124 dB, 250 Hz DEAD WEIGHT TESTER USED AS A PRESSURE RELEASE CALIBRATOR The dead weight tester is designed and most

  15. Aero-Thermal Calibration of the NASA Glenn Icing Research Tunnel (2004 and 2005 Tests)

    NASA Technical Reports Server (NTRS)

    Arrington, E. Allen; Pastor, Christine M.; Gonsalez, Jose C.; Curry, Monroe R., III

    2010-01-01

    A full aero-thermal calibration of the NASA Glenn Icing Research Tunnel (IRT) was completed in 2004 following the replacement of the inlet guide vanes upstream of the tunnel drive system and improvements to the facility total temperature instrumentation. This calibration test provided data used to fully document the aero-thermal flow quality in the IRT test section and to construct calibration curves for the operation of the IRT. The 2004 test was also the first to use the 2-D RTD array, an improved total temperature calibration measurement platform.

  16. Improved quantification of important beer quality parameters based on nonlinear calibration methods applied to FT-MIR spectra.

    PubMed

    Cernuda, Carlos; Lughofer, Edwin; Klein, Helmut; Forster, Clemens; Pawliczek, Marcin; Brandstetter, Markus

    2017-01-01

    During the production process of beer, it is of utmost importance to guarantee a high consistency of the beer quality. For instance, the bitterness is an essential quality parameter which has to be controlled within the specifications at the beginning of the production process in the unfermented beer (wort) as well as in final products such as beer and beer mix beverages. Nowadays, analytical techniques for quality control in beer production are mainly based on manual supervision, i.e., samples are taken from the process and analyzed in the laboratory. This typically requires significant lab technician effort for only a small fraction of samples to be analyzed, which leads to significant costs for beer breweries and companies. Fourier transform mid-infrared (FT-MIR) spectroscopy was used in combination with nonlinear multivariate calibration techniques to overcome (i) the time-consuming off-line analyses in beer production and (ii) the known limitations of standard linear chemometric methods, like partial least squares (PLS), for important quality parameters such as bitterness, citric acid, total acids, free amino nitrogen, final attenuation, or foam stability (Speers et al., J. Inst. Brewing 2003;109(3):229-235; Zhang et al., J. Inst. Brewing 2012;118(4):361-367). The calibration models are established with enhanced nonlinear techniques based (i) on a new piece-wise linear version of PLS that employs fuzzy rules for locally partitioning the latent variable space and (ii) on extensions of support vector regression variants (ε-PLSSVR and ν-PLSSVR) that overcome high computation times in high-dimensional problems and time-intensive, inappropriate settings of the kernel parameters. Furthermore, we introduce a new model selection scheme based on bagged ensembles in order to improve robustness and thus the predictive quality of the final models.
The approaches are tested on real-world calibration data sets for wort and beer mix beverages, and successfully compared to linear methods, showing a clear out-performance in most cases and meeting the model quality requirements defined by the experts at the beer company. (Graphical abstract: workflow for calibration of nonlinear model ensembles from FT-MIR spectra in beer production.)

  17. First Absolutely Calibrated Localized Measurements of Ion Velocity in the MST in Locked and Rotating Plasmas

    NASA Astrophysics Data System (ADS)

    Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.

    2015-11-01

    An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS at f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central-chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.

  18. Calibration of GafChromic XR-RV3 radiochromic film for skin dose measurement using standardized x-ray spectra and a commercial flatbed scanner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCabe, Bradley P.; Speidel, Michael A.; Pike, Tina L.

    Purpose: In this study, newly formulated XR-RV3 GafChromic film was calibrated with National Institute of Standards and Technology (NIST) traceability for measurement of patient skin dose during fluoroscopically guided interventional procedures. Methods: The film was calibrated free-in-air to air kerma levels between 15 and 1100 cGy using four moderately filtered x-ray beam qualities (60, 80, 100, and 120 kVp). The calibration films were scanned with a commercial flatbed document scanner. Film reflective density-to-air kerma calibration curves were constructed for each beam quality, with both the orange and white sides facing the x-ray source. A method to correct for nonuniformity in scanner response (up to 25% depending on position) was developed to enable dose measurement with large films. The response of XR-RV3 film under patient backscattering conditions was examined using on-phantom film exposures and Monte Carlo simulations. Results: The response of XR-RV3 film to a given air kerma depended on kVp and film orientation. For a 200 cGy air kerma exposure with the orange side of the film facing the source, the film response increased by 20% from 60 to 120 kVp. At 500 cGy, the increase was 12%. When 500 cGy exposures were performed with the white side facing the x-ray source, the film response increased by 4.0% (60 kVp) to 9.9% (120 kVp) compared to the orange-facing orientation. On-phantom film measurements and Monte Carlo simulations show that using a NIST-traceable free-in-air calibration curve to determine air kerma in the presence of backscatter results in an error from 2% up to 8% depending on beam quality. The combined uncertainty in the air kerma measurement from the calibration curves and scanner nonuniformity correction was ±7.1% (95% C.I.). The film showed notable stability. Calibrations of film and scanner separated by 1 yr differed by 1.0%.
Conclusions: XR-RV3 radiochromic film response to a given air kerma shows dependence on beam quality and film orientation. The presence of backscatter slightly modifies the x-ray energy spectrum; however, the increase in film response can be attributed primarily to the increase in total photon fluence at the sensitive layer. Film calibration curves created under free-in-air conditions may be used to measure dose from fluoroscopic quality x-ray beams, including patient backscatter, with an error less than the uncertainty of the calibration in most cases.

  19. Generation of High Frequency Response in a Dynamically Loaded, Nonlinear Soil Column

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spears, Robert Edward; Coleman, Justin Leigh

    2015-08-01

    Detailed guidance on linear seismic analysis of soil columns is provided in “Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4, 1998),” which is currently under revision. A new appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time domain analysis, which includes evaluation of soil columns. When performing linear analysis, a given soil column is typically evaluated with a linear, viscous damped constitutive model. When subjected to a sine wave motion, this constitutive model produces a smooth hysteresis loop. For nonlinear analysis, the soil column can be modelled with an appropriate nonlinear hysteretic soil model. For the model in this paper, the stiffness and energy absorption result from a defined post-yielding shear stress versus shear strain curve. This curve is input with tabular data points. When subjected to a sine wave motion, this constitutive model produces a hysteresis loop that is similar in shape to the input tabular data points on the sides, with discontinuous, pointed ends. This paper compares linear and nonlinear soil column results. The results show that the nonlinear analysis produces additional high frequency response. The paper provides additional study to establish what portion of the high frequency response is due to numerical noise associated with the tabular input curve and what portion is accurately caused by the pointed ends of the hysteresis loop. Finally, the paper shows how the results are changed when a significant structural mass is added to the top of the soil column.

  20. A computational model-based validation of Guyton's analysis of cardiac output and venous return curves

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.; Cohen, R. J.; Mark, R. G.

    2002-01-01

    Guyton developed a popular approach for understanding the factors responsible for cardiac output (CO) regulation in which 1) the heart-lung unit and systemic circulation are independently characterized via CO and venous return (VR) curves, and 2) average CO and right atrial pressure (RAP) of the intact circulation are predicted by graphically intersecting the curves. However, this approach is virtually impossible to verify experimentally. We theoretically evaluated the approach with respect to a nonlinear, computational model of the pulsatile heart and circulation. We developed two sets of open circulation models to generate CO and VR curves, differing by the manner in which average RAP was varied. One set applied constant RAPs, while the other set applied pulsatile RAPs. Accurate prediction of intact, average CO and RAP was achieved only by intersecting the CO and VR curves generated with pulsatile RAPs because of the pulsatility and nonlinearity (e.g., systemic venous collapse) of the intact model. The CO and VR curves generated with pulsatile RAPs were also practically independent. This theoretical study therefore supports the validity of Guyton's graphical analysis.
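
Guyton's graphical step, intersecting a CO curve with a VR curve to predict the operating point, can be sketched numerically. The curve shapes and constants below (a saturating CO curve, a linear VR curve with an assumed mean systemic filling pressure) are illustrative inventions, not the pulsatile computational model used in the paper:

```python
import numpy as np

# Hypothetical curve shapes: cardiac output (CO) rises and saturates with
# right atrial pressure (RAP); venous return (VR) falls roughly linearly
# from the mean systemic filling pressure and floors at zero where the
# systemic veins collapse. All values are invented for illustration.
def cardiac_output(rap_mmHg):
    return 10.0 / (1.0 + np.exp(-rap_mmHg))          # L/min, saturating

def venous_return(rap_mmHg):
    msfp = 7.0    # mean systemic filling pressure, mmHg (assumed)
    rvr = 1.4     # resistance to venous return, mmHg per L/min (assumed)
    return np.clip((msfp - rap_mmHg) / rvr, 0.0, None)

# Graphical intersection: evaluate both curves on a RAP grid and take the
# point where the two values coincide.
rap = np.linspace(-4.0, 7.0, 2000)
i = np.argmin(np.abs(cardiac_output(rap) - venous_return(rap)))
operating_rap, operating_co = rap[i], cardiac_output(rap[i])
```

The intact operating point is then read off as (operating_rap, operating_co), mirroring the graphical construction the paper evaluates.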

  1. A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection

    NASA Technical Reports Server (NTRS)

    Samtaney, Ravi

    1999-01-01

    An iterative method is presented to solve the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain a guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is analytically derived and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
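
The numerical core described above, Newton-Raphson on an overdetermined system solved in the least-squares sense with the SVD pseudo-inverse of an analytic Jacobian, can be sketched on a toy problem. The exponential model below is an invented stand-in for the camera resection equations:

```python
import numpy as np

# Newton-Raphson / Gauss-Newton for an overdetermined nonlinear system
# r(p) = 0 in the least-squares sense: p <- p - pinv(J) @ r, where the
# pseudo-inverse of the Jacobian is computed via SVD (np.linalg.pinv).
# The model y = a * exp(b * x) stands in for the resection equations.
def residual(p, x, y):
    a, b = p
    return a * np.exp(b * x) - y

def jacobian(p, x):
    a, b = p
    e = np.exp(b * x)
    return np.column_stack([e, a * x * e])           # dr/da, dr/db

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)                            # synthetic exact data

p = np.array([1.0, 1.0])                             # initial guess
for _ in range(50):
    step = np.linalg.pinv(jacobian(p, x)) @ residual(p, x, y)
    p = p - step
    if np.linalg.norm(step) < 1e-12:
        break
```

In the paper the initial guess comes from the direct linear transform; here it is simply assumed.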

  2. Prospects of second generation artificial intelligence tools in calibration of chemical sensors.

    PubMed

    Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Ramam, Veluri Anantha; Rao, Gollapalli Nageswara; Rao, Vaddadi Venkata Panakala

    2005-05-01

    Multivariate data-driven calibration models with neural networks (NNs) are developed for binary (Cu++ and Ca++) and quaternary (K+, Ca++, NO3- and Cl-) ion-selective electrode (ISE) data. The response profiles of ISEs with concentration are non-linear and sub-Nernstian. This task represents function approximation of multivariate, multi-response, correlated, non-linear data with unknown noise structure, i.e. multi-component calibration/prediction in chemometric parlance. Radial basis function (RBF) and Fuzzy-ARTMAP-NN models implemented in the software packages TRAJAN and Professional II are employed for the calibration. The optimum NN models reported are based on residuals in concentration space. Being a data-driven information technology, NN does not require a model or a prior or posterior distribution of the data or noise structure. Missing information, spikes or newer trends in different concentration ranges can be modeled through novelty detection. Two simulated data sets generated from mathematical functions are modeled as a function of the number of data points and of network parameters such as the number of neurons and nearest neighbors. The success of RBF and Fuzzy-ARTMAP-NNs in developing adequate calibration models for experimental data and function approximation models for more complex simulated data sets establishes AI2 (artificial intelligence, 2nd generation) as a promising technology in quantitation.
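
A minimal Gaussian radial basis function calibration in the spirit of the abstract (but not the TRAJAN or Professional II implementations) can be sketched as follows; the sub-Nernstian electrode response and every constant are invented for illustration:

```python
import numpy as np

# Minimal Gaussian radial basis function (RBF) calibration sketch, not
# the TRAJAN / Professional II implementation. Exact interpolation
# weights are found by solving the kernel system K w = y.
def rbf_kernel(A, B, width):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Invented ISE-like calibration data: nonlinear (sub-Nernstian) EMF
# response of one electrode versus log-concentration.
logc = np.linspace(-5.0, -1.0, 15)[:, None]        # training inputs
emf = 50.0 * np.tanh(1.2 * (logc + 3.0))           # training responses

width = 0.3
K = rbf_kernel(logc, logc, width)
w = np.linalg.solve(K + 1e-9 * np.eye(len(logc)), emf)

def predict(logc_new):
    """Calibrated response prediction at new log-concentrations."""
    return rbf_kernel(np.atleast_2d(logc_new), logc, width) @ w
```

A full multi-component calibration would map the electrode responses back to concentrations; the forward sketch above just shows the RBF machinery.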

  3. Research on a high-precision calibration method for tunable lasers

    NASA Astrophysics Data System (ADS)

    Xiang, Na; Li, Zhengying; Gui, Xin; Wang, Fan; Hou, Yarong; Wang, Honghai

    2018-03-01

    Tunable lasers are widely used in the field of optical fiber sensing, but nonlinear tuning exists even for zero external disturbance and limits the accuracy of the demodulation. In this paper, a high-precision calibration method for tunable lasers is proposed. A comb filter is introduced, and the real-time output wavelength and scanning rate of the laser are calibrated by linearly fitting several time-frequency reference points obtained from it, while the beat signal generated by the auxiliary interferometer is interpolated and frequency multiplied to find more accurate zero-crossing points; these points are used as wavelength counters to resample the comb signal and correct the nonlinear effect, which ensures that the time-frequency reference points of the comb filter are linear. A stability experiment and a strain sensing experiment verify the calibration precision of this method. The experimental results show that the stability and wavelength resolution of the FBG demodulation can reach 0.088 pm and 0.030 pm, respectively, using a tunable laser calibrated by the proposed method. We have also compared the demodulation accuracy in the presence and absence of the comb filter; the introduction of the comb filter results in a 15-fold wavelength resolution enhancement.
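
The zero-crossing step, locating crossings of the auxiliary interferometer's beat signal and using them as equidistant optical-frequency markers for resampling, can be sketched as follows. The signal frequencies and the comb stand-in are assumed for illustration, not taken from the paper:

```python
import numpy as np

# Assumed signal shapes for illustration: a clean interferometer beat and
# a stand-in comb channel, both sampled uniformly in time.
t = np.linspace(0.0, 1e-3, 20000)                  # 1 ms sweep (assumed)
beat = np.sin(2 * np.pi * 50e3 * t)                # 50 kHz beat (assumed)
comb = np.cos(2 * np.pi * 7e3 * t)                 # stand-in comb signal

# Zero crossings: find sign changes, then linearly interpolate between
# the two bracketing samples for sub-sample crossing instants.
s = np.signbit(beat)
idx = np.nonzero(s[1:] != s[:-1])[0]
frac = beat[idx] / (beat[idx] - beat[idx + 1])
t_zero = t[idx] + frac * (t[1] - t[0])

# Resample the comb channel at the crossing instants, i.e. on a grid
# that is equidistant in optical frequency rather than in time.
comb_resampled = np.interp(t_zero, t, comb)
```
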

  4. 40 CFR 89.321 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3... factor for that range. If the deviation exceeds these limits, the best-fit non-linear equation which... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be...

  5. 40 CFR 89.321 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3... factor for that range. If the deviation exceeds these limits, the best-fit non-linear equation which... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be...

  6. 40 CFR 89.321 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3... factor for that range. If the deviation exceeds these limits, the best-fit non-linear equation which... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be...

  7. 40 CFR 89.321 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3... factor for that range. If the deviation exceeds these limits, the best-fit non-linear equation which... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be...

  8. Linking Parameters Estimated with the Generalized Graded Unfolding Model: A Comparison of the Accuracy of Characteristic Curve Methods

    ERIC Educational Resources Information Center

    Anderson Koenig, Judith; Roberts, James S.

    2007-01-01

    Methods for linking item response theory (IRT) parameters are developed for attitude questionnaire responses calibrated with the generalized graded unfolding model (GGUM). One class of IRT linking methods derives the linking coefficients by comparing characteristic curves, and three of these methods---test characteristic curve (TCC), item…

  9. Dynamical evolution of topology of large-scale structure. [in distribution of galaxies

    NASA Technical Reports Server (NTRS)

    Park, Changbom; Gott, J. R., III

    1991-01-01

    The nonlinear effects of statistical biasing and gravitational evolution on the genus are studied. The biased galaxy subset is picked for the first time by actually identifying galaxy-sized peaks above a fixed threshold in the initial conditions, and their subsequent evolution is followed. It is found that in the standard cold dark matter (CDM) model the statistical biasing in the locations of galaxies produces asymmetry in the genus curve and coupling with gravitational evolution gives rise to a shift in the genus curve to the left in moderately nonlinear regimes. Gravitational evolution alone reduces the amplitude of the genus curve due to strong phase correlations in the density field and also produces asymmetry in the curve. Results on the genus of the mass density field for both CDM and hot dark matter models are consistent with previous work by Melott, Weinberg, and Gott (1987).

  10. Piecewise compensation for the nonlinear error of fiber-optic gyroscope scale factor

    NASA Astrophysics Data System (ADS)

    Zhang, Yonggang; Wu, Xunfeng; Yuan, Shun; Wu, Lei

    2013-08-01

    Fiber-Optic Gyroscope (FOG) scale factor nonlinear error results in errors in a Strapdown Inertial Navigation System (SINS). In order to reduce the nonlinear error of the FOG scale factor in SINS, a compensation method is proposed in this paper based on piecewise curve fitting of the FOG output. Firstly, the causes of FOG scale factor error are introduced and the definition of nonlinear degree is provided. Then we introduce the method to divide the output range of the FOG into several small pieces, with curve fitting performed in each piece of the output range to obtain the scale factor parameters. Different scale factor parameters of the FOG are used in different pieces to improve FOG output precision. These parameters are identified by using a three-axis turntable, and the nonlinear error of the FOG scale factor can be reduced. Finally, a three-axis swing experiment of the SINS verifies that the proposed method can reduce attitude output errors of the SINS by compensating the nonlinear error of the FOG scale factor and improve the precision of navigation. The results of the experiments also demonstrate that the compensation scheme is easy to implement. It can effectively compensate the nonlinear error of the FOG scale factor with slightly increased computation complexity. This method can be used in inertial technology based on FOGs to improve precision.
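
The piecewise fitting idea can be sketched as follows; the simulated scale-factor nonlinearity, the number of pieces and all constants are invented for illustration:

```python
import numpy as np

# Sketch of piecewise scale-factor compensation: split the FOG output
# range into pieces, fit a linear rate-vs-output law in each piece, and
# apply the matching piece's fit when compensating. Values are invented.
def compensate(output, edges, coeffs):
    """Apply the per-piece (slope, intercept) fit to each output sample."""
    piece = np.clip(np.searchsorted(edges, output) - 1, 0, len(coeffs) - 1)
    k, b = coeffs[piece, 0], coeffs[piece, 1]
    return k * output + b

# Simulated turntable calibration: reference rates vs. a FOG output with
# a mildly nonlinear scale factor.
rate = np.linspace(-100.0, 100.0, 401)              # deg/s, reference
output = 0.98 * rate + 2e-4 * rate ** 2             # nonlinear FOG output

edges = np.linspace(output.min(), output.max(), 9)  # 8 pieces
coeffs = np.empty((8, 2))
for i in range(8):
    m = (output >= edges[i]) & (output <= edges[i + 1])
    coeffs[i] = np.polyfit(output[m], rate[m], 1)   # rate ~ k*out + b

compensated = compensate(output, edges, coeffs)
```
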

  11. Application of Composite Small Calibration Objects in Traffic Accident Scene Photogrammetry

    PubMed Central

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies. PMID:26011052
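
The classical two-dimensional direct linear transformation that the paper builds on can be sketched as a homogeneous least-squares problem solved by SVD; the test homography and point coordinates below are invented for the check:

```python
import numpy as np

# Minimal 2D direct linear transformation (DLT) sketch, not the paper's
# improved multi-object variant: estimate the plane-to-image homography H
# from point correspondences as the SVD null vector of the stacked system.
def dlt_homography(pts_src, pts_dst):
    rows = []
    for (x, y), (u, v) in zip(pts_src, pts_dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# Check against a known homography (values invented for the test).
H_true = np.array([[1.1, 0.1, 5.0], [-0.05, 0.95, -3.0], [1e-3, 2e-3, 1.0]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 2.0]])
dst = apply_h(H_true, src)
H_est = dlt_homography(src, dst)
```

The paper's refinement then minimizes the reprojection error over the calibration points of all the small objects jointly; the linear solve above is only the starting point.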

  12. Calibration of GafChromic XR-RV3 radiochromic film for skin dose measurement using standardized x-ray spectra and a commercial flatbed scanner

    PubMed Central

    McCabe, Bradley P.; Speidel, Michael A.; Pike, Tina L.; Van Lysel, Michael S.

    2011-01-01

    Purpose: In this study, newly formulated XR-RV3 GafChromic® film was calibrated with National Institute of Standards and Technology (NIST) traceability for measurement of patient skin dose during fluoroscopically guided interventional procedures. Methods: The film was calibrated free-in-air to air kerma levels between 15 and 1100 cGy using four moderately filtered x-ray beam qualities (60, 80, 100, and 120 kVp). The calibration films were scanned with a commercial flatbed document scanner. Film reflective density-to-air kerma calibration curves were constructed for each beam quality, with both the orange and white sides facing the x-ray source. A method to correct for nonuniformity in scanner response (up to 25% depending on position) was developed to enable dose measurement with large films. The response of XR-RV3 film under patient backscattering conditions was examined using on-phantom film exposures and Monte Carlo simulations. Results: The response of XR-RV3 film to a given air kerma depended on kVp and film orientation. For a 200 cGy air kerma exposure with the orange side of the film facing the source, the film response increased by 20% from 60 to 120 kVp. At 500 cGy, the increase was 12%. When 500 cGy exposures were performed with the white side facing the x-ray source, the film response increased by 4.0% (60 kVp) to 9.9% (120 kVp) compared to the orange-facing orientation. On-phantom film measurements and Monte Carlo simulations show that using a NIST-traceable free-in-air calibration curve to determine air kerma in the presence of backscatter results in an error from 2% up to 8% depending on beam quality. The combined uncertainty in the air kerma measurement from the calibration curves and scanner nonuniformity correction was ±7.1% (95% C.I.). The film showed notable stability. Calibrations of film and scanner separated by 1 yr differed by 1.0%. 
Conclusions: XR-RV3 radiochromic film response to a given air kerma shows dependence on beam quality and film orientation. The presence of backscatter slightly modifies the x-ray energy spectrum; however, the increase in film response can be attributed primarily to the increase in total photon fluence at the sensitive layer. Film calibration curves created under free-in-air conditions may be used to measure dose from fluoroscopic quality x-ray beams, including patient backscatter with an error less than the uncertainty of the calibration in most cases. PMID:21626925

  13. A nonlinear propagation model-based phase calibration technique for membrane hydrophones.

    PubMed

    Cooling, Martin P; Humphrey, Victor F

    2008-01-01

    A technique for the phase calibration of membrane hydrophones in the frequency range up to 80 MHz is described. This is achieved by comparing measurements and numerical simulation of a nonlinearly distorted test field. The field prediction is obtained using a finite-difference model that solves the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation in the frequency domain. The measurements are made in the far field of a 3.5 MHz focusing circular transducer, where it is demonstrated that, for the high drive level used, spatial averaging effects due to the hydrophone's finite receive area are negligible. The method provides a phase calibration of the hydrophone under test without the need for a device serving as a phase response reference, but it requires prior knowledge of the amplitude sensitivity at the fundamental frequency. The technique is demonstrated using a 50-μm-thick bilaminar membrane hydrophone, for which the results obtained show functional agreement with predictions of a hydrophone response model. Further validation of the results is obtained by applying the response to the measurement of the high-amplitude waveforms generated by a modern biomedical ultrasonic imaging system. It is demonstrated that full deconvolution of the calculated complex frequency response of a nonideal hydrophone results in physically realistic measurements of the transmitted waveforms.
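
The final deconvolution step, dividing the voltage spectrum by the hydrophone's complex frequency response and transforming back, can be sketched as follows. The response model, waveform and sample rate are invented stand-ins, and a measured response would generally need regularisation wherever its magnitude is small:

```python
import numpy as np

# FFT-based deconvolution of a complex hydrophone response M(f) from a
# recorded voltage waveform. The smooth, zero-free response model below
# is invented purely for the round-trip demonstration.
fs = 400e6                                    # sample rate, Hz (assumed)
t = np.arange(2048) / fs
pressure = np.exp(-((t - 2e-6) / 2e-7) ** 2) * np.sin(2 * np.pi * 3.5e6 * t)

f = np.fft.rfftfreq(t.size, 1 / fs)
M = (1.0 + 0.3 * f / 80e6) * np.exp(-1j * 2 * np.pi * f * 5e-9)  # arb. units

voltage = np.fft.irfft(np.fft.rfft(pressure) * M, t.size)   # "measurement"
recovered = np.fft.irfft(np.fft.rfft(voltage) / M, t.size)  # deconvolution
```
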

  14. A non-linear piezoelectric actuator calibration using N-dimensional Lissajous figure

    NASA Astrophysics Data System (ADS)

    Albertazzi, A.; Viotti, M. R.; Veiga, C. L. N.; Fantin, A. V.

    2016-08-01

    Piezoelectric translators (PZTs) are very often used as phase shifters in interferometry. However, they typically present a non-linear behavior and strong hysteresis. The use of an additional resistive or capacitive sensor makes it possible to linearize the response of the PZT by feedback control. This approach works well, but makes the device more complex and expensive. A less expensive approach uses a non-linear calibration. In this paper, the authors used data from at least five interferograms to form N-dimensional Lissajous figures to establish the actual relationship between the applied voltages and the resulting phase shifts [1]. N-dimensional Lissajous figures are formed when N sinusoidal signals are combined in an N-dimensional space, where one signal is assigned to each axis. It can be verified that the resulting N-dimensional ellipse lies in a 2D plane. By fitting an ellipse equation to the resulting 2D ellipse it is possible to accurately compute the resulting phase value for each interferogram. In this paper, the relationship between the resulting phase shift and the applied voltage is simultaneously established for a set of 12 increments by a fourth-degree polynomial. The results in speckle interferometry show that, after two or three iterations, the calibration error is usually smaller than 1°.
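
The two-signal special case of the Lissajous idea can be sketched directly (the paper combines N >= 5 interferograms and fits a fourth-degree voltage-phase polynomial on top of this). For unit-amplitude, zero-offset signals x = cos(theta) and y = cos(theta + delta), eliminating theta shows the points lie on the ellipse x^2 - 2*cos(delta)*x*y + y^2 = sin(delta)^2, so a homogeneous conic fit recovers the phase step; unit amplitudes and zero offsets are assumptions of this sketch:

```python
import numpy as np

# Fit the conic a*x^2 + b*x*y + c*y^2 + f0 = 0 through the Lissajous
# samples as the SVD null vector, then read off the phase step delta
# from cos(delta) = -b / (2*sqrt(a*c)).
delta_true = 1.0                                    # rad, assumed step
theta = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
x = np.cos(theta)
y = np.cos(theta + delta_true)

D = np.column_stack([x * x, x * y, y * y, np.ones_like(x)])
_, _, vt = np.linalg.svd(D)
v = vt[-1]
if v[0] < 0:                                        # fix the overall sign
    v = -v
a, b, c, _ = v
delta_est = np.arccos(-b / (2.0 * np.sqrt(a * c)))
```
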

  15. On the long-term stability of calibration standards in different matrices.

    PubMed

    Kandić, A; Vukanac, I; Djurašević, M; Novković, D; Šešlak, B; Milošević, Z

    2012-09-01

    In order to assure Quality Control in accordance with ISO/IEC 17025, it was important, from a metrological point of view, to examine the long-term stability of previously prepared calibration standards. A comprehensive reconsideration of efficiency curves with respect to the ageing of calibration standards is presented in this paper. The calibration standards were re-used after a period of 5 years, and analysis of the results showed discrepancies in efficiency values.

  16. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system.

    PubMed

    Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Beavis, A W; Saunderson, J R

    2014-05-07

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer-simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given that it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that the resulting clinical image quality will be adequate for the required clinical task. However, this must be done with the close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.
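
Simple ROI-based forms of two of these metrics can be sketched as follows; exact definitions vary between protocols, so these basic versions (and the synthetic pixel statistics) are assumptions rather than the study's formulations:

```python
import numpy as np

# Basic region-of-interest (ROI) image quality metrics: SNR from a
# uniform region, CNR between a contrast insert and the background.
def roi_snr(roi):
    return roi.mean() / roi.std(ddof=1)

def roi_cnr(roi_signal, roi_background):
    return (roi_signal.mean() - roi_background.mean()) / roi_background.std(ddof=1)

# Synthetic pixel values standing in for phantom ROIs (invented numbers).
rng = np.random.default_rng(1)
background = rng.normal(100.0, 5.0, size=(64, 64))   # uniform phantom ROI
insert = rng.normal(120.0, 5.0, size=(64, 64))       # contrast insert ROI

snr = roi_snr(background)
cnr = roi_cnr(insert, background)
```

A calibration in the spirit of the study would then adjust the target detector air kerma at each tube voltage until the chosen metric stays constant.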

  17. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system

    NASA Astrophysics Data System (ADS)

    Moore, C. S.; Wood, T. J.; Avery, G.; Balcam, S.; Needler, L.; Beavis, A. W.; Saunderson, J. R.

    2014-05-01

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer-simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given that it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that the resulting clinical image quality will be adequate for the required clinical task. However, this must be done with the close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.

  18. The cytokinesis-blocked micronucleus assay: dose-response calibration curve, background frequency in the population and dose estimation.

    PubMed

    Rastkhah, E; Zakeri, F; Ghoranneviss, M; Rajabpour, M R; Farshidpour, M R; Mianji, F; Bayat, M

    2016-03-01

    An in vitro study of the dose response of human peripheral blood lymphocytes was conducted with the aim of constructing calibrated dose-response curves for biodosimetry covering up to 4 Gy (0.25-4 Gy) of gamma radiation. The cytokinesis-blocked micronucleus (CBMN) assay was employed to obtain the frequencies of micronuclei (MN) per binucleated cell in blood samples from 16 healthy donors (eight males and eight females) in two age ranges, 20-34 and 35-50 years. The data were used to construct separate calibration curves for men and women in the two age groups. An increase in micronucleus yield with dose following a linear-quadratic relationship was observed in all groups. To verify the applicability of the constructed calibration curve, MN yields were measured in peripheral blood lymphocytes of two actual overexposed subjects and three irradiated samples of unknown dose, and the results were compared with dose values obtained from measuring dicentric chromosomes. The comparison indicated good agreement between the dose estimates from the two techniques. The average baseline frequency of MN for the 130 healthy non-exposed donors (77 men and 55 women, 20-60 years old, divided into four age groups) ranged from 6 to 21 micronuclei per 1000 binucleated cells. Baseline MN frequencies were higher for women and for the older age group. The results of this study indicate that the CBMN assay is a reliable, simpler and valuable alternative method for biological dosimetry.
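The linear-quadratic dose response described above can be sketched numerically: fit MN yield Y = c + αD + βD² to calibration data, then invert the fitted curve to estimate an unknown dose. All coefficients and yields below are invented for illustration, not the paper's data.

```python
import numpy as np

# Illustrative dose points (Gy) and MN yields per binucleated cell;
# background (c), linear (a) and quadratic (b) coefficients are assumed.
doses = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0])
true_c, true_a, true_b = 0.012, 0.03, 0.06
yields = true_c + true_a * doses + true_b * doses**2

# Least-squares fit of the linear-quadratic model Y = c + a*D + b*D^2
# (np.polyfit returns coefficients highest degree first).
b, a, c = np.polyfit(doses, yields, deg=2)

def estimate_dose(y):
    """Invert Y = c + a*D + b*D^2, taking the physically meaningful root."""
    return (-a + np.sqrt(a**2 + 4.0 * b * (y - c))) / (2.0 * b)

print(round(estimate_dose(yields[-1]), 2))  # recovers 4.0 Gy
```

In practice the yields carry Poisson-type uncertainty, so a weighted or maximum-likelihood fit is normally preferred over plain least squares.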

  19. Non-linear behavior of fiber composite laminates

    NASA Technical Reports Server (NTRS)

    Hashin, Z.; Bagchi, D.; Rosen, B. W.

    1974-01-01

    The non-linear behavior of fiber composite laminates which results from lamina non-linear characteristics was examined. The analysis uses a Ramberg-Osgood representation of the lamina transverse and shear stress strain curves in conjunction with deformation theory to describe the resultant laminate non-linear behavior. A laminate having an arbitrary number of oriented layers and subjected to a general state of membrane stress was treated. Parametric results and comparison with experimental data and prior theoretical results are presented.
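The Ramberg-Osgood representation mentioned above can be written out directly. A minimal sketch, with assumed modulus, reference stress and exponent rather than lamina data from the paper:

```python
# Ramberg-Osgood stress-strain curve: total strain is an elastic part plus
# a power-law plastic part. E, sigma0 and n below are assumed values.
E = 10.0e9        # elastic modulus (Pa)
sigma0 = 60.0e6   # reference stress (Pa)
n = 5.0           # hardening exponent

def ramberg_osgood_strain(sigma):
    """Strain at stress sigma: sigma/E + (3/7)*(sigma0/E)*(sigma/sigma0)**n."""
    return sigma / E + (3.0 / 7.0) * (sigma0 / E) * (sigma / sigma0) ** n

eps_linear = sigma0 / E
print(ramberg_osgood_strain(sigma0) > eps_linear)  # softer than linear at sigma0
```

In the laminate analysis, curves of this form for the transverse and shear response of each lamina feed into the deformation-theory laminate stiffness at each load step.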

  20. Nonlinearly stacked low noise turbofan stator

    NASA Technical Reports Server (NTRS)

    Schuster, William B. (Inventor); Nolcheff, Nick A. (Inventor); Gunaraj, John A. (Inventor); Kontos, Karen B. (Inventor); Weir, Donald S. (Inventor)

    2009-01-01

    A nonlinearly stacked low noise turbofan stator vane having a characteristic curve that is characterized by a nonlinear sweep and a nonlinear lean is provided. The stator is in an axial fan or compressor turbomachinery stage that is comprised of a collection of vanes whose highly three-dimensional shape is selected to reduce rotor-stator and rotor-strut interaction noise while maintaining the aerodynamic and mechanical performance of the vane. The nonlinearly stacked low noise turbofan stator vane reduces noise associated with the fan stage of turbomachinery to improve environmental compatibility.

  1. Experiments on nonlinear acoustic landmine detection: Tuning curve studies of soil-mine and soil-mass oscillators

    NASA Astrophysics Data System (ADS)

    Korman, Murray S.; Witten, Thomas R.; Fenneman, Douglas J.

    2004-10-01

    Donskoy [SPIE Proc. 3392, 211-217 (1998); 3710, 239-246 (1999)] has suggested a nonlinear technique that can detect an acoustically compliant buried mine while remaining insensitive to relatively noncompliant targets. Airborne sound at two primary frequencies interacts with the soil and mine, generating combination frequencies that affect the vibration velocity at the surface. In the current experiments, f1 and f2 are closely spaced near a mine resonance and a laser Doppler vibrometer profiles the surface. In profiling, certain combination frequencies have a much greater contrast ratio than the linear profiles at f1 and f2, although some nonlinearity exists off the mine. Near resonance, the bending (a softening) of a family of tuning curves (over the mine) exhibits a linear relationship between peak velocity and corresponding frequency, which is characteristic of the nonlinear mesoscopic elasticity effects observed in geomaterials like rocks or granular media. Results are presented for inert plastic VS 1.6, VS 2.2 and M14 mines buried 3.6 cm in loose soil. Tuning curves for a rigid mass plate resting on a soil layer exhibit similar results, suggesting that nonresonant conditions off the mine are desirable. [Work supported by U.S. Army RDECOM, CERDEC, NVESD, Fort Belvoir, VA.]

  2. Nonlinear resonance of the rotating circular plate under static loads in magnetic field

    NASA Astrophysics Data System (ADS)

    Hu, Yuda; Wang, Tong

    2015-11-01

    The rotating circular plate is widely used in mechanical engineering, and in modern industry such plates often operate in electromagnetic fields under complex loads. To study the resonance of a rotating circular plate under static loads in a magnetic field, the nonlinear vibration equation of the spinning circular plate is derived from Hamilton's principle. The algebraic expression for the initial deflection and the magnetoelastic forced-vibration differential equation are obtained by applying the Galerkin integral method. By means of a modified multiple scales method, the strongly nonlinear steady-state amplitude-frequency response equation is established. Numerical calculation yields the amplitude-frequency characteristic curve and the curves of amplitude versus static load and excitation force. The influence of magnetic induction intensity, rotation speed and static load on the amplitude and nonlinear characteristics of the spinning plate is analyzed. The proposed research provides a theoretical reference for the study of nonlinear resonance of rotating plates in engineering.
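As a hedged stand-in for the plate equations, the multiple scales method applied to a generic Duffing-type oscillator yields a steady-state amplitude-frequency relation of the kind plotted in such studies. The damping, cubic stiffness and forcing parameters below are assumed, not derived from the paper:

```python
import numpy as np

# Generic multiple-scales steady-state response (illustrative stand-in):
#   [(sigma - 3*gamma*a**2/(8*omega))**2 + mu**2] * a**2 = (f/(2*omega))**2
# where sigma is the detuning, a the amplitude, mu damping, gamma the cubic
# stiffness and f the forcing amplitude. All values are assumptions.
mu, gamma, f, omega = 0.05, 1.0, 0.2, 1.0

def detuning_branches(a):
    """Solve the response equation for the two detuning branches at amplitude a."""
    backbone = 3.0 * gamma * a**2 / (8.0 * omega)
    rad = (f / (2.0 * omega * a))**2 - mu**2
    if rad < 0:
        return None  # this amplitude is not reachable at this forcing level
    return backbone - np.sqrt(rad), backbone + np.sqrt(rad)

lo, hi = detuning_branches(0.5)
print(lo < hi)  # the curve bends toward positive detuning (hardening)
```

Sweeping a over its reachable range and plotting the two branches traces exactly the bent amplitude-frequency characteristic curve described in the abstract.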

  3. Calibration of the Concorde radiation detection instrument and measurements at SST altitude.

    DOT National Transportation Integrated Search

    1971-06-01

    Performance tests were carried out on a solar cosmic radiation detection instrument developed for the Concorde SST. The instrument calibration curve (log dose-rate vs instrument reading) was reasonably linear from 0.004 to 1 rem/hr for both gamma rad...

  4. Feasibility analysis on integration of luminous environment measuring and design based on exposure curve calibration

    NASA Astrophysics Data System (ADS)

    Zou, Yuan; Shen, Tianxing

    2013-03-01

    Besides illumination calculation in architectural and luminous environment design, and in order to provide a wider variety of photometric data, this paper presents the combination of luminous environment design with the SM light environment measuring system, which contains a set of experimental devices, including light-information collecting and processing modules, and can offer various types of photometric data. For calibration, a simulation method was introduced: experimental scenes were rebuilt in 3ds Max Design, this computer-aided design software was calibrated in the simulated environment under various typical light sources, and the exposure curves of the rendered images were fitted. The operation sequence and points of attention for the simulated calibration were summarized, and connections between the Mental Ray renderer and the SM light environment measuring system were established. The paper thus offers a valuable reference for coordination between luminous environment design and the SM light environment measuring system.

  5. Non-linear dynamics of stable carbon and hydrogen isotope signatures based on a biological kinetic model of aerobic enzymatic methane oxidation.

    PubMed

    Vavilin, Vasily A; Rytov, Sergey V; Shim, Natalia; Vogt, Carsten

    2016-06-01

    The non-linear dynamics of stable carbon and hydrogen isotope signatures during methane oxidation by the methanotrophic bacteria Methylosinus sporium strain 5 (NCIMB 11126) and Methylocaldum gracile strain 14 L (NCIMB 11912) under copper-rich (8.9 µM Cu(2+)), copper-limited (0.3 µM Cu(2+)) or copper-regular (1.1 µM Cu(2+)) conditions has been described mathematically. The model was calibrated with experimental data on methane quantities and carbon and hydrogen isotope signatures of methane measured previously in laboratory microcosms by Feisthauer et al. [1]. M. gracile initially oxidizes methane by a particulate methane monooxygenase and assimilates formaldehyde via the ribulose monophosphate pathway, whereas M. sporium expresses a soluble methane monooxygenase under copper-limited conditions and uses the serine pathway for carbon assimilation. The model shows that dominant carbon and hydrogen isotope fractionation occurs during methane solubilization. An increase of biomass due to growth of methanotrophs causes an increase of particulate or soluble monooxygenase that, in turn, decreases the soluble methane concentration, intensifying methane solubilization. The specific maximum rate of methane oxidation υm was found to be 4.0 and 1.3 mM mM(-1) h(-1) for M. sporium under copper-rich and copper-limited conditions, respectively, and 0.5 mM mM(-1) h(-1) for M. gracile. The model shows that methane oxidation cannot be described by traditional first-order kinetics. The kinetic isotope fractionation ceases when methane concentrations decrease close to the threshold value. The applicability of the non-linear model was confirmed by the dynamics of the carbon isotope signature of carbon dioxide, which was first depleted and later enriched in (13)C. In contrast to the common linear Rayleigh graph, the dynamic curves allow identification of inappropriate isotope data caused by inaccurate substrate concentration analyses. The non-linear model adequately described the experimental data presented in the two-dimensional plot of hydrogen versus carbon stable isotope signatures.
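For reference, the classical Rayleigh description that such nonlinear kinetic models are contrasted against can be sketched in a few lines: the residual-substrate signature follows δ ≈ δ₀ + ε·ln(f), with f the remaining fraction. The initial signature and enrichment factor below are assumed illustrative values, not the study's:

```python
import numpy as np

# Classical Rayleigh fractionation of residual methane (illustrative values):
# delta ≈ delta0 + eps * ln(f), f = fraction of substrate remaining.
delta0_c = -60.0   # assumed initial delta13C of methane, per mil
eps_c = -20.0      # assumed carbon enrichment factor, per mil

def rayleigh_delta(f):
    """delta13C of the residual methane when a fraction f remains."""
    return delta0_c + eps_c * np.log(f)

fractions = np.array([1.0, 0.5, 0.1])
print([round(rayleigh_delta(f), 1) for f in fractions])
```

On a plot of δ versus ln(f) this model is a straight line; the abstract's point is that the full kinetic model departs from this line near the substrate threshold, which is what makes the dynamic curves diagnostic.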

  6. A formulation of tissue- and water-equivalent materials using the stoichiometric analysis method for CT-number calibration in radiotherapy treatment planning.

    PubMed

    Yohannes, Indra; Kolditz, Daniel; Langner, Oliver; Kalender, Willi A

    2012-03-07

    Tissue- and water-equivalent materials (TEMs) are widely used in quality assurance and calibration procedures, both in radiodiagnostics and radiotherapy. In radiotherapy, particularly, TEMs are often used for computed tomography (CT) number calibration in treatment planning systems. However, currently available TEMs may yield inaccurate calibration curves because of their limited ability to mimic the radiation characteristics of the corresponding real tissues in both the low- and high-energy ranges. We therefore propose a new formulation of TEMs using a stoichiometric analysis method to obtain TEMs for calibration purposes. We combined the stoichiometric calibration and the basic data method to compose base materials for TEMs matching standard real tissues from ICRU Reports 44 and 46. First, the CT numbers of six materials with known elemental compositions were measured to obtain constants for the stoichiometric calibration. The results of the stoichiometric calibration were used together with the basic data method to formulate new TEMs. These new TEMs were scanned to validate their CT numbers. The electron density and the stopping power calibration curves were also generated. The absolute differences of the measured CT numbers of the new TEMs were less than 4 HU for the soft tissues and less than 22 HU for the bone compared to the ICRU real tissues. Furthermore, the calculated relative electron density and electron and proton stopping powers of the new TEMs differed by less than 2% from the corresponding ICRU real tissues. The new TEMs formulated using the proposed technique simplify the calibration process while preserving the accuracy of the stoichiometric calibration.
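The relative electron density that a CT-number calibration curve maps onto can be computed directly from a material's elemental composition. A minimal sketch, using standard Z and A values but a hypothetical soft-tissue mixture (trace elements omitted, so the weights sum to slightly under 1):

```python
# Relative electron density from elemental weight fractions:
# electrons per gram = sum over elements of w_i * Z_i / A_i.
Z_A = {"H": (1, 1.008), "C": (6, 12.011), "N": (7, 14.007), "O": (8, 15.999)}

def electrons_per_gram(weights):
    """Mol of electrons per gram of mixture."""
    return sum(w * Z_A[el][0] / Z_A[el][1] for el, w in weights.items())

water = {"H": 0.1119, "O": 0.8881}
soft_tissue = {"H": 0.105, "C": 0.256, "N": 0.027, "O": 0.602}  # assumed mix

rho_soft = 1.03  # assumed mass density, g/cm^3 (water taken as 1.0)
rel_e_density = rho_soft * electrons_per_gram(soft_tissue) / electrons_per_gram(water)
print(round(rel_e_density, 3))
```

The stoichiometric method fits measured CT numbers of known materials to a parameterized attenuation model, then evaluates that model for real-tissue compositions; the quantity above is what each fitted HU value is paired with on the calibration curve.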

  7. V-I characteristics of X-ray conductivity and UV photoconductivity of ZnSe crystals

    NASA Astrophysics Data System (ADS)

    Degoda, V. Ya.; Alizadeh, M.; Kovalenko, N. O.; Pavlova, N. Yu.

    2018-02-01

    This article presents experimental V-I curves for high-resistance ZnSe single crystals at temperatures of 8, 85, 295, and 420 K under three intensities of X-ray and UV excitation (hvUV > Eg), and considers the major factors that affect the nonlinearity of the V-I curves of high-resistance ZnSe. We observe superlinear dependences at low temperatures, shifting to sublinear at room temperature and above; at all temperatures, however, the V-I curves have initial linear regions. Using these initial linear regions, we obtained the lifetime and mobility of free electrons. Comparison of the conductivities under X-ray and UV excitation revealed that most of the electron-hole pairs recombine in the local generation area, creating a scintillation pulse, while not participating in the conduction. In analyzing the nonlinearity of the V-I curve, two new processes were considered in a first approximation: an increase in the average thermal velocity of electrons under the action of the electric field, and the selectivity of the velocity direction of electrons delocalized from traps under the Poole-Frenkel effect. The observed nonlinearity is assumed to be due to a photoinduced contact potential difference.

  8. Geomorphically based predictive mapping of soil thickness in upland watersheds

    NASA Astrophysics Data System (ADS)

    Pelletier, Jon D.; Rasmussen, Craig

    2009-09-01

    The hydrologic response of upland watersheds is strongly controlled by soil (regolith) thickness. Despite the need to quantify soil thickness for input into hydrologic models, there is currently no widely used, geomorphically based method for doing so. In this paper we describe and illustrate a new method for predictive mapping of soil thicknesses using high-resolution topographic data, numerical modeling, and field-based calibration. The model framework works directly with input digital elevation model data to predict soil thicknesses assuming a long-term balance between soil production and erosion. Erosion rates in the model are quantified using one of three geomorphically based sediment transport models: nonlinear slope-dependent transport, nonlinear area- and slope-dependent transport, and nonlinear depth- and slope-dependent transport. The model balances soil production and erosion locally to predict a family of solutions corresponding to a range of values of two unconstrained model parameters. A small number of field-based soil thickness measurements can then be used to calibrate the local value of those unconstrained parameters, thereby constraining which solution is applicable at a particular study site. As an illustration, the model is used to predictively map soil thicknesses in two small, ~0.1 km2, drainage basins in the Marshall Gulch watershed, a semiarid drainage basin in the Santa Catalina Mountains of Pima County, Arizona. Field observations and calibration data indicate that the nonlinear depth- and slope-dependent sediment transport model is the most appropriate transport model for this site. The resulting framework provides a generally applicable, geomorphically based tool for predictive mapping of soil thickness using high-resolution topographic data sets.
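The local steady-state closure behind this kind of predictive mapping can be sketched with the commonly assumed exponential soil production function: production P0·exp(-h/h0) balancing a prescribed erosion rate E gives a closed-form thickness. P0 and h0 below are assumed illustrative calibration values, not the paper's:

```python
import numpy as np

# Steady-state soil thickness where exponential soil production balances
# erosion: P0 * exp(-h / h0) = E  =>  h = h0 * ln(P0 / E).
P0 = 0.1e-3   # assumed maximum production rate on bare bedrock (m/yr)
h0 = 0.5      # assumed e-folding production depth (m)

def steady_thickness(erosion_rate):
    """Thickness h (m); no soil persists where erosion outpaces production."""
    if erosion_rate >= P0:
        return 0.0
    return h0 * np.log(P0 / erosion_rate)

print(round(steady_thickness(0.05e-3), 3))  # 0.347 m at half the peak rate
```

In the full framework the erosion rate varies in space via one of the three transport laws, and the two unconstrained parameters scale this balance until it matches the field-measured thicknesses.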

  9. Sonar gas flux estimation by bubble insonification: application to methane bubble flux from seep areas in the outer Laptev Sea

    NASA Astrophysics Data System (ADS)

    Leifer, Ira; Chernykh, Denis; Shakhova, Natalia; Semiletov, Igor

    2017-06-01

    Sonar surveys provide an effective mechanism for mapping seabed methane flux emissions, with Arctic submerged permafrost seepage having great potential to significantly affect climate. We created in situ engineered bubble plumes from 40 m depth with fluxes spanning 0.019 to 1.1 L s-1 to derive the in situ calibration curve (Q(σ)). These nonlinear curves related flux (Q) to sonar return (σ) for a multibeam echosounder (MBES) and a single-beam echosounder (SBES) for a range of depths. The analysis demonstrated significant multiple-bubble acoustic scattering, precluding the use of a theoretical approach to derive Q(σ) from the product of the bubble σ(r) and the bubble size distribution, where r is bubble radius. Analysis of the bubble plume σ occurrence probability distribution function (Ψ(σ)) with respect to Q found that Ψ(σ) for weak σ is well described by a power law, likely correlated with small-bubble dispersion, and is strongly depth dependent. Ψ(σ) for strong σ was largely depth independent, consistent with bubble plume behavior in which the large bubbles in a plume remain in a focused core. Ψ(σ) was bimodal for all but the weakest plumes. Q(σ) was applied to sonar observations of natural Arctic Laptev Sea seepage after accounting for volumetric change with numerical bubble plume simulations, which addressed the different depths and gases of the calibration and seep plumes. Total mass fluxes (Qm) were 5.56, 42.73, and 4.88 mmol s-1 for MBES data, with good to reasonable agreement (4-37 %) between the SBES and MBES systems. The seepage flux occurrence probability distribution function (Ψ(Q)) was bimodal, with weak Ψ(Q) in each seep area well described by a power law, suggesting primarily minor bubble plumes. The mapped spatial patterns of seepage suggested subsurface geologic control, attributing methane fluxes to the current state of subsea permafrost.
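The empirical calibration step can be sketched as a power-law fit of known flux against sonar return in log-log space, then applied to survey returns. The (σ, Q) pairs below are synthetic, generated from an assumed power law, not the survey data:

```python
import numpy as np

# Engineered plumes of known flux give (sigma, Q) pairs; fit Q = A * sigma^p
# by linear regression in log-log space. Numbers are synthetic stand-ins.
flux = np.array([0.019, 0.1, 0.3, 0.6, 1.1])   # known plume fluxes, L/s
sigma = 2.5 * flux**0.8                         # synthetic sonar returns

p, logA = np.polyfit(np.log(sigma), np.log(flux), 1)

def flux_from_sigma(s):
    """Apply the fitted empirical calibration curve to a measured return."""
    return np.exp(logA) * s**p

print(round(flux_from_sigma(sigma[2]), 3))  # recovers the 0.3 L/s plume
```

The abstract's key caveat is that multiple scattering makes σ depend on the whole plume, not on single bubbles, which is why this empirical curve is fit rather than built up from single-bubble acoustics.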

  10. Investigation of absolute and relative response for three different liquid chromatography/tandem mass spectrometry systems; the impact of ionization and detection saturation.

    PubMed

    Nilsson, Lars B; Skansen, Patrik

    2012-06-30

    The investigations in this article were triggered by two observations in the laboratory; for some liquid chromatography/tandem mass spectrometry (LC/MS/MS) systems it was possible to obtain linear calibration curves for extreme concentration ranges and for some systems seemingly linear calibration curves gave good accuracy at low concentrations only when using a quadratic regression function. The absolute and relative responses were tested for three different LC/MS/MS systems by injecting solutions of a model compound and a stable isotope labeled internal standard. The analyte concentration range for the solutions was 0.00391 to 500 μM (128,000×), giving overload of the chromatographic column at the highest concentrations. The stable isotope labeled internal standard concentration was 0.667 μM in all samples. The absolute response per concentration unit decreased rapidly as higher concentrations were injected. The relative response, the ratio for the analyte peak area to the internal standard peak area, per concentration unit was calculated. For system 1, the ionization process was found to limit the response and the relative response per concentration unit was constant. For systems 2 and 3, the ion detection process was the limiting factor resulting in decreasing relative response at increasing concentrations. For systems behaving like system 1, simple linear regression can be used for any concentration range while, for systems behaving like systems 2 and 3, non-linear regression is recommended for all concentration ranges. Another consequence is that the ionization capacity limited systems will be insensitive to matrix ion suppression when an ideal internal standard is used while the detection capacity limited systems are at risk of giving erroneous results at high concentrations if the matrix ion suppression varies for different samples in a run. Copyright © 2012 John Wiley & Sons, Ltd.
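The second laboratory observation, seemingly linear curves needing quadratic regression for low-end accuracy, can be reproduced with a toy saturation model. The response function and concentrations below are illustrative, not the paper's data:

```python
import numpy as np

# A mild detection-saturation bend at the top of a wide calibration range
# skews an unweighted straight-line fit, wrecking the back-calculated low
# concentrations even though the curve looks nearly linear.
conc = np.array([0.004, 0.02, 0.1, 0.5, 2.5, 12.5, 62.5])   # uM, assumed
response = conc * (1.0 - 0.002 * conc)                       # toy saturation

slope, intercept = np.polyfit(conc, response, 1)
back_calc_low = (response[0] - intercept) / slope

# The linear fit back-calculates a nonsensical negative concentration at
# the lowest level; a quadratic fit would track the curvature instead.
print(back_calc_low < 0)
```

This is also why weighting (e.g. 1/x or 1/x²) is standard for wide-range bioanalytical calibration: without it, the high-concentration points dominate the fit.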

  11. Radiochromic film calibration for the RQT9 quality beam

    NASA Astrophysics Data System (ADS)

    Costa, K. C.; Gomez, A. M. L.; Alonso, T. C.; Mourao, A. P.

    2017-11-01

    When ionizing radiation interacts with matter it deposits energy. Radiation dosimetry is important for medical applications of ionizing radiation due to the increasing demand for diagnostic radiology and radiotherapy. Different dosimetry methods are used, each with its advantages and disadvantages. Film is a dose measurement method that records energy deposition through the darkening of its emulsion. Radiochromic films have low sensitivity to visible light and respond well to ionizing radiation exposure. The aim of this study is to obtain a calibration curve by irradiating radiochromic film strips, making it possible to relate the darkening of the film to the absorbed dose, in order to measure doses in experiments with a 120 kV X-ray beam in computed tomography (CT). Film strips of GAFCHROMIC XR-QA2 were exposed according to the RQT9 reference radiation, which defines an X-ray beam generated from a voltage of 120 kV. Strips were irradiated in the "Laboratório de Calibração de Dosímetros do Centro de Desenvolvimento da Tecnologia Nuclear" (LCD/CDTN) over a dose range of 5-30 mGy, corresponding to the values commonly used in CT scans. Digital images of the irradiated films were analyzed using the ImageJ software. The darkening responses of the film strips as a function of dose were observed, yielding numeric darkening values for each specific dose. From these values, a calibration curve was obtained that correlates the darkening of the film strip with dose values in mGy. The calibration curve equation provides a simplified method for obtaining absorbed dose values from digital images of irradiated radiochromic films. With the calibration curve, radiochromic films may be applied to dosimetry in CT experiments using a 120 kV X-ray beam, in order to improve CT image acquisition processes.
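A minimal version of this film calibration workflow: fit net darkening against known doses, then invert the fitted curve to read dose from a measured film. The darkening values below are invented for the sketch; real XR-QA2 response is also mildly nonlinear, so a polynomial or rational fit is often used instead of a straight line:

```python
import numpy as np

# Assumed calibration points: delivered dose (mGy) vs. net darkening
# extracted from the scanned film images (arbitrary units).
dose_mGy = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
net_darkening = np.array([0.021, 0.041, 0.060, 0.082, 0.101, 0.119])

slope, intercept = np.polyfit(dose_mGy, net_darkening, 1)

def dose_from_darkening(d):
    """Invert the fitted calibration line to estimate absorbed dose (mGy)."""
    return (d - intercept) / slope

print(round(dose_from_darkening(0.060), 1))
```

Once the curve is established, any strip irradiated in the CT experiment is scanned, its net darkening measured in the same way, and this inverse function gives the absorbed dose.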

  12. SU-E-T-137: The Response of TLD-100 in Mixed Fields of Photons and Electrons.

    PubMed

    Lawless, M; Junell, S; Hammer, C; DeWerd, L

    2012-06-01

    Thermoluminescent dosimeters are used routinely for dosimetric measurements of photon and electron fields. However, no work has been published characterizing TLDs for use in combined photon and electron fields. This work investigates the response of TLD-100 (LiF:Mg,Ti) in mixed fields of photon and electron beam qualities. TLDs were irradiated in a 6 MV photon beam, 6 MeV electron beam, and a NIST traceable cobalt-60 beam. TLDs were also irradiated in a mixed field of the electron and photon beams. All irradiations were normalized to absorbed dose to water as defined in the AAPM TG-51 report. The average response per dose (nC/Gy) for each linac beam quality was normalized to the average response per dose of the TLDs irradiated by the cobalt-60 standard. Irradiations were performed in a water tank and a Virtual Water™ phantom. Two TLD dose calibration curves for determining absorbed dose to water were generated using photon and electron field TLD response data. These individual beam quality dose calibration curves were applied to the TLDs irradiated in the mixed field. The TLD response in the mixed field was less sensitive than the response in the photon field and more sensitive than the response in the electron field. TLD determination of dose in the mixed field using the dose calibration curve generated by TLDs irradiated by photons resulted in an underestimation of the delivered dose, while the use of a dose calibration curve generated using electrons resulted in an overestimation of the delivered dose. The relative response of TLD-100 in mixed fields fell consistently between the photon and electron relative responses. When using TLD-100 in mixed fields, the user must account for this intermediate response to avoid an over- or underestimation of the dose due to calibration in a single photon or electron field. © 2012 American Association of Physicists in Medicine.
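The over- and underestimation described above can be illustrated with hypothetical sensitivities; the numbers are assumptions chosen only to show the bracketing behavior, not measured TLD-100 values:

```python
# Assumed relative TLD sensitivities (response per unit dose), with the
# mixed-field sensitivity intermediate between the single-field values,
# as the abstract reports. All numbers are hypothetical.
s_photon = 1.00
s_electron = 0.94
s_mixed = 0.97

reading = 97.0  # nC from a mixed-field irradiation (hypothetical)

dose_true = reading / s_mixed            # what was actually delivered
dose_photon_cal = reading / s_photon     # photon curve: underestimates
dose_electron_cal = reading / s_electron # electron curve: overestimates

print(dose_photon_cal < dose_true < dose_electron_cal)
```

The two single-field calibration curves therefore bracket the true dose, which is the practical takeaway for anyone using TLD-100 in a mixed field without a dedicated mixed-field calibration.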

  13. A statistical method for estimating rates of soil development and ages of geologic deposits: A design for soil-chronosequence studies

    USGS Publications Warehouse

    Switzer, P.; Harden, J.W.; Mark, R.K.

    1988-01-01

    A statistical method for estimating rates of soil development in a given region based on calibration from a series of dated soils is used to estimate ages of soils in the same region that are not dated directly. The method is designed specifically to account for sampling procedures and uncertainties that are inherent in soil studies. Soil variation and measurement error, uncertainties in calibration dates and their relation to the age of the soil, and the limited number of dated soils are all considered. Maximum likelihood (ML) is employed to estimate a parametric linear calibration curve, relating soil development to time or age on suitably transformed scales. Soil variation on a geomorphic surface of a certain age is characterized by replicate sampling of soils on each surface; such variation is assumed to have a Gaussian distribution. The age of a geomorphic surface is described by older and younger bounds. This technique allows age uncertainty to be characterized by either a Gaussian distribution or by a triangular distribution using minimum, best-estimate, and maximum ages. The calibration curve is taken to be linear after suitable (in certain cases logarithmic) transformations, if required, of the soil parameter and age variables. Soil variability, measurement error, and departures from linearity are described in a combined fashion using Gaussian distributions with variances particular to each sampled geomorphic surface and the number of sample replicates. Uncertainty in age of a geomorphic surface used for calibration is described using three parameters by one of two methods. In the first method, upper and lower ages are specified together with a coverage probability; this specification is converted to a Gaussian distribution with the appropriate mean and variance. 
In the second method, "absolute" older and younger ages are specified together with a most probable age; this specification is converted to an asymmetric triangular distribution with mode at the most probable age. The statistical variability of the ML-estimated calibration curve is assessed by a Monte Carlo method in which simulated data sets repeatedly are drawn from the distributional specification; calibration parameters are reestimated for each such simulation in order to assess their statistical variability. Several examples are used for illustration. The age of undated soils in a related setting may be estimated from the soil data using the fitted calibration curve. A second simulation to assess age estimate variability is described and applied to the examples. © 1988 International Association for Mathematical Geology.
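The Monte Carlo assessment of calibration-parameter variability can be sketched as follows, with an invented slope, invented ages and assumed Gaussian uncertainty parameters standing in for the full soil-chronosequence specification (which also allows triangular age distributions and uses maximum likelihood rather than ordinary least squares):

```python
import numpy as np

# Repeatedly simulate data sets from the assumed age and soil-variation
# distributions, refit the calibration line each time, and use the spread
# of refit slopes as its statistical variability. All numbers are invented.
rng = np.random.default_rng(1)

true_slope, true_icept = 0.8, 1.0
ages = np.array([1.0, 2.0, 3.0, 4.0])   # transformed ages of dated surfaces
age_sd, soil_sd = 0.1, 0.15             # assumed uncertainty parameters

slopes = []
for _ in range(2000):
    sim_ages = ages + rng.normal(0.0, age_sd, ages.size)
    sim_soil = true_icept + true_slope * sim_ages + rng.normal(0.0, soil_sd, ages.size)
    s, _icept = np.polyfit(sim_ages, sim_soil, 1)
    slopes.append(s)

slope_mean, slope_sd = np.mean(slopes), np.std(slopes)
print(round(slope_mean, 2), round(slope_sd, 3))
```

The same machinery, run a second time with the fitted curve held fixed, gives the variability of the age estimates for undated soils, mirroring the paper's second simulation.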

  14. The nonlinear interaction of Tollmien-Schlichting waves and Taylor-Goertler vortices in curved channel flows

    NASA Technical Reports Server (NTRS)

    Hall, P.; Smith, F. T.

    1987-01-01

    It is known that a viscous fluid flow with curved streamlines can support both Tollmien-Schlichting and Taylor-Goertler instabilities. In a situation where both modes are possible on the basis of linear theory, a nonlinear theory must be used to determine the effect of the interaction of the instabilities. The details of this interaction are of practical importance because of its possible catastrophic effects on mechanisms used for laminar flow control. This interaction is studied in the context of fully developed flows in curved channels. Apart from technical differences associated with boundary layer growth, the structures of the instabilities in this flow are very similar to those in the practically more important external boundary layer situation. The interaction is shown to have two distinct phases depending on the size of the disturbances. At very low amplitudes two oblique Tollmien-Schlichting waves interact with a Goertler vortex in such a manner that the amplitudes become infinite at a finite time. This type of interaction is described by ordinary differential amplitude equations with quadratic nonlinearities.
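A generic quadratic-interaction amplitude system (not the paper's actual amplitude equations or coefficients) shows how quadratic nonlinearity drives the finite-time blow-up mentioned above. With these initial conditions the system reduces to da/dt = a², whose exact solution diverges at t = 10:

```python
# Illustrative quadratic amplitude equations: da/dt = a*g, dg/dt = a**2.
# Here a stands in for a wave amplitude and g for a vortex amplitude;
# the coupling coefficients are set to 1 for the sketch.
def step(a, g, dt):
    """One explicit Euler step of the coupled amplitude equations."""
    return a + dt * a * g, g + dt * a * a

a, g, t, dt = 0.1, 0.1, 0.0, 1e-4
while a < 1e6 and t < 200.0:
    a, g = step(a, g, dt)
    t += dt

print(a >= 1e6, round(t, 1))  # the amplitude runs away near t ~ 10
```

Note that a² - g² is conserved by this system, so with equal initial amplitudes a = g throughout and the blow-up time can be checked analytically against the numerics.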

  15. Analysis of Classes of Superlinear Semipositone Problems with Nonlinear Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Morris, Quinn A.

    We study positive radial solutions for classes of steady state reaction diffusion problems on the exterior of a ball with both Dirichlet and nonlinear boundary conditions. We consider p-Laplacian problems (p > 1) with reaction terms which are superlinear at infinity and semipositone. In the case p = 2, using variational methods, we establish the existence of a solution, and via detailed analysis of the Green's function, we prove the positivity of the solution. In the case p ≠ 2, we again use variational methods to establish the existence of a solution, but the positivity of the solution is achieved via sophisticated a priori estimates. In the case p ≠ 2, the Green's function analysis is no longer available. Our results significantly enhance the literature on superlinear semipositone problems. Finally, we provide algorithms for the numerical generation of exact bifurcation curves for one-dimensional problems. In the autonomous case, we extend and analyze a quadrature method, and using nonlinear solvers in Mathematica, generate bifurcation curves. In the nonautonomous case, we employ shooting methods in Mathematica to generate bifurcation curves.

  16. Third order nonlinearity in pulsed laser deposited LiNbO{sub 3} thin films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumuluri, Anil; Rapolu, Mounika; Rao, S. Venugopal, E-mail: kcjrsp@uohyd.ernet.in, E-mail: svrsp@uohyd.ernet.in

    2016-05-06

    Lithium niobate (LiNbO{sub 3}) thin films were prepared using the pulsed laser deposition technique. Structural properties were examined by XRD, and the optical band gap of the thin films was determined from transmittance spectra recorded using a UV-Visible spectrophotometer. Nonlinear optical properties of the thin films were measured using the Z-scan technique. The films exhibited third-order nonlinearity, and the corresponding two-photon absorption coefficient, nonlinear refractive index, and real and imaginary parts of the nonlinear susceptibility were calculated from the open- and closed-aperture transmission curves. These studies suggest that the films have potential applications in nonlinear optical devices.

  17. Energy dispersive X-ray fluorescence (EDXRF) equipment calibration for multielement analysis of soil and rock samples

    NASA Astrophysics Data System (ADS)

    de Moraes, Alex Silva; Tech, Lohane; Melquíades, Fábio Luiz; Bastos, Rodrigo Oliveira

    2014-11-01

    Considering the importance of understanding the behavior of the elements in different natural and/or anthropic processes, this study aimed to verify the accuracy of a multielement analysis method for rock characterization using soil standards as calibration reference. An EDXRF equipment was used. The analyses were made on samples doped with known concentrations of Mn, Zn, Rb, Sr and Zr, to obtain the calibration curves, and on a certified rock sample to check the accuracy of the analytical curves. A set of rock samples from Rio Bonito, located in Figueira city, Paraná State, Brazil, was then analyzed. The concentration values obtained, in ppm, for Mn, Rb, Sr and Zr varied, respectively, from 175 to 1084, 7.4 to 268, 28 to 2247 and 15 to 761.

  18. Nonlinear Circuit Concepts -- An Elementary Experiment.

    ERIC Educational Resources Information Center

    Matolyak, J.; And Others

    1983-01-01

    Describes equipment and procedures for an experiment using diodes to introduce nonlinear electronic devices in a freshman physics laboratory. The experiment involves calculating and plotting the characteristic curve and load line to predict the operating point and comparing the prediction to experimentally determined values. Background information…

  19. TBA-like integral equations from quantized mirror curves

    NASA Astrophysics Data System (ADS)

    Okuyama, Kazumi; Zakany, Szabolcs

    2016-03-01

    Quantizing the mirror curve of certain toric Calabi-Yau (CY) three-folds leads to a family of trace class operators. The resolvent function of these operators is known to encode topological data of the CY. In this paper, we show that in certain cases, this resolvent function satisfies a system of non-linear integral equations whose structure is very similar to the Thermodynamic Bethe Ansatz (TBA) systems. This can be used to compute spectral traces, both exactly and as a semiclassical expansion. As a main example, we consider the system related to the quantized mirror curve of local P2. According to a recent proposal, the traces of this operator are determined by the refined BPS indices of the underlying CY. We use our non-linear integral equations to test that proposal.

  20. Inferring nonlinear gene regulatory networks from gene expression data based on distance correlation.

    PubMed

    Guo, Xiaobo; Zhang, Ye; Hu, Wenhao; Tan, Haizhu; Wang, Xueqin

    2014-01-01

    Nonlinear dependence is common in the regulatory mechanisms of gene regulatory networks (GRNs). Properly measuring or testing nonlinear dependence from real data is vital for reconstructing GRNs and understanding the complex regulatory mechanisms within the cellular system. A recently developed measure, the distance correlation (DC), has been shown to be powerful and computationally efficient at detecting nonlinear dependence in many situations. In this work, we incorporate the DC into the inference of GRNs from gene expression data without any underlying distribution assumptions. We propose three DC-based GRN inference algorithms: CLR-DC, MRNET-DC and REL-DC, and then compare them with mutual information (MI)-based algorithms by analyzing two simulated data sets, benchmark GRNs from the DREAM challenge and GRNs generated by the SynTReN network generator, as well as an experimentally determined SOS DNA repair network in Escherichia coli. According to both the receiver operating characteristic (ROC) curve and the precision-recall (PR) curve, our proposed algorithms significantly outperform the MI-based algorithms in GRN inference.
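
    The distance correlation at the core of these algorithms can be computed directly from double-centered pairwise distance matrices. A sketch of the standard sample estimator (not the authors' code), illustrating that DC picks up nonlinear dependence that Pearson correlation misses:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation between two 1-D samples (standard estimator)."""
    x = np.asarray(x, dtype=float)[:, None]
    y = np.asarray(y, dtype=float)[:, None]
    a = np.abs(x - x.T)  # pairwise distance matrices
    b = np.abs(y - y.T)
    # Double-center each distance matrix
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = (A * B).mean()   # squared distance covariance
    dvar_x = (A * A).mean()
    dvar_y = (B * B).mean()
    if dvar_x == 0.0 or dvar_y == 0.0:
        return 0.0
    return float(np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y)))

t = np.linspace(-1.0, 1.0, 200)
dc_quadratic = distance_correlation(t, t ** 2)       # well above zero
dc_linear = distance_correlation(t, 2.0 * t + 1.0)   # exactly 1 for affine maps
```

    For the quadratic relation the Pearson correlation vanishes by symmetry, while the distance correlation remains clearly positive, which is why DC-based scores can recover regulatory links that MI or correlation thresholds miss.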

  1. Inferring Nonlinear Gene Regulatory Networks from Gene Expression Data Based on Distance Correlation

    PubMed Central

    Guo, Xiaobo; Zhang, Ye; Hu, Wenhao; Tan, Haizhu; Wang, Xueqin

    2014-01-01

    Nonlinear dependence is common in the regulatory mechanisms of gene regulatory networks (GRNs). Properly measuring or testing nonlinear dependence from real data is vital for reconstructing GRNs and understanding the complex regulatory mechanisms within the cellular system. A recently developed measure, the distance correlation (DC), has been shown to be powerful and computationally efficient at detecting nonlinear dependence in many situations. In this work, we incorporate the DC into the inference of GRNs from gene expression data without any underlying distribution assumptions. We propose three DC-based GRN inference algorithms: CLR-DC, MRNET-DC and REL-DC, and then compare them with mutual information (MI)-based algorithms by analyzing two simulated data sets, benchmark GRNs from the DREAM challenge and GRNs generated by the SynTReN network generator, as well as an experimentally determined SOS DNA repair network in Escherichia coli. According to both the receiver operating characteristic (ROC) curve and the precision-recall (PR) curve, our proposed algorithms significantly outperform the MI-based algorithms in GRN inference. PMID:24551058

  2. Stationary waves on nonlinear quantum graphs. II. Application of canonical perturbation theory in basic graph structures.

    PubMed

    Gnutzmann, Sven; Waltner, Daniel

    2016-12-01

    We consider exact and asymptotic solutions of the stationary cubic nonlinear Schrödinger equation on metric graphs. We focus on some basic example graphs. The asymptotic solutions are obtained using the canonical perturbation formalism developed in our earlier paper [S. Gnutzmann and D. Waltner, Phys. Rev. E 93, 032204 (2016)2470-004510.1103/PhysRevE.93.032204]. For closed example graphs (interval, ring, star graph, tadpole graph), we calculate spectral curves and show how the description of spectra reduces to known characteristic functions of linear quantum graphs in the low-intensity limit. Analogously for open examples, we show how nonlinear scattering of stationary waves arises and how it reduces to known linear scattering amplitudes at low intensities. In the short-wavelength asymptotics we discuss how genuine nonlinear effects may be described using the leading order of canonical perturbation theory: bifurcation of spectral curves (and the corresponding solutions) in closed graphs and multistability in open graphs.

  3. Experimental Determination of the HPGe Spectrometer Efficiency Calibration Curves for Various Sample Geometry for Gamma Energy from 50 keV to 2000 keV

    NASA Astrophysics Data System (ADS)

    Saat, Ahmad; Hamzah, Zaini; Yusop, Mohammad Fariz; Zainal, Muhd Amiruddin

    2010-07-01

    The detection efficiency of a gamma-ray spectrometry system depends on, among other factors, the gamma-ray energy, the sample and detector geometry, and the volume and density of the samples. In the present study, efficiency calibration curves of a newly acquired (August 2008) HPGe gamma-ray spectrometry system were determined for four sample container geometries, namely Marinelli beaker, disc, cylindrical beaker and vial, normally used for the determination of gamma-ray activity in environmental samples. Calibration standards were prepared by homogenizing known amounts of analytical-grade uranium trioxide ore in plain flour in the respective containers. The ore produces gamma-rays of energy ranging from 53 keV to 1001 keV. Analytical-grade potassium chloride was used to determine the detection efficiency for the 1460 keV gamma-ray emitted by the potassium isotope K-40. Plots of detection efficiency against gamma-ray energy for the four sample geometries were found to fit smoothly to a general form ε = AE^a + BE^b, where ε is the efficiency, E is the energy in keV, and A, B, a and b are constants that depend on the sample geometry. All calibration curves showed the presence of a "knee" at about 180 keV. Comparison of the four geometries showed that the Marinelli beaker had the highest efficiency, followed by the cylindrical beaker and vial, while the disc geometry showed the lowest.
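
    The two-power-law form quoted above can be fitted to tabulated efficiency points with an ordinary nonlinear least-squares routine. A sketch on synthetic data (the energies match the record, but the efficiency values and starting parameters are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def efficiency(E, A, a, B, b):
    # General form reported above: eps = A*E**a + B*E**b
    return A * E ** a + B * E ** b

# Illustrative (not measured) energy/efficiency pairs spanning 53-1460 keV
E = np.array([53.0, 92.0, 186.0, 352.0, 609.0, 1001.0, 1460.0])
eps_true = efficiency(E, 2e-4, 0.5, 30.0, -1.2)
rng = np.random.default_rng(1)
eps_meas = eps_true * (1.0 + rng.normal(0.0, 0.01, E.size))  # ~1% scatter

popt, _ = curve_fit(efficiency, E, eps_meas,
                    p0=(1e-4, 0.6, 20.0, -1.0), maxfev=20000)
pred = efficiency(E, *popt)
```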

  4. Improved Strategies and Optimization of Calibration Models for Real-time PCR Absolute Quantification

    EPA Science Inventory

    Real-time PCR absolute quantification applications rely on the use of standard curves to make estimates of DNA target concentrations in unknown samples. Traditional absolute quantification approaches dictate that a standard curve must accompany each experimental run. However, t...

  5. Holographic Entanglement Entropy, SUSY & Calibrations

    NASA Astrophysics Data System (ADS)

    Colgáin, Eoin Ó.

    2018-01-01

    Holographic calculations of entanglement entropy boil down to identifying minimal surfaces in curved spacetimes. This generically entails solving second-order equations. For higher-dimensional AdS geometries, we demonstrate that supersymmetry and calibrations reduce the problem to first-order equations. We note that minimal surfaces corresponding to disks preserve supersymmetry, whereas strips do not.

  6. Developing new extension of GafChromic RTQA2 film to patient quality assurance field using a plan-based calibration method

    NASA Astrophysics Data System (ADS)

    Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Chen, Junchao; Hu, Weigang

    2015-10-01

    GafChromic RTQA2 film is a type of radiochromic film designed for light field and radiation field alignment. The aim of this study is to extend the application of RTQA2 film to the measurement of patient-specific quality assurance (QA) fields as a 2D relative dosimeter. Pre-irradiated and post-irradiated RTQA2 films were scanned in reflection mode using a flatbed scanner. A plan-based calibration (PBC) method utilized the mapping information of the calculated dose image and film grayscale image to create a dose versus pixel value calibration model. This model was used to calibrate the film grayscale image to the film relative dose image. The dose agreement between calculated and film dose images was analyzed by gamma analysis. To evaluate the feasibility of this method, eight clinically approved RapidArc cases (one abdomen cancer and seven head-and-neck cancer patients) were tested. Moreover, three MLC gap errors and two MLC transmission errors were introduced into the eight RapidArc cases to test the robustness of the method. The PBC method could overcome the film lot and post-exposure time variations of RTQA2 film to get a good 2D relative dose calibration result. The mean gamma passing rate of the eight patients was 97.90%  ±  1.7%, which showed good dose consistency between calculated and film dose images. In the error test, the PBC method could over-calibrate the film, meaning that some dose error in the film would be falsely corrected to keep the dose in the film consistent with the dose in the calculated dose image. This would then lead to a false negative result in the gamma analysis. In these cases, the derivative of the dose calibration curve would be non-monotonic, which would expose the dose abnormality. By using the PBC method, we extended the application of the more economical RTQA2 film to patient-specific QA. 
The robustness of the PBC method has been improved by analyzing the monotonicity of the derivative of the calibration curve.
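
    The monotonicity screen described above reduces to checking whether the numerical derivative of the dose-vs-pixel-value calibration curve changes direction. A minimal sketch (function name and test curves are hypothetical illustrations, not the authors' implementation):

```python
import numpy as np

def derivative_is_monotonic(pixel_value, dose):
    """Return True when d(dose)/d(pixel value) is monotonic along the curve.

    A non-monotonic derivative is the anomaly flag described above: it signals
    that the plan-based calibration has locally 'over-corrected' the film.
    """
    d1 = np.gradient(dose, pixel_value)  # first derivative of the curve
    d2 = np.diff(d1)                     # changes of that derivative
    return bool(np.all(d2 >= 0.0) or np.all(d2 <= 0.0))

pv = np.linspace(0.2, 0.9, 50)  # normalized pixel values (illustrative)
smooth = derivative_is_monotonic(pv, (pv - 0.1) ** 2)
kinked = derivative_is_monotonic(pv, (pv - 0.1) ** 2 + 0.05 * np.sin(40.0 * pv))
```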

  7. On the performance of laser-induced breakdown spectroscopy for direct determination of trace metals in lubricating oils

    NASA Astrophysics Data System (ADS)

    Zheng, Lijuan; Cao, Fan; Xiu, Junshan; Bai, Xueshi; Motto-Ros, Vincent; Gilon, Nicole; Zeng, Heping; Yu, Jin

    2014-09-01

    Laser-induced breakdown spectroscopy (LIBS) provides a technique to directly determine metals in viscous liquids and especially in lubricating oils. A specific laser ablation configuration, a thin layer of oil applied on the surface of a pure aluminum target, was used to evaluate the analytical figures of merit of LIBS for elemental analysis of lubricating oils. The analyzed oils included a certified 75 cSt blank mineral oil, 8 virgin lubricating oils (synthetic, semi-synthetic, or mineral, from 2 different manufacturers), 5 used oils (corresponding to 5 of the 8 virgin oils), and a cooking oil. The certified blank oil and 4 of the virgin lubricating oils were spiked with metallo-organic standards to obtain laboratory reference samples with different oil matrices. We first established calibration curves for 3 elements, Fe, Cr and Ni, with the 5 sets of laboratory reference samples in order to evaluate the matrix effect by comparison among the different oils. Our results show that generalized calibration curves can be built for the 3 analyzed elements by merging the measured line intensities of the 5 sets of spiked oil samples. Such merged calibration curves, with good correlation of the merged data, are only possible if no significant matrix effect affects the measurements of the different oils. In the second step, we spiked the remaining 4 virgin oils and the cooking oil with Fe, Cr and Ni. The accuracy and precision of the concentration determination in these prepared oils were then evaluated using the generalized calibration curves. Finally, the concentrations of metallic elements in the 5 used lubricating oils were determined.
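
    Building a generalized calibration curve amounts to pooling the (concentration, line intensity) points from all matrices into one regression and checking that the merged data still correlate well. A sketch with entirely hypothetical numbers (the set names and intensities are illustrative, not the paper's data):

```python
import numpy as np

# Hypothetical spiked-oil data: (concentration in ppm, Fe line intensity)
# per oil matrix; all values are illustrative.
sets = {
    "blank_75cSt": (np.array([0.0, 20.0, 50.0, 100.0]),
                    np.array([3.0, 45.0, 108.0, 212.0])),
    "synthetic_A": (np.array([0.0, 20.0, 50.0, 100.0]),
                    np.array([5.0, 47.0, 104.0, 209.0])),
    "mineral_B":   (np.array([0.0, 20.0, 50.0, 100.0]),
                    np.array([2.0, 44.0, 110.0, 215.0])),
}

# Merge all sets into one generalized calibration curve
c = np.concatenate([v[0] for v in sets.values()])
I = np.concatenate([v[1] for v in sets.values()])
slope, intercept = np.polyfit(c, I, 1)

# Correlation of the merged data: close to 1 only if matrix effects are small
r = float(np.corrcoef(c, I)[0, 1])

def concentration(intensity):
    """Invert the merged calibration curve for an unknown sample."""
    return (intensity - intercept) / slope
```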

  8. General Nonlinear Ferroelectric Model v. Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Wen; Robbins, Josh

    2017-03-14

    The purpose of this software is to function as a generalized ferroelectric material model. The material model is designed to work with existing finite element packages by providing updated information on material properties that are nonlinear and dependent on loading history. The two major nonlinear phenomena this model captures are domain-switching and phase transformation. The software itself does not contain potentially sensitive material information and instead provides a framework for different physical phenomena observed within ferroelectric materials. The model is calibrated to a specific ferroelectric material through input parameters provided by the user.

  9. An Investigation of Acoustic Cavitation Produced by Pulsed Ultrasound

    DTIC Science & Technology

    1987-12-01

    PVDF hydrophone sensitivity calibration curves; description of test and calibration technique: the reciprocity technique was chosen for the calibration. Naval Postgraduate School, Monterey, California. Thesis: An Investigation of Acoustic Cavitation Produced by Pulsed Ultrasound, by Robert L. Bruce, Jr.

  10. Gradient nonlinearity calibration and correction for a compact, asymmetric magnetic resonance imaging gradient system.

    PubMed

    Tao, S; Trzasko, J D; Gunter, J L; Weavers, P T; Shu, Y; Huston, J; Lee, S K; Tan, E T; Bernstein, M A

    2017-01-21

    Due to engineering limitations, the spatial encoding gradient fields in conventional magnetic resonance imaging cannot be perfectly linear and always contain higher-order, nonlinear components. If ignored during image reconstruction, gradient nonlinearity (GNL) manifests as image geometric distortion. Given an estimate of the GNL field, this distortion can be corrected to a degree proportional to the accuracy of the field estimate. The GNL of a gradient system is typically characterized using a spherical harmonic polynomial model with model coefficients obtained from electromagnetic simulation. Conventional whole-body gradient systems are symmetric in design; typically, only odd-order terms up to the 5th order are required for GNL modeling. Recently, a high-performance, asymmetric gradient system was developed, which exhibits more complex GNL that requires higher-order terms, both odd and even, for accurate modeling. This work characterizes the GNL of this system using an iterative calibration method and a fiducial phantom used in ADNI (the Alzheimer's Disease Neuroimaging Initiative). The phantom was scanned at different locations inside the 26-cm diameter spherical volume of this gradient, and the positions of the fiducials in the phantom were estimated. An iterative calibration procedure was used to identify the model coefficients that minimize the mean-squared error between the true fiducial positions and the positions estimated from images corrected using these coefficients. To examine the effect of higher-order and even-order terms, this calibration was performed using spherical harmonic polynomials of different orders up to the 10th, including both even- and odd-order terms or odd-order terms only. The results showed that the model coefficients of this gradient can be successfully estimated. 
The residual root-mean-squared-error after correction using up to the 10th-order coefficients was reduced to 0.36 mm, yielding spatial accuracy comparable to conventional whole-body gradients. The even-order terms were necessary for accurate GNL modeling. In addition, the calibrated coefficients improved image geometric accuracy compared with the simulation-based coefficients.
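
    The calibration loop described above can be sketched in one dimension: model the distortion with a few polynomial coefficients, correct the measured fiducial positions with the current model, and choose the coefficients that minimize the residual against the true positions. This is a toy illustration under assumed coefficients, not the paper's 3-D spherical-harmonic procedure:

```python
import numpy as np
from scipy.optimize import least_squares

def distort(x, c3, c5):
    # Nonlinear gradient field: measured position = x + c3*x^3 + c5*x^5
    return x + c3 * x ** 3 + c5 * x ** 5

true_pos = np.linspace(-0.12, 0.12, 25)   # fiducial positions (m), illustrative
measured = distort(true_pos, 0.8, 15.0)   # positions estimated from images

def residual(coeffs):
    # Correct the measured positions with the current model and compare
    # against the known fiducial positions.
    corrected = measured - (coeffs[0] * measured ** 3 + coeffs[1] * measured ** 5)
    return corrected - true_pos

fit = least_squares(residual, x0=(0.0, 0.0))
rmse = float(np.sqrt(np.mean(residual(fit.x) ** 2)))
rmse_uncorrected = float(np.sqrt(np.mean((measured - true_pos) ** 2)))
```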

  11. Modified Hyperspheres Algorithm to Trace Homotopy Curves of Nonlinear Circuits Composed by Piecewise Linear Modelled Devices

    PubMed Central

    Vazquez-Leal, H.; Jimenez-Fernandez, V. M.; Benhammouda, B.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Ramirez-Pinero, A.; Marin-Hernandez, A.; Huerta-Chua, J.

    2014-01-01

    We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that the HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves to replace the multidimensional interpolation and fine-tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight-line equation. PMID:25184157
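
    Because a PWL homotopy trajectory is just a chain of straight segments, any point on it can be located exactly with the parametric straight-line equation p = p0 + t·(p1 − p0) on the active segment. A small self-contained sketch of that idea (the function name and the example path are hypothetical):

```python
import numpy as np

def point_on_pwl_path(vertices, s):
    """Point at arclength fraction s (0..1) along a piecewise-linear trajectory.

    Replaces multidimensional interpolation with the parametric straight-line
    equation p = p0 + t*(p1 - p0) on the active segment.
    """
    vertices = np.asarray(vertices, dtype=float)
    seg = np.diff(vertices, axis=0)        # segment vectors
    seg_len = np.linalg.norm(seg, axis=1)  # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])
    target = s * cum[-1]                   # arclength to reach
    i = int(np.searchsorted(cum, target, side="right")) - 1
    i = min(i, len(seg) - 1)               # clamp s = 1 to the last segment
    t = (target - cum[i]) / seg_len[i]
    return vertices[i] + t * seg[i]
```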

  12. Non-linear flow law of rockglacier creep determined from geomorphological observations: A case study from the Murtèl rockglacier (Engadin, SE Switzerland)

    NASA Astrophysics Data System (ADS)

    Frehner, Marcel; Amschwand, Dominik; Gärtner-Roer, Isabelle

    2016-04-01

    Rockglaciers consist of unconsolidated rock fragments (silt/sand-rock boulders) with interstitial ice; hence their creep behavior (i.e., rheology) may deviate from the simple and well-known flow-laws for pure ice. Here we constrain the non-linear viscous flow law that governs rockglacier creep based on geomorphological observations. We use the Murtèl rockglacier (upper Engadin valley, SE Switzerland) as a case study, for which high-resolution digital elevation models (DEM), time-lapse borehole deformation data, and geophysical soundings exist that reveal the exterior and interior architecture and dynamics of the landform. Rockglaciers often feature a prominent furrow-and-ridge topography. For the Murtèl rockglacier, Frehner et al. (2015) reproduced the wavelength, amplitude, and distribution of the furrow-and-ridge morphology using a linear viscous (Newtonian) flow model. Arenson et al. (2002) presented borehole deformation data, which highlight the basal shear zone at about 30 m depth and a curved deformation profile above the shear zone. Similarly, the furrow-and-ridge morphology also exhibits a curved geometry in map view. Hence, the surface morphology and the borehole deformation data together describe a curved 3D geometry, which is close to, but not quite parabolic. We use a high-resolution DEM to quantify the curved geometry of the Murtèl furrow-and-ridge morphology. We then calculate theoretical 3D flow geometries using different non-linear viscous flow laws. By comparing them to the measured curved 3D geometry (i.e., both surface morphology and borehole deformation data), we can determine the most adequate flow-law that fits the natural data best. Linear viscous models result in perfectly parabolic flow geometries; non-linear creep leads to localized deformation at the sides and bottom of the rockglacier while the deformation in the interior and top are less intense. In other words, non-linear creep results in non-parabolic flow geometries. 
Neither the linear model (power-law exponent n=1) nor the strongly non-linear model (n=10) matches the measured data well. However, the moderately non-linear models (n=2-3) match the data quite well, indicating that the creep of the Murtèl rockglacier is governed by a moderately non-linear viscous flow law with a power-law exponent close to that of pure ice. Our results are crucial for improving existing numerical models of rockglacier flow, which currently use simplified (i.e., linear viscous) flow laws. References: Arenson L., Hoelzle M., and Springman S., 2002: Borehole deformation measurements and internal structure of some rock glaciers in Switzerland, Permafrost and Periglacial Processes 13, 117-135. Frehner M., Ling A.H.M., and Gärtner-Roer I., 2015: Furrow-and-ridge morphology on rockglaciers explained by gravity-driven buckle folding: A case study from the Murtèl rockglacier (Switzerland), Permafrost and Periglacial Processes 26, 57-66.

  13. Nonlinear Filtering Effects of Reservoirs on Flood Frequency Curves at the Regional Scale: RESERVOIRS FILTER FLOOD FREQUENCY CURVES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Wei; Li, Hong-Yi; Leung, L. Ruby

    Anthropogenic activities, e.g., reservoir operation, may alter the characteristics of the Flood Frequency Curve (FFC) and challenge the basic assumption of stationarity used in flood frequency analysis. This paper presents a combined data-modeling analysis of the nonlinear filtering effects of reservoirs on FFCs over the contiguous United States. A dimensionless Reservoir Impact Index (RII), defined as the total upstream reservoir storage capacity normalized by the annual streamflow volume, is used to quantify reservoir regulation effects. Analyses are performed for 388 river stations with an average record length of 50 years. The first two moments of the FFC, the mean annual maximum flood (MAF) and the coefficient of variation (CV), are calculated for the pre- and post-dam periods and compared to elucidate the reservoir regulation effects as a function of RII. It is found that MAF generally decreases with increasing RII but stabilizes when RII exceeds a threshold value, and that CV increases with RII until a threshold value beyond which CV decreases with RII. The processes underlying the nonlinear threshold behavior of MAF and CV are investigated using three reservoir models with different levels of complexity. All models capture the non-linear relationships of MAF and CV with RII, suggesting that the basic flood control function of reservoirs is key to the non-linear relationships. The relative roles of reservoir storage capacity, operation objectives, available storage prior to a flood event, and reservoir inflow pattern are systematically investigated. Our findings may help improve flood-risk assessment and mitigation in regulated river systems at the regional scale.
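
    The RII defined above is a simple ratio; a minimal sketch with purely illustrative reservoir volumes (not values from the study):

```python
import numpy as np

def reservoir_impact_index(storage_capacities_m3, annual_flow_volume_m3):
    """Dimensionless RII: total upstream storage capacity normalized by the
    annual streamflow volume at the gauge."""
    return float(np.sum(storage_capacities_m3) / annual_flow_volume_m3)

# Illustrative gauge: two upstream reservoirs on a river carrying ~2 km^3/yr
rii = reservoir_impact_index([4.0e8, 6.0e8], 2.0e9)
```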

  14. Temporal Gain Correction for X-Ray Calorimeter Spectrometers

    NASA Technical Reports Server (NTRS)

    Porter, F. S.; Chiao, M. P.; Eckart, M. E.; Fujimoto, R.; Ishisaki, Y.; Kelley, R. L.; Kilbourne, C. A.; Leutenegger, M. A.; McCammon, D.; Mitsuda, K.

    2016-01-01

    Calorimetric X-ray detectors are very sensitive to their environment. The boundary conditions can have a profound effect on the gain, including the heat sink temperature, the local radiation temperature, the bias, and the temperature of the readout electronics. Any variation in the boundary conditions can cause temporal variations in the gain of the detector and compromise both the energy scale and the resolving power of the spectrometer. Most production X-ray calorimeter spectrometers, both on the ground and in space, have some means of tracking the gain as a function of time, often using a calibration spectral line. For small gain changes, a linear stretch correction is often sufficient. However, the detectors are intrinsically non-linear, and the event analysis (i.e., shaping, optimal filters, etc.) often adds additional non-linearity. Thus, for large gain variations, or when the best possible precision is required, a linear stretch correction is not sufficient. Here, we discuss a new correction technique based on non-linear interpolation of the energy-scale functions. Using Astro-H SXS calibration data, we demonstrate that the correction can recover the X-ray energy to better than 1 part in 10^4 over the entire spectral band to above 12 keV, even for large-scale gain variations. This method will be used to correct any temporal drift of the on-orbit per-pixel gain using on-board calibration sources for the SXS instrument on the Astro-H observatory.
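
    The idea of interpolating between full energy-scale functions, rather than applying a linear stretch, can be sketched as follows. The two gain-state curves, the pulse height of 455, and the 5.9 keV reference line are illustrative assumptions, not the instrument's calibration:

```python
import numpy as np
from scipy.optimize import brentq

# Two calibrated energy-scale functions E(pulse height), measured at two
# different gain states; curve shapes are illustrative.
ph = np.linspace(0.0, 1000.0, 101)
E_lo = 0.012 * ph + 2.0e-6 * ph ** 2   # keV, low-gain state
E_hi = 0.0126 * ph + 2.1e-6 * ph ** 2  # keV, high-gain state

def energy(ph_sample, alpha):
    """Nonlinear interpolation between the two full energy-scale curves.

    alpha in [0, 1] tracks the gain drift; unlike a linear stretch, the whole
    nonlinear curve is interpolated, so the correction holds across the band.
    """
    E_curve = (1.0 - alpha) * E_lo + alpha * E_hi
    return float(np.interp(ph_sample, ph, E_curve))

# Fix alpha for a time bin by forcing the calibration line observed at
# pulse height 455 to its reference energy of 5.9 keV.
alpha = brentq(lambda a: energy(455.0, a) - 5.9, 0.0, 1.0)
```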

  15. Radiance calibration of the High Altitude Observatory white-light coronagraph on Skylab

    NASA Technical Reports Server (NTRS)

    Poland, A. I.; Macqueen, R. M.; Munro, R. H.; Gosling, J. T.

    1977-01-01

    The processing of over 35,000 photographs of the solar corona obtained by the white-light coronagraph on Skylab is described. Calibration of the vast amount of data was complicated by the temporal effects of radiation fog and latent image loss. These effects were compensated for by imaging a calibration step wedge on each data frame. Absolute calibration of the wedge was accomplished through comparison with a set of previously calibrated glass opal filters. The analysis employed average characteristic curves derived from measurements of step wedges from many frames within a given camera half-load. The net absolute accuracy of a given radiance measurement is estimated to be 20%.

  16. Inverse Diffusion Curves Using Shape Optimization.

    PubMed

    Zhao, Shuang; Durand, Fredo; Zheng, Changxi

    2018-07-01

    The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.

  17. Seismic rehabilitation of skewed and curved bridges using a new generation of buckling restrained braces : research brief.

    DOT National Transportation Integrated Search

    2016-12-01

    Damage to skewed and curved bridges during strong earthquakes is documented. This project investigates whether such damage could be mitigated by using buckling restrained braces. Nonlinear models show that using buckling restrained braces to mitigate...

  18. The Nonlinear Jaynes-Cummings Model for the Multiphoton Transition

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-Jing; Lu, Jing-Bin; Zhang, Si-Qi; Liu, Ji-Ping; Li, Hong; Liang, Yu; Ma, Ji; Weng, Yi-Jiao; Zhang, Qi-Rui; Liu, Han; Zhang, Xiao-Ru; Wu, Xiang-Yao

    2018-01-01

    Using the nonlinear Jaynes-Cummings model, we have studied the quantum entanglement between an atom and a light field for multiphoton transitions in a nonlinear medium, and investigated the effect of the transition photon number N and the nonlinear coefficient χ on the degree of entanglement. We present curves of the entanglement degree under time evolution and find that as the transition photon number N increases, the entanglement-degree oscillations become faster. When the nonlinear coefficient χ > 0, the oscillations speed up and the nonlinear term is a disadvantage for atom-field entanglement; when χ < 0, the oscillations slow down and the nonlinear term is an advantage for atom-field entanglement. These results can be used in quantum communication and quantum information.

  19. Investigating quantitation of phosphorylation using MALDI-TOF mass spectrometry.

    PubMed

    Parker, Laurie; Engel-Hall, Aaron; Drew, Kevin; Steinhardt, George; Helseth, Donald L; Jabon, David; McMurry, Timothy; Angulo, David S; Kron, Stephen J

    2008-04-01

    Despite advances in methods and instrumentation for analysis of phosphopeptides using mass spectrometry, it is still difficult to quantify the extent of phosphorylation of a substrate because of physiochemical differences between unphosphorylated and phosphorylated peptides. Here we report experiments to investigate those differences using MALDI-TOF mass spectrometry for a set of synthetic peptides by creating calibration curves of known input ratios of peptides/phosphopeptides and analyzing their resulting signal intensity ratios. These calibration curves reveal subtleties in sequence-dependent differences for relative desorption/ionization efficiencies that cannot be seen from single-point calibrations. We found that the behaviors were reproducible with a variability of 5-10% for observed phosphopeptide signal. Although these data allow us to begin addressing the issues related to modeling these properties and predicting relative signal strengths for other peptide sequences, it is clear that this behavior is highly complex and needs to be further explored.
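
    A calibration curve of this kind maps known input phosphopeptide fractions to observed signal-intensity ratios and is then inverted to correct unknown samples. A sketch with entirely hypothetical numbers (the suppression pattern is illustrative, not the paper's data):

```python
import numpy as np

# Illustrative calibration points: known input phosphopeptide fractions versus
# the observed MALDI signal-intensity ratios (phospho signal is suppressed).
input_fraction = np.array([0.10, 0.25, 0.50, 0.75, 0.90])
observed_ratio = np.array([0.06, 0.17, 0.39, 0.66, 0.85])

# Least-squares calibration line through the measured points
slope, intercept = np.polyfit(input_fraction, observed_ratio, 1)

def true_fraction(observed):
    """Invert the calibration curve to estimate the actual phosphorylation level."""
    return (observed - intercept) / slope
```

    Because the phosphopeptide signal is suppressed, the corrected estimate for a sample sits above the raw observed ratio, which is exactly the bias a single-point calibration would miss.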

  20. Investigating quantitation of phosphorylation using MALDI-TOF mass spectrometry

    PubMed Central

    Parker, Laurie; Engel-Hall, Aaron; Drew, Kevin; Steinhardt, George; Helseth, Donald L.; Jabon, David; McMurry, Timothy; Angulo, David S.; Kron, Stephen J.

    2010-01-01

    Despite advances in methods and instrumentation for analysis of phosphopeptides using mass spectrometry, it is still difficult to quantify the extent of phosphorylation of a substrate due to physiochemical differences between unphosphorylated and phosphorylated peptides. Here we report experiments to investigate those differences using MALDI-TOF mass spectrometry for a set of synthetic peptides by creating calibration curves of known input ratios of peptides/phosphopeptides and analyzing their resulting signal intensity ratios. These calibration curves reveal subtleties in sequence-dependent differences for relative desorption/ionization efficiencies that cannot be seen from single-point calibrations. We found that the behaviors were reproducible with a variability of 5–10% for observed phosphopeptide signal. Although these data allow us to begin addressing the issues related to modeling these properties and predicting relative signal strengths for other peptide sequences, it is clear this behavior is highly complex and needs to be further explored. PMID:18064576

  1. Nonlinear vibration of an axially loaded beam carrying rigid bodies

    NASA Astrophysics Data System (ADS)

    Barry, O.

    2016-12-01

    This paper investigates the nonlinear vibration, due to mid-plane stretching, of an axially loaded simply supported beam carrying multiple rigid masses. Explicit expressions and closed-form solutions for both the linear and nonlinear analyses of the present vibration problem are presented for the first time. The validity of the analytical model is demonstrated using finite element analysis and via comparison with results in the literature. Parametric studies are conducted to examine how the nonlinear frequency and the frequency response curve are affected by tension, rotational inertia, and the number of intermediate rigid bodies.

  2. Obtaining continuous BrAC/BAC estimates in the field: A hybrid system integrating transdermal alcohol biosensor, Intellidrink smartphone app, and BrAC Estimator software tools.

    PubMed

    Luczak, Susan E; Hawkins, Ashley L; Dai, Zheng; Wichmann, Raphael; Wang, Chunming; Rosen, I Gary

    2018-08-01

    Biosensors have been developed to measure transdermal alcohol concentration (TAC), but converting TAC into interpretable indices of blood/breath alcohol concentration (BAC/BrAC) is difficult because of variations that occur in TAC across individuals, drinking episodes, and devices. We have developed mathematical models and the BrAC Estimator software for calibrating and inverting TAC into quantifiable BrAC estimates (eBrAC). The calibration protocol to determine the individualized parameters for a specific individual wearing a specific device requires a drinking session in which BrAC and TAC measurements are obtained simultaneously. This calibration protocol was originally conducted in the laboratory, with breath analyzers used to produce the BrAC data. Here we develop and test an alternative calibration protocol using drinking diary data collected in the field with the smartphone app Intellidrink to produce the BrAC calibration data. We compared BrAC Estimator software results for 11 drinking episodes collected by an expert user when using Intellidrink versus breath analyzer measurements as BrAC calibration data. Inversion phase results indicated that the Intellidrink calibration protocol produced eBrAC curves similar to those of the breath analyzer calibration protocol, capturing peak eBrAC to within 0.0003%, time of peak eBrAC to within 18 min, and area under the eBrAC curve to within 0.025% alcohol-hours. This study provides evidence that drinking diary data can be used in place of breath analyzer data in the BrAC Estimator software calibration procedure, which can reduce participant and researcher burden and expand the potential software user pool beyond researchers studying participants who can drink in the laboratory.

  3. Crash prediction modeling for curved segments of rural two-lane two-way highways in Utah.

    DOT National Transportation Integrated Search

    2015-10-01

This report contains the results of the development of crash prediction models for curved segments of rural two-lane two-way highways in the state of Utah. The modeling effort included the calibration of the predictive model found in the Highway ...
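The calibration step referred to (from the Highway Safety Manual predictive method) reduces, at its core, to a single scale factor that matches total predicted crashes to total observed crashes. A minimal sketch, with illustrative crash counts rather than data from the report:

```python
# Sketch of the Highway Safety Manual (HSM) calibration-factor idea: scale
# the model's predicted crash frequencies so their total matches locally
# observed crashes. All numbers below are illustrative.

def hsm_calibration_factor(observed, predicted):
    """C = (sum of observed crashes) / (sum of model-predicted crashes)."""
    return sum(observed) / sum(predicted)

observed = [4, 2, 7, 3, 5]              # crashes observed on sample curved segments
predicted = [3.1, 2.5, 5.0, 2.9, 4.5]   # uncalibrated HSM predictions

C = hsm_calibration_factor(observed, predicted)
calibrated = [C * p for p in predicted]  # locally calibrated predictions
print(round(C, 3))
```

The factor C is then applied to every segment-level prediction from the base model.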

  4. Uncertainty Analysis of Inertial Model Attitude Sensor Calibration and Application with a Recommended New Calibration Method

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.

  5. Symmetries for Galileons and DBI scalars on curved space

    DOE PAGES

    Goon, Garrett; Hinterbichler, Kurt; Trodden, Mark

    2011-07-08

    We introduced a general class of four-dimensional effective field theories which include curved space Galileons and DBI theories possessing nonlinear shift-like symmetries. These effective theories arise from purely gravitational actions and may prove relevant to the cosmology of both the early and late universe.

  6. A non-linear steady state characteristic performance curve for medium temperature solar energy collectors

    NASA Astrophysics Data System (ADS)

    Eames, P. C.; Norton, B.

    A numerical simulation model was employed to investigate the effects of ambient temperature and insolation on the efficiency of compound parabolic concentrating solar energy collectors. The limitations of presently-used collector performance characterization curves were investigated and a new approach proposed.
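A minimal sketch of what a non-linear steady-state characteristic curve of this kind looks like, using the commonly used second-order efficiency form eta = eta0 - a1*(Tm - Ta)/G - a2*(Tm - Ta)^2/G; the coefficients below are illustrative, not values from the paper:

```python
# Collector efficiency with a linear and a second-order loss term in the
# temperature difference between collector (Tm) and ambient (Ta), for
# irradiance G. Coefficients eta0, a1, a2 are illustrative only.

def efficiency(tm, ta, g, eta0=0.75, a1=3.5, a2=0.015):
    dt = tm - ta
    return eta0 - a1 * dt / g - a2 * dt ** 2 / g

# Efficiency falls non-linearly as the collector runs hotter than ambient:
for dt in (0, 20, 40, 60):
    print(dt, round(efficiency(25.0 + dt, 25.0, 800.0), 3))
```

The second-order term is what a single straight-line characterization misses at higher operating temperatures.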

  7. An innovative method for coordinate measuring machine one-dimensional self-calibration with simplified experimental process.

    PubMed

    Fang, Cheng; Butler, David Lee

    2013-05-01

In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration that relies heavily on a high precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact which is fabricated with commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. Mathematically, the number of samples can be minimized by eliminating the redundant equations among those configured from the experimental data array. The section lengths of the artefact are measured at arranged positions, with which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented by either measuring the total length of the artefact with a higher-precision CMM or calibrating the single point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the measurement uncertainty can be reduced to 50% of its original value.
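A minimal sketch of the error-compensation idea: errors determined at a few arranged positions are interpolated into a continuous compensation curve. The positions and error values below are hypothetical, and plain linear interpolation stands in for the spline interpolation used in the paper to keep the sketch dependency-light:

```python
import numpy as np

# Scale errors (um) determined from the equation set at arranged positions
# (mm) along one axis -- all values are illustrative.
positions = np.array([0.0, 100.0, 200.0, 300.0, 400.0])   # mm
errors = np.array([0.0, 1.2, 0.8, -0.5, 0.3])             # um

def compensate(reading_mm):
    """Subtract the interpolated scale error at this position (um -> mm)."""
    return reading_mm - np.interp(reading_mm, positions, errors) * 1e-3

print(compensate(150.0))  # reading corrected by the interpolated error
```

In practice a spline gives a smoother compensation curve between the calibrated positions, but the lookup-and-subtract structure is the same.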

  8. Design of multiplex calibrant plasmids, their use in GMO detection and the limit of their applicability for quantitative purposes owing to competition effects.

    PubMed

    Debode, Frédéric; Marien, Aline; Janssen, Eric; Berben, Gilbert

    2010-03-01

    Five double-target multiplex plasmids to be used as calibrants for GMO quantification were constructed. They were composed of two modified targets associated in tandem in the same plasmid: (1) a part of the soybean lectin gene and (2) a part of the transgenic construction of the GTS40-3-2 event. Modifications were performed in such a way that each target could be amplified with the same primers as those for the original target from which they were derived but such that each was specifically detected with an appropriate probe. Sequence modifications were done to keep the parameters of the new target as similar as possible to those of its original sequence. The plasmids were designed to be used either in separate reactions or in multiplex reactions. Evidence is given that with each of the five different plasmids used in separate wells as a calibrant for a different copy number, a calibration curve can be built. When the targets were amplified together (in multiplex) and at different concentrations inside the same well, the calibration curves showed that there was a competition effect between the targets and this limits the range of copy numbers for calibration over a maximum of 2 orders of magnitude. Another possible application of multiplex plasmids is discussed.
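The per-well calibration curve described can be sketched as a standard quantification-cycle (Cq) vs. log10(copy number) regression; all numbers below are illustrative, not data from the paper:

```python
import numpy as np

# Real-time PCR calibration curve from calibrants at known copy numbers:
# Cq is, ideally, linear in log10(copies). Toy values only.
copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
cq = np.array([33.1, 29.8, 26.4, 23.1, 19.8])   # measured quantification cycles

slope, intercept = np.polyfit(np.log10(copies), cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0          # amplification efficiency

def copies_from_cq(c):
    """Invert the calibration curve for an unknown sample."""
    return 10 ** ((c - intercept) / slope)

print(round(slope, 2), round(efficiency, 3))
```

The competition effect reported in the abstract shows up as a departure from this single straight line when targets are amplified together at very different concentrations.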

  9. Practical application of electromyogram radiotelemetry: the suitability of applying laboratory-acquired calibration data to field data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geist, David R.; Brown, Richard S.; Lepla, Ken

One of the practical problems with quantifying the amount of energy used by fish implanted with electromyogram (EMG) radio transmitters is that the signals emitted by the transmitter provide only a relative index of activity unless they are calibrated to the swimming speed of the fish. Ideally calibration would be conducted for each fish before it is released, but this is often not possible and calibration curves derived from more than one fish are used to interpret EMG signals from individuals which have not been calibrated. We tested the validity of this approach by comparing EMG data within three groups of three wild juvenile white sturgeon Acipenser transmontanus implanted with the same EMG radio transmitter. We also tested an additional six fish which were implanted with separate EMG transmitters. Within each group, a single EMG radio transmitter usually did not produce similar results in different fish. Grouping EMG signals among fish produced less accurate results than having individual EMG-swim speed relationships for each fish. It is unknown whether these differences were a result of different swimming performances among individual fish or inconsistencies in the placement or function of the EMG transmitters. In either case, our results suggest that caution should be used when applying calibration curves from one group of fish to another group of uncalibrated fish.

  10. An update on 'dose calibrator' settings for nuclides used in nuclear medicine.

    PubMed

    Bergeron, Denis E; Cessna, Jeffrey T

    2018-06-01

    Most clinical measurements of radioactivity, whether for therapeutic or imaging nuclides, rely on commercial re-entrant ionization chambers ('dose calibrators'). The National Institute of Standards and Technology (NIST) maintains a battery of representative calibrators and works to link calibration settings ('dial settings') to primary radioactivity standards. Here, we provide a summary of NIST-determined dial settings for 22 radionuclides. We collected previously published dial settings and determined some new ones using either the calibration curve method or the dialing-in approach. The dial settings with their uncertainties are collected in a comprehensive table. In general, current manufacturer-provided calibration settings give activities that agree with National Institute of Standards and Technology standards to within a few percent.

  11. The use of megavoltage CT (MVCT) images for dose recomputations

    NASA Astrophysics Data System (ADS)

    Langen, K. M.; Meeks, S. L.; Poole, D. O.; Wagner, T. H.; Willoughby, T. R.; Kupelian, P. A.; Ruchala, K. J.; Haimerl, J.; Olivera, G. H.

    2005-09-01

    Megavoltage CT (MVCT) images of patients are acquired daily on a helical tomotherapy unit (TomoTherapy, Inc., Madison, WI). While these images are used primarily for patient alignment, they can also be used to recalculate the treatment plan for the patient anatomy of the day. The use of MVCT images for dose computations requires a reliable CT number to electron density calibration curve. In this work, we tested the stability of the MVCT numbers by determining the variation of this calibration with spatial arrangement of the phantom, time and MVCT acquisition parameters. The two calibration curves that represent the largest variations were applied to six clinical MVCT images for recalculations to test for dosimetric uncertainties. Among the six cases tested, the largest difference in any of the dosimetric endpoints was 3.1% but more typically the dosimetric endpoints varied by less than 2%. Using an average CT to electron density calibration and a thorax phantom, a series of end-to-end tests were run. Using a rigid phantom, recalculated dose volume histograms (DVHs) were compared with plan DVHs. Using a deformed phantom, recalculated point dose variations were compared with measurements. The MVCT field of view is limited and the image space outside this field of view can be filled in with information from the planning kVCT. This merging technique was tested for a rigid phantom. Finally, the influence of the MVCT slice thickness on the dose recalculation was investigated. The dosimetric differences observed in all phantom tests were within the range of dosimetric uncertainties observed due to variations in the calibration curve. The use of MVCT images allows the assessment of daily dose distributions with an accuracy that is similar to that of the initial kVCT dose calculation.
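A minimal sketch of the CT number to electron density calibration curve such dose recomputation relies on: a piecewise-linear lookup sampled from phantom inserts. The node values below are illustrative, not the MVCT calibration measured in the paper:

```python
import numpy as np

# Piecewise-linear CT number -> relative electron density conversion,
# sampled from phantom inserts of known density. Illustrative nodes only.
ct_numbers = np.array([-1000.0, 0.0, 60.0, 1000.0, 3000.0])  # MVCT numbers (HU-like)
rel_e_density = np.array([0.0, 1.0, 1.07, 1.5, 2.5])         # relative to water

def electron_density(ct):
    """Interpolate the calibration curve at an arbitrary CT number."""
    return np.interp(ct, ct_numbers, rel_e_density)

print(electron_density(0.0), electron_density(-500.0))
```

The stability tests in the paper amount to asking how much these node values drift with phantom arrangement, time, and acquisition parameters, and how much that drift changes recomputed dose.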

  12. Development and validation of the new ICNARC model for prediction of acute hospital mortality in adult critical care.

    PubMed

    Ferrando-Vivas, Paloma; Jones, Andrew; Rowan, Kathryn M; Harrison, David A

    2017-04-01

To develop and validate an improved risk model to predict acute hospital mortality for admissions to adult critical care units in the UK. 155,239 admissions to 232 adult critical care units in England, Wales and Northern Ireland between January and December 2012 were used to develop a risk model from a set of 38 candidate predictors. The model was validated using 90,017 admissions between January and September 2013. The final model incorporated 15 physiological predictors (modelled with continuous nonlinear models), age, dependency prior to hospital admission, chronic liver disease, metastatic disease, haematological malignancy, CPR prior to admission, location prior to admission/urgency of admission, primary reason for admission and interaction terms. The model was well calibrated and outperformed the current ICNARC model on measures of discrimination (area under the receiver operating characteristic curve 0.885 versus 0.869) and model fit (Brier score 0.108 versus 0.115). On average, the new model reclassified patients into more appropriate risk categories (net reclassification improvement 19.9; P<0.0001). The model performed well across patient subgroups and in specialist critical care units. The risk model developed in this study showed excellent discrimination and calibration when validated on a different time period and across different types of critical care unit. This in turn allows improved accuracy of comparisons between UK critical care providers. Copyright © 2016. Published by Elsevier Inc.
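The two headline metrics quoted (the ROC area under the curve for discrimination and the Brier score for overall fit) can be computed from scratch; a toy sketch with made-up predicted risks and outcomes:

```python
# ROC AUC (Mann-Whitney form) and Brier score on toy data.

def auc(risks, outcomes):
    """Probability a random event case is ranked above a random non-event
    case (ties count half) -- the Mann-Whitney form of the ROC AUC."""
    pos = [r for r, y in zip(risks, outcomes) if y == 1]
    neg = [r for r, y in zip(risks, outcomes) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def brier(risks, outcomes):
    """Mean squared difference between predicted risk and observed outcome."""
    return sum((r - y) ** 2 for r, y in zip(risks, outcomes)) / len(risks)

risks = [0.1, 0.4, 0.35, 0.8, 0.9, 0.2]   # predicted mortality risks (toy)
outcomes = [0, 0, 1, 1, 1, 0]             # observed outcomes (toy)
print(round(auc(risks, outcomes), 3), round(brier(risks, outcomes), 3))
```

Higher AUC means better ranking of deaths above survivors; lower Brier score means predictions sit closer to outcomes overall.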

  13. Investigation of the UK37' vs. SST relationship for Atlantic Ocean suspended particulate alkenones: An alternative regression model and discussion of possible sampling bias

    NASA Astrophysics Data System (ADS)

    Gould, Jessica; Kienast, Markus; Dowd, Michael

    2017-05-01

Alkenone unsaturation, expressed as the UK37' index, is closely related to growth temperature of prymnesiophytes, thus providing a reliable proxy to infer past sea surface temperatures (SSTs). Here we address two lingering uncertainties related to this SST proxy. First, calibration models developed for core-top sediments and those developed for surface suspended particulate organic material (SPOM) show systematic offsets, raising concerns regarding the transfer of the primary signal into the sedimentary record. Second, questions remain regarding changes in slope of the UK37' vs. growth temperature relationship at the temperature extremes. Based on (re)analysis of 31 new and 394 previously published SPOM UK37' data from the Atlantic Ocean, a new regression model to relate UK37' to SST is introduced; the Richards curve (Richards, 1959). This non-linear regression model provides a robust calibration of the UK37' vs. SST relationship for Atlantic SPOM samples and uniquely accounts for both the fact that the UK37' index is a proportion, and so must lie between 0 and 1, as well as for the observed reduction in slope at the warm and cold ends of the temperature range. As with prior fits of SPOM UK37' vs. SST, the Richards model is offset from traditional regression models of sedimentary UK37' vs. SST. We posit that (some of) this offset can be attributed to the seasonally and depth biased sampling of SPOM material.
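The Richards (generalized logistic) curve proposed as the regression model can be sketched as follows; the parameter values are illustrative, not the fitted Atlantic calibration:

```python
import numpy as np

# Richards curve: a bounded, asymmetric sigmoid. It keeps UK'37 in (0, 1)
# and flattens at both temperature extremes, which is exactly the behavior
# the abstract highlights. Parameters k, t0, nu are illustrative only.

def richards(sst, k=0.15, t0=15.0, nu=1.0):
    """UK'37 as a Richards function of SST (deg C)."""
    return (1.0 + np.exp(-k * (sst - t0))) ** (-1.0 / nu)

sst = np.array([-2.0, 10.0, 15.0, 25.0, 35.0])
uk37 = richards(sst)
print(np.round(uk37, 3))
```

The shape parameter nu controls the asymmetry between the warm-end and cold-end flattening, which an ordinary linear or logistic fit cannot capture.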

  14. X-ray plane-wave diffraction effects in a crystal with third-order nonlinearity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balyan, M. K., E-mail: mbalyan@ysu.am

The two-wave dynamical diffraction in the Laue geometry has been theoretically considered for a plane X-ray wave in a crystal with a third-order nonlinear response to the external field. An analytical solution to the problem stated is found for certain diffraction conditions. A nonlinear pendulum effect is analyzed. The nonlinear extinction length is found to depend on the incident-wave intensity. A pendulum effect of a new type is revealed: the intensities of the transmitted and diffracted waves periodically depend on the incident-wave intensity at a fixed crystal thickness. The rocking curves and Borrmann nonlinear effect are numerically calculated.

  15. A scattering methodology for droplet sizing of e-cigarette aerosols.

    PubMed

    Pratte, Pascal; Cosandey, Stéphane; Goujon-Ginglinger, Catherine

    2016-10-01

Knowledge of the droplet size distribution of inhalable aerosols is important to predict aerosol deposition yield at various respiratory tract locations in human. Optical methodologies are usually preferred over the multi-stage cascade impactor for high-throughput measurements of aerosol particle/droplet size distributions. The objective was to evaluate the Laser Aerosol Spectrometer technology, based on a Polystyrene Sphere Latex (PSL) calibration curve, applied for the experimental determination of droplet size distributions in the diameter range typical of commercial e-cigarette aerosols (147-1361 nm). This calibration procedure was tested for a TSI Laser Aerosol Spectrometer (LAS) operating at a wavelength of 633 nm and assessed against model di-ethyl-hexyl-sebacat (DEHS) droplets and e-cigarette aerosols. The PSL size response was measured, and intra- and between-day standard deviations calculated. DEHS droplet sizes were underestimated by 15-20% by the LAS when the PSL calibration curve was used; however, the intra- and between-day relative standard deviations were < 3%. This bias is attributed to the fact that the index of refraction of the PSL calibration particles differs from that of the test aerosols. This 15-20% does not include the droplet evaporation component, which may reduce droplet size before a measurement is performed. Aerosol concentration was measured accurately with a maximum uncertainty of 20%. Count median diameters and mass median aerodynamic diameters of selected e-cigarette aerosols ranged from 130-191 nm to 225-293 nm, respectively, similar to published values. The LAS instrument can be used to measure e-cigarette aerosol droplet size distributions with a bias underestimating the expected value by 15-20% when using a precise PSL calibration curve. Controlled variability of DEHS size measurements can be achieved with the LAS system; however, this method can only be applied to test aerosols having a refractive index close to that of the PSL particles used for calibration.

  16. A fast combination calibration of foreground and background for pipelined ADCs

    NASA Astrophysics Data System (ADS)

    Kexu, Sun; Lenian, He

    2012-06-01

This paper describes a fast digital calibration scheme for pipelined analog-to-digital converters (ADCs). The proposed method corrects the nonlinearity caused by finite opamp gain and capacitor mismatch in multiplying digital-to-analog converters (MDACs). The considered calibration technique combines the advantages of both foreground and background calibration schemes. In this combination calibration algorithm, a novel parallel background calibration with signal-shifted correlation is proposed, and its calibration cycle is very short. The details of this technique are described using the example of a 14-bit 100 Msample/s pipelined ADC. The high convergence speed of this background calibration is achieved by three means. First, a modified 1.5-bit stage is proposed in order to allow the injection of a large pseudo-random dithering without missing codes. Second, before correlating the signal, it is shifted according to the input signal so that the correlation error converges quickly. Finally, the front pipeline stages are calibrated simultaneously rather than stage by stage to reduce the calibration tracking constants. Simulation results confirm that the combination calibration has a fast startup process and a short background calibration cycle of 2 × 2^21 conversions.
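The correlation-based background calibration rests on a standard idea: a known zero-mean pseudo-random dither injected into a stage rides through the actual (error-prone) interstage gain, so correlating the backend output with the dither sequence recovers that gain. A heavily simplified toy sketch (the 1.5-bit stage, signal shifting, and parallel-stage details of the paper are omitted; all values are illustrative):

```python
import numpy as np

# Estimate an unknown interstage gain by correlating the backend output
# with an injected pseudo-random +/-1 dither. Because the input signal is
# uncorrelated with the dither, E[y*d]/A converges to the true gain.

rng = np.random.default_rng(0)
n = 500_000
g_true = 1.96                        # actual interstage gain (ideal would be 2.0)
dither_amp = 0.25                    # known injected dither amplitude (toy value)

x = rng.uniform(-1.0, 1.0, n)        # stage input, uncorrelated with dither
d = rng.choice([-1.0, 1.0], n)       # pseudo-random dither sequence
y = g_true * (x + dither_amp * d)    # what the backend digitizes

g_est = np.mean(y * d) / dither_amp  # correlation-based gain estimate
correction = 2.0 / g_est             # digital gain correction toward ideal
print(round(g_est, 2))
```

The paper's contributions (signal shifting before correlation, simultaneous calibration of the front stages) are all aimed at making this averaging converge in far fewer samples.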

  17. Solid laboratory calibration of a nonimaging spectroradiometer.

    PubMed

    Schaepman, M E; Dangel, S

    2000-07-20

    Field-based nonimaging spectroradiometers are often used in vicarious calibration experiments for airborne or spaceborne imaging spectrometers. The calibration uncertainties associated with these ground measurements contribute substantially to the overall modeling error in radiance- or reflectance-based vicarious calibration experiments. Because of limitations in the radiometric stability of compact field spectroradiometers, vicarious calibration experiments are based primarily on reflectance measurements rather than on radiance measurements. To characterize the overall uncertainty of radiance-based approaches and assess the sources of uncertainty, we carried out a full laboratory calibration. This laboratory calibration of a nonimaging spectroradiometer is based on a measurement plan targeted at achieving a

  18. Effective Desynchronization by Nonlinear Delayed Feedback

    NASA Astrophysics Data System (ADS)

    Popovych, Oleksandr V.; Hauptmann, Christian; Tass, Peter A.

    2005-04-01

    We show that nonlinear delayed feedback opens up novel means for the control of synchronization. In particular, we propose a demand-controlled method for powerful desynchronization, which does not require any time-consuming calibration. Our technique distinguishes itself by its robustness against variations of system parameters, even in strongly coupled ensembles of oscillators. We suggest our method for mild and effective deep brain stimulation in neurological diseases characterized by pathological cerebral synchronization.

  19. Resolving model parameter values from carbon and nitrogen stock measurements in a wide range of tropical mature forests using nonlinear inversion and regression trees

    Treesearch

Shuguang Liu; Pamela Anderson; Guoyi Zhou; Boone Kauffman; Flint Hughes; David Schimel; Vicente Watson; Joseph Tosi

    2008-01-01

    Objectively assessing the performance of a model and deriving model parameter values from observations are critical and challenging in landscape to regional modeling. In this paper, we applied a nonlinear inversion technique to calibrate the ecosystem model CENTURY against carbon (C) and nitrogen (N) stock measurements collected from 39 mature tropical forest sites in...

  20. Measurement and models of bent KAP(001) crystal integrated reflectivity and resolution (invited)

    NASA Astrophysics Data System (ADS)

    Loisel, G. P.; Wu, M.; Stolte, W.; Kruschwitz, C.; Lake, P.; Dunham, G. S.; Bailey, J. E.; Rochau, G. A.

    2016-11-01

The Advanced Light Source beamline-9.3.1 x-rays are used to calibrate the rocking curve of bent potassium acid phthalate (KAP) crystals in the 2.3-4.5 keV photon-energy range. Crystals are bent on a cylindrically convex substrate with a radius of curvature ranging from 2 to 9 in. and also including the flat case to observe the effect of bending on the KAP spectrometric properties. As the bending radius increases, the crystal reflectivity converges to the mosaic crystal response. The X-ray Oriented Programs (XOP) multi-lamellar model of bent crystals is used to model the rocking curve of these crystals and the calibration data confirm that a single model is adequate to reproduce simultaneously all measured integrated reflectivities and rocking-curve FWHM for multiple radii of curvature in both 1st and 2nd order of diffraction.

  1. The Sloan Digital Sky Survey-II: Photometry and Supernova Ia Light Curves from the 2005 Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Holtzman, Jon A.; Marriner, John

    2010-08-26

We present ugriz light curves for 146 spectroscopically confirmed or spectroscopically probable Type Ia supernovae from the 2005 season of the SDSS-II Supernova survey. The light curves have been constructed using a photometric technique that we call scene modeling, which is described in detail here; the major feature is that supernova brightnesses are extracted from a stack of images without spatial resampling or convolution of the image data. This procedure produces accurate photometry along with accurate estimates of the statistical uncertainty, and can be used to derive photometry taken with multiple telescopes. We discuss various tests of this technique that demonstrate its capabilities. We also describe the methodology used for the calibration of the photometry, and present calibrated magnitudes and fluxes for all of the spectroscopic SNe Ia from the 2005 season.

  2. Measurement and models of bent KAP(001) crystal integrated reflectivity and resolution (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loisel, G. P., E-mail: gploise@sandia.gov; Wu, M.; Lake, P.

    2016-11-15

The Advanced Light Source beamline-9.3.1 x-rays are used to calibrate the rocking curve of bent potassium acid phthalate (KAP) crystals in the 2.3-4.5 keV photon-energy range. Crystals are bent on a cylindrically convex substrate with a radius of curvature ranging from 2 to 9 in. and also including the flat case to observe the effect of bending on the KAP spectrometric properties. As the bending radius increases, the crystal reflectivity converges to the mosaic crystal response. The X-ray Oriented Programs (XOP) multi-lamellar model of bent crystals is used to model the rocking curve of these crystals and the calibration data confirm that a single model is adequate to reproduce simultaneously all measured integrated reflectivities and rocking-curve FWHM for multiple radii of curvature in both 1st and 2nd order of diffraction.

  3. Use of laser ablation-inductively coupled plasma-time of flight-mass spectrometry to identify the elemental composition of vanilla and determine the geographic origin by discriminant function analysis.

    PubMed

    Hondrogiannis, Ellen M; Ehrlinger, Erin; Poplaski, Alyssa; Lisle, Meredith

    2013-11-27

A total of 11 elements found in 25 vanilla samples from Uganda, Madagascar, Indonesia, and Papua New Guinea were measured by laser ablation-inductively coupled plasma-time-of-flight-mass spectrometry (LA-ICP-TOF-MS) for the purpose of collecting data that could be used to discriminate among the origins. Pellets of the samples were prepared, and elemental concentrations were obtained on the basis of external calibration curves created using five National Institute of Standards and Technology (NIST) standards and one Chinese standard with (13)C internal standardization. These curves were validated using NIST 1573a (tomato leaves) as a check standard. Discriminant analysis was used to successfully classify the vanilla samples by their origin. Our method illustrates the feasibility of using LA-ICP-TOF-MS with an external calibration curve for high-throughput screening analysis of spices.

  4. Assessing and calibrating the ATR-FTIR approach as a carbonate rock characterization tool

    NASA Astrophysics Data System (ADS)

    Henry, Delano G.; Watson, Jonathan S.; John, Cédric M.

    2017-01-01

ATR-FTIR (attenuated total reflectance Fourier transform infrared) spectroscopy can be used as a rapid and economical tool for qualitative identification of carbonates, calcium sulphates, oxides and silicates, as well as for quantitatively estimating the concentration of minerals. Over 200 powdered samples with known concentrations of two, three, four and five phase mixtures were made, and a suite of calibration curves that can be used to quantify the minerals was derived. The calibration curves in this study have an R2 that ranges from 0.93-0.99, a RMSE (root mean square error) of 1-5 wt.% and a maximum error of 3-10 wt.%. The calibration curves were used on 35 geological samples that have previously been studied using XRD (X-ray diffraction). The identification of the minerals using ATR-FTIR is comparable with XRD, and the quantitative results have a RMSD (root mean square deviation) of 14% and 12% for calcite and dolomite respectively when compared to XRD results. ATR-FTIR is a rapid technique (identification and quantification take < 5 min) that involves virtually no cost if the machine is available. It is a common tool in most analytical laboratories, but it also has the potential to be deployed on a rig for real-time data acquisition of the mineralogy of cores and rock chips at the surface, as it requires no special sample preparation and offers rapid data collection and easy analysis.
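The quality figures quoted for the calibration curves (R2, RMSE, maximum error) follow directly from known vs. predicted concentrations; a toy sketch with made-up mixture data:

```python
import numpy as np

# Calibration-curve quality metrics from known mixture concentrations and
# the values the curve predicts for them. All data are illustrative.
known = np.array([10.0, 25.0, 40.0, 60.0, 80.0])       # wt.% calcite in mixtures
predicted = np.array([11.5, 23.9, 41.2, 58.0, 81.1])   # from the calibration curve

resid = predicted - known
rmse = np.sqrt(np.mean(resid ** 2))                    # root mean square error
max_err = np.max(np.abs(resid))                        # worst-case error
r2 = 1.0 - np.sum(resid ** 2) / np.sum((known - known.mean()) ** 2)

print(round(float(rmse), 2), float(max_err), round(float(r2), 4))
```

RMSE summarizes typical error, the maximum error bounds the worst sample, and R2 measures how much of the concentration variance the curve explains.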

  5. Risk scores for outcome in bacterial meningitis: Systematic review and external validation study.

    PubMed

    Bijlsma, Merijn W; Brouwer, Matthijs C; Bossuyt, Patrick M; Heymans, Martijn W; van der Ende, Arie; Tanck, Michael W T; van de Beek, Diederik

    2016-11-01

    To perform an external validation study of risk scores, identified through a systematic review, predicting outcome in community-acquired bacterial meningitis. MEDLINE and EMBASE were searched for articles published between January 1960 and August 2014. Performance was evaluated in 2108 episodes of adult community-acquired bacterial meningitis from two nationwide prospective cohort studies by the area under the receiver operating characteristic curve (AUC), the calibration curve, calibration slope or Hosmer-Lemeshow test, and the distribution of calculated risks. Nine risk scores were identified predicting death, neurological deficit or death, or unfavorable outcome at discharge in bacterial meningitis, pneumococcal meningitis and invasive meningococcal disease. Most studies had shortcomings in design, analyses, and reporting. Evaluation showed AUCs of 0.59 (0.57-0.61) and 0.74 (0.71-0.76) in bacterial meningitis, 0.67 (0.64-0.70) in pneumococcal meningitis, and 0.81 (0.73-0.90), 0.82 (0.74-0.91), 0.84 (0.75-0.93), 0.84 (0.76-0.93), 0.85 (0.75-0.95), and 0.90 (0.83-0.98) in meningococcal meningitis. Calibration curves showed adequate agreement between predicted and observed outcomes for four scores, but statistical tests indicated poor calibration of all risk scores. One score could be recommended for the interpretation and design of bacterial meningitis studies. None of the existing scores performed well enough to recommend routine use in individual patient management. Copyright © 2016 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
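A sketch of the Hosmer-Lemeshow-style calibration test mentioned in the abstract: group episodes by predicted risk, then compare observed with expected event counts in each group (toy data and only four groups, for brevity):

```python
import numpy as np

# Hosmer-Lemeshow-type chi-square statistic: sum over risk groups of
# (observed - expected)^2 / variance, using the binomial variance
# approximation n*p*(1-p) per group. Toy data only.

def hosmer_lemeshow(pred, obs, groups=4):
    order = np.argsort(pred)                 # sort episodes by predicted risk
    chi2 = 0.0
    for idx in np.array_split(order, groups):
        p, y = pred[idx], obs[idx]
        expected = p.sum()                   # expected events in the group
        observed = y.sum()
        denom = expected * (1.0 - p.mean())  # ~ n * p_bar * (1 - p_bar)
        chi2 += (observed - expected) ** 2 / denom
    return chi2

pred = np.array([0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.6, 0.8])  # predicted risks
obs = np.array([0, 0, 0, 1, 0, 1, 1, 1])                     # observed outcomes
print(round(float(hosmer_lemeshow(pred, obs)), 2))
```

A large statistic relative to its chi-square reference distribution flags poor calibration, which is the pattern the review reports for all nine scores.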

  6. Flow of nanofluid by nonlinear stretching velocity

    NASA Astrophysics Data System (ADS)

    Hayat, Tasawar; Rashid, Madiha; Alsaedi, Ahmed; Ahmad, Bashir

    2018-03-01

The main objective in this article is to model and analyze the nanofluid flow induced by a curved surface with nonlinear stretching velocity. The nanofluid comprises water and silver. The governing problem is solved by using the homotopy analysis method (HAM). The induced magnetic field for low magnetic Reynolds number is not entertained. Convergent series solutions for the velocity and skin friction coefficient are successfully developed. Pressure in the boundary layer flow by a curved stretching surface cannot be ignored. It is found that the magnitude of the pressure distribution increases with the power-law index parameter, and that the magnitude of the pressure field reduces with increasing radius of curvature while the opposite trend is observed for the velocity.

  7. Multi-q pattern classification of polarization curves

    NASA Astrophysics Data System (ADS)

    Fabbri, Ricardo; Bastos, Ivan N.; Neto, Francisco D. Moura; Lopes, Francisco J. P.; Gonçalves, Wesley N.; Bruno, Odemir M.

    2014-02-01

Several experimental measurements are expressed in the form of one-dimensional profiles, for which there is a scarcity of methodologies able to classify the pertinence of a given result to a specific group. The polarization curves that evaluate the corrosion kinetics of electrodes in corrosive media are applications where the behavior is chiefly analyzed from profiles. Polarization curves are indeed a classic method to determine the global kinetics of metallic electrodes, but the strongly nonlinear responses of different metals and alloys can overlap, making discrimination a challenging problem. Moreover, even finding a typical curve from replicated tests requires subjective judgment. In this paper, we used the so-called multi-q approach based on the Tsallis statistics in a classification engine to separate the multiple polarization curve profiles of two stainless steels. We collected 48 experimental polarization curves in an aqueous chloride medium of two stainless steel types, with different resistance against localized corrosion. Multi-q pattern analysis was then carried out on a wide potential range, from cathodic up to anodic regions. Excellent classification rates of 90%, 80%, and 83% were obtained for the low (cathodic), high (anodic), and combined potential ranges, respectively, using only 2% of the original profile data. These results show the potential of the proposed approach towards efficient, robust, systematic and automatic classification of highly nonlinear profile curves.

  8. Preliminary analysis of the effects of non-linear creep and flange contact forces on truck performance in curves

    DOT National Transportation Integrated Search

    1975-05-31

    Prediction of wheel displacements and wheel-rail forces is a prerequisite to the evaluation of the curving performance of rail vehicles. This information provides part of the basis for the rational design of wheels and suspension components, for esta...

  9. Self-Action of Second Harmonic Generation and Longitudinal Temperature Gradient in Nonlinear-Optical Crystals

    NASA Astrophysics Data System (ADS)

    Baranov, A. I.; Konyashkin, A. V.; Ryabushkin, O. A.

    2015-09-01

    A model of second harmonic generation with thermal self-action was developed. Temperature phase-matching curves for second harmonic generation were measured and calculated for a periodically poled lithium niobate crystal. Both experimental and calculated data show an asymmetrical shift of the temperature tuning curves with pump power.

  10. Nonlinear Acoustic Metamaterials for Sound Attenuation Applications

    DTIC Science & Technology

    2011-03-16

    elastic guides, which are discretized into Bernoulli-Euler beam elements [29]. We first describe the equations of particles' motion in the DE model...to 613 N in the curved one [see Fig. 15(b)]. Overall, the area under the force-time curve, which corresponds to the amount of momentum transferred

  11. Nonlinear dynamics near the stability margin in rotating pipe flow

    NASA Technical Reports Server (NTRS)

    Yang, Z.; Leibovich, S.

    1991-01-01

    The nonlinear evolution of marginally unstable wave packets in rotating pipe flow is studied. These flows depend on two control parameters, which may be taken to be the axial Reynolds number R and a Rossby number, q. Marginal stability is realized on a curve in the (R, q)-plane, and the entire marginal stability boundary is explored. As the flow passes through any point on the marginal stability curve, it undergoes a supercritical Hopf bifurcation and the steady base flow is replaced by a traveling wave. The envelope of the wave system is governed by a complex Ginzburg-Landau equation. The Ginzburg-Landau equation admits Stokes waves, which correspond to standing modulations of the linear traveling wavetrain, as well as traveling wave modulations of the linear wavetrain. Bands of wavenumbers are identified in which the nonlinear modulated waves are subject to a sideband instability.
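    The envelope equation mentioned above takes, in a standard normalization, the complex Ginzburg-Landau form (the generic one-dimensional form is shown here; the specific coefficient values Yang and Leibovich derive along the marginal-stability curve are not reproduced):

    ```latex
    \frac{\partial A}{\partial \tau} = \sigma A
      + (1 + i c_1)\,\frac{\partial^2 A}{\partial \xi^2}
      - (1 + i c_3)\,|A|^2 A
    ```

    In this sign convention, the Stokes-wave (plane-wave) solutions are modulationally unstable to sidebands when the Benjamin-Feir-Newell criterion \(1 + c_1 c_3 < 0\) holds, which is one standard mechanism behind sideband instability bands of the kind identified in the abstract.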

  12. The investigation of social networks based on multi-component random graphs

    NASA Astrophysics Data System (ADS)

    Zadorozhnyi, V. N.; Yudin, E. B.

    2018-01-01

    Methods for calibrating non-homogeneous random graphs are developed for social network simulation. The graphs are calibrated by the degree distributions of the vertices and the edges. The mathematical foundation of the methods is formed by the theory of random graphs with the nonlinear preferential attachment rule and the theory of Erdős-Rényi random graphs. Well-calibrated network graph models, and computer experiments with these models, would help developers (owners) of networks to predict their development correctly and to choose effective strategies for controlling network projects.
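    The nonlinear preferential attachment rule underlying such graph models can be sketched minimally: each new vertex attaches to existing vertices with probability proportional to degree raised to a power alpha (alpha = 1 recovers the classic Barabási-Albert rule). The function name, seed-graph choice, and parameters are illustrative assumptions, not the paper's calibration procedure.

    ```python
    import random

    def nonlinear_pa_graph(n, m=2, alpha=1.0, seed=0):
        """Grow a graph of n vertices by nonlinear preferential attachment.

        Each new vertex attaches to m distinct existing vertices chosen with
        probability proportional to degree**alpha.
        """
        rng = random.Random(seed)
        # start from a small complete seed graph on m+1 vertices
        edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
        degree = {v: m for v in range(m + 1)}
        for v in range(m + 1, n):
            nodes = list(degree)
            weights = [degree[u] ** alpha for u in nodes]
            targets = set()
            # sample with replacement, deduplicate until m distinct targets
            while len(targets) < m:
                targets.add(rng.choices(nodes, weights=weights)[0])
            for u in targets:
                edges.append((u, v))
                degree[u] += 1
            degree[v] = m
        return edges, degree
    ```

    Calibration would then amount to tuning alpha (and any mixture of such rules) until the simulated vertex and edge degree distributions match the observed network.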

  13. Optimization, evaluation and calibration of a cross-strip DOI detector

    NASA Astrophysics Data System (ADS)

    Schmidt, F. P.; Kolb, A.; Pichler, B. J.

    2018-02-01

    This study presents the evaluation of a SiPM detector with depth-of-interaction (DOI) capability via a dual-sided readout that is suitable for high-resolution positron emission tomography and magnetic resonance (PET/MR) imaging. Two different 12 × 12 pixelated LSO scintillator arrays with a crystal pitch of 1.60 mm are examined. One array is 20 mm long, with crystals separated by the specular Vikuiti enhanced specular reflector (ESR); the other is 18 mm long, with crystals separated by the diffuse reflector Lumirror E60 (E60). An improvement in energy resolution from 22.6% to 15.5% for the scintillator array with the E60 reflector is achieved by taking a nonlinear light collection correction into account. The results are FWHM energy resolutions of 14.0% and 15.5%, average FWHM DOI resolutions of 2.96 mm and 1.83 mm, and FWHM coincidence resolving times of 1.09 ns and 1.48 ns for the scintillator arrays with the ESR and E60 reflectors, respectively. The measured DOI signal ratios need to be assigned to an interaction depth inside the scintillator crystal. A linear and a nonlinear method, using the intrinsic scintillator radiation from lutetium, are implemented as easy-to-apply calibrations and are compared to the conventional method, which exploits a setup with an externally collimated radiation beam. The deviation between the DOI functions of the linear or nonlinear method and the conventional method is determined. The resulting average differences in DOI position are 0.67 mm and 0.45 mm for the nonlinear calibration method for the scintillator arrays with the ESR and E60 reflectors, respectively, whereas the linear calibration method yields 0.51 mm and 0.32 mm; the linear method, owing to its simplicity, is also applicable in assembled detector systems.

  14. Optimization, evaluation and calibration of a cross-strip DOI detector.

    PubMed

    Schmidt, F P; Kolb, A; Pichler, B J

    2018-02-20

    This study presents the evaluation of a SiPM detector with depth-of-interaction (DOI) capability via a dual-sided readout that is suitable for high-resolution positron emission tomography and magnetic resonance (PET/MR) imaging. Two different 12 × 12 pixelated LSO scintillator arrays with a crystal pitch of 1.60 mm are examined. One array is 20 mm long, with crystals separated by the specular Vikuiti enhanced specular reflector (ESR); the other is 18 mm long, with crystals separated by the diffuse reflector Lumirror E60 (E60). An improvement in energy resolution from 22.6% to 15.5% for the scintillator array with the E60 reflector is achieved by taking a nonlinear light collection correction into account. The results are FWHM energy resolutions of 14.0% and 15.5%, average FWHM DOI resolutions of 2.96 mm and 1.83 mm, and FWHM coincidence resolving times of 1.09 ns and 1.48 ns for the scintillator arrays with the ESR and E60 reflectors, respectively. The measured DOI signal ratios need to be assigned to an interaction depth inside the scintillator crystal. A linear and a nonlinear method, using the intrinsic scintillator radiation from lutetium, are implemented as easy-to-apply calibrations and are compared to the conventional method, which exploits a setup with an externally collimated radiation beam. The deviation between the DOI functions of the linear or nonlinear method and the conventional method is determined. The resulting average differences in DOI position are 0.67 mm and 0.45 mm for the nonlinear calibration method for the scintillator arrays with the ESR and E60 reflectors, respectively, whereas the linear calibration method yields 0.51 mm and 0.32 mm; the linear method, owing to its simplicity, is also applicable in assembled detector systems.
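    One plausible reading of the two ratio-to-depth calibrations can be sketched as follows. The linear method is taken as a straight stretch between the observed ratio extremes; the nonlinear method maps ratios through the empirical CDF of intrinsic (Lu-176 background) events, assuming those events illuminate the crystal uniformly in depth. Both the uniform-depth assumption and the function names are illustrative, not the authors' exact procedure.

    ```python
    import numpy as np

    def linear_doi_calibration(ratios, crystal_length):
        """Map DOI signal ratios to depth by a linear stretch between
        the observed ratio extremes (a simple 'linear method')."""
        r = np.asarray(ratios, dtype=float)
        return (r - r.min()) / (r.max() - r.min()) * crystal_length

    def nonlinear_doi_calibration(ratios, crystal_length):
        """Map DOI signal ratios to depth through their empirical CDF,
        assuming the intrinsic background events are uniform in depth
        (a simple 'nonlinear method')."""
        r = np.asarray(ratios, dtype=float)
        rank = np.argsort(np.argsort(r))       # rank of each ratio
        cdf = (rank + 0.5) / len(r)            # empirical CDF value
        return cdf * crystal_length
    ```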

  15. A multimethod Global Sensitivity Analysis to aid the calibration of geomechanical models via time-lapse seismic data

    NASA Astrophysics Data System (ADS)

    Price, D. C.; Angus, D. A.; Garcia, A.; Fisher, Q. J.; Parsons, S.; Kato, J.

    2018-03-01

    Time-lapse seismic attributes are used extensively in the history matching of production simulator models. However, although proven to contain information regarding production-induced stress change, they are typically only loosely (i.e. qualitatively) used to calibrate geomechanical models. In this study we conduct a multimethod Global Sensitivity Analysis (GSA) to assess the feasibility of, and to aid, the quantitative calibration of geomechanical models via near-offset time-lapse seismic data, specifically the calibration of the mechanical properties of the overburden. Via the GSA, we analyse the near-offset overburden seismic traveltimes from over 4000 perturbations of a Finite Element (FE) geomechanical model of a typical High Pressure High Temperature (HPHT) reservoir in the North Sea. We find that, out of an initially large set of material properties, the near-offset overburden traveltimes are primarily affected by Young's modulus and the effective stress (i.e. Biot) coefficient. The unexpected significance of the Biot coefficient highlights the importance of modelling fluid flow and pore pressure outside of the reservoir. The FE model is complex and highly nonlinear: multiple combinations of model parameters can yield equally plausible model realizations. Consequently, numerical calibration via a large number of random model perturbations is unfeasible. However, the significant differences in traveltime results suggest that more sophisticated calibration methods could potentially be feasible for finding numerous suitable solutions. The results of the time-varying GSA demonstrate how acquiring multiple vintages of time-lapse seismic data can be advantageous. However, they also suggest that significant overburden near-offset seismic time-shifts, useful for model calibration, may take up to 3 years after the start of production to manifest.
Due to the nonlinearity of the model behaviour, similar uncertainty in the reservoir mechanical properties appears to influence overburden traveltime to a much greater extent. Therefore, reservoir properties must be known to a suitable degree of accuracy before the calibration of the overburden can be considered.
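    A crude variance-based first-order sensitivity estimate, in the spirit of the GSA described above, can be computed from Monte Carlo samples by binning each input and comparing the variance of the conditional output means to the total variance. This is a generic stand-in sketch, not the paper's multimethod GSA; the function name and bin count are assumptions.

    ```python
    import numpy as np

    def first_order_sensitivity(samples, outputs, bins=20):
        """Crude first-order index Var(E[Y|X_i]) / Var(Y) per input column,
        estimated by binning Monte Carlo samples on quantile edges."""
        X = np.asarray(samples, dtype=float)
        y = np.asarray(outputs, dtype=float)
        total_var = y.var()
        indices = []
        for i in range(X.shape[1]):
            edges = np.quantile(X[:, i], np.linspace(0, 1, bins + 1))
            which = np.clip(
                np.searchsorted(edges, X[:, i], side="right") - 1, 0, bins - 1)
            cond_means = np.array(
                [y[which == b].mean() for b in range(bins) if np.any(which == b)])
            indices.append(cond_means.var() / total_var)
        return np.array(indices)
    ```

    Applied to the 4000-odd model perturbations, such indices would rank material properties (e.g. Young's modulus versus the Biot coefficient) by their influence on overburden traveltime.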

  16. When high working memory capacity is and is not beneficial for predicting nonlinear processes.

    PubMed

    Fischer, Helen; Holt, Daniel V

    2017-04-01

    Predicting the development of dynamic processes is vital in many areas of life. Previous findings are inconclusive as to whether higher working memory capacity (WMC) is always associated with using more accurate prediction strategies, or whether higher WMC can also be associated with using overly complex strategies that do not improve accuracy. In this study, participants predicted a range of systematically varied nonlinear processes based on exponential functions where prediction accuracy could or could not be enhanced using well-calibrated rules. Results indicate that higher WMC participants seem to rely more on well-calibrated strategies, leading to more accurate predictions for processes with highly nonlinear trajectories in the prediction region. Predictions of lower WMC participants, in contrast, point toward an increased use of simple exemplar-based prediction strategies, which perform just as well as more complex strategies when the prediction region is approximately linear. These results imply that with respect to predicting dynamic processes, working memory capacity limits are not generally a strength or a weakness, but that this depends on the process to be predicted.

  17. Social Contagion, Adolescent Sexual Behavior, and Pregnancy: A Nonlinear Dynamic EMOSA Model.

    ERIC Educational Resources Information Center

    Rodgers, Joseph Lee; Rowe, David C.; Buster, Maury

    1998-01-01

    Expands an existing nonlinear dynamic epidemic model of onset of social activities (EMOSA), motivated by social contagion theory, to quantify the likelihood of pregnancy for adolescent girls of different sexuality statuses. Compares five sexuality/pregnancy models to explain variance in national prevalence curves. Finds that adolescent girls have…

  18. Modeling Nonlinear Change via Latent Change and Latent Acceleration Frameworks: Examining Velocity and Acceleration of Growth Trajectories

    ERIC Educational Resources Information Center

    Grimm, Kevin; Zhang, Zhiyong; Hamagami, Fumiaki; Mazzocco, Michele

    2013-01-01

    We propose the use of the latent change and latent acceleration frameworks for modeling nonlinear growth in structural equation models. Moving to these frameworks allows for the direct identification of "rates of change" and "acceleration" in latent growth curves--information available indirectly through traditional growth…

  19. A nonlinear beam model to describe the postbuckling of wide neo-Hookean beams

    NASA Astrophysics Data System (ADS)

    Lubbers, Luuk A.; van Hecke, Martin; Coulais, Corentin

    2017-09-01

    Wide beams can exhibit subcritical buckling, i.e. the slope of the force-displacement curve can become negative in the postbuckling regime. In this paper, we capture this intriguing behaviour by constructing a 1D nonlinear beam model, where the central ingredient is the nonlinearity in the stress-strain relation of the beam's constitutive material. First, we present experimental and numerical evidence of a transition to subcritical buckling for wide neo-Hookean hyperelastic beams, when their width-to-length ratio exceeds a critical value of 12%. Second, we construct an effective 1D energy density by combining the Mindlin-Reissner kinematics with a nonlinearity in the stress-strain relation. Finally, we establish and solve the governing beam equations to analytically determine the slope of the force-displacement curve in the postbuckling regime. We find, without any adjustable parameters, excellent agreement between the 1D theory, experiments and simulations. Our work extends the understanding of the postbuckling of structures made of wide elastic beams and opens up avenues for the reverse-engineering of instabilities in soft and metamaterials.

  20. Nonlinear flutter analysis of composite panels

    NASA Astrophysics Data System (ADS)

    An, Xiaomin; Wang, Yan

    2018-05-01

    Nonlinear panel flutter is an interesting subject in fluid-structure interaction. In this paper, the nonlinear flutter characteristics of curved composite panels are studied in very low supersonic flow. The composite panel with geometric nonlinearity is modeled by a nonlinear finite element method, and the responses are computed by the nonlinear Newmark algorithm. An unsteady aerodynamic solver, which contains a flux splitting scheme and dual time-marching, is employed to calculate the unsteady pressure due to the motion of the panel. Based on a half-step staggered coupled solution, the aeroelastic responses of two composite panels with different radii, R = 5 and R = 2.5, are computed and compared with each other at different dynamic pressures for Ma = 1.05. The nonlinear flutter characteristics, comprising limit-cycle oscillations and chaos, are analyzed and discussed.

  1. Application of nonlinear-regression methods to a ground-water flow model of the Albuquerque Basin, New Mexico

    USGS Publications Warehouse

    Tiedeman, C.R.; Kernodle, J.M.; McAda, D.P.

    1998-01-01

    This report documents the application of nonlinear-regression methods to a numerical model of ground-water flow in the Albuquerque Basin, New Mexico. In the Albuquerque Basin, ground water is the primary source for most water uses. Ground-water withdrawal has steadily increased since the 1940's, resulting in large declines in water levels in the Albuquerque area. A ground-water flow model was developed in 1994 and revised and updated in 1995 for the purpose of managing basin ground-water resources. In the work presented here, nonlinear-regression methods were applied to a modified version of the previous flow model. Goals of this work were to use regression methods to calibrate the model with each of six different configurations of the basin subsurface and to assess and compare optimal parameter estimates, model fit, and model error among the resulting calibrations. The Albuquerque Basin is one in a series of north trending structural basins within the Rio Grande Rift, a region of Cenozoic crustal extension. Mountains, uplifts, and fault zones bound the basin, and rock units within the basin include pre-Santa Fe Group deposits, Tertiary Santa Fe Group basin fill, and post-Santa Fe Group volcanics and sediments. The Santa Fe Group is greater than 14,000 feet (ft) thick in the central part of the basin. During deposition of the Santa Fe Group, crustal extension resulted in development of north trending normal faults with vertical displacements of as much as 30,000 ft. Ground-water flow in the Albuquerque Basin occurs primarily in the Santa Fe Group and post-Santa Fe Group deposits. Water flows between the ground-water system and surface-water bodies in the inner valley of the basin, where the Rio Grande, a network of interconnected canals and drains, and Cochiti Reservoir are located. 
Recharge to the ground-water flow system occurs as infiltration of precipitation along mountain fronts and infiltration of stream water along tributaries to the Rio Grande; subsurface flow from adjacent regions; irrigation and septic field seepage; and leakage through the Rio Grande, canal, and Cochiti Reservoir beds. Ground water is discharged from the basin by withdrawal; evapotranspiration; subsurface flow; and flow to the Rio Grande, canals, and drains. The transient, three-dimensional numerical model of ground-water flow to which nonlinear-regression methods were applied simulates flow in the Albuquerque Basin from 1900 to March 1995. Six different basin subsurface configurations are considered in the model. These configurations are designed to test the effects of (1) varying the simulated basin thickness, (2) including a hypothesized hydrogeologic unit with large hydraulic conductivity in the western part of the basin (the west basin high-K zone), and (3) substantially lowering the simulated hydraulic conductivity of a fault in the western part of the basin (the low-K fault zone). The model with each of the subsurface configurations was calibrated using a nonlinear least-squares regression technique. The calibration data set includes 802 hydraulic-head measurements that provide broad spatial and temporal coverage of basin conditions, and one measurement of net flow from the Rio Grande and drains to the ground-water system in the Albuquerque area. Data are weighted on the basis of estimates of the standard deviations of measurement errors. The 10 to 12 parameters to which the calibration data as a whole are generally most sensitive were estimated by nonlinear regression, whereas the remaining model parameter values were specified. 
Results of model calibration indicate that the optimal parameter estimates as a whole are most reasonable in calibrations of the model with configurations 3 (which contains 1,600-ft-thick basin deposits and the west basin high-K zone), 4 (which contains 5,000-ft-thick basin de

  2. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagating individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine the precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. A treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated; often, calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for the inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified, and the effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
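    For the simple linear univariate case, the propagation of calibration-parameter covariance into the uncertainty of a predicted value can be sketched as follows. This is a generic first-order sketch, not the paper's full bias/precision treatment; the function name is an assumption.

    ```python
    import numpy as np

    def calibrate_with_uncertainty(x_std, y_meas):
        """Least-squares linear calibration returning coefficients, their
        covariance, and a predictor that propagates that covariance to a
        1-sigma uncertainty on each predicted value."""
        coeffs, cov = np.polyfit(x_std, y_meas, deg=1, cov=True)

        def predict(x):
            y = np.polyval(coeffs, x)
            J = np.array([x, 1.0])              # d(y)/d(slope), d(y)/d(intercept)
            sigma = float(np.sqrt(J @ cov @ J)) # first-order propagation
            return y, sigma

        return coeffs, predict
    ```

    A measurement is then reported as the predicted value together with its propagated standard uncertainty, which can be scaled to a 95 percent confidence interval.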

  3. Effect of laser irradiance and wavelength on the analysis of gold- and silver-bearing minerals with laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Díaz, Daniel; Molina, Alejandro; Hahn, David

    2018-07-01

    The influence of laser irradiance and wavelength on the analysis of gold and silver in ore and surrogate samples with laser-induced breakdown spectroscopy (LIBS) was evaluated. Gold-doped mineral samples (surrogates) and ore samples containing naturally-occurring gold and silver were analyzed with LIBS using 1064 and 355 nm laser wavelengths at irradiances from 0.36 × 10⁹ to 19.9 × 10⁹ W/cm² and 0.97 × 10⁹ to 4.3 × 10⁹ W/cm², respectively. The LIBS net, background and signal-to-background signals were analyzed. For all irradiances, wavelengths, samples and analytes, the calibration curves behaved linearly for concentrations from 1 to 9 μg/g gold (surrogate samples) and 0.7 to 47.0 μg/g silver (ore samples). However, it was not possible to prepare calibration curves for gold-bearing ore samples (at any concentration) or for gold-doped surrogate samples with gold concentrations below 1 μg/g. Calibration curve parameters for gold-doped surrogate samples were statistically invariant at 1064 and 355 nm. In contrast, silver in the ore samples showed higher emission intensity at 1064 nm, but signal-to-background normalization reduced the effect of laser wavelength on the silver calibration plots. The gold-doped calibration curve metrics improved at higher laser irradiance, but this did not translate into lower limits of detection. While the coefficients of determination (R²) and limits of detection did not vary significantly with laser wavelength, the LIBS repeatability at 355 nm improved by up to 50% with respect to that at 1064 nm. Plasma diagnostics by the Boltzmann and Stark broadening methods showed that the plasma temperature and electron density did not follow a specific trend as the wavelength changed, for the delay and gate times used. 
This research presents supporting evidence that the LIBS discrete sampling features combined with the discrete and random distribution of gold in minerals hinder gold analysis by LIBS in ore samples; however, the use of higher laser irradiances at 1064 nm increased the probability of sampling and detecting naturally-occurring gold.

  4. Effects of uncertainties in hydrological modelling. A case study of a mountainous catchment in Southern Norway

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjørn; Steinsland, Ingelin; Johansen, Stian Solvang; Petersen-Øverleir, Asgeir; Kolberg, Sjur

    2016-05-01

    In this study, we explore the effect of uncertainty and poor observation quality on hydrological model calibration and predictions. The Osali catchment in Western Norway was selected as case study and an elevation distributed HBV-model was used. We systematically evaluated the effect of accounting for uncertainty in parameters, precipitation input, temperature input and streamflow observations. For precipitation and temperature we accounted for the interpolation uncertainty, and for streamflow we accounted for rating curve uncertainty. Further, the effects of poorer quality of precipitation input and streamflow observations were explored. Less information about precipitation was obtained by excluding the nearest precipitation station from the analysis, while reduced information about the streamflow was obtained by omitting the highest and lowest streamflow observations when estimating the rating curve. The results showed that including uncertainty in the precipitation and temperature inputs has a negligible effect on the posterior distribution of parameters and for the Nash-Sutcliffe (NS) efficiency for the predicted flows, while the reliability and the continuous rank probability score (CRPS) improves. Less information in precipitation input resulted in a shift in the water balance parameter Pcorr, a model producing smoother streamflow predictions, giving poorer NS and CRPS, but higher reliability. The effect of calibrating the hydrological model using streamflow observations based on different rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions, the best evaluation scores were not achieved for the rating curve used for calibration, but for rating curves giving smoother streamflow observations. Less information in streamflow influenced the water balance parameter Pcorr, and increased the spread in evaluation scores by giving both better and worse scores.

  5. 40 CFR Appendix A to Subpart I of... - Alternative Procedures for Measuring Point-of-Use Abatement Device Destruction or Removal Efficiency

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... native AOI concentration (ppm) of the effluent during stable conditions. (14) Post-test calibration. At... or removal efficiencies must be determined while etching a substrate (product, dummy, or test). For... curves for the subsequent destruction or removal efficiency tests. (8) Mass location calibration. A...

  6. Multivariate curve resolution-assisted determination of pseudoephedrine and methamphetamine by HPLC-DAD in water samples.

    PubMed

    Vosough, Maryam; Mohamedian, Hadi; Salemi, Amir; Baheri, Tahmineh

    2015-02-01

    In the present study, a simple strategy based on solid-phase extraction (SPE) with a cation exchange sorbent (Finisterre SCX), followed by fast high-performance liquid chromatography (HPLC) with diode array detection coupled with chemometric tools, is proposed for the determination of methamphetamine and pseudoephedrine in ground water and river water. First, the HPLC and SPE conditions were optimized and the analytical performance of the method was determined. In the case of ground water, determination of the analytes was successfully performed through univariate calibration curves. For the river water sample, multivariate curve resolution with alternating least squares was implemented, and the second-order advantage was achieved in samples containing uncalibrated interferences and uncorrected background signals. The calibration curves showed good linearity (r² > 0.994). The limits of detection for pseudoephedrine and methamphetamine were 0.06 and 0.08 μg/L, and the average recovery values were 104.7 and 102.3% in river water, respectively.
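    A univariate calibration curve with a limit-of-detection estimate of the kind quoted above can be sketched as follows. The 3.3·s/slope rule is the common ICH-style convention and is an assumption here, not necessarily the authors' exact LOD procedure; the function name is illustrative.

    ```python
    import numpy as np

    def lod_from_calibration(conc, signal):
        """Slope/intercept of a linear calibration curve and the limit of
        detection as 3.3 * s_residual / slope (ICH-style convention)."""
        c = np.asarray(conc, dtype=float)
        s = np.asarray(signal, dtype=float)
        slope, intercept = np.polyfit(c, s, 1)
        residuals = s - (slope * c + intercept)
        s_res = residuals.std(ddof=2)   # two fitted parameters
        return 3.3 * s_res / slope
    ```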

  7. Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting

    NASA Technical Reports Server (NTRS)

    Badavi, F. F.; Everhart, Joel L.

    1987-01-01

    This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine based on P. R. Bevington's description of the quadratic expansion of the chi-square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of chi-square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived and then solved by matrix algebra. A brief description of the code is presented, along with the limited number of changes required to customize the program for a particular task. To evaluate the performance of the method and the goodness of nonlinear curve fitting, two typical engineering problems are examined, and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.
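    The quadratic expansion of chi-square reduces each iteration to exactly the set of simultaneous linear equations mentioned above; a compact sketch in the Gauss-Newton form of Bevington's method (function names and the model in the usage example are illustrative):

    ```python
    import numpy as np

    def fit_quadratic_expansion(f, jac, a0, x, y, sigma, iters=20):
        """Nonlinear least squares via the quadratic expansion of
        chi-square: each step solves (J^T W J) da = J^T W r, where r are
        the weighted residuals, J the Jacobian of the model f, and
        W = diag(1/sigma^2)."""
        a = np.asarray(a0, dtype=float)
        w = 1.0 / np.asarray(sigma, dtype=float) ** 2
        for _ in range(iters):
            r = y - f(x, a)                 # residuals
            J = jac(x, a)                   # shape (npoints, nparams)
            A = J.T @ (w[:, None] * J)      # curvature ("alpha") matrix
            b = J.T @ (w * r)               # gradient ("beta") vector
            a = a + np.linalg.solve(A, b)
        return a
    ```

    For example, fitting the decay model a0·exp(a1·x) to data converges in a few iterations from a rough starting guess; parameter error estimates would follow from the inverse of the curvature matrix, as the report describes.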

  8. Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit

    NASA Astrophysics Data System (ADS)

    Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie

    2015-09-01

    Previous sensitivity analyses are insufficiently accurate and of limited reference value, because their mathematical models are relatively simple, changes in the load and in the initial displacement of the piston are ignored, and experimental verification is not conducted. In view of these deficiencies, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, the nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston, and friction nonlinearity. The transfer-function block diagram is built for the closed-loop position control of the hydraulic drive unit, along with the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. Using the structural parameters of the hydraulic drive unit, its working parameters, fluid transmission characteristics, and measured friction-velocity curves, a simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm, and 10 mm. The simulation results indicate that the developed nonlinear mathematical model is adequate, as shown by comparing the experimental and simulated step-response curves under different constant loads. The sensitivity-function time histories of seventeen parameters are then obtained from the state-vector time histories of the step response. The maximum displacement variation percentage and the sum of the absolute displacement variations over the sampling time are both taken as sensitivity indexes. These index values are calculated and shown in histograms under different working conditions, and their trends are analyzed. 
The sensitivity index values of four measurable parameters (supply pressure, proportional gain, initial position of the servo cylinder piston, and load force) are then verified experimentally on the hydraulic drive unit test platform; the experiments show that the sensitivity analysis results obtained through simulation approximate the test results. This research reveals the sensitivity characteristics of each parameter of the hydraulic drive unit; the main and secondary performance-affecting parameters are identified under different working conditions, providing a theoretical foundation for control compensation and structural optimization of the hydraulic drive unit.
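    The two sensitivity indexes defined above can be sketched generically for any black-box simulation by finite-difference perturbation of one parameter at a time. The `simulate` callable and the 1% perturbation size are illustrative assumptions, not the paper's sensitivity-equation derivation.

    ```python
    def sensitivity_indexes(simulate, params, delta=0.01):
        """For each parameter, return (max percentage change of the
        response, sum of absolute response changes) over the sampled
        time history, for a small relative perturbation delta.
        simulate(params) -> sequence of displacement samples."""
        base = simulate(params)
        out = {}
        for name, value in params.items():
            perturbed = dict(params, **{name: value * (1.0 + delta)})
            resp = simulate(perturbed)
            diffs = [abs(a - b) for a, b in zip(resp, base)]
            # percentage change relative to the (nonzero) base samples
            max_pct = max(d / abs(b) * 100.0
                          for d, b in zip(diffs, base) if b != 0)
            out[name] = (max_pct, sum(diffs))
        return out
    ```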

  9. Biological dosimetry of ionizing radiation: Evaluation of the dose with cytogenetic methodologies by the construction of calibration curves

    NASA Astrophysics Data System (ADS)

    Zafiropoulos, Demetre; Facco, E.; Sarchiapone, Lucia

    2016-09-01

    In case of a radiation accident, it is well known that in the absence of physical dosimetry biological dosimetry based on cytogenetic methods is a unique tool to estimate individual absorbed dose. Moreover, even when physical dosimetry indicates an overexposure, scoring chromosome aberrations (dicentrics and rings) in human peripheral blood lymphocytes (PBLs) at metaphase is presently the most widely used method to confirm dose assessment. The analysis of dicentrics and rings in PBLs after Giemsa staining of metaphase cells is considered the most valid assay for radiation injury. This work shows that applying the fluorescence in situ hybridization (FISH) technique, using telomeric/centromeric peptide nucleic acid (PNA) probes in metaphase chromosomes for radiation dosimetry, could become a fast scoring, reliable and precise method for biological dosimetry after accidental radiation exposures. In both in vitro methods described above, lymphocyte stimulation is needed, and this limits the application in radiation emergency medicine where speed is considered to be a high priority. Using premature chromosome condensation (PCC), irradiated human PBLs (non-stimulated) were fused with mitotic CHO cells, and the yield of excess PCC fragments in Giemsa stained cells was scored. To score dicentrics and rings under PCC conditions, the necessary centromere and telomere detection of the chromosomes was obtained using FISH and specific PNA probes. Of course, a prerequisite for dose assessment in all cases is a dose-effect calibration curve. This work illustrates the various methods used; dose response calibration curves, with 95% confidence limits used to estimate dose uncertainties, have been constructed for conventional metaphase analysis and FISH. We also compare the dose-response curve constructed after scoring of dicentrics and rings using PCC combined with FISH and PNA probes. 
Also reported are dose response curves showing scored dicentrics and rings per cell, combining PCC of lymphocytes and CHO cells with FISH using PNA probes after 10 h and 24 h after irradiation, and, finally, calibration data of excess PCC fragments (Giemsa) to be used if human blood is available immediately after irradiation or within 24 h.

  10. Dependence of magnetic permeability on residual stresses in alloyed steels

    NASA Astrophysics Data System (ADS)

    Hristoforou, E.; Ktena, A.; Vourna, P.; Argiris, K.

    2018-04-01

    A method for monitoring the residual stress distribution in steels has been developed based on non-destructive surface magnetic permeability measurements. In order to investigate the potential utilization of the magnetic method in evaluating residual stresses, the magnetic calibration curves of various grades of ferromagnetic alloyed steels (AISI 4140, TRIP and Duplex) were examined. The X-ray diffraction technique was used to determine surface residual stress values. The overall measurement results have shown that the residual stress determined by the magnetic method was in good agreement with the diffraction results. Further experimental investigations are required to validate the preliminary results and to verify the presence of a unique normalized magnetic stress calibration curve.

  11. Direct Estimate of Cocoa Powder Content in Cakes by Colorimetry and Photoacoustic Spectroscopy

    NASA Astrophysics Data System (ADS)

    Dóka, O.; Bicanic, D.; Kulcsár, R.

    2014-12-01

    Cocoa is a very important ingredient in the food industry and is largely consumed worldwide. In this investigation, colorimetry and photoacoustic spectroscopy (PAS) were used to directly assess the content of cocoa powder in cakes; both methods provided satisfactory results. The calibration curve was constructed using a series of home-made cakes containing varying amounts of cocoa powder. Then, at a later stage, the same calibration curve was used to quantify the cocoa content of several commercially available cakes. For the home-made cakes, the relationship between the PAS signal and the content of cocoa powder was linear, while a quadratic dependence was obtained for the colorimetric index (brightness) and the total color difference.

  12. Antenna Calibration and Measurement Equipment

    NASA Technical Reports Server (NTRS)

    Rochblatt, David J.; Cortes, Manuel Vazquez

    2012-01-01

    A document describes the Antenna Calibration & Measurement Equipment (ACME) system that will provide the Deep Space Network (DSN) with instrumentation enabling a trained RF engineer at each complex to perform antenna calibration measurements and to generate antenna calibration data. This data includes continuous-scan auto-bore-based data acquisition with all-sky data gathering in support of 4th-order pointing model generation requirements. Other data includes antenna subreflector focus, system noise temperature and tipping curves, antenna efficiency, system linearity reports, and instrument calibration. The ACME system design is based on the on-the-fly (OTF) mapping technique and architecture. ACME has contributed to the improved RF performance of the DSN by approximately a factor of two. It improved the pointing performance of the DSN antennas and the productivity of its personnel and calibration engineers.

  13. Optical Mass Displacement Tracking: A simplified field calibration method for the electro-mechanical seismometer.

    NASA Astrophysics Data System (ADS)

    Burk, D. R.; Mackey, K. G.; Hartse, H. E.

    2016-12-01

    We have developed a simplified field calibration method for use in seismic networks that still employ the classical electro-mechanical seismometer. Smaller networks may not always have the financial capability to purchase and operate modern, state-of-the-art equipment. Therefore, these networks generally operate a modern, low-cost digitizer that is paired to an existing electro-mechanical seismometer. These systems are typically poorly calibrated. The station calibration is difficult to estimate because coil loading, digitizer input impedance, and amplifier gain differences vary by station and digitizer model. It is therefore necessary to calibrate the station channel as a complete system, taking into account all components from instrument to amplifier to digitizer. Routine calibrations at the smaller networks are not always consistent, because existing calibration techniques require either specialized equipment or significant technical expertise. To improve station data quality at the small network, we developed a calibration method that utilizes open-source software and a commonly available laser position sensor. Using a signal generator and a small excitation coil, we force the mass of the instrument to oscillate at various frequencies across its operating range. We then compare the channel voltage output to the laser-measured mass displacement to determine the instrument voltage sensitivity at each frequency point. Using the standard equations of forced motion, a representation of the calibration curve as a function of voltage per unit of ground velocity is calculated. A computer algorithm optimizes the curve and then translates the instrument response into a Seismic Analysis Code (SAC) poles & zeros format. Results have been demonstrated to fall within a few percent of a standard laboratory calibration.
This method is an effective and affordable option for networks that employ electro-mechanical seismometers, and it is currently being deployed in regional networks throughout Russia and in Central Asia.
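    The voltage-to-ground-velocity conversion described above can be sketched numerically. This is a minimal illustration, not the authors' code: for sinusoidal excitation, velocity amplitude is 2*pi*f times the laser-measured displacement amplitude, and the sweep readings below are entirely hypothetical.

```python
import math

# Sensitivity (V·s/m) at each excitation frequency: channel voltage amplitude
# divided by ground-velocity amplitude, where for sinusoidal motion the
# velocity amplitude is 2*pi*f times the laser-measured displacement amplitude
def sensitivity(v_out, f_hz, x_m):
    return v_out / (2 * math.pi * f_hz * x_m)

# Hypothetical sweep readings: (frequency Hz, channel volts, displacement m)
sweep = [(0.5, 0.12, 2.0e-4), (1.0, 0.50, 2.0e-4), (5.0, 3.10, 2.0e-4)]
curve = [(f, round(sensitivity(v, f, x), 1)) for f, v, x in sweep]
print(curve)
```

Repeating this at many frequencies yields the point-by-point calibration curve that the optimization step then converts to a poles-and-zeros response.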

  14. A calibration hierarchy for risk models was defined: from utopia to empirical data.

    PubMed

    Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W

    2016-06-01

    Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, as it stimulates the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
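    The two weaker calibration levels can be sketched on simulated data. This is an illustrative assumption-laden check, not the authors' simulation: outcomes are drawn directly from the predicted risks so the model is well calibrated by construction, and the decile-based assessment of moderate calibration is a deliberately crude stand-in for flexible calibration curves.

```python
import numpy as np

# Simulated validation set in which the model is, by construction, well calibrated
rng = np.random.default_rng(0)
n = 50_000
p_pred = rng.uniform(0.05, 0.6, n)     # predicted risks
y = rng.binomial(1, p_pred)            # outcomes drawn from the predicted risks

# Mean calibration ("calibration-in-the-large"): average prediction vs. event rate
mean_cal = y.mean() - p_pred.mean()

# Moderate calibration, assessed crudely: within deciles of predicted risk,
# the observed event rate should match the mean predicted risk
edges = np.quantile(p_pred, np.linspace(0, 1, 11))
decile = np.clip(np.digitize(p_pred, edges) - 1, 0, 9)
gap = max(abs(y[decile == d].mean() - p_pred[decile == d].mean()) for d in range(10))
print(round(mean_cal, 4), round(gap, 4))
```

Both gaps come out near zero here; on a real external validation set, nonzero gaps would flag miscalibration at the corresponding level.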

  15. Auto-calibration of GF-1 WFV images using flat terrain

    NASA Astrophysics Data System (ADS)

    Zhang, Guo; Xu, Kai; Huang, Wenchao

    2017-12-01

    Four wide field view (WFV) cameras with 16-m multispectral medium resolution and a combined swath of 800 km are onboard the Gaofen-1 (GF-1) satellite, which can increase the revisit frequency to less than 4 days and enable large-scale land monitoring. The detection and elimination of WFV camera distortions is key for subsequent applications. Due to the wide swath of WFV images, geometric calibration using either conventional methods based on the ground control field (GCF) or GCF-independent methods is problematic. This is predominantly because current GCFs in China fail to cover a whole WFV image, and most GCF-independent methods are intended for close-range photogrammetry or computer vision. This study proposes an auto-calibration method using flat terrain to detect nonlinear distortions in GF-1 WFV images. First, a classic geometric calibration model is built for the GF-1 WFV camera, and at least two images with an overlap area covering flat terrain are collected; then the elevation residuals between the real elevation and that calculated by forward intersection are used to solve for the nonlinear distortion parameters in WFV images. Experiments demonstrate that the orientation accuracy of the proposed method, evaluated with GCF check points (CPs), is within 0.6 pixel, and the residual errors manifest as random errors. Validation using Google Earth CPs further proves the effectiveness of the auto-calibration, and the whole scene is free of visible distortion compared with results obtained without the calibration parameters. The orientation accuracies of the proposed method and the GCF method are also compared; the maximum difference is approximately 0.3 pixel, and the factors behind this discrepancy are analyzed. Generally, this method can effectively compensate for distortions in the GF-1 WFV camera.

  16. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
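    The key reduction in the abstract — once knots and data parameterization are fixed, spline fitting becomes a convex least-squares problem solvable by SVD — can be sketched as follows. The basis and knot placement here are simplified assumptions for illustration (a truncated-power basis with uniform knots), not the paper's firefly-optimized scheme.

```python
import numpy as np

# Noisy samples of a smooth curve
t = np.linspace(0, 1, 200)   # data parameterization, assumed fixed here
y = np.sin(2 * np.pi * t) + 0.05 * np.random.default_rng(1).normal(size=t.size)

# Fixed interior knots (in the paper these come from the earlier stages;
# here they are simply uniform)
knots = np.linspace(0.1, 0.9, 8)

# Cubic truncated-power basis: with knots and parameterization fixed, the fit is
# linear in the coefficients, i.e. a convex least-squares problem
B = np.column_stack([t**k for k in range(4)] +
                    [np.clip(t - k0, 0.0, None)**3 for k0 in knots])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)   # solved internally via SVD
rmse = np.sqrt(np.mean((B @ coef - y) ** 2))
print(round(rmse, 3))
```

The residual settles near the injected noise level, showing that all of the nonlinearity lives in the knot and parameter choices, which is exactly what the metaheuristic stage handles.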

  17. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380

  18. Enhanced nonlinear current-voltage behavior in Au nanoparticle dispersed CaCu3Ti4O12 composite films

    NASA Astrophysics Data System (ADS)

    Chen, Cong; Wang, Can; Ning, Tingyin; Lu, Heng; Zhou, Yueliang; Ming, Hai; Wang, Pei; Zhang, Dongxiang; Yang, Guozhen

    2011-10-01

    An enhanced nonlinear current-voltage behavior has been observed in Au nanoparticle dispersed CaCu3Ti4O12 (CCTO) composite films. The double Schottky barrier model is used to explain the enhanced nonlinearity in the I-V curves. According to the energy-band model and the fitting results, the nonlinearity in the Au:CCTO film is mainly governed by thermionic emission over the reverse-biased Schottky barrier. This result not only supports the double-Schottky-barrier mechanism in CCTO, but also indicates that the nonlinearity of the current-voltage behavior could be improved in nanometal composite films, which has great significance for resistance switching devices.

  19. Fourier imaging of non-linear structure formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk

    We perform a Fourier space decomposition of the dynamics of non-linear cosmological structure formation in ΛCDM models. From N-body simulations involving only cold dark matter we calculate 3-dimensional non-linear density, velocity divergence and vorticity Fourier realizations, and use these to calculate the fully non-linear mode coupling integrals in the corresponding fluid equations. Our approach allows for a reconstruction of the amount of mode coupling between any two wavenumbers as a function of redshift. With our Fourier decomposition method we identify the transfer of power from larger to smaller scales, the stable clustering regime, the scale where vorticity becomes important, and the suppression of the non-linear divergence power spectrum as compared to linear theory. Our results can be used to improve and calibrate semi-analytical structure formation models.

  20. Data-Driven Method to Estimate Nonlinear Chemical Equivalence.

    PubMed

    Mayo, Michael; Collier, Zachary A; Winton, Corey; Chappell, Mark A

    2015-01-01

    There is a great need to express the impacts of chemicals found in the environment in terms of effects from alternative chemicals of interest. Methods currently employed in fields such as life-cycle assessment, risk assessment, mixtures toxicology, and pharmacology rely mostly on heuristic arguments to justify the use of linear relationships in the construction of "equivalency factors," which aim to model these concentration-concentration correlations. However, the use of linear models, even at low concentrations, oversimplifies the nonlinear nature of the concentration-response curve, thereby introducing error into calculations involving these factors. We address this problem by reporting a method to determine a concentration-concentration relationship between two chemicals based on the full extent of experimentally derived concentration-response curves. Although this method can be easily generalized, we develop and illustrate it from the perspective of toxicology, in which we provide equations relating the sigmoid and non-monotone, or "biphasic," responses typical of the field. The resulting concentration-concentration relationships are manifestly nonlinear for nearly any chemical level, even at the very low concentrations common to environmental measurements. We demonstrate the method using real-world examples of toxicological data which may exhibit sigmoid and biphasic mortality curves. Finally, we use our models to calculate equivalency factors, and show that traditional results are recovered only when the concentration-response curves are "parallel," a condition that has been noted before but which we formalize here by providing mathematical conditions on the validity of this approach.
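    The core idea can be sketched under an assumed Hill (sigmoid) response model, which is one common sigmoid form rather than the paper's specific equations: equating the responses of two chemicals gives a closed-form, manifestly nonlinear concentration-concentration map that collapses to a constant equivalency factor only when the Hill slopes match ("parallel" curves). All parameter values below are hypothetical.

```python
import numpy as np

def hill(c, ec50, n):
    """Sigmoid (Hill) concentration-response curve."""
    return c**n / (c**n + ec50**n)

def equivalent_conc(c_a, ec50_a, n_a, ec50_b, n_b):
    """Concentration of chemical B giving the same response as c_a of chemical A,
    obtained by solving hill(c_b, ec50_b, n_b) == hill(c_a, ec50_a, n_a)."""
    return ec50_b * (c_a / ec50_a) ** (n_a / n_b)

c_a = np.array([0.01, 0.1, 1.0, 10.0])           # hypothetical exposure levels
c_b = equivalent_conc(c_a, ec50_a=1.0, n_a=1.5, ec50_b=3.0, n_b=0.8)

# The map is nonlinear: the ratio c_b / c_a is constant only when n_a == n_b
# ("parallel" curves), the regime where a single equivalency factor is valid
print(np.round(c_b / c_a, 3))
```

With unequal slopes the implied "equivalency factor" varies by orders of magnitude across the concentration range, which is the error a single linear factor sweeps under the rug.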

  1. Investigation of Heat Transfer in Straight and Curved Rectangular Ducts.

    DTIC Science & Technology

    1980-09-01

    theoretical explanation of the heat transfer effects required that all non-linear terms be retained in the flow equations. R. Kahawita and R...112, February 1370. 2'. Kahawita, R. and Meroney, R., "The Influence of Heating on the Stability of Laminar Boundary Layers Along Concave Curved

  2. On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction.

    PubMed

    Crop, F; Van Rompaye, B; Paelinck, L; Vakaet, L; Thierens, H; De Wagter, C

    2008-07-21

    The purpose of this study was twofold: to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiochromic (Gafchromic) film. Sometimes, an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, with the dose as a function of OD (inverse regression) or with OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because the heteroscedasticity of the data is not taken into account. This could lead to erroneous results originating from the calibration process itself and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method to create calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
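    A minimal numerical sketch of WLS inverse prediction follows. For illustration only, it assumes a linear dose-OD response and a noise level that grows with dose; real Gafchromic response is nonlinear, and the coefficients and noise model here are invented, not the study's.

```python
import numpy as np

rng = np.random.default_rng(2)
dose = np.repeat(np.linspace(0, 400, 9), 5)      # cGy, replicated calibration films
sigma = 0.002 + 0.00004 * dose                   # heteroscedastic OD noise (assumed)
od = 0.05 + 0.004 * dose + rng.normal(0, sigma)  # assumed linear dose->OD response

# WLS inverse prediction: fit OD as a function of dose with weights 1/sigma,
# then invert the fitted line to read dose off a measured OD
slope, intercept = np.polyfit(dose, od, 1, w=1 / sigma)
od_measured = 0.05 + 0.004 * 300                 # noiseless OD of a 300 cGy film
predicted_dose = (od_measured - intercept) / slope
print(round(predicted_dose, 1))
```

Fitting OD as a function of dose (the controlled variable) and weighting by the inverse noise is what distinguishes inverse prediction from the biased OLS inverse regression the study warns against.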

  3. [Evaluation of uncertainty for determination of tin and its compounds in air of workplace by flame atomic absorption spectrometry].

    PubMed

    Wei, Qiuning; Wei, Yuan; Liu, Fangfang; Ding, Yalei

    2015-10-01

    To investigate a method for evaluating the uncertainty of the determination of tin and its compounds in workplace air by flame atomic absorption spectrometry. The national occupational health standards GBZ/T160.28-2004 and JJF1059-1999 were used to build a mathematical model of the determination of tin and its compounds in workplace air and to calculate the components of uncertainty. In the determination of tin and its compounds in workplace air using flame atomic absorption spectrometry, the relative uncertainties for the concentration of the standard solution, the atomic absorption spectrophotometer, sample digestion, parallel determination, least-squares fitting of the calibration curve, and sample collection were 0.436%, 0.13%, 1.07%, 1.65%, 3.05%, and 2.89%, respectively. The combined uncertainty, expanded with a coverage factor of k = 2, was 9.3%. The concentration of tin in the test sample was 0.132 mg/m³, and the expanded uncertainty of the measurement was 0.012 mg/m³ (k=2). The dominant uncertainties come from the least-squares fitting of the calibration curve and from sample collection. Quality control should be improved in the processes of calibration curve fitting and sample collection.
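    The combination step can be reproduced from the reported components. This sketch assumes root-sum-of-squares combination of independent relative uncertainties followed by a coverage factor k = 2, which matches the reported 9.3% and 0.012 mg/m³:

```python
import math

# Relative standard uncertainty components reported in the abstract (%)
components = {
    "standard solution concentration": 0.436,
    "spectrophotometer": 0.13,
    "sample digestion": 1.07,
    "parallel determination": 1.65,
    "calibration-curve fitting": 3.05,
    "sample collection": 2.89,
}

u_c = math.sqrt(sum(u**2 for u in components.values()))  # combined, root-sum-of-squares
U = 2 * u_c                                              # expanded, coverage factor k = 2
print(round(u_c, 2), round(U, 1))

concentration = 0.132                                    # mg/m3
print(round(concentration * U / 100, 3))                 # expanded uncertainty, mg/m3
```

Note how the two largest components (calibration-curve fitting and sample collection) dominate the quadrature sum, which is why the abstract singles them out for quality control.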

  4. Spectro-photometric determinations of Mn, Fe and Cu in aluminum master alloys

    NASA Astrophysics Data System (ADS)

    Rehan; Naveed, A.; Shan, A.; Afzal, M.; Saleem, J.; Noshad, M. A.

    2016-08-01

    Highly reliable, fast and cost-effective spectrophotometric methods have been developed for the determination of Mn, Fe and Cu in aluminum master alloys, based on calibration curves prepared from laboratory standards. The calibration curves are designed to give maximum sensitivity and minimum instrumental error (Mn 1 mg/100 ml-2 mg/100 ml, Fe 0.01 mg/100 ml-0.2 mg/100 ml and Cu 2 mg/100 ml-10 mg/100 ml). The developed spectrophotometric methods produce accurate results when analyzing Mn, Fe and Cu in certified reference materials. In particular, these methods are suitable for all types of Al-Mn, Al-Fe and Al-Cu master alloys (5%, 10%, 50%, etc.). Moreover, the sampling practices suggested herein include a reasonable amount of analytical sample, which truly represents the whole lot of a particular master alloy. A successive dilution technique was utilized to meet the calibration curve range. Furthermore, the worked-out methods were also found suitable for the analysis of the said elements in ordinary aluminum alloys. However, it was observed that Cu showed considerable interference with Fe; the latter may not be accurately measured in the presence of Cu greater than 0.01%.

  5. Realization of the Gallium Triple Point at NMIJ/AIST

    NASA Astrophysics Data System (ADS)

    Nakano, T.; Tamura, O.; Sakurai, H.

    2008-02-01

    The triple point of gallium has been realized by a calorimetric method using capsule-type standard platinum resistance thermometers (CSPRTs) and a small glass cell containing about 97 mmol (6.8 g) of gallium with a nominal purity of 99.99999%. The melting curve shows a very flat and relatively linear dependence on 1/F in the region from 1/F = 1 to 1/F = 20, with a narrow width of the melting curve within 0.1 mK. Also, a large gallium triple-point cell was fabricated for the calibration of client-owned CSPRTs. The gallium triple-point cell consists of a PTFE crucible and a PTFE cap with a re-entrant well and a small vent. The PTFE cell contains 780 g of gallium from the same source as used for the small glass cell. The PTFE cell is completely covered by a stainless-steel jacket with a valve to enable evacuation of the cell. The melting curve of the large cell shows a flat plateau that remains within 0.03 mK over 10 days and that is reproducible within 0.05 mK over 8 months. The calibrated value of a CSPRT obtained using the large cell agrees with that obtained using the small glass cell within the uncertainties of the calibrations.

  6. IN-SITU IONIC CHEMICAL ANALYSIS OF FRESH WATER VIA A NOVEL COMBINED MULTI-SENSOR / SIGNAL PROCESSING ARCHITECTURE

    NASA Astrophysics Data System (ADS)

    Mueller, A. V.; Hemond, H.

    2009-12-01

    The capability for comprehensive, real-time, in-situ characterization of the chemical constituents of natural waters is a powerful tool for the advancement of the ecological and geochemical sciences, e.g. by facilitating rapid high-resolution adaptive sampling campaigns and avoiding the potential errors and high costs related to traditional grab sample collection, transportation and analysis. Portable field-ready instrumentation also promotes the goals of large-scale monitoring networks, such as CUAHSI and WATERS, without the financial and human resources overhead required for traditional sampling at this scale. Problems of environmental remediation and monitoring of industrial waste waters would additionally benefit from such instrumental capacity. In-situ measurement of all major ions contributing to the charge makeup of natural fresh water is thus pursued via a combined multi-sensor/multivariate signal processing architecture. The instrument is based primarily on commercial electrochemical sensors, e.g. ion selective electrodes (ISEs) and ion selective field-effect transistors (ISFETs), to promote low cost as well as easy maintenance and reproduction. The system employs a novel architecture of multivariate signal processing to extract accurate information from in-situ data streams via an "unmixing" process that accounts for sensor non-linearities at low concentrations, as well as sensor cross-reactivities. Conductivity, charge neutrality and temperature are applied as additional mathematical constraints on the chemical state of the system. Including such non-ionic information assists in obtaining accurate and useful calibrations even in the non-linear portion of the sensor response curves, and measurements can be made without the traditionally-required standard additions or ionic strength adjustment.
    Initial work demonstrates the effectiveness of this methodology at predicting inorganic cations (Na+, NH4+, H+, Ca2+, and K+) in a simplified system containing only a single anion (Cl-) in addition to hydroxide, thus allowing charge neutrality to be easily and explicitly invoked. Every probe is calibrated relative to each of the five cations present, and the resulting curves are used to create a representative environmental data set based on USGS data for New England waters. Signal-processing methodologies, specifically artificial neural networks (ANNs), are extended to use a feedback architecture based on conductivity measurements and charge-neutrality calculations. The algorithms are then tuned to optimize performance at predicting actual concentrations from these simulated signals. Results are compared to the use of the component probes as stand-alone sensors. Future extension of this instrument to multiple anions (including carbonate and bicarbonate, nitrate, and sulfate) will ultimately provide rapid, accurate field measurements of the entire charge balance of natural waters at high resolution, improving sampling abilities while reducing the costs and errors related to transport and analysis of grab samples.

  7. Exponential Decay Nonlinear Regression Analysis of Patient Survival Curves: Preliminary Assessment in Non-Small Cell Lung Cancer

    PubMed Central

    Stewart, David J.; Behrens, Carmen; Roth, Jack; Wistuba, Ignacio I.

    2010-01-01

    Background For processes that follow first order kinetics, exponential decay nonlinear regression analysis (EDNRA) may delineate curve characteristics and suggest processes affecting curve shape. We conducted a preliminary feasibility assessment of EDNRA of patient survival curves. Methods EDNRA was performed on Kaplan-Meier overall survival (OS) and time-to-relapse (TTR) curves for 323 patients with resected NSCLC and on OS and progression-free survival (PFS) curves from selected publications. Results and Conclusions In our resected patients, TTR curves were triphasic with a “cured” fraction of 60.7% (half-life [t1/2] >100,000 months), a rapidly-relapsing group (7.4%, t1/2=5.9 months) and a slowly-relapsing group (31.9%, t1/2=23.6 months). OS was uniphasic (t1/2=74.3 months), suggesting an impact of co-morbidities; hence, tumor molecular characteristics would more likely predict TTR than OS. Of 172 published curves analyzed, 72 (42%) were uniphasic, 92 (53%) were biphasic, 8 (5%) were triphasic. With first-line chemotherapy in advanced NSCLC, 87.5% of curves from 2-3 drug regimens were uniphasic vs only 20% of those with best supportive care or 1 drug (p<0.001). 54% of curves from 2-3 drug regimens had convex rapid-decay phases vs 0% with fewer agents (p<0.001). Curve convexities suggest that discontinuing chemotherapy after 3-6 cycles “synchronizes” patient progression and death. With postoperative adjuvant chemotherapy, the PFS rapid-decay phase accounted for a smaller proportion of the population than in controls (p=0.02) with no significant difference in rapid-decay t1/2, suggesting adjuvant chemotherapy may move a subpopulation of patients with sensitive tumors from the relapsing group to the cured group, with minimal impact on time to relapse for a larger group of patients with resistant tumors. 
In untreated patients, the proportion of patients in the rapid-decay phase increased (p=0.04) while rapid-decay t1/2 decreased (p=0.0004) with increasing stage, suggesting that higher stage may be associated with tumor cells that both grow more rapidly and have a higher probability of surviving metastatic processes than in early stage tumors. This preliminary assessment of EDNRA suggests that it may be worth exploring this approach further using more sophisticated, statistically rigorous nonlinear modelling approaches. Using such approaches to supplement standard survival analyses could suggest or support specific testable hypotheses. PMID:20627364
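    A toy version of the decomposition can be sketched as follows, assuming a "cured" fraction plus a single exponentially relapsing subpopulation (the study fits up to three phases); the grid search below merely stands in for proper nonlinear regression, and all numbers are synthetic.

```python
import numpy as np

def survival(t, f_cured, t_half):
    """'Cured' fraction plus one exponentially relapsing subpopulation."""
    return f_cured + (1 - f_cured) * np.exp(-np.log(2) * t / t_half)

# Synthetic Kaplan-Meier-like curve: 60% cured, relapsing group t1/2 = 24 months
t = np.linspace(0, 120, 61)
s_obs = survival(t, 0.60, 24.0) + np.random.default_rng(3).normal(0, 0.005, t.size)

# Crude grid search over (cured fraction, half-life) as a stand-in for
# proper nonlinear regression
grid_f = np.linspace(0.3, 0.9, 121)
grid_h = np.linspace(5.0, 60.0, 111)
fits = [(f, h, np.sum((survival(t, f, h) - s_obs) ** 2))
        for f in grid_f for h in grid_h]
f_fit, h_fit, _ = min(fits, key=lambda r: r[2])
print(round(f_fit, 2), round(h_fit, 1))
```

Recovering the cured fraction and the relapsing half-life from the curve shape is exactly the kind of decomposition the abstract uses to separate "cured," rapidly-relapsing, and slowly-relapsing groups.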

  8. Nonlinear radiative heat transfer and Hall effects on a viscous fluid in a semi-porous curved channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbas, Z.; Naveed, M., E-mail: rana.m.naveed@gmail.com; Sajid, M.

    In this paper, the effects of Hall currents and nonlinear radiative heat transfer on a viscous fluid passing through a semi-porous curved channel coiled in a circle of radius R are analyzed. A curvilinear coordinate system is used to develop the mathematical model of the considered problem in the form of partial differential equations. Similarity solutions of the governing boundary value problems are obtained numerically using the shooting method. The results are also validated with the well-known finite difference technique known as the Keller-Box method. The analysis of the involved pertinent parameters on the velocity and temperature distributions is presented through graphs and tables.

  9. On the nonlinear interaction of Goertler vortices and Tollmien-Schlichting waves in curved channel flows at finite Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Daudpota, Q. Isa; Zang, Thomas A.; Hall, Philip

    1988-01-01

    The flow in a two-dimensional curved channel driven by an azimuthal pressure gradient can become linearly unstable due to axisymmetric perturbations and/or nonaxisymmetric perturbations depending on the curvature of the channel and the Reynolds number. For a particular small value of curvature, the critical neighborhood of this curvature value and critical Reynolds number, nonlinear interactions occur between these perturbations. The Stuart-Watson approach is used to derive two coupled Landau equations for the amplitudes of these perturbations. The stability of the various possible states of these perturbations is shown through bifurcation diagrams. Emphasis is given to those cases which have relevance to external flows.

  10. On the nonlinear interaction of Gortler vortices and Tollmien-Schlichting waves in curved channel flows at finite Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Daudpota, Q. Isa; Hall, Philip; Zang, Thomas A.

    1987-01-01

    The flow in a two-dimensional curved channel driven by an azimuthal pressure gradient can become linearly unstable due to axisymmetric perturbations and/or nonaxisymmetric perturbations depending on the curvature of the channel and the Reynolds number. For a particular small value of curvature, the critical neighborhood of this curvature value and critical Reynolds number, nonlinear interactions occur between these perturbations. The Stuart-Watson approach is used to derive two coupled Landau equations for the amplitudes of these perturbations. The stability of the various possible states of these perturbations is shown through bifurcation diagrams. Emphasis is given to those cases which have relevance to external flows.

  11. The complementary relationship (CR) approach aids evapotranspiration estimation in the data scarce region of Tibetan Plateau: symmetric and asymmetric perspectives

    NASA Astrophysics Data System (ADS)

    Ma, N.; Zhang, Y.; Szilagyi, J.; Xu, C. Y.

    2015-12-01

While the land surface latent and sensible heat release in the Tibetan Plateau (TP) can greatly influence the Asian monsoon circulation, knowledge of actual evapotranspiration (ETa) in the TP has been severely limited by the extremely sparse ground observation network. The complementary relationship (CR) theory therefore offers great potential for estimating ETa, since it relies solely on routine meteorological observations. Using in-situ energy/water flux observations over the highest semiarid alpine steppe in the TP, we first modified specific components within the CR. We found that symmetry of the CR could be achieved for dry regions of the TP when (i) the Priestley-Taylor coefficient, (ii) the slope of the saturation vapor pressure curve, and (iii) the wind function were locally calibrated using, respectively, ETa observations on wet days, an estimate of the wet surface temperature, and Monin-Obukhov Similarity (MOS) theory. In this way, the error of the ETa simulated by the symmetric AA model could be reduced substantially. In addition, the asymmetric CR was confirmed in the TP when the D20 above-ground and/or E601B sunken pan evaporation (Epan) was used as a proxy for ETp; daily ETa could thus also be estimated by coupling the D20 above-ground and/or E601B sunken pans through the CR. Additionally, to avoid modifying specific components of the CR, we also evaluated the Nonlinear-CR model and Morton's CRAE model. The former does not require pre-determination of the asymmetry of the CR, while the latter does not require wind speed data as input. We found that both models can also simulate daily ETa well, provided their parameter values are locally calibrated. 
The sensitivity analysis shows that, if measured ETa data are unavailable for calibrating the models' parameter values, the Nonlinear-CR model may be particularly suitable for estimating ETa because its mild sensitivity to parameter values makes it possible to employ published parameter values derived under similar climatic and land cover conditions. The CRAE model should also be highlighted for the TP, since the complex topography introduces large uncertainties into wind speed data even when advanced geostatistical methods are used to spatially interpolate the point-based meteorological records.
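The symmetric CR logic the abstract builds on, the advection-aridity idea that ETp + ETa = 2·ETw, can be sketched in a few lines. Everything below is a simplified illustration with hypothetical example inputs, not data or the exact formulations from the study:

```python
# Illustrative sketch of the symmetric complementary relationship (CR):
#   ETp + ETa = 2 * ETw   =>   ETa = 2 * ETw - ETp
# ETw from a Priestley-Taylor wet-environment estimate, ETp from a
# Penman-type combination equation. All inputs are hypothetical values.

def priestley_taylor_etw(delta, gamma, energy, alpha=1.26):
    """Wet-environment ET (mm/day water equivalent of available energy)."""
    return alpha * delta / (delta + gamma) * energy

def penman_etp(delta, gamma, energy, drying_power):
    """Potential ET from a Penman-type combination equation."""
    return (delta * energy + gamma * drying_power) / (delta + gamma)

delta = 0.145   # kPa/K, slope of saturation vapor pressure curve (assumed)
gamma = 0.066   # kPa/K, psychrometric constant
energy = 4.0    # mm/day, available energy (Rn - G) in water equivalent
drying = 6.0    # mm/day, advective drying power of the air (assumed)

etw = priestley_taylor_etw(delta, gamma, energy)
etp = penman_etp(delta, gamma, energy, drying)
eta = 2.0 * etw - etp   # symmetric CR estimate of actual ET

print(f"ETw={etw:.2f}  ETp={etp:.2f}  ETa={eta:.2f} mm/day")
```

Under drying conditions the complementarity shows up directly: ETa falls below ETw while ETp rises above it.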

  12. A nonlinear mechanics model of bio-inspired hierarchical lattice materials consisting of horseshoe microstructures

    PubMed Central

    Ma, Qiang; Cheng, Huanyu; Jang, Kyung-In; Luan, Haiwen; Hwang, Keh-Chih; Rogers, John A.; Huang, Yonggang; Zhang, Yihui

    2016-01-01

Development of advanced synthetic materials that can mimic the mechanical properties of non-mineralized soft biological materials has important implications in a wide range of technologies. Hierarchical lattice materials constructed with horseshoe microstructures belong to this class of bio-inspired synthetic materials, where the mechanical responses can be tailored to match the nonlinear J-shaped stress-strain curves of human skin. The underlying relations between the J-shaped stress-strain curves and the microstructure geometry are essential in designing such systems for targeted applications. Here, a theoretical model of this type of hierarchical lattice material is developed by combining a finite deformation constitutive relation of the building block (i.e., the horseshoe microstructure) with analyses of equilibrium and deformation compatibility in the periodic lattices. The nonlinear J-shaped stress-strain curves and Poisson ratios predicted by this model agree very well with results of finite element analyses (FEA) and experiments. Based on this model, analytic solutions were obtained for some key mechanical quantities, e.g., elastic modulus, Poisson ratio, peak modulus, and the critical strain around which the tangent modulus increases rapidly. A negative Poisson effect is revealed in the hierarchical lattice with triangular topology, as opposed to a positive Poisson effect in hierarchical lattices with Kagome and honeycomb topologies. The lattice topology is also found to have a strong influence on the stress-strain curve. For the three isotropic lattice topologies (triangular, Kagome and honeycomb), the hierarchical triangular lattice material renders the sharpest transition in the stress-strain curve and relatively high stretchability, given the same porosity and arc angle of the horseshoe microstructure. 
Furthermore, a demonstrative example illustrates the utility of the developed model in the rapid optimization of hierarchical lattice materials for reproducing the desired stress-strain curves of human skin. This study provides theoretical guidelines for future designs of soft bio-mimetic materials with hierarchical lattice constructions. PMID:27087704
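The J shape these records describe, compliant at small strain and sharply stiffening around a critical strain, can be illustrated with a toy tangent-modulus model. This is not the authors' lattice constitutive relation; every parameter (moduli, critical strain, transition width) is an assumed placeholder:

```python
import numpy as np

# Toy J-shaped stress-strain curve: a tangent modulus that rises sharply
# around a critical strain eps_c, integrated numerically to give stress.

def tangent_modulus(strain, e_low=0.05, e_high=5.0, eps_c=0.3, width=0.02):
    """Smooth low->high modulus transition centered at eps_c (values assumed)."""
    return e_low + (e_high - e_low) / (1.0 + np.exp(-(strain - eps_c) / width))

strain = np.linspace(0.0, 0.5, 2001)
# Cumulative trapezoidal integration of the tangent modulus gives stress.
stress = np.concatenate(([0.0], np.cumsum(
    0.5 * (tangent_modulus(strain[1:]) + tangent_modulus(strain[:-1]))
    * np.diff(strain))))

# The curve is compliant below eps_c and stiff above it: the J shape.
print(f"E_t(0.1)={tangent_modulus(0.1):.3f}, E_t(0.4)={tangent_modulus(0.4):.3f}")
```

In the paper's model the transition location and sharpness follow from the horseshoe arc angle and lattice topology rather than being free parameters as here.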

  13. A nonlinear mechanics model of bio-inspired hierarchical lattice materials consisting of horseshoe microstructures

    NASA Astrophysics Data System (ADS)

    Ma, Qiang; Cheng, Huanyu; Jang, Kyung-In; Luan, Haiwen; Hwang, Keh-Chih; Rogers, John A.; Huang, Yonggang; Zhang, Yihui

    2016-05-01

Development of advanced synthetic materials that can mimic the mechanical properties of non-mineralized soft biological materials has important implications in a wide range of technologies. Hierarchical lattice materials constructed with horseshoe microstructures belong to this class of bio-inspired synthetic materials, where the mechanical responses can be tailored to match the nonlinear J-shaped stress-strain curves of human skin. The underlying relations between the J-shaped stress-strain curves and the microstructure geometry are essential in designing such systems for targeted applications. Here, a theoretical model of this type of hierarchical lattice material is developed by combining a finite deformation constitutive relation of the building block (i.e., the horseshoe microstructure) with analyses of equilibrium and deformation compatibility in the periodic lattices. The nonlinear J-shaped stress-strain curves and Poisson ratios predicted by this model agree very well with results of finite element analyses (FEA) and experiments. Based on this model, analytic solutions were obtained for some key mechanical quantities, e.g., elastic modulus, Poisson ratio, peak modulus, and the critical strain around which the tangent modulus increases rapidly. A negative Poisson effect is revealed in the hierarchical lattice with triangular topology, as opposed to a positive Poisson effect in hierarchical lattices with Kagome and honeycomb topologies. The lattice topology is also found to have a strong influence on the stress-strain curve. For the three isotropic lattice topologies (triangular, Kagome and honeycomb), the hierarchical triangular lattice material renders the sharpest transition in the stress-strain curve and relatively high stretchability, given the same porosity and arc angle of the horseshoe microstructure. 
Furthermore, a demonstrative example illustrates the utility of the developed model in the rapid optimization of hierarchical lattice materials for reproducing the desired stress-strain curves of human skin. This study provides theoretical guidelines for future designs of soft bio-mimetic materials with hierarchical lattice constructions.

  14. A nonlinear mechanics model of bio-inspired hierarchical lattice materials consisting of horseshoe microstructures.

    PubMed

    Ma, Qiang; Cheng, Huanyu; Jang, Kyung-In; Luan, Haiwen; Hwang, Keh-Chih; Rogers, John A; Huang, Yonggang; Zhang, Yihui

    2016-05-01

Development of advanced synthetic materials that can mimic the mechanical properties of non-mineralized soft biological materials has important implications in a wide range of technologies. Hierarchical lattice materials constructed with horseshoe microstructures belong to this class of bio-inspired synthetic materials, where the mechanical responses can be tailored to match the nonlinear J-shaped stress-strain curves of human skin. The underlying relations between the J-shaped stress-strain curves and the microstructure geometry are essential in designing such systems for targeted applications. Here, a theoretical model of this type of hierarchical lattice material is developed by combining a finite deformation constitutive relation of the building block (i.e., the horseshoe microstructure) with analyses of equilibrium and deformation compatibility in the periodic lattices. The nonlinear J-shaped stress-strain curves and Poisson ratios predicted by this model agree very well with results of finite element analyses (FEA) and experiments. Based on this model, analytic solutions were obtained for some key mechanical quantities, e.g., elastic modulus, Poisson ratio, peak modulus, and the critical strain around which the tangent modulus increases rapidly. A negative Poisson effect is revealed in the hierarchical lattice with triangular topology, as opposed to a positive Poisson effect in hierarchical lattices with Kagome and honeycomb topologies. The lattice topology is also found to have a strong influence on the stress-strain curve. For the three isotropic lattice topologies (triangular, Kagome and honeycomb), the hierarchical triangular lattice material renders the sharpest transition in the stress-strain curve and relatively high stretchability, given the same porosity and arc angle of the horseshoe microstructure. 
Furthermore, a demonstrative example illustrates the utility of the developed model in the rapid optimization of hierarchical lattice materials for reproducing the desired stress-strain curves of human skin. This study provides theoretical guidelines for future designs of soft bio-mimetic materials with hierarchical lattice constructions.

  15. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR PREPARATION OF CALIBRATION AND SURROGATE RECOVERY SOLUTIONS FOR GC/MS ANALYSIS OF PESTICIDES (BCO-L-21.1)

    EPA Science Inventory

    The purpose of this SOP is to describe procedures for preparing calibration curve solutions used for gas chromatography/mass spectrometry (GC/MS) analysis of chlorpyrifos, diazinon, malathion, DDT, DDE, DDD, a-chlordane, and g-chlordane in dust, soil, air, and handwipe sample ext...

  16. SU-F-J-65: Prediction of Patient Setup Errors and Errors in the Calibration Curve from Prompt Gamma Proton Range Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, J; Labarbe, R; Sterpin, E

    2016-06-15

Purpose: To understand the extent to which prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added a 5% distal Gaussian noise to each beamlet in order to introduce the discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s and θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0 mm and underrange of 0.6 mm on average. When s, θ, and u were ignored, these values were larger: 2.1 mm and 4.3 mm. In order to quantify the need for setup error corrections, we also performed computations in which u was corrected for, but s and θ were not; this yielded 3.2 mm and 3.2 mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity of correcting for setup errors and errors in the calibration curve. The simplicity and speed of our method make it a good candidate for implementation as a tool for in-room adaptive therapy. This work also demonstrates that prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.
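The optimization step described in Methods, minimizing a Euclidean mismatch over the shift s, rotation θ, and calibration-curve offset u, can be sketched on a toy range map. The smooth field and all parameter values below are invented for illustration, not the study's Monte Carlo data:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy recovery of setup shift s, rotation theta, and a calibration-curve
# offset u from "measured" beamlet ranges, by least-squares mismatch.
def range_map(x, y):                       # hypothetical proton range (mm)
    return 150.0 + 8.0 * np.sin(0.05 * x) + 5.0 * np.cos(0.07 * y)

gx, gy = np.meshgrid(np.linspace(-40, 40, 13), np.linspace(-40, 40, 13))
pts = np.column_stack([gx.ravel(), gy.ravel()])   # square beamlet array (mm)

def transform(p, sx, sy, theta):
    c, s = np.cos(theta), np.sin(theta)
    return p @ np.array([[c, -s], [s, c]]).T + np.array([sx, sy])

true_sx, true_sy, true_theta, true_u = 2.0, -1.5, 0.03, 1.0
moved = transform(pts, true_sx, true_sy, true_theta)
measured = range_map(moved[:, 0], moved[:, 1]) + true_u

def residuals(p):
    sx, sy, theta, u = p
    pred = range_map(*transform(pts, sx, sy, theta).T) + u
    return pred - measured

res = least_squares(residuals, x0=[0.0, 0.0, 0.0, 0.0])
print("recovered (sx, sy, theta, u):", np.round(res.x, 4))
```

Because the mismatch is smooth in all four parameters, a standard least-squares solver recovers the perturbation directly; the study's version additionally contends with distal noise and multiple calibration-curve variants.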

  17. A comparison of the Injury Severity Score and the Trauma Mortality Prediction Model.

    PubMed

    Cook, Alan; Weddle, Jo; Baker, Susan; Hosmer, David; Glance, Laurent; Friedman, Lee; Osler, Turner

    2014-01-01

    Performance benchmarking requires accurate measurement of injury severity. Despite its shortcomings, the Injury Severity Score (ISS) remains the industry standard 40 years after its creation. A new severity measure, the Trauma Mortality Prediction Model (TMPM), uses either the Abbreviated Injury Scale (AIS) or DRG International Classification of Diseases-9th Rev. (ICD-9) lexicons and may better quantify injury severity compared with ISS. We compared the performance of TMPM with ISS and other measures of injury severity in a single cohort of patients. We included 337,359 patient records with injuries reliably described in both the AIS and the ICD-9 lexicons from the National Trauma Data Bank. Five injury severity measures (ISS, maximum AIS score, New Injury Severity Score [NISS], ICD-9-Based Injury Severity Score [ICISS], TMPM) were computed using either the AIS or ICD-9 codes. These measures were compared for discrimination (area under the receiver operating characteristic curve), an estimate of proximity to a model that perfectly predicts the outcome (Akaike information criterion), and model calibration curves. TMPM demonstrated superior receiver operating characteristic curve, Akaike information criterion, and calibration using either the AIS or ICD-9 lexicons. Calibration plots demonstrate the monotonic characteristics of the TMPM models contrasted by the nonmonotonic features of the other prediction models. Severity measures were more accurate with the AIS lexicon rather than ICD-9. NISS proved superior to ISS in either lexicon. Since NISS is simpler to compute, it should replace ISS when a quick estimate of injury severity is required for AIS-coded injuries. Calibration curves suggest that the nonmonotonic nature of ISS may undermine its performance. TMPM demonstrated superior overall mortality prediction compared with all other models including ISS whether the AIS or ICD-9 lexicons were used. 
Because TMPM provides an absolute probability of death, it may allow clinicians to communicate more precisely with one another and with patients and families. Diagnostic study, level I; prognostic study, level II.

  18. Molecular Form Differences Between Prostate-Specific Antigen (PSA) Standards Create Quantitative Discordances in PSA ELISA Measurements.

    PubMed

    McJimpsey, Erica L

    2016-02-25

The prostate-specific antigen (PSA) assays currently employed for the detection of prostate cancer (PCa) lack the specificity needed to differentiate PCa from benign prostatic hyperplasia and have high false positive rates. The PSA calibrants used to create calibration curves in these assays are typically purified from seminal plasma and contain many molecular forms (intact PSA and cleaved subforms). The purpose of this study was to determine whether the composition of the PSA molecular forms found in these PSA standards contributes to the lack of PSA test reliability. To this end, seminal plasma purified PSA standards from different commercial sources were investigated by western blot (WB) and in multiple research grade PSA ELISAs. The WB results revealed that all of the PSA standards contained different mass concentrations of intact and cleaved molecular forms. Increased mass concentrations of intact PSA yielded higher immunoassay absorbance values, even between lots from the same manufacturer. Standardizing the molecular form mass concentrations and purification methods of seminal plasma derived PSA calibrants will help close the gaps in PCa testing measurements that require the use of PSA values, such as the % free PSA and the Prostate Health Index, by increasing the accuracy of the calibration curves.
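ELISA calibration curves of the kind this study examines are conventionally fit with a four-parameter logistic (4PL). The sketch below fits and inverts one on synthetic absorbance data; all parameter values and concentrations are assumed, not measurements from the PSA standards tested:

```python
import numpy as np
from scipy.optimize import curve_fit

# Four-parameter logistic (4PL), the standard ELISA calibration shape.
def four_pl(x, a, b, c, d):
    """a: zero-dose response, b: slope, c: mid-point (EC50), d: max response."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def invert(y, a, b, c, d):
    """Convert an absorbance back to concentration via the fitted curve."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

rng = np.random.default_rng(2)
conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 25.0])    # ng/mL (assumed)
absorb = four_pl(conc, 0.04, 1.3, 8.0, 2.4)               # assumed truth
absorb = absorb + rng.normal(scale=0.01, size=conc.size)  # assay noise

popt, _ = curve_fit(four_pl, conc, absorb, p0=[0.05, 1.0, 5.0, 2.0],
                    bounds=([0.0, 0.5, 0.1, 1.0], [0.5, 4.0, 20.0, 4.0]))
print("fitted [a, b, c, d]:", np.round(popt, 3))
```

A calibrant whose intact/cleaved composition differs shifts this fitted curve, and every concentration read off it shifts with it, which is the discordance mechanism the abstract describes.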

  19. Molecular Form Differences Between Prostate-Specific Antigen (PSA) Standards Create Quantitative Discordances in PSA ELISA Measurements

    NASA Astrophysics Data System (ADS)

    McJimpsey, Erica L.

    2016-02-01

The prostate-specific antigen (PSA) assays currently employed for the detection of prostate cancer (PCa) lack the specificity needed to differentiate PCa from benign prostatic hyperplasia and have high false positive rates. The PSA calibrants used to create calibration curves in these assays are typically purified from seminal plasma and contain many molecular forms (intact PSA and cleaved subforms). The purpose of this study was to determine whether the composition of the PSA molecular forms found in these PSA standards contributes to the lack of PSA test reliability. To this end, seminal plasma purified PSA standards from different commercial sources were investigated by western blot (WB) and in multiple research grade PSA ELISAs. The WB results revealed that all of the PSA standards contained different mass concentrations of intact and cleaved molecular forms. Increased mass concentrations of intact PSA yielded higher immunoassay absorbance values, even between lots from the same manufacturer. Standardizing the molecular form mass concentrations and purification methods of seminal plasma derived PSA calibrants will help close the gaps in PCa testing measurements that require the use of PSA values, such as the % free PSA and the Prostate Health Index, by increasing the accuracy of the calibration curves.

  20. Fitting Richards' curve to data of diverse origins

    USGS Publications Warehouse

    Johnson, D.H.; Sargeant, A.B.; Allen, S.H.

    1975-01-01

Published techniques for fitting data to nonlinear growth curves are briefly reviewed; most require knowledge of the shape of the curve. A flexible growth curve developed by Richards (1959) is discussed as an alternative when the shape is unknown. The shape of this curve is governed by a specific parameter which can be estimated from the data. We describe in detail the fitting of a diverse set of longitudinal and cross-sectional data to Richards' growth curve for the purpose of determining the age of red fox (Vulpes vulpes) pups on the basis of right hind foot length. The fitted curve is found suitable for pups less than approximately 80 days old. The curve is extrapolated to prenatal growth and shown to be appropriate only for about 10 days prior to birth.
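Richards' curve, with its estimable shape parameter, can be fit by standard nonlinear least squares. A minimal sketch on synthetic growth data follows; the parameterization is one common form of the curve, and the "foot length" numbers are invented, not the fox measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Richards' flexible growth curve in a common parameterization:
#   y(t) = A * (1 + v * exp(-k * (t - t0))) ** (-1 / v)
# v is the shape parameter the abstract refers to (v = 1 gives the logistic).
def richards(t, A, k, t0, v):
    return A * (1.0 + v * np.exp(-k * (t - t0))) ** (-1.0 / v)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 80.0, 60)                 # age (days)
y_obs = richards(t, 140.0, 0.08, 25.0, 0.5)    # "true" growth (mm), assumed
y_obs = y_obs + rng.normal(scale=1.0, size=t.size)  # measurement noise

popt, _ = curve_fit(richards, t, y_obs, p0=[150.0, 0.1, 20.0, 1.0],
                    bounds=([50.0, 0.01, 0.0, 0.05], [300.0, 0.5, 60.0, 5.0]))
A, k, t0, v = popt
print(f"A={A:.1f} mm, k={k:.3f}/day, t0={t0:.1f} d, shape v={v:.2f}")
```

Because the shape parameter v is itself fitted, no prior commitment to a logistic, Gompertz, or other fixed shape is needed, which is exactly the flexibility the abstract highlights.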

  1. Rapid prediction of total petroleum hydrocarbons concentration in contaminated soil using vis-NIR spectroscopy and regression techniques.

    PubMed

    Douglas, R K; Nawar, S; Alamar, M C; Mouazen, A M; Coulon, F

    2018-03-01

Visible and near infrared spectrometry (vis-NIRS) coupled with data mining techniques can offer fast and cost-effective quantitative measurement of total petroleum hydrocarbons (TPH) in contaminated soils. However, the literature shows significant differences in vis-NIRS performance between linear and nonlinear calibration methods. This study compared the performance of linear partial least squares regression (PLSR) with nonlinear random forest (RF) regression for the calibration of vis-NIRS when analysing TPH in soils. A total of 88 soil samples (3 uncontaminated and 85 contaminated) collected from three sites in the Niger Delta were scanned using an analytical spectral device (ASD) spectrophotometer (350-2500 nm) in diffuse reflectance mode. Sequential ultrasonic solvent extraction-gas chromatography (SUSE-GC) was used as the reference quantification method for TPH, taken as the sum of the aliphatic and aromatic fractions between C10 and C35. Prior to model development, spectra were subjected to pre-processing including noise cut, maximum normalization, first derivative and smoothing. Then 65 samples were selected as the calibration set and the remaining 20 samples as the validation set. Both the vis-NIR spectra and gas chromatography profiles of the 85 soil samples were subjected to RF and PLSR with leave-one-out cross-validation (LOOCV) for the calibration models. Results showed that the RF calibration model, with a coefficient of determination (R²) of 0.85, a root mean square error of prediction (RMSEP) of 68.43 mg kg⁻¹, and a residual prediction deviation (RPD) of 2.61, outperformed PLSR (R²=0.63, RMSEP=107.54 mg kg⁻¹, RPD=2.55) in cross-validation. These results indicate that the RF modelling approach accounts for the nonlinearity of the soil spectral responses and hence provides significantly higher prediction accuracy than the linear PLSR. 
It is recommended to adopt vis-NIRS coupled with the RF modelling approach as a portable and cost-effective method for the rapid quantification of TPH in soils. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Gradient nonlinearity calibration and correction for a compact, asymmetric magnetic resonance imaging gradient system

    PubMed Central

    Tao, S; Trzasko, J D; Gunter, J L; Weavers, P T; Shu, Y; Huston, J; Lee, S K; Tan, E T; Bernstein, M A

    2017-01-01

Due to engineering limitations, the spatial encoding gradient fields in conventional magnetic resonance imaging cannot be perfectly linear and always contain higher-order, nonlinear components. If ignored during image reconstruction, gradient nonlinearity (GNL) manifests as image geometric distortion. Given an estimate of the GNL field, this distortion can be corrected to a degree proportional to the accuracy of the field estimate. The GNL of a gradient system is typically characterized using a spherical harmonic polynomial model with model coefficients obtained from electromagnetic simulation. Conventional whole-body gradient systems are symmetric in design; typically, only odd-order terms up to the 5th order are required for GNL modeling. Recently, a high-performance, asymmetric gradient system was developed, which exhibits more complex GNL that requires higher-order terms, including both odd and even orders, for accurate modeling. This work characterizes the GNL of this system using an iterative calibration method and a fiducial phantom used in ADNI (Alzheimer's Disease Neuroimaging Initiative). The phantom was scanned at different locations inside the 26-cm diameter spherical volume of this gradient, and the positions of fiducials in the phantom were estimated. An iterative calibration procedure was used to identify the model coefficients that minimize the mean-squared error between the true fiducial positions and the positions estimated from images corrected using these coefficients. To examine the effect of higher-order and even-order terms, this calibration was performed using spherical harmonic polynomials of different orders up to the 10th, including even- and odd-order terms, or odd-order terms only. The results showed that the model coefficients of this gradient can be successfully estimated. 
The residual root-mean-squared error after correction using up to the 10th-order coefficients was reduced to 0.36 mm, yielding spatial accuracy comparable to that of conventional whole-body gradients. The even-order terms were necessary for accurate GNL modeling. In addition, the calibrated coefficients improved image geometric accuracy compared with the simulation-based coefficients. PMID:28033119
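The calibration idea, estimating per-order distortion coefficients by minimizing position error against known fiducial locations, reduces in one dimension to a small least-squares problem. The coefficients below, including the even-order terms an asymmetric design introduces, are illustrative only, not the scanner's:

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D toy GNL calibration: a distortion polynomial with both odd- and
# even-order terms maps ideal fiducial positions to measured ones, and
# linear least squares recovers the per-order coefficients.
c_true = {2: 4e-4, 3: -6e-4, 5: 2e-7}                  # order: coefficient
x = np.linspace(-13.0, 13.0, 25)                       # fiducials (cm)
x_meas = x + sum(c * x**n for n, c in c_true.items())
x_meas = x_meas + rng.normal(scale=1e-3, size=x.size)  # localization noise

orders = [2, 3, 4, 5]
A = np.column_stack([x**n for n in orders])            # design matrix
coef, *_ = np.linalg.lstsq(A, x_meas - x, rcond=None)

for n, c in zip(orders, coef):
    print(f"order {n}: {c: .2e}")
```

Fitting with an odd-only basis would fold the order-2 distortion into the residual, which is the 1-D analogue of why even-order terms proved necessary for this gradient.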

  3. A Comparison of Radiometric Calibration Techniques for Lunar Impact Flashes

    NASA Technical Reports Server (NTRS)

    Suggs, R.

    2016-01-01

Video observations of lunar impact flashes have been made by a number of researchers since the late 1990s, and the problem of determining the impact energies has been approached in different ways (Bellot Rubio et al., 2000 [1]; Bouley et al., 2012 [2]; Suggs et al., 2014 [3]; Rembold and Ryan, 2015 [4]; Ortiz et al., 2015 [5]). The wide spectral response of the unfiltered video cameras used for all published measurements necessitates color correction to the standard filter magnitudes available for the comparison stars. An estimate of the color of the impact flash is also needed to correct it to the chosen passband. Magnitudes corrected to standard filters are then used to determine the luminous energy in the filter passband according to the stellar atmosphere calibrations of Bessell et al. (1998) [6]. Figure 1 illustrates the problem: the camera passband is the wide black curve; the blue, green, red, and magenta curves show the passbands of the Johnson-Cousins B, V, R, and I filters for which we have calibration star magnitudes; and the blackbody curve of an impact flash of temperature 2800 K (Nemtchinov et al., 1998 [7]) is the dashed line. This paper compares the various photometric calibration techniques and how they address the color corrections necessary for the calculation of the luminous energy (radiometry) of impact flashes. This issue has significant implications for the determination of luminous efficiency, predictions of impact crater sizes for observed flashes, and the flux of meteoroids in the tens of grams to kilogram size range.
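The color-correction problem sketched around Figure 1 comes down to how unevenly a 2800 K blackbody fills each passband. A sketch with Gaussian stand-ins for the filters follows; band centers and widths are rough assumptions, not the true Johnson-Cousins or camera response curves:

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

# Planck spectral radiance at the 2800 K flash temperature cited in the
# abstract, integrated over toy Gaussian passbands.
def planck(wl_nm, T):
    wl = wl_nm * 1e-9
    return (2.0 * H * C**2 / wl**5) / np.expm1(H * C / (wl * K * T))

wl = np.linspace(300.0, 1100.0, 2001)     # wavelength grid (nm)

def band_flux(center, width, T):
    resp = np.exp(-0.5 * ((wl - center) / width) ** 2)   # toy passband
    return float(np.sum(resp * planck(wl, T)) * (wl[1] - wl[0]))

T_flash = 2800.0                          # K (Nemtchinov et al., 1998)
bands = {"B": (440.0, 50.0), "V": (550.0, 45.0),
         "R": (640.0, 70.0), "I": (800.0, 75.0)}
flux = {name: band_flux(c, w, T_flash) for name, (c, w) in bands.items()}

for name in bands:                        # a 2800 K flash is very red
    print(f"{name}/V flux ratio: {flux[name] / flux['V']:.2f}")
```

Because the flash is far redder than most comparison stars, the camera collects disproportionately more flash light in R and I than in B, and the size of that imbalance is what the competing calibration techniques must agree on.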

  4. Observation of nonlinear dissipation in piezoresistive diamond nanomechanical resonators by heterodyne down-mixing.

    PubMed

    Imboden, Matthias; Williams, Oliver A; Mohanty, Pritiraj

    2013-09-11

We report the observation of nonlinear dissipation in diamond nanomechanical resonators measured by an ultrasensitive heterodyne down-mixing piezoresistive detection technique. The combination of a hybrid structure and symmetry-breaking clamps enables sensitive piezoresistive detection of multiple orthogonal modes in a diamond resonator over a wide frequency and temperature range. Using this detection method, we observe the transition from purely linear dissipation at room temperature to strongly nonlinear dissipation at cryogenic temperatures. At high drive powers and below liquid nitrogen temperatures, the resonant structure dynamics follows a van der Pol-Duffing equation of motion. Instead of using the broadening of the full width at half-maximum, we propose a nonlinear dissipation backbone curve as a method to characterize the strength of nonlinear dissipation in devices with a nonlinear spring constant.

  5. S-NPP VIIRS thermal emissive bands on-orbit calibration and performance

    NASA Astrophysics Data System (ADS)

    Efremova, Boryana; McIntire, Jeff; Moyer, David; Wu, Aisheng; Xiong, Xiaoxiong

    2014-09-01

Presented is an assessment of the on-orbit radiometric performance of the thermal emissive bands (TEB) of the Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) instrument, based on data from the first two years of operations (20 January 2012 to 20 January 2014). The VIIRS TEB are calibrated on orbit using a V-grooved blackbody (BB) as a radiance source. Performance characteristics trended over the life of the mission include the F factor, a measure of the gain change of the TEB detectors; the noise equivalent differential temperature (NEdT), a measure of the detector noise; and the detector offset and nonlinear terms, trended at the quarterly BB warm-up/cool-down cycles. We find that the BB temperature is well controlled and stable within the 30 mK requirement. The F factor trends are very stable, showing little degradation (within 0.8%). The offset and nonlinearity terms are also free of noticeable drifts. NEdT is stable and does not show any trend. Other TEB radiometric calibration-related activities discussed include the on-orbit assessment of the response-versus-scan-angle functions and an approach to improve the M13 low-gain calibration using onboard lunar measurements. We conclude that all the assessed parameters comply with the requirements, and the TEB provide radiometric measurements with the required accuracy.

  6. Contaminant concentration in environmental samples using LIBS and CF-LIBS

    NASA Astrophysics Data System (ADS)

    Pandhija, S.; Rai, N. K.; Rai, A. K.; Thakur, S. N.

    2010-01-01

The present paper deals with the detection and quantification of toxic heavy metals such as Cd, Co, Pb, Zn and Cr in environmental samples using laser-induced breakdown spectroscopy (LIBS) and calibration-free LIBS (CF-LIBS). A MATLAB program has been developed based on the CF-LIBS algorithm given by earlier workers, and the concentrations of pollutants present in industrial-area soil have been determined. LIBS spectra of a number of certified reference soil samples with varying concentrations of toxic elements (Cd, Zn) have been recorded to obtain calibration curves. The concentrations of Cd and Zn in soil samples from the Jajmau area, Kanpur (India) have been determined using these calibration curves and also by the CF-LIBS approach. Our results clearly demonstrate that the combination of LIBS and CF-LIBS is very useful for the study of pollutants in the environment. Some of the results have also been found to be in good agreement with those of ICP-OES.
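The calibration-curve branch of this workflow is a straight-line fit of line intensity against certified concentration, inverted for unknowns. The numbers below are invented for illustration, not the study's Cd/Zn data:

```python
import numpy as np

# Fit line intensity vs. certified concentration for reference standards,
# then invert the line to estimate an unknown sample's concentration.
conc_ref = np.array([5.0, 20.0, 50.0, 100.0, 200.0])              # mg/kg
intensity_ref = np.array([120.0, 410.0, 1010.0, 2000.0, 3950.0])  # a.u.

slope, intercept = np.polyfit(conc_ref, intensity_ref, 1)

def concentration(intensity):
    """Invert the linear calibration curve."""
    return (intensity - intercept) / slope

print(f"slope={slope:.2f} a.u./(mg/kg), intercept={intercept:.1f} a.u.")
print(f"unknown at 1500 a.u. -> {concentration(1500.0):.1f} mg/kg")
```

CF-LIBS, by contrast, skips the standards entirely and derives concentrations from plasma parameters, which is why the paper uses the two approaches to cross-check one another.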

  7. Traction curves for the decohesion of covalent crystals

    NASA Astrophysics Data System (ADS)

    Enrique, Raúl A.; Van der Ven, Anton

    2017-01-01

We study, from first principles, the energy versus separation curves for the cleavage of a family of covalent crystals with the diamond and zincblende structures. We find that there is universality in the curves for different materials that is chemistry-independent but specific to the geometry of the particular cleavage plane. Since these curves do not strictly follow the universal binding energy relationship (UBER), we present a derivation of an extension to this relationship that includes non-linear force terms. This extended form of UBER allows a flexible and practical mathematical description of decohesion curves that can be applied to the quantification of cohesive zone models.
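The original UBER and the peak-traction location it implies are easy to check numerically. The cubic prefactor term below is only a generic illustration of the kind of non-linear force correction an extended relationship adds, with an assumed coefficient, not the paper's material-specific fit:

```python
import numpy as np

# Original UBER in scaled form: E*(a*) = -(1 + a*) exp(-a*).
# Its derivative (the traction) is a* exp(-a*), which peaks at a* = 1.
a = np.linspace(0.0, 8.0, 4001)

def uber_energy(a):
    return -(1.0 + a) * np.exp(-a)

def extended_energy(a, c3=0.05):    # hypothetical correction coefficient
    return -(1.0 + a + c3 * a**3) * np.exp(-a)

traction = np.gradient(uber_energy(a), a)    # numerical dE*/da*
a_peak = a[np.argmax(traction)]
print(f"peak traction at a* = {a_peak:.3f}")
```

Adding polynomial terms to the prefactor preserves the limits E*(0) = -1 and E*(inf) = 0 while reshaping the traction curve between them, which is the flexibility a cohesive zone model needs.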

  8. L1 Adaptive Control Law in Support of Large Flight Envelope Modeling Work

    NASA Technical Reports Server (NTRS)

    Gregory, Irene M.; Xargay, Enric; Cao, Chengyu; Hovakimyan, Naira

    2011-01-01

    This paper presents results of a flight test of the L1 adaptive control architecture designed to directly compensate for significant uncertain cross-coupling in nonlinear systems. The flight test was conducted on the subscale turbine powered Generic Transport Model that is an integral part of the Airborne Subscale Transport Aircraft Research system at the NASA Langley Research Center. The results presented are in support of nonlinear aerodynamic modeling and instrumentation calibration.

  9. Sliding Mode Control of a Slewing Flexible Beam

    NASA Technical Reports Server (NTRS)

    Wilson, David G.; Parker, Gordon G.; Starr, Gregory P.; Robinett, Rush D., III

    1997-01-01

    An output feedback sliding mode controller (SMC) is proposed to minimize the effects of vibrations of slewing flexible manipulators. A spline trajectory is used to generate ideal position and velocity commands. Constrained nonlinear optimization techniques are used to both calibrate nonlinear models and determine optimized gains to produce a rest-to-rest, residual vibration-free maneuver. Vibration-free maneuvers are important for current and future NASA space missions. This study required the development of the nonlinear dynamic system equations of motion; robust control law design; numerical implementation; system identification; and verification using the Sandia National Laboratories flexible robot testbed. Results are shown for a slewing flexible beam.

  10. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

Camera calibration plays a crucial role in 3D measurement tasks in machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimate of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically meaningful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
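The back projection idea, minimizing 3D error after mapping detected image points back into space rather than 2D reprojection error, can be sketched with a one-parameter pinhole model. Focal length, point depths and noise level are hypothetical, and the real method refines full intrinsics and extrinsics rather than a single parameter:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)

# Toy back-projection calibration: target points at known depths are
# projected to the image; we back-project the noisy detections and
# minimize the 3-D error to recover the focal length.
f_true = 800.0                                   # pixels (assumed)
X = rng.uniform(-0.2, 0.2, size=(40, 2))         # target points (m)
Z = rng.uniform(0.8, 1.2, size=40)               # known depths (m)
uv = f_true * X / Z[:, None]                     # forward projection (px)
uv = uv + rng.normal(scale=0.3, size=uv.shape)   # detection noise (px)

def residuals(p):
    f = p[0]
    X_hat = uv * Z[:, None] / f                  # back projection into 3-D
    return (X_hat - X).ravel()                   # 3-D error, not 2-D

res = least_squares(residuals, x0=[500.0])
print(f"estimated focal length: {res.x[0]:.1f} px")
```

Minimizing in object space weights each point by its physical reconstruction error, which is the property the abstract argues makes the result "more physically meaningful" than image-plane residuals.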

  11. Calibrating Images from the MINERVA Cameras

    NASA Astrophysics Data System (ADS)

    Mercedes Colón, Ana

    2016-01-01

The MINiature Exoplanet Radial Velocity Array (MINERVA) consists of an array of robotic telescopes located on Mount Hopkins, Arizona, with the purpose of performing transit photometry and spectroscopy to find Earth-like planets around Sun-like stars. In order to make photometric observations, it is necessary to calibrate the CCD cameras of the telescopes to account for instrumental errors in the data. In this project, we developed a pipeline that takes optical images and calibrates them using sky flats, darks, and biases to generate a transit light curve.
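The reduction such a pipeline performs is the standard one, calibrated = (raw - bias - dark) / normalized flat. A sketch on synthetic frames with assumed signal levels (not MINERVA data) shows the idea:

```python
import numpy as np

# Synthetic CCD frames: a flat sky plus a "star", recorded through a
# vignetting-like flat-field pattern with bias and dark offsets added.
shape = (64, 64)
bias = np.full(shape, 300.0)                         # bias level (ADU)
dark = np.full(shape, 10.0)                          # dark current (ADU)
flat = 1.0 + 0.1 * np.linspace(-1.0, 1.0, shape[1])  # sensitivity gradient
flat = np.tile(flat, (shape[0], 1))

truth = np.full(shape, 50.0)                         # sky background
truth[30:34, 30:34] += 500.0                         # a star

raw = truth * flat + bias + dark                     # what the CCD records
master_flat = flat / flat.mean()                     # from sky flats

calibrated = (raw - bias - dark) / master_flat       # standard reduction
print(f"star peak after calibration: {calibrated.max():.1f} ADU")
```

With the instrumental signatures divided out, aperture photometry on a sequence of such frames yields the transit light curve directly.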

  12. 3-Dimensional quantitative detection of nanoparticle content in biological tissue samples after local cancer treatment

    NASA Astrophysics Data System (ADS)

    Rahn, Helene; Alexiou, Christoph; Trahms, Lutz; Odenbach, Stefan

    2014-06-01

    X-ray computed tomography is nowadays used for a wide range of applications in medicine, science and technology. X-ray microcomputed tomography (XμCT) follows the same principles used for conventional medical CT scanners, but improves the spatial resolution to a few micrometers. We present an example of an application of X-ray microtomography: a study of the 3-dimensional biodistribution, along with the quantification, of nanoparticle content in tumoral tissue after minimally invasive cancer therapy. One such minimally invasive cancer treatment is magnetic drug targeting, where magnetic nanoparticles are used as controllable drug carriers. The quantification is based on a calibration of the XμCT equipment. The developed calibration procedure is based on a phantom system which allows discrimination between the various gray values of the data set. These phantoms consist of a biological tissue substitute and magnetic nanoparticles. The phantoms have been studied with XμCT and examined magnetically. The obtained gray values and nanoparticle concentrations lead to a calibration curve, which can be applied to tomographic data sets. Accordingly, this calibration enables a voxel-wise assignment of gray values in the digital tomographic data set to nanoparticle content. Thus, the calibration procedure enables a 3-dimensional study of nanoparticle distribution as well as concentration.
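The calibration-curve idea described here, fitting a relation through (gray value, concentration) pairs measured on phantoms and then applying it voxel-wise, can be sketched as follows. A linear relation and all phantom numbers are assumptions for illustration; the paper's actual curve may differ.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
             sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Invented phantom measurements: gray value vs. nanoparticle concentration.
gray = [100.0, 150.0, 200.0, 250.0]   # XμCT gray values
conc = [0.0, 5.0, 10.0, 15.0]         # mg/ml nanoparticle content

a, b = fit_line(gray, conc)

# Voxel-wise assignment: map each gray value in a data set to a concentration.
voxels = [120.0, 180.0, 240.0]
print([a * g + b for g in voxels])   # approximately [2.0, 8.0, 14.0]
```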

  13. Synthesis of nonlinear frequency responses with experimentally extracted nonlinear modes

    NASA Astrophysics Data System (ADS)

    Peter, Simon; Scheel, Maren; Krack, Malte; Leine, Remco I.

    2018-02-01

    Determining frequency response curves is a common task in the vibration analysis of nonlinear systems. Measuring nonlinear frequency responses is often challenging and time consuming due to, e.g., coexisting stable and unstable vibration responses and structure-exciter interaction. The aim of the current paper is to develop a method for the synthesis of nonlinear frequency responses near an isolated resonance, based on data that can be easily and automatically obtained experimentally. The proposed purely experimental approach relies on (a) a standard linear modal analysis carried out at low vibration levels and (b) a phase-controlled tracking of the backbone curve of the considered forced resonance. From (b), the natural frequency and vibrational deflection shape are directly obtained as functions of the vibration level. Moreover, a damping measure can be extracted by power considerations or from the linear modal analysis. In accordance with the single-nonlinear-mode assumption, the near-resonant frequency response can then be synthesized using these data. The method is applied to a benchmark structure consisting of a cantilevered beam attached to a leaf spring undergoing large deflections. The results are compared with direct measurements of the frequency response. The proposed approach is fast, robust and provides a good estimate of the frequency response. It is also found that direct frequency response measurement is less robust due to bifurcations, and that sine sweep excitation with a conventional force controller underestimates the maximum vibration response.

  14. Bayesian inference of Calibration curves: application to archaeomagnetism

    NASA Astrophysics Data System (ADS)

    Lanos, P.

    2003-04-01

    The range of errors that occur at different stages of the archaeomagnetic calibration process are modelled using a Bayesian hierarchical model. The archaeomagnetic data obtained from archaeological structures such as hearths, kilns or sets of bricks and tiles, exhibit considerable experimental errors and are typically more or less well dated by archaeological context, history or chronometric methods (14C, TL, dendrochronology, etc.). They can also be associated with stratigraphic observations which provide prior relative chronological information. The modelling we describe in this paper allows all these observations, on materials from a given period, to be linked together, and the use of penalized maximum likelihood for smoothing univariate, spherical or three-dimensional time series data allows representation of the secular variation of the geomagnetic field over time. The smooth curve we obtain (which takes the form of a penalized natural cubic spline) provides an adaptation to the effects of variability in the density of reference points over time. Since our model takes account of all the known errors in the archaeomagnetic calibration process, we are able to obtain a functional highest-posterior-density envelope on the new curve. With this new posterior estimate of the curve available to us, the Bayesian statistical framework then allows us to estimate the calendar dates of undated archaeological features (such as kilns) based on one, two or three geomagnetic parameters (inclination, declination and/or intensity). Date estimates are presented in much the same way as those that arise from radiocarbon dating. In order to illustrate the model and inference methods used, we will present results based on German archaeomagnetic data recently published by a German team.
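The penalized-maximum-likelihood smoothing described in this abstract typically amounts to minimizing a data-misfit term weighted by the known measurement errors plus a roughness penalty. The notation below is assumed for illustration, not taken from the paper:

```latex
\min_{f}\;\sum_{i=1}^{n}\frac{\bigl(y_i - f(t_i)\bigr)^{2}}{\sigma_i^{2}}
\;+\;\lambda\int \bigl(f''(t)\bigr)^{2}\,dt
```

Over all twice-differentiable functions, the minimizer of this objective is a natural cubic spline with knots at the observation times \(t_i\), which matches the form of curve the abstract reports; \(\lambda\) controls the trade-off between fidelity and smoothness, and the per-point weights \(\sigma_i^{-2}\) let the curve adapt to the varying quality and density of the archaeomagnetic reference points.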

  15. The Application of Various Nonlinear Models to Describe Academic Growth Trajectories: An Empirical Analysis Using Four-Wave Longitudinal Achievement Data from a Large Urban School District

    ERIC Educational Resources Information Center

    Shin, Tacksoo

    2012-01-01

    This study introduced various nonlinear growth models, including the quadratic conventional polynomial model, the fractional polynomial model, the Sigmoid model, the growth model with negative exponential functions, the multidimensional scaling technique, and the unstructured growth curve model. It investigated which growth models effectively…

  16. Nonlinear deformation of composites with consideration of the effect of couple-stresses

    NASA Astrophysics Data System (ADS)

    Lagzdiņš, A.; Teters, G.; Zilaucs, A.

    1998-09-01

    Nonlinear deformation of spatially reinforced composites under active loading (without unloading) is considered. All the theoretical constructions are based on the experimental data on unidirectional and ±π/4 cross-ply epoxy plastics reinforced with glass fibers. Based on the elastic properties of the fibers and EDT-10 epoxy binder, the linear elastic characteristics of a transversely isotropic unidirectionally reinforced fiberglass plastic are found, whereas the nonlinear characteristics are obtained from experiments. For calculating the deformation properties of the ±π/4 cross-ply plastic, a refined version of the Voigt method is applied taking into account also the couple-stresses arising in the composite due to relative rotation of the reinforcement fibers. In addition, a fourth-rank damage tensor is introduced in order to account for the impact of fracture caused by the couple-stresses. The unknown constants are found from the experimental uniaxial tension curve for the cross-ply composite. The comparison between the computed curves and experimental data for other loading paths shows that the description of the nonlinear behavior of composites can be improved by considering the effect of couple-stresses generated by rotations of the reinforcing fibers.

  17. Out-of-unison resonance in weakly nonlinear coupled oscillators

    PubMed Central

    Hill, T. L.; Cammarano, A.; Neild, S. A.; Wagg, D. J.

    2015-01-01

    Resonance is an important phenomenon in vibrating systems and, in systems of nonlinear coupled oscillators, resonant interactions can occur between constituent parts of the system. In this paper, out-of-unison resonance is defined as a solution in which components of the response are 90° out-of-phase, in contrast to the in-unison responses that are normally considered. A well-known physical example of this is whirling, which can occur in a taut cable. Here, we use a normal form technique to obtain time-independent functions known as backbone curves. Considering a model of a cable, this approach is used to identify out-of-unison resonance and it is demonstrated that this corresponds to whirling. We then show how out-of-unison resonance can occur in other two degree-of-freedom nonlinear oscillators. Specifically, an in-line oscillator consisting of two masses connected by nonlinear springs—a type of system where out-of-unison resonance has not previously been identified—is shown to have specific parameter regions where out-of-unison resonance can occur. Finally, we demonstrate how the backbone curve analysis can be used to predict the responses of forced systems. PMID:25568619

  18. On the nonlinear stability of the unsteady, viscous flow of an incompressible fluid in a curved pipe

    NASA Technical Reports Server (NTRS)

    Shortis, Trudi A.; Hall, Philip

    1995-01-01

    The stability of the flow of an incompressible, viscous fluid through a pipe of circular cross-section curved about a central axis is investigated in a weakly nonlinear regime. A sinusoidal pressure gradient with zero mean is imposed, acting along the pipe. A WKBJ perturbation solution is constructed, taking into account the need for an inner solution in the vicinity of the outer bend, which is obtained by identifying the saddle point of the Taylor number in the complex plane of the cross-sectional angle co-ordinate. The equation governing the nonlinear evolution of the leading order vortex amplitude is thus determined. The stability analysis of this flow to periodic disturbances leads to a partial differential system dependent on three variables, and since the differential operators in this system are periodic in time, Floquet theory may be applied to reduce this system to a coupled infinite system of ordinary differential equations, together with homogeneous uncoupled boundary conditions. The eigenvalues of this system are calculated numerically to predict a critical Taylor number consistent with the analysis of Papageorgiou. A discussion of how nonlinear effects alter the linear stability analysis is also given, and the nature of the instability determined.

  19. Preparation of calibration materials for microanalysis of Ti minerals by direct fusion of synthetic and natural materials: experience with LA-ICP-MS analysis of some important minor and trace elements in ilmenite and rutile.

    PubMed

    Odegård, M; Mansfeld, J; Dundas, S H

    2001-08-01

    Calibration materials for microanalysis of Ti minerals have been prepared by direct fusion of synthetic and natural materials by resistance heating in high-purity graphite electrodes. Synthetic materials were FeTiO3 and TiO2 reagents doped with minor and trace elements; CRMs for ilmenite, rutile, and a Ti-rich magnetite were used as natural materials. Problems occurred during fusion of Fe2O3-rich materials, because at atmospheric pressure Fe2O3 decomposes into Fe3O4 and O2 at 1462 °C. An alternative fusion technique under pressure was tested, but the resulting materials were characterized by extensive segregation and development of separate phases. Fe2O3-rich materials were therefore fused below this temperature, resulting in a form of sintering, without conversion of the materials into amorphous glasses. The fused materials were studied by optical microscopy and EPMA, and tested as calibration materials by inductively coupled plasma mass spectrometry equipped with laser ablation for sample introduction (LA-ICP-MS). It was demonstrated that calibration curves based on materials of rutile composition generally coincide, within normal analytical uncertainty, with calibration curves based on materials of ilmenite composition. It is therefore concluded that LA-ICP-MS analysis of Ti minerals can advantageously be based exclusively on calibration materials prepared for rutile, thereby avoiding the special fusion problems related to oxide mixtures of ilmenite composition. The sintered materials were in good overall agreement with homogeneous glass materials, an observation indicating that sintered mineral concentrates might also be a useful alternative for instrument calibration in other situations, e.g. as an alternative to pressed powders.

  20. Laboratory Measurement of the Brighter-fatter Effect in an H2RG Infrared Detector

    NASA Astrophysics Data System (ADS)

    Plazas, A. A.; Shapiro, C.; Smith, R.; Huff, E.; Rhodes, J.

    2018-06-01

    The “brighter-fatter” (BF) effect is a phenomenon—originally discovered in charge coupled devices—in which the size of the detector point-spread function (PSF) increases with brightness. We present, for the first time, laboratory measurements demonstrating the existence of the effect in a Hawaii-2RG HgCdTe near-infrared (NIR) detector. We use JPL’s Precision Projector Laboratory, a facility for emulating astronomical observations with UV/VIS/NIR detectors, to project about 17,000 point sources onto the detector to stimulate the effect. After calibrating the detector for nonlinearity with flat-fields, we find evidence that charge is nonlinearly shifted from bright pixels to neighboring pixels during exposures of point sources, consistent with the existence of a BF-type effect. NASA’s Wide Field Infrared Survey Telescope (WFIRST) will use similar detectors to measure weak gravitational lensing from the shapes of hundreds of millions of galaxies in the NIR. The WFIRST PSF size must be calibrated to ≈0.1% to avoid biased inferences of dark matter and dark energy parameters; therefore further study and calibration of the BF effect in realistic images will be crucial.

  1. Psychophysical Calibration of Mobile Touch-Screens for Vision Testing in the Field

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.

    2015-01-01

    The now ubiquitous nature of touch-screen displays in cell phones and tablet computers makes them an attractive option for vision testing outside of the laboratory or clinic. Accurate measurement of parameters such as contrast sensitivity, however, requires precise control of absolute and relative screen luminances. The nonlinearity of the display response (gamma) can be measured or checked using a minimum motion technique similar to that developed by Anstis and Cavanagh (1983) for the determination of isoluminance. While the relative luminances of the color primaries vary between subjects (due to factors such as individual differences in pre-retinal pigment densities), the gamma nonlinearity can be checked in the lab using a photometer. Here we compare results obtained using the psychophysical method with physical measurements for a number of different devices. In addition, we present a novel physical method using the device's built-in front-facing camera in conjunction with a mirror to jointly calibrate the camera and display. A high degree of consistency between devices is found, but some departures from ideal performance are observed. In spite of this, the effects of calibration errors and display artifacts on estimates of contrast sensitivity are found to be small.
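The gamma nonlinearity mentioned here is commonly modeled with a simple power law, L = L_max · V^γ, under which a log-log linear fit of measured luminance against drive level recovers γ. This sketch assumes that model (the paper's psychophysical minimum-motion method is different) and uses synthetic measurements:

```python
import math

def estimate_gamma(drive, lum):
    """Slope of log(luminance) vs. log(drive): the exponent of a power-law display."""
    xs = [math.log(v) for v in drive]
    ys = [math.log(l) for l in lum]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
            sum((x - mx) ** 2 for x in xs))

drive = [0.25, 0.5, 0.75, 1.0]            # normalized drive levels
lum = [100.0 * v ** 2.2 for v in drive]   # synthetic display with gamma = 2.2

print(round(estimate_gamma(drive, lum), 3))   # 2.2
```

With real photometer data the points scatter around the line, and departures from a pure power law (like those the abstract reports) show up as curvature in the log-log plot.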

  2. Using polynomials to simplify fixed pattern noise and photometric correction of logarithmic CMOS image sensors.

    PubMed

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-10-16

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
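The "approximately-linear FPN calibration" can be illustrated with a two-point, per-pixel correction: each pixel's responses to two known uniform stimuli give it an offset and a gain, which are inverted at run time using only arithmetic. This sketch omits the logarithmic-response detail and fixed-point quantization of the paper, and all numbers are invented:

```python
# Two known uniform ("flat") stimuli used for calibration.
low, high = 10.0, 100.0

# Per-pixel responses to each stimulus (invented; mismatch causes the spread).
resp_low  = [12.0, 15.0, 11.0]
resp_high = [102.0, 150.0, 96.5]

# Per-pixel gain and offset from the two calibration points.
gain   = [(h - l) / (high - low) for h, l in zip(resp_high, resp_low)]
offset = [l - g * low for l, g in zip(resp_low, gain)]

def correct(pixel, y):
    """Invert the per-pixel linear response: raw reading -> stimulus estimate."""
    return (y - offset[pixel]) / gain[pixel]

print(correct(1, 150.0))   # pixel 1 has gain 1.5, offset 0 -> 100.0
```

After correction, all pixels report the same value for the same stimulus, which is exactly the removal of fixed pattern noise; a fixed-point version would quantize `gain` and `offset` to a user-specified number of bits.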

  3. Receiver calibration and the nonlinearity parameter measurement of thick solid samples with diffraction and attenuation corrections.

    PubMed

    Jeong, Hyunjo; Barnard, Daniel; Cho, Sungjong; Zhang, Shuzeng; Li, Xiongbing

    2017-11-01

    This paper presents analytical and experimental techniques for accurate determination of the nonlinearity parameter (β) in thick solid samples. When piezoelectric transducers are used for β measurements, the receiver calibration is required to determine the transfer function from which the absolute displacement can be calculated. The measured fundamental and second harmonic displacement amplitudes should be modified to account for beam diffraction and material absorption. All these issues are addressed in this study and the proposed technique is validated through the β measurements of thick solid samples. A simplified self-reciprocity calibration procedure for a broadband receiver is described. The diffraction and attenuation corrections for the fundamental and second harmonics are explicitly derived. Aluminum alloy samples in five different thicknesses (4, 6, 8, 10, 12cm) are prepared and β measurements are made using the finite amplitude, through-transmission method. The effects of diffraction and attenuation corrections on β measurements are systematically investigated. When diffraction and attenuation corrections are all properly made, the variation of β between different thickness samples is found to be less than 3.2%. Copyright © 2017 Elsevier B.V. All rights reserved.
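Before the diffraction and attenuation corrections the paper derives, the absolute nonlinearity parameter is commonly estimated from the plane-wave relation β = 8·A2 / (A1²·k²·z), where A1 and A2 are the absolute fundamental and second-harmonic displacement amplitudes, k the fundamental wavenumber, and z the propagation distance. The relation is standard, but all numeric values below are invented for illustration:

```python
import math

def beta_plane_wave(A1, A2, freq, velocity, z):
    """Plane-wave estimate of the nonlinearity parameter (no corrections applied)."""
    k = 2.0 * math.pi * freq / velocity   # fundamental wavenumber, rad/m
    return 8.0 * A2 / (A1 ** 2 * k ** 2 * z)

A1 = 1.0e-9        # m, fundamental displacement amplitude (invented)
A2 = 2.0e-12       # m, second-harmonic displacement amplitude (invented)
freq = 5.0e6       # Hz, fundamental frequency
velocity = 6300.0  # m/s, longitudinal wave speed in aluminium alloy
z = 0.06           # m, propagation distance (a 6 cm sample)

print(beta_plane_wave(A1, A2, freq, velocity, z))
```

The diffraction and attenuation corrections in the paper modify the measured A1 and A2 before this formula is applied, which is what brings the β values of different-thickness samples into agreement.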

  4. On the absolute calibration of SO2 cameras

    USGS Publications Warehouse

    Lübcke, Peter; Bobrowski, Nicole; Illing, Sebastian; Kern, Christoph; Alvarez Nieves, Jose Manuel; Vogel, Leif; Zielcke, Johannes; Delgado Granados, Hugo; Platt, Ulrich

    2013-01-01

    This work investigates the uncertainty of results gained through the two commonly used, but quite different, calibration methods (DOAS and calibration cells). Measurements with three different instruments, an SO2 camera, an NFOV-DOAS system and an Imaging DOAS (I-DOAS), are presented. We compare the calibration-cell approach with the calibration from the NFOV-DOAS system. The respective results are compared with measurements from an I-DOAS to verify the calibration curve over the spatial extent of the image. The results show that calibration cells, while working fine in some cases, can lead to an overestimation of the SO2 CD by up to 60% compared with CDs from the DOAS measurements. Besides these calibration errors, radiative transfer effects (e.g. light dilution, multiple scattering) can significantly influence the results of both instrument types. The measurements presented in this work were taken at Popocatépetl, Mexico, between 1 March 2011 and 4 March 2011. Average SO2 emission rates between 4.00 and 14.34 kg s⁻¹ were observed.

  5. Estimating BrAC from transdermal alcohol concentration data using the BrAC estimator software program.

    PubMed

    Luczak, Susan E; Rosen, I Gary

    2014-08-01

    Transdermal alcohol sensor (TAS) devices have the potential to allow researchers and clinicians to unobtrusively collect naturalistic drinking data for weeks at a time, but the transdermal alcohol concentration (TAC) data these devices produce do not consistently correspond with breath alcohol concentration (BrAC) data. We present and test the BrAC Estimator software, a program designed to produce individualized estimates of BrAC from TAC data by fitting mathematical models to a specific person wearing a specific TAS device. Two TAS devices were worn simultaneously by 1 participant for 18 days. The trial began with a laboratory alcohol session to calibrate the model and was followed by a field trial with 10 drinking episodes. Model parameter estimates and fit indices were compared across drinking episodes to examine the calibration phase of the software. Software-generated estimates of peak BrAC, time of peak BrAC, and area under the BrAC curve were compared with breath analyzer data to examine the estimation phase of the software. In this single-subject design with breath analyzer peak BrAC scores ranging from 0.013 to 0.057, the software created consistent models for the 2 TAS devices, despite differences in raw TAC data, and was able to compensate for the attenuation of peak BrAC and latency of the time of peak BrAC that are typically observed in TAC data. This software program represents an important initial step for making it possible for non-mathematician researchers and clinicians to obtain estimates of BrAC from TAC data in naturalistic drinking environments. Future research with more participants and greater variation in alcohol consumption levels and patterns, as well as examination of gain scheduling calibration procedures and nonlinear models of diffusion, will help to determine how precise these software models can become. Copyright © 2014 by the Research Society on Alcoholism.

  6. Static Flow Characteristics of a Mass Flow Injecting Valve

    NASA Technical Reports Server (NTRS)

    Mattern, Duane; Paxson, Dan

    1995-01-01

    A sleeve valve is under development for ground-based forced response testing of air compression systems. This valve will be used to inject air and to impart momentum to the flow inside the first stage of a multi-stage compressor. The valve was designed to deliver a maximum mass flow of 0.22 lbm/s (0.1 kg/s) with a maximum valve throat area of 0.12 sq. in (80 sq. mm), a 100 psid (689 KPA) pressure difference across the valve and a 68 F, (20 C) air supply. It was assumed that the valve mass flow rate would be proportional to the valve orifice area. A static flow calibration revealed a nonlinear valve orifice area to mass flow relationship which limits the maximum flow rate that the valve can deliver. This nonlinearity was found to be caused by multiple choking points in the flow path. A simple model was used to explain this nonlinearity and the model was compared to the static flow calibration data. Only steady flow data is presented here. In this report, the static flow characteristics of a proportionally controlled sleeve valve are modelled and validated against experimental data.
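The "multiple choking points" explanation can be sketched with the standard isentropic choked-flow relation: each restriction in the flow path has its own choked mass-flow limit, and the valve delivers the minimum of them, so enlarging the sleeve orifice past the fixed internal passage stops increasing flow. The formula is the textbook ideal-gas relation; the geometry and the existence of a single fixed internal passage are assumptions for illustration:

```python
import math

def choked_mdot(area_m2, p0_pa, t0_k, gamma=1.4, R=287.0):
    """Choked (sonic) mass flow through a throat of given area, ideal gas, Cd = 1."""
    return (area_m2 * p0_pa * math.sqrt(gamma / (R * t0_k)) *
            (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))

p0, t0 = 790e3, 293.0    # ~100 psig air supply at 20 C (approximate)
fixed_passage = 60e-6    # m^2, assumed fixed internal restriction

for orifice in (20e-6, 40e-6, 60e-6, 80e-6):
    # The delivered flow is limited by whichever restriction chokes first.
    mdot = min(choked_mdot(orifice, p0, t0), choked_mdot(fixed_passage, p0, t0))
    print(orifice, mdot)   # flow saturates once the orifice reaches the fixed passage
```

The saturation of `mdot` with increasing orifice area reproduces, qualitatively, the nonlinear area-to-flow relationship the calibration revealed.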

  7. Understanding the relationship between duration of untreated psychosis and outcomes: A statistical perspective.

    PubMed

    Hannigan, Ailish; Bargary, Norma; Kinsella, Anthony; Clarke, Mary

    2017-06-14

    Although the relationships between duration of untreated psychosis (DUP) and outcomes are often assumed to be linear, few studies have explored the functional form of these relationships. The aim of this study is to demonstrate the potential of recent advances in curve fitting approaches (splines) to explore the form of the relationship between DUP and global assessment of functioning (GAF). Curve fitting approaches were used in models to predict change in GAF at long-term follow-up using DUP for a sample of 83 individuals with schizophrenia. The form of the relationship between DUP and GAF was non-linear. Accounting for non-linearity increased the percentage of variance in GAF explained by the model, resulting in better prediction and understanding of the relationship. The relationship between DUP and outcomes may be complex and model fit may be improved by accounting for the form of the relationship. This should be routinely assessed and new statistical approaches for non-linear relationships exploited, if appropriate. © 2017 John Wiley & Sons Australia, Ltd.

  8. TG study of the Li0.4Fe2.4Zn0.2O4 ferrite synthesis

    NASA Astrophysics Data System (ADS)

    Lysenko, E. N.; Nikolaev, E. V.; Surzhikov, A. P.

    2016-02-01

    In this paper, the kinetics of Li-Zn ferrite synthesis were studied using the thermogravimetry (TG) method through the simultaneous application of non-linear regression to several measurements run at different heating rates (multivariate non-linear regression). Using TG curves obtained at four heating rates and the Netzsch Thermokinetics software package, kinetic models with minimal adjustable parameters were selected to quantitatively describe the reaction of Li-Zn ferrite synthesis. It was shown that the experimental TG curves clearly suggest a two-step process for the ferrite synthesis, and therefore a model-fitting kinetic analysis based on multivariate non-linear regression was conducted. The complex reaction was described by a scheme of two sequential reaction steps. The best results were obtained using the Jander three-dimensional diffusion model for the first step and the Ginstling-Brounshtein model for the second step. The kinetic parameters for the lithium-zinc ferrite synthesis reaction were found and discussed.

  9. Linearization of calibration curves by aerosol carrier effect of CCl 4 vapor in electrothermal vaporization inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Kántor, Tibor; de Loos-Vollebregt, Margaretha T. C.

    2005-03-01

    Carbon tetrachloride vapor, used as a gaseous phase modifier in a graphite furnace electrothermal vaporizer (GFETV), converts low-volatility analyte forms into volatile and medium-volatility chlorides and produces an aerosol carrier effect, the latter being a less generally recognized benefit. However, the possible increase of polyatomic interferences in inductively coupled plasma mass spectrometry (GFETV-ICP-MS) from chlorine- and carbon-containing species introduced with the CCl4 vapor has discouraged its use with low-resolution, quadrupole-type MS equipment. Aware of this possible handicap, we investigated the feasibility of using this halogenating agent in ICP-MS with regard to possible hazards to the instrument, and also explored its advantages under these specific conditions. With a sample (inner) gas flow rate not higher than 900 ml min⁻¹ Ar in the torch and a 3 ml min⁻¹ CCl4 vapor flow rate in the furnace, the long-term stability of the instrument was ensured and the following benefits of the halocarbon were observed. The non-linearity error (defined in the text) of the calibration curves (signal versus mass functions) obtained with matrix-free solution standards was 30-70% without, and 1-5% with, CCl4 vapor introduction at 1 ng masses of the Cu, Fe, Mn and Pb analytes. The sensitivity for these elements increased 2-4-fold with chlorination, while the relative standard deviation (RSD) was essentially the same (2-5%) in the two cases. A vaporization temperature of 2650 °C was required for Cr in an Ar atmosphere, while 2200 °C was sufficient in an Ar + CCl4 atmosphere to attain complete vaporization. Improvements in linear response and sensitivity were highest for this least volatile element. The pyrolytic graphite layer inside the graphite tube was protected by the halocarbon, and tube lifetime was further increased by adding traces of hydrocarbon vapor to the external sheath gas of the graphite furnace.
Details of the modification of the gas supply for HGA-600MS furnace and the design of the volatilization device are described.

  10. [Prediction of postoperative nausea and vomiting using an artificial neural network].

    PubMed

    Traeger, M; Eberhart, A; Geldner, G; Morin, A M; Putzke, C; Wulf, H; Eberhart, L H J

    2003-12-01

    Postoperative nausea and vomiting (PONV) are still frequent side-effects after general anaesthesia. These symptoms, unpleasant for the patients, can be substantially reduced using a multimodal antiemetic approach. However, these efforts should be restricted to patients at risk for PONV. Thus, predictive models are required to identify these patients before surgery. So far all risk scores to predict PONV have been based on the results of logistic regression analysis. Artificial neural networks (ANN) can also be used for prediction, since they can take into account complex and non-linear relationships between predictive variables and the dependent item. This study presents the development of an ANN to predict PONV and compares its performance with two established simplified risk scores (Apfel's and Koivuranta's scores). The development of the ANN was based on data from 1,764 patients undergoing elective surgical procedures under balanced anaesthesia. The ANN was trained with 1,364 datasets and a further 400 were used for supervising the learning process. Of 49 candidate ANNs, the one showing the best predictive performance was compared with the established risk scores with respect to practicability, discrimination (by means of the area under a receiver operating characteristic curve) and calibration properties (by means of a weighted linear regression between the predicted and the actual incidences of PONV). The ANN tested showed a statistically significant (p<0.0001) and clinically relevant higher discriminating power (0.74; 95% confidence interval: 0.70-0.78) than the Apfel score (0.66; 95% CI: 0.61-0.71) or Koivuranta's score (0.69; 95% CI: 0.65-0.74). Furthermore, the agreement between the actual incidences of PONV and those predicted by the ANN was also better and close to an ideal fit, represented by the equation y=1.0x+0. The equations for the calibration curves were: ANN y=1.11x+0, Apfel y=0.71x+1, Koivuranta y=0.86x-5.
The improved predictive accuracy achieved by the ANN is clinically relevant. However, the disadvantages of this system prevail because a computer is required for risk calculation. Thus, we still recommend the use of one of the simplified risk scores for clinical practice.
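The two performance measures used above can be made concrete with a small sketch: discrimination is the area under the ROC curve (equivalently, the probability that a PONV case receives a higher predicted risk than a non-case), and calibration is a linear fit of actual against predicted incidence, with y = 1.0x + 0 as the ideal. All data below are invented:

```python
def auc(scores, labels):
    """AUC as the rank statistic: fraction of (case, non-case) pairs ranked correctly."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def calibration_line(predicted, actual):
    """Slope and intercept of actual-vs-predicted incidence; ideal is (1.0, 0.0)."""
    n = len(predicted)
    mx, my = sum(predicted) / n, sum(actual) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(predicted, actual)) /
             sum((x - mx) ** 2 for x in predicted))
    return slope, my - slope * mx

scores = [0.9, 0.8, 0.35, 0.6, 0.2, 0.1]   # invented predicted risks
labels = [1, 1, 1, 0, 0, 0]                # 1 = PONV occurred
print(auc(scores, labels))                 # 8/9, about 0.889

pred = [0.1, 0.2, 0.4, 0.6]                # invented incidence bins
act = [0.11, 0.22, 0.44, 0.66]
print(calibration_line(pred, act))         # slope about 1.1, intercept about 0
```

A slope of 1.1 with zero intercept corresponds to the "ANN y=1.11x+0"-style calibration equations reported in the abstract: the model slightly over-predicts risk but tracks the actual incidence linearly.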

  11. Data user's notes of the radio astronomy experiment aboard the OGO-V spacecraft

    NASA Technical Reports Server (NTRS)

    Haddock, F. T.; Breckenridge, S. L.

    1970-01-01

    General information concerning the low-frequency radiometer, instrument package launching and operation, and scientific objectives of the flight are provided. Calibration curves and correction factors, with general and detailed information on the preflight calibration procedure are included. The data acquisition methods and the format of the data reduction, both on 35 mm film and on incremental computer plots, are described.

  12. VOC identification and inter-comparison from laboratory biomass burning using PTR-MS and PIT-MS

    Treesearch

    C. Warneke; J. M. Roberts; P. Veres; J. Gilman; W. C. Kuster; I. Burling; R. Yokelson; J. A. de Gouw

    2011-01-01

    Volatile organic compounds (VOCs) emitted from fires of biomass commonly found in the southeast and southwest U.S. were investigated with PTR-MS and PIT-MS, which are capable of fast measurements of a large number of VOCs. Both instruments were calibrated with gas standards and mass dependent calibration curves are determined. The sensitivity of the PIT-MS linearly...

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnell, E; Ferreira, C; Ahmad, S

    Purpose: Accuracy of a RSP-HU calibration curve produced for proton treatment planning is tested by comparing the treatment planning system dose grid to physical doses delivered on film by a Mevion S250 double-scattering proton unit. Methods: A single batch of EBT3 Gafchromic film was used for calibration and measurements. The film calibration curve was obtained using Mevion proton beam reference option 20 (15cm range, 10cm modulation). Paired films were positioned at the center of the spread out Bragg peak (SOBP) in solid water. The calibration doses were verified with an ion chamber, including background and doses from 20cGy to 350cGy. Films were scanned in a flatbed Epson-Expression 10000-XL scanner, and analyzed using the red channel. A Rando phantom was scanned with a GE LightSpeed CT Simulator. A single-field proton plan (Eclipse, Varian) was calculated to deliver 171cGy to the pelvis section (heterogeneous region), using a standard 4×4cm aperture without compensator, 7.89cm beam range, and 5.36cm SOBP. Varied depths of the calculated distal 90% isodose-line were recorded and compared. The dose distribution from film irradiated between Rando slices was compared with the calculated plans using RIT v.6.2. Results: Distal 90% isodose-line depth variation between CT scans was 2mm on average, and 4mm at maximum. Fine calculation of this variation was restricted by the dose calculation grid, as well as the slice thickness. Dose differences between calibrated film measurements and calculated doses were on average 5.93cGy (3.5%), with the large majority of differences forming a normal distribution around 3.5cGy (2%). Calculated doses were almost entirely greater than those measured. Conclusion: RSP to HU calibration curve is shown to produce distal depth variation within the margin of tolerance (±4.3mm) across all potential scan energies and protocols.
Dose distribution calculation is accurate to 2–4% within the SOBP, including areas of high tissue heterogeneity.« less

  14. Radarclinometry

    USGS Publications Warehouse

    Wildey, R.L.

    1986-01-01

    A mathematical theory and a corresponding algorithm have been developed to derive topographic maps from radar images as photometric arrays. Thus, as radargrammetry is to photogrammetry, so radarclinometry is to photoclinometry. Photoclinometry is endowed with a fundamental indeterminacy principle even for terrain homogeneous in normal albedo. This arises from the fact that the geometric locus of orientations of the local surface normal that is consistent with a given reflected specific-intensity of radiation is more complicated than a fixed line in space. For a radar image, the locus is a cone whose half-angle is the incidence angle and whose axis contains the radar. The indeterminacy is removed throughout a region if one possesses a control profile as a boundary-condition. In the absence of such ground-truth, a point-boundary-condition will suffice only in conjunction with a heuristic assumption, such as that the strike-line runs perpendicularly to the line-of-sight. In the present study I have implemented a more reasonable assumption which I call 'the hypothesis of local cylindricity'. Firstly, a general theory is derived, based solely on the implicit mathematical determinacy. This theory would be directly indicative of procedure if images were completely devoid of systematic error and noise. The theory produces topography by an area integration of radar brightness, starting from a control profile, without need of additional idealistic assumptions. But we have also theorized separately a method of forming this control profile, which method does require an additional assumption about the terrain. That assumption is that the curvature properties of the terrain are locally those of a cylinder of inferable orientation, within a second-order mathematical neighborhood of every point of the terrain. While local strike-and-dip completely determine the radar brightness itself, the terrain curvature determines the brightness-gradient in the radar image. 
Therefore, the control profile is formed as a line integration of brightness and its local gradient starting from a single point of the terrain where the local orientation of the strike-line is estimated by eye. Secondly, and independently, the calibration curve for pixel brightness versus incidence-angle is produced. I assume that an applicable curve can be found from the literature or elsewhere so that our problem is condensed to that of properly scaling the brightness-axis of the calibration curve. A first estimate is found by equating the average image brightness to the point on the brightness axis corresponding to the complement of the effective radar depression-angle, an angle assumed given. A statistical analysis is then used to correct, on the one hand, for the fact that the average brightness is not the brightness that corresponds to the average incidence angle, as a result of the non-linearity of the calibration curve; and on the other hand, we correct for the fact that the average incidence angle is not the same for a rough surface as it is for a flat surface (and therefore not the complement of the depression angle). Lastly, the practical modifications that were interactively evolved to produce an operational algorithm for treating real data are developed. They are by no means considered optimized at present. Such a possibility is thus far precluded by excessive computer-time. Most noteworthy in this respect is the abandonment of area integration away from a control profile. Instead, the topography is produced as a set of independent line integrations down each of the parallel range lines of the image, using the theory for control-profile formation. An adaptive technique, which now appears excessive, was also employed so that SEASAT images of sand dunes could be processed. In this, the radiometric calibration was iterated to force the endpoints of each profile to zero elevation. 
A secondary algorithm then employed line-averages of appropriate quantities to adjust the mean t

  15. A comparison of linear and nonlinear statistical techniques in performance attribution.

    PubMed

    Chan, N H; Genovese, C R

    2001-01-01

    Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks, using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on a standard linear multifactor model and three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.

  16. Determination of the content of fatty acid methyl esters (FAME) in biodiesel samples obtained by esterification using 1H-NMR spectroscopy.

    PubMed

    Mello, Vinicius M; Oliveira, Flavia C C; Fraga, William G; do Nascimento, Claudia J; Suarez, Paulo A Z

    2008-11-01

    Three different calibration curves based on (1)H-NMR spectroscopy (300 MHz) were used for quantifying the reaction yield during biodiesel synthesis by esterification of fatty acid mixtures and methanol. For this purpose, the integrated intensities of the hydrogens of the ester methoxy group (3.67 ppm) were correlated with the areas related to the various protons of the alkyl chain (olefinic hydrogens: 5.30-5.46 ppm; aliphatic: 2.67-2.78 ppm, 2.30 ppm, 1.96-2.12 ppm, 1.56-1.68 ppm, 1.22-1.42 ppm, 0.98 ppm, and 0.84-0.92 ppm). The first curve was obtained using the peaks of the olefinic hydrogens, the second using the paraffinic protons, and the third using the integrated intensities of all the hydrogens. A total of 35 samples were examined: 25 samples to build the three different calibration curves and ten samples to serve as external validation samples. The results showed no statistical differences among the three methods, and all presented prediction errors of less than 2.45% with a coefficient of variation (CV) of 4.66%. 2008 John Wiley & Sons, Ltd.
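    The workflow above (fit a calibration curve on one set of samples, then check prediction error on external validation samples) can be sketched as follows. The data values and the linear form of the curve are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: fit a linear calibration curve (yield vs. NMR intensity
# ratio) by ordinary least squares, then check the relative prediction error
# on an external validation sample. All numbers are hypothetical.

def fit_line(x, y):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

# Calibration set: methoxy-signal intensity ratio vs. known yield (%)
ratios = [0.10, 0.25, 0.40, 0.55, 0.70, 0.85]
yields = [11.0, 26.2, 40.9, 55.8, 71.1, 85.9]
a, b = fit_line(ratios, yields)

# External validation: predict a held-out sample and compute relative error
val_ratio, val_yield = 0.50, 51.0
pred = a * val_ratio + b
rel_err = abs(pred - val_yield) / val_yield * 100  # percent
```

With several validation samples, the same relative errors averaged over the set would correspond to the prediction-error figure the abstract reports.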

  17. A calibration method for patient specific IMRT QA using a single therapy verification film

    PubMed Central

    Shukla, Arvind Kumar; Oinam, Arun S.; Kumar, Sanjeev; Sandhu, I.S.; Sharma, S.C.

    2013-01-01

    Aim The aim of the present study is to develop and verify a single-film calibration procedure for use in intensity-modulated radiation therapy (IMRT) quality assurance. Background Radiographic films have been regularly used in routine commissioning of treatment modalities and verification of treatment planning systems (TPS). Radiation dosimetry based on radiographic film can give an absolute two-dimensional dose distribution and is preferred for IMRT quality assurance. A single therapy verification film provides a quick and reliably significant method for IMRT verification. Materials and methods A single extended dose range (EDR2) film was used to generate the sensitometric curve of film optical density versus radiation dose. The EDR2 film was exposed with nine 6 cm × 6 cm fields of a 6 MV photon beam from a medical linear accelerator at 5-cm depth in a solid water phantom. The nine regions of the single film were exposed with radiation doses ranging from 10 to 362 cGy. The actual dose inside each field region was measured using a 0.6 cm3 ionization chamber. The exposed film was processed, scanned with a VIDAR film scanner, and the optical density of each region was recorded. Ten IMRT plans of head and neck carcinoma, delivered using a dynamic IMRT technique, were used for verification and were evaluated against the TPS-calculated dose distribution using the gamma index method. Results A sensitometric curve was generated from the single film exposed at nine field regions for quantitative dose verification of IMRT treatments. The radiation scatter factor was observed to decrease exponentially with increasing distance from the centre of each field region. The IMRT plans verified against this calibration curve using the gamma index method were found to be within the acceptance criteria. Conclusion The single-film method proved to be superior to the traditional calibration method and provides a fast daily film calibration for highly accurate IMRT verification. PMID:24416558
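    Once a sensitometric curve (dose vs. optical density) like the nine-point curve above is in hand, reading a dose from a measured optical density amounts to inverting the curve. A minimal sketch, using piecewise-linear interpolation and purely illustrative (dose, OD) pairs:

```python
# Sketch: invert a nine-point sensitometric curve (dose -> optical density)
# by piecewise-linear interpolation to read dose from a measured OD.
# The calibration pairs below are illustrative, not measured values.

cal = [(10, 0.20), (50, 0.55), (90, 0.85), (135, 1.10), (180, 1.32),
       (225, 1.50), (270, 1.65), (315, 1.78), (362, 1.90)]  # (cGy, OD)

def od_to_dose(od):
    """Linear interpolation on the OD axis (assumes OD rises monotonically with dose)."""
    for (d0, o0), (d1, o1) in zip(cal, cal[1:]):
        if o0 <= od <= o1:
            return d0 + (d1 - d0) * (od - o0) / (o1 - o0)
    raise ValueError("OD outside calibrated range")
```

In practice a smooth fit (e.g. a polynomial or a dedicated sensitometric model) would replace the piecewise-linear lookup, but the inversion step is the same.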

  18. A validated method for the quantitation of 1,1-difluoroethane using a gas in equilibrium method of calibration.

    PubMed

    Avella, Joseph; Lehrer, Michael; Zito, S William

    2008-10-01

    1,1-Difluoroethane (DFE), also known as Freon 152A, is a member of a class of compounds known as halogenated hydrocarbons. A number of these compounds have gained notoriety because of their ability to induce rapid onset of intoxication after inhalation exposure. Abuse of DFE has necessitated the development of methods for its detection and quantitation in postmortem and human performance specimens. Furthermore, methodologies applicable to research studies are required, as there have been limited toxicokinetic and toxicodynamic reports published on DFE. This paper describes a method for the quantitation of DFE using a gas chromatography-flame-ionization headspace technique that employs solventless standards for calibration. Two calibration curves using 0.5 mL whole-blood calibrators were developed, ranging from 0.225-1.350 mg/L (curve A) to 9.0-180.0 mg/L (curve B). These were evaluated for linearity (0.9992 and 0.9995), limit of detection of 0.018 mg/L, limit of quantitation of 0.099 mg/L (recovery 111.9%, CV 9.92%), and upper limit of linearity of 27,000.0 mg/L. Combined-curve recovery results for a 98.0 mg/L DFE control that was prepared using an alternate technique were 102.2% with a CV of 3.09%. No matrix interference was observed in DFE-enriched blood, urine, or brain specimens, nor did analysis of variance detect any significant differences (alpha = 0.01) in the area under the curve of blood, urine, or brain specimens at three identical DFE concentrations. The method is suitable for use in forensic laboratories because validation was performed on instrumentation routinely used in forensic labs and because of the ease with which the calibration range can be adjusted. Perhaps more importantly, it is also useful for research-oriented studies because the removal of solvent from standard preparation eliminates the possibility of solvent-induced changes to the gas/liquid partitioning of DFE or chromatographic interference due to the presence of solvent in specimens.

  19. Predicting long-term neurological outcomes after severe traumatic brain injury requiring decompressive craniectomy: A comparison of the CRASH and IMPACT prognostic models.

    PubMed

    Honeybul, Stephen; Ho, Kwok M

    2016-09-01

    Predicting long-term neurological outcomes after severe traumatic brain injury (TBI) is important, but which prognostic model in the context of decompressive craniectomy has the best performance remains uncertain. This prospective observational cohort study included all patients who had severe TBI requiring decompressive craniectomy between 2004 and 2014 in the two neurosurgical centres in Perth, Western Australia. Severe disability, vegetative state, or death were defined as unfavourable neurological outcomes. The area under the receiver-operating-characteristic curve (AUROC) and the slope and intercept of the calibration curve were used to assess discrimination and calibration of the CRASH (Corticosteroid-Randomisation-After-Significant-Head injury) and IMPACT (International-Mission-For-Prognosis-And-Clinical-Trial) models, respectively. Of the 319 patients included in the study, 119 (37%) had unfavourable neurological outcomes at 18 months after decompressive craniectomy for severe TBI. Both the CRASH (AUROC 0.86, 95% confidence interval 0.81-0.90) and IMPACT full model (AUROC 0.85, 95% CI 0.80-0.89) were similar in discriminating between favourable and unfavourable neurological outcome at 18 months after surgery (p=0.690 for the difference in AUROC derived from the two models). Although both models tended to over-predict the risks of long-term unfavourable outcome, the IMPACT model had a slightly better calibration than the CRASH model (intercept of the calibration curve=-4.1 vs. -5.7, and log likelihoods -159 vs. -360, respectively), especially when the predicted risks of unfavourable outcome were <80%. Both CRASH and IMPACT prognostic models were good at discriminating between favourable and unfavourable long-term neurological outcome for patients with severe TBI requiring decompressive craniectomy, but the calibration of the IMPACT full model was better than that of the CRASH model. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
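    The AUROC used above as the discrimination statistic can be computed directly as the probability that a randomly chosen patient with the outcome is ranked above a randomly chosen patient without it (ties counting half), which is equivalent to the normalized Mann-Whitney U statistic. A minimal sketch with hypothetical predicted risks:

```python
# Sketch: AUROC from predicted risks and binary outcomes, computed as the
# probability that a random positive outranks a random negative (ties count
# half). The risks and outcomes below are illustrative only.

def auroc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

risks = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2]   # model-predicted risk of unfavourable outcome
outcomes = [1, 1, 0, 1, 0, 0]            # observed outcome (1 = unfavourable)
auc = auroc(risks, outcomes)
```

Calibration, by contrast, is assessed separately (e.g. by the slope and intercept of observed vs. predicted risk), since a model can discriminate well while systematically over-predicting risk, as the abstract notes.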

  20. SU-E-T-223: Computed Radiography Dose Measurements of External Radiotherapy Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aberle, C; Kapsch, R

    2015-06-15

    Purpose: To obtain quantitative, two-dimensional dose measurements of external radiotherapy beams with a computed radiography (CR) system and to derive volume correction factors for ionization chambers in small fields. Methods: A commercial Kodak ACR2000i CR system with Kodak Flexible Phosphor Screen HR storage foils was used. Suitable measurement conditions and procedures were established. Several corrections were derived, including image fading, length-scale corrections, and long-term stability corrections. Dose calibration curves were obtained for cobalt, 4 MV, 8 MV, and 25 MV photons, and for 10 MeV, 15 MeV, and 18 MeV electrons in a water phantom. Inherent measurement inhomogeneities were studied, as well as the directional dependence of the response. Finally, 2D scans with ionization chambers were directly compared to CR measurements, and volume correction factors were derived. Results: Dose calibration curves (0.01 Gy to 7 Gy) were obtained for multiple photon and electron beam qualities. For each beam quality, the calibration curves can be described by a single fit equation over the whole dose range. The energy dependence of the dose response was determined. The length scale on the images was adjusted scan-by-scan, typically by 2 percent horizontally and by 3 percent vertically. The inhomogeneities remaining after the system's standard calibration procedure were corrected for; after correction, the homogeneity is on the order of a few percent. The storage foils can be rotated by up to 30 degrees without a significant effect on the measured signal. First results on the determination of volume correction factors were obtained. Conclusion: With CR, quantitative, two-dimensional dose measurements with high spatial resolution (sub-mm) can be obtained over a large dose range. In order to make use of these advantages, several calibrations, corrections, and supporting measurements are needed. This work was funded by the European Metrology Research Programme (EMRP) project HLT09 MetrExtRT Metrology for Radiotherapy using Complex Radiation Fields.

  1. Gaussian process based modeling and experimental design for sensor calibration in drifting environments

    PubMed Central

    Geng, Zongyu; Yang, Feng; Chen, Xi; Wu, Nianqiang

    2016-01-01

    It remains a challenge to accurately calibrate a sensor subject to environmental drift. The calibration task for such a sensor is to quantify the relationship between the sensor’s response and its exposure condition, which is specified by not only the analyte concentration but also the environmental factors such as temperature and humidity. This work developed a Gaussian Process (GP)-based procedure for the efficient calibration of sensors in drifting environments. Adopted as the calibration model, GP is not only able to capture the possibly nonlinear relationship between the sensor responses and the various exposure-condition factors, but also able to provide valid statistical inference for uncertainty quantification of the target estimates (e.g., the estimated analyte concentration of an unknown environment). Built on GP’s inference ability, an experimental design method was developed to achieve efficient sampling of calibration data in a batch sequential manner. The resulting calibration procedure, which integrates the GP-based modeling and experimental design, was applied on a simulated chemiresistor sensor to demonstrate its effectiveness and its efficiency over the traditional method. PMID:26924894
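    The GP calibration model described above can be illustrated with a minimal one-dimensional sketch: a squared-exponential kernel, noisy observations, and a predictive mean and variance for an unseen exposure condition. The single input here stands in for the multi-factor exposure condition, and the hyperparameter values are arbitrary assumptions, not those of the paper.

```python
# Minimal Gaussian-process regression sketch (squared-exponential kernel,
# noisy observations): predictive mean and variance at a query point.
# One input dimension stands in for the exposure condition; all values
# are illustrative.
import math

def kernel(x1, x2, ell=1.0, sf=1.0):
    """Squared-exponential covariance with length scale ell, signal scale sf."""
    return sf * sf * math.exp(-0.5 * ((x1 - x2) / ell) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for the n x n system A x = b."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def gp_predict(xs, ys, xq, noise=1e-4):
    """Posterior mean and variance at xq given training data (xs, ys)."""
    K = [[kernel(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    alpha = solve(K, ys)                       # K^{-1} y
    kstar = [kernel(xq, xi) for xi in xs]
    mean = sum(ks * ai for ks, ai in zip(kstar, alpha))
    v = solve(K, kstar)                        # K^{-1} k_*
    var = kernel(xq, xq) - sum(ks * vi for ks, vi in zip(kstar, v))
    return mean, var

xs = [0.0, 0.5, 1.0, 1.5, 2.0]                 # observed exposure conditions
ys = [math.sin(x) for x in xs]                 # observed sensor responses
m, var = gp_predict(xs, ys, 0.75)              # prediction at a new condition
```

The predictive variance is what makes the batch-sequential experimental design possible: new calibration points can be chosen where the model is most uncertain.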

  2. Determination of vitamins D2 and D3 in selected food matrices by online high-performance liquid chromatography-gas chromatography-mass spectrometry (HPLC-GC-MS).

    PubMed

    Nestola, Marco; Thellmann, Andrea

    2015-01-01

    An online normal-phase liquid chromatography-gas chromatography-mass spectrometry (HPLC-GC-MS) method was developed for the determination of vitamins D2 and D3 in selected food matrices. Transfer of the sample from HPLC to GC was realized by large-volume on-column injection; detection was performed with a time-of-flight mass spectrometer (TOF-MS). Typical GC problems in the determination of vitamin D, such as sample degradation or sensitivity issues previously reported in the literature, were not observed. Determination of total vitamin D content was done by quantitation of its pyro isomer based on an isotopically labelled internal standard (ISTD). Extracted ion traces of analyte and ISTD showed cross-contribution, but, by selection of appropriate quantifier ions, no non-linearity of the calibration curve was observed inside the chosen calibration range. Absolute limits of detection (LOD) and quantitation (LOQ) for vitamins D2 and D3 were calculated as approximately 50 and 150 pg, respectively. Repeatability with internal standard correction was below 2%. Good agreement between quantitative results of an established high-performance liquid chromatography with UV detection (HPLC-UV) method and HPLC-GC-MS was found. Sterol-enriched margarine was subjected to HPLC-GC-MS and HPLC-MS/MS for comparison, because HPLC-UV showed strong matrix interferences. HPLC-GC-MS produced comparable results with less manual sample cleanup. In summary, online hyphenation of HPLC and GC allowed a minimization of manual sample preparation with an increase in sample throughput.

  3. Use of the Airborne Visible/Infrared Imaging Spectrometer to calibrate the optical sensor on board the Japanese Earth Resources Satellite-1

    NASA Technical Reports Server (NTRS)

    Green, Robert O.; Conel, James E.; Vandenbosch, Jeannette; Shimada, Masanobu

    1993-01-01

    We describe an experiment to calibrate the optical sensor (OPS) on board the Japanese Earth Resources Satellite-1 with data acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). On 27 Aug. 1992 both the OPS and AVIRIS acquired data concurrently over a calibration target on the surface of Rogers Dry Lake, California. The high spectral resolution measurements of AVIRIS have been convolved to the spectral response curves of the OPS. These data in conjunction with the corresponding OPS digitized numbers have been used to generate the radiometric calibration coefficients for the eight OPS bands. This experiment establishes the suitability of AVIRIS for the calibration of spaceborne sensors in the 400 to 2500 nm spectral region.

  4. Radiometric and spectral calibrations of the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) using principal component analysis

    NASA Astrophysics Data System (ADS)

    Tian, Jialin; Smith, William L.; Gazarik, Michael J.

    2008-10-01

    The ultimate remote sensing benefits of high-resolution infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and pollutant and greenhouse gas constituents to be observed for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw GIFTS interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. The radiometric calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. The absolute radiometric performance of the instrument is affected by several factors, including the FPA off-axis effect, detector/readout-electronics-induced nonlinearity distortions, and fore-optics offsets. The GIFTS-EDU, being the very first imaging spectrometer to use ultra-high-speed electronics to read out its large-area-format focal plane array detectors, operating at wavelengths as long as 15 microns, possessed non-linearities not easily removable in the initial calibration process. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts remaining after the initial radiometric calibration process, thus further enhancing the absolute calibration accuracy.
This method is applied to data collected during an atmospheric measurement experiment with the GIFTS, together with simultaneous observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both simultaneously zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan Utah on September 13, 2006. The PC vectors of the calibrated radiance spectra are defined from the AERI observations and regression matrices relating the initial GIFTS radiance PC scores to the AERI radiance PC scores are calculated using the least squares inverse method. A new set of accurately calibrated GIFTS radiances are produced using the first four PC scores in the regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period.

  5. NLINEAR - NONLINEAR CURVE FITTING PROGRAM

    NASA Technical Reports Server (NTRS)

    Everhart, J. L.

    1994-01-01

    A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of the distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve-fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information, such as goodness of fit, and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived, which is solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted; Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
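    The scheme the abstract outlines (linearize the fitting function in its parameters, form the statistically weighted normal equations, and iterate from user-supplied initial estimates) can be sketched as follows. The saturating model and synthetic data are illustrative stand-ins, not part of NLINEAR itself.

```python
# Sketch of weighted Gauss-Newton iteration: retain the first-order Taylor
# term of the model in its parameters, solve the weighted normal equations
# J^T W J dp = J^T W r for the parameter correction, and repeat.
# Model and data are synthetic examples.

def model(x, a, b):
    return a * x / (b + x)

def jac(x, a, b):
    return [x / (b + x), -a * x / (b + x) ** 2]  # d/da, d/db

def fit(xs, ys, sigmas, a, b, iters=50):
    """Weighted Gauss-Newton minimization of chi-square for model(x, a, b)."""
    for _ in range(iters):
        N = [[0.0, 0.0], [0.0, 0.0]]   # J^T W J
        g = [0.0, 0.0]                 # J^T W r
        for x, y, s in zip(xs, ys, sigmas):
            w = 1.0 / (s * s)          # statistical weight
            r = y - model(x, a, b)     # residual
            J = jac(x, a, b)
            for i in range(2):
                g[i] += w * J[i] * r
                for j in range(2):
                    N[i][j] += w * J[i] * J[j]
        det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
        da = (g[0] * N[1][1] - g[1] * N[0][1]) / det
        db = (g[1] * N[0][0] - g[0] * N[1][0]) / det
        a, b = a + da, b + db
        if abs(da) + abs(db) < 1e-12:  # converged
            break
    return a, b

xs = [0.5, 1.0, 2.0, 4.0, 8.0]
ys = [model(x, 10.0, 2.0) for x in xs]         # noise-free synthetic data
a, b = fit(xs, ys, [0.1] * len(xs), 8.0, 1.0)  # initial estimates (8, 1)
```

As the abstract notes, meaningful initial estimates matter: Gauss-Newton can diverge from a poor starting point, which is why the log-linearized or otherwise preprocessed initial fit is often part of such programs.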

  6. Geometrically nonlinear resonance of higher-order shear deformable functionally graded carbon-nanotube-reinforced composite annular sector plates excited by harmonic transverse loading

    NASA Astrophysics Data System (ADS)

    Gholami, Raheb; Ansari, Reza

    2018-02-01

    This article presents an attempt to study the nonlinear resonance of functionally graded carbon-nanotube-reinforced composite (FG-CNTRC) annular sector plates excited by a uniformly distributed harmonic transverse load. To this purpose, first, the extended rule of mixture including the efficiency parameters is employed to approximately obtain the effective material properties of FG-CNTRC annular sector plates. Then, the focus is on presenting the weak form of discretized mathematical formulation of governing equations based on the variational differential quadrature (VDQ) method and Hamilton's principle. The geometric nonlinearity and shear deformation effects are considered based on the von Kármán assumptions and Reddy's third-order shear deformation plate theory, respectively. The discretization process is performed via the generalized differential quadrature (GDQ) method together with numerical differential and integral operators. Then, an efficient multi-step numerical scheme is used to obtain the nonlinear dynamic behavior of the FG-CNTRC annular sector plates near their primary resonance as the frequency-response curve. The accuracy of the present results is first verified and then a parametric study is presented to show the impacts of CNT volume fraction, CNT distribution pattern, geometry of annular sector plate and sector angle on the nonlinear frequency-response curve of FG-CNTRC annular sector plates with different edge supports.

  7. Giant enhancement of reflectance due to the interplay between surface confined wave modes and nonlinear gain in dielectric media.

    PubMed

    Kim, Sangbum; Kim, Kihong

    2017-12-11

    We study theoretically the interplay between the surface confined wave modes and the linear and nonlinear gain of the dielectric layer in the Otto configuration. The surface confined wave modes, such as surface plasmons or waveguide modes, are excited in the dielectric-metal bilayer by obliquely incident p waves. In the purely linear case, we find that the interplay between linear gain and surface confined wave modes can generate a large reflectance peak with its value much greater than 1. As the linear gain parameter increases, the peak appears at smaller incident angles, and the associated modes also change from surface plasmons to waveguide modes. When the nonlinear gain is turned on, the reflectance shows very strong multistability near the incident angles associated with surface confined wave modes. As the nonlinear gain parameter is varied, the reflectance curve undergoes complicated topological changes and sometimes displays separated closed curves. When the nonlinear gain parameter takes an optimally small value, a giant amplification of the reflectance by three orders of magnitude occurs near the incident angle associated with a waveguide mode. We also find that there exists a range of the incident angle where the wave is dissipated rather than amplified even in the presence of gain. We suggest that this can provide the basis for a possible new technology for thermal control in the subwavelength scale.

  8. From nonlinear Schrödinger hierarchy to some (2+1)-dimensional nonlinear pseudodifferential equations

    NASA Astrophysics Data System (ADS)

    Yang, Xiao; Du, Dianlou

    2010-08-01

    The Poisson structure on CN×RN is introduced to give the Hamiltonian system associated with a spectral problem which yields the nonlinear Schrödinger (NLS) hierarchy. The Hamiltonian system is proven to be Liouville integrable. Some (2+1)-dimensional equations, including the NLS equation, the Kadomtsev-Petviashvili I (KPI) equation, the coupled KPI equation, and the modified Kadomtsev-Petviashvili (mKP) equation, are decomposed into Hamiltonian flows via the NLS hierarchy. The algebraic curve, Abel-Jacobi coordinates, and Riemann-Jacobi inversion are used to obtain the algebrogeometric solutions of these equations.

  9. A method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
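    The procedure described above can be sketched in two steps: obtain nominal estimates for a decay model y = A·exp(-k·t) from a linear fit to log(y), then repeatedly apply the Taylor-linearized (Gauss-Newton) correction until a predetermined convergence criterion is met. The data below are synthetic; this is an illustration of the technique, not the original program.

```python
# Sketch: nonlinear exponential regression on decay-type data.
# Step 1: linear curve fit for nominal estimates (log-linearization).
# Step 2: iterate the linearized correction until the update is negligible.
import math

def linfit(x, y):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

ts = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [5.0 * math.exp(-0.7 * t) for t in ts]   # synthetic decay data

# Step 1: nominal estimates from log(y) = log(A) - k*t
slope, intercept = linfit(ts, [math.log(y) for y in ys])
A, k = math.exp(intercept), -slope

# Step 2: Gauss-Newton correction matrix applied until convergence
for _ in range(20):
    M = [[0.0, 0.0], [0.0, 0.0]]
    g = [0.0, 0.0]
    for t, y in zip(ts, ys):
        e = math.exp(-k * t)
        r = y - A * e                  # residual
        J = [e, -A * t * e]            # d/dA, d/dk
        for i in range(2):
            g[i] += J[i] * r
            for j in range(2):
                M[i][j] += J[i] * J[j]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    dA = (g[0] * M[1][1] - g[1] * M[0][1]) / det
    dk = (g[1] * M[0][0] - g[0] * M[1][0]) / det
    A, k = A + dA, k + dk
    if abs(dA) + abs(dk) < 1e-12:      # predetermined convergence criterion
        break
```

On noise-free data the log-linear fit is already exact and the correction loop terminates immediately; with noisy data, step 2 refines the estimates because minimizing error in log(y) is not the same as minimizing error in y.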

  10. A critical point: the problems associated with the variety of criteria to quantify the antioxidant capacity.

    PubMed

    Prieto, M A; Vázquez, J A; Murado, M A

    2014-06-18

    Antioxidant action implies interfering in an autocatalytic process in which no fewer than five chemical species are present (oxygen, oxidizable substrate, radicals, antioxidants, and oxidation products); furthermore, reactions of first and second order can take place, and interactions can occur at several levels of the process. The common but incorrect practice is to use the single-time dose-response of an established antioxidant as a calibration curve to compute the equivalent antioxidant capacity of a sample that is tested at only one single time-dose, assuming too many false aspects to be true. This practice is unreasonable given the availability of computational applications and instrumental equipment that, combined, provide adequate tools to work with different variables in nonlinear models. The evaluation of the dose-time dependency of the response of the β-carotene method as a case study, using the combination of robust quantification procedures and a large number of results with lower experimental error (applying microplate readers), reveals the lack of meaning of single-time criteria. It also demonstrates that, in most of the reactions, the time-dependent response in the oxidation process is inherently nonlinear and should not be standardized at one single time, because this leads to unreliable results and hides the real aspects of the response. In food matrices, the application of single-time criteria causes deficiencies in the control of the antioxidant content. Therefore, it is logical that, in the past decade, researchers have called for a consensus to increase the determination and effectiveness of antioxidant responses.

  11. Prediction of hydrographs and flow-duration curves in almost ungauged catchments: Which runoff measurements are most informative for model calibration?

    NASA Astrophysics Data System (ADS)

    Pool, Sandra; Viviroli, Daniel; Seibert, Jan

    2017-11-01

    Applications of runoff models usually rely on long and continuous runoff time series for model calibration. However, many catchments around the world are ungauged and estimating runoff for these catchments is challenging. One approach is to perform a few runoff measurements in a previously fully ungauged catchment and to constrain a runoff model by these measurements. In this study we investigated the value of such individual runoff measurements when taken at strategic points in time for applying a bucket-type runoff model (HBV) in ungauged catchments. Based on the assumption that a limited number of runoff measurements can be taken, we sought the optimal sampling strategy (i.e. when to measure the streamflow) to obtain the most informative data for constraining the runoff model. We used twenty gauged catchments across the eastern US, made the assumption that these catchments were ungauged, and applied different runoff sampling strategies. All tested strategies consisted of twelve runoff measurements within one year and ranged from simply using monthly flow maxima to a more complex selection of observation times. In each case the twelve runoff measurements were used to select 100 best parameter sets using a Monte Carlo calibration approach. Runoff simulations using these 'informed' parameter sets were then evaluated for an independent validation period in terms of the Nash-Sutcliffe efficiency of the hydrograph and the mean absolute relative error of the flow-duration curve. Model performance measures were normalized by relating them to an upper and a lower benchmark representing a well-informed and an uninformed model calibration. The hydrographs were best simulated with strategies including high runoff magnitudes as opposed to the flow-duration curves that were generally better estimated with strategies that captured low and mean flows. 
The choice of a sampling strategy covering the full range of runoff magnitudes enabled hydrograph and flow-duration curve simulations close to a well-informed model calibration. The differences among strategies covering the full range of runoff magnitudes were small, indicating that the exact choice of strategy may be less crucial. Our study corroborates the information value of a small number of strategically selected runoff measurements for simulating runoff with a bucket-type runoff model in almost ungauged catchments.
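    The Monte Carlo calibration scheme described above can be sketched with a toy two-parameter model standing in for HBV (the model form, parameter ranges, and "measurements" below are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-reservoir "runoff model": q(t) = a * exp(-b * t).
# This stands in for HBV purely to illustrate the calibration scheme.
def runoff(t, a, b):
    return a * np.exp(-b * t)

t_obs = np.arange(12)                  # twelve strategic measurement times
q_obs = runoff(t_obs, 5.0, 0.3)        # synthetic "measurements"

# Monte Carlo calibration: draw random parameter sets, keep the 100
# with the smallest mean absolute error against the 12 measurements.
n_draws = 10_000
a_s = rng.uniform(1.0, 10.0, n_draws)
b_s = rng.uniform(0.05, 1.0, n_draws)
errors = np.abs(runoff(t_obs[:, None], a_s, b_s) - q_obs[:, None]).mean(axis=0)
best = np.argsort(errors)[:100]        # indices of the 100 'informed' sets

print(a_s[best[0]], b_s[best[0]], errors[best[0]])
```

    The 100 retained parameter sets would then each be run over an independent validation period, as in the study.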

  12. Z-scan theory for nonlocal nonlinear media with simultaneous nonlinear refraction and nonlinear absorption.

    PubMed

    Rashidian Vaziri, Mohammad Reza

    2013-07-10

    In this paper, the Z-scan theory for nonlocal nonlinear media has been further developed when nonlinear absorption and nonlinear refraction appear simultaneously. To this end, the nonlinear photoinduced phase shift between the impinging and outgoing Gaussian beams from a nonlocal nonlinear sample has been generalized. It is shown that this kind of phase shift will reduce correctly to its known counterpart for the case of pure refractive nonlinearity. Using this generalized form of phase shift, the basic formulas for closed- and open-aperture beam transmittances in the far field have been provided, and a simple procedure for interpreting the Z-scan results has been proposed. In this procedure, by separately performing open- and closed-aperture Z-scan experiments and using the represented relations for the far-field transmittances, one can measure the nonlinear absorption coefficient and nonlinear index of refraction as well as the order of nonlocality. Theoretically, it is shown that when the absorptive nonlinearity is present in addition to the refractive nonlinearity, the sample nonlocal response can noticeably suppress the peak and enhance the valley of the Z-scan closed-aperture transmittance curves, which is due to the nonlocal action's ability to change the beam transverse dimensions.

  13. Buckling Behavior of Compression-Loaded Quasi-Isotropic Curved Panels with a Circular Cutout

    NASA Technical Reports Server (NTRS)

    Hilburger, Mark W.; Britt, Vicki O.; Nemeth, Michael P.

    1999-01-01

    Results from a numerical and experimental study of the response of compression-loaded quasi-isotropic curved panels with a centrally located circular cutout are presented. The numerical results were obtained by using a geometrically nonlinear finite element analysis code. The effects of cutout size, panel curvature and initial geometric imperfections on the overall response of compression-loaded panels are described. In addition, results are presented from a numerical parametric study that indicate the effects of elastic circumferential edge restraints on the prebuckling and buckling response of a selected panel, and these numerical results are compared to experimentally measured results. These restraints are used to identify the effects of circumferential edge restraints that are introduced by the test fixture used in the present study. It is shown that circumferential edge restraints can introduce substantial nonlinear prebuckling deformations into shallow compression-loaded curved panels that can result in a significant increase in buckling load.

  14. Marangoni-induced symmetry-breaking pattern selection on viscous fluids

    NASA Astrophysics Data System (ADS)

    Shen, Li; Denner, Fabian; Morgan, Neal; van Wachem, Berend; Dini, Daniele

    2016-11-01

    Symmetry-breaking transitions on curved surfaces are found in a wide range of dissipative systems, ranging from asymmetric cell divisions to structure formation in thin films. Inherent within the nonlinearities are the associated curvilinear geometry, the elastic stretching and bending, and the various fluid dynamical processes. We present a generalised Swift-Hohenberg pattern selection theory for thin, curved, viscous films in the presence of a non-trivial Marangoni effect. Testing the theory with experiments on soap bubbles, we observe the film pattern selection to mimic that of the elastic wrinkling morphology on a curved elastic bilayer in regions of slow viscous flow. By examining the local state of damping of surface capillary waves we attempt to establish an equivalence between the Marangoni fluid dynamics and the nonlinear elastic shell theory above the critical wavenumber of the instabilities, and propose a possible explanation for the perceived elastic-fluidic duality. The authors acknowledge the financial support of the Shell University Technology Centre for fuels and lubricants.

  15. Calibration-free optical chemical sensors

    DOEpatents

    DeGrandpre, Michael D.

    2006-04-11

    An apparatus and method for taking absorbance-based chemical measurements are described. In a specific embodiment, an indicator-based pCO2 (partial pressure of CO2) sensor displays sensor-to-sensor reproducibility and measurement stability. These qualities are achieved by: 1) renewing the sensing solution, 2) allowing the sensing solution to reach equilibrium with the analyte, and 3) calculating the response from a ratio of the indicator solution absorbances which are determined relative to a blank solution. Careful solution preparation, wavelength calibration, and stray light rejection also contribute to this calibration-free system. Three pCO2 sensors were calibrated and each had response curves which were essentially identical within the uncertainty of the calibration. Long-term laboratory and field studies showed the response had no drift over extended periods (months). The theoretical response, determined from thermodynamic characterization of the indicator solution, also predicted the observed calibration-free performance.

  16. Application of Multifunctional Doppler LIDAR for Noncontact Track Speed, Distance, and Curvature Assessment

    NASA Astrophysics Data System (ADS)

    Munoz, Joshua

    The primary focus of this research is evaluation of the feasibility, applicability, and accuracy of Doppler Light Detection And Ranging (LIDAR) sensors as a non-contact means for measuring track speed, distance traveled, and curvature. Speed histories, currently measured with a rotary, wheel-mounted encoder, serve a number of useful purposes, one significant use involving derailment investigations. Distance calculation provides a spatial reference system for operators to locate track sections of interest. Railroad curves, with curvature measured by an IMU, are monitored to maintain track infrastructure within regulations. Speed measured with high accuracy leads to high-fidelity distance and curvature data through utilization of the processor clock rate and of left- and right-rail speed differentials during curve navigation, respectively. Wheel-mounted encoders, or tachometers, provide a relatively low-resolution speed profile, exhibit increased noise with increasing speed, and are subject to the inertial behavior of the rail car, which affects output data. The IMU used to measure curvature depends on acceleration and yaw-rate sensitivity and experiences difficulty in low-speed conditions. Preliminary system tests onboard a "Hy-Rail" utility vehicle capable of traveling on rail show that speed capture is possible using the rails as the reference moving target; furthermore, obtaining speed profiles from both rails allows for the calculation of speed differentials in curves to estimate degrees of curvature. Ground-truth distance calibration and curve measurement were also carried out. Distance calibration involved placement of spatial landmarks detected by a sensor to synchronize distance measurements as a pre-processing procedure. Curvature ground-truth measurements provided a reference system to confirm measurement results and observe alignment variation throughout a curve. Primary testing occurred onboard a track geometry rail car, measuring rail speed over substantial mileage in various weather conditions, providing high-accuracy data to further calculate distance and curvature along the test routes. Test results indicate the LIDAR system measures speed at higher accuracy than the encoder, free of the noise that grows with increasing speed. Distance calculation is also highly accurate, with results showing high correlation with encoder and ground-truth data. Finally, curvature calculation using speed data shows good correlation with IMU measurements and a resolution capable of revealing localized track alignments. Further investigations involve a curve measurement algorithm and a speed calibration method independent of external reference systems, namely encoder and ground-truth data. The speed calibration results show a high correlation with speed data from the track geometry vehicle. It is recommended that the study be extended to assess the LIDAR's sensitivity to car-body motion in order to better isolate the embedded behavior in the speed and curvature profiles. Furthermore, in the interest of progressing the system toward a commercially viable unit, methods for self-calibration and pre-processing that allow for fully independent operation are highly encouraged.
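    The curve-radius estimate from left- and right-rail speed differentials follows from rigid-axle geometry; a minimal sketch (the track gauge and speed values are illustrative, and the degree-of-curvature conversion assumes the US 100 ft chord definition):

```python
import math

def curve_radius(v_outer, v_inner, gauge=1.435):
    """Curve radius (in gauge units, here metres) from outer/inner rail
    speeds in any common unit. Assumes rigid-axle geometry:
    v_outer / v_inner = (R + g/2) / (R - g/2)."""
    return 0.5 * gauge * (v_outer + v_inner) / (v_outer - v_inner)

def degree_of_curvature(radius_m):
    """US railroad degree of curve: central angle subtended by a 100 ft chord."""
    radius_ft = radius_m / 0.3048
    return math.degrees(2 * math.asin(50.0 / radius_ft))

R = curve_radius(20.10, 20.00)       # ~0.5% speed differential
print(R, degree_of_curvature(R))
```

    A larger speed differential at a given mean speed implies a tighter radius and a higher degree of curvature.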

  17. Absolute quantitation of disease protein biomarkers in a single LC-MS acquisition using apolipoprotein F as an example.

    PubMed

    Kumar, Abhinav; Gangadharan, Bevin; Cobbold, Jeremy; Thursz, Mark; Zitzmann, Nicole

    2017-09-21

    LC-MS and immunoassays can detect protein biomarkers. Immunoassays are more commonly used but can potentially be outperformed by LC-MS. Both techniques have limitations, including the need to generate separate calibration curves for each biomarker. We present a rapid mass spectrometry-based assay utilising a universal calibration curve. For the first time we analyse clinical samples using the HeavyPeptide IGNIS kit, which establishes a 6-point calibration curve and determines the biomarker concentration in a single LC-MS acquisition. IGNIS was tested using apolipoprotein F (APO-F), a potential biomarker for non-alcoholic fatty liver disease (NAFLD). Human serum and IGNIS prime peptides were digested and the IGNIS assay was used to quantify APO-F in clinical samples. Digestion of IGNIS prime peptides was optimised using trypsin and SMART Digest™. IGNIS was 9 times faster than the conventional LC-MS method for determining the concentration of APO-F in serum. APO-F decreased across NAFLD stages. Inter- and intra-day variation and post-preparation stability for one of the peptides were ≤13% coefficient of variation (CV). SMART Digest™ enabled complete digestion in 30 minutes, compared with 24 hours for in-solution trypsin digestion. We have optimised the IGNIS kit to quantify APO-F as a NAFLD biomarker in serum using a single LC-MS acquisition.

  18. Static SPME sampling of VOCs emitted from indoor building materials: prediction of calibration curves of single compounds for two different emission cells.

    PubMed

    Mocho, Pierre; Desauziers, Valérie

    2011-05-01

    Solid-phase microextraction (SPME) is a powerful technique, easy to implement for on-site static sampling of indoor VOCs emitted by building materials. However, a major constraint lies in the establishment of calibration curves, which requires complex generation of standard atmospheres. Thus, the purpose of this paper is to propose a model to predict the adsorption kinetics (i.e., calibration curves) of four model VOCs. The model is based on Fick's laws for the gas phase and on the equilibrium or the solid diffusion model for the adsorptive phase. Two samplers (the FLEC® and a home-made cylindrical emission cell), coupled to SPME for static sampling of material emissions, were studied. A good agreement between modeling and experimental data is observed, and results show the influence of sampling rate on the mass transfer mode as a function of sample volume. The equilibrium model is suited to the larger-volume sampler (the cylindrical cell), while the solid diffusion model applies to the small-volume sampler (the FLEC®). The limiting steps of mass transfer are gas-phase diffusion for the cylindrical cell and pore-surface diffusion for the FLEC®. In the future, this modeling approach could be a useful, time-saving tool for developing SPME to study building material emissions in static-mode sampling.

  19. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    PubMed

    Kholeif, S A

    2001-06-01

    A new method belonging to the differential category for determining end points from potentiometric titration curves is presented. It uses a preprocess to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis is covered. The new method applies generally to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves obtained with equivalence-point-category methods, such as Gran or Fortuin, are also compared with the new method.
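    The inverse parabolic interpolation step has a closed-form solution: the vertex of the parabola through three sampled derivative values. A minimal sketch with synthetic dE/dV data (the paper's four-point non-linear fitting preprocess is not reproduced here):

```python
def parabola_vertex(x, y):
    """Analytic vertex of the parabola through three points.
    Used here to locate the extremum of the first derivative
    (the titration end point) between sampled volumes."""
    x0, x1, x2 = x
    y0, y1, y2 = y
    num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
    den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
    return x1 - 0.5 * num / den

# Synthetic dE/dV values near an end point at V = 10.00 mL
v = [9.8, 10.1, 10.4]
dEdV = [-((vi - 10.0) ** 2) + 4.0 for vi in v]   # peak at exactly 10.0
print(parabola_vertex(v, dEdV))   # exact for parabolic data, up to rounding
```

    Because the formula is exact for quadratics, accuracy near the inflection depends mainly on how well the fitted non-linear function approximates the local derivative shape.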

  20. [Spectrometric assessment of thyroid depth within the radioiodine test].

    PubMed

    Rink, T; Bormuth, F-J; Schroth, H-J; Braun, S; Zimny, M

    2005-01-01

    The aim of this study is the validation of a simple method for evaluating the depth of the target volume within the radioiodine test by analyzing the emitted iodine-131 energy spectrum. In a total of 250 patients (102 with a solitary autonomous nodule, 66 with multifocal autonomy, 29 with disseminated autonomy, 46 with Graves' disease, 6 for reducing goiter volume and 1 with an only partly resectable papillary thyroid carcinoma), simultaneous uptake measurements in the Compton scatter (210 +/- 110 keV) and photopeak (364 -45/+55 keV) windows were performed over one minute, 24 hours after application of the 3 MBq test dose, with subsequent calculation of the respective count ratios. Measurements with a water-filled plastic neck phantom were carried out to determine the relationship between these quotients and the average source depth and to obtain a calibration curve for calculating the depth of the target volume in the 250 patients for comparison with the sonographic reference data. Another calibration curve was obtained by evaluating the results of 125 randomly selected patient measurements to calculate the source depth in the other half of the group. The phantom measurements revealed a highly significant correlation (r = 0.99) between the count ratios and the source depth. Using these calibration data, a good relationship (r = 0.81, average deviation 6 mm, corresponding to 22%) between the spectrometric and the sonographic depths was obtained. When using the calibration curve resulting from the 125 patient measurements, the average deviation in the other half of the group was only 3 mm (12%). There was no difference between the disease groups. The described method allows an easy-to-use depth correction of the uptake measurements, providing good results.
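    The calibration step, regressing the scatter-to-photopeak count ratio against known phantom source depth and then inverting the line for patients, can be sketched as follows (the phantom numbers are invented for illustration, not taken from the study):

```python
import numpy as np

# Hypothetical phantom calibration: scatter/photopeak count ratio
# grows roughly linearly with source depth (numbers are illustrative).
depth_mm = np.array([5, 10, 15, 20, 25, 30])
ratio    = np.array([0.42, 0.55, 0.69, 0.81, 0.94, 1.08])

slope, intercept = np.polyfit(depth_mm, ratio, 1)
r = np.corrcoef(depth_mm, ratio)[0, 1]          # abstract reports r = 0.99

def depth_from_ratio(q):
    """Invert the calibration line to estimate source depth (mm)."""
    return (q - intercept) / slope

print(round(r, 3), round(depth_from_ratio(0.75), 1))
```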

  1. Direct biomechanical modeling of trabecular bone using a nonlinear manifold-based volumetric representation

    NASA Astrophysics Data System (ADS)

    Jin, Dakai; Lu, Jia; Zhang, Xiaoliu; Chen, Cheng; Bai, ErWei; Saha, Punam K.

    2017-03-01

    Osteoporosis is associated with increased fracture risk. Recent advancement in the area of in vivo imaging allows segmentation of trabecular bone (TB) microstructures, which is a known key determinant of bone strength and fracture risk. An accurate biomechanical modelling of TB micro-architecture provides a comprehensive summary measure of bone strength and fracture risk. In this paper, a new direct TB biomechanical modelling method using nonlinear manifold-based volumetric reconstruction of trabecular network is presented. It is accomplished in two sequential modules. The first module reconstructs a nonlinear manifold-based volumetric representation of TB networks from three-dimensional digital images. Specifically, it starts with the fuzzy digital segmentation of a TB network, and computes its surface and curve skeletons. An individual trabecula is identified as a topological segment in the curve skeleton. Using geometric analysis, smoothing and optimization techniques, the algorithm generates smooth, curved, and continuous representations of individual trabeculae glued at their junctions. Also, the method generates a geometrically consistent TB volume at junctions. In the second module, a direct computational biomechanical stress-strain analysis is applied on the reconstructed TB volume to predict mechanical measures. The accuracy of the method was examined using micro-CT imaging of cadaveric distal tibia specimens (N = 12). A high linear correlation (r = 0.95) between TB volume computed using the new manifold-modelling algorithm and that directly derived from the voxel-based micro-CT images was observed. Young's modulus (YM) was computed using direct mechanical analysis on the TB manifold-model over a cubical volume of interest (VOI), and its correlation with the YM, computed using micro-CT based conventional finite-element analysis over the same VOI, was examined. A moderate linear correlation (r = 0.77) was observed between the two YM measures. 
These preliminary results show the accuracy of the new nonlinear manifold modelling algorithm for TB and demonstrate the feasibility of a new direct mechanical stress-strain analysis on a nonlinear manifold model of a highly complex biological structure.

  2. Fracture resistance of a TiB2 particle/SiC matrix composite at elevated temperature

    NASA Technical Reports Server (NTRS)

    Jenkins, Michael G.; Salem, Jonathan A.; Seshadri, Srinivasa G.

    1988-01-01

    The fracture resistance of a commercial TiB2 particle/SiC matrix composite was evaluated at temperatures ranging from 20 to 1400 C. A laser interferometric strain gauge (LISG) was used to continuously monitor the crack mouth opening displacement (CMOD) of the chevron-notched and straight-notched, three-point bend specimens used. Crack growth resistance curves (R-curves) were determined from the load versus displacement curves and displacement calibrations. Fracture toughness, work-of-fracture, and R-curve levels were found to decrease with increasing temperature. Microstructure, fracture surface, and oxidation coat were examined to explain the fracture behavior.

  3. Fracture resistance of a TiB2 particle/SiC matrix composite at elevated temperature

    NASA Technical Reports Server (NTRS)

    Jenkins, Michael G.; Salem, Jonathan A.; Seshadri, Srinivasa G.

    1989-01-01

    The fracture resistance of a commercial TiB2 particle/SiC matrix composite was evaluated at temperatures ranging from 20 to 1400 C. A laser interferometric strain gauge (LISG) was used to continuously monitor the crack mouth opening displacement (CMOD) of the chevron-notched and straight-notched, three-point bend specimens used. Crack growth resistance curves (R-curves) were determined from the load versus displacement curves and displacement calibrations. Fracture toughness, work-of-fracture, and R-curve levels were found to decrease with increasing temperature. Microstructure, fracture surface, and oxidation coat were examined to explain the fracture behavior.

  4. Impacts of uncertainties in weather and streamflow observations in calibration and evaluation of an elevation distributed HBV-model

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Steinsland, I.; Petersen-Øverleir, A.; Johansen, S.

    2012-04-01

    The aim of this study is to assess the uncertainties in streamflow simulations when uncertainties in both the observed inputs (precipitation and temperature) and the streamflow observations used in calibration of the hydrological model are explicitly accounted for. To achieve this goal we applied the elevation-distributed HBV model, operating on daily time steps, to a small high-elevation catchment in Southern Norway where the seasonal snow cover is important. The uncertainties in precipitation inputs were quantified using conditional simulation. This procedure accounts for the uncertainty related to the density of the precipitation network, but neglects uncertainties related to measurement bias/errors and possible elevation gradients in precipitation. The uncertainties in temperature inputs were quantified using a Bayesian temperature interpolation procedure in which the temperature lapse rate is re-estimated every day. The uncertainty in the lapse rate was accounted for, whereas the sampling uncertainty related to network density was neglected. For every day, a random sample of precipitation and temperature inputs was drawn to be applied as inputs to the hydrologic model. The uncertainties in observed streamflow were assessed based on the uncertainties in the rating curve model. A Bayesian procedure was applied to estimate the probability of rating curve models with 1 to 3 segments and the uncertainties in their parameters. This method neglects uncertainties related to errors in observed water levels. Note that one rating curve was drawn to generate one realisation of a whole time series of streamflow; thus the rating curve errors lead to a systematic bias in the streamflow observations. All these uncertainty sources were linked together in both calibration and evaluation of the hydrologic model using a DREAM-based MCMC routine. The effects of having less information (e.g. missing one streamflow measurement for defining the rating curve, or missing one precipitation station) were also investigated.

  5. Measurement of large steel plates based on linear scan structured light scanning

    NASA Astrophysics Data System (ADS)

    Xiao, Zhitao; Li, Yaru; Lei, Geng; Xi, Jiangtao

    2018-01-01

    A measuring method based on linear structured light scanning is proposed to achieve accurate measurement of the complex internal shape of large steel plates. Firstly, using a calibration plate with round marks, an improved line-scanning calibration method is designed. The internal and external parameters of the camera are determined through this calibration method. Secondly, the images of the steel plates are acquired by a line scan camera. The Canny edge detection method is then used to extract approximate contours from the steel plate images, and a Gauss fitting algorithm is used to extract the sub-pixel edges of the steel plate contours. Thirdly, to address the inaccurate restoration of contour size, the horizontal and vertical error curves of the images are obtained by measuring the distance between adjacent points in a grid of known dimensions. Finally, these horizontal and vertical error curves are used to correct the contours of the steel plates; combined with the internal and external calibration parameters, the size of the contours can then be calculated. The experimental results demonstrate that the proposed method achieves an error of 1 mm/m in a 1.2 m × 2.6 m field of view, which satisfies the demands of industrial measurement.

  6. Molecular Form Differences Between Prostate-Specific Antigen (PSA) Standards Create Quantitative Discordances in PSA ELISA Measurements

    PubMed Central

    McJimpsey, Erica L.

    2016-01-01

    The prostate-specific antigen (PSA) assays currently employed for the detection of prostate cancer (PCa) lack the specificity needed to differentiate PCa from benign prostatic hyperplasia and have high false positive rates. The PSA calibrants used to create calibration curves in these assays are typically purified from seminal plasma and contain many molecular forms (intact PSA and cleaved subforms). The purpose of this study was to determine if the composition of the PSA molecular forms found in these PSA standards contributes to the lack of PSA test reliability. To this end, seminal plasma purified PSA standards from different commercial sources were investigated by western blot (WB) and in multiple research-grade PSA ELISAs. The WB results revealed that all of the PSA standards contained different mass concentrations of intact and cleaved molecular forms. Increased mass concentrations of intact PSA yielded higher immunoassay absorbance values, even between lots from the same manufacturer. Standardization of the molecular form mass concentrations and purification methods of seminal plasma derived PSA calibrants will assist in closing the gaps in PCa testing measurements that require the use of PSA values, such as the % free PSA and the Prostate Health Index, by increasing the accuracy of the calibration curves. PMID:26911983

  7. Research and development program for non-linear structural modeling with advanced time-temperature dependent constitutive relationships

    NASA Technical Reports Server (NTRS)

    Walker, K. P.

    1981-01-01

    Results of a 20-month research and development program for nonlinear structural modeling with advanced time-temperature constitutive relationships are reported. The program included: (1) the evaluation of a number of viscoplastic constitutive models in the published literature; (2) incorporation of three of the most appropriate constitutive models into the MARC nonlinear finite element program; (3) calibration of the three constitutive models against experimental data using Hastelloy-X material; and (4) application of the most appropriate constitutive model to a three dimensional finite element analysis of a cylindrical combustor liner louver test specimen to establish the capability of the viscoplastic model to predict component structural response.

  8. SU-E-T-299: Dosimetric Characterization of Small Field in Small Animal Irradiator with Radiochromic Films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, S; Kim, K; Jung, H

    Purpose: The small animal irradiator has been used with small animals to optimize new radiation therapy in preclinical studies. The small animal was irradiated by whole- or partial-body exposure. In this study, the dosimetric characterization of a small animal irradiator was carried out in small fields using radiochromic films. Material & Methods: The study was performed on a commercial animal irradiator (XRAD-320, Precision X-Ray Inc., North Branford) with radiochromic films (EBT2, Ashland Inc., Covington). A calibration curve was generated between delivered dose and optical density (red channel), and the films were scanned by an Epson 1000XL scanner (Epson America Inc., Long Beach, CA). We evaluated the dosimetric characteristics of the irradiator at 260 kV using the various filters supplied by the manufacturer: F1 (2.0 mm aluminum; HVL = about 1.0 mm Cu) and F2 (0.75 mm tin + 0.25 mm copper + 1.5 mm aluminum; HVL = about 3.7 mm Cu). For each collimator size (3, 5, 7, 10 mm), we calculated the percentage depth dose (PDD); the source-to-surface distance (SSD) was 17.3 cm, considering dose rate. Results: The films were irradiated at 260 kV, 10 mA, with exposure time increased in 5 s intervals from 5 s to 120 s. The calibration curve of the films was fitted with a cubic function. The correlation between optical density and dose was Y = 0.1405X³ − 2.916X² + 25.566X + 2.238 (R² = 0.994). Based on the calibration curve, we calculated the PDD for the various filters as a function of collimator size. When comparing the PDD at a specific depth (3 mm), chosen considering animal size, the difference by collimator size was 4.50% with no filter, 1.53% with F1, and within 2.17% with F2. Conclusion: We calculated PDD curves for the small animal irradiator as a function of collimator size and filter type using radiochromic films. The various PDD curves acquired make it possible to deliver a range of doses using these curves.
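    The reported cubic calibration can be evaluated, or refit from measured pairs, with a short script. This sketch assumes X is the net optical density and Y the delivered dose, an assignment the abstract leaves implicit:

```python
import numpy as np

# Cubic calibration reported in the abstract:
#   Y = 0.1405 X^3 - 2.916 X^2 + 25.566 X + 2.238   (R^2 = 0.994)
# We assume X is the net optical density and Y the dose; the abstract
# does not state the units, so treat this purely as a sketch.
coeffs = [0.1405, -2.916, 25.566, 2.238]   # highest power first

def dose_from_density(od):
    return np.polyval(coeffs, od)

# A refit from measured (od, dose) pairs would use the same form;
# here the "readings" are generated from the published polynomial.
od = np.linspace(0.2, 6.0, 24)
dose = dose_from_density(od)               # stand-in for film readings
refit = np.polyfit(od, dose, 3)            # recovers the coefficients

print(dose_from_density(1.0), np.round(refit, 4))
```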

  9. WE-D-17A-06: Optically Stimulated Luminescence Detectors as ‘LET-Meters’ in Proton Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granville, D; Sahoo, N; Sawakuchi, GO

    Purpose: To demonstrate and evaluate the potential of optically stimulated luminescence (OSL) detectors (OSLDs) for measurements of linear energy transfer (LET) in therapeutic proton beams. Methods: Batches of Al2O3:C OSLDs were irradiated with an absorbed dose of 0.2 Gy in un-modulated proton beams of varying LET (0.67 keV/μm to 2.58 keV/μm). The OSLDs were read using continuous wave (CW-OSL) and pulsed (P-OSL) stimulation modes. We parameterized and calibrated three characteristics of the OSL signals as functions of LET: CW-OSL curve shape, P-OSL curve shape and the ratio of the two OSL emission band intensities (ultraviolet/blue ratio). Calibration curves were created for each of these characteristics to describe their behaviors as functions of LET. The true LET values were determined using a validated Monte Carlo model of the proton therapy nozzle used to irradiate the OSLDs. We then irradiated batches of OSLDs with an absorbed dose of 0.2 Gy at various depths in two modulated proton beams (140 MeV, 4 cm wide spread-out Bragg peak (SOBP) and 250 MeV, 10 cm wide SOBP). The LET values were calculated using the OSL response and the calibration curves. Finally, measured LET values were compared to the true values determined using Monte Carlo simulations. Results: The CW-OSL curve shape, P-OSL curve shape and the ultraviolet/blue ratio provided proton LET estimates within 12.4%, 5.7% and 30.9% of the true values, respectively. Conclusion: We have demonstrated that LET can be measured within 5.7% using Al2O3:C OSLDs in the therapeutic proton beams used in this investigation. From a single OSLD readout, it is possible to measure both the absorbed dose and the LET. This has potential future applications in proton therapy quality assurance, particularly for treatment plans based on optimization of LET distributions. This research was partially supported by the Natural Sciences and Engineering Research Council of Canada.
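    Reading LET back from a monotonic calibration curve of an OSL curve-shape parameter amounts to inverting a tabulated function; a sketch with invented calibration points (not the paper's data):

```python
import numpy as np

# Hypothetical calibration: a CW-OSL curve-shape parameter measured at
# four known LETs (values are illustrative, not the paper's data).
let_kev_um = np.array([0.67, 1.20, 1.80, 2.58])
shape_param = np.array([0.310, 0.355, 0.402, 0.460])   # monotonic in LET

def let_from_shape(s):
    """Invert the monotonic calibration by linear interpolation."""
    return np.interp(s, shape_param, let_kev_um)

print(let_from_shape(0.380))
```

    In practice the calibration would be parameterized (fit to a smooth function of LET) rather than interpolated point to point, but the inversion idea is the same.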

  10. Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems

    PubMed Central

    Li, Zhining; Zhang, Yingtang; Yin, Gang

    2018-01-01

    The measurement error of the differencing (i.e., using two homogenous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors, nonorthogonality of the single magnetic sensor, and the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, the error parameter linear equations are constructed based on the single-sensor’s system error model to obtain the artificial ideal vector output of the platform, with the total magnetic intensity (TMI) scalar as a reference by two nonlinear conversions, without any mathematical simplification. Secondly, the Levenberg–Marquardt algorithm is used to compute the integrated model of the 12 error parameters by the nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 parameters of the system are estimated simultaneously. The calibrated system output is expressed in the reference platform-orthogonal coordinate system. The analysis results show that the artificial vector calibrated output can track the orientation fluctuations of TMI accurately, effectively avoiding the “overcalibration” problem. The accuracy of the error parameters’ estimation in the simulation is close to 100%. The experimental root-mean-square error (RMSE) of the TMI and tensor components is less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust. PMID:29373544
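    The scalar-reference fitting idea (adjust sensor error parameters until the corrected vector magnitude matches the TMI reference, using Levenberg–Marquardt) can be sketched with a reduced model. This is not the paper's 12-parameter single-sensor model; it is a six-parameter toy (per-axis scale and bias only) on synthetic data:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)

    # Synthetic "true" field vectors with a constant TMI of 50000 nT
    n = 200
    dirs = rng.normal(size=(n, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    b_true = 50000.0 * dirs

    # Corrupt with per-axis scale factors and biases (reduced 6-parameter model)
    scale_true = np.array([1.02, 0.97, 1.05])
    bias_true = np.array([120.0, -80.0, 60.0])
    b_meas = b_true * scale_true + bias_true

    def residuals(p):
        scale, bias = p[:3], p[3:]
        b_cal = (b_meas - bias) / scale
        return np.linalg.norm(b_cal, axis=1) - 50000.0  # mismatch vs. TMI reference

    p0 = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
    sol = least_squares(residuals, p0, method="lm")  # Levenberg-Marquardt (MINPACK)
    scale_est, bias_est = sol.x[:3], sol.x[3:]
    ```

    With noise-free synthetic data and well-spread field directions, the recovered scales and biases match the injected values closely; the full method additionally estimates nonorthogonality and inter-sensor misalignment.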

  11. Determination of Flavonoids in Wine by High Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    da Queija, Celeste; Queirós, M. A.; Rodrigues, Ligia M.

    2001-02-01

    The experiment presented is an application of HPLC to the analysis of flavonoids in wines, designed for students of instrumental methods. It is done in two successive 4-hour laboratory sessions. While the hydrolysis of the wines is in progress, the students prepare the calibration curves with standard solutions of flavonoids and calculate the regression lines and correlation coefficients. During the second session they analyze the hydrolyzed wine samples and calculate the concentrations of the flavonoids using the calibration curves obtained earlier. This laboratory work is very attractive to students because they deal with a common daily product whose components are reported to have preventive and therapeutic effects. Furthermore, students can execute preparative work and apply a more elaborate technique that is nowadays an indispensable tool in instrumental analysis.
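    The calibration workflow the students follow (regression line and correlation coefficient from standards in the first session, back-calculated concentrations in the second) can be sketched as below; the concentrations and peak areas are illustrative, not from the experiment:

    ```python
    import numpy as np

    # Illustrative standards: concentration (mg/L) vs. HPLC peak area (arbitrary units)
    conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
    area = np.array([12.1, 24.3, 48.0, 97.2, 193.5])

    # Regression line and correlation coefficient for the calibration curve
    slope, intercept = np.polyfit(conc, area, 1)
    r = np.corrcoef(conc, area)[0, 1]

    # Second session: back-calculate an unknown's concentration from its peak area
    unknown_area = 60.0
    unknown_conc = (unknown_area - intercept) / slope
    ```

    A correlation coefficient very close to 1 is the usual acceptance check before the curve is used on the hydrolyzed wine samples.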

  12. Light curves of flat-spectrum radio sources (Jenness+, 2010)

    NASA Astrophysics Data System (ADS)

    Jenness, T.; Robson, E. I.; Stevens, J. A.

    2010-05-01

    Calibrated data for 143 flat-spectrum extragalactic radio sources are presented at a wavelength of 850um covering a 5-yr period from 2000 April. The data, obtained at the James Clerk Maxwell Telescope using the Submillimetre Common-User Bolometer Array (SCUBA) camera in pointing mode, were analysed using an automated pipeline process based on the Observatory Reduction and Acquisition Control - Data Reduction (ORAC-DR) system. This paper describes the techniques used to analyse and calibrate the data, and presents the data base of results along with a representative sample of the better-sampled light curves. A re-analysis of previously published data from 1997 to 2000 is also presented. The combined catalogue, comprising 10493 flux density measurements, provides a unique and valuable resource for studies of extragalactic radio sources. (2 data files).

  13. Statistical behavior of ten million experimental detection limits

    NASA Astrophysics Data System (ADS)

    Voigtman, Edward; Abraham, Kevin T.

    2011-02-01

    Using a lab-constructed laser-excited fluorimeter, together with bootstrapping methodology, the authors have generated many millions of experimental linear calibration curves for the detection of rhodamine 6G tetrafluoroborate in ethanol solutions. The detection limits computed from them are in excellent agreement both with previously published theory and with comprehensive Monte Carlo computer simulations. Currie decision levels and Currie detection limits, each in the theoretical, chemical content domain, were found to be simply scaled reciprocals of the non-centrality parameter of the non-central t distribution that characterizes univariate linear calibration curves that have homoscedastic, additive Gaussian white noise. Accurate and precise estimates of the theoretical, content domain Currie detection limit for the experimental system, with 5% (each) probabilities of false positives and false negatives, are presented.
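    For a linear calibration curve with homoscedastic additive Gaussian noise and a known noise level, the content-domain Currie decision level and detection limit reduce to simple scalings of the blank noise by the slope (z-values apply when sigma is known; the paper's t-based treatment covers the estimated-sigma case). The slope and noise values below are illustrative assumptions:

    ```python
    from scipy.stats import norm

    # Illustrative values, not from the paper: calibration slope and blank noise
    slope = 2.0          # response units per unit concentration
    sigma0 = 0.05        # homoscedastic additive Gaussian noise (response units)
    z = norm.ppf(0.95)   # one-sided 5% critical value (about 1.645)

    # Currie decision level and detection limit in the content (concentration)
    # domain, for known sigma and 5% probabilities of both false positives
    # and false negatives
    x_c = z * sigma0 / slope
    x_d = 2 * z * sigma0 / slope
    ```

    With equal 5% error probabilities and known sigma, the detection limit is exactly twice the decision level; that symmetry breaks when sigma must be estimated from finite calibration data.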

  14. Curved-flow, rolling-flow, and oscillatory pure-yawing wind-tunnel test methods for determination of dynamic stability derivatives

    NASA Technical Reports Server (NTRS)

    Chambers, J. R.; Grafton, S. B.; Lutze, F. H.

    1981-01-01

    The test capabilities of the Stability Wind Tunnel of the Virginia Polytechnic Institute and State University are described, and calibrations for curved and rolling flow techniques are given. Oscillatory snaking tests to determine pure yawing derivatives are considered. Representative aerodynamic data obtained for a current fighter configuration using the curved and rolling flow techniques are presented. The application of dynamic derivatives obtained in such tests to the analysis of airplane motions in general, and to high angle of attack flight conditions in particular, is discussed.

  15. Historical Cost Curves for Hydrogen Masers and Cesium Beam Frequency and Timing Standards

    NASA Technical Reports Server (NTRS)

    Remer, D. S.; Moore, R. C.

    1985-01-01

    Historical cost curves were developed for hydrogen masers and cesium beam standards used for frequency and timing calibration in the Deep Space Network. These curves may be used to calculate the cost of future hydrogen masers or cesium beam standards in either future or current dollars. Relative to the National Aeronautics and Space Administration inflation index, the cost of cesium beam standards has been decreasing by about 2.3% per year since 1966, and that of hydrogen masers by about 0.8% per year since 1978.
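    The quoted annual declines imply a constant-rate (exponential) cost model; a minimal sketch of projecting a future cost under that assumption, with a made-up current price:

    ```python
    def projected_cost(cost_now, annual_decline, years_ahead):
        """Constant-rate decline: cost falls by `annual_decline` (fraction) per year."""
        return cost_now * (1.0 - annual_decline) ** years_ahead

    # E.g. a cesium standard declining 2.3%/yr: projected cost 10 years out,
    # starting from a hypothetical $100,000 price in today's dollars
    future = projected_cost(100000.0, 0.023, 10)
    ```

    The result is in constant dollars; converting to future dollars would additionally apply the inflation index the report references.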

  16. Conversion of calibration curves for accurate estimation of molecular weight averages and distributions of polyether polyols by conventional size exclusion chromatography.

    PubMed

    Xu, Xiuqing; Yang, Xiuhan; Martin, Steven J; Mes, Edwin; Chen, Junlan; Meunier, David M

    2018-08-17

    Accurate measurement of molecular weight averages (M̄n, M̄w, M̄z) and molecular weight distributions (MWD) of polyether polyols by conventional SEC (size exclusion chromatography) is not as straightforward as it would appear. Conventional calibration with polystyrene (PS) standards can only provide PS apparent molecular weights, which do not provide accurate estimates of polyol molecular weights. Using polyethylene oxide/polyethylene glycol (PEO/PEG) for molecular weight calibration could improve the accuracy, but the retention behavior of PEO/PEG is not stable in tetrahydrofuran (THF)-based SEC systems. In this work, two approaches for calibration curve conversion with narrow PS and polyol molecular weight standards were developed. Equations to convert PS-apparent molecular weight to polyol-apparent molecular weight were developed using both a rigorous mathematical analysis and a graphical plot regression method. The conversion equations obtained by the two approaches were in good agreement. Factors influencing the conversion equation were investigated. It was concluded that separation conditions such as column batch and operating temperature did not have a significant impact on the conversion coefficients, and a universal conversion equation could be obtained. With this conversion equation, more accurate estimates of molecular weight averages and MWDs for polyether polyols can be achieved from conventional PS-THF SEC calibration. Moreover, no additional experimentation is required to convert historical PS equivalent data to reasonably accurate molecular weight results. Copyright © 2018. Published by Elsevier B.V.
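    A common way to express such a conversion is a linear relation between the two calibrations in log M at equal elution volume; the sketch below uses that form with entirely hypothetical coefficients (the paper's actual conversion equation and coefficients are not reproduced here):

    ```python
    import numpy as np

    # Hypothetical paired calibration data at equal elution volumes:
    # log10(M) for PS standards and for polyol standards (coefficients illustrative)
    log_m_ps = np.array([3.0, 3.5, 4.0, 4.5, 5.0])
    log_m_polyol = 0.95 * log_m_ps - 0.30   # assumed linear relation in log M

    # Regress to obtain the conversion equation: log M_polyol = A + B * log M_PS
    B, A = np.polyfit(log_m_ps, log_m_polyol, 1)

    def ps_to_polyol(m_ps):
        """Convert a PS-apparent molecular weight to a polyol-apparent value."""
        return 10.0 ** (A + B * np.log10(m_ps))

    m = ps_to_polyol(1.0e4)  # PS-apparent 10,000 -> polyol-apparent estimate
    ```

    Once A and B are fixed, historical PS-equivalent results can be re-expressed as polyol-apparent values without rerunning any samples, which is the practical appeal noted in the abstract.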

  17. Imaging of human tooth using ultrasound based chirp-coded nonlinear time reversal acoustics.

    PubMed

    Dos Santos, Serge; Prevorovsky, Zdenek

    2011-08-01

    Human tooth imaging sonography is investigated experimentally with an acousto-optic noncoupling set-up based on the chirp-coded nonlinear time reversal acoustic concept. The complexity of the tooth internal structure (enamel-dentine interface, cracks between internal tubules) is analyzed by adapting nonlinear elastic wave spectroscopy (NEWS) with the objective of tomography of damage. Optimization of excitations using intrinsic symmetries, such as time reversal (TR) invariance, reciprocity, and correlation properties, is then proposed and implemented experimentally. The proposed medical application of this TR-NEWS approach is implemented on a third molar human tooth and constitutes an alternative to noncoupling echodentography techniques. A 10 MHz bandwidth ultrasonic instrumentation has been developed, including a laser vibrometer and a 20 MHz contact piezoelectric transducer. The calibrated chirp-coded TR-NEWS imaging of the tooth is obtained using symmetrized excitations, pre- and post-signal processing, and the highly sensitive 14 bit resolution TR-NEWS instrumentation previously calibrated. A nonlinear signature arising from the symmetry properties is observed experimentally in the tooth using this bi-modal TR-NEWS imaging before and after the focusing induced by the time-compression process. The TR-NEWS polar B-scan of the tooth is described and suggested as a potential application for modern echodentography. It constitutes the basis of self-consistent harmonic imaging sonography for monitoring crack propagation in the dentine, which is responsible for human tooth structural health. Copyright © 2011 Elsevier B.V. All rights reserved.
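    The chirp-coded time-reversal step can be illustrated in one dimension: a chirp is sent through an unknown medium, and re-emitting the time-reversed received signal through the same medium compresses the energy at the focus. Here the second propagation is simulated by convolution with the same impulse response; the sampling rate, chirp band, and echo pattern are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.signal import chirp

    fs = 100e6                        # 100 MHz sampling, covering the 10 MHz bandwidth
    t = np.arange(0, 20e-6, 1 / fs)   # 20 us coded excitation
    x = chirp(t, f0=1e6, t1=t[-1], f1=10e6)  # 1-10 MHz linear chirp

    # Idealized medium: unknown impulse response with a couple of echoes
    h = np.zeros(200)
    h[0], h[150] = 1.0, 0.4
    y = np.convolve(x, h)             # first pass: received chirp-coded signal

    # Time-reversal pass: sending y reversed through the same medium is
    # equivalent to correlating y with itself, compressing energy at one instant
    focus = np.convolve(y[::-1], y)
    peak = int(np.argmax(np.abs(focus)))
    ```

    The compressed peak lands at zero lag (index len(y) - 1 of the full convolution), which is the time-compression property the TR-NEWS processing exploits.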

  18. Model development for MODIS thermal band electronic cross-talk

    NASA Astrophysics Data System (ADS)

    Chang, Tiejun; Wu, Aisheng; Geng, Xu; Li, Yonghong; Brinkmann, Jake; Keller, Graziela; Xiong, Xiaoxiong (Jack)

    2016-10-01

    MODerate-resolution Imaging Spectroradiometer (MODIS) has 36 bands. Among them, 16 are thermal emissive bands covering a wavelength range from 3.8 to 14.4 μm. After 16 years of on-orbit operation, the electronic crosstalk of a few Terra MODIS thermal emissive bands has developed substantial issues, causing biases in the Earth view (EV) brightness temperature measurements and surface feature contamination. The crosstalk effects on band 27, with center wavelength at 6.7 μm, and band 29, at 8.5 μm, have increased significantly in recent years, affecting downstream products such as water vapor and cloud mask. The crosstalk issue can be observed from the nearly monthly scheduled lunar measurements, from which the crosstalk coefficients can be derived. Most MODIS thermal bands saturate at lunar surface temperatures, so the development of an alternative approach is very helpful for verification. In this work, a physical model was developed to assess the crosstalk impact on calibration as well as on Earth view brightness temperature retrieval. This model was applied to Terra MODIS band 29 empirically for correction of Earth brightness temperature measurements. In the model development, the detector nonlinear response is considered. The impacts of the electronic crosstalk are assessed in two steps. The first step consists of determining the impact on calibration using the on-board blackbody (BB). Due to the detector nonlinear response and large background signal, both linear and nonlinear coefficients are affected by the crosstalk from sending bands. The crosstalk impact on the calibration coefficients was calculated. The second step is to calculate the effects on the Earth view brightness temperature retrieval. The effects include those from the affected calibration coefficients and the contamination of Earth view measurements. This model links the measurement bias with the crosstalk coefficients, the detector nonlinearity, and the ratio of Earth measurements between the sending and receiving bands.
    The correction of the electronic crosstalk can be implemented empirically from the processed bias at different brightness temperatures. The implementation can be done through two approaches. In the first, as part of routine calibration assessment for the thermal infrared bands, trending over selected Earth scenes is processed for all the detectors in a band and the band-averaged bias is derived over a given period; the correction of an affected band is then made by regressing the model against the band-averaged bias, after which corrections for detector differences are applied. The second approach requires trending for individual detectors, and the bias for each detector is used for regression with the model. A test using the first approach was made for Terra MODIS band 29, with the biases derived from long-term trending of sea surface temperature and Dome-C surface temperature.
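    A minimal illustration of the linear part of such a crosstalk correction (ignoring the detector nonlinearity the authors' model includes): a receiving band's counts are contaminated by a small fraction of a sending band's counts, and the correction subtracts the scaled sending-band signal before radiometric calibration. The counts and coefficient are made up; in practice the coefficients are derived from lunar measurements:

    ```python
    import numpy as np

    # Illustrative digital counts for a sending and a receiving band (one scan line)
    dn_send = np.array([1200.0, 1350.0, 1100.0, 1500.0])
    dn_recv_true = np.array([800.0, 820.0, 790.0, 850.0])

    # Contaminate the receiving band with a small linear crosstalk leak
    c = 0.004  # hypothetical crosstalk coefficient
    dn_recv_meas = dn_recv_true + c * dn_send

    # Correction: remove the scaled sending-band signal before calibration
    dn_recv_corr = dn_recv_meas - c * dn_send
    ```

    In the full model the same contamination also enters the blackbody calibration views, which is why both the linear and nonlinear calibration coefficients need to be re-derived, not just the Earth-view counts.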

  19. Germanium resistance thermometer calibration at superfluid helium temperatures

    NASA Technical Reports Server (NTRS)

    Mason, F. C.

    1985-01-01

    The rapid increase in resistance of high-purity semiconducting germanium with decreasing temperature in the superfluid helium range of temperatures makes this material highly adaptable as a very sensitive thermometer. Also, a germanium thermometer exhibits a highly reproducible resistance-versus-temperature characteristic curve upon cycling between liquid helium temperatures and room temperature. These two factors combine to make germanium thermometers ideally suited for measuring temperatures in many cryogenic studies at superfluid helium temperatures. One disadvantage, however, is the relatively high cost of calibrated germanium thermometers. In space helium cryogenic systems, many such thermometers are often required, leading to a high cost for calibrated thermometers. The construction of a thermometer calibration cryostat and probe that will allow for calibrating six germanium thermometers at one time, thus effecting substantial savings in the purchase of thermometers, is considered.

  20. Research on Nonlinear Time Series Forecasting of Time-Delay NN Embedded with Bayesian Regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Weijin; Xu, Yusheng; Xu, Yuhui; Wang, Jianmin

    Based on the idea of nonlinear prediction via phase space reconstruction, this paper presents a time-delay BP neural network model whose generalization capability is improved by Bayesian regularization. The model is applied to forecasting import and export trade in one industry. The results show that the improved model has excellent generalization capability: it not only learned the historical curve but also efficiently predicted the trend of the business. Comparing with common forecast evaluation, we conclude that nonlinear forecasting can focus not only on data combination and precision improvement but can also vividly reflect the nonlinear characteristics of the forecast system. In analyzing the forecasting precision of the model, we assess it by calculating the nonlinear characteristic values of the combined and original series, showing that the forecasting model can reasonably 'catch' the dynamic characteristics of the nonlinear system that produced the original series.
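    The phase-space-reconstruction input to such a time-delay network is built by turning the scalar series into delay vectors; a minimal sketch follows, with the embedding dimension and delay chosen arbitrarily for illustration (in practice they are selected from the data, e.g. via false nearest neighbors and mutual information):

    ```python
    import numpy as np

    def delay_embed(series, dim, tau):
        """Phase-space reconstruction: row i is the delay vector
        [x(i), x(i+tau), ..., x(i+(dim-1)*tau)], usable as one NN input sample."""
        n = len(series) - (dim - 1) * tau
        cols = [series[k * tau : k * tau + n] for k in range(dim)]
        return np.column_stack(cols)

    # Each row of X becomes an input pattern; the target is the next series value
    X = delay_embed(np.arange(10.0), dim=3, tau=2)
    ```

    With a 10-point series, dim=3 and tau=2 yield six delay vectors; the network then maps each vector to the subsequent observation during training.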

Top