Michałowska-Kaczmarczyk, Anna Maria; Asuero, Agustin G; Martin, Julia; Alonso, Esteban; Jurado, Jose Marcos; Michałowski, Tadeusz
2014-12-01
Rational functions of the Padé type are used for the purposes of the calibration curve method (CCM) and the standard addition method (SAM). In this paper, the related functions were applied to results obtained from the analyses of (a) nickel by FAAS, (b) potassium by FAES, and (c) salicylic acid by HPLC-MS/MS. A uniform, integral criterion of nonlinearity of the curves obtained according to CCM and SAM is suggested. This uniformity is based on normalization of the approximating functions within the frames of a unit area. Copyright © 2014 Elsevier B.V. All rights reserved.
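The Padé-type calibration fit described in this abstract can be sketched as follows; this is a minimal illustration assuming the (2,1) rational form used in the companion paper below, with synthetic data and invented coefficients, not values from the study. Multiplying through by the denominator linearizes the fit so plain least squares suffices:

```python
import numpy as np

# Fit a Pade-type rational function y = (a0 + a1*x + a2*x**2) / (1 + b1*x).
# Rearranging y*(1 + b1*x) = a0 + a1*x + a2*x**2 gives
#   y = a0 + a1*x + a2*x**2 - b1*(x*y),
# which is linear in (a0, a1, a2, b1) and solvable by ordinary least squares.

def fit_pade_21(x, y):
    """Estimate (a0, a1, a2, b1) of y = (a0 + a1 x + a2 x^2) / (1 + b1 x)."""
    design = np.column_stack([np.ones_like(x), x, x**2, -x * y])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef

def eval_pade_21(x, a0, a1, a2, b1):
    return (a0 + a1 * x + a2 * x**2) / (1.0 + b1 * x)

x = np.linspace(0.0, 5.0, 25)
y = eval_pade_21(x, 0.1, 2.0, 0.3, 0.5)   # synthetic, noise-free calibration points
a0, a1, a2, b1 = fit_pade_21(x, y)
```

With noisy data a weighted or iteratively reweighted variant would be preferable, since the linearization reweights the residuals by the denominator.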
Bias and nonlinearity of ultraviolet calibration curves measured using diode-array detectors
Dose, E.V.; Guiochon, G. (Oak Ridge National Lab., TN)
1989-11-01
Models for the dependence of diode-array UV chromatographic detector response on bandpass and on the shape of the absorbing sample's spectrum are presented. The equations derived comprise terms describing two sources of non-ideal response due to the polychromatic nature of the detected radiation. The bias, or deviation at low concentrations of the measured absorbance from the ideal, zero-bandwidth value, increases roughly as the product of the spectrum's local second derivative and the square of the bandwidth. Calibration curve nonlinearity at higher concentrations, present for monochromator-based detectors and transmittance-averaging diode-array detectors, is described quantitatively. These equations confirm that the calibration curves always bend downward when the sample's absorption spectrum varies at all within the bandpass. A distinction is drawn between transmittance-averaging and absorbance-averaging diode-array detectors. Experimental results illustrate the types of bias and nonlinearity seen in each class at the high concentrations of interest to preparative-scale liquid chromatography and quality-control applications.
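The downward bend described in this abstract follows from transmittance averaging alone, and can be reproduced numerically. The sketch below assumes an invented absorptivity profile across the bandpass; it is an illustration of the mechanism, not the paper's model equations:

```python
import numpy as np

# A transmittance-averaging detector reports -log10 of the transmittance
# averaged over the bandpass. If the absorptivity varies across the bandpass,
# the measured absorbance falls below the ideal single-wavelength value and
# the calibration curve bends downward at high concentration.

offsets = np.linspace(-1.0, 1.0, 101)     # wavelength offsets within the bandpass (a.u.)
eps = 1.0 - 0.3 * offsets**2              # illustrative absorptivity profile

def measured_absorbance(c):
    transmittance = 10.0 ** (-eps * c)    # Beer-Lambert at each wavelength
    return -np.log10(transmittance.mean())  # detector averages transmittance

concs = np.array([0.1, 1.0, 3.0])
meas = np.array([measured_absorbance(c) for c in concs])
ideal = 1.0 * concs                       # zero-bandwidth response at band center
```

The measured values sit below the ideal line at every concentration, and the apparent sensitivity (absorbance per unit concentration) decreases with concentration, which is the downward curvature the abstract refers to.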
Michałowski, Tadeusz; Pilarski, Bogusław; Michałowska-Kaczmarczyk, Anna M; Kukwa, Agata
2014-06-01
Some rational functions of the Padé type, y=y(x; n,m), were applied to the calibration curve method (CCM) and compared with a parabolic function. The functions were tested on the results obtained from calibration of ion-selective electrodes: NH4-ISE, Ca-ISE, and F-ISE. The validity of the functions y=y(x; 2,1), y=y(x; 1,1), and y=y(x; 2,0) (parabolic) was compared. A uniform, integral criterion of nonlinearity of calibration curves is suggested. This uniformity is based on normalization of the approximating functions within the frames of a unit area. Copyright © 2014 Elsevier B.V. All rights reserved.
Sie, Meng-Jie; Chen, Bud-Gen; Chang, Chiung Dan; Lin, Chia-Han; Liu, Ray H
2011-01-21
It is common knowledge that detector fatigue causes a calibration curve to deviate from the preferred linear relationship at the higher concentration end. With the adoption of an isotopically labeled analog of the analyte as the internal standard (IS), cross-contribution (CC) of the intensities monitored for the ions designating the analyte and the IS can also result in a non-linear relationship at both ends. A novel approach developed to assess 'the extent and the effect of [CC]… in quantitative GC-MS analysis' can be extended (a) to examine whether a specific set of CC values is accurate; and (b) to differentiate whether an observed non-linear calibration curve is caused by detector fatigue or by the CC phenomenon. Data derived from the exemplar secobarbital (SB)/SB-d(5) system (as di-butyl derivatives) are used to illustrate this novel approach. Comparing the non-linear nature of the empirically observed calibration data to that derived from theoretical calculation (incorporating adjustments resulting from the ion CC phenomenon) supports the conclusions that (a) both CC and detector fatigue contribute significantly to the observed non-linear nature of the calibration curve based on ion-pair m/z 207/212; and (b) detector fatigue is the dominant contributor when the calibration curve is based on ion-pair m/z 263/268.
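The two-sided curvature caused by cross-contribution can be seen with a toy ratio model. The sketch below is an illustration of the CC mechanism only; the leakage fractions and amounts are invented, not the SB/SB-d(5) values:

```python
# Model: each ion channel receives a leakage fraction of the other species.
#   analyte ion  = n_analyte + F_IS * n_IS   (IS leaks into the analyte ion)
#   IS ion       = n_IS + F_AN * n_analyte   (analyte leaks into the IS ion)
# Without CC, the observed ratio would be exactly proportional to n_analyte.

F_IS = 0.02     # IS contribution to the analyte ion (illustrative)
F_AN = 0.03     # analyte contribution to the IS ion (illustrative)
N_IS = 100.0    # fixed amount of internal standard (illustrative)

def observed_ratio(n_analyte):
    analyte_ion = n_analyte + F_IS * N_IS
    is_ion = N_IS + F_AN * n_analyte
    return analyte_ion / is_ion

low = observed_ratio(1.0)       # biased high: IS leakage dominates
high = observed_ratio(1000.0)   # biased low: analyte leakage inflates the IS ion
```

At the low end the ratio exceeds the ideal proportional value (positive intercept-like bias), and at the high end it falls below it, giving the non-linearity at both ends described above.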
Brousmiche, Sébastien; Souris, Kevin; Orban de Xivry, Jonathan; Lee, John Aldo; Macq, Benoit; Seco, Joao
2017-08-17
Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly-correlated stochastic noise are more insidious and receive less attention, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors, thanks to averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and of the range in the continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations. © 2017 Institute of Physics and Engineering in Medicine.
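The bias at an angular point follows from Jensen's inequality: around a convex kink of the piecewise-linear calibration curve, zero-mean noise on the HU value gives a positive mean shift in the converted RSP. A Monte Carlo sketch with an invented two-segment curve and noise level (not the paper's stoichiometric calibration) shows the effect:

```python
import numpy as np

# Two-segment piecewise-linear "HU-to-RSP" curve with a convex kink at HU = 0.
# The slopes, offset, and noise level are illustrative.

def hu_to_rsp(hu):
    return np.where(hu < 0.0, 0.5 * hu, 1.5 * hu) + 1.0

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 20.0, size=200_000)   # zero-mean stochastic CT noise (HU)

# Mean converted value minus the noiseless conversion:
bias_at_kink = hu_to_rsp(0.0 + noise).mean() - hu_to_rsp(0.0)
bias_far_away = hu_to_rsp(300.0 + noise).mean() - hu_to_rsp(300.0)
```

At the kink the expected bias is (slope difference)/2 times the mean absolute noise, here about 8 HU-equivalent units, while far from the kink the conversion is locally linear and the bias vanishes. This is the intrinsic systematic error the abstract attributes to angular points.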
Variability among polysulphone calibration curves
NASA Astrophysics Data System (ADS)
Casale, G. R.; Borra, M.; Colosimo, A.; Colucci, M.; Militello, A.; Siani, A. M.; Sisto, R.
2006-09-01
Within an epidemiological study regarding the correlation between skin pathologies and personal ultraviolet (UV) exposure due to solar radiation, 14 field campaigns using polysulphone (PS) dosemeters were carried out at three different Italian sites (urban, semi-rural and rural) in every season of the year. A polysulphone calibration curve for each field experiment was obtained by measuring the ambient UV dose under almost clear sky conditions and the corresponding change in the PS film absorbance before and after exposure. Ambient UV doses were measured by well-calibrated broad-band radiometers and by electronic dosemeters. The dose-response relation was represented by the typical best fit to a third-degree polynomial and it was parameterized by a coefficient multiplying a cubic polynomial function. It was observed that the fit curves differed from each other in the coefficient only. It was assessed that the multiplying coefficient was affected by the solar UV spectrum at the Earth's surface whilst the polynomial factor depended on the photoinduced reaction of the polysulphone film. The mismatch between the polysulphone spectral curve and the CIE erythemal action spectrum was responsible for the variability among polysulphone calibration curves. The variability of the coefficient was related to the total ozone amount and the solar zenith angle. A mathematical explanation of such a parameterization was also discussed.
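The parameterization described above (one shared cubic, one campaign-specific multiplier) can be sketched numerically. The shared polynomial coefficients and the two campaign multipliers below are illustrative, not fitted values from the study:

```python
import numpy as np

# dose = k * (c1*dA + c2*dA**2 + c3*dA**3): the cubic factor is a property of
# the polysulphone film, the multiplier k of the incident UV spectrum.
SHARED = np.array([0.0, 12.0, 35.0, 160.0])     # c0..c3 of the shared cubic (illustrative)

def dose(d_abs, k):
    return k * np.polyval(SHARED[::-1], d_abs)  # polyval wants highest order first

d_abs = np.linspace(0.02, 0.30, 10)             # absorbance changes, pre/post exposure
campaign_a = dose(d_abs, k=1.00)                # one campaign's calibration curve
campaign_b = dose(d_abs, k=1.18)                # another spectrum, same film

ratio = campaign_b / campaign_a                 # constant across the whole curve
```

The point-by-point ratio of two such calibration curves is constant, which is exactly the observation that "the fit curves differed from each other in the coefficient only."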
Nonlinear mechanics of rigidifying curves
NASA Astrophysics Data System (ADS)
Al Mosleh, Salem; Santangelo, Christian
2017-07-01
Thin shells are characterized by a high cost of stretching compared to bending. As a result isometries of the midsurface of a shell play a crucial role in their mechanics. In turn, curves on the midsurface with zero normal curvature play a critical role in determining the number and behavior of isometries. In this paper, we show how the presence of these curves results in a decrease in the number of linear isometries. Paradoxically, shells are also known to continuously fold more easily across these rigidifying curves than other curves on the surface. We show how including nonlinearities in the strain can explain these phenomena and demonstrate folding isometries with explicit solutions to the nonlinear isometry equations. In addition to explicit solutions, exact geometric arguments are given to validate and guide our analysis in a coordinate-free way.
Calibration of a detector for nonlinear responses.
Asnin, Leonid; Guiochon, Georges
2005-09-30
In many studies of the thermodynamics and kinetics of adsorption by chromatography, a calibration curve is needed to derive the actual concentration profile of the eluate from the record of the detector signal. The calibration task is complicated in the frequent cases in which the detector response is nonlinear. The simplest approach consists in preparing a series of solutions of known concentrations, flushing them successively through the detector cell, and recording the height of the plateau response obtained. However, this method requires relatively large amounts of the pure solutes studied. These are not always available, may be very costly, and could be applied to better uses. An alternative procedure consists of deriving this calibration curve from a series of peaks recorded upon the injection of increasingly large pulses of the studied compound. We validated this new method in HPLC with a UV detector. Questions concerning the reproducibility and accuracy of the method are discussed.
Nonlinear Observers for Gyro Calibration
NASA Technical Reports Server (NTRS)
Thienel, Julie; Sanner, Robert M.
2003-01-01
High precision estimation and control algorithms, to achieve unprecedented levels of pointing accuracy, will be required to support future formation flying missions such as interferometry missions. Achieving high pointing accuracy requires precise knowledge of the spacecraft rotation rate. Typically, the rotation rate is measured by a gyro. The measured rates can be corrupted by errors in alignment and scale factor, gyro biases, and noise. In this work, we present nonlinear observers for gyro calibration. Nonlinear observers are superior to extended or pseudo-linear Kalman filter approaches in handling large errors and in providing global stability. Three nonlinear gyro calibration observers are developed. The first observer estimates a constant gyro bias. The second observer estimates scale factor errors. The third observer estimates the gyro alignment for three orthogonal gyros. The convergence properties of all three observers are discussed. Additionally, all three observers are coupled with a nonlinear control algorithm. The stability of each of the resulting closed loop systems is analyzed. The observers are then combined, and the gyro calibration parameters are estimated simultaneously. The stability of the combined observers is addressed, as well as the stability of the resulting closed loop systems. Simulated test results are presented for each scenario. Finally, the nonlinear observers are compared to a pseudo-linear Kalman filter.
Appropriate calibration curve fitting in ligand binding assays.
Findlay, John W A; Dillard, Robert F
2007-06-29
Calibration curves for ligand binding assays are generally characterized by a nonlinear relationship between the mean response and the analyte concentration. Typically, the response exhibits a sigmoidal relationship with concentration. The currently accepted reference model for these calibration curves is the 4-parameter logistic (4-PL) model, which optimizes accuracy and precision over the maximum usable calibration range. Incorporation of weighting into the model requires additional effort but generally results in improved calibration curve performance. For calibration curves with some asymmetry, introduction of a fifth parameter (5-PL) may further improve the goodness of fit of the experimental data to the algorithm. Alternative models should be used with caution and with knowledge of the accuracy and precision performance of the model across the entire calibration range, but particularly at upper and lower analyte concentration areas, where the 4- and 5-PL algorithms generally outperform alternative models. Several assay design parameters, such as placement of calibrator concentrations across the selected range and assay layout on multiwell plates, should be considered, to enable optimal application of the 4- or 5-PL model. The fit of the experimental data to the model should be evaluated by assessment of agreement of nominal and model-predicted data for calibrators.
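The 4-PL model and the back-calculation step mentioned at the end of this abstract can be written out directly, since the 4-PL has a closed-form inverse. The parameter values below are illustrative, not from any assay:

```python
import numpy as np

# 4-parameter logistic model, common convention:
#   a = response at zero concentration, d = response at infinite concentration,
#   c = inflection point (EC50-like), b = slope factor.

def four_pl(x, a, b, c, d):
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_four_pl(y, a, b, c, d):
    """Back-calculate concentration from a response strictly between a and d."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

params = dict(a=0.05, b=1.2, c=50.0, d=2.5)   # illustrative fitted parameters
x = np.array([5.0, 50.0, 500.0])              # nominal calibrator concentrations
y = four_pl(x, **params)                       # model-predicted responses
x_back = inverse_four_pl(y, **params)          # agreement check: nominal vs predicted
```

Comparing `x_back` against the nominal `x` for real calibrator data is precisely the model-agreement assessment the abstract recommends; in practice the fit itself would use weighted nonlinear regression.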
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived, which is solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60 bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
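The scheme described here (expand chi-square around the current parameter estimates, keep the linear term, solve the resulting simultaneous linear equations, iterate) is the classic Gauss-Newton method. A compact sketch, using an invented exponential model and synthetic data rather than anything from the NLINEAR program:

```python
import numpy as np

def model(x, a, b):
    return a * np.exp(-b * x)

def jacobian(x, a, b):
    # Partial derivatives of the model with respect to (a, b).
    return np.column_stack([np.exp(-b * x), -a * x * np.exp(-b * x)])

def gauss_newton(x, y, w, p0, iters=30):
    """Minimize weighted chi-square by iterating the linearized normal equations."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - model(x, *p)
        J = jacobian(x, *p)
        JTW = J.T * w                       # weights each observation by w (1/sigma^2)
        p = p + np.linalg.solve(JTW @ J, JTW @ r)
    return p

x = np.linspace(0.0, 4.0, 30)
y = model(x, 3.0, 0.7)                      # noise-free synthetic data
w = np.ones_like(x)                         # statistical weights
a_hat, b_hat = gauss_newton(x, y, w, p0=(2.5, 0.6))
```

As the abstract notes, meaningful initial estimates are required: far from the solution the linearization is poor and the plain iteration can diverge, which is why production codes add damping (Levenberg-Marquardt).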
NASA Technical Reports Server (NTRS)
Demoss, J. F. (Compiler)
1971-01-01
Calibration curves for the Apollo 16 command service module pulse code modulation downlink and onboard display are presented. Subjects discussed are: (1) measurement calibration curve format, (2) measurement identification, (3) multi-mode calibration data summary, (4) pulse code modulation bilevel events listing, and (5) calibration curves for instrumentation downlink and meter link.
Calibration of a detector for nonlinear chromatography
Asnin, Leonid; Galinada, Wilmer; Gotmar, Gustaf; Guiochon, Georges A
2005-06-01
In many studies of nonlinear or preparative chromatography, chromatographic signals must be recorded for relatively concentrated solutions, and the detectors, which are designed for analytical applications and are highly sensitive, must be used under such experimental conditions that their responses are often nonlinear. Then, a calibration curve is needed to derive the actual concentration profiles of the eluates from the measured detector response. It becomes necessary to derive a relationship between the concentration of the eluate and the detector signal at any given time. The simplest approach consists in preparing a series of solutions of known concentrations and in flushing them successively through the detector cell, recording the height of the plateau response obtained. However, this method requires relatively large amounts of the pure solutes being studied, and these are not always available or may be very costly, although the solutions can be recovered. We describe and validate an alternative procedure providing this calibration from a series of peaks recorded upon the injection of increasingly large pulses of the studied compound.
Calibration curves for some standard Gap Tests
Bowman, A.L.; Sommer, S.C.
1989-01-01
The relative shock sensitivities of explosive compositions are commonly assessed using a family of experiments that can be described by the generic term ''Gap Test.'' Gap tests include a donor charge, a test sample, and a spacer, or gap, between the two explosive charges. The donor charge, gap material, and test dimensions are held constant within each different version of the gap test. The thickness of the gap is then varied to find the value at which 50% of the test samples will detonate. The gap tests measure the ease with which a high-order detonation can be established in the test explosive, or the ''detonability,'' of the explosive. Test results are best reported in terms of the gap thickness at the 50% point. It is also useful to define the shock pressure transmitted into the test sample at the detonation threshold. This requires calibrating the gap test in terms of the shock pressure in the gap as a function of the gap thickness. It also requires knowledge of the shock Hugoniot of the sample explosive. We used the 2DE reactive hydrodynamic code with Forest Fire burn rates for the donor explosives to calculate calibration curves for several gap tests. The model calculations give pressure and particle velocity on the centerline of the experimental set-up and provide information about the curvature and pulse width of the shock wave. 10 refs., 1 fig.
Visualizing Nonlinear Narratives with Story Curves.
Kim, Nam Wook; Bach, Benjamin; Im, Hyejin; Schriber, Sasha; Gross, Markus; Pfister, Hanspeter
2017-08-29
In this paper, we present story curves, a visualization technique for exploring and communicating nonlinear narratives in movies. A nonlinear narrative is a storytelling device that portrays events of a story out of chronological order, e.g., in reverse order or going back and forth between past and future events. Many acclaimed movies employ unique narrative patterns which in turn have inspired other movies and contributed to the broader analysis of narrative patterns in movies. However, understanding and communicating nonlinear narratives is a difficult task due to complex temporal disruptions in the order of events as well as no explicit records specifying the actual temporal order of the underlying story. Story curves visualize the nonlinear narrative of a movie by showing the order in which events are told in the movie and comparing them to their actual chronological order, resulting in possibly meandering visual patterns in the curve. We also present Story Explorer, an interactive tool that visualizes a story curve together with complementary information such as characters and settings. Story Explorer further provides a script curation interface that allows users to specify the chronological order of events in movies. We used Story Explorer to analyze 10 popular nonlinear movies and describe the spectrum of narrative patterns that we discovered, including some novel patterns not previously described in the literature. Feedback from experts highlights potential use cases in screenplay writing and analysis, education and film production. A controlled user study shows that users with no expertise are able to understand visual patterns of nonlinear narratives using story curves.
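One simple way to quantify the "meandering" of a story curve, assuming each scene has been tagged with its chronological position (the script-curation step described above), is to count pairwise inversions between telling order and story order. This measure is this editor's illustrative sketch, not a metric defined in the paper:

```python
# Inversion count between narrative order and chronological order:
# 0 for a perfectly linear film, n*(n-1)/2 for a strictly reverse-order one.

def narrative_inversions(chronological_positions):
    order = chronological_positions
    n = len(order)
    return sum(1 for i in range(n)
                 for j in range(i + 1, n)
                 if order[i] > order[j])

linear = [0, 1, 2, 3, 4]       # scenes told in chronological order
reverse = [4, 3, 2, 1, 0]      # e.g. a strictly reverse-order narrative
shuffled = [2, 0, 4, 1, 3]     # back-and-forth between past and future
```

A flat story curve corresponds to zero inversions; the more the curve meanders, the higher the count.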
Reduced Calibration Curve for Proton Computed Tomography
Yevseyeva, Olga; Assis, Joaquim de; Diaz, Katherin
2010-05-21
The pCT deals with relatively thick targets like the human head or trunk. Thus, the fidelity of pCT as a tool for proton therapy planning depends on the accuracy of physical formulas used for proton interaction with thick absorbers. Although the actual overall accuracy of the proton stopping power in the Bethe-Bloch domain is about 1%, the analytical calculations and the Monte Carlo simulations with codes like TRIM/SRIM, MCNPX and GEANT4 do not agree with each other. Attempts to validate the codes against experimental data for thick absorbers encounter some difficulties: only a few data sets are available, and the existing data sets have been acquired at different initial proton energies and for different absorber materials. In this work we compare the results of our Monte Carlo simulations with existing experimental data in terms of a reduced calibration curve, i.e., the range-energy dependence normalized on the range scale by the full projected CSDA range for a given initial proton energy in a given material, taken from the NIST PSTAR database, and on the final proton energy scale by the given initial energy of the protons. This approach is almost energy and material independent. The results of our analysis are important for pCT development because the contradictions observed at arbitrarily low initial proton energies can now be easily scaled to typical pCT energies.
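The near material independence of the reduced curve can be illustrated with a Bragg-Kleeman-type range law, R(E) = alpha * E**p: after the normalization described above, alpha cancels entirely. The exponent and coefficients below are illustrative, and this power law is a stand-in for the PSTAR tables used in the paper:

```python
import numpy as np

P = 1.77   # Bragg-Kleeman-type exponent (typical order of magnitude)

def reduced_curve(e_final, e_initial, alpha):
    """Normalize depth by the full CSDA range and energy by the initial energy."""
    csda_full = alpha * e_initial ** P          # full projected range for e_initial
    depth = csda_full - alpha * e_final ** P    # depth travelled to slow to e_final
    return e_final / e_initial, depth / csda_full

e0 = 200.0                                      # initial proton energy (MeV, illustrative)
e_final = np.linspace(10.0, 200.0, 20)
_, depth_a = reduced_curve(e_final, e0, alpha=0.0022)   # one "material"
_, depth_b = reduced_curve(e_final, e0, alpha=0.0050)   # a different "material"
```

Under this law the reduced depth is 1 - (e_final/e0)**P, independent of alpha, so the two "materials" collapse onto one curve; deviations of real data from such a collapse are exactly what the comparison in the paper probes.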
Psychophysical evaluation of calibration curve for diagnostic LCD monitor.
Uemura, Masanobu; Asai, Yoshiyuki; Yamaguchi, Michihiro; Fujita, Hideki; Shintani, Yuuko; Sanada, Shigeru
2006-12-01
In 1998, Digital Imaging and Communications in Medicine (DICOM) proposed a calibration tool, the grayscale standard display function (GSDF), to obtain output consistency of radiographs. To our knowledge, there have been no previous reports investigating the relation between perceptual linearity and detectability on a calibration curve. To determine a suitable calibration curve for diagnostic liquid crystal display (LCD) monitors, the GSDF and Commission Internationale de l'Eclairage (CIE) curves were compared using the psychophysical gradient delta and receiver operating characteristic (ROC) analysis for clinical images. We succeeded in expressing visually recognized contrast directly using delta instead of the just noticeable difference (JND) index of the DICOM standard. As a result, we found that the visually recognized contrast at low luminance areas on the LCD monitor calibrated by the CIE curve is higher than that calibrated by the GSDF curve. On the ROC analysis, there was no significant difference in tumor detectability between GSDF and CIE curves for clinical thoracic images. However, the area parameter Az of the CIE curve is superior to that of the GSDF curve. The detectability of tumor shadows in the thoracic region on clinical images using the CIE curve was superior to that using the GSDF curve owing to the high absolute value of delta in the low luminance range. We conclude that the CIE curve is the most suitable tool for calibrating diagnostic LCD monitors, rather than the GSDF curve.
Nonlinear Growth Curves in Developmental Research
Grimm, Kevin J.; Ram, Nilam; Hamagami, Fumiaki
2011-01-01
Developmentalists are often interested in understanding change processes, and growth models are the most common analytic tool for examining such processes. Nonlinear growth curves are especially valuable to developmentalists because the defining characteristics of the growth process such as initial levels, rates of change during growth spurts, and asymptotic levels can be estimated. A variety of growth models are described beginning with the linear growth model and moving to nonlinear models of varying complexity. A detailed discussion of nonlinear models is provided, highlighting the added insights into complex developmental processes associated with their use. A collection of growth models are fit to repeated measures of height from participants of the Berkeley Growth and Guidance Studies from early childhood through adulthood. PMID:21824131
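A logistic curve is a standard example of a nonlinear growth model with exactly the interpretable parameters this abstract mentions: an asymptotic level K, a rate r, and the age of fastest growth t0. The sketch below uses invented numbers, not the Berkeley data, and exploits the fact that for known K the model linearizes:

```python
import numpy as np

# Logistic growth: y(t) = K / (1 + exp(-r * (t - t0))).
# With K known, log(K/y - 1) = -r*t + r*t0 is linear in t.

K = 180.0                                      # asymptotic height (cm, illustrative)
t = np.linspace(2.0, 18.0, 17)                 # ages in years
y = K / (1.0 + np.exp(-0.35 * (t - 12.0)))     # synthetic, noise-free "heights"

z = np.log(K / y - 1.0)                        # linearizing transform
slope, intercept = np.polyfit(t, z, 1)
r_hat = -slope                                 # recovered growth rate
t0_hat = intercept / -slope                    # recovered age of fastest growth
```

In real developmental data K is unknown and the full model is fit by nonlinear (often mixed-effects) estimation, but the linearized version shows why each parameter is directly interpretable.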
Error Modeling and Confidence Interval Estimation for Inductively Coupled Plasma Calibration Curves.
1987-02-01
… confidence interval estimation for multiple use of the calibration curve … calculate weights for the calibration curve fit. Multiple and single-use confidence interval estimates are obtained, and results along the calibration curve are …
A Robust Bayesian Random Effects Model for Nonlinear Calibration Problems
Fong, Y.; Wakefield, J.; De Rosa, S.; Frahm, N.
2013-01-01
In the context of a bioassay or an immunoassay, calibration means fitting a curve, usually nonlinear, through the observations collected on a set of samples containing known concentrations of a target substance, and then using the fitted curve and observations collected on samples of interest to predict the concentrations of the target substance in these samples. Recent technological advances have greatly improved our ability to quantify minute amounts of substance from a tiny volume of biological sample. This has in turn led to a need to improve statistical methods for calibration. In this paper, we focus on developing calibration methods robust to dependent outliers. We introduce a novel normal mixture model with dependent error terms to model the experimental noise. In addition, we propose a re-parameterization of the five parameter logistic nonlinear regression model that allows us to better incorporate prior information. We examine the performance of our methods with simulation studies and show that they lead to a substantial increase in performance measured in terms of mean squared error of estimation and a measure of the average prediction accuracy. A real data example from the HIV Vaccine Trials Network Laboratory is used to illustrate the methods. PMID:22551415
Ardekani, Mohammad Ali; Nafisi, Vahid Reza; Farhani, Foad
2012-10-01
Hot-wire spirometer is a kind of constant temperature anemometer (CTA). The working principle of CTA, used for the measurement of fluid velocity and flow turbulence, is based on convective heat transfer from a hot-wire sensor to a fluid being measured. The calibration curve of a CTA is nonlinear and cannot be easily extrapolated beyond its calibration range. Therefore, a method for extrapolation of CTA calibration curve will be of great practical application. In this paper, a novel approach based on the conventional neural network and self-organizing map (SOM) method has been proposed to extrapolate CTA calibration curve for measurement of velocity in the range 0.7-30 m/s. Results show that, using this approach for the extrapolation of the CTA calibration curve beyond its upper limit, the standard deviation is about -0.5%, which is acceptable in most cases. Moreover, this approach for the extrapolation of the CTA calibration curve below its lower limit produces standard deviation of about 4.5%, which is acceptable in spirometry applications. Finally, the standard deviation on the whole measurement range (0.7-30 m/s) is about 1.5%.
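The conventional calibration that such an extrapolation method competes with is King's law, E^2 = A + B * U**n, which is linear in (A, B) once the exponent is fixed. A sketch with invented coefficients (not the paper's sensor data), including an extrapolation beyond the fitted range:

```python
import numpy as np

N_EXP = 0.45   # King's-law exponent, held fixed (a commonly used value)

def fit_kings_law(u, e):
    """Least-squares fit of E^2 = A + B * U**n over the calibration range."""
    design = np.column_stack([np.ones_like(u), u ** N_EXP])
    (a, b), *_ = np.linalg.lstsq(design, e ** 2, rcond=None)
    return a, b

def velocity_from_voltage(e, a, b):
    return ((e ** 2 - a) / b) ** (1.0 / N_EXP)

u_cal = np.linspace(0.7, 10.0, 15)                 # calibration velocities (m/s)
e_cal = np.sqrt(1.2 + 0.8 * u_cal ** N_EXP)        # synthetic CTA bridge voltages
a, b = fit_kings_law(u_cal, e_cal)

# Extrapolate to a voltage corresponding to 25 m/s, outside the fitted range:
u_extrap = velocity_from_voltage(np.sqrt(1.2 + 0.8 * 25.0 ** N_EXP), a, b)
```

With synthetic data that exactly obey King's law the extrapolation is exact; with real sensors the law drifts outside the calibrated range, which is the problem the neural-network/SOM approach in the paper addresses.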
Calibrating Curved Crystals Used for Plasma Spectroscopy
Haugh, M. J.; Jacoby, K. D.; Ross, P. W.; Rochau, G.; Wu, M.; Regan, S. P.; Barrios, M. A.
2012-10-29
The throughput and resolving power of an X-ray spectrometer that uses a curved crystal as the diffraction element is determined primarily by the crystal X-ray reflectivity properties. This poster presents a measurement technique for these crystal parameters using a simple diode source to produce a narrow spectral band. The results from measurements on concave elliptical polyethylene terephthalate (PET) crystals and convex potassium acid phthalate (KAP) crystals show large variations in the key parameters compared to those from the flat crystal.
Finite element model calibration of a nonlinear perforated plate
NASA Astrophysics Data System (ADS)
Ehrhardt, David A.; Allen, Matthew S.; Beberniss, Timothy J.; Neild, Simon A.
2017-03-01
This paper presents a case study in which the finite element model for a curved circular plate is calibrated to reproduce both the linear and nonlinear dynamic response measured from two nominally identical samples. The linear dynamic response is described with the linear natural frequencies and mode shapes identified with a roving hammer test. Due to the uncertainty in the stiffness characteristics from the manufactured perforations, the linear natural frequencies are used to update the effective modulus of elasticity of the full order finite element model (FEM). The nonlinear dynamic response is described with nonlinear normal modes (NNMs) measured using force appropriation and high speed 3D digital image correlation (3D-DIC). The measured NNMs are used to update the boundary conditions of the full order FEM through comparison with NNMs calculated from a nonlinear reduced order model (NLROM). This comparison revealed that the nonlinear behavior could not be captured without accounting for the small curvature of the plate introduced during manufacturing, as confirmed in the literature. Therefore, 3D-DIC was also used to identify the initial static curvature of each plate, and the resulting curvature was included in the full order FEM. The updated models are then used to understand how the stress distribution changes at large response amplitudes, providing a possible explanation for failures observed during testing.
Usefulness of information criteria for the selection of calibration curves.
Rozet, E; Ziemons, E; Marini, R D; Hubert, Ph
2013-07-02
The reliability of analytical results obtained with quantitative analytical methods is highly dependent on the selection of an adequate model for the calibration curve. The best-known and most widely used criterion for selecting an adequate response function or model is the coefficient of determination R(2). However, it is well known that this criterion suffers from many shortcomings, such as leading to overfitting of the data. One proposed solution is to use the adjusted determination coefficient R(adj)(2), which aims at reducing this problem. However, another family of criteria exists for selecting an adequate model: the information criteria AIC, AICc, and BIC. These criteria have rarely been used in analytical chemistry to select the adequate calibration curve. This work aims at assessing the performance of the statistical information criteria, as well as R(2) and R(adj)(2), for the selection of an adequate calibration curve. They are applied to several analytical methods, covering liquid chromatographic methods as well as electrophoretic ones, involved in the analysis of active substances in biological fluids or aimed at quantifying impurities in drug substances. In addition, Monte Carlo simulations are performed to assess the efficacy of these statistical criteria in selecting the adequate calibration curve.
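The criteria named above have standard closed forms; a minimal sketch (textbook formulas and synthetic data, not the paper's simulation design) comparing R², adjusted R², AIC, AICc and BIC for two candidate calibration models:

```python
import numpy as np

# Standard model-selection criteria from the residual sum of squares.
def criteria(y, y_hat, k):
    """k = number of fitted parameters (including the intercept)."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - rss / tss
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - k)
    aic = n * np.log(rss / n) + 2 * k                 # Gaussian-likelihood form
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)        # small-sample correction
    bic = n * np.log(rss / n) + k * np.log(n)
    return {"R2": r2, "R2adj": r2_adj, "AIC": aic, "AICc": aicc, "BIC": bic}

# Linear truth with noise: R^2 always rewards the extra quadratic term,
# while the information criteria penalize it.
rng = np.random.default_rng(0)
x = np.linspace(1, 10, 20)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)
lin = np.polyval(np.polyfit(x, y, 1), x)
quad = np.polyval(np.polyfit(x, y, 2), x)
c1, c2 = criteria(y, lin, 2), criteria(y, quad, 3)
```

This illustrates the abstract's point: R² can only increase with model complexity, so it cannot by itself guard against overfitting.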
Nonlinear bulging factor based on R-curve data
NASA Technical Reports Server (NTRS)
Jeong, David Y.; Tong, Pin
1994-01-01
In this paper, a nonlinear bulging factor is derived using a strain energy approach combined with dimensional analysis. The functional form of the bulging factor contains an empirical constant that is determined using R-curve data from unstiffened flat and curved panel tests. The determination of this empirical constant is based on the assumption that the R-curve is the same for both flat and curved panels.
Nonlinear normal modes modal interactions and isolated resonance curves
Kuether, Robert J.; Renson, L.; Detroux, T.; Grappasonni, C.; Kerschen, G.; Allen, M. S.
2015-05-21
The objective of the present study is to explore the connection between the nonlinear normal modes of an undamped and unforced nonlinear system and the isolated resonance curves that may appear in the damped response of the forced system. To this end, an energy balance technique is used to predict the amplitude of the harmonic forcing that is necessary to excite a specific nonlinear normal mode. A cantilever beam with a nonlinear spring at its tip serves to illustrate the developments. Furthermore, the practical implications of isolated resonance curves are also discussed by computing the beam response to sine sweep excitations of increasing amplitudes.
Identification of systems containing nonlinear stiffnesses using backbone curves
NASA Astrophysics Data System (ADS)
Londoño, Julián M.; Cooper, Jonathan E.; Neild, Simon A.
2017-02-01
This paper presents a method for the dynamic identification of structures containing discrete nonlinear stiffnesses. The approach requires the structure to be excited at a single resonant frequency, enabling measurements to be made in regimes of large displacements where nonlinearities are more likely to be significant. Measured resonant decay data are used to estimate the system backbone curves. Linear natural frequencies and nonlinear parameters are identified using these backbone curves, assuming a form for the nonlinear behaviour. Numerical and experimental examples, inspired by an aerospace industry test case study, are considered to illustrate how the method can be applied. Results from these models demonstrate that the method can successfully deliver nonlinear models able to predict the nonlinear dynamic response of the test structure.
Brien, William F; Crawford, Linda; Raby, Anne; Richardson, Harold
2004-03-01
The international normalized ratio (INR) has been used since 1983 to standardize prothrombin time results for patients on oral anticoagulants. However, significant interlaboratory variations have been noted. Attempts to improve INR performance include the use of instrument-specific International Sensitivity Index (ISI) values, model-specific ISI values, in-house calibration of ISI values and, more recently, the preparation of a calibration curve; several studies have shown an improvement in performance using these procedures. To assess the performance of laboratories using a calibration curve for INR testing, this study of licensed laboratories performing routine coagulation testing in the Province of Ontario, Canada, compared the determination of the INR by a calibration curve with the laboratories' usual method of assessment. These methods were subsequently analyzed by comparing the results to instrument-specific ISI, model-specific ISI, and in-house calibrators. INRs derived by both methods were analyzed for accuracy and precision. The stability of a calibration curve was also investigated. Performance of INR testing improved with use of a calibration curve or in-house calibrators; the results confirm that either approach improves performance of INR testing. The calibration curve may be easier to use and appears stable for up to 4 months.
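For context, the two approaches compared above can be contrasted arithmetically; a hypothetical sketch (illustrative MNPT, ISI and calibrator values, not the study's data) of the standard ISI power law versus a locally fitted calibration curve:

```python
import numpy as np

# Conventional INR: power law with the instrument's International
# Sensitivity Index (ISI) and mean normal prothrombin time (MNPT).
def inr_from_isi(pt, mnpt=12.0, isi=1.1):
    return (pt / mnpt) ** isi

# Calibration-curve INR: hypothetical calibrator plasmas of certified INR,
# interpolated on log-log axes (one plausible curve shape, not the study's).
cal_pt = np.array([12.0, 18.0, 24.0, 36.0])   # prothrombin times, seconds
cal_inr = np.array([1.0, 1.6, 2.2, 3.5])      # certified INR values

def inr_from_curve(pt):
    return np.exp(np.interp(np.log(pt), np.log(cal_pt), np.log(cal_inr)))
```

The calibration curve bypasses the ISI and MNPT entirely, which is why it can be easier to maintain when reagent lots change.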
Calibration of hydrological models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.
2010-12-01
The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable importance of the performance with flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application e.g. using more/less EPs at high/low flows. While the new method is less sensitive to epistemic input/output errors than the normal use of limits of
Calibration of hydrological models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.
2011-07-01
The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of acceptability applied
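The equal-volume selection of evaluation points described above can be sketched as follows (implementation details and synthetic flows are assumptions, not the paper's code):

```python
import numpy as np

# Build a flow-duration curve (FDC) from daily discharge and pick evaluation
# points (EPs) at equal increments of cumulative volume, concentrating EPs
# where most of the water is conveyed.

def flow_duration_curve(q):
    q_sorted = np.sort(q)[::-1]                       # descending discharge
    exceedance = np.arange(1, q.size + 1) / (q.size + 1)
    return exceedance, q_sorted

def volume_spaced_eps(q, n_eps=5):
    exceedance, q_sorted = flow_duration_curve(q)
    cumvol = np.cumsum(q_sorted) / np.sum(q_sorted)   # fraction of total volume
    targets = np.linspace(0, 1, n_eps + 2)[1:-1]      # interior targets only
    idx = np.searchsorted(cumvol, targets)
    return exceedance[idx], q_sorted[idx]

rng = np.random.default_rng(2)
q = rng.lognormal(mean=1.0, sigma=1.0, size=3650)     # synthetic daily flows
ep_ex, ep_q = volume_spaced_eps(q, n_eps=5)
```

With skewed flow distributions the first EPs land at low exceedance probabilities, i.e. at high flows, which is exactly the behaviour the volume method exploits.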
LAMOST Spectrograph Response Curves: Stability and Application to Flux Calibration
NASA Astrophysics Data System (ADS)
Du, Bing; Luo, A.-Li; Kong, Xiao; Zhang, Jian-Nan; Guo, Yan-Xin; Cook, Neil James; Hou, Wen; Yang, Hai-Feng; Li, Yin-Bi; Song, Yi-Han; Chen, Jian-Jun; Zuo, Fang; Wu, Ke-Fei; Wang, Meng-Xin; Wu, Yue; Wang, You-Fen; Zhao, Yong-Heng
2016-12-01
The flux calibration of Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) spectra is difficult due to many factors, such as the lack of standard stars, flat-fielding over a large field of view, and variation of reddening between different stars, especially at low Galactic latitudes. Poor selection, bad spectral quality, or extinction uncertainty of standard stars might not only induce errors in the calculated spectral response curve (SRC) but also lead to failures in producing final 1D spectra. In this paper, we inspected spectra with Galactic latitude |b| ≥ 60° and reliable stellar parameters, determined through the LAMOST Stellar Parameter Pipeline (LASP), to study the stability of the spectrographs. To guarantee that the selected stars had been observed by every fiber, we selected 37,931 high-quality exposures of 29,000 stars from LAMOST DR2, with more than seven exposures for each fiber. We calculated the SRC for each fiber for each exposure and computed statistics of the SRCs for each spectrograph over both fiber-to-fiber and time variations. The results show that the average response curve of each spectrograph (henceforth ASPSRC) is relatively stable, with statistical errors ≤10%. From the comparison between each ASPSRC and the SRCs for the same spectrograph obtained by the 2D pipeline, we find that the ASPSRCs are good enough to be used for the calibration. The ASPSRCs have been applied to spectra that were abandoned by the LAMOST 2D pipeline for lack of standard stars, increasing the number of LAMOST spectra by 52,181 in DR2. Comparing those same targets with the Sloan Digital Sky Survey (SDSS), the relative flux differences between SDSS spectra and LAMOST spectra calibrated with the ASPSRC method are less than 10%, which underlines that the ASPSRC method is feasible for LAMOST flux calibration.
Tani, Hidenori; Kanagawa, Takahiro; Morita, Nao; Kurata, Shinya; Nakamura, Kazunori; Tsuneda, Satoshi; Noda, Naohiro
2007-10-01
We have developed a simple quantitative method for specific nucleic acid sequences without using calibration curves. This method is based on the combined use of competitive polymerase chain reaction (PCR) and fluorescence quenching. We amplified a gene of interest (target) from DNA samples and an internal standard (competitor) with a sequence-specific fluorescent probe using PCR and measured the fluorescence intensities before and after PCR. The fluorescence of the probe is quenched on hybridization with the target by guanine bases, whereas the fluorescence is not quenched on hybridization with the competitor. Therefore, quench rate (i.e., fluorescence intensity after PCR divided by fluorescence intensity before PCR) is always proportional to the ratio of the target to the competitor. Consequently, we can calculate the ratio from quench rate without using a calibration curve and then calculate the initial copy number of the target from the ratio and the initial copy number of the competitor. We successfully quantified the copy number of a recombinant DNA of genetically modified (GM) soybean and estimated the GM soybean contents. This method will be particularly useful for rapid field tests of the specific gene contamination in samples.
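The quantification logic of the abstract above reduces to simple arithmetic once the pure-template quench rates are known; a toy sketch under an explicit linear-mixing assumption (the function, its arguments, and the numbers are all illustrative, not the authors' protocol):

```python
# Assumption made explicit here: the observed quench rate is the
# copy-weighted mean of the two pure-template quench rates, so the
# target:competitor ratio follows directly, with no calibration curve.

def target_copies(quench_rate, q_target_only, q_competitor_only,
                  competitor_copies):
    """Initial target copy number from the observed quench rate.

    q_target_only      - quench rate when only target is amplified (quenched)
    q_competitor_only  - quench rate when only competitor is amplified
    competitor_copies  - known initial copy number of the competitor
    """
    frac_target = (q_competitor_only - quench_rate) / (
        q_competitor_only - q_target_only)
    ratio = frac_target / (1.0 - frac_target)   # target / competitor
    return ratio * competitor_copies
```

For example, an observed quench rate midway between the two pure-template rates implies a 1:1 ratio, so the target copy number equals the competitor's.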
Evaluation of nonlinear calibration on the satellite TIR image applications
NASA Astrophysics Data System (ADS)
Lu, Feng; Zhang, Xiaohu; Wu, Xiao; Cui, Peng; Xu, Na
2014-11-01
Using the Global Space-based Inter-Calibration System (GSICS), the thermal infrared (TIR) channel was calibrated with high precision. During this procedure, a new calibration table was generated and the sensor's nonlinear effect was corrected. Bilinear interpolation is widely used during TIR channel image remapping and Level 2 (L2) dataset generation. Most L2 data are stored as D/N counts with a corresponding calibration table, which assumes that the D/N count is linear. In reality, nonlinearity in the D/N counts, which arises from an imprecisely modeled A/D transformation in the instrument sensor, leads to temperature biases in the L2 dataset even when a high-precision calibration look-up table (LUT) is regenerated. In this paper, the D/N bias arising from the remapping process is diagnosed, taking into account the temperature difference between neighboring pixels.
Towards a North Atlantic Marine Radiocarbon Calibration Curve
NASA Astrophysics Data System (ADS)
Austin, William; Reimer, Paula; Blaauw, Maarten; Bryant, Charlotte; Rae, James; Burke, Andrea
2015-04-01
Lunch is served! Twenty years ago, in 1995, I sailed as a post-doctoral researcher based at the University of Edinburgh (UK) on the first scientific mission of the new Marion Dufresne II. In this presentation, I will provide an update on the work that first quantified North Atlantic marine radiocarbon reservoir ages, highlighting how advances in marine tephrochronology over the last twenty years have significantly improved our understanding (and ability to test) of land-ice-ocean linkages. The mechanistic link that connects marine radiocarbon reservoir ages to ocean ventilation state will also be discussed with reference to the Younger Dryas climate anomaly, where models and data have been successfully integrated. I will discuss the use of reference chronologies in the North Atlantic region and evaluate the common practice of climate synchronization between the Greenland ice cores and some of the key MD records that are now available. The exceptional quality of the MD giant piston cores and their potential to capture high-resolution last glacial sediment records from the North Atlantic provides an exciting opportunity to build new regional marine radiocarbon calibration curves. I will highlight new efforts by my co-authors and others to build such curves, setting out a new agenda for the next twenty years of the IMAGES programme.
A Bayesian approach for estimating calibration curves and unknown concentrations in immunoassays
Feng, Feng; Sales, Ana Paula; Kepler, Thomas B.
2011-01-01
Motivation: Immunoassays are primary diagnostic and research tools throughout the medical and life sciences. The common approach to the processing of immunoassay data involves estimation of the calibration curve followed by inversion of the calibration function to read off the concentration estimates. This approach, however, does not lend itself easily to acceptable estimation of confidence limits on the estimated concentrations. Such estimates must account for uncertainty in the calibration curve as well as uncertainty in the target measurement. Even point estimates can be problematic: because of the non-linearity of calibration curves and error heteroscedasticity, the neglect of components of measurement error can produce significant bias. Methods: We have developed a Bayesian approach for the estimation of concentrations from immunoassay data that treats the propagation of measurement error appropriately. The method uses Markov Chain Monte Carlo (MCMC) to approximate the posterior distribution of the target concentrations and numerically compute the relevant summary statistics. Software implementing the method is freely available for public use. Results: The new method was tested on both simulated and experimental datasets with different measurement error models. The method outperformed the common inverse method on samples with large measurement errors. Even in cases with extreme measurements where the common inverse method failed, our approach always generated reasonable estimates for the target concentrations. Availability: Project name: Baecs; Project home page: www.computationalimmunology.org/utilities/; Operating systems: Linux, MacOS X and Windows; Programming language: C++; License: Free for Academic Use. Contact: feng.feng@duke.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21149344
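The Bayesian propagation idea described above can be sketched with a toy Metropolis sampler (this is NOT the authors' Baecs software; the 4PL parameters, noise level, and flat priors are assumptions made for illustration):

```python
import numpy as np

# Jointly sample the calibration-curve parameters AND the unknown
# concentration x0, so the posterior for x0 carries both sources of
# uncertainty instead of naively inverting a fitted curve.

def four_pl(x, a, b, c, d):
    """4-parameter logistic calibration curve, common in immunoassays."""
    return d + (a - d) / (1.0 + (x / c) ** b)

rng = np.random.default_rng(1)
true = dict(a=0.05, b=1.2, c=50.0, d=2.0)
x_std = np.array([1, 5, 10, 50, 100, 500], float)      # standards
y_std = four_pl(x_std, **true) + rng.normal(0, 0.03, x_std.size)
y_obs = four_pl(np.array([30.0]), **true)[0] + rng.normal(0, 0.03)

def log_post(theta):
    a, b, c, d, x0 = theta
    if b <= 0 or c <= 0 or x0 <= 0:
        return -np.inf                                  # flat prior support
    sigma = 0.03                                        # assumed known SD
    resid = y_std - four_pl(x_std, a, b, c, d)
    ll = -0.5 * np.sum(resid**2) / sigma**2
    ll += -0.5 * (y_obs - four_pl(x0, a, b, c, d)) ** 2 / sigma**2
    return ll

theta = np.array([0.05, 1.2, 50.0, 2.0, 30.0])          # start near truth
lp, samples = log_post(theta), []
for _ in range(20000):                                  # random-walk Metropolis
    prop = theta + rng.normal(0, [0.01, 0.05, 2.0, 0.01, 2.0])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta[-1])
x0_draws = np.array(samples[5000:])                     # drop burn-in
x0_mean, x0_ci = x0_draws.mean(), np.percentile(x0_draws, [2.5, 97.5])
```

The credible interval from `x0_draws` is wider than what curve inversion alone would give, which is precisely the point of propagating calibration uncertainty.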
Nonlinear dynamical modes of climate variability: from curves to manifolds
NASA Astrophysics Data System (ADS)
Gavrilov, Andrey; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander
2016-04-01
The necessity of efficient dimensionality-reduction methods capturing the dynamical properties of a system from observed data is evident. A recent study shows that nonlinear dynamical mode (NDM) expansion is able to solve this problem and provide adequate phase variables in climate data analysis [1]. A single NDM is a logical extension of a linear spatio-temporal structure (like an empirical orthogonal function pattern): it is constructed as a nonlinear transformation of a hidden scalar time series to the space of observed variables, i.e. a projection of the observed dataset onto a nonlinear curve. Both the hidden time series and the parameters of the curve are learned simultaneously using a Bayesian approach. The only prior information about the hidden signal is the assumption of its smoothness. The optimal nonlinearity degree and smoothness are found using the Bayesian evidence technique. In this work we extend the approach further and look for vector hidden signals instead of scalar ones, with the same smoothness restriction. As a result we resolve multidimensional manifolds instead of sums of curves. The dimension of the hidden manifold is also optimized using Bayesian evidence. The efficiency of the extension is demonstrated on model examples. Results of application to climate data are demonstrated and discussed. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510
Assessment of the calibration curve for transmittance pulse-oximetry
NASA Astrophysics Data System (ADS)
Doronin, A.; Fine, I.; Meglinski, I.
2011-11-01
Optical/laser modalities provide a broad variety of practical solutions for clinical diagnostics and therapy, ranging from imaging of single cells and molecules to non-invasive biopsy and tomography of specific biological tissues and organs. Near-infrared transmittance pulse oximetry with laser diodes is the accepted standard in current clinical practice and is widely used for noninvasive monitoring of oxygen saturation in arterial blood hemoglobin. The conceptual design of practical pulse oximetry systems requires careful selection of various technical parameters, including the intensity, wavelength, beam size and profile of the incident laser radiation, and the size and numerical aperture of the detector, as well as a clear understanding of how spatial and temporal structural alterations in biological tissues can be linked with, and distinguished by, variations of these parameters. In the current letter, utilizing state-of-the-art NVIDIA CUDA technology, a new object-oriented programming paradigm, and on-line solutions, we introduce a computational tool for simulating human finger transmittance spectra and assessing the calibration curve for near-infrared transmittance pulse oximetry.
Nonlinear Least Squares Curve Fitting with Microsoft Excel Solver
NASA Astrophysics Data System (ADS)
Harris, Daniel C.
1998-01-01
"Solver" is a powerful tool in the Microsoft Excel spreadsheet that provides a simple means of fitting experimental data to nonlinear functions. The procedure is so easy to use and its mode of operation is so obvious that it is excellent for students learning the underlying principles of least-squares curve fitting. This article introduces the method of fitting nonlinear functions with Solver and extends the treatment to weighted least squares and to the estimation of uncertainties in the least-squares parameters.
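A present-day equivalent of the Solver workflow described above, sketched in Python with scipy standing in for Excel (the saturation model and noise levels are illustrative, not the article's examples):

```python
import numpy as np
from scipy.optimize import curve_fit

# Weighted nonlinear least squares with parameter uncertainties taken from
# the covariance matrix, mirroring the article's weighted-fit extension.

def model(x, a, b):
    return a * (1.0 - np.exp(-b * x))    # example saturation curve

rng = np.random.default_rng(3)
x = np.linspace(0.5, 10, 15)
sigma = 0.02 + 0.01 * x                  # known heteroscedastic errors
y = model(x, 2.0, 0.7) + rng.normal(0, sigma)

popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0],
                       sigma=sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))            # 1-sigma parameter uncertainties
```

Passing `sigma` with `absolute_sigma=True` makes the fit a true weighted least squares with uncertainties in physical units, the same quantities the article obtains from Solver plus auxiliary spreadsheet formulas.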
Classical black holes: the nonlinear dynamics of curved spacetime.
Thorne, Kip S
2012-08-03
Numerical simulations have revealed two types of physical structures, made from curved spacetime, that are attached to black holes: tendexes, which stretch or squeeze anything they encounter, and vortexes, which twist adjacent inertial frames relative to each other. When black holes collide, their tendexes and vortexes interact and oscillate (a form of nonlinear dynamics of curved spacetime). These oscillations generate gravitational waves, which can give kicks up to 4000 kilometers per second to the merged black hole. The gravitational waves encode details of the spacetime dynamics and will soon be observed and studied by the Laser Interferometer Gravitational Wave Observatory and its international partners.
Mohammadi, Gholamreza Fallah; Alam, Nader Riyahi; Rezaeejam, Hamed; Pourfallah, Tayyeb Allahverdi; Zakariaee, Seyed Salman
2015-01-01
In radiation treatments, estimation of the dose distribution in the target volume is one of the main components of the treatment planning procedure. To estimate the dose distribution, information on electron densities is necessary. The standard calibration curve is determined by the computed tomography (CT) scanner and may differ from those of other oncology centers. In this study, the changes in dose calculation due to different calibration curves (HU-ρel) were investigated. Dose values were calculated based on the standard calibration curve predefined for the treatment planning system (TPS). The calibration curve was also extracted from CT images of the phantom, and dose values were calculated based on this curve. The percentage errors of the calculated values were determined. Statistical analyses of the mean differences were performed using the Wilcoxon rank-sum test for both calibration curves. The results show no significant difference between the measured and standard calibration curves (HU-ρel) at 6, 15, and 18 MeV. A Wilcoxon rank-sum nonparametric test for independent samples (P < 0.05) indicated equality of the monitor units required for both curves to deliver a 200 cGy dose to the reference points. The percentage errors of the calculated values were lower than 2% and 1.5% at 6 and 15 MeV, respectively. From these results, it can be concluded that the standard calibration curve can be used accurately in TPS dose calculation.
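The HU-ρel lookup described above is typically a piecewise-linear table; a minimal sketch with hypothetical insert values (illustrative numbers, not the study's scanner data):

```python
import numpy as np

# Hypothetical HU -> relative electron density calibration table, one point
# per phantom insert; TPS dose engines commonly interpolate linearly
# between the measured inserts.
hu = np.array([-1000, -700, -90, 0, 60, 230, 900, 1300], float)
rho_el = np.array([0.00, 0.29, 0.95, 1.00, 1.05, 1.12, 1.51, 1.71])

def electron_density(hu_value):
    """Relative electron density for a voxel's Hounsfield unit value."""
    return np.interp(hu_value, hu, rho_el)
```

Comparing two such tables point by point (standard versus locally measured) is the arithmetic behind the percentage errors reported in the abstract.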
Nonlinear vibrations of functionally graded doubly curved shallow shells
NASA Astrophysics Data System (ADS)
Alijani, F.; Amabili, M.; Karagiozis, K.; Bakhtiari-Nejad, F.
2011-03-01
Nonlinear forced vibrations of FGM doubly curved shallow shells with a rectangular base are investigated. Donnell's nonlinear shallow-shell theory is used and the shell is assumed to be simply supported with movable edges. The equations of motion are reduced using the Galerkin method to a system of infinite nonlinear ordinary differential equations with quadratic and cubic nonlinearities. Using the multiple scales method, primary and subharmonic resonance responses of FGM shells are fully discussed and the effect of volume fraction exponent on the internal resonance conditions, softening/hardening behavior and bifurcations of the shallow shell when the excitation frequency is (i) near the fundamental frequency and (ii) near two times the fundamental frequency is shown. Moreover, using a code based on the arc-length continuation method, a bifurcation analysis is carried out for a special case with two-to-one internal resonance between the first and second doubly symmetric modes with respect to the panel's center (ω13 ≈ 2ω11). Bifurcation diagrams and Poincaré maps are obtained through direct time integration of the equations of motion and chaotic regions are shown by calculating Lyapunov exponents and Lyapunov dimension.
Flow of viscous fluid along a nonlinearly stretching curved surface
NASA Astrophysics Data System (ADS)
Sanni, K. M.; Asghar, S.; Jalil, M.; Okechi, N. F.
This paper focuses on the flow of viscous fluid over a curved surface stretching with nonlinear power-law velocity. The boundary layer equations are transformed into ordinary differential equations using suitable non-dimensional transformations. These equations are solved numerically using shooting and Runge-Kutta (RK) methods. The impact of non-dimensional radius of curvature and power-law indices on the velocity field, the pressure and the skin friction coefficient are investigated. The results deduced for linear stretching are compared with the published work to validate the numerical procedure. The important findings are: (a) Slight variation of the curvature of the stretching sheet increases the velocity and the skin friction coefficient significantly. (b) The nonlinearity of the stretching velocity increases the skin friction. (c) The results for linear stretching and the flat surface are the special cases of this problem.
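The shooting-plus-RK strategy described above can be illustrated on a simpler, classical boundary-layer problem (the Blasius equation rather than the paper's curved-surface equations, which carry extra curvature and pressure terms):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Blasius boundary layer: f''' + 0.5*f*f'' = 0 with f(0) = f'(0) = 0 and
# f'(eta -> infinity) = 1. Shooting: guess f''(0), integrate with a
# Runge-Kutta scheme, and adjust the guess until the far-field condition
# is satisfied.

def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def residual(fpp0, eta_max=10.0):
    sol = solve_ivp(rhs, (0.0, eta_max), [0.0, 0.0, fpp0],
                    rtol=1e-9, atol=1e-10)      # RK45 by default
    return sol.y[1, -1] - 1.0                   # want f'(eta_max) = 1

# Root-find the shooting parameter; the classical value is f''(0) ~ 0.3321,
# which is proportional to the wall skin friction.
fpp0 = brentq(residual, 0.1, 1.0)
```

The skin friction coefficient reported in such studies is proportional to the converged wall shear value `fpp0`, which is why shooting accuracy matters for the quantities tabulated in the paper.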
Nonlinear Observers for Gyro Calibration Coupled with a Nonlinear Control Algorithm
NASA Technical Reports Server (NTRS)
Thienel, Julie; Sanner, Robert M.
2003-01-01
Nonlinear observers for gyro calibration are presented. The first observer estimates a constant gyro bias. The second observer estimates scale factor errors. The third observer estimates the gyro alignment for three orthogonal gyros. The observers are then combined. The convergence properties of all three observers, and the combined observers, are discussed. Additionally, all three observers are coupled with a nonlinear control algorithm. The stability of each of the resulting closed loop systems is analyzed. Simulated test results are presented for each system.
Hybrid analytical technique for the nonlinear analysis of curved beams
NASA Technical Reports Server (NTRS)
Noor, A. K.; Andersen, C. M.
1992-01-01
The application of a two-step hybrid technique to the geometrically nonlinear analysis of curved beams is used to demonstrate the potential of hybrid analytical techniques in nonlinear structural mechanics. The hybrid technique is based on successive use of the perturbation method and a classical direct variational procedure. The functions associated with the various-order terms in the perturbation expansion of the fundamental unknowns, and their sensitivity derivatives with respect to material and geometric parameters of the beam, are first obtained by using the perturbation method. These functions are selected as coordinate functions (or modes) and the classical direct variational technique is then used to compute their amplitudes. The potential of the proposed hybrid technique for nonlinear analysis of structures is discussed, and its effectiveness is demonstrated by means of numerical examples. The symbolic computation system Mathematica is used in the present study. The tasks performed with Mathematica include: (1) generation of algebraic expressions for the perturbation functions of the different response quantities and their sensitivity derivatives; and (2) determination of the radius of convergence of the perturbation series.
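The first step of the hybrid scheme, generating perturbation functions order by order with a symbolic system, can be illustrated on a toy algebraic problem (the paper uses Mathematica and a beam model; the equation u + ε·u² = x below is an invented stand-in):

```python
import sympy as sp

# Solve u + eps*u**2 = x order by order in eps, collecting the
# O(eps^n) balance at each step -- the symbolic perturbation step
# of the hybrid technique, in SymPy rather than Mathematica.
x, eps = sp.symbols('x eps')
order = 4

u = sp.Integer(0)
for n in range(order):
    un = sp.Symbol(f'u{n}')
    ansatz = u + un * eps**n
    res = sp.expand(ansatz + eps * ansatz**2 - x)
    coeff = res.coeff(eps, n)              # O(eps^n) balance, linear in un
    u += sp.solve(coeff, un)[0] * eps**n

u = sp.expand(u)                           # x - x**2*eps + 2*x**3*eps**2 - ...
# the residual must vanish through O(eps^{order-1})
residual = sp.series(sp.expand(u + eps*u**2 - x), eps, 0, order).removeO()
print(sp.simplify(residual))
```

The surviving coefficients (1, -1, 2, -5, ...) would then serve as the "modes" whose amplitudes the variational step determines in the paper's full scheme.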
Non-linear curve fitting using Microsoft Excel solver.
Walsh, S; Diamond, D
1995-04-01
Solver, an analysis tool incorporated into Microsoft Excel V 5.0 for Windows, has been evaluated for solving non-linear equations. Test and experimental data sets have been processed, and the results suggest that Solver can be successfully used for modelling data obtained in many analytical situations (e.g. chromatography and FIA peaks, fluorescence decays and ISE response characteristics). The relatively simple user interface, and the fact that Excel is commonly bundled free with new PCs, make it an ideal tool for those wishing to experiment with solving non-linear equations without having to purchase and learn a completely new package. The dynamic display of the iterative search process enables the user to monitor the search algorithm's progress toward the optimum solution. This, together with the almost universal availability of Excel, makes Solver an ideal vehicle for teaching the principles of iterative non-linear curve fitting techniques. In addition, complete control of the modelling process lies with the user, who must present the raw data and enter the equation of the model, in contrast to many commercial packages bundled with instruments, which perform these operations with a 'black-box' approach.
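The iterative least-squares workflow the paper runs in Excel Solver (supply raw data, supply the model equation, let the optimizer adjust parameters) looks like this in modern scientific Python; the Gaussian chromatographic peak and its parameter values are made-up illustration data, not from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# User-supplied model: a Gaussian chromatographic peak.
def gaussian_peak(t, height, center, width):
    return height * np.exp(-0.5 * ((t - center) / width) ** 2)

# Synthetic "raw data" with a little noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 101)
y = gaussian_peak(t, 2.5, 4.2, 0.8) + rng.normal(0.0, 0.02, t.size)

# Iterative non-linear least squares from an initial guess,
# the role Solver plays in the paper.
popt, pcov = curve_fit(gaussian_peak, t, y, p0=(1.0, 5.0, 1.0))
perr = np.sqrt(np.diag(pcov))          # 1-sigma parameter uncertainties
print(popt, perr)
```

As with Solver, a poor initial guess (`p0`) can stall the search in a local minimum, which is exactly the behavior the dynamic display in Excel lets students watch.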
Dependency of EBT2 film calibration curve on postirradiation time.
Chang, Liyun; Ho, Sheng-Yow; Ding, Hueisch-Jy; Lee, Tsair-Fwu; Chen, Pang-Yu
2014-02-01
The Ashland Inc. product EBT2 film model is a widely used quality assurance tool, especially for verification of 2-dimensional dose distributions. In general, the calibration film and the dose measurement film are irradiated, scanned, and calibrated at the same postirradiation time (PIT), 1-2 days after the films are irradiated. However, for a busy clinic or in some special situations, the PIT for the dose measurement film may be different from that of the calibration film. In this case, the measured dose will be incorrect. This paper proposed a film calibration method that includes the effect of PIT. The dose versus film optical density was fitted to a power function with three parameters. One of these parameters was PIT dependent, while the other two were found to be almost constant with a standard deviation of the mean less than 4%. The PIT-dependent parameter was fitted to another power function of PIT. The EBT2 film model was calibrated using the PDD method with 14 different PITs ranging from 1 h to 2 months. Ten of the fourteen PITs were used for finding the fitting parameters, and the other four were used for testing the model. The verification test shows that the differences between the delivered doses and the film doses calculated with this modeling were mainly within 2% for delivered doses above 60 cGy, and the total uncertainties were generally under 5%. The errors and total uncertainties of film dose calculation were independent of the PIT using the proposed calibration procedure. However, the fitting uncertainty increased with decreasing dose or PIT, but stayed below 1.3% for this study. The EBT2 film dose can be modeled as a function of PIT. For the ease of routine calibration, five PITs were suggested to be used. It is recommended that two PITs be located in the fast developing period (1 ∼ 6 h), one in 1 ∼ 2 days, one around a week, and one around a month.
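The paper's two-level fit (a three-parameter power function of optical density per PIT, with one parameter itself a power function of PIT) can be sketched as follows. The concrete functional forms D = a·OD^b + c and a(PIT) = p·PIT^q, and all numbers, are illustrative assumptions; the abstract does not give the actual parameterization or values.

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_from_od(od, a, b, c):
    """Assumed three-parameter power function: dose vs optical density."""
    return a * od**b + c

def a_of_pit(pit_hours, p, q):
    """Assumed power-law dependence of the PIT-dependent parameter."""
    return p * pit_hours**q

rng = np.random.default_rng(1)
b_true, c_true, p_true, q_true = 1.8, 5.0, 300.0, -0.05
pits = np.array([1.0, 6.0, 24.0, 168.0, 720.0])   # hours after irradiation
od = np.linspace(0.1, 0.6, 8)

# First level: fit the dose-vs-OD curve at each PIT and record 'a'.
a_fit = []
for pit in pits:
    dose = dose_from_od(od, a_of_pit(pit, p_true, q_true), b_true, c_true)
    dose += rng.normal(0.0, 0.2, od.size)
    (a, b, c), _ = curve_fit(dose_from_od, od, dose, p0=(250.0, 2.0, 0.0))
    a_fit.append(a)

# Second level: fit the PIT-dependent parameter against PIT.
(p, q), _ = curve_fit(a_of_pit, pits, np.array(a_fit), p0=(250.0, -0.1))
print(p, q)
```

With the second-level curve in hand, a measurement film read at any PIT can be converted to dose without re-calibrating, which is the practical point of the paper.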
Dependency of EBT2 film calibration curve on postirradiation time
Chang, Liyun; Ding, Hueisch-Jy; Ho, Sheng-Yow; Lee, Tsair-Fwu; Chen, Pang-Yu
2014-02-15
Purpose: The Ashland Inc. product EBT2 film model is a widely used quality assurance tool, especially for verification of 2-dimensional dose distributions. In general, the calibration film and the dose measurement film are irradiated, scanned, and calibrated at the same postirradiation time (PIT), 1-2 days after the films are irradiated. However, for a busy clinic or in some special situations, the PIT for the dose measurement film may be different from that of the calibration film. In this case, the measured dose will be incorrect. This paper proposed a film calibration method that includes the effect of PIT. Methods: The dose versus film optical density was fitted to a power function with three parameters. One of these parameters was PIT dependent, while the other two were found to be almost constant with a standard deviation of the mean less than 4%. The PIT-dependent parameter was fitted to another power function of PIT. The EBT2 film model was calibrated using the PDD method with 14 different PITs ranging from 1 h to 2 months. Ten of the fourteen PITs were used for finding the fitting parameters, and the other four were used for testing the model. Results: The verification test shows that the differences between the delivered doses and the film doses calculated with this modeling were mainly within 2% for delivered doses above 60 cGy, and the total uncertainties were generally under 5%. The errors and total uncertainties of film dose calculation were independent of the PIT using the proposed calibration procedure. However, the fitting uncertainty increased with decreasing dose or PIT, but stayed below 1.3% for this study. Conclusions: The EBT2 film dose can be modeled as a function of PIT. For the ease of routine calibration, five PITs were suggested to be used. It is recommended that two PITs be located in the fast developing period (1∼6 h), one in 1 ∼ 2 days, one around a week, and one around a month.
An analysis of calibration curve models for solid-state heat-flow calorimeters
Hypes, P. A.; Bracken, D. S.; McCabe, G.
2001-01-01
Various calibration curve models for solid-state calorimeters are compared to determine which model best fits the calibration data. The calibration data are discussed, the criteria used to select the best model are explained, and a conclusion regarding the best model for the calibration curve is presented. These results can also be used to evaluate the random and systematic error of a calorimetric measurement. A linear/quadratic model has been used for decades to fit the calibration curves of Wheatstone bridge calorimeters, with excellent results. The Multical software package uses this model for the calibration curve, a choice supported by 40 years [1] of calorimeter data. There is good empirical support for the linear/quadratic model: calorimeter response is strongly linear, and calorimeter sensitivity is slightly lower at higher powers, which the negative coefficient of the x^2 term accounts for. The solid-state calorimeter is operated using the Multical [2] software package. An investigation was undertaken to determine whether the linear/quadratic model is the best model for the new sensor technology used in the solid-state calorimeter.
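A minimal sketch of the linear/quadratic calibration model described above: response = a + b·x + c·x², with a small negative c capturing the reduced sensitivity at high power. The power levels, coefficients, and perturbations are invented for illustration.

```python
import numpy as np

# Synthetic bridge-calorimeter calibration data: strongly linear
# response with a slight downward bend at high power.
power = np.array([0.0, 2.0, 4.0, 8.0, 16.0, 32.0])         # watts
response = 0.5 + 10.0 * power - 0.01 * power**2            # sensor units
response = response + np.array([0.02, -0.01, 0.03, 0.0, -0.02, 0.01])

# Least-squares quadratic fit; polyfit returns highest power first.
c, b, a = np.polyfit(power, response, 2)
print(a, b, c)   # c should come out small and negative
```

The residuals of such a fit are what feed the random/systematic error evaluation the note mentions.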
Fitting milk production curves through nonlinear mixed models.
Piccardi, Monica; Macchiavelli, Raúl; Funes, Ariel Capitaine; Bó, Gabriel A; Balzarini, Mónica
2017-05-01
The aim of this work was to fit and compare three non-linear models (Wood, MilkBot and diphasic) for lactation curves from two approaches: with and without a cow random effect. Knowing the behaviour of lactation curves is critical for decision-making on a dairy farm. Knowledge of the model of milk production progress along each lactation is necessary not only at the mean population level (dairy farm), but also at the individual level (cow-lactation). The fits were made for a group of high-production, high-reproduction dairy farms, for first and third lactations in cool seasons. A total of 2167 complete lactations were involved, of which 984 were first lactations and the remainder third lactations (19 382 milk yield tests). PROC NLMIXED in SAS was used to make the fits and estimate the model parameters. The diphasic model proved to be computationally complex and barely practical. Regarding the classical Wood and MilkBot models, although the information criteria suggest selecting MilkBot, the differences in the estimation of production indicators did not show a significant improvement. The Wood model was found to be a good option for fitting the expected value of lactation curves. Furthermore, all three models fitted better when the subject (cow) random effect, which is related to the magnitude of production, was considered. The random effect improved the predictive potential of the models, but it did not have a significant effect on the production indicators derived from the lactation curves, such as milk yield and days in milk to peak.
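The Wood model the paper favors is y(t) = a·t^b·exp(-c·t), with peak yield at t = b/c days in milk. A fixed-effects-only fit (no cow random effect, which would require a mixed-model package) on synthetic illustration data, not the herd data analyzed in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# Wood's lactation curve.
def wood(t, a, b, c):
    return a * t**b * np.exp(-c * t)

# Synthetic daily-yield test records for one lactation.
rng = np.random.default_rng(2)
t = np.arange(5.0, 305.0, 10.0)                 # days in milk
y = wood(t, 18.0, 0.25, 0.004) + rng.normal(0.0, 0.3, t.size)

(a, b, c), _ = curve_fit(wood, t, y, p0=(15.0, 0.2, 0.003))
peak_day = b / c                                 # days in milk to peak
print(a, b, c, peak_day)
```

In the paper's mixed-model version (PROC NLMIXED), one or more of a, b, c additionally carry a cow-level random effect, which is what improves prediction for individual lactations.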
Nonlinear problems of the theory of heterogeneous slightly curved shells
NASA Technical Reports Server (NTRS)
Kantor, B. Y.
1973-01-01
An account is given of the variational method for the solution of physically and geometrically nonlinear problems of the theory of heterogeneous slightly curved shells. The bending and supercritical behavior of plates and of conical and spherical cupolas of variable thickness in a temperature field are examined, taking into account the dependence of the elastic parameters on temperature. The bending, overall stability, and load-bearing capacity of flexible isotropic elastic-plastic shells are analyzed with different criteria of plasticity, taking into account compressibility and hardening. The effect of the plastic heterogeneity caused by heat treatment, surface work hardening, and irradiation by fast neutron flux is investigated. Some problems of the dynamic behavior of flexible shells are solved. Calculations are performed to high orders of approximation. Considerable attention is given to the construction of a machine algorithm and to checking the convergence of the iterative processes.
Development of a robust calibration model for nonlinear in-line process data
Despagne; Massart; Chabot
2000-04-01
A comparative study involving a global linear method (partial least squares), a local linear method (locally weighted regression), and a nonlinear method (neural networks) has been performed in order to implement a calibration model on an industrial process. The models were designed to predict the water content in a reactor during a distillation process, using in-line measurements from a near-infrared analyzer. Curved effects due to changes in temperature and variations between the different batches make the problem particularly challenging. The influence of spectral range selection and data preprocessing has been studied. With each calibration method, specific procedures have been applied to promote model robustness. In particular, the use of a monitoring set with neural networks does not always prevent overfitting. Therefore, we developed a model selection criterion based on the determination of the median of monitoring error over replicate trials. The back-propagation neural network models selected were found to outperform the other methods on independent test data.
NASA Astrophysics Data System (ADS)
Rest, Armin; Hilbert, Bryan; Leisenring, Jarron M.; Misselt, Karl; Rieke, Marcia; Robberto, Massimo
2016-07-01
Conversion gain is a basic detector property which relates the raw counts in a pixel, in data numbers (DN), to the number of electrons detected. The standard method for determining the gain, called the Photon Transfer Curve (PTC) method, involves measuring the change in variance as a function of signal level. For nonlinear IR detectors, this method depends strongly on the nonlinearity correction and is therefore susceptible to systematic biases due to calibration issues. We have developed a new, robust, and fast method, the differential Photon Transfer Curve (dPTC) method, which is independent of nonlinearity corrections but still delivers gain values similar in precision and higher in accuracy.
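The standard PTC idea described above: for a linear detector under Poisson illumination, var(DN) = mean(DN)/g, so the gain g (electrons per DN) is the inverse slope of variance versus mean signal. A simulation with an invented gain value (the dPTC variant itself is not reconstructed here):

```python
import numpy as np

rng = np.random.default_rng(3)
g_true = 2.0                                      # electrons per DN

# Simulate flat-field exposures at several illumination levels.
means, variances = [], []
for electrons in [1000, 4000, 16000, 64000]:      # mean detected e-
    dn = rng.poisson(electrons, size=200_000) / g_true
    means.append(dn.mean())
    variances.append(dn.var())

# PTC: slope of variance vs mean is 1/g for an ideal linear detector.
slope, _ = np.polyfit(means, variances, 1)
g_est = 1.0 / slope
print(g_est)
```

The abstract's point is that for a nonlinear detector the DN values must first be linearized, and any error in that correction propagates directly into this slope, which is the bias the dPTC method avoids.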
A new form of the calibration curve in radiochromic dosimetry. Properties and results.
Tamponi, Matteo; Bona, Rossana; Poggiu, Angela; Marini, Piergiorgio
2016-07-01
This work describes a new form of the calibration curve for radiochromic dosimetry that depends on one fit parameter. Some results are reported to show that the new curve performs as well as those previously used and, more importantly, significantly reduces the dependence on the lot of films, the film orientation on the scanner, and the time after exposure. The form of the response curve makes use of the net optical density ratio against the dose and has been studied by means of the Beer-Lambert law and a simple model of the film. The new calibration curve has been applied to EBT3 films exposed to 6 and 15 MV energy beams of linear accelerators and read out in transmission mode by means of a flatbed color scanner. Its performance has been compared to that of two established forms of the calibration curve, which use the optical density and the net optical density against the dose. Four series of measurements with four lots of EBT3 films were used to evaluate the precision, accuracy, and dependence on the time after exposure, orientation on the scanner, and lot of films. The new calibration curve is roughly subject to the same dose uncertainty, about 2% (1 standard deviation), and has the same accuracy, about 1.5% (dose values between 50 and 450 cGy), as the other calibration curves when films of the same lot are used. Moreover, the new calibration curve, albeit obtained from only one lot of film, shows good agreement with experimental data from all other lots of EBT3 films used, with an accuracy of about 2% and a relative dose precision of 2.4% (1 standard deviation). The agreement also holds for changes of the film orientation and of the time after exposure. The dose accuracy of this new form of the calibration curve is always equal to or better than those obtained from the two types of curves previously used. The use of the net optical density ratio considerably reduces the dependence on the lot of films, the landscape/portrait orientation, and the time after exposure.
Common Envelope Light Curves. I. Grid-code Module Calibration
NASA Astrophysics Data System (ADS)
Galaviz, Pablo; De Marco, Orsola; Passy, Jean-Claude; Staff, Jan E.; Iaconi, Roberto
2017-04-01
The common envelope (CE) binary interaction occurs when a star transfers mass onto a companion that cannot fully accrete it. The interaction can lead to a merger of the two objects or to a close binary. The CE interaction is the gateway of all evolved compact binaries, all stellar mergers, and likely many of the stellar transients witnessed to date. CE simulations are needed to understand this interaction and to interpret stars and binaries thought to be the byproduct of this stage. At this time, simulations are unable to reproduce the few observational data available and several ideas have been put forward to address their shortcomings. The need for more definitive simulation validation is pressing and is already being fulfilled by observations from time-domain surveys. In this article, we present an initial method and its implementation for post-processing grid-based CE simulations to produce the light curve so as to compare simulations with upcoming observations. Here we implemented a zeroth-order method to calculate the light emitted from CE hydrodynamic simulations carried out with the 3D hydrodynamic code Enzo used in unigrid mode. The code implements an approach for the computation of luminosity in both optically thick and optically thin regimes and is tested using the first 135 days of the CE simulation of Passy et al., where a 0.8 M⊙ red giant branch star interacts with a 0.6 M⊙ companion. This code is used to highlight two large obstacles that need to be overcome before realistic light curves can be calculated. We explain the nature of these problems and the attempted solutions and approximations in full detail to enable the next step to be identified and implemented. We also discuss our simulation in relation to recent data of transients identified as CE interactions.
NASA Astrophysics Data System (ADS)
Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia
2017-09-01
The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either empirically, using animals as experimental subjects, or derived from mathematical equations. However, determining the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to exhibit real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratios (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. The optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulation results for the optical density ratio at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for extended calibration curve studies using other wavelength pairs.
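The OD/ODR bookkeeping the abstract describes can be sketched as follows, using one common pulse-oximetry convention (OD per wavelength from the diastole-to-systole flux change, ODR as the red-to-infrared ratio). The detector flux values are invented; in the paper they come from the surface-based Monte Carlo simulation.

```python
import numpy as np

def optical_density(flux_diastole, flux_systole):
    """OD from the pulsatile flux change at one wavelength
    (one common convention; the paper's exact definition may differ)."""
    return np.log10(flux_diastole / flux_systole)

# Hypothetical detected fluxes at two wavelengths.
od_red = optical_density(1.00e-3, 0.92e-3)
od_ir  = optical_density(1.00e-3, 0.95e-3)

odr = od_red / od_ir     # the quantity tabulated against SaO2
print(od_red, od_ir, odr)
```

Repeating this at each simulated SaO2 level (the paper uses 20% steps) yields the (ODR, SaO2) pairs that form the theoretical calibration curve.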
NASA Astrophysics Data System (ADS)
Dingari, Narahara Chari; Barman, Ishan; Kang, Jeon Woong; Kong, Chae-Ryon; Dasari, Ramachandra R.; Feld, Michael S.
2011-08-01
While Raman spectroscopy provides a powerful tool for noninvasive and real time diagnostics of biological samples, its translation to the clinical setting has been impeded by the lack of robustness of spectroscopic calibration models and the size and cumbersome nature of conventional laboratory Raman systems. Linear multivariate calibration models employing full spectrum analysis are often misled by spurious correlations, such as system drift and covariations among constituents. In addition, such calibration schemes are prone to overfitting, especially in the presence of external interferences that may create nonlinearities in the spectra-concentration relationship. To address both of these issues we incorporate residue error plot-based wavelength selection and nonlinear support vector regression (SVR). Wavelength selection is used to eliminate uninformative regions of the spectrum, while SVR is used to model the curved effects such as those created by tissue turbidity and temperature fluctuations. Using glucose detection in tissue phantoms as a representative example, we show that even a substantial reduction in the number of wavelengths analyzed using SVR leads to calibration models of prediction accuracy equivalent to linear full-spectrum analysis. Further, with clinical datasets obtained from human subject studies, we also demonstrate the prospective applicability of the selected wavelength subsets without sacrificing prediction accuracy, which has extensive implications for calibration maintenance and transfer. Additionally, such wavelength selection could substantially reduce the collection time of serial Raman acquisition systems. Given the reduced footprint of serial Raman systems in relation to conventional dispersive Raman spectrometers, we anticipate that the incorporation of wavelength selection in such hardware designs will enhance the possibility of miniaturized clinical systems for disease diagnosis in the near future.
NASA Astrophysics Data System (ADS)
He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Merz, Bruno
2017-04-01
This study uses a novel method for calibrating a glacio-hydrological model based on hydrograph partitioning curves (HPC), and evaluates its value in comparison to multi-criteria optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrographs using, in addition, catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas, are used to identify the start and end dates of the snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that: (1) the HPC-based method guarantees model-internal consistency comparable to the multi-criteria calibration methods; (2) the HPC-based method presents higher parameter identifiability and improves the stability of calibrated parameter values across various calibration periods; and (3) the HPC-based method outperforms the other calibration methods in simulating the share of groundwater, as well as in reproducing the seasonal dynamics of snow and glacier melt. Our findings indicate the potential of HPCs to substitute for multi-criteria methods in hydrological model calibration in glacierized basins, where data other than discharge are often not available or very costly to obtain.
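One ingredient of the HPC construction, the annual cumulative curve of the positive part of (temperature - melt threshold), can be sketched as below. The daily temperature series, the melt threshold, and the 5% onset rule are all invented for illustration; the paper derives the actual partition dates from observed basin data.

```python
import numpy as np

# Synthetic daily basin temperature over one year (degrees C).
rng = np.random.default_rng(4)
doy = np.arange(365)
temp = (-8.0 + 18.0 * np.sin((doy - 80) * 2 * np.pi / 365)
        + rng.normal(0.0, 2.0, doy.size))
t_melt = 0.0                                  # melt threshold temperature

# Annual cumulative curve of (T - T_melt)+, a positive-degree sum.
cum_melt_energy = np.cumsum(np.maximum(temp - t_melt, 0.0))

# Flag the ablation-season onset once 5% of the annual positive-degree
# sum has accumulated (an arbitrary illustrative rule, not the paper's).
start_day = int(np.argmax(cum_melt_energy >= 0.05 * cum_melt_energy[-1]))
print(start_day)
```

The paper pairs this curve with the cumulative snowfall curve on the glacierized area to bracket the snow- and glacier-ablation periods, and then calibrates each parameter group only against the hydrograph segment it controls.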
Effects of experimental design on calibration curve precision in routine analysis
Pimentel, Maria Fernanda; Neto, Benício de Barros; Saldanha, Teresa Cristina B.
1998-01-01
A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence interval plots for a calibration curve and provides information about the number of standard solutions, concentration levels and suitable concentration ranges to achieve an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data. PMID:18924816
NASA Astrophysics Data System (ADS)
Graham, Hannah Robyn
In order to be able to qualify and quantify radiation exposure in terms of dose, a Fastscan whole body counter must be calibrated correctly. Current calibration methods do not take the full range of body types into consideration when creating efficiency curve calibrations. The goal of this work is the creation of a Monte Carlo (MCNP) model, that allows the simulation of efficiency curves for a diverse population of subjects. Models were created for both the Darlington and the Pickering Fastscan WBCs, and the simulations were benchmarked against experimental results with good agreement. The Pickering Fastscan was found to have agreement to within +/-9%, and the Darlington Fastscan had agreement to within +/-11%. Further simulations were conducted to investigate the effects of increased body fat on the detected activity, as well as locating the position of external contamination using front/back ratios of activity. Simulations were also conducted to create efficiency calibrations that had good agreement with the manufacturer's efficiency curves. The work completed in this thesis can be used to create efficiency calibration curves for unique body compositions in the future.
Calibration and efficiency curve of SANAEM ionization chamber for activity measurements.
Yeltepe, Emin; Kossert, Karsten; Dirican, Abdullah; Nähle, Ole; Niedergesäß, Christiane; Kemal Şahin, Namik
2016-03-01
A commercially available Fidelis ionization chamber was calibrated and assessed in PTB with activity standard solutions. The long-term stability and linearity of the system was checked. Energy-dependent efficiency curves for photons and beta particles were determined, using an iterative method in Excel™, to enable calibration factors to be calculated for radionuclides which were not used in the calibration. Relative deviations between experimental and calculated radionuclide efficiencies are of the order of 1% for most photon emitters and below 5% for pure beta emitters. The system will enable TAEK-SANAEM to provide traceable activity measurements. Copyright © 2015 Elsevier Ltd. All rights reserved.
NSLS-II: Nonlinear Model Calibration for Synchrotrons
Bengtsson, J.
2010-10-08
This tech note is essentially a summary of a lecture we delivered to the Acc. Phys. Journal Club in April 2010. However, since the estimated accuracy of these methods has been naive and misleading in the field of particle accelerators, i.e., it ignores the impact of noise, we elaborate on this in some detail. A prerequisite for a calibration of the nonlinear Hamiltonian is that the quadratic part has been understood, i.e., that the linear optics of the real accelerator has been calibrated. For synchrotron light source operations, this problem has been solved by the interactive LOCO (Linear Optics from Closed Orbits) technique/tool. Before that, in the context of hadron accelerators, it was done by signal processing of turn-by-turn BPM data. We have outlined how to make a basic calibration of the nonlinear model for synchrotrons. In particular, we have shown how this was done for LEAR, CERN (antiprotons) in the mid-80s. Specifically, our accuracy for frequency estimation was ~1 × 10^-5 for 1024 turns (to calibrate the linear optics) and ~1 × 10^-4 for 256 turns for the tune footprint and betatron spectrum. For comparison, the estimated tune footprint for stable beam for NSLS-II is ~0.1, and the transverse damping time is ~20 msec, i.e., ~4,000 turns. There is no fundamental difference between antiprotons, protons, and electrons in this case. Because the estimated accuracy of these methods in the field of particle accelerators has been naive, i.e., ignoring the impact of noise, we have also derived an explicit formula, from first principles, for a quantitative statement. For e.g. N = 256 and 5% noise we obtain δν ≈ 1 × 10^-5. A comparison with the state of the art in e.g. telecom and electrical engineering since the 60s is quite revealing. For example, the Kalman filter (1960), crucial for the Ranger, Mariner, and Apollo (including the Lunar Module) missions during the 60s. Or Claude Shannon et al
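The core measurement the note discusses, estimating a betatron tune from N turn-by-turn BPM samples in the presence of noise, can be sketched with an FFT peak refined by Jacobsen's three-bin interpolation. The tune value, noise level, and interpolation choice are illustrative; the note's own estimator and its first-principles error formula are not reproduced here.

```python
import numpy as np

# Simulated turn-by-turn BPM signal: tune nu = 0.22, 5% noise, N = 256.
rng = np.random.default_rng(5)
N, nu_true = 256, 0.22
n = np.arange(N)
x = np.sin(2 * np.pi * nu_true * n) + 0.05 * rng.normal(size=N)

X = np.fft.fft(x)
k = int(np.argmax(np.abs(X[:N // 2])))            # coarse peak bin
# Jacobsen's 3-bin estimate of the fractional-bin offset.
delta = np.real((X[k - 1] - X[k + 1]) / (2 * X[k] - X[k - 1] - X[k + 1]))
nu_est = (k + delta) / N
print(nu_est)
```

A plain FFT peak is limited to ~1/N in tune resolution; the interpolation recovers roughly another order of magnitude, and the note's point is that the ultimate accuracy is set by the noise floor, not by N alone.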
Split calibration curve: an approach to avoid repeat analysis of the samples exceeding ULOQ.
Basu, Sudipta; Basit, Abdul; Ravindran, Selvan; Patel, Vandana B; Vangala, Subrahmanyam; Patel, Hitesh
2012-10-01
The current practice of using calibration curves with narrow concentration ranges during bioanalysis of new chemical entities has some limitations and is time consuming. In the present study we describe a split calibration curve approach, where sample dilution and repeat analysis can be avoided without compromising the quality and integrity of the data obtained. A split calibration curve approach is employed to determine the drug concentration in plasma samples with accuracy and precision over a wide dynamic range of approximately 0.6 to 15,000 ng/ml for dapsone and approximately 1 to 25,000 ng/ml for cyclophosphamide and glipizide. A wide dynamic range of concentrations for these three compounds was used in the current study to construct split calibration curves and was successfully validated for sample analysis in a single run. Using this method, repeat analysis of samples can be avoided. This is useful for the bioanalysis of toxicokinetic studies with wide dose ranges and studies where the sample volume is limited.
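The split-calibration idea, fitting separate curves to the low and high segments of one wide run and quantifying each sample from the appropriate segment, can be sketched as below. The linear response model, the split point, the boundary rule, and all concentrations are invented illustrations, not the paper's dapsone/cyclophosphamide/glipizide data.

```python
import numpy as np

# Synthetic wide-range calibration standards (ng/ml) with a linear
# detector response carrying 2% multiplicative noise.
conc = np.array([1, 5, 25, 100, 500, 2500, 10000, 25000], float)
rng = np.random.default_rng(6)
resp = 0.012 * conc * (1 + rng.normal(0, 0.02, conc.size))

# Split the single run into a low curve and a high curve.
split = 4                                  # first 4 standards -> low curve
lo = np.polyfit(conc[:split], resp[:split], 1)   # [slope, intercept]
hi = np.polyfit(conc[split:], resp[split:], 1)

def quantify(response, boundary=0.012 * 300):
    """Back-calculate concentration from the appropriate sub-curve;
    the boundary response is an arbitrary illustrative cutoff."""
    slope, intercept = lo if response < boundary else hi
    return (response - intercept) / slope

print(quantify(0.012 * 50), quantify(0.012 * 8000))
```

Because both sub-curves come from one validated run, a sample that exceeds the low curve's ULOQ is read off the high curve instead of being diluted and re-assayed, which is the time saving the paper claims.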
Guo, Longhua; Kim, Dong-Hwan
2011-07-07
We demonstrate plasmonic aptasensors that allow a single nanoparticle (NP) to generate a calibration curve and to detect analytes. The proposed reusable aptasensors have significant advantages over conventional single-NP based assays in terms of sensitivity and reproducibility. This journal is © The Royal Society of Chemistry 2011
JPSS-1 VIIRS DNB nonlinearity and its impact on SDR calibration
NASA Astrophysics Data System (ADS)
Lee, Shihyan; Wang, Wenhui; Cao, Changyong
2015-09-01
During JPSS-1 VIIRS testing at Raytheon El Segundo, a larger than expected radiometric response nonlinearity was discovered in the Day-Night Band (DNB). In addition, the DNB nonlinearity is aggregation-mode dependent, with the most severe nonlinear behavior in the aggregation modes used at high scan angles (around 50 degrees and beyond). The DNB aggregation strategy was subsequently modified to remove the modes with the most significant nonlinearity. We characterized the DNB radiometric response using pre-launch tests with the modified aggregation strategy. The test data show that the DNB nonlinearity varies across gain stages, detectors, and aggregation modes. The nonlinearity is most significant in the Low Gain Stage (LGS) and can vary from sample to sample. It is also more significant in Earth-view (EV) samples than in calibration-view samples. The High Gain Stage (HGS) nonlinearity is difficult to quantify due to the higher uncertainty in determining the source radiance. Since the radiometric response nonlinearity is most significant at low dn ranges, it presents a challenge for DNB cross-stage calibration, a critical path to calibrating the DNB's HGS for nighttime imagery. Based on the radiometric characterization, we estimated the DNB on-orbit calibration accuracy and compared the expected accuracy of the operational calibration approaches. The analysis showed that the nonlinearity will bias the cross-stage gain ratios, with the most significant impact on the HGS. The HGS calibration accuracy can be improved when either solar diffuser (SD) data or only the more linearly behaved EV pixels are used in cross-stage calibration. Due to constraints in the test data, we were not able to demonstrate satisfactory accuracy and uniformity for JPSS-1 DNB nighttime imagery. The JPSS-1 DNB nonlinearity is a challenging calibration issue that will likely require special attention after launch.
CABRI Reactor: The fast neutron Hodoscope Calibration curves calculation with MORET
NASA Astrophysics Data System (ADS)
Bernard, Franck; Chevalier, Vincent; Venanzi, Damiano
2014-06-01
This poster presents the hodoscope calibration curve calculation with the 3D Monte Carlo code MORET. The fast neutron hodoscope is a facility of the CABRI research reactor at Cadarache (France). The hodoscope is designed to measure fuel motion during a reactivity-initiated accident (RIA) in a pressurized water reactor. Fuel motion is measured by counting fast fission neutrons emerging from the test fuel placed in an experimental loop operating under pressurized-water-reactor conditions (T = 300 °C and P = 155 bar) at the center of the CABRI core. The detection system of the hodoscope measures a signal that is a function of the fuel motion, and the calibration curves then allow this signal to be converted into a fuel mass. In order to calculate these curves, we have developed a method based on a Monte Carlo calculation code.
De Sanctis, S; De Amicis, A; Di Cristofaro, S; Franchini, V; Regalbuto, E; Mammana, G; Lista, F
2014-06-01
The cytokinesis-block micronucleus assay in peripheral blood lymphocytes is one of the best standardized and validated techniques for individual radiation dose assessment. This method has been proposed as an alternative to the dicentric chromosome assay (considered the "gold standard" in biological dosimetry) because it requires less time and cytogenetic expertise. Nevertheless, for application as a biodosimetry tool in large-scale nuclear or radiological accidents, the manually performed cytokinesis-block micronucleus assay needs further strategies (e.g., the automation of micronucleus scoring) to speed up the analysis. An essential prerequisite for radiation dose assessment is an established dose-effect curve. In this study, blood samples from one healthy subject were irradiated with seven increasing x-ray doses (240 kVp, 1 Gy min⁻¹) ranging from 0.25 to 4.0 Gy to generate calibration curves based on manual as well as automated scoring. The quality of the calibration curves was evaluated by determining the dose-prediction accuracy after analysis of 10 blood samples from the same donor exposed to unknown radiation doses. The micronucleus frequencies in binucleated cells were scored both manually and automatically and were used to assess the absorbed radiation doses with reference to the respective calibration curve. The accuracy of dose assessment based on the manual and automatic scoring modes was compared.
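Dose-effect curves in cytogenetic biodosimetry are conventionally fit with a linear-quadratic model, Y = C + alpha·D + beta·D². A minimal sketch of such a fit and its inversion for dose is given below; the parameter values used in testing are invented for illustration, not the paper's data:

```python
import math

def fit_linear_quadratic(doses, yields):
    """Least-squares fit of the linear-quadratic dose-response curve
    Y = C + alpha*D + beta*D**2 via the 3x3 normal equations."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for D, Y in zip(doses, yields):
        row = (1.0, D, D * D)        # design-matrix row (1, D, D^2)
        for i in range(3):
            b[i] += row[i] * Y
            for j in range(3):
                A[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(A[r][k]))
        A[k], A[p], b[k], b[p] = A[p], A[k], b[p], b[k]
        for r in range(k + 1, 3):
            f = A[r][k] / A[k][k]
            for c in range(k, 3):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    x = [0.0] * 3
    for i in (2, 1, 0):              # back substitution
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return tuple(x)                  # (C, alpha, beta)

def dose_from_yield(C, alpha, beta, Y):
    """Positive root of beta*D**2 + alpha*D + (C - Y) = 0."""
    return (-alpha + math.sqrt(alpha * alpha + 4.0 * beta * (Y - C))) / (2.0 * beta)
```

A production fit would weight the points (micronucleus counts are overdispersed Poisson) and propagate the fit covariance into the dose estimate; this sketch shows only the curve shape and its inversion.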
Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L
2010-08-05
Film calibration is time-consuming when dose accuracy is essential across a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size, and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation, and slope) were 1.8%, 5.7%, and 7.7% (1 σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth; it decreases with increasing depth below 0.5 cm for the three field sizes and increases with increasing depth above 0.5 cm. A calibration curve with one to three dose points fitted with the model achieves 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing an energy-dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
Nonlinear Latent Curve Models for Multivariate Longitudinal Data
ERIC Educational Resources Information Center
Blozis, Shelley A.; Conger, Katherine J.; Harring, Jeffrey R.
2007-01-01
Latent curve models have become a useful approach to analyzing longitudinal data, due in part to their allowance of and emphasis on individual differences in features that describe change. Common applications of latent curve models in developmental studies rely on polynomial functions, such as linear or quadratic functions. Although useful for…
Linear and Nonlinear Anderson Localization in a Curved Potential
NASA Astrophysics Data System (ADS)
Claudio, Conti
2014-03-01
Disorder induced localization in the presence of nonlinearity and curvature is investigated. The time-resolved three-dimensional expansion of a wave packet in a bent cigar shaped potential with a focusing Kerr-like interaction term and Gaussian disorder is numerically analyzed. A self-consistent analytical theory, in which randomness, nonlinearity and geometry are determined by a single scaling parameter, is reported, and it is shown that curvature enhances localization.
We characterize the sensitivity of the ozone attributable health burden assessment with respect to different modeling strategies of concentration-response function. For this purpose, we develop a flexible Bayesian hierarchical model allowing for a nonlinear ozone risk curve with ...
Note: curve fit models for atomic force microscopy cantilever calibration in water.
Kennedy, Scott J; Cole, Daniel G; Clark, Robert L
2011-11-01
Atomic force microscopy stiffness calibrations performed on commercial instruments using the thermal noise method on the same cantilever in both air and water can vary by as much as 20% when a simple harmonic oscillator model and white noise are used in curve fitting. In this note, several fitting strategies are described that reduce this difference to about 11%. © 2011 American Institute of Physics
A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object
NASA Astrophysics Data System (ADS)
Winkler, A. W.; Zagar, B. G.
2013-08-01
An important step in optical quality assurance of steel coils is to measure the width and radius of the coils as well as the relative position and orientation of the camera. This work estimates these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. To this end, an adaptive least-squares algorithm is applied to fit parametrized curves to the true coil outline detected in the acquired image. The employed model strictly separates the intrinsic and the extrinsic parameters, so the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized to measure other solids that cannot be characterized by simple geometric primitives.
NASA Astrophysics Data System (ADS)
Zou, Yuan; Shen, Tianxing
2013-03-01
Beyond illuminance calculation in architectural and luminous environment design, and to provide a wider variety of photometric data, this paper presents a way of combining luminous environment design with the SM light environment measuring system, which comprises a set of experimental devices, including light-information collecting and processing modules, and can supply various types of photometric data. We introduce a simulation method for calibration, which mainly consists of rebuilding the experimental scenes in 3ds Max Design, calibrating this computer-aided design software in the simulated environment under various typical light sources, and fitting the exposure curves of the rendered images. The operating sequence and the points requiring attention during the simulated calibration are summarized, and connections between the Mental Ray renderer and the SM light environment measuring system are established. The paper thus offers a useful reference for coordinating luminous environment design with the SM light environment measuring system.
NASA Astrophysics Data System (ADS)
Jepsen, S. M.; Harmon, T. C.; Shi, Y.
2016-04-01
Calibration of watershed models to the shape of the base flow recession curve is a way to capture the important relationship between groundwater discharge and subsurface water storage in a catchment. In some montane Mediterranean regions, such as the midelevation Providence Creek catchment in the southern Sierra Nevada of California (USA), nearly all base flow recession occurs after snowmelt, and during this time evapotranspiration (ET) usually exceeds base flow. We assess the accuracy to which watershed models can be calibrated to ET-dominated base flow recession in Providence Creek, both in terms of fitting a discharge time-series and realistically capturing the observed discharge-storage relationship for the catchment. Model parameters estimated from calibrations to ET-dominated recession are compared to parameters estimated from reference calibrations to base flow recession with ET-effects removed ("potential recession"). We employ the Penn State Integrated Hydrologic Model (PIHM) for simulations of base flow and ET, and methods that are otherwise general in nature. In models calibrated to ET-dominated recession, simulation errors in ET and the targeted relationship for recession (-dQ/dt versus Q) contribute substantially (up to 57% and 46%, respectively) to overestimates in the discharge-storage differential, defined as d(lnQ)/dS, relative to that derived from water flux observations. These errors result in overestimates of deep-subsurface hydraulic conductivity in models calibrated to ET-dominated recession, by up to an order of magnitude, relative to reference calibrations to potential recession. These results illustrate a potential opportunity for improving model representation of discharge-storage dynamics by calibrating to the shape of base flow recession after removing the complicating effects of ET.
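The recession relationship targeted in such calibrations, −dQ/dt versus Q, is often summarized by a power law −dQ/dt = a·Q^b. A minimal sketch of extracting (a, b) from a discharge time series is shown below; the synthetic series in the test is illustrative, not the Providence Creek record:

```python
import math

def recession_fit(times, flows):
    """Fit -dQ/dt = a * Q**b by linear regression in log-log space,
    estimating dQ/dt with centered finite differences and keeping
    only receding (dQ/dt < 0) points."""
    xs, ys = [], []
    for i in range(1, len(flows) - 1):
        dqdt = (flows[i + 1] - flows[i - 1]) / (times[i + 1] - times[i - 1])
        if dqdt < 0.0:
            xs.append(math.log(flows[i]))
            ys.append(math.log(-dqdt))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b
```

In practice the flows would first be corrected for ET (the point of the study above); this sketch only shows the -dQ/dt versus Q regression itself.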
NASA Astrophysics Data System (ADS)
Kilian, Gladiné; Pieter, Muyshondt; Joris, Dirckx
2016-06-01
Laser Doppler vibrometry is an intrinsically highly linear measurement technique, which makes it a great tool for measuring extremely small nonlinearities in the vibration response of a system. Although the measurement technique itself is highly linear, other components in the experimental setup may introduce nonlinearities. An important source of artificially introduced nonlinearity is the speaker that generates the stimulus. In this work, two correction methods to remove the effects of stimulus nonlinearity are investigated; both were found to give similar results but have different pros and cons. The aim of this work is to investigate the importance of the conical shape of the eardrum as a source of nonlinearity in hearing. We present measurements on flat and indented membranes. The data show that the curved membrane exhibits slightly higher levels of nonlinearity than the flat membrane.
NASA Astrophysics Data System (ADS)
Cigeroglu, Ender; Samandari, Hamed
2014-11-01
Nonlinear free vibration of curved double-walled carbon nanotubes (DWCNTs) embedded in an elastic medium is studied. The nonlinearities considered are due to large deflection of the carbon nanotubes (geometric nonlinearity) and to the nonlinear interlayer van der Waals forces between the inner and outer tubes. The differential quadrature method (DQM) is utilized to discretize the partial differential equations of motion in the spatial domain, which results in a nonlinear set of algebraic equations of motion. The effects of the nonlinearities, different end conditions, initial curvature, stiffness of the surrounding elastic medium, and vibrational modes on the nonlinear free vibration of DWCNTs are studied. Results show that it is possible to detect different vibration modes occurring at a single vibration frequency when the CNTs vibrate in the out-of-phase mode. Moreover, it is observed that the boundary conditions have a significant effect on the nonlinear natural frequencies of the DWCNT, including multiple solutions.
Pearcey solitons in curved nonlinear photonic caustic lattices
NASA Astrophysics Data System (ADS)
Zannotti, A.; Rüschenbaum, M.; Denz, C.
2017-09-01
Controlling artificial Pearcey and swallowtail beams allows the realization of caustic lattices in nonlinear photosensitive media at very low light intensities. We examine their functionality as 2D and 3D waveguiding structures and show the potential of exploiting these lattices as linear beam splitters, which we name a ‘Pearcey-Y-splitter’. For symmetrized Pearcey beams as auto-focusing beams, the formation of solitons in focusing nonlinearity is observed. Our original approach represents the first realization of caustic photonic lattices and can directly be applied in signal processing, microscopy and material lithography.
A weakly nonlinear theory for wave-vortex interactions in curved channel flow
NASA Technical Reports Server (NTRS)
Singer, Bart A.; Erlebacher, Gordon; Zang, Thomas A.
1992-01-01
A weakly nonlinear theory is developed to study the interaction of Tollmien-Schlichting (TS) waves and Dean vortices in curved channel flow. The predictions obtained from the theory agree well with results obtained from direct numerical simulations of curved channel flow, especially for low amplitude disturbances. Some discrepancies in the results of a previous theory with direct numerical simulations are resolved.
Light bullets in nonlinear periodically curved waveguide arrays
Matuszewski, Michal; Garanovich, Ivan L.; Sukhorukov, Andrey A.
2010-04-15
We predict that stable mobile spatiotemporal solitons can exist in arrays of periodically curved optical waveguides. We find two-dimensional light bullets in planar arrays with harmonic waveguide bending and three-dimensional bullets in square lattices with helical waveguide bending using variational formalism. Stability of the light-bullet solutions is confirmed by the direct numerical simulations which show that the light bullets can freely move across the curved arrays. This mobility property is a distinguishing characteristic compared to previously considered discrete light bullets which were trapped to a specific lattice site. These results suggest new possibilities for flexible spatiotemporal manipulation of optical pulses in photonic lattices.
Calibration for nonlinear mixed effects models: an application to the withdrawal time prediction.
Concordet, D; Nunez, O G
2000-12-01
We propose calibration methods for nonlinear mixed effects models. Using an estimator whose asymptotic properties are known, four different statistics are used to perform the calibration. Simulations are carried out to compare the performance of these statistics. Finally, the milk discard time prediction of an antibiotic, which has motivated this study, is performed on real data.
A new calibration curve for carbonate clumped isotope thermometer of land snail shells (aragonite)
NASA Astrophysics Data System (ADS)
Zhang, N.; Yamada, K.; Yoshida, N.
2013-12-01
Clumped isotope data (Δ47) of carbonate are considered a useful tool to infer both the temperature and the oxygen isotopic composition of the water in which the carbonate grew [1]. Zaarur et al. reported the relationship between snail shell calcification temperatures and the mean annual/activity-season ambient temperatures based on a calibration curve established by Ghosh et al. [2]. However, the clumped isotope temperature is consistently higher than the environmental temperature. To better understand this phenomenon, we present a new empirical calibration curve based on land snail shells (aragonite) cultured in a controlled temperature environment. In 2012, we cultured land snails of the genus 'Euhadra' collected from Yokohama, Japan. They were cultured from eggs to adults over about 6-8 months at temperatures of 20 °C, 25 °C and 30 °C, respectively. Each temperature group contained 15-20 snails. All of them were fed cabbage during their life span. To study the effect of ingested carbonate, some were fed Ca3(PO4)2 powder while others were fed CaCO3 powder. Clumped isotope data for all samples were analyzed on a Thermo Finnigan MAT 253 mass spectrometer and calibrated to an 'absolute reference frame' [3]. We found an empirical linear relationship between Δ47 and the controlled ambient temperature, which deviates slightly from the published theoretical and experimental calibration curves based on both inorganic and biogenic materials. We will discuss the potential controlling factors behind this deviation in combination with the land snail growth environment. [1] Ghosh et al., 2006, Geochimica et Cosmochimica Acta 70, 1439-1456. [2] Zaarur et al., 2011, Geochimica et Cosmochimica Acta 75, 6859-6869. [3] Dennis et al., 2011, Geochimica et Cosmochimica Acta 75, 7117-7131.
Experimental Study on Nonlinear Vibrations of Fixed-Fixed Curved Beams
NASA Astrophysics Data System (ADS)
Kumar, Ajay; Patel, B. P.
2016-07-01
The nonlinear dynamic behavior of fixed-fixed shallow and deep curved beams is studied experimentally using a non-contact electromagnetic shaker and acceleration measurements. The frequency response obtained from the acceleration measurements is found to be in fairly good agreement with the computational response. A travelling-wave phenomenon, along with the participation of higher harmonics and softening nonlinearity, is observed. Experimental results on the internal resonance of curved beams due to direct excitation of an anti-symmetric mode are reported for the first time. The deep curved beam exhibits chaotic response at higher excitation amplitudes.
Perspectives on Geometrodynamics: The Nonlinear Dynamics of Curved Spacetime
NASA Astrophysics Data System (ADS)
Thorne, Kip S.
2012-03-01
In the 1950s John Archibald Wheeler exhorted his students and colleagues to explore ``Geometrodynamics,'' i.e. the dynamical behavior of curved spacetime, as predicted by Einstein's general relativity theory. Unfortunately, the research tools of that era were inadequate for the task. This has changed over the past ten years and will change further in the coming decade, thanks to two new sets of tools - numerical relativity, and gravitational wave observations, coupled to theory. In this lecture, I will review the progress and prospects for geometrodynamics, focusing especially on: 1. Geometrodynamics near singularities, 2. Geometrodynamics triggered by colliding black holes, 3. Geometrodynamics triggered by black-string instabilities in four space dimensions, and 4. Preparations for observing the dynamics of curved spacetime with interferometric gravitational wave detectors: LIGO and its international partners.
NASA Astrophysics Data System (ADS)
Sun, Limin; Chen, Lin
2017-10-01
Residual mode correction is found crucial in calibrating linear resonant absorbers for flexible structures. The classic modal representation augmented with stiffness and inertia correction terms accounting for non-resonant modes improves the calibration accuracy and meanwhile avoids complex modal analysis of the full system. This paper explores the augmented modal representation in calibrating control devices with nonlinearity, by studying a taut cable attached with a general viscous damper and its Equivalent Dynamic Systems (EDSs), i.e. the augmented modal representations connected to the same damper. As nonlinearity is concerned, Frequency Response Functions (FRFs) of the EDSs are investigated in detail for parameter calibration, using the harmonic balance method in combination with numerical continuation. The FRFs of the EDSs and corresponding calibration results are then compared with those of the full system documented in the literature for varied structural modes, damper locations and nonlinearity. General agreement is found and in particular the EDS with both stiffness and inertia corrections (quasi-dynamic correction) performs best among available approximate methods. This indicates that the augmented modal representation although derived from linear cases is applicable to a relatively wide range of damper nonlinearity. Calibration of nonlinear devices by this means still requires numerical analysis while the efficiency is largely improved owing to the system order reduction.
PV Degradation Curves: Non-Linearities and Failure Modes
Jordan, Dirk C.; Silverman, Timothy J.; Sekulic, Bill; Kurtz, Sarah R.
2016-09-03
Photovoltaic (PV) reliability and durability have seen increased interest in recent years. Historically, and as a reasonable first approximation, linear degradation rates have been used to quantify long-term module and system performance. The underlying assumption of linearity can be violated at the beginning of life, as has been well documented, especially for thin-film technologies. Non-linearities in the wear-out phase can also have significant economic impact and appear to be linked to different failure modes. Moreover, associating specific degradation and failure modes with specific time-series behavior will aid in duplicating these degradation modes in accelerated tests and, eventually, in service life prediction. In this paper, we discuss how some degradation modes may cause approximately linear degradation within the measurement uncertainty (e.g., modules mainly affected by encapsulant discoloration) while others lead to distinctly non-linear degradation (e.g., hot spots caused by cracked cells, or solder bond failures and corrosion). The various behaviors are summarized with the goal of aiding predictions of what may be seen in other systems.
Determination of Tafel Constants in Nonlinear Polarization Curves.
1987-12-01
For a cubic spline segment p_i(x) = a_i (x - x_i)^3 + b_i (x - x_i)^2 + c_i (x - x_i) + d_i, with h_i the node spacing, y_i the data values, and S_i the second derivative at node i, the coefficients are a_i = (S_{i+1} - S_i)/(6 h_i), b_i = S_i/2, c_i = (y_{i+1} - y_i)/h_i - (2 h_i S_i + h_i S_{i+1})/6, and d_i = y_i. The resulting system of n - 2 equations in the S_i uses the n pairs of data points to generate the required number of equations. To arrive at the two additional equations needed to solve for S_1 and S_n, constraints are specified which pertain to the conditions at the ends of the curve. The three choices for the end conditions are: 1. S_1 = S_n = 0.
Mano, Shuhei; Suto, Yumiko
2014-11-01
The dicentric chromosome assay (DCA) is one of the most sensitive and reliable methods of inferring radiation exposure doses in patients. In DCA, a calibration curve is prepared in advance by in vitro irradiation of blood samples from one, or sometimes multiple, healthy donors to account for possible inter-individual variability. Although the standard method has been demonstrated to be quite accurate for actual dose estimates, it cannot account for random effects arising from, for example, the blood donor used to prepare the calibration curve, the radiation-exposed patient, and the examiners. To date, it has been unknown how these random effects impact the standard method of dose estimation. We propose a novel Bayesian hierarchical method that incorporates random effects into the dose estimation. To demonstrate dose estimation by the proposed method and to assess the impact of inter-individual variability in samples from multiple donors, peripheral blood samples from 13 occupationally non-exposed, non-smoking, healthy individuals were collected and irradiated with gamma rays. The results clearly showed significant inter-individual variability, and the standard method using a sample from a single donor gave an anti-conservative confidence interval for the irradiated dose. In contrast, the Bayesian credible interval calculated by the proposed method using samples from multiple donors properly covered the actual doses. Although the classical confidence interval of the calibration curve accounting for inter-individual variability in samples from multiple donors roughly coincided with the Bayesian credible interval, the proposed method has better reasoning and potential for extensions.
High electric field measurement with slab coupled optical sensors using nonlinear calibration
NASA Astrophysics Data System (ADS)
Stan, Nikola; Shumway, Legrand; Seng, Frederick; King, Rex; Selfridge, Richard; Schultz, Stephen
2015-05-01
We describe the application of SCOS technology to non-intrusive, directional, and spatially localized measurements of high electric fields. When measuring electric fields above a certain threshold, the SCOS measurement sensitivity starts to vary considerably, and the linear approximation that assumes constant sensitivity breaks down. A comprehensive nonlinear calibration method, which accounts for the variation of sensitivity with field, is therefore required for accurate calibration of both low and high electric fields, while linear calibration can only be applied accurately at low fields. We analyze and compare the two calibration methods by applying them to the same set of measurements. We measure electric field pulses with magnitudes from 1 MV/m to 8.2 MV/m, with sub-300 ns rise times and a fall-off time constant of 60 μs. We show that the nonlinear calibration accurately predicts all measured fields, both high and low, while the linear calibration becomes increasingly inaccurate for fields above 1 MV/m.
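The difference between the two calibrations can be sketched generically: if the sensitivity s(E) varies with field, the output is V(E) = ∫₀ᴱ s(e) de, and nonlinear calibration inverts this map rather than dividing by a constant s(0). The sensitivity function in the test below is hypothetical, not the SCOS response:

```python
from bisect import bisect_left

def build_inverse_map(sensitivity, e_max, steps=10000):
    """Tabulate output V(E) = integral_0^E s(e) de by the midpoint rule and
    return a function inverting it by piecewise-linear interpolation
    (the nonlinear calibration).  A linear calibration would instead
    return V / s(0), which is only accurate where s is nearly constant."""
    de = e_max / steps
    es, vs = [0.0], [0.0]
    v = 0.0
    for k in range(steps):
        v += sensitivity((k + 0.5) * de) * de   # midpoint-rule integration
        es.append((k + 1) * de)
        vs.append(v)

    def field_from_output(v_meas):
        i = min(max(bisect_left(vs, v_meas), 1), steps)
        t = (v_meas - vs[i - 1]) / (vs[i] - vs[i - 1])
        return es[i - 1] + t * (es[i] - es[i - 1])
    return field_from_output
```

For the test sensitivity s(E) = 1/(1 + E), the exact output is V(E) = ln(1 + E); at E = 5 the linear estimate V/s(0) = ln 6 ≈ 1.79 is badly low, while the inverse map recovers 5, mirroring the low-field/high-field contrast described above.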
Nonlinearities and adaptation of color vision from sequential principal curves analysis.
Laparra, Valero; Jiménez, Sandra; Camps-Valls, Gustavo; Malo, Jesús
2012-10-01
Mechanisms of human color vision are characterized by two phenomenological aspects: the system is nonlinear and it adapts to changing environments. Conventional attempts to derive these features from statistics use separate arguments for each aspect. The few statistical explanations that consider both phenomena simultaneously follow parametric formulations based on empirical models; it may therefore be argued that the behavior does not come directly from the color statistics but from the convenient functional form adopted. In addition, the statistical analysis is often based on simplified databases that disregard relevant physical effects in the input signal, for instance by assuming flat Lambertian surfaces. In this work, we address the simultaneous statistical explanation of the nonlinear behavior of achromatic and chromatic mechanisms in a fixed adaptation state and of the change of such behavior (i.e., adaptation) under changing observation conditions. Both phenomena emerge directly from the samples through a single data-driven method: sequential principal curves analysis (SPCA) with a local metric. SPCA is a new manifold learning technique to derive a set of sensors adapted to the manifold using different optimality criteria. Here "sequential" refers to the fact that the sensors (curvilinear dimensions) are designed one after the other, not to the particular (eventually iterative) method used to draw a single principal curve. Moreover, in order to reproduce the empirical adaptation reported under D65 and A illuminations, a new database of colorimetrically calibrated images of natural objects under these illuminants was gathered, thus overcoming the limitations of available databases. The results obtained by applying SPCA show that the psychophysical behavior on color discrimination thresholds, discounting of the illuminant, and corresponding pairs in asymmetric color matching emerge directly from realistic data regularities, assuming no a priori
Construction of Calibration Curve for Premature Chromosome Condensation Assay for Dose Assessment
Neronova, Elizaveta G.
2016-01-01
Cytogenetic dosimetry plays an important role in the triage and medical management of affected people in radiological incidents/accidents. Cytogenetic biodosimetry uses different methods to estimate the absorbed dose in the exposed individuals, and each approach has its advantages and disadvantages. Premature chromosome condensation (PCC) assay presents several advantages that hopefully fulfill the gaps identified in the other cytogenetic methods. To introduce this technique into the panel of other cytogenetic methods, a calibration curve for PCC after γ-irradiation was generated for our laboratory. PMID:28217285
Dating the time of birth: A radiocarbon calibration curve for human eye-lens crystallines
NASA Astrophysics Data System (ADS)
Kjeldsen, Henrik; Heinemeier, Jan; Heegaard, Steffen; Jacobsen, Christina; Lynnerup, Niels
2010-04-01
Radiocarbon bomb-pulse dating has been used to measure the formation age of human eye-lens crystallines. Lens crystallines are special proteins in the eye-lens that consist of virtually inert tissue. The experimental data show that the radiocarbon ages to a large extent reflect the time of birth, in accordance with expectations. Moreover, it has been possible to develop an age model for the formation of the eye-lens crystallines. From this model a radiocarbon calibration curve for lens crystallines has been calculated. As a consequence, the time of birth of humans can be determined with an accuracy of a few years by radiocarbon dating.
NASA Astrophysics Data System (ADS)
Sikorska, Anna E.; Renard, Benjamin
2017-07-01
Hydrological models are typically calibrated with discharge time series derived from a rating curve, which is subject to parametric and structural uncertainties that are usually neglected. In this work, we develop a Bayesian approach to probabilistically represent parametric and structural rating curve errors in the calibration of hydrological models. To achieve this, we couple the hydrological model with the inverse rating curve, yielding a rainfall-stage model that is calibrated in stage space. Acknowledging the uncertainties of the hydrological and rating curve models allows assessing their contributions to the total uncertainties of stages and discharges. Our results from a case study in France indicate that (a) ignoring rating curve uncertainty leads to changes in hydrological parameters, and (b) the structural uncertainty of the hydrological model dominates other uncertainty sources. The paper ends by discussing key challenges that remain to be addressed to achieve a meaningful quantification of the various uncertainty sources that affect hydrological models, including input errors.
Sivaganesan, Mano; Seifring, Shawn; Varma, Manju; Haugland, Richard A; Shanks, Orin C
2008-02-25
In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignored in calibration calculations and in some cases are impossible to characterize. A flexible statistical method that can account for uncertainty between plasmid and genomic DNA targets, replicate testing, and experiment-to-experiment variability is needed to estimate calibration curve parameters such as intercept and slope. Here we report the use of a Bayesian approach to generate calibration curves for the enumeration of target DNA from genomic DNA samples using absolute plasmid DNA standards. Instead of the two traditional methods (classical and inverse), Markov chain Monte Carlo (MCMC) estimation was used to generate single, master, and modified calibration curves. The mean and the percentiles of the posterior distribution were used as point and interval estimates of unknown parameters such as intercepts, slopes, and DNA concentrations. The software WinBUGS was used to perform all simulations and to generate the posterior distributions of all the unknown parameters of interest. The Bayesian approach defined in this study allowed for the estimation of DNA concentrations from environmental samples using absolute standard curves generated by real-time qPCR. The approach accounted for uncertainty from multiple sources, such as experiment-to-experiment variation, variability between replicate measurements, and uncertainty introduced when employing calibration curves generated from absolute plasmid DNA standards.
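As a rough illustration of the Bayesian idea (though not the hierarchical WinBUGS model used in the study), a random-walk Metropolis sampler can recover the posterior of a calibration line's intercept and slope from qPCR standards. The Ct values, noise level, and flat priors below are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic standards: Ct = intercept + slope * log10(copies) + noise.
# Triplicates at 10^2..10^7 copies; parameters are illustrative.
log10_copies = np.repeat(np.arange(2, 8), 3).astype(float)
true_intercept, true_slope, sigma = 38.0, -3.3, 0.2
ct = true_intercept + true_slope * log10_copies \
     + rng.normal(0.0, sigma, log10_copies.size)

def log_post(theta):
    """Log-posterior: Gaussian likelihood, known sigma, flat priors."""
    b0, b1 = theta
    resid = ct - (b0 + b1 * log10_copies)
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis over (intercept, slope).
theta = np.array([35.0, -3.0])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

post = np.array(samples[5000:])   # discard burn-in
b0_hat, b1_hat = post.mean(axis=0)
```

Posterior percentiles of `post` give the interval estimates the abstract describes; a full treatment would also sample sigma and the between-experiment variance.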
Fitting Nonlinear Curves by use of Optimization Techniques
NASA Technical Reports Server (NTRS)
Hill, Scott A.
2005-01-01
MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. By use of the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
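The Levenberg-Marquardt engine mentioned above can be sketched in a few lines. This is a generic textbook implementation fitting an invented exponential model to noise-free data, not MULTIVAR's FORTRAN 77 code.

```python
import numpy as np

# Synthetic data from a nonlinear model y = a * exp(b * x); illustrative only.
x = np.linspace(0.0, 2.0, 21)
a_true, b_true = 2.5, -1.3
y = a_true * np.exp(b_true * x)

def model(p):
    return p[0] * np.exp(p[1] * x)

def jacobian(p):
    # Partial derivatives of the residuals w.r.t. (a, b).
    e = np.exp(p[1] * x)
    return np.column_stack([e, p[0] * x * e])

def levenberg_marquardt(p, lam=1e-3, iters=100):
    """Minimize sum of squared residuals with Marquardt's diagonal damping."""
    for _ in range(iters):
        r = model(p) - y
        J = jacobian(p)
        A = J.T @ J
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -J.T @ r)
        p_new = p + step
        if np.sum((model(p_new) - y) ** 2) < np.sum(r**2):
            p, lam = p_new, lam * 0.5   # accept: behave more like Gauss-Newton
        else:
            lam *= 2.0                  # reject: damp toward gradient descent
    return p

a_hat, b_hat = levenberg_marquardt(np.array([1.0, -0.5]))
```

The damping parameter interpolates between Gauss-Newton (small lam) and steepest descent (large lam), which is what makes the method robust to poor starting guesses.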
Yang, Weichun; Sun, Xiuhua; Wang, Hsiang-Yu; Woolley, Adam T
2009-10-01
Detection and accurate quantitation of biomarkers such as alpha-fetoprotein (AFP) can be a key aspect of early stage cancer diagnosis. Microfluidic devices provide attractive analysis capabilities, including low sample and reagent consumption, as well as short assay times. However, to date microfluidic analyzers have relied almost exclusively on calibration curves for sample quantitation, which can be problematic for complex mixtures such as human serum. We have fabricated integrated polymer microfluidic systems that can quantitatively determine fluorescently labeled AFP in human serum using either the method of standard addition or a calibration curve. Our microdevices couple an immunoaffinity purification step with rapid microchip electrophoresis separation in a laser-induced fluorescence detection system, all under automated voltage control in a miniaturized polymer microchip. In conjunction with laser-induced fluorescence detection, these systems can quantify AFP at approximately 1 ng/mL levels in approximately 10 microL of human serum in a few tens of minutes. Our polymer microdevices have been applied in determining AFP in spiked serum samples. These integrated microsystems offer excellent potential for rapid, simple, and accurate biomarker quantitation in a point-of-care setting.
Effect of downscaling on the linearity range of a calibration curve in spectrofluorimetry.
Kwapiszewski, Radoslaw; Szczudlowska, Justyna; Kwapiszewska, Karina; Dybko, Artur; Brzozka, Zbigniew
2014-07-01
Interest in the microfluidic environment, owing to its unique physical properties, is increasing in many areas of innovative chemical, biological, and medicinal research. The possibility of exploiting and using new phenomena makes the microscale a powerful tool to improve currently used macroscopic methods and approaches. Previously, we reported that an increase in the surface area to volume ratio of a measuring cell could provide a wider linear range for fluorescein (Kwapiszewski et al., Anal. Bioanal. Chem. 403:151-155, 2012). Here, we present a broader study in this field to confirm the assumptions we presented before. We studied fluorophores with a large and a small Stokes shift using a standard cuvette and fabricated microfluidic detection cells having different surface area to volume ratios. We analyzed the effect of different configurations of the detection cell on the measured fluorescence signal. We also took into consideration the effect of concentration on the emission spectrum, and the effect of the surface area to volume ratio on the limit of linearity of the response of the selected fluorophores. We observed that downscaling, leading to an increase in the probability of collisions between molecules and cell walls with no energy transfer, results in an increase in the limit of linearity of the calibration curve of fluorophores. The results obtained suggest that microfluidic systems can be an alternative to the currently used approaches for widening the linearity of a calibration curve. Therefore, microsystems can be useful for studies of optically dense samples and samples that should not be diluted.
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran; Lung, Shun-Fat
2017-01-01
For shape predictions of structures under large geometrically nonlinear deformations, Curved Displacement Transfer Functions were formulated based on a curved displacement, traced by a material point from the undeformed position to the deformed position. The embedded beam (depth-wise cross section of a structure along a surface strain-sensing line) was discretized into multiple small domains, with domain junctures matching the strain-sensing stations. Thus, the surface strain distribution could be described with a piecewise linear or a piecewise nonlinear function. The discretization approach enabled piecewise integrations of the embedded-beam curvature equations to yield the Curved Displacement Transfer Functions, expressed in terms of embedded beam geometrical parameters and surface strains. By entering the surface strain data into the Displacement Transfer Functions, deflections along each embedded beam can be calculated at multiple points for mapping the overall structural deformed shapes. Finite-element linear and nonlinear analyses of a tapered cantilever tubular beam were performed to generate linear and nonlinear surface strains and the associated deflections to be used for validation. The shape prediction accuracies were then determined by comparing the theoretical deflections with the finite-element-generated deflections. The results show that the newly developed Curved Displacement Transfer Functions are very accurate for shape predictions of structures under large geometrically nonlinear deformations.
Nonlinear model calibration of a shear wall building using time and frequency data features
NASA Astrophysics Data System (ADS)
Asgarieh, Eliyar; Moaveni, Babak; Barbosa, Andre R.; Chatzi, Eleni
2017-02-01
This paper investigates the effects of different factors on the performance of nonlinear model updating for a seven-story shear wall building model. The accuracy of calibrated models using different data features and modeling assumptions is studied by comparing the time and frequency responses of the models with the exact simulated ones. Simplified nonlinear finite element models of the shear wall building are calibrated so that the misfit between the considered response data features of the models and the structure is minimized. A refined FE model of the test structure, which was calibrated manually to match the shake table test data, is used instead of the real structure for this performance evaluation study. The simplified parsimonious FE models are composed of simple nonlinear beam-column fiber elements, with nonlinearity introduced by assigning generated hysteretic nonlinear material behaviors to the uniaxial stress-strain relationships of the fibers. Four different types of data features and their combinations are used for model calibration: (1) time-varying instantaneous modal parameters, (2) displacement time histories, (3) acceleration time histories, and (4) dissipated hysteretic energy. It has been observed that the calibrated simplified FE models can accurately predict the nonlinear structural response in the absence of significant modeling errors. In the last part of this study, the physics-based models are further simplified for casting into state-space formulation, and a real-time identification is performed using an Unscented Kalman filter. It has been shown that the performance of calibrated state-space models can be satisfactory when reasonable modeling assumptions are used.
Vargha, Gergely; Milton, Martin; Cox, Maurice; Kamvissis, Sarantis
2005-01-14
Quantitative analysis of natural gas depends on the calibration of a gas chromatograph with certified gas mixtures and the determination of a response relationship for each species by regression analysis. The uncertainty in this calibration is dominated by variations in the amount of the sample used for each analysis that are strongly correlated for all species measured in the same run. The "harmonisation" method described here minimises the influence of these correlations on the calculated calibration curves and leads to a reduction in the root-mean-square residual deviations from the fitted curve of a factor between 2 and 5. Consequently, it removes the requirement for each run in the calibration procedure to be carried out under the same external conditions, and opens the possibility that new data, measured under different environmental or instrumental conditions, can be appended to an existing calibration database.
NASA Astrophysics Data System (ADS)
Noh, Jeong Hoon; Seo, Jeong Min; Hwang, Beong Bok
2011-04-01
This paper examines the sensitivity of finite-element-generated friction calibration curves in a ring compression test to the frictional shear factor. Different calibration curves were investigated by measuring dimensional changes at different positions of a ring specimen, including changes in the internal diameter at the middle and top sections, the outer diameter at the middle and top sections, and the degree of surface expansion at the top surface. The initial ring geometries employed in the analysis maintain a fixed outer diameter : inner diameter : thickness ratio of 6:3:2, generally known as a `standard' specimen, solely to isolate the sensitivity of the calibration curves, for measurements at different positions, to the frictional shear factor. A perfectly plastic material was modeled for the simulations using a rigid-plastic finite element code. Analyses were performed within a definite range of friction as well as over the entire range of friction to uncover the different sensitivities of the calibration curves to interfacial friction under different friction ranges. The results of this investigation are summarized in terms of a dimensionless gradient. The friction calibration curves based on measurements of the dimensional changes at different positions of the ring specimen show different degrees of linearity and sensitivity to the frictional condition on the contact surface. Among them, the calibration curve based on changes in the degree of surface expansion at the contact boundary was found to be relatively linear and sensitive to the frictional condition over the entire range of friction.
NASA Astrophysics Data System (ADS)
Brown, James W.; Brown, Otis B.; Evans, Robert H.
1993-10-01
A detailed reanalysis of the calibration procedures for the National Oceanic and Atmospheric Administration (NOAA) advanced very high resolution radiometer (AVHRR) based on thermal-vacuum test data was performed as part of the National Aeronautics and Space Administration/NOAA AVHRR Pathfinder Project. This effort, a follow-up to work by Brown et al. (1985), was motivated by the finding that the AVHRR instruments on several NOAA platforms have been routinely operated outside the range of thermal-vacuum test results, and thus one could not interpolate nonlinear corrections directly from earlier methods. These new calibration procedures permit calculation of nonlinear temperature corrections for any AVHRR operating temperature based on a second-order polynomial regression with a total calibration accuracy relative to an external calibration standard of less than two digital counts (±0.2°C). Such an improvement is quite important to the absolute accuracy of surface thermal fields, which are derived from these data utilizing various multichannel atmospheric water vapor correction schemes. We find systematic differences between the newly derived nonlinear correction results and those reported previously by Weinreb et al. (1990) and the original reference material in the various addenda to NOAA NESS Technical Memorandum 107 (Lauritson et al., 1979). Calibration results for various AVHRR radiometers show instrument-similar corrections for each band. Radiometers on NOAA platforms 8-12 demonstrate similar nonlinearities.
Nonlinear Gompertz Curve Models of Achievement Gaps in Mathematics and Reading
ERIC Educational Resources Information Center
Cameron, Claire E.; Grimm, Kevin J.; Steele, Joel S.; Castro-Schilo, Laura; Grissmer, David W.
2015-01-01
This study examined achievement trajectories in mathematics and reading from school entry through the end of middle school with linear and nonlinear growth curves in 2 large longitudinal data sets (National Longitudinal Study of Youth--Children and Young Adults and Early Childhood Longitudinal Study--Kindergarten Cohort [ECLS-K]). The S-shaped…
Critical Curve for p- q Systems of Nonlinear Wave Equations in Three Space Dimensions
NASA Astrophysics Data System (ADS)
Agemi, Rentaro; Kurokawa, Yuki; Takamura, Hiroyuki
2000-10-01
The existence of the critical curve for p-q systems for nonlinear wave equations was already established by D. Del Santo, V. Georgiev, and E. Mitidieri [1997, Global existence of the solutions and formation of singularities for a class of hyperbolic systems, in “Geometric Optics and Related Topics” (F. Colombini and N. Lerner, Eds.), Progress in Nonlinear Differential Equations and Their Applications, Vol. 32, pp. 117-139, Birkhäuser, Basel] except for the critical case. Our main purpose is to prove a blow-up theorem for which the nonlinearity (p, q) is just on the critical curve in three space dimensions. Moreover, the lower and upper bounds of the lifespan of solutions are precisely estimated, including the sub-critical case.
NASA Astrophysics Data System (ADS)
Duc, Nguyen Dinh; Quan, Tran Quoc
2012-09-01
An analytical investigation into the nonlinear response of thick functionally graded double-curved shallow panels resting on elastic foundations and subjected to thermal and thermomechanical loads is presented. Young's modulus and Poisson's ratio are both graded in the thickness direction according to a simple power-law distribution in terms of volume fractions of constituents. All formulations are based on the classical shell theory with account of geometrical nonlinearity and initial geometrical imperfection in the cases of Pasternak-type elastic foundations. By applying the Galerkin method, explicit relations for the thermal load-deflection curves of simply supported curved panels are found. The effects of material and geometrical properties and foundation stiffness on the buckling and postbuckling load-carrying capacity of the panels in thermal environments are analyzed and discussed.
Xu, Guan; Sun, Lina; Li, Xiaotao; Su, Jian; Hao, Zhaobing; Lu, Xue
2014-09-08
We demonstrate a global calibration method for the laser plane using a 3D calibration board to generate the two horizontal coordinates and a height gauge to generate the height coordinate of the point in the laser plane. A sigmoid-Gaussian function for the candidate centers is employed to normalize the eigenvalues of the Hessian matrix to prevent missed centers or multiple centers. Camera calibration and laser plane calibration are then accomplished at the same time. Finally, the reconstructed 3D points are transformed to the horizontal plane by the forward process, which involves one translation and two rotations. The parametric equation of the 3D curve is reconstructed by the inverse process performed on the 2D fitting curve.
A non-linear camera calibration with modified teaching-learning-based optimization algorithm
NASA Astrophysics Data System (ADS)
Zhang, Buyang; Yang, Hua; Yang, Shuo
2015-12-01
In this paper, we put forward a novel approach based on a hierarchical teaching-and-learning-based optimization (HTLBO) algorithm for nonlinear camera calibration. This algorithm simulates the teaching-learning interaction between teachers and learners in a classroom. Different from traditional calibration approaches, the proposed technique can find a near-optimal solution without accurate initial parameter estimates (only very loose parameter bounds are needed). With the introduction of a cascade of teaching, the convergence speed is rapid and the global search ability is improved. Results from our study demonstrate the excellent performance of the proposed technique in terms of convergence, accuracy, and robustness. The HTLBO can also be used to solve many other complex nonlinear calibration optimization problems owing to its good portability.
NASA Astrophysics Data System (ADS)
Hirose, Shigeo; Yoneda, Kan
A six-axial force sensor using the optical measuring technique and its nonlinear calibration method are proposed. The force sensor is based on a unit which has a small light source set face to face with a photosensor of the quarter-splitting type to measure minute displacements in two directions with respect to each other. Three sets of the unit are held by an elastic frame. The sensor, in comparison with the conventional strain-gauge-based device, is more compact, lighter in weight, lower in cost, and accurate. The high accuracy of the sensor comes from the calibration method, in which nonlinear interferences of the six-axial force are considered. An experimental setup in which the six-axial force sensor could be loaded simultaneously with multiple axial forces was produced, and the proposed calibration method was shown to be valid.
Solvent free energy curves for electron transfer reactions: A nonlinear solvent response model
NASA Astrophysics Data System (ADS)
Ichiye, Toshiko
1996-05-01
Marcus theory for electron transfer assumes a linear response of the solvent so that both the reactant and product free energy curves are parabolic functions of the solvent polarization, each with the same solvent force constant k characterizing the curvature. Simulation data by other workers indicate that the assumption of parabolic free energy curves is good for the Fe2+-Fe3+ self-exchange reaction but that the k of the reactant and product free energy curves are different for the reaction D0+A0→D1-+A1+. However, the fluctuations sampled in these simulations were not large enough to reach the activation barrier region, which was thus treated either by umbrella sampling or by parabolic extrapolation. Here, we present free energy curves calculated from a simple model of ionic solvation developed in an earlier paper by Hyun, Babu, and Ichiye, which we refer to here as the HBI model. The HBI model describes the nonlinearity of the solvent response due to the orientation of polar solvent molecules. Since it is a continuum model, it may be considered the first-order nonlinear correction to the linear response Born model. Moreover, in the limit of zero charge or infinite radius, the Born model and the Marcus relations are recovered. Here, the full free energy curves are calculated using analytic expressions from the HBI model. The HBI reactant and product curves have different k for D0+A0→D1-+A1+ as in the simulations, but examining the full curves shows they are nonparabolic due to the nonlinear response of the solvent. On the other hand, the HBI curves are close to parabolic for the Fe2+-Fe3+ reaction, also in agreement with simulations, while those for another self-exchange reaction D0-A1+ show greater deviations from parabolic behavior than the Fe2+-Fe3+ reaction. This indicates that transitions from neutral to charged species will have the largest deviations. Thus, the second moment of the polarization is shown to be a measure of the deviation from Marcus
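For context, the linear-response limit that the abstract takes as its reference point is the standard Marcus picture: reactant and product free energies are displaced parabolas along the solvent coordinate x with a common force constant k, from which the reorganization energy and activation free energy follow directly:

```latex
G_R(x) = \tfrac{1}{2}\,k\,x^{2}, \qquad
G_P(x) = \tfrac{1}{2}\,k\,(x - x_0)^{2} + \Delta G^{0},
```

```latex
\lambda = \tfrac{1}{2}\,k\,x_0^{2}, \qquad
\Delta G^{\ddagger} = \frac{\left(\lambda + \Delta G^{0}\right)^{2}}{4\lambda}.
```

The HBI model's nonlinear solvent response shows up precisely as deviations of the free energy curves from these parabolas, i.e., an x-dependent effective force constant.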
Witt, Matthias; Weber, Uli; Kellner, Daniel; Engenhart-Cabillic, Rita; Zink, Klemens
2015-09-01
For CT-based dose calculation in ion therapy, a link between the attenuation coefficients of photons and the stopping power of particles has to be provided. There are two commonly known approaches to establish such a calibration curve: the stoichiometric calibration and direct measurements with tissue substitutes or animal samples. Both methods were investigated and compared. As input for the stoichiometric calibration, the data from ICRP report 23 were compared to newly available data from ICRP report 110. By employing the newer data, no relevant difference could be observed. The differences between the two acquisition methods (direct measurement and stoichiometric calibration) were systematically analyzed and quantified. The most relevant change was caused by the exchange of carbon and oxygen content in the substitutes in comparison to the data of the ICRP reports, and results in a general overshoot of the Bragg peak. The consequence of the differences between the calibration curves was investigated with treatment planning studies and iso-range surfaces. Range differences of up to 6 mm in treatment plans of the head were observed. Additionally, two improvements are suggested that increase the accuracy of the calibration curve. Copyright © 2014. Published by Elsevier GmbH.
Exact Nonlinear Fourth-order Equation for Two Coupled Oscillators: Metamorphoses of Resonance Curves
NASA Astrophysics Data System (ADS)
Kyzioł, J.; Okniński, A.
We study the dynamics of two coupled periodically driven oscillators. The internal motion is separated off exactly to yield a nonlinear fourth-order equation describing the inner dynamics. Periodic steady-state solutions of the fourth-order equation are determined within the Krylov-Bogoliubov-Mitropolsky approach: we compute the amplitude profiles, which from a mathematical point of view are algebraic curves. In the present paper we investigate metamorphoses of amplitude profiles induced by changes of control parameters near singular points of these curves. It follows that the dynamics change qualitatively in the neighbourhood of a singular point.
Optimizing a nonlinear mathematical approach for the computerized analysis of mood curves.
Möller, H J; Leitner, M
1987-01-01
A nonlinear mathematical model for computerized description of mood curves is presented. This model reaches a high goodness of fit to the real data. It seems superior to two other models recently proposed. Using this model in a computer program for describing the mood data of a large sample of inpatients, significant and clinically meaningful group differences between the mood curves of schizophrenic, endogenous-depressive, and neurotic-depressive inpatients could be demonstrated. The application of the methodology might be helpful, e.g. in the field of evaluative research.
SU-E-T-391: Evaluation of Image Parameters Impact On the CT Calibration Curve for Proton Therapy
Xiao, Z; Reyhan, M; Huang, Q; Zhang, M; Yue, N; Chen, T
2015-06-15
Purpose: The calibration of Hounsfield units (HU) to relative proton stopping powers (RSP) is a crucial component in assuring the accurate delivery of proton therapy dose distributions to patients. The purpose of this work is to assess the uncertainty of CT calibration considering the impact of CT slice thickness, position of the plug within the phantom, and phantom size. Methods: The stoichiometric calibration method was employed to develop the CT calibration curve. A Gammex 467 tissue characterization phantom was scanned inside a TomoTherapy Cheese phantom and a Gammex 451 phantom using a GE CT scanner. Each plug was individually inserted into the same position of the inner and outer rings of the phantoms. Slice thicknesses of 1.25 mm and 2.5 mm were used; all other parameters were kept the same. Results: HU of selected human tissues were calculated based on the fitted coefficients (Kph, Kcoh, and KKN), and RSP were calculated according to the Bethe-Bloch equation. The calibration curve was obtained by fitting the Cheese phantom data with 1.25 mm slice thickness. There is no significant difference in soft tissue when the slice thickness, phantom size, or plug position changes. For bony structures, RSP increases by up to 1% when the phantom size and plug position change while the slice thickness is kept the same. However, if the slice thickness differs from the one used for the calibration curve, a 0.5%-3% deviation would be expected depending on the plug position; the inner position shows the largest deviation (about 2.5% on average). Conclusion: RSP shows a clinically insignificant deviation in the soft tissue region. Special attention may be required when using a slice thickness different from that of the calibration curve for bony structures. It is clinically practical to account for a 3% deviation due to different slice thicknesses when defining clinical margins.
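A stoichiometric HU-to-RSP calibration curve is typically applied at planning time as a piecewise-linear lookup between fitted anchor points. The sketch below illustrates that lookup step only; the anchor values are invented for the example and are not the scanner-specific curve from this work.

```python
import numpy as np

# Hypothetical calibration anchor points (HU -> relative stopping power).
# Real curves come from scanner-specific stoichiometric fits, not these values.
hu_points  = np.array([-1000.0, -100.0, 0.0, 100.0, 1000.0, 2000.0])
rsp_points = np.array([0.001, 0.93, 1.00, 1.07, 1.55, 2.10])

def hu_to_rsp(hu):
    """Piecewise-linear lookup of RSP from HU, clamped at the table ends."""
    return np.interp(hu, hu_points, rsp_points)
```

Because `np.interp` clamps outside the table, very dense implants would need explicit extrapolation or dedicated anchor points.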
Design curves for non-linear analysis of simply-supported, uniformly-loaded rectangular plates
NASA Technical Reports Server (NTRS)
Moore, D.
1979-01-01
Design curves for the non-linear analysis of simply-supported rectangular plates subjected to uniform normal pressure loads have been developed. These curves yield the center deflection, center stress and corner stress in non-dimensionalized form plotted against a dimensionless parameter describing the load intensity. The results presented are based on extensive non-linear finite element analysis employing the ARGUS structural analysis program. Plates with length to width ratios of 1, 1.5, 2, 3 and 4 are included. The load range considered extends to 1000 times the load at which the behavior of the plate becomes significantly non-linear. Over the load range considered, the analysis shows that the ratio of center deflection to plate thickness for a square plate is less than 16 to 1, whereas linear theory would predict a center deflection 400 times the plate thickness. Likewise, the stress is markedly lower than would be predicted by linear theory. The present results are shown to be in excellent agreement with the classical linear theory up to a central deflection to plate thickness ratio of about one-half. In the non-linear regime the present results for deflection and stress are in very good agreement with the analytical and experimental work of other investigators.
INFLUENCE OF IRON CHELATION ON R1 AND R2 CALIBRATION CURVES IN GERBIL LIVER AND HEART
Wood, John C.; Aguilar, Michelle; Otto-Duessel, Maya; Nick, Hanspeter; Nelson, Marvin D.; Moats, Rex
2008-01-01
MRI is gaining increasing importance for the noninvasive quantification of organ iron burden. Since transverse relaxation rates depend on iron distribution as well as iron concentration, physiologic and pharmacologic processes that alter iron distribution could change MRI calibration curves. This paper compares the effect of three iron chelators, deferoxamine, deferiprone, and deferasirox, on R1 and R2 calibration curves according to two iron loading and chelation strategies. 33 Mongolian gerbils underwent iron loading (iron dextran 500 mg/kg/wk) for 3 weeks followed by 4 weeks of chelation. An additional 56 animals received less aggressive loading (200 mg/kg/week) for 10 weeks, followed by 12 weeks of chelation. R1 and R2 calibration curves were compared to results from 23 iron-loaded animals that had not received chelation. Acute iron loading and chelation biased R1 and R2 from the unchelated reference calibration curves, but chelator-specific changes were not observed, suggesting physiologic rather than pharmacologic differences in iron distribution. Long-term deferiprone treatment increased liver R1 by 50% (p<0.01), while long-term deferasirox lowered liver R2 by 30.9% (p<0.0001). The relationship between R1 and R2 and organ iron concentration may depend upon the acuity of iron loading and unloading as well as the iron chelator administered. PMID:18581418
Tripathy, S P; Sahoo, G S; Paul, S; Kumar, P; Sharma, S D; Santra, S; Pal, A; Kundu, A; Bandyopadhyay, T; Avasthi, D K
2017-06-01
Microwave induced chemical etching (MICE) has been established as a faster and improved technique compared to other contemporary etching techniques for the development of tracks in a CR-39 detector. However, the methodology could not be applied for LET (linear energy transfer) spectrometry due to lack of a calibration curve using this method. For this purpose, a new LET calibration curve in the range of 12 keV/μm-799 keV/μm was generated considering different ions such as H, Li, C, O, and F on CR-39 having different LETs in water. An empirical relation was established from the obtained calibration curve for determining the value of LET (in water) from the value of V, the ratio of track etch rate to bulk etch rate. For application of this calibration curve in neutron dosimetry, CR-39 detectors were irradiated to neutrons generated from 120 and 142 MeV (16)O+(27)Al systems followed by a similar MICE procedure. The absorbed dose (DLET) and the dose equivalent (HLET) were obtained from the LET spectra and were found to be 13% and 10% higher for 142 MeV (16)O+(27)Al system than those for 120 MeV (16)O+(27)Al system, respectively. The outcome of the study demonstrates the possibility of using the MICE technique for neutron dose estimation by CR-39 via LET spectrometry.
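The abstract's empirical LET(V) relation is not reproduced there, so as a minimal sketch, a power-law form LET = a·(V − 1)^b can be fitted to (V, LET) calibration points by linear least squares in log-log space. The functional form, coefficients, and data points below are all hypothetical illustrations, not the published calibration.

```python
import math

def fit_power_law(v_vals, let_vals):
    """Least-squares fit of LET = a * (V - 1)**b in log-log space.
    The functional form is an assumption for illustration only."""
    xs = [math.log(v - 1.0) for v in v_vals]
    ys = [math.log(l) for l in let_vals]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    ln_a = (sy - b * sx) / n
    return math.exp(ln_a), b

# Synthetic calibration points generated from LET = 20 * (V - 1)**1.5 (hypothetical)
v = [1.5, 2.0, 3.0, 5.0]
let = [20.0 * (vi - 1.0) ** 1.5 for vi in v]
a, b = fit_power_law(v, let)
```

Since the synthetic points lie exactly on the assumed curve, the fit recovers the generating parameters, which is a useful sanity check before applying the same machinery to measured (V, LET) data.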
Zhu, Feipeng; Shi, Hongjian; Bai, Pengxiang; Lei, Dong; He, Xiaoyuan
2013-11-10
A mathematical description of the absolute surface height distribution in generalized fringe projection profilometry under large measuring depth range is presented. Based on least-squares polynomial fitting, a nonlinear calibration to determine the mapping between phase change and surface height is proposed by considering the unequal height arrangement of the projector and the camera. To solve surface height from phase change, an iteration method is brought forward. Experiments are implemented to demonstrate the validity of the proposed calibration and an accuracy of 0.3 mm for surface profile under 300 mm measuring depth can be achieved.
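The iteration for solving surface height from phase change can be illustrated with a Newton scheme applied to a polynomial phase-height mapping φ = P(h) obtained from least-squares fitting. The cubic coefficients below are hypothetical, not the paper's calibration.

```python
def poly_eval(coeffs, x):
    # Horner evaluation; coeffs ordered highest degree first
    r = 0.0
    for c in coeffs:
        r = r * x + c
    return r

def poly_deriv(coeffs):
    n = len(coeffs) - 1
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

def height_from_phase(phase, coeffs, h0=0.0, tol=1e-10, max_iter=50):
    """Newton iteration solving P(h) = phase for the height h."""
    d = poly_deriv(coeffs)
    h = h0
    for _ in range(max_iter):
        f = poly_eval(coeffs, h) - phase
        h_new = h - f / poly_eval(d, h)
        if abs(h_new - h) < tol:
            return h_new
        h = h_new
    return h

# Hypothetical cubic mapping: phase = 0.001*h**3 - 0.02*h**2 + 1.2*h
coeffs = [0.001, -0.02, 1.2, 0.0]
h = height_from_phase(11.0, coeffs)  # phase value corresponding to h = 10
```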
Liu, Song; Su, Bo-min; Li, Qing-hui; Gan, Fu-xi
2015-01-01
The authors tried to find a method for quantitative analysis using pXRF without solid bulk stone/jade reference samples. Twenty-four nephrite samples were selected: 17 were calibration samples and the other 7 were test samples. All the nephrite samples were analyzed quantitatively by proton-induced X-ray emission spectroscopy (PIXE). Based on the PIXE results of the calibration samples, calibration curves were created for the components/elements of interest and used to analyze the test samples quantitatively; then, qualitative spectra of all nephrite samples were obtained by pXRF. According to the PIXE results and qualitative spectra of the calibration samples, the partial least squares (PLS) method was used for quantitative analysis of the test samples. Finally, the results for the test samples obtained by the calibration curve method, the PLS method, and PIXE were compared to each other. The accuracy of the calibration curve method and the PLS method was estimated. The results indicate that the PLS method is a viable alternative for quantitative analysis of stone/jade samples.
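The calibration curve method described above can be sketched as a simple least-squares line relating pXRF response to PIXE-determined concentration for one element; the intensities and concentrations below are hypothetical illustrations, not data from the study.

```python
def linear_calibration(intensities, concentrations):
    """Ordinary least-squares calibration line: conc = m * intensity + c."""
    n = len(intensities)
    sx, sy = sum(intensities), sum(concentrations)
    sxx = sum(x * x for x in intensities)
    sxy = sum(x * y for x, y in zip(intensities, concentrations))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c

# Hypothetical pXRF intensities vs PIXE concentrations (wt%) for one element
intensity = [120.0, 240.0, 360.0, 480.0]
conc = [1.0, 2.1, 2.9, 4.0]
m, c = linear_calibration(intensity, conc)

def predict(i):
    """Predict concentration of a test sample from its pXRF intensity."""
    return m * i + c
```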
Improved calibration of the nonlinear regime of a single-beam gradient optical trap.
Wilcox, Jamianne C; Lopez, Benjamin J; Campàs, Otger; Valentine, Megan T
2016-05-15
We report an improved method for calibrating the nonlinear region of a single-beam gradient optical trap. Through analysis of the position fluctuations of a trapped object that is displaced from the trap center by controlled flow, we measure the local trap stiffness in both the linear and nonlinear regimes without knowledge of the magnitude of the applied external forces. This approach requires only knowledge of the system temperature, and it is especially useful for measurements involving trapped objects of unknown size, or objects in a fluid of unknown viscosity.
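A minimal sketch of the temperature-only calibration idea: for a harmonically trapped bead, equipartition gives k = k_B·T / ⟨x²⟩ from position fluctuations alone. The synthetic data below are illustrative; the paper's contribution extends this local-stiffness idea into the nonlinear regime.

```python
import statistics

K_B = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness(positions_m, temperature_k):
    """Equipartition estimate of trap stiffness: k = k_B * T / var(x).
    Requires only the system temperature, as noted in the abstract."""
    var = statistics.pvariance(positions_m)
    return K_B * temperature_k / var

# Synthetic position fluctuations (meters) about the mean position
x = [0.0, 10e-9, -10e-9, 20e-9, -20e-9]
k = trap_stiffness(x, 300.0)  # stiffness in N/m
```

With these synthetic fluctuations the variance is 2×10⁻¹⁶ m², giving a stiffness of roughly 2×10⁻⁵ N/m, a typical order of magnitude for soft optical traps.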
Tracing Analytic Ray Curves for Light and Sound Propagation in Non-Linear Media.
Mo, Qi; Yeh, Hengchin; Manocha, Dinesh
2016-11-01
The physical world consists of spatially varying media, such as the atmosphere and the ocean, in which light and sound propagate along non-linear trajectories. This presents a challenge to existing ray-tracing based methods, which are widely adopted to simulate propagation due to their efficiency and flexibility, but which assume linear rays. We present a novel algorithm that traces analytic ray curves computed from local media gradients, and utilizes closed-form solutions for both the intersections of the ray curves with planar surfaces and the travel distance. By constructing an adaptive unstructured mesh, our algorithm is able to model general media profiles that vary in three dimensions with complex boundaries consisting of terrains and other scene objects such as buildings. Our analytic ray curve tracer with the adaptive mesh improves efficiency considerably over prior methods. We highlight the algorithm's application to the simulation of visual and sound propagation in outdoor scenes.
Pajic, J; Rakic, B; Jovicic, D; Milovanovic, A
2014-10-01
Biological dosimetry using chromosome damage biomarkers is a valuable dose assessment method in cases of radiation overexposure with or without physical dosimetry data. In order to estimate dose by biodosimetry, any biological dosimetry service has to have its own dose-response calibration curve. This paper presents the results obtained after irradiation of blood samples from fourteen healthy male and female volunteers in order to establish biodosimetry in Serbia and produce dose-response calibration curves for dicentrics and micronuclei. Taking into account pooled data from all the donors, the resultant fitted curve for dicentrics is: Ydic = 0.0009 (±0.0003) + 0.0421 (±0.0042)×D + 0.0602 (±0.0022)×D²; and for micronuclei: Ymn = 0.0104 (±0.0015) + 0.0824 (±0.0050)×D + 0.0189 (±0.0017)×D². Following establishment of the dose-response curves, a validation experiment was carried out with four blood samples. Applied and estimated doses were in good agreement. On this basis, the results reported here give us confidence to apply both calibration curves for future biological dosimetry requirements in Serbia. Copyright © 2014 Elsevier B.V. All rights reserved.
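The fitted linear-quadratic curves quoted in the abstract can be evaluated directly; a sketch in code, using the central coefficient values from the abstract (uncertainties omitted):

```python
def dicentric_yield(dose_gy):
    """Fitted dicentric curve from the abstract: Y = c + alpha*D + beta*D**2."""
    return 0.0009 + 0.0421 * dose_gy + 0.0602 * dose_gy ** 2

def micronucleus_yield(dose_gy):
    """Fitted micronucleus curve from the abstract."""
    return 0.0104 + 0.0824 * dose_gy + 0.0189 * dose_gy ** 2

# Expected yields per cell at 1 Gy and 2 Gy
y_dic_1gy = dicentric_yield(1.0)
y_mn_2gy = micronucleus_yield(2.0)
```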
Scanning Electron Microscope Calibration Using a Multi-Image Non-Linear Minimization Process
NASA Astrophysics Data System (ADS)
Cui, Le; Marchand, Éric
2015-04-01
A scanning electron microscope (SEM) calibration approach based on a non-linear minimization procedure is presented in this article. A part of this article has been published in IEEE International Conference on Robotics and Automation (ICRA), 2014. Both the intrinsic and extrinsic parameter estimations are achieved simultaneously by minimizing the registration error. The proposed approach considers multiple images of a multi-scale calibration pattern viewed from different positions and orientations. Since the projection geometry of the scanning electron microscope is different from that of a classical optical sensor, the perspective projection model and the parallel projection model are considered and compared with distortion models. Experiments are realized by varying the position and the orientation of a multi-scale chessboard calibration pattern at magnifications from 300× to 10,000×. The experimental results show the efficiency and the accuracy of this approach.
Application of non-linear automatic optimization techniques for calibration of HSPF.
Iskra, Igor; Droste, Ronald
2007-06-01
Development of TMDLs (total maximum daily loads) is often facilitated by using the software system BASINS (Better Assessment Science Integrating point and Nonpoint Sources). One of the key elements of BASINS is the watershed model HSPF (Hydrological Simulation Program Fortran) developed by USEPA. Calibration of HSPF is a very tedious and time-consuming task; more than 100 parameters are involved in the calibration process. In the current research, three non-linear automatic optimization techniques are applied and compared, and an efficient way to calibrate HSPF is suggested. Parameter optimization using local and global optimization techniques for the watershed model is discussed. Approaches to automatic calibration of HSPF using the nonlinear parameter estimator PEST (Parameter Estimation Tool) with its Gauss-Marquardt-Levenberg (GML) method, the Random multiple Search Method (RSM), and the Shuffled Complex Evolution method developed at the University of Arizona (SCE-UA) are presented. Sensitivity analysis was conducted and the most and least sensitive parameters were identified. It was noted that sensitivity depends on the number of adjustable parameters: as more parameters were optimized simultaneously, a wider range of parameter values could maintain the model in the calibrated state. The impact of GML, RSM, and SCE-UA variables on the ability to find the global minimum of the objective function (OF) was studied and the best variables are suggested. All three methods proved to be more efficient than manual HSPF calibration. Optimization results obtained by these methods are very similar, although in most cases RSM outperforms GML and SCE-UA outperforms RSM. GML is a very fast method; it can perform as well as SCE-UA when the variables are properly adjusted, the initial guess is good, and insensitive parameters are eliminated from the optimization process. SCE-UA is very robust and convenient to use. Logical definition of key variables in most cases leads to the global minimum.
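A toy sketch of the random-search idea, in the spirit of RSM but not the actual PEST/RSM implementation: sample parameter vectors uniformly within bounds and keep the best objective value. The objective function and bounds below are illustrative stand-ins for an HSPF calibration objective.

```python
import random

def random_search(objective, bounds, n_iter=2000, seed=1):
    """Minimal random-search minimizer: uniform sampling within bounds,
    keeping the best parameter set found. A toy illustration only."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_iter):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Toy objective with known minimum at (1, 2)
def obj(p):
    return (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2

x_best, f_best = random_search(obj, [(-5.0, 5.0), (-5.0, 5.0)])
```

Global samplers like this are robust but sample-hungry; gradient-based steps (as in GML) converge far faster near a good initial guess, which mirrors the trade-offs reported in the abstract.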
Influence of nonlinearity on transition curves in a parametric pendulum system
NASA Astrophysics Data System (ADS)
Zhen, Bin; Xu, Jian; Song, Zigen
2017-01-01
In this paper, transition curves and periodic solutions of a parametric pendulum system are calculated analytically by employing the energy method. In previous studies this problem was usually dealt with by using the asymptotic method, which is limited to small parameters. In our research, the small-parameter hypothesis is not necessary, and some different conclusions are obtained on the impact of nonlinearity in the pendulum system on the transition curves in the parametric plane. The results based on the asymptotic method suggested that nonlinearity in the pendulum system significantly decreases the area of the stable regions in the parametric plane only when the angular displacement of the pendulum is not very small. However, our analysis according to the energy method shows that nonlinearity does not significantly change the area of the stable regions in the parametric plane, but notably alters the positions of the stable regions. Furthermore, the position of the stable regions is to a large extent related to the amplitude of the periodic vibrations of the pendulum, especially when the angular displacement of the pendulum is large enough. Our results are very different from those reported in previous studies and have been verified by numerical simulations.
Calibration of the nonlinear ring model at the Diamond Light Source
NASA Astrophysics Data System (ADS)
Bartolini, R.; Martin, I. P. S.; Rehm, G.; Schmidt, F.
2011-05-01
Nonlinear beam dynamics plays a crucial role in defining the performance of a storage ring. The beam lifetime, the injection efficiency, and the dynamic and momentum apertures available to the beam are optimized during the design phase by a proper optimization of the linear lattice and of the distribution of sextupole families. The correct implementation of the design model, especially the nonlinear part, is a nontrivial accelerator physics task. Several parameters of the nonlinear dynamics can be used to compare the real machine with the model and eventually to correct the accelerator. Most of these parameters are extracted from the analysis of turn-by-turn data after the excitation of betatron oscillations of the particles in the ring. We present the experimental results of the campaign of measurements carried out at the Diamond storage ring to characterize the nonlinear beam dynamics. A combination of frequency map analysis with the detuning with momentum measurements has allowed for a precise calibration of the nonlinear model that can accurately reproduce the nonlinear beam dynamics in Diamond.
Nonlinear I-V Curve at a Quantum Impurity Quantum Critical Point
NASA Astrophysics Data System (ADS)
Baranger, Harold; Chung, Chung-Hou; Lin, Chao-Yun; Zhang, Gu; Ke, Chung-Ting; Finkelstein, Gleb
The nonlinear I-V curve at an interacting quantum critical point (QCP) is typically out of reach theoretically. Here, however, we provide a striking example of an analytical calculation of the full nonlinear I-V curve at the QCP. The system that we consider is a quantum dot coupled to resistive leads - a spinless resonant level interacting with an ohmic EM environment in which a QCP similar to the two-channel Kondo QCP occurs. Recent experiments studied this criticality via transport measurements: the transmission approaches unity at low temperature and applied bias when tuned exactly to the QCP (on resonance and symmetric tunnel barriers) and approaches zero in all other cases. To obtain the current at finite temperature and arbitrary bias, we write the problem as a one-dimensional field theory and transform from electrons in the left/right leads to right-going and left-going channels between which there is weak two-body backscattering. Drawing on dynamical Coulomb blockade theory, we thus obtain an analytical expression for the full I-V curve. The agreement with the experimental result is remarkable.
Vazquez-Leal, H.; Jimenez-Fernandez, V. M.; Benhammouda, B.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Ramirez-Pinero, A.; Marin-Hernandez, A.; Huerta-Chua, J.
2014-01-01
We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled by using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves meant to replace the multidimensional interpolation and fine tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight line equation. PMID:25184157
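The "parametric straight line equation" replacement for the interpolation and fine-tuning stages can be sketched as follows: on a piecewise-linear homotopy path, once two consecutive trajectory points bracket the homotopy parameter λ = 1, the operating point follows by exact linear interpolation. The scalar example below is a hypothetical illustration, not one of the paper's circuits.

```python
def interpolate_solution(p0, p1):
    """Given two consecutive PWL-path points (lambda, x) bracketing
    lambda = 1, locate the operating point by the parametric
    straight-line equation; no iterative fine tuning is needed."""
    (l0, x0), (l1, x1) = p0, p1
    t = (1.0 - l0) / (l1 - l0)  # fraction of the segment to lambda = 1
    return x0 + t * (x1 - x0)

# Hypothetical path segment crossing lambda = 1
x_op = interpolate_solution((0.8, 2.0), (1.2, 4.0))
```

Because homotopy trajectories of PWL circuits are themselves piecewise-linear, this interpolation is exact on each segment, which is the observation the abstract exploits.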
Digiuni, Simona; Berne-Dedieu, Annik; Martinez-Torres, Cristina; Szecsi, Judit; Bendahmane, Mohammed; Arneodo, Alain; Argoul, Françoise
2015-05-05
Individual plant cells are rather complex mechanical objects. Despite the fact that their wall mechanical strength may be weakened in comparison with their original tissue template, they nevertheless retain some generic properties of the mother tissue, namely the viscoelasticity and the shape of their walls, which are driven by their internal hydrostatic turgor pressure. This viscoelastic behavior, which affects the power-law response of these cells when indented by an atomic force cantilever with a pyramidal tip, is also very sensitive to the culture media. We develop here what is, to our knowledge, an original analysis method, based on a multiscale decomposition of force-indentation curves, that reveals and quantifies for the first time the nonlinearity of the mechanical response of living single plant cells upon mechanical deformation. Further comparing the nonlinear strain responses of these isolated cells in three different media, we reveal an alteration of their linear bending elastic regime in both hyper- and hypotonic conditions.
Nonlinear Radiative Heat Transfer in Blasius and Sakiadis Flows Over a Curved Surface
NASA Astrophysics Data System (ADS)
Naveed, M.; Abbas, Z.; Sajid, M.
2017-01-01
This study investigates the heat transfer characteristics for Blasius and Sakiadis flows over a curved surface coiled in a circle of radius R having constant curvature. Effects of thermal radiation are also analyzed for nonlinear Rosseland approximation which is valid for all values of the temperature difference between the fluid and the surface. The considered physical situation is represented by a mathematical model using curvilinear coordinates. Similar solutions of the developed partial differential equations are evaluated numerically using a shooting algorithm. Fluid velocity, skin-friction coefficient, temperature and local Nusselt number are the quantities of interest interpreted for the influence of pertinent parameters. A comparison of the present and the published data for a flat surface validates the obtained numerical solution for the curved geometry.
NASA Astrophysics Data System (ADS)
Guo, Kongming; Jiang, Jun; Xu, Yalan
2016-09-01
In this paper, a simple but accurate semi-analytical method to approximate the probability density function of stochastic closed-curve attractors is proposed. The expression of the distribution applies to systems with strong nonlinearities, while only a weak-noise condition is needed. With the understanding that additive noise does not change the longitudinal distribution of the attractors, the high-dimensional probability density distribution is decomposed into two low-dimensional distributions: the longitudinal and the transverse probability density distributions. The longitudinal distribution can be calculated from the deterministic system, while the probability density in the transverse direction of the curve can be approximated by the stochastic sensitivity function method. The effectiveness of this approach is verified by comparing the expression of the distribution with the results of Monte Carlo numerical simulations in several planar systems.
Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting
NASA Technical Reports Server (NTRS)
Badavi, F. F.; Everhart, Joel L.
1987-01-01
This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine that is based on P. R. Bevington's description of the quadratic expansion of the Chi-Square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of Chi-Square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived, then solved by matrix algebra. A brief description of the code is presented along with a limited number of changes that are required to customize the program for a particular task. To evaluate the performance of the method and the goodness of nonlinear curve fitting, two typical engineering problems are examined and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.
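The core step, deriving simultaneous linear equations from the quadratic expansion of Chi-Square and solving them by matrix algebra, can be sketched for a two-parameter model. This is a generic Bevington-style normal-equations step, not the report's own code.

```python
def chi_square_step(params, xs, ys, sigmas, model, jac):
    """One parameter update from the quadratic expansion of chi-square:
    solve (J^T W J) delta = J^T W r, with W = diag(1/sigma**2),
    for a two-parameter model (the 2x2 system is solved explicitly)."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for x, y, s in zip(xs, ys, sigmas):
        w = 1.0 / (s * s)
        r = y - model(params, x)       # residual
        j1, j2 = jac(params, x)        # partial derivatives of the model
        a11 += w * j1 * j1
        a12 += w * j1 * j2
        a22 += w * j2 * j2
        b1 += w * j1 * r
        b2 += w * j2 * r
    det = a11 * a22 - a12 * a12
    d1 = (a22 * b1 - a12 * b2) / det
    d2 = (a11 * b2 - a12 * b1) / det
    return [params[0] + d1, params[1] + d2]

# For a model linear in its parameters, one step recovers the exact fit
model = lambda p, x: p[0] + p[1] * x
jac = lambda p, x: (1.0, x)
fit = chi_square_step([0.0, 0.0], [0, 1, 2], [1.0, 3.0, 5.0], [1.0, 1.0, 1.0], model, jac)
```

For genuinely nonlinear models the same step is iterated until the chi-square change falls below a tolerance, which is the structure of the routine the report describes.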
Faggiano, Serena; Ronda, Luca; Bruno, Stefano; Jankevics, Hanna; Mozzarelli, Andrea
2010-06-15
Dynamic light scattering (DLS) is a technique capable of determining the hydrodynamic radius of proteins. From this parameter, a molecular weight can be assessed provided that an appropriate calibration curve is available. To this goal, a globin-based calibration curve was used to determine the polymerization state of a recombinant hemoglobin-based oxygen carrier and to assess the equivalent molecular weight of hemoglobins conjugated with polyethylene glycol molecules. The good agreement between DLS values and those obtained from gel filtration chromatography is a consequence of the high similarity in structure, shape, and density within the globin superfamily. Moreover, globins and heme proteins in general share similar spectroscopic properties, thereby reducing possible systematic errors associated with the absorption of the probe radiation by the chromophore.
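DLS obtains the hydrodynamic radius from the measured diffusion coefficient via the Stokes-Einstein relation; a minimal sketch follows. The diffusion coefficient below is a hypothetical hemoglobin-like value, and the globin-based molecular-weight calibration curve itself is not reproduced.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius(d_m2_s, temperature_k, viscosity_pa_s):
    """Stokes-Einstein relation used by DLS: R_h = k_B*T / (6*pi*eta*D)."""
    return K_B * temperature_k / (6.0 * math.pi * viscosity_pa_s * d_m2_s)

# Hypothetical diffusion coefficient for a hemoglobin-sized protein in water
rh = hydrodynamic_radius(6.9e-11, 293.15, 1.0e-3)  # radius in meters
```

A calibration curve then maps R_h (here about 3.1 nm) to an equivalent molecular weight; the paper's point is that this mapping is reliable within the structurally similar globin family.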
NASA Astrophysics Data System (ADS)
Zuev, Vladimir V.; Gerasimov, Vladislav V.; Pravdin, Vladimir L.; Pavlinskiy, Aleksei V.; Nakhtigalova, Daria P.
2017-01-01
Among lidar techniques, the pure rotational Raman (PRR) technique is the best suited for tropospheric and lower stratospheric temperature measurements. Calibration functions are required for the PRR technique to retrieve temperature profiles from lidar remote sensing data. Both the temperature retrieval accuracy and the number of calibration coefficients depend on the selected function. The commonly used calibration function (linear in reciprocal temperature 1/T with two calibration coefficients) ignores all types of broadening of individual PRR lines of atmospheric N2 and O2 molecules. However, the collisional (pressure) broadening dominates over other types of broadening of PRR lines in the troposphere and can differently affect the accuracy of tropospheric temperature measurements depending on the PRR lidar system. We recently derived the calibration function in the general analytical form that takes into account the collisional broadening of all N2 and O2 PRR lines (Gerasimov and Zuev, 2016). This general calibration function represents an infinite series and, therefore, cannot be directly used in the temperature retrieval algorithm. For this reason, its four simplest special cases (calibration functions nonlinear in 1/T with three calibration coefficients), two of which have not been suggested before, were considered and analyzed. All the special cases take the collisional PRR line broadening into account in varying degrees, and the best function among them was determined via simulation. In this paper, we use the special cases to retrieve tropospheric temperature from real PRR lidar data. The calibration function best suited for tropospheric temperature retrievals is determined from the comparative analysis of temperature uncertainties yielded by using these functions. The absolute and relative statistical uncertainties of temperature retrieval are given in an analytical form assuming Poisson statistics of photon counting. The vertical tropospheric temperature
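As a sketch of how a three-coefficient calibration function nonlinear in 1/T is inverted for temperature: assume the generic form ln Q = a + b/T + c/T², which is quadratic in x = 1/T and so invertible in closed form. The coefficients below are hypothetical, and this generic form is not necessarily one of the four special cases analyzed in the paper.

```python
import math

def retrieve_temperature(q_ratio, a, b, c):
    """Solve ln Q = a + b/T + c/T**2 for T, a generic three-coefficient
    calibration function nonlinear in 1/T (coefficients hypothetical).
    Takes the quadratic root in x = 1/T giving a positive temperature."""
    rhs = math.log(q_ratio) - a
    # Quadratic in x = 1/T:  c*x**2 + b*x - rhs = 0
    disc = b * b + 4.0 * c * rhs
    x = (-b + math.sqrt(disc)) / (2.0 * c)
    return 1.0 / x

# Round trip: the signal ratio generated at T = 250 K is inverted back
t = retrieve_temperature(math.exp(2.28), 1.0, 300.0, 5000.0)
```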
Rastkhah, E; Zakeri, F; Ghoranneviss, M; Rajabpour, M R; Farshidpour, M R; Mianji, F; Bayat, M
2016-03-01
An in vitro study of the dose responses of human peripheral blood lymphocytes was conducted with the aim of creating calibrated dose-response curves for biodosimetry measuring up to 4 Gy (0.25-4 Gy) of gamma radiation. The cytokinesis-blocked micronucleus (CBMN) assay was employed to obtain the frequencies of micronuclei (MN) per binucleated cell in blood samples from 16 healthy donors (eight males and eight females) in two age ranges of 20-34 and 35-50 years. The data were used to construct the calibration curves for men and women in two age groups, separately. An increase in micronuclei yield with the dose in a linear-quadratic way was observed in all groups. To verify the applicability of the constructed calibration curve, MN yields were measured in peripheral blood lymphocytes of two real overexposed subjects and three irradiated samples with unknown dose, and the results were compared with dose values obtained from measuring dicentric chromosomes. The comparison of the results obtained by the two techniques indicated a good agreement between dose estimates. The average baseline frequency of MN for the 130 healthy non-exposed donors (77 men and 55 women, 20-60 years old divided into four age groups) ranged from 6 to 21 micronuclei per 1000 binucleated cells. Baseline MN frequencies were higher for women and for the older age group. The results presented in this study point out that the CBMN assay is a reliable, easier and valuable alternative method for biological dosimetry.
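Dose estimation from a measured MN yield amounts to inverting the linear-quadratic response Y = c + αD + βD² for its positive root; a sketch with illustrative coefficients, not the study's age- and sex-specific curves:

```python
import math

def dose_from_yield(y, c, alpha, beta):
    """Invert Y = c + alpha*D + beta*D**2 for the absorbed dose D
    (positive quadratic root). Coefficients are illustrative only."""
    disc = alpha * alpha - 4.0 * beta * (c - y)
    return (-alpha + math.sqrt(disc)) / (2.0 * beta)

# Round-trip check with illustrative coefficients
c, alpha, beta = 0.010, 0.082, 0.019
d_true = 2.0
y = c + alpha * d_true + beta * d_true ** 2
d_est = dose_from_yield(y, c, alpha, beta)
```

In practice the yield Y is the observed micronuclei per binucleated cell, and uncertainty in Y propagates into a confidence interval on the estimated dose.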
Nonlinear Analysis and Post-Test Correlation for a Curved PRSEUS Panel
NASA Technical Reports Server (NTRS)
Gould, Kevin; Lovejoy, Andrew E.; Jegley, Dawn; Neal, Albert L.; Linton, Kim A.; Bergan, Andrew C.; Bakuckas, John G., Jr.
2013-01-01
The Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) concept, developed by The Boeing Company, has been extensively studied as part of the National Aeronautics and Space Administration's (NASA's) Environmentally Responsible Aviation (ERA) Program. The PRSEUS concept provides a light-weight alternative to aluminum or traditional composite design concepts and is applicable to traditional-shaped fuselage barrels and wings, as well as advanced configurations such as a hybrid wing body or truss-braced wings. Therefore, NASA, the Federal Aviation Administration (FAA) and The Boeing Company partnered in an effort to assess the performance and damage-arrestment capabilities of a PRSEUS concept panel using a full-scale curved panel in the FAA Full-Scale Aircraft Structural Test Evaluation and Research (FASTER) facility. Testing was conducted in the FASTER facility by subjecting the panel to axial tension loads applied to the ends of the panel, internal pressure, and combined axial tension and internal pressure loadings. Additionally, reactive hoop loads were applied to the skin and frames of the panel along its edges. The panel successfully supported the required design loads in the pristine condition and with a severed stiffener. The panel also demonstrated that the PRSEUS concept could arrest the progression of damage, including crack arrestment and crack turning. This paper presents the nonlinear post-test analysis and correlation with test results for the curved PRSEUS panel. It is shown that nonlinear analysis can accurately calculate the behavior of a PRSEUS panel under tension, pressure and combined loading conditions.
Nonlinear and Buckling Behavior of Curved Panels Subjected to Combined Loads
NASA Technical Reports Server (NTRS)
Hilburger, Mark W.; Nemeth, Michael P.; Starnes, James H., Jr.
2001-01-01
The results of an analytical study of the nonlinear and buckling response characteristics of curved panels subjected to combined loads are presented. Aluminum and laminated composite panels are considered in the study, in both flat and shallow curved configurations. The panels are subjected to combined axial compression and transverse tension or compression loads, or combined axial compression and inplane shear loads. Results illustrating the effects of various combined load states on the buckling response of the panels are presented. In addition, results illustrating the effects of laminate orthotropy and anisotropy and panel curvature on the panel response are presented. The results indicate that panel curvature can have a significant effect on the nonlinear and buckling behavior of the panels subjected to combined loads. Results are included that show that geometrically perfect panels do not exhibit bifurcation points for some combined loads. Results are also presented that show the effects of laminate orthotropy and anisotropy on the interaction of combined loads.
Eshraghi, Iman; Jalali, Seyed K.; Pugno, Nicola Maria
2016-01-01
Imperfection sensitivity of large amplitude vibration of curved single-walled carbon nanotubes (SWCNTs) is considered in this study. The SWCNT is modeled as a Timoshenko nano-beam and its curved shape is included as an initial geometric imperfection term in the displacement field. Geometric nonlinearities of von Kármán type and nonlocal elasticity theory of Eringen are employed to derive governing equations of motion. Spatial discretization of governing equations and associated boundary conditions is performed using differential quadrature (DQ) method and the corresponding nonlinear eigenvalue problem is iteratively solved. Effects of amplitude and location of the geometric imperfection, and the nonlocal small-scale parameter on the nonlinear frequency for various boundary conditions are investigated. The results show that the geometric imperfection and non-locality play a significant role in the nonlinear vibration characteristics of curved SWCNTs. PMID:28773911
NASA Astrophysics Data System (ADS)
Kántor, Tibor; de Loos-Vollebregt, Margaretha T. C.
2005-03-01
Carbon tetrachloride vapor as a gaseous phase modifier in a graphite furnace electrothermal vaporizer (GFETV) converts heavy volatile analyte forms to volatile and medium volatile chlorides and produces an aerosol carrier effect, the latter being a less generally recognized benefit. However, the possible increase of polyatomic interferences in inductively coupled plasma mass spectrometry (GFETV-ICP-MS) by chlorine- and carbon-containing species due to CCl4 vapor introduction has been discouraging with the use of low-resolution, quadrupole-type MS equipment. Being aware of this possible handicap, we aimed to investigate the feasibility of using this halogenating agent in ICP-MS with regard to possible hazards to the instrument, and also to explore the advantages under these specific conditions. With a sample gas flow (inner gas flow) rate not higher than 900 ml min⁻¹ Ar in the torch and a 3 ml min⁻¹ CCl4 vapor flow rate in the furnace, the long-term stability of the instrument was ensured and the following benefits of the halocarbon were observed. The non-linearity error (defined in the text) of the calibration curves (signal versus mass functions) with matrix-free solution standards was 30-70% without, and 1-5% with, CCl4 vapor introduction, respectively, at 1 ng mass of the Cu, Fe, Mn and Pb analytes. The sensitivity for these elements increased 2-4-fold with chlorination, while the relative standard deviation (RSD) was essentially the same (2-5%) for the two cases in comparison. A vaporization temperature of 2650 °C was required for Cr in an Ar atmosphere, while 2200 °C was sufficient in an Ar + CCl4 atmosphere to attain complete vaporization. Improvements in linear response and sensitivity were the highest for this least volatile element. The pyrolytic graphite layer inside the graphite tube was protected by the halocarbon, and tube lifetime was further increased by using traces of hydrocarbon vapor in the external sheath gas of the graphite furnace. Details
Anderson, Marti J; Millar, Russell B; Blom, Wilma M; Diebel, Carol E
2005-12-01
von Bertalanffy curves were used to describe the nonlinear relationship between assemblages inhabiting holdfasts of the kelp Ecklonia radiata and the volume of the holdfast. This was done using nonlinear canonical analyses of principal coordinates (NCAP). The volume of the holdfast is a proxy for the age of the plant and, thus, the canonical axis is a proxy for succession in the marine invertebrate community inhabiting the holdfast. Analyses were done at several different taxonomic resolutions on the basis of various dissimilarity measures. Assemblages in relatively large holdfasts demonstrated ongoing variation in community structure with increasing volume when the dissimilarity used was independent of sample size. Smaller holdfasts had proportionately greater abundances of ophiuroids and encrusting organisms (bryozoans, sponges, ascidians), while larger holdfasts were characterised by proportionately greater abundances of crustaceans, polychaetes and molluscs. Such linear and nonlinear multivariate models may be applied to analyse system-level responses to the growth of many habitat-forming organisms, such as sponges, coral reefs, coralline algal turf or forest canopies.
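A von Bertalanffy curve of the kind used for the canonical axis can be fitted by ordinary nonlinear least squares. The sketch below uses synthetic "score versus holdfast volume" data (illustrative values, not the study's NCAP analysis or its holdfast measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(v, y_inf, k, v0):
    # y rises toward the asymptote y_inf as holdfast volume v increases
    return y_inf * (1.0 - np.exp(-k * (v - v0)))

# synthetic canonical-score vs. volume data (illustrative only)
rng = np.random.default_rng(0)
vol = np.linspace(0.1, 10.0, 40)
score = von_bertalanffy(vol, 2.0, 0.5, 0.0) + rng.normal(0.0, 0.05, vol.size)

(y_inf, k, v0), _ = curve_fit(von_bertalanffy, vol, score, p0=[1.0, 1.0, 0.0])
```

The fitted asymptote `y_inf` then plays the role of the community state approached by assemblages in the largest holdfasts.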
Finsterle, S.; Kowalsky, M.B.
2010-10-15
We propose a modification to the Levenberg-Marquardt minimization algorithm for a more robust and more efficient calibration of highly parameterized, strongly nonlinear models of multiphase flow through porous media. The new method combines the advantages of truncated singular value decomposition with those of the classical Levenberg-Marquardt algorithm, thus enabling a more robust solution of underdetermined inverse problems with complex relations between the parameters to be estimated and the observable state variables used for calibration. The truncation limit separating the solution space from the calibration null space is re-evaluated during the iterative calibration process. In between these re-evaluations, fewer forward simulations are required, compared to the standard approach, to calculate the approximate sensitivity matrix. Truncated singular values are used to calculate the Levenberg-Marquardt parameter updates, ensuring that safe small steps along the steepest-descent direction are taken for highly correlated parameters of low sensitivity, whereas efficient quasi-Gauss-Newton steps are taken for independent parameters with high impact. The performance of the proposed scheme is demonstrated for a synthetic data set representing infiltration into a partially saturated, heterogeneous soil, where hydrogeological, petrophysical, and geostatistical parameters are estimated based on the joint inversion of hydrological and geophysical data.
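The core update of such a truncated-SVD Levenberg-Marquardt scheme can be written in a few lines. This is a generic sketch of the idea, not the authors' implementation:

```python
import numpy as np

def tsvd_lm_step(J, r, lam, trunc):
    """One parameter update from Jacobian J (m x n) and residual vector r.
    Only the `trunc` largest singular values (the solution space) are kept;
    directions in the calibration null space receive no update.  The damping
    factor s/(s^2 + lam) gives quasi-Gauss-Newton steps for high-sensitivity
    directions and small, safe steepest-descent-like steps for the rest."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    filt = s[:trunc] / (s[:trunc] ** 2 + lam)
    return -Vt[:trunc].T @ (filt * (U[:, :trunc].T @ r))
```

With `lam = 0` and no truncation this reduces to the plain Gauss-Newton (pseudoinverse) step, which is a convenient sanity check.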
Independence of calibration curves for EBT Gafchromic films of the size of high-energy X-ray fields.
Cheung, Tsang; Butson, Martin J; Yu, Peter K N
2006-09-01
The EBT Gafchromic radiochromic film is a relatively new product designed specifically for dosimetry in radiation therapy. Due to the weak dependence of its response on the photon energy (variations are below 10% in the 50 kVp-10 MVp range), the film is ideal for dosimetry when the photon energy spectrum may be changing or unknown. In order to convert a map of optical densities into a map of absorbed radiation doses, a calibration curve constructed on the basis of standard calibration films is necessary. Our results have shown that, with the EBT Gafchromic film, one can use the same calibration curve for 6-MV X-ray fields of any size in the range from 5 x 5 cm² up to 40 x 40 cm². This is not the case for radiographic films, such as Kodak X-Omat V, whose response to the same dose varies by approximately 10% depending on the field size in this range. This insensitivity of the EBT Gafchromic film to the size of the radiation field makes it possible to assess doses delivered by small radiation fields. With the help of this film, it was shown that the output factor for a 0.5 x 0.5 cm² field is 0.60+/-0.03 (2 SD) relative to the 10 x 10 cm² field.
A non-linear piezoelectric actuator calibration using N-dimensional Lissajous figure
NASA Astrophysics Data System (ADS)
Albertazzi, A.; Viotti, M. R.; Veiga, C. L. N.; Fantin, A. V.
2016-08-01
Piezoelectric translators (PZTs) are very often used as phase shifters in interferometry. However, they typically present a non-linear behavior and strong hysteresis. The use of an additional resistive or capacitive sensor makes it possible to linearize the response of the PZT by feedback control. This approach works well, but makes the device more complex and expensive. A less expensive approach uses a non-linear calibration. In this paper, the authors used data from at least five interferograms to form N-dimensional Lissajous figures to establish the actual relationship between the applied voltages and the resulting phase shifts [1]. N-dimensional Lissajous figures are formed when N sinusoidal signals are combined in an N-dimensional space, where one signal is assigned to each axis. It can be verified that the resulting N-dimensional ellipse lies in a 2D plane. By fitting an ellipse equation to the resulting 2D ellipse it is possible to accurately compute the resulting phase value for each interferogram. In this paper, the relationship between the resulting phase shift and the applied voltage is simultaneously established for a set of 12 increments by a fourth-degree polynomial. The results in speckle interferometry show that, after two or three iterations, the calibration error is usually smaller than 1°.
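The ellipse-fitting step can be illustrated for the two-signal (2D) case: points (I1, I2) from two phase-shifted interferograms trace an ellipse whose quadratic coefficients encode the phase shift between the signals. A minimal sketch with synthetic intensities and assumed equal modulation amplitudes (not the paper's N-dimensional procedure):

```python
import numpy as np

def phase_shift_from_ellipse(x, y):
    # Fit the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 as the null
    # vector of the design matrix; for two sinusoids of common frequency
    # the phase shift satisfies cos(delta) = -b / (2*sqrt(a*c)).
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.svd(M)[2][-1]
    if a < 0:                       # fix overall sign of the conic
        a, b, c = -a, -b, -c
    return np.arccos(np.clip(-b / (2.0 * np.sqrt(a * c)), -1.0, 1.0))

phi = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
delta_true = np.deg2rad(70.0)
i1 = 0.4 + 0.3 * np.cos(phi)                # interferogram 1 intensity
i2 = 0.4 + 0.3 * np.cos(phi - delta_true)   # interferogram 2, shifted
delta_est = phase_shift_from_ellipse(i1, i2)
```

The same algebra generalizes to the N-dimensional case described in the abstract, where the ellipse lies in a 2D plane embedded in the N-dimensional signal space.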
High-resolution fiber optic temperature sensors using nonlinear spectral curve fitting technique.
Su, Z H; Gan, J; Yu, Q K; Zhang, Q H; Liu, Z H; Bao, J M
2013-04-01
A generic new data processing method is developed to accurately calculate the absolute optical path difference of a low-finesse Fabry-Perot cavity from its broadband interference fringes. The method combines Fast Fourier Transformation with nonlinear curve fitting of the entire spectrum. Modular functions of LabVIEW are employed for fast implementation of the data processing algorithm. The advantages of this technique are demonstrated through high performance fiber optic temperature sensors consisting of an infrared superluminescent diode and an infrared spectrometer. A high resolution of 0.01 °C is achieved over a large dynamic range from room temperature to 800 °C, limited only by the silica fiber used for the sensor.
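The two-stage estimate described (an FFT for a coarse optical path difference, then a nonlinear fit of the entire spectrum for refinement) can be sketched as follows, using a synthetic two-beam fringe spectrum rather than real sensor data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic low-finesse fringes vs. wavenumber nu (two-beam approximation):
# I(nu) = A * (1 + V * cos(2*pi*opd*nu + phi)), opd in units of 1/nu.
nu = np.linspace(1.0, 1.5, 1024)          # e.g. 1/um
opd_true = 85.3
intensity = 1.0 * (1.0 + 0.6 * np.cos(2.0 * np.pi * opd_true * nu))

# Stage 1: coarse OPD from the dominant FFT peak of the fringe pattern.
spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(nu.size, d=nu[1] - nu[0])
opd_coarse = freqs[np.argmax(spectrum)]

# Stage 2: refine by nonlinear least squares over the entire spectrum.
def fringes(nu, A, V, opd, phi):
    return A * (1.0 + V * np.cos(2.0 * np.pi * opd * nu + phi))

popt, _ = curve_fit(fringes, nu, intensity, p0=[1.0, 0.5, opd_coarse, 0.0])
opd_fit = popt[2]
```

The FFT step resolves the fringe-counting ambiguity; the full-spectrum fit then recovers the absolute OPD to far better than one FFT bin, which is what enables the high temperature resolution reported.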
NASA Astrophysics Data System (ADS)
Liu, Xuan-Zuo; Tian, Dong-Ping; Chong, Bo
2016-06-01
Liu et al. [Phys. Rev. Lett. 90(17), 170404 (2003)] proved that the characters of transition probabilities in the adiabatic limit are entirely determined by the topology of the energy levels and the stability of fixed points in the classical Hamiltonian system, according to the adiabatic theorem. In the special case of the nonlinear Landau-Zener model, we simplify their results to the statement that the properties of transition probabilities in the adiabatic limit are determined by the attributes of the fixed points alone. This is because the topology of the energy levels is governed by the behavior and symmetries of the fixed points; intuitively, this fact is represented as a correspondence between energy levels and the evolution curves of the fixed points, which can be quantitatively described by the same complexity numbers.
ERIC Educational Resources Information Center
Blanchard, Frank N.
1980-01-01
Describes a FORTRAN IV program written to supplement a laboratory exercise dealing with quantitative x-ray diffraction analysis of mixtures of polycrystalline phases in an introductory course in x-ray diffraction. Gives an example of the use of the program and compares calculated and observed calibration data. (Author/GS)
Scaling the Non-linear Impact Response of Flat and Curved Composite Panels
NASA Technical Reports Server (NTRS)
Ambur, Damodar R.; Chunchu, Prasad B.; Rose, Cheryl A.; Feraboli, Paolo; Jackson, Wade C.
2005-01-01
The application of scaling laws to thin flat and curved composite panels exhibiting nonlinear response when subjected to low-velocity transverse impact is investigated. Previous research has shown that the elastic impact response of structural configurations exhibiting geometrically linear response can be effectively scaled. In the present paper, a preliminary experimental study is presented to assess the applicability of the scaling laws to structural configurations exhibiting geometrically nonlinear deformations. The effect of damage on the scalability of the structural response characteristics, and the effect of scale on damage development are also investigated. Damage is evaluated using conventional methods including C-scan, specimen de-plying and visual inspection of the impacted panels. Coefficient of restitution and normalized contact duration are also used to assess the extent of damage. The results confirm the validity of the scaling parameters for elastic impacts. However, for the panels considered in the study, the extent and manifestation of damage do not scale according to the scaling laws. Furthermore, the results indicate that even though the damage does not scale, the overall panel response characteristics, as indicated by contact force profiles, do scale for some levels of damage.
Combining Biomarkers Linearly and Nonlinearly for Classification Using the Area Under the ROC Curve
Fong, Youyi; Yin, Shuxin; Huang, Ying
2016-01-01
In biomedical studies, it is often of interest to classify/predict a subject's disease status based on a variety of biomarker measurements. A commonly used classification criterion is based on AUC, the area under the receiver operating characteristic curve. Many methods have been proposed to optimize approximated empirical AUC criteria, but there are two limitations to the existing methods. First, most methods are only designed to find the best linear combination of biomarkers, which may not perform well when there is strong nonlinearity in the data. Second, many existing linear combination methods use gradient-based algorithms to find the best marker combination, which often result in sub-optimal local solutions. In this paper, we address these two problems by proposing a new kernel-based AUC optimization method called Ramp AUC (RAUC). This method approximates the empirical AUC loss function with a ramp function, and finds the best combination by a difference of convex functions algorithm. We show that as a linear combination method, RAUC leads to a consistent and asymptotically normal estimator of the linear marker combination when the data is generated from a semiparametric generalized linear model, just as the Smoothed AUC method (SAUC). Through simulation studies and real data examples, we demonstrate that RAUC outperforms SAUC in finding the best linear marker combinations, and can successfully capture nonlinear pattern in the data to achieve better classification performance. We illustrate our method with a dataset from a recent HIV vaccine trial. PMID:27058981
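As an illustration of the ramp idea (the loss definition only, not the authors' difference-of-convex solver), the surrogate replaces the 0-1 loss on each case-control score difference with a clipped linear ramp:

```python
import numpy as np

def empirical_auc(w, x_pos, x_neg):
    # fraction of case/control pairs the linear score w.x orders correctly
    d = (x_pos @ w)[:, None] - (x_neg @ w)[None, :]
    return (d > 0).mean()

def ramp_auc_loss(w, x_pos, x_neg, s=1.0):
    # ramp surrogate: each pairwise score difference d contributes
    # clip(1 - d/s, 0, 1), a bounded stand-in for the 0-1 loss 1[d <= 0]
    d = (x_pos @ w)[:, None] - (x_neg @ w)[None, :]
    return np.clip(1.0 - d / s, 0.0, 1.0).mean()

# synthetic two-marker data: only marker 1 separates cases from controls
rng = np.random.default_rng(0)
x_pos = rng.normal(0.0, 1.0, (50, 2)) + [1.5, 0.0]   # cases
x_neg = rng.normal(0.0, 1.0, (50, 2))                # controls
w_good, w_bad = np.array([1.0, 0.0]), np.array([0.0, 1.0])
```

Because the ramp is bounded, far-misranked pairs cannot dominate the objective the way they do under unbounded convex surrogates, which is part of the method's robustness argument.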
Su, Chiu-Wen; Ming-Fang Yen, Amy; Lai, Hongmin; Chen, Hsiu-Hsi; Chen, Sam Li-Sheng
2017-07-28
Background: The accuracy of a prediction model for periodontal disease based on the community periodontal index (CPI) has been assessed using the area under the receiver operating characteristic (AUROC) curve, but how the uncalibrated CPI, as measured by general dentists trained by periodontists in a large epidemiological study, affects the performance of such a prediction model has not yet been studied. Methods: We conducted a two-stage design, first proposing a validation study to calibrate the CPI between a senior periodontal specialist and the trained general dentists who measured CPIs in the main study of a nationwide survey. A Bayesian hierarchical logistic regression model was applied to estimate the non-updated and updated clinical weights used for building up risk scores. How the calibrated CPI affected the performance of the updated prediction model was quantified by comparing the AUROC curves of the original and the updated model. Results: The estimates regarding the calibration of the CPI obtained from the validation study were 66% for sensitivity and 85% for specificity. After updating, the clinical weights of each predictor were inflated, and the risk score for the highest risk category was elevated from 434 to 630. This update improved the AUROC performance of the two corresponding prediction models from 62.6% (95% CI: 61.7%-63.6%) for the non-updated model to 68.9% (95% CI: 68.0%-69.6%) for the updated one, a statistically significant difference (P < 0.05). Conclusions: We demonstrated an improvement in the updated prediction model for periodontal disease as measured by the calibrated CPI derived from a large epidemiological survey.
Lin, Chung-Yon; Lim, Stephanie; Anslyn, Eric V
2016-07-06
Linear free energy relationship (LFER) parameters are routinely used to parametrize physicochemical effects while investigating reaction mechanisms. In this Communication, we describe an alternate application for LFERs: training sets for model building in an analytical application. In this study, the sterics, quantified by Charton parameters (Δv), of nine secondary chiral alcohol analytes were correlated to the circular dichroism output from a chiral alcohol optical sensor. To test the validity of the model, the correlative linear model was applied to determine the enantiomeric excess of samples of two alcohols without a priori knowledge of a calibration curve. The error in this method was comparable to those of previous experimental methods (<5%).
Peris Conejero, T; Olivares Pallerols, R; Moreno Frigols, J L
2009-01-01
Immunoradiometric assay (IRMA) is one of the principal methods used for the analytical determination of neuron-specific enolase (NSE) concentration. We studied the influence of temperature on the calibration curves obtained by this method, and a physicochemical justification based on two theoretical models is proposed. We used a commercially available RIA kit for NSE and a gamma counter; data were analysed using statistical software. The activity bound to the antibody increases with temperature, producing results that are consistent with two modifications of the four-parameter and Langmuir equations. The two models used successfully reproduce the results, with the adsorption model being preferable due to its greater simplicity and clearer physical significance.
Lin, Ying-Tsong; McMahon, Kara G; Lynch, James F; Siegmann, William L
2013-01-01
The acoustic ducting effect by curved nonlinear gravity waves in shallow water is studied through idealized models in this paper. The internal wave ducts are three-dimensional, bounded vertically by the sea surface and bottom, and horizontally by aligned wavefronts. Both normal mode and parabolic equation methods are taken to analyze the ducted sound field. Two types of horizontal acoustic modes can be found in the curved internal wave duct. One is a whispering-gallery type formed by the sound energy trapped along the outer and concave boundary of the duct, and the other is a fully bouncing type due to continual reflections from boundaries in the duct. The ducting condition depends on both internal-wave and acoustic-source parameters, and a parametric study is conducted to derive a general pattern. The parabolic equation method provides full-field modeling of the sound field, so it includes other acoustic effects caused by internal waves, such as mode coupling/scattering and horizontal Lloyd's mirror interference. Two examples are provided to present internal wave ducts with constant curvature and meandering wavefronts.
De Mello, Fernanda; Oliveira, Carlos A L; Ribeiro, Ricardo P; Resende, Emiko K; Povh, Jayme A; Fornari, Darci C; Barreto, Rogério V; McManus, Concepta; Streit, Danilo
2015-01-01
The growth patterns of female and male tambaqui were evaluated using the Gompertz nonlinear regression model. Five traits of economic importance were measured on 145 animals over three years, totaling 981 morphometric data points analyzed. Separate curves were fitted for males and females for body weight, height and head length, and a single curve was fitted for body width and body length. The asymptotic weight (a) and relative growth rate to maturity (k) differed between sexes in animals at ± 5 kg, the slaughter weight used by a specific, very profitable niche market. However, there was no difference between males and females up to ± 2 kg, the slaughter weight established to supply the larger consumer market. Females showed greater weight than males (± 280 g) and are therefore more suitable for farming aimed at the niche market for larger animals. In general, males had a lower maximum growth rate (8.66 g/day) than females (9.34 g/day), but reached it sooner than females (476 versus 486 days). Body height and length were the traits that contributed most to weight at 516 days (P < 0.001).
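A Gompertz curve of the form used in such growth studies can be fitted as below, with synthetic weight-at-age data standing in for the tambaqui measurements; `a` is the asymptotic weight and `k` the relative growth rate to maturity, and the maximum growth rate a*k/e occurs at age ln(b)/k:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, k):
    # a: asymptotic weight, k: relative growth rate to maturity,
    # b: integration constant fixing the initial weight
    return a * np.exp(-b * np.exp(-k * t))

# synthetic weight-at-age data (illustrative values, not the study's fish)
rng = np.random.default_rng(1)
age = np.linspace(0.0, 1000.0, 60)                       # days
weight = gompertz(age, 5000.0, 4.0, 0.006) + rng.normal(0.0, 50.0, age.size)

(a_hat, b_hat, k_hat), _ = curve_fit(gompertz, age, weight,
                                     p0=[4000.0, 3.0, 0.01])
age_max_rate = np.log(b_hat) / k_hat   # age at the maximum growth rate
```

Comparing `a_hat` and `k_hat` between sex-specific fits is exactly the kind of contrast reported in the abstract.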
NASA Technical Reports Server (NTRS)
Bennett, J.; Hall, P.; Smith, F. T.
1988-01-01
Viscous fluid flows with curved streamlines can support both centrifugal and viscous traveling-wave instabilities. Here the interaction of these instabilities in the context of the fully developed flow in a curved channel is discussed. The viscous (Tollmien-Schlichting) instability is described asymptotically at high Reynolds numbers, and it is found that it can induce a Taylor-Goertler flow even at extremely small amplitudes. In this interaction, the Tollmien-Schlichting wave can drive a vortex state with wavelength comparable either with the channel width or with that of the lower-branch viscous modes. The nonlinear equations which describe these interactions are solved for nonlinear equilibrium states.
Mattick, K L; Legan, J D; Humphrey, T J; Peleg, M
2001-05-01
Salmonella cells in two sugar-rich media were heat treated at various constant temperatures in the range of 55 to 80 degrees C and their survival ratios determined at various time intervals. The resulting nonlinear semilogarithmic survival curves are described by the model log10 S(t) = -b(T)t^n(T), where S(t) is the momentary survival ratio N(t)/N0, and b(T) and n(T) are coefficients whose temperature dependence is described by two empirical mathematical models. When the temperature profile, T(t), of a nonisothermal heat treatment can also be expressed algebraically, b(T) and n(T) can be transformed into functions of time, i.e., b[T(t)] and n[T(t)]. If the momentary inactivation rate primarily depends on the momentary temperature and survival ratio, then the survival curve under nonisothermal conditions can be constructed by solving a differential equation, previously suggested by Peleg and Penchina, whose coefficients are expressions that contain the corresponding b[T(t)] and n[T(t)] terms. The applicability of the model and its underlying assumptions was tested with a series of eight experiments in which the Salmonella cells, in the same media, were heated at various rates to selected temperatures in the range of 65 to 80 degrees C and then cooled. In all the experiments, there was agreement between the predicted and observed survival curves. This suggests that, at least in the case of Salmonella in the tested media, survival during nonisothermal inactivation can be estimated without assuming any mortality kinetics.
Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki
2016-01-01
Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were -32.336 and -33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.
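A density-absorbed dose calibration of the straight-line type reported here reduces to a first-order polynomial fit. A minimal sketch with assumed (not measured) net-density readings:

```python
import numpy as np

# assumed net optical density readings at known doses (illustrative values)
dose = np.array([0.0, 5.0, 10.0, 15.0, 20.0])        # e.g. mGy
density = np.array([0.00, 0.16, 0.31, 0.45, 0.62])

# fit dose = gradient * density + intercept, so unknown films can be
# read back to absorbed dose from their measured density
gradient, intercept = np.polyfit(density, dose, 1)

def density_to_dose(d):
    return gradient * d + intercept
```

The simplified method in the abstract obtains all the calibration points from a single exposure through a step-shaped filter, but the fit itself is the same.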
NASA Astrophysics Data System (ADS)
Tao, S.; Trzasko, J. D.; Gunter, J. L.; Weavers, P. T.; Shu, Y.; Huston, J., III; Lee, S. K.; Tan, E. T.; Bernstein, M. A.
2017-01-01
Due to engineering limitations, the spatial encoding gradient fields in conventional magnetic resonance imaging cannot be perfectly linear and always contain higher-order, nonlinear components. If ignored during image reconstruction, gradient nonlinearity (GNL) manifests as image geometric distortion. Given an estimate of the GNL field, this distortion can be corrected to a degree proportional to the accuracy of the field estimate. The GNL of a gradient system is typically characterized using a spherical harmonic polynomial model with model coefficients obtained from electromagnetic simulation. Conventional whole-body gradient systems are symmetric in design; typically, only odd-order terms up to the 5th order are required for GNL modeling. Recently, a high-performance, asymmetric gradient system was developed, which exhibits more complex GNL that requires higher-order terms, including both odd and even orders, for accurate modeling. This work characterizes the GNL of this system using an iterative calibration method and a fiducial phantom used in ADNI (the Alzheimer's Disease Neuroimaging Initiative). The phantom was scanned at different locations inside the 26-cm-diameter spherical volume of this gradient, and the positions of fiducials in the phantom were estimated. An iterative calibration procedure was utilized to identify the model coefficients that minimize the mean-squared error between the true fiducial positions and the positions estimated from images corrected using these coefficients. To examine the effect of higher-order and even-order terms, this calibration was performed using spherical harmonic polynomials of different orders up to the 10th order, including even- and odd-order terms, or odd-order terms only. The results showed that the model coefficients of this gradient can be successfully estimated. The residual root-mean-square error after correction using up to the 10th-order coefficients was reduced to 0.36 mm, yielding spatial accuracy comparable to
Vonhoff, Sebastian; Condliffe, Jamie; Schiffter, Heiko
2010-01-05
The aim of this study was to develop a quick and objective method for the determination of changes in protein secondary structure by Fourier transform infrared spectroscopy (FTIR). Structural shifts from native regions (alpha-helix, intramolecular beta-sheet) to aggregated strands (intermolecular beta-sheet) were used to evaluate protein damage. FTIR spectra of 16 different proteins were recorded and quantified by peak fitting of the non-deconvolved and baseline corrected amide I bands. The resulting percentile secondary structures were correlated with the shape and intensity of the area normalized amide I bands using an interval partial least squares algorithm (iPLS). Structural elements were focused on the following regions: alpha-helix 1660-1650 cm⁻¹, intramolecular beta-sheet 1695-1683 cm⁻¹ and 1644-1620 cm⁻¹, intermolecular beta-sheet 1620-1595 cm⁻¹. Three calibration curves were created from the data sets. Calculated alpha-helix content ranged from 0% to 79.59%, intramolecular beta-sheet from 10.64% to 63.89% and intermolecular beta-sheet from 0.23% to 9.70%. The linear relationship between actual values (as determined by peak fitting) and calculated values was evaluated by correlation coefficient and root mean square error of calibration while cross-validation was performed to detect possible outliers. Results were verified by including two proteins as validation standards and comparing the calculated values to peak fitting and X-ray data. Structural changes of human serum albumin (HSA) due to elevated temperatures and the fibrillation of glucagon were quantified by calibration curve analysis. Performance and reliability of the iPLS algorithm were evaluated by comparing calculated secondary structure elements with results from peak fitting and circular dichroism. Different methods for the determination of secondary structure gave slightly different results but overall tendencies concurred. Additionally, formation of HSA aggregates could be linked to
NASA Astrophysics Data System (ADS)
Schwartz, Andrew J.; Ray, Steven J.; Hieftje, Gary M.
2015-03-01
Two methods are described that enable on-line generation of calibration standards and standard additions in solution-cathode glow discharge optical emission spectrometry (SCGD-OES). The first method employs a gradient high-performance liquid chromatography pump to perform on-line mixing and delivery of a stock standard, sample solution, and diluent to achieve a desired solution composition. The second method makes use of a simpler system of three peristaltic pumps to perform the same function of on-line solution mixing. Both methods can be computer-controlled and automated, and thereby enable both simple and standard-addition calibrations to be rapidly performed on-line. Performance of the on-line approaches is shown to be comparable to that of traditional methods of sample preparation, in terms of calibration curves, signal stability, accuracy, and limits of detection. Potential drawbacks of the on-line procedures include a signal lag following changes in solution composition, and pump-induced multiplicative noise. Though the new on-line methods were applied here to SCGD-OES to improve sample throughput, they are not limited in application to SCGD-OES; any instrument that samples from flowing solution streams (flame atomic absorption spectrometry, ICP-OES, ICP-mass spectrometry, etc.) could benefit from them.
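The on-line mixing arithmetic behind either pumping scheme is just a flow-weighted dilution. The helper below is a hypothetical sketch (constant total flow assumed, names invented) that splits the flow among stock-standard, sample, and diluent channels:

```python
def flow_rates(total_flow, stock_conc, added_conc, sample_fraction):
    """Flow for each channel (same units as total_flow, e.g. mL/min) so the
    merged stream carries the sample diluted to `sample_fraction` plus
    `added_conc` of analyte contributed by the stock standard, i.e. one
    point of an on-line standard-addition series."""
    q_stock = total_flow * added_conc / stock_conc
    q_sample = total_flow * sample_fraction
    q_diluent = total_flow - q_stock - q_sample
    if q_diluent < 0.0:
        raise ValueError("requested addition/dilution exceeds total flow")
    return q_stock, q_sample, q_diluent
```

Stepping `added_conc` through, say, 0, 1, 2, ... units while holding `sample_fraction` fixed builds the standard-addition series with no manual solution preparation, which is the throughput gain the abstract describes.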
NASA Astrophysics Data System (ADS)
Zafiropoulos, Demetre; Facco, E.; Sarchiapone, Lucia
2016-09-01
In case of a radiation accident, it is well known that in the absence of physical dosimetry biological dosimetry based on cytogenetic methods is a unique tool to estimate individual absorbed dose. Moreover, even when physical dosimetry indicates an overexposure, scoring chromosome aberrations (dicentrics and rings) in human peripheral blood lymphocytes (PBLs) at metaphase is presently the most widely used method to confirm dose assessment. The analysis of dicentrics and rings in PBLs after Giemsa staining of metaphase cells is considered the most valid assay for radiation injury. This work shows that applying the fluorescence in situ hybridization (FISH) technique, using telomeric/centromeric peptide nucleic acid (PNA) probes in metaphase chromosomes for radiation dosimetry, could become a fast scoring, reliable and precise method for biological dosimetry after accidental radiation exposures. In both in vitro methods described above, lymphocyte stimulation is needed, and this limits the application in radiation emergency medicine where speed is considered to be a high priority. Using premature chromosome condensation (PCC), irradiated human PBLs (non-stimulated) were fused with mitotic CHO cells, and the yield of excess PCC fragments in Giemsa stained cells was scored. To score dicentrics and rings under PCC conditions, the necessary centromere and telomere detection of the chromosomes was obtained using FISH and specific PNA probes. Of course, a prerequisite for dose assessment in all cases is a dose-effect calibration curve. This work illustrates the various methods used; dose response calibration curves, with 95% confidence limits used to estimate dose uncertainties, have been constructed for conventional metaphase analysis and FISH. We also compare the dose-response curve constructed after scoring of dicentrics and rings using PCC combined with FISH and PNA probes. Also reported are dose response curves showing scored dicentrics and rings per cell, combining
Combining biomarkers linearly and nonlinearly for classification using the area under the ROC curve.
Fong, Youyi; Yin, Shuxin; Huang, Ying
2016-09-20
In biomedical studies, it is often of interest to classify/predict a subject's disease status based on a variety of biomarker measurements. A commonly used classification criterion is based on area under the receiver operating characteristic curve (AUC). Many methods have been proposed to optimize approximated empirical AUC criteria, but there are two limitations to the existing methods. First, most methods are only designed to find the best linear combination of biomarkers, which may not perform well when there is strong nonlinearity in the data. Second, many existing linear combination methods use gradient-based algorithms to find the best marker combination, which often result in suboptimal local solutions. In this paper, we address these two problems by proposing a new kernel-based AUC optimization method called ramp AUC (RAUC). This method approximates the empirical AUC loss function with a ramp function and finds the best combination by a difference of convex functions algorithm. We show that as a linear combination method, RAUC leads to a consistent and asymptotically normal estimator of the linear marker combination when the data are generated from a semiparametric generalized linear model, just as the smoothed AUC method. Through simulation studies and real data examples, we demonstrate that RAUC outperforms smoothed AUC in finding the best linear marker combinations, and can successfully capture nonlinear pattern in the data to achieve better classification performance. We illustrate our method with a dataset from a recent HIV vaccine trial. Copyright © 2016 John Wiley & Sons, Ltd.
SDSS J14584479+3720215: A BENCHMARK JHKs BLAZAR LIGHT CURVE FROM THE 2MASS CALIBRATION SCANS
Davenport, James R. A.; Ruan, John J.; Becker, Andrew C.; Macleod, Chelsea L.; Cutri, Roc M.
2015-04-10
Active galactic nuclei (AGNs) are well-known to exhibit flux variability across a wide range of wavelength regimes, but the precise origin of the variability at different wavelengths remains unclear. To investigate the relatively unexplored near-IR (NIR) variability of the most luminous AGNs, we conduct a search for variability using well sampled JHKs-band light curves from the Two Micron All Sky Survey (2MASS) calibration fields. Our sample includes 27 known quasars with an average of 924 epochs of observation over three years, as well as one spectroscopically confirmed blazar (SDSS J14584479+3720215) with 1972 epochs of data. This is the best-sampled NIR photometric blazar light curve to date, and it exhibits correlated, stochastic variability that we characterize with continuous auto-regressive moving average (CARMA) models. None of the other 26 known quasars had detectable variability in the 2MASS bands above the photometric uncertainty. A blind search of the 2MASS calibration field light curves for AGN candidates based on fitting CARMA(1,0) models (damped random walk) uncovered only seven candidates. All seven were young stellar objects within the ρ Ophiuchus star forming region, five with previous X-ray detections. A significant γ-ray detection (5σ) for the known blazar using 4.5 yr of Fermi photon data is also found. We suggest that strong NIR variability of blazars, such as seen for SDSS J14584479+3720215, can be used as an efficient method of identifying previously unidentified γ-ray blazars, with low contamination from other AGNs.
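A CARMA(1,0) process, the damped random walk used in the blind search, can be simulated exactly at irregular epochs with the AR(1) conditional update. This is a generic sketch with invented parameter values, not the paper's fitted models:

```python
import numpy as np

def simulate_drw(times, tau, sigma, mean_mag, seed=0):
    """Damped random walk sampled at arbitrary `times`: between epochs the
    process decays by exp(-dt/tau) and gains Gaussian noise scaled so the
    stationary standard deviation stays equal to `sigma`."""
    rng = np.random.default_rng(seed)
    mag = np.empty(times.size)
    mag[0] = rng.normal(0.0, sigma)
    for i in range(1, times.size):
        a = np.exp(-(times[i] - times[i - 1]) / tau)
        mag[i] = a * mag[i - 1] + rng.normal(0.0, sigma * np.sqrt(1.0 - a * a))
    return mean_mag + mag

# ~900 irregular epochs over ~3 yr, loosely mimicking the cadence described
epochs = np.sort(np.random.default_rng(1).uniform(0.0, 1000.0, 900))
light_curve = simulate_drw(epochs, tau=150.0, sigma=0.1, mean_mag=16.5)
```

Because the conditional distribution between epochs is exact, this handles the irregular 2MASS calibration-scan cadence without interpolation.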
Jeong, Hyunjo; Barnard, Daniel; Cho, Sungjong; Zhang, Shuzeng; Li, Xiongbing
2017-11-01
This paper presents analytical and experimental techniques for accurate determination of the nonlinearity parameter (β) in thick solid samples. When piezoelectric transducers are used for β measurements, receiver calibration is required to determine the transfer function from which the absolute displacement can be calculated. The measured fundamental and second-harmonic displacement amplitudes should be modified to account for beam diffraction and material absorption. All these issues are addressed in this study and the proposed technique is validated through β measurements of thick solid samples. A simplified self-reciprocity calibration procedure for a broadband receiver is described. The diffraction and attenuation corrections for the fundamental and second harmonics are explicitly derived. Aluminum alloy samples of five different thicknesses (4, 6, 8, 10, and 12 cm) are prepared and β measurements are made using the finite-amplitude through-transmission method. The effects of diffraction and attenuation corrections on β measurements are systematically investigated. When diffraction and attenuation corrections are all properly made, the variation of β between samples of different thickness is found to be less than 3.2%. Copyright © 2017 Elsevier B.V. All rights reserved.
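For context, the uncorrected plane-wave relation commonly used to extract β from the fundamental and second-harmonic displacement amplitudes is β = 8A₂/(k²zA₁²), with wavenumber k = 2πf/c. The sketch below uses that textbook relation with illustrative numbers; the paper's diffraction and attenuation corrections are deliberately omitted.

```python
import math

def beta_parameter(a1, a2, freq, velocity, distance):
    """Lossless plane-wave estimate beta = 8*A2 / (k**2 * z * A1**2),
    k = 2*pi*f/c. Diffraction/attenuation corrections are not applied."""
    k = 2.0 * math.pi * freq / velocity
    return 8.0 * a2 / (k ** 2 * distance * a1 ** 2)

# Illustrative values only: 5 MHz tone burst through 6 cm of aluminum,
# with assumed harmonic displacement amplitudes A1 = 20 nm, A2 = 0.1 nm
beta = beta_parameter(a1=2e-8, a2=1e-10, freq=5e6,
                      velocity=6300.0, distance=0.06)
```

The result lands in the single digits, the expected order of magnitude for β in aluminum alloys.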
NASA Astrophysics Data System (ADS)
Lovejoy, McKenna R.; Wickert, Mark A.
2017-05-01
A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short-wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating-point results show 30% less non-uniformity in post-corrected data when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing across multiple data sets provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
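A per-pixel polynomial NUC of the kind described can be sketched as below. The synthetic 2×2 "detector", the four flat-field levels, and the plain pixel loop are illustrative assumptions, not the study's pipeline (which works on real SWIR camera data and considers coefficient precision).

```python
import numpy as np

def fit_nuc(raw_stack, target_levels, order=3):
    """Fit, per pixel, a polynomial mapping raw counts to the uniform
    flat-field target levels. raw_stack: (n_levels, H, W).
    Returns an (H, W, order+1) array of polynomial coefficients."""
    n, h, w = raw_stack.shape
    coeffs = np.empty((h, w, order + 1))
    for i in range(h):
        for j in range(w):
            coeffs[i, j] = np.polyfit(raw_stack[:, i, j], target_levels, order)
    return coeffs

def apply_nuc(frame, coeffs):
    """Apply the stored per-pixel correction polynomial to one frame."""
    h, w, _ = coeffs.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.polyval(coeffs[i, j], frame[i, j])
    return out

# Synthetic 2x2 detector with per-pixel gain/offset, four flat-field levels
levels = np.array([0.0, 1.0, 2.0, 3.0])
gain = np.array([[0.9, 1.1], [1.0, 1.2]])
offset = np.array([[5.0, -3.0], [0.5, 2.0]])
raw = levels[:, None, None] * gain + offset          # shape (4, 2, 2)
coeffs = fit_nuc(raw, levels)
corrected = apply_nuc(raw[2], coeffs)                # should recover level 2
```

For a real imager the per-pixel loop would be vectorized, and the trade-off in the abstract amounts to choosing `order` and the storage precision of `coeffs`.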
NASA Astrophysics Data System (ADS)
Duc, Nguyen Dinh; Quan, Tran Quoc
2013-11-01
The nonlinear buckling and postbuckling response of imperfect functionally graded doubly curved thin shallow shells resting on elastic foundations and subjected to mechanical loads is investigated analytically. The elastic moduli of the materials, Young's modulus and Poisson's ratio, are graded in the shell thickness direction according to a simple power law in terms of the volume fractions of the constituents. All formulations are based on the classical theory of shells with account of geometrical nonlinearity, an initial geometrical imperfection, and a Pasternak-type elastic foundation. By employing the Galerkin method, explicit relations for the load-deflection curves of simply supported doubly curved shallow FGM shells are determined. The effects of material and geometrical properties, foundation stiffness, and imperfection of the shells on the buckling and postbuckling load-carrying capacity of spherical and cylindrical shallow FGM shells are analyzed and discussed.
NASA Astrophysics Data System (ADS)
Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim
2013-06-01
We introduce a nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of subsurface flow models. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the basis function most correlated with the residual from a large pool of basis functions. The discovered basis (aka support) is augmented across the nonlinear iterations. Once a set of basis functions is selected, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on a stochastically approximated gradient using an iterative stochastic ensemble method (ISEM). In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm. The proposed algorithm is the first ensemble-based algorithm that tackles the sparse nonlinear parameter estimation problem.
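The greedy support-building loop at the core of matching pursuit can be seen in the plain linear OMP sketch below. This is only a structural analogue: NOMP as described above replaces the exact correlations and least-squares solves with stochastic ensemble (ISEM) approximations and adds Tikhonov regularization, none of which is reproduced here.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Linear orthogonal matching pursuit: at each iteration pick the
    dictionary column most correlated with the residual, then refit by
    least squares on the accumulated support."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coef
        residual = y - A @ x
    return x, support

# Toy dictionary (identity) and a 2-sparse signal
A = np.eye(5)
y = np.array([0.0, 2.0, 0.0, 0.0, 3.0])
x, support = omp(A, y, n_nonzero=2)
```

With an orthonormal dictionary the two nonzero entries are recovered exactly in two iterations, which is the behavior the greedy augmentation of the support is designed to deliver.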
Que, Tran; Duy, Pham Ngoc; Luyen, Bui Thi Kim
2016-01-01
To develop a calibration curve for the induction of dicentric chromosomes by radiation, we used a 60Co gamma-ray source with a dose rate of 12.5 mGy/s. Whole blood from 15 healthy donors was collected. Whole blood from each donor was divided equally into 8 parts for exposure to nominal physical doses of 0, 0.30, 0.50, 1.00, 1.50, 2.00, 3.00 and 4.00 Gy, giving an independent calibration curve per donor. Whole blood from all 15 donors was used to establish the general dose-effect calibration curve and its statistics. A Poisson test (u-test) on the distribution of dicentric chromosomes among metaphases was used to check the uniformity of the radiation field. The average linear correlation coefficient over the 15 independent calibration curves was r(y, d) = 0.5136 ± 0.0038. The model equation is y = aD + bD² + C. The fitted dose-effect calibration equation was y = 1.01D + 4.43D² + 0.56. PMID:28217278
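The linear-quadratic dose-effect model can be fit by ordinary least squares as sketched below. The yields here are generated from the paper's reported coefficients purely for illustration (real dicentric counts are Poisson-distributed, and a maximum-likelihood Poisson fit would normally be preferred).

```python
import numpy as np

doses = np.array([0.0, 0.3, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])  # Gy

# Illustrative yields generated from the reported fit
# y = 1.01*D + 4.43*D**2 + 0.56 (in the paper's units)
yields = 1.01 * doses + 4.43 * doses ** 2 + 0.56

# Design matrix for the model y = a*D + b*D^2 + c
X = np.column_stack([doses, doses ** 2, np.ones_like(doses)])
coef, *_ = np.linalg.lstsq(X, yields, rcond=None)
a, b, c = coef
```

Because the synthetic yields lie exactly on the curve, the least-squares solve returns the generating coefficients; with real scored metaphases the same design matrix applies, only the noise model changes.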
Mass estimation of shaped charge jets from x-ray shadow graph with new calibration curve method
NASA Astrophysics Data System (ADS)
Saito, Fumikazu; Kishimura, Hiroaki; Kumakura, Akira; Sakai, Shun
2015-06-01
In order to assess the penetration capability of Al and Cu metal jets against a bumper structure (such as an Al plate and/or Al block), we measured the initial formation process of the metal jets generated from a conical shaped charge device. The shaped charge device configurations employed in the experimental and numerical investigations have a conical aluminum (or copper) liner and a steel casing with a PBX explosive charge. The profile and velocity of the jets are measured with a flash x-ray and x-ray film system. The mass of the jet tip is estimated from x-ray images by a calibration curve method proposed by our group. Al targets are used to evaluate the penetration performance of the jets. Additionally, we have simulated the initial formation process of the shaped charge jets with the AUTODYN-2D hydrodynamic code, which provided important data for comparison with the experimental results.
Burns, Malcolm J; Nixon, Gavin J; Foy, Carole A; Harris, Neil
2005-01-01
Background As real-time quantitative PCR (RT-QPCR) is increasingly being relied upon for the enforcement of legislation and regulations dependent upon the trace detection of DNA, focus has increased on the quality issues related to the technique. Recent work has focused on the identification of factors that contribute towards significant measurement uncertainty in the real-time quantitative PCR technique, through investigation of the experimental design and operating procedure. However, measurement uncertainty contributions made during the data analysis procedure have not been studied in detail. This paper presents two additional approaches for standardising data analysis through the novel application of statistical methods to RT-QPCR, in order to minimise potential uncertainty in results. Results Experimental data was generated in order to develop the two aspects of data handling and analysis that can contribute towards measurement uncertainty in results. This paper describes preliminary aspects in standardising data through the application of statistical techniques to the area of RT-QPCR. The first aspect concerns the statistical identification and subsequent handling of outlying values arising from RT-QPCR, and discusses the implementation of ISO guidelines in relation to acceptance or rejection of outlying values. The second aspect relates to the development of an objective statistical test for the comparison of calibration curves. Conclusion The preliminary statistical tests for outlying values and comparisons between calibration curves can be applied using basic functions found in standard spreadsheet software. These two aspects emphasise that the comparability of results arising from RT-QPCR needs further refinement and development at the data-handling phase. The implementation of standardised approaches to data analysis should further help minimise variation due to subjective judgements. The aspects described in this paper will help contribute towards the
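One standard spreadsheet-compatible screen for outlying replicates, of the ISO-guided kind discussed above, is Grubbs' test. The Cq values and the tabulated critical value below are illustrative assumptions, not the paper's data or its exact procedure.

```python
import statistics

def grubbs_statistic(values):
    """Grubbs test statistic G = max|x_i - mean| / s for a single
    suspected outlier (two-sided). Compare G against a tabulated
    critical value for the sample size and significance level."""
    m = statistics.mean(values)
    s = statistics.stdev(values)
    return max(abs(v - m) for v in values) / s

# Hypothetical six-replicate Cq set with one suspect value
cq = [24.1, 24.2, 24.0, 24.3, 24.1, 26.5]
G = grubbs_statistic(cq)

# Tabulated two-sided critical value for n=6, alpha=0.05
G_CRIT = 1.887
is_outlier = G > G_CRIT
```

If `is_outlier` is true, the guideline-driven decision (investigate, reject, or retain with comment) still rests with the analyst; the statistic only flags the value.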
Multigrid solution of the nonlinear Poisson-Boltzmann equation and calculation of titration curves.
Oberoi, H; Allewell, N M
1993-01-01
Although knowledge of the pKa values and charge states of individual residues is critical to understanding the role of electrostatic effects in protein structure and function, calculating these quantities is challenging because of the sensitivity of these parameters to the position and distribution of charges. Values for many different proteins which agree well with experimental results have been obtained with modified Tanford-Kirkwood theory in which the protein is modeled as a sphere (reviewed in Ref. 1); however, convergence is more difficult to achieve with finite difference methods, in which the protein is mapped onto a grid and derivatives of the potential function are calculated as differences between the values of the function at grid points (reviewed in Ref. 6). Multigrid methods, in which the size of the grid is varied from fine to coarse in several cycles, decrease computational time, increase rates of convergence, and improve agreement with experiment. Both the accuracy and computational advantage of the multigrid approach increase with grid size, because the time required to achieve a solution increases slowly with grid size. We have implemented a multigrid procedure for solving the nonlinear Poisson-Boltzmann equation, and, using lysozyme as a test case, compared calculations for several crystal forms, different refinement procedures, and different charge assignment schemes. The root mean square difference between calculated and experimental pKa values for the crystal structure which yields best agreement with experiment (1LZT) is 1.1 pH units, with the differences in calculated and experimental pK values being less than 0.6 pH units for 16 out of 21 residues. The calculated titration curves of several residues are biphasic. PMID:8369451
NASA Astrophysics Data System (ADS)
Siade, A. J.; Prommer, H.; Welter, D.
2014-12-01
Groundwater management and remediation require the implementation of numerical models in order to evaluate the potential anthropogenic impacts on aquifer systems. In many situations, the numerical model must be able to simulate not only groundwater flow and transport but also geochemical and biological processes. Each process being simulated carries with it a set of parameters that must be identified, along with differing potential sources of model-structure error. Various data types are often collected in the field and then used to calibrate the numerical model; however, these data types can represent very different processes and can subsequently be sensitive to the model parameters in extremely complex ways. Therefore, developing an appropriate weighting strategy to address the contributions of each data type to the overall least-squares objective function is not straightforward. This is further compounded by the presence of potential sources of model-structure errors that manifest themselves differently for each observation data type. Finally, reactive transport models are highly nonlinear, which can lead to convergence failure for algorithms operating on the assumption of local linearity. In this study, we propose a variation of the popular particle swarm optimization algorithm to address trade-offs associated with the calibration of one data type over another. This method removes the need to specify weights between observation groups and instead produces a multi-dimensional Pareto front that illustrates the trade-offs between data types. We use the PEST++ run manager, along with the standard PEST input/output structure, to implement parallel programming across multiple desktop computers using TCP/IP communications. This allows for very large swarms of particles without the need of a supercomputing facility. The method was applied to a case study in which modeling was used to gain insight into the mobilization of arsenic at a deepwell injection site
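The swarm update at the heart of the method looks like the minimal single-objective sketch below. The study's variant is multi-objective (it builds a Pareto front across observation data types) and runs in parallel through the PEST++ run manager; neither of those features, nor the reactive-transport objective itself, is reproduced here.

```python
import random

def pso(f, bounds, n_particles=20, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal single-objective particle swarm optimizer.
    Each particle is pulled toward its personal best and the global best."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective standing in for a model-calibration misfit
def sphere(p):
    return sum(v * v for v in p)

best, best_val = pso(sphere, [(-5.0, 5.0), (-5.0, 5.0)])
```

In the multi-objective setting, the single `gbest` is replaced by an archive of non-dominated particles, and no weighting between observation groups is ever specified.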
Lattes, S; Appert-Flory, A; Fischer, F; Jambou, D; Toulon, P
2011-01-01
Coagulation factor VIII (FVIII) is usually evaluated using activated partial thromboplastin time-based one-stage clotting assays. Guidelines for clotting factor assays indicate that a calibration curve should be included each time the assay is performed. Therefore, FVIII measurement is expensive, reagent- and time-consuming. The aim of this study was to compare FVIII activities obtained using the same fully automated assay that was calibrated once (stored calibration curve) or each time the assay was performed. Unique lots of reagents were used throughout the study. We analysed 255 frozen plasma samples from patients who were prescribed FVIII measurement, including treated and untreated haemophilia A patients. Twenty-six runs were performed over a 28-week period, each including four lyophilized control samples and at most 10 patient plasma samples. In control samples, FVIII activities were not significantly different when the assay was performed using the stored calibration curve or was calibrated daily. The same applied to FVIII activities in patient plasma samples, which were not significantly different throughout the measuring range of activities [68.3% (<1-179) vs. 67.6% (<1-177), P=0.48], and no relevant bias could be demonstrated when the data were compared according to Bland and Altman. These results suggest that, under the studied technical conditions, performing the FVIII assay using a stored calibration curve is reliable for at least 6 months. Therefore, as long as the same lots of reagents are used, it is not mandatory to include a calibration curve each time the FVIII assay is performed. However, this strategy has to be validated if the assay is performed under different technical conditions. © 2010 Blackwell Publishing Ltd.
Green, M.I.; Nelson, D.; Marks, S.; Gee, B.; Wong, W.; Meneghetti, J.
1989-03-01
A matched pair of curved integral coils has been designed, fabricated, and calibrated at Lawrence Berkeley Laboratory for measuring Advanced Light Source (ALS) booster dipole magnets. Distinctive fabrication and calibration techniques are described, including the use of multifilar magnet wire in fabricating integral search coils. Procedures used and results of AC and DC measurements of transfer function, effective length, and uniformity of the prototype booster dipole magnet are presented in companion papers. 8 refs.
NASA Astrophysics Data System (ADS)
Sjögren, Torbjörn; Johansson, Arne V.
2000-06-01
A simple and straightforward method is presented for the derivation and calibration of algebraic nonlinear models for terms in Reynolds stress turbulence closures. The method extensively utilizes data from direct numerical simulations to allow an investigation of the model performance over the entire Reynolds stress anisotropy-invariant map. The model constants are determined from the condition of minimizing the mean square error over the invariant map, in order to give good model behavior for as wide a class of flow situations as possible. A low Reynolds number closure is proposed based on the most general form for closing the Reynolds stress transport equations in terms of Reynolds stresses and total dissipation rate. It is shown that forcing the closure to satisfy realizability in a strict sense leads to good model behavior even for the complicated flow situation near a wall, without any use of ad-hoc wall damping functions in the closure. The model behavior in homogeneous turbulent flow is analyzed by formulating equations for invariant measures, yielding several quite general results for the behavior of the present and other existing models. A new approach to modeling the effects of rotation in the context of Reynolds stress closures is presented and tested for some different homogeneous flows subjected to rotation.
Abdolmaleki, Azizeh; Ghasemi, Jahan B; Shiri, Fereshteh; Pirhadi, Somayeh
2015-01-01
Data manipulation and maximally efficient extraction of useful information require a range of searching, modeling, mathematical, and statistical approaches. Hence, an adequate multivariate characterization is the first necessary step in an investigation, and the results are interpreted after multivariate analysis. Multivariate data analysis is capable not only of managing large datasets but also of interpreting them reliably and rapidly. Application of chemometrics and cheminformatics methods may be useful for the design and discovery of new drug compounds. In this review, we present a variety of information sources on chemometrics, which we consider useful in different fields of drug design. This review describes exploratory analysis (PCA) and classification and multivariate calibration (PCR, PLS) methods for data analysis. It summarizes the main facts of linear and nonlinear multivariate data analysis in drug discovery and provides an introduction to the manipulation of data in this field. It covers the fundamental concepts of multivariate methods and the principles of projections (PCA and PLS), and introduces the popular modeling and classification techniques. Enough theory behind these methods, particularly concerning the chemometric tools, is included for those with little experience in multivariate data analysis techniques such as PCA, PLS, SIMCA, etc. We describe each method while avoiding unnecessary equations and details of calculation algorithms, and provide a synopsis of each method followed by cases of application in drug design (i.e., QSAR) and some of the features of each method.
NASA Astrophysics Data System (ADS)
Tian, Shun-Qiang; Zhang, Wen-Zhi; Li, Hao-Hu; Zhang, Man-Zhou; Hou, Jie; Zhou, Xue-Mei; Liu, Gui-Min
2009-06-01
Phase I commissioning of the SSRF storage ring at a beam energy of 3.0 GeV started at the end of December 2007. A lot of encouraging results have been obtained so far. In this paper, calibrations of the linear optics during the commissioning are discussed, and some measured results on the nonlinearity are given. The calibration procedure emphasizes correcting quadrupole magnetic coefficients with the Linear Optics from Closed Orbit (LOCO) technique. After fitting the closed-orbit response matrix, the linear optics of the four test modes is substantially corrected, and the measured physical parameters agree well with the designed ones.
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
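Of the models compared, Wood's curve y = a·t^b·e^(−ct) illustrates how such lactation-curve fits proceed. Linearizing through logarithms is a common shortcut shown here with synthetic data; the paper itself fits nonlinear mixed models (PROC NLMIXED) to real test-day FPR records, which this sketch does not reproduce.

```python
import numpy as np

def fit_wood(t, y):
    """Fit Wood's model y = a * t**b * exp(-c*t) by linearizing:
    ln y = ln a + b*ln t - c*t, then ordinary least squares."""
    X = np.column_stack([np.ones_like(t), np.log(t), -t])
    beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    return np.exp(beta[0]), beta[1], beta[2]

# Synthetic monthly test-day values generated from assumed parameters
t = np.arange(1.0, 11.0)        # months in milk
a_true, b_true, c_true = 1.2, 0.25, 0.04
y = a_true * t ** b_true * np.exp(-c_true * t)

a_fit, b_fit, c_fit = fit_wood(t, y)
```

On noise-free synthetic data the linearized fit recovers the generating parameters exactly; with real records the log transform distorts the error structure, which is one reason the authors fit the models in their nonlinear form.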
Jurado, J M; Alcázar, A; Muñiz-Valencia, R; Ceballos-Magaña, S G; Raposo, F
2017-09-01
Since linear calibration is mostly preferred for analytical determinations, linearity in the calibration range is an important performance characteristic of any instrumental analytical method. Linearity can be proved by applying several graphical and numerical approaches. The principal graphical criteria are visual inspection of the calibration plot, the residuals plot, and the response factors plot, also called the sensitivity or linearity plot. All of them must include confidence limits in order to visualize linearity deviations. In this work, the graphical representation of percent relative errors of back-calculated concentrations against the concentration of the calibration standards is proposed as a linearity criterion. This graph considers a confidence interval based on the expected recovery related to the concentration level according to the AOAC approach. To illustrate it, four calibration examples covering different analytical techniques and calibration situations have been studied. The proposed %RE graph was useful in all examples, helping to highlight problems related to non-linear behavior such as points with high leverage and deviations from linearity at the extremes of the calibration range. In this way, a numerical decision limit that takes into account the concentration of the calibration standards can be easily included as a linearity criterion in the form %RE_Th = 2·C^(−0.11). Accordingly, this %RE parameter is suitable for decision-making related to linearity assessment according to the fitness-for-purpose approach. Copyright © 2017 Elsevier B.V. All rights reserved.
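A minimal sketch of the proposed %RE criterion follows; the standards and responses are invented for illustration, and the threshold uses the paper's decision limit %RE_Th = 2·C^(−0.11).

```python
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])        # calibration standards
signal = np.array([5.2, 10.1, 19.8, 50.5, 99.0])   # hypothetical responses

# Ordinary least-squares line and back-calculated concentrations
slope, intercept = np.polyfit(conc, signal, 1)
back_calc = (signal - intercept) / slope

# Percent relative error of back-calculated concentrations
re_percent = 100.0 * (back_calc - conc) / conc

# Decision limit from the paper: %RE_Th = 2 * C**(-0.11)
re_threshold = 2.0 * conc ** (-0.11)
linear_ok = bool(np.all(np.abs(re_percent) <= re_threshold))
```

Plotting `re_percent` against `conc` with the `±re_threshold` envelope gives the proposed graph; any point escaping the envelope flags a linearity problem at that concentration level.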
Seichter, Felicia; Vogt, Josef; Radermacher, Peter; Mizaikoff, Boris
2017-01-25
The calibration of analytical systems is time-consuming, and the effort for daily calibration routines should therefore be minimized while maintaining analytical accuracy and precision. The 'calibration transfer' approach proposes to combine calibration data already recorded with actual calibration measurements. However, this strategy was developed for the multivariate, linear analysis of spectroscopic data and thus cannot be applied to sensors with a single response channel and/or a non-linear relationship between signal and the desired analyte concentration. To fill this gap for a non-linear calibration equation, we assume that the coefficients of the equation, collected over several calibration runs, are normally distributed. Considering that the coefficients of an actual calibration are a sample from this distribution, only a few standards are needed for a complete calibration data set. The resulting calibration transfer approach is demonstrated for a fluorescence oxygen sensor and implemented as a hierarchical Bayesian model, combined with a Lagrange multipliers technique and Markov chain Monte Carlo sampling. The latter provides realistic estimates for coefficients and predictions together with accurate error bounds by simulating known measurement errors and system fluctuations. Performance criteria for validation and optimal selection of a reduced set of calibration samples were developed and lead to a setup which maintains the analytical performance of a full calibration. Strategies for a rapid determination of problems occurring in a daily calibration routine are proposed, thereby opening the possibility of correcting the problem just in time.
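The core idea of treating past calibration coefficients as a prior can be sketched, for the much simpler linear-Gaussian case, as a conjugate MAP update. Everything here is an assumption for illustration: the linear model, the prior numbers, and the two "fresh" standards. The paper's hierarchical Bayesian model, its nonlinear sensor equation, and the MCMC/Lagrange-multiplier machinery are not reproduced.

```python
import numpy as np

def map_calibration(X, y, prior_mean, prior_cov, noise_var):
    """Posterior-mean (MAP) coefficients for a linear calibration with a
    Gaussian prior summarizing earlier runs; solves
    (P^-1 + X^T X / s2) b = P^-1 m + X^T y / s2."""
    p_inv = np.linalg.inv(prior_cov)
    lhs = p_inv + X.T @ X / noise_var
    rhs = p_inv @ prior_mean + X.T @ y / noise_var
    return np.linalg.solve(lhs, rhs)

# Prior from hypothetical historical calibrations: intercept ~0, slope ~1
prior_mean = np.array([0.0, 1.0])
prior_cov = np.diag([0.01, 0.01])

# Only two fresh standards measured in today's shortened calibration
X = np.array([[1.0, 1.0],
              [1.0, 2.0]])       # columns: intercept, concentration
y = np.array([1.05, 2.10])
b = map_calibration(X, y, prior_mean, prior_cov, noise_var=0.01)
```

The posterior slope lands between the prior (1.0) and the value suggested by the fresh standards (1.05), which is exactly the "few standards suffice" behavior the transfer approach exploits.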
NASA Astrophysics Data System (ADS)
Wilcox, Jamianne C.; Lopez, Benjamin J.; Campas, Otger; Valentine, Megan T.
2015-08-01
Optical traps allow for the precise application and measurement of pico-Newton forces in a wide variety of situations, and are particularly well suited for biophysical measurements of motor proteins and cells. Nearly all experiments exploit the linear regime of the optical trap, where force and displacement are related by a simple spring constant that does not depend on the trapped object's position. This typically limits the useful force range to < 100 pN for high-NA objective lenses and reasonable laser powers. Several biological studies require larger forces, which are not accessible in the linear regime of the trap. The best means to extend the maximum force is to make use of the entire nonlinear range; however, current techniques for calibrating the full nonlinear regime are limited. Here we report a new method for calibrating the nonlinear trap region that uses the fluctuations in the position of a trapped object when it is displaced from the center of a single gradient optical trap by controlled flow. From the position fluctuations, we measure the local trap stiffness, in both the linear and non-linear regimes. This approach requires only knowledge of the system temperature, and is especially useful for measurements involving trapped objects of unknown size, or objects in a fluid of unknown viscosity.
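The fluctuation-based calibration rests on the equipartition relation k = k_B·T / var(x), which indeed requires only the temperature. The synthetic position trace below stands in for measured bead positions; the controlled-flow displacement and nonlinear-regime mapping of the paper are not modeled here.

```python
import random
import statistics

KB = 1.380649e-23  # Boltzmann constant, J/K

def local_stiffness(positions_m, temperature_k=295.0):
    """Equipartition estimate of the local trap stiffness (N/m) from
    position fluctuations about the (possibly displaced) equilibrium."""
    return KB * temperature_k / statistics.variance(positions_m)

# Synthetic bead trace: 10 nm r.m.s. fluctuations about the equilibrium
rng = random.Random(0)
trace = [rng.gauss(0.0, 10e-9) for _ in range(10000)]
k = local_stiffness(trace)   # on the order of 4e-5 N/m (~0.04 pN/nm)
```

Repeating this estimate at a series of flow-controlled displacements maps out stiffness versus position, covering both the linear and nonlinear regions of the trap.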
Issues in energy calibration, nonlinearity, and signal processing for gamma-ray microcalorimeter
Rabin, Mike W; Hoover, Andrew S; Bacrania, Mnesh K; Hoteling, Nathan; Croce, M; Karpius, P J; Ullom, J N; Bennett, D A; Horansky, R D; Vale, L R; Doriese, W B
2009-01-01
Issues regarding the energy calibration of high dynamic range microcalorimeter detector arrays are presented with respect to new results from a minor actinide-mixed oxide radioactive source. The need to move to larger arrays of such detectors necessitates the implementation of automated analysis procedures, which turn out to be nontrivial due to complex calibration shapes and pixel-to-pixel variability. Some possible avenues for improvement, including a more physics-based calibration procedure, are suggested.
Abbas, Z.; Naveed, M.; Sajid, M.
2015-10-15
In this paper, effects of Hall currents and nonlinear radiative heat transfer in a viscous fluid passing through a semi-porous curved channel coiled in a circle of radius R are analyzed. A curvilinear coordinate system is used to develop the mathematical model of the considered problem in the form partial differential equations. Similarity solutions of the governing boundary value problems are obtained numerically using shooting method. The results are also validated with the well-known finite difference technique known as the Keller-Box method. The analysis of the involved pertinent parameters on the velocity and temperature distributions is presented through graphs and tables.
NASA Technical Reports Server (NTRS)
Daudpota, Q. Isa; Hall, Philip; Zang, Thomas A.
1987-01-01
The flow in a two-dimensional curved channel driven by an azimuthal pressure gradient can become linearly unstable due to axisymmetric perturbations and/or nonaxisymmetric perturbations, depending on the curvature of the channel and the Reynolds number. For a particular small value of curvature, in the critical neighborhood of this curvature value and of the critical Reynolds number, nonlinear interactions occur between these perturbations. The Stuart-Watson approach is used to derive two coupled Landau equations for the amplitudes of these perturbations. The stability of the various possible states of these perturbations is shown through bifurcation diagrams. Emphasis is given to those cases which have relevance to external flows.
NASA Astrophysics Data System (ADS)
Masterlark, T.; Stone, J.; Feigl, K.
2010-12-01
The internal structure, loading processes, and effective boundary conditions of a volcano control the deformation that we observe at the Earth’s surface. Forward models of these internal structures and processes allow us to predict the surface deformation. In practice, we are faced with the inverse situation of using surface observations (e.g., InSAR and GPS) to characterize the inaccessible internal structures and processes. Distortions of these characteristics are tied to our ability to: 1) identify and resolve the internal structure; 2) simulate the internal processes over a problem domain having this internal structure; and 3) calibrate parameters that describe these internal processes to the observed deformation. Relatively simple analytical solutions for deformation sources (such as a pressurized magma chamber) embedded in a homogeneous, elastic half-space are commonly used to simulate observed volcano deformation, because they are computationally inexpensive, and thus easily integrated into inverse analyses that seek to characterize the source position and magnitude. However, the half-space models generally do not adequately represent internal distributions of material properties and complex geometric configurations, such as topography, of volcano deformational systems. These incompatibilities are known to severely bias both source parameter estimations and forward model calculations of deformation and stress. Alternatively, a Finite Element Model (FEM) can simulate the elastic response to a pressurized magma chamber over a domain having arbitrary geometry and distribution of material properties. However, the ability to impose perturbations of the source position parameters and automatically reconstruct an acceptable mesh has been an obstacle to implementing FEM-based nonlinear inverse methods to estimate the position of a deformation source. Using InSAR-observed deflation of Okmok volcano, Alaska, during its 1997 eruption as an example, we present the
Spears, Robert Edward; Coleman, Justin Leigh
2015-08-01
Seismic analysis of nuclear structures is routinely performed using guidance provided in “Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4, 1998).” This document, which is currently under revision, provides detailed guidance on linear seismic soil-structure-interaction (SSI) analysis of nuclear structures. To accommodate the linear analysis, soil material properties are typically developed as shear modulus and damping ratio versus cyclic shear strain amplitude. A new Appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time domain SSI analysis. To accommodate the nonlinear analysis, a more appropriate form of the soil material properties includes shear stress and energy absorbed per cycle versus shear strain. Ideally, nonlinear soil model material properties would be established with soil testing appropriate for the nonlinear constitutive model being used. However, much of the soil testing done for SSI analysis is performed for use with linear analysis techniques. Consequently, a method is described in this paper that uses soil test data intended for linear analysis to develop nonlinear soil material properties. To produce nonlinear material properties that are equivalent to the linear material properties, the linear and nonlinear model hysteresis loops are considered. For equivalent material properties, the shear stress at peak shear strain and energy absorbed per cycle should match when comparing the linear and nonlinear model hysteresis loops. Consequently, nonlinear material properties are selected based on these criteria.
NASA Technical Reports Server (NTRS)
Hall, P.; Smith, F. T.
1987-01-01
It is known that a viscous fluid flow with curved streamlines can support both Tollmien-Schlichting and Taylor-Goertler instabilities. In a situation where both modes are possible on the basis of linear theory, a nonlinear theory must be used to determine the effect of their interaction. The details of this interaction are of practical importance because of its possible catastrophic effects on mechanisms used for laminar flow control. This interaction is studied in the context of fully developed flows in curved channels. Apart from technical differences associated with boundary-layer growth, the structures of the instabilities in this flow are very similar to those in the practically more important external boundary-layer situation. The interaction is shown to have two distinct phases depending on the size of the disturbances. At very low amplitudes, two oblique Tollmien-Schlichting waves interact with a Goertler vortex in such a manner that the amplitudes become infinite at a finite time. This type of interaction is described by ordinary differential amplitude equations with quadratic nonlinearities.
Li, Cheng; Zhao, Tianlun; Li, Cong; Mei, Lei; Yu, En; Dong, Yating; Chen, Jinhong; Zhu, Shuijin
2017-04-15
Near infrared (NIR) spectroscopy combined with Monte Carlo uninformative variable elimination (MC-UVE) and nonlinear calibration methods was investigated for determining the gossypol content of cottonseeds. The reference method was high performance liquid chromatography coupled to an ultraviolet detector (HPLC-UV). MC-UVE was employed to extract the effective information from the full NIR spectra. Nonlinear calibration methods were applied to establish the models, which were compared with a linear method. The optimal model for gossypol content was obtained by MC-UVE-WLS-SVM, with a root mean square error of prediction (RMSEP) of 0.0422, coefficient of determination (R(2)) of 0.9331 and residual predictive deviation (RPD) of 3.8374, which was accurate and robust enough to substitute for traditional gossypol measurements. The nonlinear methods performed more reliably than the linear method during the development of calibration models. Furthermore, MC-UVE provided better and simpler calibration models than the full spectra.
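The figures of merit quoted above (RMSEP, R(2) and RPD) have standard definitions; as a minimal illustration (function and variable names are ours, not from the paper), they can be computed as:

```python
import numpy as np

def prediction_metrics(y_ref, y_pred):
    """RMSEP, coefficient of determination (R^2) and residual
    predictive deviation (RPD) for a validation set."""
    y_ref = np.asarray(y_ref, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_ref - y_pred
    rmsep = np.sqrt(np.mean(resid ** 2))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y_ref - y_ref.mean()) ** 2)
    rpd = np.std(y_ref, ddof=1) / rmsep  # SD of reference values / RMSEP
    return rmsep, r2, rpd
```

An RPD above about 3, as reported here, is conventionally taken to indicate a calibration suitable for quantitative work.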
Nonlinear and snap-through responses of curved panels to intense acoustic excitation
NASA Technical Reports Server (NTRS)
Ng, C. F.
1989-01-01
Assuming a single-mode transverse displacement, a simple formula is derived for the transverse load-displacement relationship of a plate under in-plane compression. The formula is used to derive a simple analytical expression for the nonlinear dynamic response of postbuckled plates under sinusoidal or random excitation. The highly nonlinear motion of snap-through can be easily interpreted using the single-mode formula. Experimental results are obtained with buckled and cylindrical aluminum panels using discrete frequency and broadband excitation of mechanical and acoustic forces. Some important effects of the snap-through motion on the dynamic response of the postbuckled plates are described. Static tests were used to identify the deformation shape during snap-through.
On the nonlinear stability of the unsteady, viscous flow of an incompressible fluid in a curved pipe
NASA Technical Reports Server (NTRS)
Shortis, Trudi A.; Hall, Philip
1995-01-01
The stability of the flow of an incompressible, viscous fluid through a pipe of circular cross-section curved about a central axis is investigated in a weakly nonlinear regime. A sinusoidal pressure gradient with zero mean is imposed, acting along the pipe. A WKBJ perturbation solution is constructed, taking into account the need for an inner solution in the vicinity of the outer bend, which is obtained by identifying the saddle point of the Taylor number in the complex plane of the cross-sectional angle co-ordinate. The equation governing the nonlinear evolution of the leading order vortex amplitude is thus determined. The stability analysis of this flow to periodic disturbances leads to a partial differential system dependent on three variables, and since the differential operators in this system are periodic in time, Floquet theory may be applied to reduce this system to a coupled infinite system of ordinary differential equations, together with homogeneous uncoupled boundary conditions. The eigenvalues of this system are calculated numerically to predict a critical Taylor number consistent with the analysis of Papageorgiou. A discussion of how nonlinear effects alter the linear stability analysis is also given, and the nature of the instability determined.
NASA Astrophysics Data System (ADS)
Rathgeber, Christoph; Schmit, Henri; Hennemann, Peter; Hiebler, Stefan
2014-03-01
Thermal energy storage using phase change materials (PCMs) provides high storage capacities in small temperature ranges. For the design of efficient latent heat storage, the enthalpy curve of a PCM has to be measured with high precision. Measurements are most commonly performed with differential scanning calorimetry (DSC). The T-History method, however, proved to be favourable for the characterization of typical PCMs due to large samples and a measuring procedure close to conditions found in applications. As T-History calorimeters are usually individual constructions, performing a careful calibration procedure is decisive to ensure optimal measuring accuracy. We report in this paper on the calibration of a T-History calorimeter with a working range from 40 to 200 °C that was designed and built at our institute. A three-part procedure, consisting of an indium calibration, a measurement of the specific heat of copper and measurements of three solid-liquid PCMs (stearic acid, dimethyl terephthalate and d-mannitol), was performed and an advanced procedure for the correction of enthalpy curves was developed. When comparing T-History enthalpy curves to literature data and DSC step measurements, good agreement within the uncertainty limits demanded by RAL testing specifications was obtained. Thus, our design of a T-History calorimeter together with the developed calibration procedure provides the measuring accuracy that is required to identify the most suitable PCM for a given application. In addition, the dependence of the enthalpy curve on the sample size can be analysed by comparing results obtained with T-History and DSC and the behaviour of the bulk material in real applications can be predicted.
Uncertainty due to non-linearity in radiation thermometers calibrated by multiple fixed points
Yamaguchi, Y.; Yamada, Y.
2013-09-11
A new method to estimate the uncertainty due to non-linearity is described on the basis of the n = 3 scheme. The expression for the uncertainty is mathematically derived by applying the random walk method. The expression is simple and requires only the temperatures of the fixed points and a relative uncertainty value for each flux-doubling derived from the non-linearity measurement. We also present an example of the method, in which the uncertainty of temperature measurement by a radiation thermometer is calculated on the basis of a non-linearity measurement.
Piecewise approximation of curves using nonlinear diffusion in scale-space
NASA Astrophysics Data System (ADS)
Pinheiro, Antonio M. G.; Ghanbari, Mohammad
2000-10-01
The emerging Multimedia Content Description Interface standard, MPEG-7, addresses the indexing and retrieval of visual information. In this context the development of shape description and shape querying tools becomes a fundamental and challenging task. We introduce a method based on nonlinear diffusion of contours. The aim is to compute reference points on contours to provide a shape description tool. These reference points are situated at the sharpest changes in contour direction, so they provide ideal choices for the vertices of a polygonal approximation. If a maximum error between the original contour and the polygonal approximation is required, a scale-space procedure can help to find new vertices in order to meet this requirement. Basically, this method follows the nonlinear diffusion technique of Perona and Malik. Unlike the usual linear diffusion techniques for contours, where the diffusion acts on the contour point coordinates, this method applies the diffusion in the tangent space. In this case the contour is described by its angle variation, and the nonlinear diffusion procedure is applied to it. The Perona and Malik model determines how strongly the diffusion acts on the original function and depends on a factor K, which is estimated automatically. In areas with a spatial concentration of strong angle changes, this factor is also adjusted to reduce the effect of noise. The proposed method has been extensively tested using the database of fish-shape contours on the SQUID web site. A shape-based retrieval application was also tested using a similarity measure between two polygonal approximations.
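As a rough sketch of the idea (our own illustration, not the authors' code): Perona-Malik diffusion applied to a periodic tangent-angle signal smooths gentle direction changes while preserving sharp corners, whose locations can then serve as polygon vertices.

```python
import numpy as np

def perona_malik_angles(theta, k, n_iter=100, dt=0.2):
    """Nonlinear (Perona-Malik) diffusion of a periodic 1D signal,
    e.g. the tangent-angle function of a closed contour.
    Diffusivity g(s) = 1/(1 + (s/k)^2) is small where the angle
    changes sharply, so corners survive while noise is smoothed."""
    theta = np.asarray(theta, dtype=float).copy()
    for _ in range(n_iter):
        grad = np.roll(theta, -1) - theta        # forward difference, periodic
        flux = grad / (1.0 + (grad / k) ** 2)    # g(grad) * grad
        theta += dt * (flux - np.roll(flux, 1))  # discrete divergence of the flux
    return theta
```

The explicit time step dt = 0.2 keeps this simple scheme stable since the diffusivity never exceeds 1.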
Comparison of nonlinear and spline regression models for describing mule duck growth curves.
Vitezica, Z G; Marie-Etancelin, C; Bernadet, M D; Fernandez, X; Robert-Granie, C
2010-08-01
This study compared models for growth (BW) before the overfeeding period for male mule duck data from 7 families of a QTL experimental design. Four nonlinear models (Gompertz, logistic, Richards, and Weibull) and a spline linear regression model were used, and fixed and mixed effects versions of each were compared. The Akaike information criterion (AIC) was used to evaluate these alternative models. Among the nonlinear models, the mixed effects Weibull model had the best overall fit. Two parameters, the asymptotic weight and the inflexion point age, were treated as random variables associated with individuals in the mixed models. In our study, the asymptotic weight contributed more to the reduction in AIC than the inflexion point age; in this data set, the between-duck variability was mostly explained by asymptotic BW. Comparing fixed with mixed effects models, the residual SD was reduced by about 55% in the latter, pointing out the improvement in the accuracy of the estimated parameters. The mixed effects spline regression model was the second best model. Given the piecewise nature of growth, this model is able to capture different growth patterns, even with data collected beyond the asymptotic BW.
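The model-comparison workflow described above can be sketched for one of the candidates (a sketch under our own assumptions; the data, starting values, and function names are synthetic, not from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    """Gompertz growth curve: a is the asymptotic weight,
    b and c control the inflexion point and growth rate."""
    return a * np.exp(-b * np.exp(-c * t))

def fit_gompertz_aic(t, bw):
    """Least-squares fit plus a Gaussian-likelihood AIC
    (k counts the curve parameters plus the residual variance)."""
    popt, _ = curve_fit(gompertz, t, bw, p0=[bw.max(), 2.0, 0.1], maxfev=10000)
    resid = bw - gompertz(t, *popt)
    n, k = len(bw), len(popt) + 1
    aic = n * np.log(np.sum(resid ** 2) / n) + 2 * k
    return popt, aic
```

Competing forms (logistic, Richards, Weibull, spline) would be fitted the same way and ranked by their AIC values.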
NASA Technical Reports Server (NTRS)
Noor, A. K.; Peters, J. M.
1981-01-01
Simple mixed models are developed for use in the geometrically nonlinear analysis of deep arches. A total Lagrangian description of the arch deformation is used, the analytical formulation being based on a form of the nonlinear deep arch theory with the effects of transverse shear deformation included. The fundamental unknowns comprise the six internal forces and generalized displacements of the arch, and the element characteristic arrays are obtained by using Hellinger-Reissner mixed variational principle. The polynomial interpolation functions employed in approximating the forces are one degree lower than those used in approximating the displacements, and the forces are discontinuous at the interelement boundaries. Attention is given to the equivalence between the mixed models developed herein and displacement models based on reduced integration of both the transverse shear and extensional energy terms. The advantages of mixed models over equivalent displacement models are summarized. Numerical results are presented to demonstrate the high accuracy and effectiveness of the mixed models developed and to permit a comparison of their performance with that of other mixed models reported in the literature.
NASA Astrophysics Data System (ADS)
Moyer, D.; De Luccia, F.; Haas, E.
2016-10-01
The Joint Polar Satellite System 1 (JPSS-1) is the follow-on mission to the Suomi National Polar-orbiting Partnership (S-NPP) and provides critical weather and global climate products to the user community. A primary sensor on both JPSS-1 and S-NPP is the Visible-Infrared Imaging Radiometer Suite (VIIRS), with the Reflective Solar Band (RSB), Thermal Emissive Band (TEB) and Day Night Band (DNB) imagery providing a diverse spectral range of Earth observations. These VIIRS observations are radiometrically calibrated within the Sensor Data Records (SDRs) for use in Environmental Data Record (EDR) products such as Ocean Color/Chlorophyll (OCC) and Sea Surface Temperature (SST). Spectrally, the VIIRS bands can be broken down into 4 groups: the Visible Near Infra-Red (VNIR), Short-Wave Infra-Red (SWIR), Mid-Wave Infra-Red (MWIR) and Long-Wave Infra-Red (LWIR). The SWIR spectral bands on JPSS-1 VIIRS have a nonlinear response at low light levels, affecting the calibration quality where Earth scenes are dark (such as oceans). This anomalous behavior was not present on S-NPP VIIRS and will be a unique feature of the JPSS-1 VIIRS sensor. This paper will show the behavior of the SWIR response non-linearity on JPSS-1 VIIRS and potential mitigation approaches to limit its impact on the SDR and EDR products.
Cernuda, Carlos; Lughofer, Edwin; Klein, Helmut; Forster, Clemens; Pawliczek, Marcin; Brandstetter, Markus
2017-01-01
During the production process of beer, it is of utmost importance to guarantee a high consistency of the beer quality. For instance, the bitterness is an essential quality parameter which has to be controlled within the specifications at the beginning of the production process in the unfermented beer (wort) as well as in final products such as beer and beer mix beverages. Nowadays, analytical techniques for quality control in beer production are mainly based on manual supervision, i.e., samples are taken from the process and analyzed in the laboratory. This typically requires significant effort from lab technicians for only a small fraction of samples to be analyzed, which leads to significant costs for beer breweries and companies. Fourier transform mid-infrared (FT-MIR) spectroscopy was used in combination with nonlinear multivariate calibration techniques to overcome (i) the time-consuming off-line analyses in beer production and (ii) already known limitations of standard linear chemometric methods, like partial least squares (PLS) (Speers et al., J I Brewing. 2003;109(3):229-235; Zhang et al., J I Brewing. 2012;118(4):361-367), for important quality parameters such as bitterness, citric acid, total acids, free amino nitrogen, final attenuation, or foam stability. The calibration models are established with enhanced nonlinear techniques based (i) on a new piece-wise linear version of PLS that employs fuzzy rules for locally partitioning the latent variable space and (ii) on extensions of support vector regression variants (ε-PLSSVR and ν-PLSSVR) for overcoming high computation times in high-dimensional problems and time-intensive, inappropriate settings of the kernel parameters. Furthermore, we introduce a new model selection scheme based on bagged ensembles in order to improve robustness and thus the predictive quality of the final models. The approaches are tested on real-world calibration data sets for wort and beer mix beverages, and successfully compared to
Takegami, Kazuki; Hayashi, Hiroaki; Okino, Hiroki; Kimoto, Natsumi; Maehata, Itsumi; Kanazawa, Yuki; Okazaki, Tohru; Kobayashi, Ikuo
2015-07-01
For X-ray diagnosis, the proper management of the entrance skin dose (ESD) is important. Recently, a small-type optically stimulated luminescence dosimeter (nanoDot OSL dosimeter) was made commercially available by Landauer, and it is hoped that it will be used for ESD measurements in clinical settings. Our objectives in the present study were to propose a method for calibrating the ESD measured with the nanoDot OSL dosimeter and to evaluate its accuracy. The reference ESD is assumed to be based on an air kerma with consideration of a well-known back scatter factor. We examined the characteristics of the nanoDot OSL dosimeter using two experimental conditions: a free air irradiation to derive the air kerma, and a phantom experiment to determine the ESD. For evaluation of the ability to measure the ESD, a calibration curve for the nanoDot OSL dosimeter was determined in which the air kerma and/or the ESD measured with an ionization chamber were used as references. As a result, we found that the calibration curve for the air kerma was determined with an accuracy of 5 %. Furthermore, the calibration curve was applied to the ESD estimation. The accuracy of the ESD obtained was estimated to be 15 %. The origin of these uncertainties was examined based on published papers and Monte-Carlo simulation. Most of the uncertainties were caused by the systematic uncertainty of the reading system and the differences in efficiency corresponding to different X-ray energies.
NASA Astrophysics Data System (ADS)
Nichols, J. M.; Trickey, S. T.; Seaver, M.; Motley, S. R.
2008-10-01
We offer a comparison of several different detectors of damage-induced nonlinearities in assessing the connectivity of a composite-to-metal bolted joint. Each detector compares the structure's measured vibrational response to surrogate data conforming to a general model for the healthy structure. The strength of this approach to detection is that it works in the presence of certain types of varying ambient conditions and is valid for structures excited with any stationary process. Here we employ several such detectors using dynamic strain response data collected near the joint as the structure was driven using simulated wave forcing (taken from the Pierson-Moskowitz frequency distribution for wave height). In an effort to simulate in situ monitoring conditions, the experiments were carried out in the presence of strongly varying temperatures. The performance of the detectors was assessed using receiver operating characteristic (ROC) curves, a well-known method for displaying detection characteristics. The ROC curve is well suited to vibration-based structural health monitoring applications, where quantifying false positive and false negative errors is essential. The results of this work indicate that using the estimated auto-bicoherence of the system's response produced the best overall detection performance when compared to features based on a nonlinear prediction scheme and another based on information theory. For roughly 10% false alarms, the bicoherence detector gives nearly 90% probability of detection (POD). Conversely, for several of the other detectors 5-10% false alarms leads to ~70% POD. While the bicoherence (and bispectrum) have been used previously in the context of damage detection, this work represents the first attempt at using them in a surrogate-based detection scheme.
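The ROC summary used above is straightforward to reproduce; a minimal sketch (the feature values are invented for illustration) that sweeps a decision threshold over detector outputs from healthy and damaged conditions:

```python
import numpy as np

def roc_curve(healthy, damaged):
    """False-alarm rate vs probability of detection (POD) as the
    decision threshold sweeps over all observed feature values."""
    thresholds = np.sort(np.concatenate([healthy, damaged]))[::-1]
    pfa = np.array([(healthy >= th).mean() for th in thresholds])
    pod = np.array([(damaged >= th).mean() for th in thresholds])
    return pfa, pod
```

Plotting POD against false-alarm rate gives the ROC curve used to compare the bicoherence, prediction-error, and information-theoretic detectors.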
Ghavi Hossein-Zadeh, N
2016-02-01
In order to describe the lactation curves of milk yield (MY) and composition in buffaloes, seven non-linear mathematical equations (Wood, Dhanoa, Sikka, Nelder, Brody, Dijkstra and Rook) were used. Data were 116,117 test-day records for MY, fat (FP) and protein (PP) percentages of milk from the first three lactations of buffaloes, collected from 893 herds in the period from 1992 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly production records of dairy buffaloes using the NLIN and MODEL procedures in SAS and the parameters were estimated. The models were tested for goodness of fit using the adjusted coefficient of determination (Radj(2)), root mean square error (RMSE), the Durbin-Watson statistic and Akaike's information criterion (AIC). The Dijkstra model provided the best fit of MY and PP of milk for the first three parities of buffaloes owing to its lower values of RMSE and AIC compared with the other models. For the first-parity buffaloes, the Sikka and Brody models provided the best fit of FP, but for the second- and third-parity buffaloes, the Sikka model and the Brody equation, respectively, provided the best fit of the lactation curve for FP. The results of this study showed that the Wood and Dhanoa equations were able to estimate the time to peak MY more accurately than the other equations. In addition, the Nelder and Dijkstra equations were able to estimate the peak time at second and third parities, respectively, more accurately than the other equations. The Brody function provided more accurate predictions of peak MY over the first three parities of buffaloes. There was generally a positive relationship between 305-day MY and persistency measures, and also between peak yield and 305-day MY, calculated by different models within each lactation in the current study. Overall, evaluation of the different equations used in the current study indicated the potential of the non-linear models for fitting monthly productive records of buffaloes.
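Of the equations named above, the Wood curve is the classic example; a minimal fitting sketch (synthetic monthly data with parameter values of our own choosing) that also derives peak time and peak yield analytically:

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood lactation curve: yield = a * t**b * exp(-c*t)."""
    return a * t ** b * np.exp(-c * t)

def fit_wood(t, y):
    """Fit the Wood curve and report the peak of the fitted lactation."""
    popt, _ = curve_fit(wood, t, y, p0=[y.max(), 0.2, 0.05], maxfev=10000)
    a, b, c = popt
    peak_time = b / c                           # time of peak yield
    peak_yield = a * (b / c) ** b * np.exp(-b)  # yield at the peak
    return popt, peak_time, peak_yield
```

The closed-form peak location b/c is what allows the models to be compared on how well they estimate time to peak MY.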
NASA Astrophysics Data System (ADS)
Rasouli, Zolaikha; Ghavami, Raouf
2016-08-01
Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives used as flavor enhancers. The current study is, for the first time, devoted to the application of partial least squares (PLS-1), partial robust M-regression (PRM) and feed-forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous determination of binary and ternary mixtures of VA, VAI and SIA using data extracted directly from UV spectra with overlapped peaks of the individual analytes. Under the optimum experimental conditions, a linear calibration was obtained for each compound in the concentration ranges of 0.61-20.99 [LOD = 0.12], 0.67-23.19 [LOD = 0.13] and 0.73-25.12 [LOD = 0.15] μg mL-1 for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combining full and fractional factorial designs with seven and three levels for each factor for the binary and ternary mixtures, respectively. The results of this study reveal that PLS-1 and PRM are similar in their ability to predict each binary mixture. The resolution of the ternary mixture was accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied to describe the spectra from the acid-base titration systems of each individual compound, i.e., to resolve the complex overlapping spectra and to interpret the extracted spectral and concentration profiles of the pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to determine the number of chemical species, and their corresponding dissociation constants were subsequently derived. Finally, FFNNs were used to determine the active compounds in real and spiked water samples.
Saat, Ahmad; Hamzah, Zaini; Yusop, Mohammad Fariz; Zainal, Muhd Amiruddin
2010-07-07
Detection efficiency of a gamma-ray spectrometry system depends on, among other factors, the energy, the sample and detector geometry, and the volume and density of the samples. In the present study, efficiency calibration curves of a newly acquired (August 2008) HPGe gamma-ray spectrometry system were determined for four sample container geometries, namely Marinelli beaker, disc, cylindrical beaker and vial, normally used for the determination of gamma-ray activity in environmental samples. Calibration standards were prepared using a known amount of analytical grade uranium trioxide ore homogenized in plain flour in the respective containers. The ore produces gamma-rays of energy ranging from 53 keV to 1001 keV. Analytical grade potassium chloride was prepared to determine the detection efficiency for the 1460 keV gamma-ray emitted by the potassium isotope K-40. Plots of detection efficiency against gamma-ray energy for the four sample geometries were found to fit smoothly to a general form ε = A·E^a + B·E^b, where ε is the efficiency, E is the energy in keV, and A, B, a and b are constants that depend on the sample geometry. All calibration curves showed the presence of a "knee" at about 180 keV. Comparison between the four geometries showed that the efficiency of the Marinelli beaker is higher than those of the cylindrical beaker and vial, while the disc showed the lowest.
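Fitting that two-term power law is a small least-squares problem; a sketch (the energy/efficiency numbers below are synthetic placeholders, not the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def efficiency(E, A, a, B, b):
    """Two-term power-law efficiency model: eps = A*E**a + B*E**b."""
    return A * E ** a + B * E ** b

# hypothetical calibration points spanning 53-1460 keV
E = np.array([53.0, 122.0, 344.0, 662.0, 1001.0, 1460.0])
eps = efficiency(E, 2.0, -0.9, 0.5, -0.3)  # synthetic "measured" efficiencies

popt, _ = curve_fit(efficiency, E, eps, p0=[1.0, -1.0, 1.0, -0.5], maxfev=20000)
```

Once fitted, the curve can be evaluated at any gamma-ray energy of interest between the calibration points.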
Horizontal Lloyd mirror patterns from straight and curved nonlinear internal waves.
McMahon, K G; Reilly-Raska, L K; Siegmann, W L; Lynch, James F; Duda, T F
2012-02-01
Experimental observations and theoretical studies show that nonlinear internal waves occur widely in shallow water and cause acoustic propagation effects including ducting and mode coupling. Horizontal ducting results when acoustic modes travel between internal wave fronts that form waveguide boundaries. For small grazing angles between a mode trajectory and a front, an interference pattern may arise that is a horizontal Lloyd mirror pattern. An analytic description of this feature is provided, along with comparisons between results from the formulated model predicting a horizontal Lloyd mirror pattern and an adiabatic mode parabolic equation. Different waveguide models are considered, including boxcar and jump sound speed profiles where the change in sound speed is assumed to be 12 m/s. Modifications to the model are made to include multiple and moving fronts. The focus of this analysis is on different front locations relative to the source as well as on the number of fronts and their curvatures and speeds. Curvature influences mode incidence angles and thereby changes the interference patterns. For sources oriented so that the front appears concave, the areas with interference patterns shrink as curvature increases, while convexly oriented fronts cause the patterns to expand.
NASA Astrophysics Data System (ADS)
Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.
2016-05-01
The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discuss the fitting of potentiometric titration curve data using a nonlinear regression method based on the Levenberg-Marquardt algorithm. The results obtained by statistical treatment of the titration curve data showed that the best fit was obtained by considering the presence of five Brønsted-Lowry acids on the surface of the carbon dots, with ionization constants characteristic of carboxylic acid, cyclic ester, phenolic and pyrone-like groups. The total number of oxygenated acid groups obtained was 5 mmol g-1, with approximately 65% (∼2.9 mmol g-1) originating from groups with pKa < 6. The methodology showed good reproducibility and stability, with standard deviations below 5%. The nature of the groups was independent of small variations in the experimental conditions, i.e. the mass of carbon dots titrated and the initial concentration of the HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool to characterize the complex acid-base properties of these interesting and intriguing nanoparticles.
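A simplified version of such a fit (our own sketch: independent monoprotic sites described by Henderson-Hasselbalch terms, fitted with SciPy's Levenberg-Marquardt implementation; a real titration model would also account for solution equilibria and dilution):

```python
import numpy as np
from scipy.optimize import least_squares

def deprotonated(pH, pkas, concs):
    """Total deprotonated-group concentration (mmol/g) for independent
    monoprotic Bronsted-Lowry sites via Henderson-Hasselbalch fractions."""
    frac = 1.0 / (1.0 + 10.0 ** (np.asarray(pkas) - np.asarray(pH)[:, None]))
    return frac @ np.asarray(concs)

def fit_sites(pH, q_obs, n_sites):
    """Fit pKa values and site concentrations by Levenberg-Marquardt."""
    def resid(p):
        return deprotonated(pH, p[:n_sites], p[n_sites:]) - q_obs
    p0 = np.concatenate([np.linspace(4.0, 10.0, n_sites), np.ones(n_sites)])
    return least_squares(resid, p0, method="lm").x
```

In the paper's case n_sites = 5; comparing fits with different n_sites is one way to justify the number of acid groups statistically.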
NASA Astrophysics Data System (ADS)
Chang, Liyun; Ho, Sheng-Yow; Lee, Tsair-Fwu; Yeh, Shyh-An; Ding, Hueisch-Jy; Chen, Pang-Yu
2015-03-01
EBT2 film is a convenient dosimetry quality-assurance (QA) tool with high 2D dosimetry resolution and a self-development property for use in verifications of radiation therapy treatment planning and special projects; however, the user suffers from a relatively high degree of uncertainty (more than ±6% according to Hartmann et al. [29]) and the trouble of cutting one piece of film into small pieces and then reintegrating them each time. To avoid this tedious cutting work and to save calibration time and budget, a dose-range analysis is presented in this study for EBT2 film calibration using the Percentage-Depth-Dose (PDD) method. Different combinations of the three dose ranges, 9-26 cGy, 33-97 cGy and 109-320 cGy, with two types of curve-fitting algorithms, converting either film pixel values or net optical densities into doses, were tested and compared. With the lowest error and an acceptable inaccuracy of less than 3 cGy over the clinical dose range (9-320 cGy), a single film calibrated by the net optical density algorithm over the dose range 109-320 cGy is suggested for routine calibration.
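A sketch of the net-optical-density route (the functional form dose = a·netOD + b·netOD^n is a common choice in the film-dosimetry literature, and all numbers below are invented for illustration, not the study's calibration):

```python
import numpy as np
from scipy.optimize import curve_fit

def net_od(pv_exposed, pv_unexposed):
    """Net optical density from scanner pixel values."""
    return np.log10(pv_unexposed / pv_exposed)

def dose_model(od, a, b, n):
    """Common EBT-film calibration form: dose = a*netOD + b*netOD**n."""
    return a * od + b * od ** n

# hypothetical calibration: net ODs and their known delivered doses (cGy)
od = np.linspace(0.02, 0.5, 10)
dose = dose_model(od, 400.0, 1200.0, 2.5)  # synthetic "truth"
popt, _ = curve_fit(dose_model, od, dose, p0=[300.0, 1000.0, 2.0], maxfev=20000)
```

The fitted curve then converts a measured net OD anywhere on the film into dose.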
Afkhami, Abbas; Abbasi-Tarighat, Maryam; Bahram, Morteza; Abdollahi, Hamid
2008-04-21
This work presents a new and simple strategy for handling matrix effects using a combination of the H-point curve isolation method (HPCIM) and the H-point standard addition method (HPSAM). The method uses spectrophotometric multivariate calibration data constructed by successive standard additions of an analyte to an unknown matrix. With successive standard additions of the analyte, the concentrations of the remaining components (interferents) stay constant and therefore give a constant cumulative spectrum for the interferents in the unknown mixture. The proposed method first extracts this spectrum using HPCIM and then applies the obtained cumulative interferent spectrum to the determination of the analyte by HPSAM. In order to evaluate the applicability of the method, a simulated data set as well as several experimental data sets were tested. The method was then applied to the determination of paracetamol in pharmaceutical tablets and of copper in urine samples and in a copper alloy.
Rohácek, J; Semrád, V; Klierová, E; Zápotocná, M
1991-01-01
A method for the immunoturbidimetric analysis of the C3 component of the complement system was developed using the antiserum Q-SwAHu/C3 USOL (SEVAG) Praha. A diluted human control serum USOL (SEVAG) Praha with declared plasma protein values was used as the standard solution. The relation between concentration and absorbance in an eight-step calibration series is well described by a second-degree parabola. The within-series precision and the accuracy of the method are reported. The proposed technique correlates relatively well with radial immunodiffusion according to Mancini.
Taghavi Moghaddam, Pooria; Pipelzadeh, Mohammad Reza; Nesioonpour, Sholeh; Saki, Nader; Rezaee, Saeed
2014-12-01
The aim of this study was to select the best calibration model for determination of propofol plasma concentration by a high-performance liquid chromatography method. Determination of propofol in plasma after deproteinization with acetonitrile containing thymol (as internal standard) was carried out on a C18 column with a mixture of acetonitrile and 0.1% trifluoroacetic acid (60:40) as the mobile phase, delivered at a flow rate of 1.2 mL/min. Fluorescence detection was performed at excitation and emission wavelengths of 276 and 310 nm, respectively. After fitting different equations to the calibration data using weighted regression, the adequacy of the models was assessed by the lack-of-fit test, the significance of all model parameters, the adjusted coefficient of determination (adjusted R²), and by measuring predictive performance with the median relative prediction error and median absolute relative prediction error of the validation data set. The best model was a linear equation without intercept, with a median relative prediction error and median absolute relative prediction error of 4.0% and 9.4%, respectively, over the range 10-5000 ng/mL. The method showed good accuracy and precision. The presented statistical framework can be used to choose the best model for heteroscedastic calibration data for analytes such as propofol with a wide range of expected concentrations.
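The model-selection criteria described here can be sketched as follows. The calibration data, the 1/x² weighting, and the back-calculation step are illustrative assumptions for a through-origin linear model, not the study's actual data or fit.

```python
def fit_line_through_origin(x, y, w):
    # Weighted least squares for y = b*x (no intercept): b = sum(w*x*y)/sum(w*x^2)
    num = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    den = sum(wi * xi * xi for wi, xi in zip(w, x))
    return num / den

def median(v):
    s = sorted(v)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

# Hypothetical calibration data: concentration (ng/mL) vs detector response
conc = [10, 50, 100, 500, 1000, 5000]
resp = [0.021, 0.105, 0.198, 1.01, 2.05, 9.9]
w = [1.0 / (c * c) for c in conc]        # 1/x^2 weights for heteroscedastic data

b = fit_line_through_origin(conc, resp, w)
# Back-calculate concentrations and summarize predictive performance (%)
rpe = [100.0 * (yi / b - xi) / xi for xi, yi in zip(conc, resp)]
mdrpe = median(rpe)                      # median relative prediction error
mdarpe = median([abs(e) for e in rpe])   # median absolute relative prediction error
```

In practice the same error metrics would be computed on an independent validation set rather than on the calibration points themselves.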
Zhang, J George; Ho, Thuy; Callendrello, Alanna L; Clark, Robert J; Santone, Elizabeth A; Kinsman, Sarah; Xiao, Deqing; Fox, Lisa G; Einolf, Heidi J; Stresser, David M
2014-09-01
Cytochrome P450 (P450) induction is often considered a liability in drug development. Using calibration curve-based approaches, we assessed the induction parameters R3 (a term indicating the amount of P450 induction in the liver, expressed as a ratio between 0 and 1), relative induction score, Cmax/EC50, and area under the curve (AUC)/F2 (the concentration causing a 2-fold increase from baseline of the dose-response curve), derived from concentration-response curves of CYP3A4 mRNA and enzyme activity data in vitro, as predictors of CYP3A4 induction potential in vivo. Plated cryopreserved human hepatocytes from three donors were treated with 20 test compounds, including several clinical inducers and noninducers of CYP3A4. After the 2-day treatment, CYP3A4 mRNA levels and testosterone 6β-hydroxylase activity were determined by real-time reverse transcription polymerase chain reaction and liquid chromatography-tandem mass spectrometry analysis, respectively. Our results demonstrated a strong and predictive relationship between the extent of midazolam AUC change in humans and the various parameters calculated from both CYP3A4 mRNA and enzyme activity. The relationships exhibited with non-midazolam in vivo probes were, in aggregate, unsatisfactory. In general, the models yielded better fits when unbound rather than total plasma Cmax was used to calculate the induction parameters, as evidenced by higher R² and lower root mean square error (RMSE) and geometric mean fold error. With midazolam, the R3 cut-off value of 0.9, as suggested by US Food and Drug Administration guidance, effectively categorized strong inducers but was less effective in classifying midrange or weak inducers. This study supports the use of calibration curves generated from in vitro mRNA induction response curves to predict CYP3A4 induction potential in humans. With the caveat that most compounds evaluated here were not strong inhibitors of enzyme activity, testosterone 6β-hydroxylase activity was
NASA Astrophysics Data System (ADS)
Sallé, Béatrice; Cremers, David A.; Maurice, Sylvestre; Wiens, Roger C.
2005-04-01
Recently, there has been increasing interest in the laser-induced breakdown spectroscopy (LIBS) technique for stand-off detection of geological samples for use on landers and rovers on Mars, and for other space applications. For space missions, LIBS analysis capabilities must be investigated, and instrumental development is required to take into account constraints such as size, weight and power, and the effect of the ambient atmosphere (pressure and gas composition) on flight instrument performance. In this paper, we study the in-situ LIBS method at reduced pressure (7 Torr CO2 to simulate the Martian atmosphere) and near vacuum (50 mTorr in air, to begin to simulate pressures at the Moon or asteroids), as well as at atmospheric pressure in air (for Earth conditions and comparison). Here in-situ corresponds to distances on the order of 150 mm, in contrast to stand-off analysis at distances of many meters. We show the influence of the ambient pressure on calibration curves prepared from certified soil and clay pellets. To detect simultaneously all the elements commonly observed in terrestrial soils, we used an Echelle spectrograph. The results are discussed in terms of calibration curves, measurement precision, plasma light collection efficiency and matrix effects.
NASA Astrophysics Data System (ADS)
Geiges, A.; Nowak, W.; Rubin, Y.
2013-12-01
Stochastic models of subsurface systems generally suffer from parametric and conceptual uncertainty. To reduce model uncertainty, model parameters are calibrated using additionally collected data. These data often come from costly acquisition campaigns that need to be optimized to collect the data with the highest data utility (DU) or value of information. In model-based approaches, the DU is evaluated based on the uncertain model itself and is therefore uncertain as well. Additionally, for non-linear models, the DU depends on the yet unobserved measurement values and can only be estimated as an expected value over an assumed distribution of possible measurement values. Both factors introduce uncertainty into the optimization of field campaigns. We propose and investigate a sequential interaction scheme between campaign optimization, data collection and model calibration. The field campaign is split into individual segments. Each segment consists of optimization, segment-wise data collection, and successive model calibration or data assimilation. By doing so, (1) the expected data utility for the newly collected data is replaced by its actual value, (2) the calibration restricts both conceptual and parametric model uncertainty, and thus (3) the distribution of possible future data values for the subsequent campaign segments also changes. Hence, the model describing the real system improves successively with each collected data segment, and so does the estimate of the data still required to achieve the overall investigation goals. We will show that using the sequentially improved model for the optimal design (OD) of the remaining field campaign leads to superior and more targeted designs. However, this traditional sequential OD optimizes small data segments one by one. In such a strategy, possible mutual dependencies between the possible data values and the optimization of data collection in later segments are neglected. This allows a
NASA Astrophysics Data System (ADS)
Rong, Youmin; Zhang, Guojun; Huang, Yu
2016-10-01
Inherent strain analysis has been successfully applied to predict welding deformations of large-scale structural components, whereas the thermal-elastic-plastic finite element method is rarely used because of its long calculation times and large storage requirements. In this paper, a hybrid model that incorporates nonlinear yield stress curves and multi-constraint equations into thermal-elastic-plastic analysis is proposed to predict welding distortions and residual stresses of large-scale structures. For a T-joint of structural steel S355JR welded by metal active gas welding, published experimental results for the temperature and displacement fields are used to establish the credibility of the proposed integrated model. Comparison of numerical results for four different cases with the experimental results verifies that the prediction precision of welding deformations and residual stresses is appreciably improved by the power-law hardening model, and that computation time is shortened by about 30.14% using multi-constraint equations. Overall, the proposed hybrid method can be used to predict welding deformations and residual stresses of large-scale structures precisely and efficiently.
NASA Technical Reports Server (NTRS)
Hall, P.; Smith, F. T.
1988-01-01
The development of Tollmien-Schlichting waves (TSWs) and Taylor-Goertler vortices (TGVs) in fully developed viscous curved-channel flows is investigated analytically, with a focus on their nonlinear interactions. Two types of interactions are identified, depending on the amplitude of the initial disturbances. In the low-amplitude type, two TSWs and one TGV interact, and the scaled amplitudes go to infinity on a finite time scale; in the higher-amplitude type, which can also occur in a straight channel, the same singularity occurs if the angle between the TSW wavefront and the TGV is greater than 41.6 deg, but the breakdown is exponential and takes an infinite time if the angle is smaller. The implications of these findings for external flow problems such as the design of laminar-flow wings are indicated. It is concluded that longitudinal vortices like those observed in the initial stages of the transition to turbulence can be produced unless the present interaction mechanism is destroyed by boundary-layer growth.
NASA Astrophysics Data System (ADS)
Sze, K. H.; Barsukov, I. L.; Roberts, G. C. K.
A procedure for quantitative evaluation of cross-peak volumes in spectra of any order of dimensions is described; this is based on a generalized algorithm for combining appropriate one-dimensional integrals obtained by nonlinear-least-squares curve-fitting techniques. This procedure is embodied in a program, NDVOL, which has three modes of operation: a fully automatic mode, a manual mode for interactive selection of fitting parameters, and a fast reintegration mode. The procedures used in the NDVOL program to obtain accurate volumes for overlapping cross peaks are illustrated using various simulated overlapping cross-peak patterns. The precision and accuracy of the estimates of cross-peak volumes obtained by application of the program to these simulated cross peaks and to a back-calculated 2D NOESY spectrum of dihydrofolate reductase are presented. Examples are shown of the use of the program with real 2D and 3D data. It is shown that the program is able to provide excellent estimates of volume even for seriously overlapping cross peaks with minimal intervention by the user.
NASA Astrophysics Data System (ADS)
Keshavkumar Kamaliya, Parth; Patel, Yashavant Kumar Dashrathlal
2016-01-01
A double-arm configuration using a parallel manipulator mimics human arm motions in either planar or spatial space. Such configurations are currently attractive to researchers because they can replace human workers without major redesign of the workplace in industries. The limited joint ranges of human arms can be overcome by replacing either revolute or spherical joints in the manipulator, so the maximum workspace can be exploited. A planar configuration with five revolute joints (5R) is considered to imitate human arm motions in a plane using a Double Arm Manipulator (DAM). Position analysis for a tool held in the end links of the configuration is carried out using Pro/Mechanism in Creo® as well as SimMechanics. D-H parameters are formulated, and the results derived from MATLAB programs developed for them are compared with the mechanism simulation and SimMechanics results. An inverse kinematics model is developed for trajectory planning so that the tool traces its trajectory in a continuous and smooth sequence. Polynomial functions are derived for position, velocity and acceleration for linear and nonlinear curves in joint space. The analytical results for trajectory planning are validated against Creo® simulation results.
Al-Hadyan, Khaled; Elewisy, Sara; Moftah, Belal; Shoukri, Mohamed; Alzahrany, Awad; Alsbeih, Ghazi
2014-12-01
In cases of public or occupational radiation overexposure and eventual radiological accidents, it is important to provide dose assessment, medical triage, diagnosis and treatment to victims. Cytogenetic biodosimetry based on scoring of the dicentric chromosomal aberration assay (DCA) is the "gold standard" biotechnology technique for estimating medically relevant radiation doses. Under the auspices of the National Science, Technology and Innovation Plan in Saudi Arabia, we have set up a biodosimetry laboratory and produced a national standard dose-response calibration curve for DCA, a prerequisite for estimating received doses. For this, the basic cytogenetic DCA technique needed to be established. Peripheral blood lymphocytes were collected from four healthy volunteers and irradiated with radiation doses between 0 and 5 Gy of 320 keV X-rays. The lymphocytes were then PHA-stimulated, division-arrested with Colcemid, and stained cytogenetic slides were prepared. The Metafer4 system (MetaSystem) was used for automatic and manually assisted metaphase finding and scoring of dicentric chromosomes. Results were fit to the linear-quadratic dose-effect model according to the IAEA EPR-Biodosimetry-2011 report. The resulting manually assisted dose-response calibration curve (Y = 0.0017 + 0.026 × D + 0.081 × D²) was in the range of those described in other populations. Although the automated scoring overestimates and underestimates DCA at low (<1 Gy) and high (>2 Gy) doses, respectively, it showed potential for use in triage mode to segregate victims at risk of developing acute radiotoxicity syndromes. In conclusion, we have successfully established the first biodosimetry laboratory in the region and have produced a preliminary national dose-response calibration curve. The laboratory can now contribute to the national preparedness plan in response to eventual radiation emergencies, in addition to providing information for decision makers and public health
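A linear-quadratic curve of this form can be inverted directly to estimate an absorbed dose from an observed dicentric yield. The sketch below uses the coefficients reported in the abstract but omits the uncertainty treatment prescribed by the IAEA report.

```python
import math

# Coefficients reported in the abstract: Y = c + a*D + b*D^2 (D in Gy)
c, a, b = 0.0017, 0.026, 0.081

def dicentric_yield(dose):
    return c + a * dose + b * dose ** 2

def dose_from_yield(y):
    # Positive root of b*D^2 + a*D + (c - y) = 0
    disc = a * a - 4.0 * b * (c - y)
    return (-a + math.sqrt(disc)) / (2.0 * b)

y_obs = dicentric_yield(2.0)      # simulate the yield from a 2 Gy exposure
d_est = dose_from_yield(y_obs)    # inversion recovers the dose
```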
Sahmani, S; Fattahi, A M
2017-08-01
New ceramic materials containing nanoscaled crystalline phases are a major object of scientific interest due to attractive advantages such as biocompatibility. Zirconia, a transparent glass ceramic, is one of the most useful binary oxides across a wide range of applications. In the present study, a new size-dependent plate model is constructed to predict the nonlinear axial instability characteristics of zirconia nanosheets under axial compressive load. To this end, the nonlocal continuum elasticity of Eringen is incorporated into a refined exponential shear deformation plate theory. A perturbation-based solution process is used to derive explicit expressions for the nonlocal equilibrium paths of axially loaded nanosheets. Molecular dynamics (MD) simulations are then performed for the axial instability response of square zirconia nanosheets with different side lengths, and their results are matched with those of the developed nonlocal plate model to capture the proper value of the nonlocal parameter. It is demonstrated that the calibrated nonlocal plate model, with a nonlocal parameter equal to 0.37 nm, has a very good capability to predict the axial instability characteristics of zirconia nanosheets, with accuracy comparable to that of MD simulation. Copyright © 2017 Elsevier Inc. All rights reserved.
Ho, Kwok M
2017-08-31
Area under a receiver-operating-characteristic (AUROC) curve is widely used in medicine to summarize the ability of a continuous predictive marker to predict a binary outcome. This study illustrated how a U-shaped or inverted U-shaped continuous predictor would affect the shape and magnitude of its AUROC curve in predicting a binary outcome by comparing the ROC curves of the worst first 24-hour arterial pH values of 9549 consecutive critically ill patients in predicting hospital mortality before and after centering the predictor by its mean or median. A simulation dataset with an inverted U-shaped predictor was used to assess how this would affect the shape and magnitude of the AUROC curve. An asymmetrical U-shaped relationship between pH and hospital mortality, resulting in an inverse-sigmoidal ROC curve, was observed. The AUROC substantially increased after centering the predictor by its mean (0.611 vs 0.722, difference = 0.111, 95% confidence interval [CI] 0.087-0.135), and was further improved after centering by its median (0.611 vs 0.745, difference = 0.133, 95%CI 0.110-0.157). A sigmoidal-shaped ROC curve was observed for an inverted U-shaped predictor. In summary, a non-linear predictor can result in a biphasic-shaped ROC curve; and centering the predictor can reduce its bias towards null predictive ability.
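The centering step described above can be illustrated on synthetic data. The toy pH values and outcomes below are invented for illustration (the abstract's pH data are real patient data), and the AUROC is computed with the Mann-Whitney formulation.

```python
def auroc(scores, labels):
    # Mann-Whitney formulation: P(score of a death > score of a survivor),
    # counting ties as half
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented U-shaped example: deaths (label 1) occur at both pH extremes
ph = [6.9, 7.0, 7.1, 7.2, 7.3, 7.4, 7.4, 7.5, 7.6, 7.7]
died = [1, 1, 0, 0, 0, 0, 0, 0, 1, 1]

raw = auroc(ph, died)                    # extremes cancel: AUROC is only 0.5
s = sorted(ph)
m = 0.5 * (s[4] + s[5])                  # sample median (n = 10)
centered = auroc([abs(p - m) for p in ph], died)
# Centering turns the U-shape into a monotone predictor, raising the AUROC.
```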
NASA Astrophysics Data System (ADS)
Goyal, Arti; Mhaskey, Mukul; Gopal-Krishna; Wiita, Paul J.; Stalin, C. S.; Sagar, Ram
2013-09-01
It is important to quantify the underestimation of rms photometric errors returned by the commonly used APPHOT algorithm in the IRAF software, in the context of differential photometry of point-like AGN, because of the crucial role it plays in evaluating their variability properties. Published values of the underestimation factor, η, obtained with several different telescopes, lie in the range 1.3-1.75. The present study revisits this question using an exceptionally large data set of 262 differential light curves (DLCs) derived from 262 pairs of non-varying stars monitored under our ARIES AGN monitoring program for characterizing the intra-night optical variability (INOV) of prominent AGN classes. The bulk of these data were taken with the 1-m Sampurnanand Telescope (ST). We find η = 1.54±0.05, which is close to our recently reported value of η = 1.5. Moreover, this consistency holds at least up to a brightness mismatch of 1.5 mag between the paired stars. From this we infer that a magnitude difference of at least up to 1.5 mag between a point-like AGN and the comparison star(s) monitored simultaneously on the same CCD chip is acceptable, as it should not lead to spurious claims of INOV.
Ramaley, Louis; Herrera, Lisandra Cubero; Melanson, Jeremy E
2013-06-15
Regioisomeric analysis of triacylglycerols is important in understanding lipid biochemistry and the involvement of lipids in disease and nutrition. The use of calibration plots employing fractional abundances provides a simple and rapid method for such analyses. These plots are believed to be linear, but evidence exists for non-linearity. The behavior of such plots needs to be understood to allow for proper interpretation of regioisomeric data. Solutions of five regioisomer pairs were prepared from pure standards and used to construct calibration plots using triple-stage tandem mass spectrometry (MS(3) ) with electrospray ionization (ESIMS(3) ) and cationization by lithium ions. The data were taken by direct infusion with an AB SCIEX QTRAP 2000 QqLIT mass spectrometer. Non-linear calibration plots were observed for the four isomer pairs containing the polyunsaturated eicosapentaenoic (20:5) and docosahexaenoic (22:6) acids paired with palmitic acid (16:0) or myristic acid (14:0), while the pair including palmitic and stearic (18:0) acids provided a linear plot. A non-linear model was developed for these plots and then verified experimentally. The fractional abundance calibration plots used in regioisomeric analysis of triacylglycerols are intrinsically non-linear, but may appear linear if the scatter in data points obscures the curvature, if the curvature is slight, or if the response factors for the two isomers in the regioisomer pair are similar. Therefore, linearity should not be assumed for these types of measurements until confirmed experimentally. Copyright © 2013 John Wiley & Sons, Ltd.
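One simple way to see why fractional-abundance calibration plots can be intrinsically non-linear is a two-component response-factor model: if the two regioisomers yield different MS responses, the observed fractional abundance is a rational, not linear, function of composition. This sketch is illustrative only and is not the authors' fitted model.

```python
def observed_fraction(x, r):
    # x: mole fraction of regioisomer A in the mixture
    # r: response factor of A relative to B (r = 1 means equal responses)
    return r * x / (r * x + (1.0 - x))

xs = [i / 10 for i in range(11)]
equal = [observed_fraction(x, 1.0) for x in xs]    # identical responses: straight line
biased = [observed_fraction(x, 3.0) for x in xs]   # unequal responses: curved plot
```

With r close to 1 the curvature is slight and the plot can appear linear within experimental scatter, matching the behavior reported above.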
NASA Astrophysics Data System (ADS)
Hu, Y. J.; Yang, J.; Kitipornchai, S.
2013-07-01
This paper presents a geometrically nonlinear micro-beam model for the electro-dynamic analysis of an initially curved micro-beam under an applied voltage, with an emphasis on its snap-through and pull-in behaviors. The governing equations of motion and the associated boundary conditions are derived in an arc coordinate system without involving any assumptions on the nonlinear deformation. Differential quadrature method (DQM) and Petzold-Gear Backward Differentiation Formulas (BDF) are employed to solve the governing equations in the space and time domains respectively to obtain the nonlinear fundamental frequency, snap-through voltage, pull-in voltage and the corresponding mode shapes of a micro-beam clamped at both ends. The present analysis is validated through a direct comparison with the published experimental and numerical results. A parametric study is conducted to investigate the influences of the initial gap, base length, arc rise, and initial curved configuration on the snap-through and pull-in behaviors of the micro-beam.
Stringano, Elisabetta; Gea, An; Salminen, Juha-Pekka; Mueller-Harvey, Irene
2011-10-28
This study was undertaken to explore gel permeation chromatography (GPC) for estimating molecular weights of proanthocyanidin fractions isolated from sainfoin (Onobrychis viciifolia). The results were compared with data obtained by thiolytic degradation of the same fractions. Polystyrene, polyethylene glycol and polymethyl methacrylate standards were not suitable for estimating the molecular weights of underivatized proanthocyanidins. Therefore, a novel HPLC-GPC method was developed based on two serially connected PolarGel-L columns using DMF that contained 5% water, 1% acetic acid and 0.15 M LiBr at 0.7 ml/min and 50 °C. This yielded a single calibration curve for galloyl glucoses (trigalloyl glucose, pentagalloyl glucose), ellagitannins (pedunculagin, vescalagin, punicalagin, oenothein B, gemin A), proanthocyanidins (procyanidin B2, cinnamtannin B1), and several other polyphenols (catechin, epicatechin gallate, epigallocatechin gallate, amentoflavone). These GPC-predicted molecular weights represented a considerable advance over previously reported HPLC-GPC methods for underivatized proanthocyanidins. Copyright © 2011 Elsevier B.V. All rights reserved.
Accounting For Nonlinearity In A Microwave Radiometer
NASA Technical Reports Server (NTRS)
Stelzried, Charles T.
1991-01-01
Simple mathematical technique found to account adequately for nonlinear component of response of microwave radiometer. Five prescribed temperatures measured to obtain quadratic calibration curve. Temperature assumed to vary quadratically with reading. Concept not limited to radiometric application; applicable to other measuring systems in which relationships between quantities to be determined and readings of instruments differ slightly from linearity.
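The five-point quadratic calibration described in the brief can be sketched as a least-squares fit; the readings and temperatures below are invented for illustration.

```python
def fit_quadratic(x, y):
    # Least-squares fit of y = c0 + c1*x + c2*x^2 via the 3x3 normal equations
    S = [sum(xi ** k for xi in x) for k in range(5)]                  # sums of x^0..x^4
    A = [[S[i + j] for j in range(3)] for i in range(3)]              # normal matrix
    b = [sum(yi * xi ** k for xi, yi in zip(x, y)) for k in range(3)]
    # Gaussian elimination (no pivoting; adequate for well-scaled demo data)
    for i in range(3):
        for k in range(i + 1, 3):
            f = A[k][i] / A[i][i]
            for j in range(i, 3):
                A[k][j] -= f * A[i][j]
            b[k] -= f * b[i]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c

# Five hypothetical calibration points: instrument reading -> known temperature (K)
readings = [10.0, 20.0, 30.0, 40.0, 50.0]
temps = [0.5 + 1.02 * r + 0.001 * r * r for r in readings]   # slight nonlinearity

c0, c1, c2 = fit_quadratic(readings, temps)

def corrected(reading):
    # Temperature inferred from a raw reading via the quadratic calibration
    return c0 + c1 * reading + c2 * reading * reading
```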
NASA Technical Reports Server (NTRS)
Ng, C. F.
1988-01-01
Assuming a single-mode transverse displacement, a simple formula is derived for the transverse load-displacement relationship of a plate under in-plane compression. The formula is used to derive a simple analytical expression for the nonlinear dynamic response of postbuckled plates under sinusoidal or random excitation. The highly nonlinear snap-through motion can be easily interpreted using the single-mode formula. Experimental results are obtained for buckled and cylindrical aluminum panels under discrete-frequency and broadband excitation by mechanical and acoustic forces.
Stadtmann, H; Delgado, A; Gómez-Ros, J M
2002-01-01
This paper reports the results of a heating profile analysis using a commercial routine read-out system with non-contact hot-nitrogen heating and linear heating gas profiles. Glow curves of TLD-100 were analysed for linear heating gas rates from 1 °C s⁻¹ to 30 °C s⁻¹. Analysis of the individual peak maxima (Peaks 2-5) leads to an approximation of the real heating profile in the TL detector. It was found that the real heating profile deviates strongly from linearity, and that the temperature lag between the heating gas and the detector reaches values of up to some tens of °C. The consequences of this non-linearity for the resulting glow curves are discussed. These results lead to a better understanding of the shape of routine TL glow curves and help to improve the use of glow curve analysis in routine services. In addition, a simple procedure is described that allows calculation of the real heating profile from the heating gas temperature profile. This model shows a very good match between experimental data and calculated values.
NASA Astrophysics Data System (ADS)
Hayasaki, Yoshio
2015-10-01
Some methods for decreasing a measurement error derived from a phase-shifting error for broadband light in phase-shifting low-coherence digital holography are proposed based on theoretical analysis and numerical calculations. It is well-known that an achromatic-phase shifter based on a rotating polarizer drastically decreases the error, but it is found that a small error remains according to the imperfection of the achromatic-phase shifter. It is also found that an ideal achromatic-phase shifter perfectly eliminates the error only when the light source has a symmetrical spectrum. Furthermore, it is demonstrated that a simple linear calibration method decreases the error in a narrow range of optical path differences if a light source with an asymmetrical spectrum is used. Finally, a nonlinear calibration method that can further decrease the error in a wide range of optical path differences is discussed.
Saltzman, M. R.; Edwards, C. T.; Leslie, S. A.; Dwyer, Gary S.; Bauer, J. A.; Repetski, John E.; Harris, A. G.; Bergstrom, S. M.
2014-01-01
The Ordovician 87Sr/86Sr isotope seawater curve is well established and shows a decreasing trend until the mid-Katian. However, uncertainties in calibration of this curve to biostratigraphy and geochronology have made it difficult to determine how the rates of 87Sr/86Sr decrease may have varied, which has implications for both the stratigraphic resolution possible using Sr isotope stratigraphy and efforts to model the effects of Ordovician geologic events. We measured 87Sr/86Sr in conodont apatite in North American Ordovician sections that are well studied for conodont biostratigraphy, primarily in Nevada, Oklahoma, the Appalachian region, and Ohio Valley. Our results indicate that conodont apatite may provide an accurate medium for Sr isotope stratigraphy and strengthen previous reports that point toward a significant increase in the rate of fall in seawater 87Sr/86Sr during the Middle Ordovician Darriwilian Stage. Our 87Sr/86Sr results suggest that Sr isotope stratigraphy will be most useful as a high-resolution tool for global correlation in the mid-Darriwilian to mid-Sandbian, when the maximum rate of fall in 87Sr/86Sr is estimated at ∼5.0–10.0 × 10⁻⁵ per m.y. Variable preservation of conodont elements limits the precision for individual stratigraphic horizons. Replicate conodont analyses from the same sample differ by an average of ∼4.0 × 10⁻⁵ (the 2σ standard deviation is 6.2 × 10⁻⁵), which in the best case scenario allows for subdivision of Ordovician time intervals characterized by the highest rates of fall in 87Sr/86Sr at a maximum resolution of ∼0.5–1.0 m.y. Links between the increased rate of fall in 87Sr/86Sr beginning in the mid-late Darriwilian (Phragmodus polonicus to Pygodus serra conodont zones) and geologic events continue to be investigated, but the coincidence with a long-term rise in sea level (Sauk-Tippecanoe megasequence boundary) and tectonic events (Taconic orogeny) in North America provides a plausible
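The quoted best-case resolution follows from dividing the replicate scatter by the rate of fall of 87Sr/86Sr. A quick back-of-envelope check with the numbers in the abstract gives roughly 0.6-1.2 m.y., consistent with the stated ~0.5-1.0 m.y.

```python
# Back-of-envelope resolution estimate: (replicate 2-sigma scatter) / (rate of fall)
two_sigma = 6.2e-5                      # reported 2-sigma replicate scatter
rates = (5.0e-5, 10.0e-5)               # reported maximum rate of fall, per m.y.
resolution = [two_sigma / r for r in rates]   # best-case datable interval, m.y.
```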
De Vita, C.; Brun, J.; Reynard-Carette, C.; Carette, M.; Amharrak, H.; Lyoussi, A.; Fourmentel, D.; Villard, J.F.
2015-07-01
calorimeter cell head. This discrepancy is higher than in previous experiments because the calorimeter has a high sensitivity. Consequently, a new prototype was built and instrumented with additional heat sources in order to impose an energy deposition on the calorimetric cell structure (in particular in the base) and to improve the calibration step under out-of-pile conditions. In the first part of this paper, a detailed description of the new calorimetric sensor is given. In the second part, the experimental response of the sensor obtained for several internal heating conditions is shown, and the influence of these conditions on the calibration curve is discussed. The response of this prototype is then also presented for different external cooling fluid conditions (in particular flow temperature), and the in-pile and out-of-pile experimental results are compared. In the last part, these out-of-pile experiments are complemented by 2D axisymmetric thermal simulations with the CEA code CAST3M using the Finite Element Method. After a comparison between the experimental and numerical work, improvements of the sensor prototype (new heat sources) are studied. (authors)
NASA Technical Reports Server (NTRS)
Liebowitz, H.; Jones, D. L.; Poulose, P. K.
1974-01-01
Because of the current high degree of interest in the development of a standard nonlinear test method, analytical and experimental comparisons have been made between the R-curve, COD, J-integral and nonlinear energy methods. A general definition of fracture toughness is proposed and the fundamental definitions of each method are compared to it. Experimental comparisons between the COD, J-integral, nonlinear energy and standard ASTM methods have been made for a series of compact tension tests on several aluminum alloys. Some of the tests were conducted according to the ASTM standard method E399-72, while the specimen thickness was reduced below the minimum requirement for plane strain fracture toughness testing for several other test series. The fracture toughness values obtained by the COD method were significantly higher than the toughness values obtained by the other three methods. All of the methods displayed a tendency to yield higher toughness values as the thickness was decreased below the ASTM plane strain requirement.
Effect of nonideal square-law detection on static calibration in noise-injection radiometers
NASA Technical Reports Server (NTRS)
Hearn, C. P.
1984-01-01
The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.
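The bias described above can be sketched numerically: a small fourth-order term in the detection characteristic adds a quadratic-in-power component to the output, and the optimum straight-line calibration fit leaves a residual error determined by that curvature. The coefficients below are invented for illustration only.

```python
import numpy as np

# Hypothetical detector characteristic: ideal square-law response k1*P plus
# a small quadratic-in-power error term k2*P**2 (4th-order curvature).
k1, k2 = 1.0, 0.02
P = np.linspace(0.0, 1.0, 201)          # normalized input noise power
v = k1 * P + k2 * P**2                  # detector output

slope, intercept = np.polyfit(P, v, 1)  # optimum straight-line calibration
residual = v - (slope * P + intercept)
print("max calibration error:", np.max(np.abs(residual)))
```

The maximum residual scales with the curvature coefficient k2, mirroring the abstract's point that the minimum error is set by the power-series coefficients of the detector.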
Evaluation of B/A nonlinear parameter using an acoustic self-calibrated pulse-echo method
Vander Meulen, F.; Haumesser, L.
2008-05-26
The objective of this work is to develop an easy-to-build and robust setup for measuring the nonlinearity parameter B/A in fluids using ultrasound. The method is based on the pulse-echo technique, using a single element broadband acoustic transducer, and requires electrical signal measurements. Results obtained in water and denatured alcohol validate the proposed procedure. The choice of a suitable primary wave frequency is discussed with regard to the transducer sensitivity. Further, the influence of the perturbations introduced by the experimental device nonlinearities, and the role of the reflector on the measured second harmonic field amplitude are investigated.
Vereecken, H; Jaekel, U; Schwarze, H
2002-06-01
We analyzed the long-term behavior of breakthrough curves (BTCs) and temporal moments of a solute subjected to Freundlich equilibrium sorption (s = kc^n). For one-dimensional transport in a homogeneous porous medium, we derived a power-law relation between travel time, tau, and solute displacement, chi, with the exponent being equal to the Freundlich exponent n. The mean solute velocity, derived from the first time moment, was found to change as tau^(n-1). For n values larger than 0.66, the second time moment could be related to c*chi^(2/n), where c is a constant. An approach based on the use of a critical concentration was developed to estimate the presence of the asymptotic regime in the tail of the BTC. This approach was tested successfully using numerical case studies. One-dimensional numerical simulations with varying values of k, n and initial mass were run to verify the closed-form analytical expressions for the large-time behavior of temporal moments and the tailing part of breakthrough curves. Good agreement was found between the slope of the tailing part of log-log transformed BTCs and the slope predicted by asymptotic theory. Asymptotic theory in general underestimated the magnitude of the concentration in the tail; the quality of the estimated tail concentrations improved for small values of the dispersivity. Experimental BTCs of uranin and benazolin were analyzed in combination with sorption/desorption batch experiments using asymptotic theory. Good agreement was found between the value of the exponent n derived from the desorption experiment with benazolin and the value derived from the tail of the BTC.
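The power-law relation between displacement and travel time can be illustrated with synthetic data: if chi ∝ tau^n, the Freundlich exponent is recovered as the slope of a log-log fit. The values below are invented.

```python
import numpy as np

n_true = 0.8                            # Freundlich exponent (invented)
tau = np.logspace(0, 3, 50)             # travel times
chi = 2.5 * tau**n_true                 # displacements, chi proportional to tau**n

# Slope of the log-log plot recovers the exponent.
slope, _ = np.polyfit(np.log(tau), np.log(chi), 1)
print(round(float(slope), 3))
```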
NASA Astrophysics Data System (ADS)
Shoemaker, C. A.; Pang, M.; Akhtar, T.; Bindel, D.
2016-12-01
New parallel surrogate global optimization algorithms are developed and applied to objective functions that are expensive simulations (possibly with multiple local minima). The algorithms can be applied to most geophysical simulations, including those with nonlinear partial differential equations, and the optimization does not require that the simulations themselves be parallelized. Asynchronous (and synchronous) parallel execution is available in the optimization toolbox "pySOT". The parallel algorithms are modified from their serial versions to eliminate fine-grained parallelism. The optimization is computed with the open-source software pySOT, a Surrogate Global Optimization Toolbox that allows the user to pick the type of surrogate (or ensembles), the search procedure on the surrogate, and the type of parallelism (synchronous or asynchronous). pySOT also allows the user to develop new algorithms by modifying parts of the code. In the applications here, the objective function takes up to 30 minutes for one simulation, and serial optimization can take over 200 hours. Results from the Yellowstone (NSF) and NCSS (Singapore) supercomputers are given for groundwater contaminant hydrology simulations, with applications to model parameter estimation and decontamination management. All results are compared with alternatives. The first results are for optimization of pumping at many wells to reduce the cost of decontaminating groundwater at a superfund site. The optimization runs with up to 128 processors. Superlinear speed-up is obtained for up to 16 processors, and efficiency with 64 processors is over 80%. Each evaluation of the objective function requires the solution of nonlinear partial differential equations to describe the impact of spatially distributed pumping and model parameters on model predictions for the spatial and temporal distribution of groundwater contaminants. The second application uses asynchronous parallel global optimization for groundwater quality model calibration. The time for a single objective
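The surrogate optimization idea can be sketched in a few lines without the pySOT API: fit a cheap surrogate to the points evaluated so far, minimize the surrogate to propose the next point, evaluate the expensive function there, and refit. This is a generic serial sketch, not pySOT's actual interface; the objective and all settings are invented.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive(x):                        # stand-in for a long-running simulation
    return np.sum((x - 0.3)**2) + 0.1 * np.sum(np.cos(5 * x))

rng = np.random.default_rng(0)
dim, n_init = 2, 8
X = rng.uniform(-1, 1, size=(n_init, dim))   # initial experimental design
y = np.array([expensive(x) for x in X])

for _ in range(15):
    surrogate = RBFInterpolator(X, y)        # refit surrogate to all data
    start = X[np.argmin(y)]                  # search from the current best
    res = minimize(lambda x: surrogate(x[None, :])[0], start,
                   bounds=[(-1, 1)] * dim)
    # Small jitter keeps the sample points distinct for the interpolator.
    x_new = np.clip(res.x + 1e-3 * rng.standard_normal(dim), -1, 1)
    X = np.vstack([X, x_new])
    y = np.append(y, expensive(x_new))

print("best value found:", y.min())
```

Parallel variants (as in pySOT) propose and evaluate several candidate points per iteration instead of one.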
Schenone, Agustina V; Culzoni, María J; Marsili, Nilda R; Goicoechea, Héctor C
2013-06-01
The performance of MCR-ALS was studied in the modeling of non-linear kinetic-spectrophotometric data acquired by a stopped-flow system for the quantitation of tartrazine in the presence of brilliant blue and sunset yellow FCF as possible interferents. In the present work, MCR-ALS and U-PCA/RBL were first applied to remove the contribution of unexpected components not included in the calibration set. Secondly, a polynomial function was used to model the non-linear data obtained by the implementation of the algorithms. MCR-ALS was the only strategy that allowed accurate determination of tartrazine in the test samples. It was therefore applied to the analysis of tartrazine in beverage samples with minimum sample preparation and short analysis time. The proposed method was validated by comparison with a chromatographic procedure published in the literature. Mean recovery values between 98% and 100% and relative errors of prediction between 4% and 9% were indicative of the good performance of the method. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Cheatham, Michael M.; Sangrey, William F.; White, William M.
1993-02-01
The primary factors limiting accuracy and precision using inductively coupled plasma mass spectrometry (ICP-MS) in matrix-matched external standardization are machine drift and variation of the instrument response as a function of mass. Because drift is usually non-linear, the degree of drift differs from one mass to the next, and the direction of drift can change frequently when analyzing over large mass ranges. Internal standardization results in minimal improvement of data quality. An analytical procedure and an off-line data reduction algorithm have been developed that correct for these variations and produce a significant improvement in analytical accuracy and precision. In this technique, a "drift correction" standard is analyzed after every four or five samples. A polynomial curve is fitted to each isotope analyzed, and a correction based on this curve is applied to the measured intensity of the respective isotopes in the samples and standards. This data reduction algorithm has been developed into a Microsoft Excel ™ 3.0 Macro that completely automates all calculations. This article is an electronic publication in Spectrochimica Acta Electronica (SAE), the electronic section of Spectrochimica Acta Part B (SAB). The hard copy text is accompanied by a disk with the Excel macro for the Macintosh computer and sufficient instructions for its use. The main article discusses the scientific aspects of the subject and explains the purpose of the macro.
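The drift-correction scheme can be sketched as follows, with invented run indices and intensities: fit a polynomial to the drift standard's intensity versus run order, normalize it, and divide sample intensities by the interpolated drift factor for the corresponding run.

```python
import numpy as np

# Hypothetical ICP-MS drift correction (invented numbers): a "drift
# correction" standard is measured every few runs; a polynomial fitted to
# its intensity vs. run order gives a per-isotope correction curve.
t_std = np.array([0, 5, 10, 15, 20])               # run indices of the standard
i_std = np.array([100.0, 96.0, 94.5, 95.0, 98.0])  # measured standard intensities

coeffs = np.polyfit(t_std, i_std, 2)               # per-isotope drift polynomial
drift = np.polyval(coeffs, np.arange(21)) / np.polyval(coeffs, 0)

raw = np.array([50.0, 47.5])                       # sample intensities at runs 7 and 13
corrected = raw / drift[[7, 13]]                   # drift < 1 here, so corrected > raw
print(corrected)
```

In the published procedure this fit is done independently for each isotope, which is what handles the mass-dependent, non-linear drift the abstract describes.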
Calibration of pneumotachographs using a calibrated syringe.
Tang, Yongquan; Turner, Martin J; Yem, Johnny S; Baker, A Barry
2003-08-01
Pneumotachographs require frequent calibration. Constant-flow methods allow polynomial calibration curves to be derived but are time consuming. The iterative syringe stroke technique is moderately efficient but results in discontinuous conductance arrays. This study investigated the derivation of first-, second-, and third-order polynomial calibration curves from 6 to 50 strokes of a calibration syringe. We used multiple linear regression to derive first-, second-, and third-order polynomial coefficients from two sets of 6-50 syringe strokes. In part A, peak flows did not exceed the specified linear range of the pneumotachograph, whereas flows in part B peaked at 160% of the maximum linear range. Conductance arrays were derived from the same data sets by using a published algorithm. Volume errors of the calibration strokes and of separate sets of 70 validation strokes (part A) and 140 validation strokes (part B) were calculated by using the polynomials and conductance arrays. Second- and third-order polynomials derived from 10 calibration strokes achieved volume variability equal to or better than conductance arrays derived from 50 strokes. We found that evaluating conductance arrays with the calibration syringe strokes themselves yields falsely low volume variances. We conclude that accurate polynomial curves can be derived from as few as 10 syringe strokes, and that the new polynomial calibration method is substantially more time efficient than previously published conductance methods.
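Because each stroke's volume is linear in the polynomial coefficients, the calibration reduces to multiple linear regression, as the abstract describes: if flow = a1*p + a2*p^2 + a3*p^3, then the stroke volume is V_j = sum_k a_k * integral(p^k dt). A sketch with synthetic pressure signals and an assumed third-order pressure-to-flow polynomial (all values invented):

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01                                   # sample interval, s
true_a = np.array([1.0, 0.15, 0.02])        # assumed pressure-to-flow polynomial

strokes = []
for _ in range(10):                         # ten syringe strokes
    t = np.arange(0.0, 1.0, dt)
    amplitude = rng.uniform(0.5, 1.5)
    strokes.append(amplitude * np.sin(np.pi * t))   # synthetic stroke pressure

# Design matrix: integrals of p, p**2, p**3 for each stroke.
X = np.array([[np.sum(p**k) * dt for k in (1, 2, 3)] for p in strokes])
V = X @ true_a                              # "known" syringe volumes

# Multiple linear regression recovers the polynomial coefficients.
a_hat, *_ = np.linalg.lstsq(X, V, rcond=None)
print(np.allclose(a_hat, true_a))
```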
NASA Technical Reports Server (NTRS)
Robertson, G.
1982-01-01
Calibration was performed on the shuttle upper atmosphere mass spectrometer (SUMS). The results of the calibration and the as-run test procedures are presented. The output data are described, and engineering data conversion factors, tables and curves, and calibration of the instrument gauges are included. Static calibration results are given, including: instrument sensitivity versus external pressure for N2 and O2; data from each calibration scan; data plots for N2 and O2; sensitivity of SUMS at the inlet for N2 and O2; and 14/28 and 16/32 ratios for nitrogen and oxygen.
Gottlieb, O.; Feldman, M.
1996-12-31
The authors combine an averaging procedure with a Hilbert transform based algorithm for parameter estimation of a nonlinear ocean system roll model. System backbone curves obtained from data are compared to those obtained analytically and are found to be accurate. Sensitivity of the results is tested by introducing random noise to a nonlinear model describing roll response of a small boat. An example field calibration test of a small semisubmersible exhibiting nonlinear damping is also considered.
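A Hilbert-transform backbone estimate of the kind described above can be sketched with scipy on a synthetic decaying oscillation. A linear oscillator is used here, so the estimated instantaneous frequency should be flat at the damped natural frequency; for a nonlinear roll model the amplitude-frequency pairs would trace a curved backbone. All parameters are invented.

```python
import numpy as np
from scipy.signal import hilbert

fs, f0, zeta = 100.0, 1.0, 0.02              # sample rate, natural freq, damping
t = np.arange(0, 30, 1 / fs)
x = np.exp(-zeta * 2 * np.pi * f0 * t) * np.cos(2 * np.pi * f0 * np.sqrt(1 - zeta**2) * t)

z = hilbert(x)                               # analytic signal
amp = np.abs(z)                              # instantaneous amplitude (backbone ordinate)
freq = np.gradient(np.unwrap(np.angle(z)), 1 / fs) / (2 * np.pi)

mid = slice(200, 2800)                       # discard edge effects of the transform
print("median frequency [Hz]:", round(float(np.median(freq[mid])), 3))
```

Plotting amp[mid] against freq[mid] gives the backbone curve; for this linear system it is a vertical line at the damped natural frequency.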
Quadrature phase interferometer used to calibrate dial indicator calibrators
NASA Astrophysics Data System (ADS)
Huang, Shau-Chi; Liou, Huay-Chung; Peng, Gwo-Sheng; Lu, Ming-Feng
2001-10-01
To calibrate dial indicators, gage blocks or dial indicator calibrators are usually used. For better accuracy and resolution, interferometers are used to calibrate dial indicator calibrators. Systematic errors of laser interferometers can be classified into three categories: intrinsic errors, environmental errors and installation errors. Intrinsic errors include laser wavelength error, electronic error and optics nonlinearity. In order to achieve nanometer accuracy, minimizing intrinsic error is crucial. In this paper, we address the problem of minimizing the optics nonlinearity error and describe a discrete-time signal processing method that minimizes the electronic error, nonlinearity error and drift by simply using a quadrature phase interferometer, achieving nanometer accuracy and linearity.
NASA Technical Reports Server (NTRS)
Mccabe, D. E.; Sha, G. T.
1977-01-01
The compliance calibrations for the compact (CS) and crack-line-wedge-loaded (CLWL) specimens have been determined by experimental measurements and by boundary-collocation analysis. The CS and CLWL specimen configurations were modeled more accurately than those used in previous analytical investigations. Polynomial expressions for the compliance at various stations along the crack line for CS and CLWL specimens are presented. The compliance calibrations for the center-crack tension (CCT) specimen have been determined theoretically by boundary-collocation and finite-element analysis. The calculated compliance values for the CCT specimen are compared with values obtained from the Irwin-Westergaard expression and from a modification to the Irwin-Westergaard expression proposed by Eftis and Liebowitz. The Eftis-Liebowitz expression was found to be in good agreement (plus or minus 2 percent) with both analyses for crack aspect ratios up to 0.8 and for gage half-span to specimen width ratios up to 0.5.
Calibration system of underwater robot sensor based on CID algorithm
NASA Astrophysics Data System (ADS)
Wang, Xiaolong; Wang, Sen; Gao, Lifu; Wu, Shan; Wei, Shuheng
2017-06-01
In static characteristic calibration of sensors, the original measured data usually follow a nonlinear distribution. To address this, a static calibration system for underwater robot sensors was designed. The system consists of four parts: a sensor, I-V conversion with an amplifying circuit, an STM32F107 microcontroller and a PC. The lower computer and the upper computer communicate by USB. A kind of adaptive cyclic iterative denoising (CID) algorithm is presented for data processing. Finally, the calibration curve is fitted with compensation processing.
Effect of calibration method on Tekscan sensor accuracy.
Brimacombe, Jill M; Wilson, David R; Hodgson, Antony J; Ho, Karen C T; Anglin, Carolyn
2009-03-01
Tekscan pressure sensors are used in biomechanics research to measure joint contact loads. While the overall accuracy of these sensors has been reported previously, the effects of different calibration algorithms on sensor accuracy have not been compared. The objectives of this validation study were to determine the most appropriate calibration method supplied in the Tekscan program software and to compare its accuracy to the accuracy obtained with two user-defined calibration protocols. We evaluated the calibration accuracies for test loads within the low range, high range, and full range of the sensor. Our experimental setup used materials representing those found in standard prosthetic joints, i.e., metal against plastic. The Tekscan power calibration was the most accurate of the algorithms provided with the system software, with an overall rms error of 2.7% of the tested sensor range, whereas the linear calibrations resulted in an overall rms error of up to 24% of the tested range. The user-defined ten-point cubic calibration was almost five times more accurate, on average, than the power calibration over the full range, with an overall rms error of 0.6% of the tested range. The user-defined three-point quadratic calibration was almost twice as accurate as the Tekscan power calibration, but was sensitive to the calibration loads used. We recommend that investigators design their own calibration curves not only to improve accuracy but also to understand the range(s) of highest error and to choose the optimal points within the expected sensing range for calibration. Since output and sensor nonlinearity depend on the experimental protocol (sensor type, interface shape and materials, sensor range in use, loading method, etc.), sensor behavior should be investigated for each different application.
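The difference between a two-parameter power calibration and a ten-point cubic calibration can be sketched with invented raw-output/load pairs (not Tekscan data): the power law is fitted in log space, the cubic directly in load units.

```python
import numpy as np

raw = np.array([50., 120., 300., 600., 900., 1200., 1500., 1800., 2100., 2400.])
noise = np.array([0.02, -0.03, 0.01, -0.02, 0.03, -0.01, 0.02, -0.03, 0.01, 0.02])
load = 0.004 * raw**1.15 + noise              # invented power-law sensor + noise

# Power calibration: fit log(load) = log(k) + b*log(raw).
b, logk = np.polyfit(np.log(raw), np.log(load), 1)
pred_power = np.exp(logk) * raw**b

# Ten-point cubic calibration fitted directly in load units.
pred_cubic = np.polyval(np.polyfit(raw, load, 3), raw)

rms = lambda e: np.sqrt(np.mean(e**2))
print("power RMS:", rms(load - pred_power), " cubic RMS:", rms(load - pred_cubic))
```

As the abstract recommends, the point is to build and compare one's own calibration curves over the expected sensing range rather than rely on a single built-in form.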
NASA Astrophysics Data System (ADS)
Lazarev, Vladimir A.; Leonov, Stanislav O.; Tarabrin, Mikhail K.; Karasik, Valerii E.
2017-06-01
Fiber Bragg grating (FBG) strain sensors are powerful tools for structural health monitoring applications. However, FBG sensor fabrication and packaging processes can lead to non-linear behavior that affects the accuracy of the strain measurements. Here we present a novel nondestructive calibration technique for FBG strain sensors that uses a mechanical nanomotion transducer. A customized calibration setup was designed based on dovetail-type slideways mechanized with a stepping motor. The performance of the FBG strain sensor was investigated through analysis of experimental data, and the calibration curves for the FBG strain sensor are presented.
NASA Astrophysics Data System (ADS)
Sigismondi, Costantino
2008-09-01
Stellar aberration is the largest special relativistic effect discovered in astronomy (in 1727 by James Bradley), involving the speed of light composed with the Earth's orbital motion. This effect, together with nutation, affected the measurement of latitude from upper and lower transits of Polaris in the first week of January 1701 made by Francesco Bianchini (1662-1729). The equinoxes and solstices of 1703 were measured by timing solar and stellar transits at the Meridian Line of Pope Clement XI built in the Basilica of S. Maria degli Angeli in Rome. The original eastward deviation of the Line, 4' 28.8" ± 0.6", affects all measurements. The calibration curve of the Clementine Line, published here for the first time after two years of measurements, also includes local deviations of the Line, and it is used to correct solar and lunar ephemerides at the 0.3 s level of accuracy when meridian transits are observed and timed there.
NASA Astrophysics Data System (ADS)
Ahn, C. H.; Park, H. W.; Kim, H. H.; Park, S. H.; Son, C.; Kim, M. C.; Lee, J. H.; Go, J. S.
2013-06-01
High efficiency heat exchangers, such as intercoolers and recuperators, are composed of complex and compact structures to enhance heat transfer. This limits the installation of conventional temperature sensors to measure the temperature inside the heat exchanger without flow disturbance. To overcome this limitation, we have developed a direct patterning method in which metal is sputtered onto a curved surface using a film photoresist, and have fabricated thin-film Au resistance temperature detector (RTD) sensors. A photosensitive film resist has been used to overcome the difficulty of 3-dimensional photolithography on a curved surface. After 2-dimensional photolithography, the film resist is laminated over an alumina rod on which Au has been deposited as the RTD sensing material. The Au metal is etched chemically, and the film resist is removed to form the thin-film Au-RTD temperature sensors. They are calibrated by measuring the resistance change against temperature in a thermally controlled furnace. A second-order polynomial fit shows good agreement with the measured temperatures, with a standard deviation of 0.02 over the temperature range of 20-450 °C. Finally, the performance of the Au-RTD temperature sensors was evaluated.
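The second-order polynomial calibration can be sketched with numpy, using an invented Au-like resistance-temperature relation in place of the measured furnace data:

```python
import numpy as np

T = np.array([20., 100., 200., 300., 400., 450.])     # furnace set points, °C
R = 100 * (1 + 3.9e-3 * T - 5.8e-7 * T**2)            # invented RTD resistance, ohm

coeffs = np.polyfit(R, T, 2)                          # second-order R -> T calibration
T_pred = np.polyval(coeffs, R)
print("std. dev. of fit residuals:", np.std(T_pred - T))
```

The fitted polynomial is then evaluated on measured resistances to read temperature during operation.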
Chen, Yanfei; He, Jin; Zhang, Jibin; Yu, Ziniu
2009-08-15
A radial basis function neural network (RBFNN) method was developed for the first time to model the nonlinear calibration curves of four hexachlorocyclohexane (HCH) isomers, aiming to extend their working calibration ranges in gas chromatography with electron capture detection (GC-ECD). Fourteen other methods, including seven parametric curve fitting methods, two nonparametric curve fitting methods, and five other artificial neural network (ANN) methods, were also developed and compared. Only the RBFNN method, with a logarithm transform and normalization of the calibration data, was able to model the nonlinear calibration curves of the four HCH isomers adequately. The RBFNN method accurately predicted the concentrations of HCH isomers within and beyond the linear ranges in certified test samples. Furthermore, no significant difference (p > 0.05) was found between the HCH isomer concentrations in water samples calculated with the RBFNN method and with the ordinary least squares (OLS) method (R^2 > 0.9990). In conclusion, the working calibration ranges of the four HCH isomers were extended from 0.08-60 ng/mL to 0.08-1000 ng/mL without sacrificing accuracy or precision by means of the RBFNN. The outstanding nonlinear modeling capability of the RBFNN, along with its universal applicability to various problems as a "soft" modeling method, should make it an appealing alternative to traditional modeling methods in the calibration analyses of various systems besides GC-ECD.
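A minimal Gaussian RBF network of the kind described, with the log-transform and normalization applied to a hypothetical saturating detector response, can be sketched as follows. Centers are placed on a grid and only the linear output weights are fitted; a real RBFNN would also tune centers and widths.

```python
import numpy as np

conc = np.logspace(-1, 3, 25)               # 0.1 .. 1000 ng/mL (invented)
area = conc / (1 + 0.002 * conc)            # hypothetical saturating detector response

x = np.log10(conc); x = (x - x.mean()) / x.std()   # normalized log concentration
y = np.log10(area); y = (y - y.mean()) / y.std()   # normalized log peak area

centers = np.linspace(y.min(), y.max(), 8)  # RBF centers on the response axis
width = centers[1] - centers[0]
Phi = np.exp(-(y[:, None] - centers[None, :])**2 / (2 * width**2))

w, *_ = np.linalg.lstsq(Phi, x, rcond=None) # linear output weights
x_pred = Phi @ w                            # predicted (normalized) log concentration
print("max abs fit error:", np.max(np.abs(x_pred - x)))
```

The network maps response to concentration directly, which is how a nonlinear curve can extend the working range beyond the linear region.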
Barman, Ishan; Kong, Chae-Ryon; Dingari, Narahara Chari; Dasari, Ramachandra R.; Feld, Michael S.
2010-01-01
Sample-to-sample variability has proven to be a major challenge in achieving calibration transfer in quantitative biological Raman spectroscopy. Multiple morphological and optical parameters, such as tissue absorption and scattering, physiological glucose dynamics and skin heterogeneity, vary significantly across a human population, introducing non-analyte-specific features into the calibration model. In this paper, we show that fluctuations of such parameters in human subjects introduce curved (non-linear) effects in the relationship between the concentrations of the analyte of interest and the mixture Raman spectra. To account for these curved effects, we propose the use of support vector machines (SVM) as a non-linear regression method over conventional linear regression techniques such as partial least squares (PLS). Using transcutaneous blood glucose detection as an example, we demonstrate that application of SVM enables a significant improvement (at least 30%) in cross-validation accuracy over PLS when measurements from multiple human volunteers are employed in the calibration set. Furthermore, using physical tissue models with randomized analyte concentrations and varying turbidities, we show that fluctuations in turbidity alone cause curved effects which can only be adequately modeled using non-linear regression techniques. The enhanced levels of accuracy obtained with the SVM-based calibration models open up avenues for prospective prediction in humans and thus for clinical translation of the technology. PMID:21050004
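The benefit of a non-linear regressor on curved calibration data can be sketched with numpy, using kernel ridge regression with an RBF kernel as a simple stand-in for SVM regression (synthetic curved data, invented parameters; not the authors' spectra or model):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 60)                             # analyte level (arbitrary units)
y = x + 0.3 * x**2 + 0.01 * rng.standard_normal(60)   # curved response + noise

def rbf_kernel(a, b, gamma=20.0):
    return np.exp(-gamma * (a[:, None] - b[None, :])**2)

K = rbf_kernel(x, x)
alpha = np.linalg.solve(K + 1e-3 * np.eye(60), y)     # kernel ridge weights
y_kr = K @ alpha                                      # nonlinear fit

y_lin = np.polyval(np.polyfit(x, y, 1), x)            # ordinary linear fit

rmse = lambda e: np.sqrt(np.mean(e**2))
print("kernel RMSE:", rmse(y - y_kr), " linear RMSE:", rmse(y - y_lin))
```

The linear fit cannot absorb the quadratic term, so its residual is dominated by the curvature rather than the noise, which is exactly the failure mode the abstract attributes to PLS on curved data.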
Comparison of different linear calibration approaches for LC-MS bioanalysis.
Tan, Aimin; Awaiye, Kayode; Jose, Besy; Joshi, Paresh; Trabelsi, Fethi
2012-12-12
Many different calibration approaches are used for linear calibration in LC-MS bioanalysis, such as different numbers of concentration levels and replicates. However, direct comparison of these approaches is rare, particularly using experimental results. The purpose of this research is to compare different linear calibration approaches (existing and new ones) through simulations and experiments. Both simulation and experimental results demonstrate that linear calibration using two concentrations (two true concentrations, not forced through zero) is as good as or even better than that using multiple concentrations (e.g. 8 or 10) in terms of accuracy. Additionally, two-concentration calibration not only significantly saves time and cost, but is also more robust. Furthermore, it has been demonstrated that extrapolation of a linear curve at the high concentration end into a region of known linearity is acceptable. When multi-concentration calibration is used, the difference between the two commonly used approaches, i.e. singlet (one curve) or duplicate (two curves) standards per concentration level, is small when a method is very precise. Otherwise, the one-curve approach can result in larger variation at the low concentration end and a higher batch failure rate. To reduce the variation and unnecessary reassays due to batch failure or possible rejection of the lowest and/or highest calibration standards, a partially duplicate-standard approach is proposed, which has duplicate-standard-like performance but still saves time and cost as the singlet-standard approach does. Finally, the maximum allowable degrees of quadratic (non-linear) response in linear calibration are determined for different scenarios. Because of its multiple advantages and potential application in regulated bioanalysis, recommendations on how to implement two-concentration linear calibration in practice are given, and some typical "concerns" regarding linear calibration using only two concentrations are addressed.
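Two-concentration calibration can be sketched with synthetic data: a line fitted through duplicate standards at two levels back-calculates an unknown about as accurately as an eight-level curve when the response is truly linear (all values invented):

```python
import numpy as np

rng = np.random.default_rng(3)
slope_true, intercept_true = 2.0, 0.5

def respond(c):                                   # instrument response + noise
    return slope_true * c + intercept_true + 0.02 * rng.standard_normal(np.shape(c))

c2 = np.array([1.0, 1.0, 100.0, 100.0])           # duplicates at two levels
m2, b2 = np.polyfit(c2, respond(c2), 1)

c8 = np.array([1., 2., 5., 10., 20., 50., 80., 100.])  # eight levels, singlet
m8, b8 = np.polyfit(c8, respond(c8), 1)

unknown = 40.0
r = respond(unknown)
est2, est8 = (r - b2) / m2, (r - b8) / m8         # back-calculated concentrations
print(float(est2), float(est8))
```

The comparison holds only because the response here is generated as truly linear; detecting non-linearity is precisely what two levels cannot do, which is why the paper bounds the allowable quadratic response.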
Reimer, P J; Baillie, M L; Bard, E; Beck, J W; Blackwell, P G; Buck, C E; Burr, G S; Edwards, R L; Friedrich, M; Guilderson, T P; Hogg, A G; Hughen, K A; Kromer, B; McCormac, G; Manning, S; Reimer, R W; Southon, J R; Stuiver, M; der Plicht, J v; Weyhenmeyer, C E
2005-10-02
Radiocarbon calibration curves are essential for converting radiocarbon dated chronologies to the calendar timescale. Prior to the 1980s, numerous differently derived calibration curves based on radiocarbon ages of known-age material were in use, resulting in "apples and oranges" comparisons between various records (Klein et al., 1982), further complicated by until-then unappreciated inter-laboratory variations (International Study Group, 1982). The solution was to produce an internationally agreed calibration curve based on carefully screened data, with updates at 4-6 year intervals (Klein et al., 1982; Stuiver and Reimer, 1986; Stuiver and Reimer, 1993; Stuiver et al., 1998). The IntCal working group has continued this tradition with the active participation of researchers who produced the records that were considered for incorporation into the current, internationally ratified calibration curves, IntCal04, SHCal04, and Marine04, for Northern Hemisphere terrestrial, Southern Hemisphere terrestrial, and marine samples, respectively (Reimer et al., 2004; Hughen et al., 2004; McCormac et al., 2004). Fairbanks et al. (2005), accompanied by a more technical paper, Chiu et al. (2005), and an introductory comment, Adkins (2005), recently published a "calibration curve spanning 0-50,000 years". Fairbanks et al. (2005) and Chiu et al. (2005) have made a significant contribution to the database on which the IntCal04 and Marine04 calibration curves are based. These authors have now taken the further step of deriving their own radiocarbon calibration extending to 50,000 cal BP, which they claim is superior to that generated by the IntCal working group. In their papers, these authors are strongly critical of the IntCal calibration efforts for what they claim to be inadequate screening and sample pretreatment methods. While these criticisms may ultimately be helpful in identifying a better set of protocols, we feel that there are also several erroneous and misleading
Tauler, R
2007-07-09
Although alternating least squares algorithms have proven extremely useful and flexible for solving multivariate curve resolution problems, other approaches based on non-linear optimization algorithms using non-linear constraints are possible. Once the subspaces defined by the PCA solutions are identified, appropriate rotation and perturbation of these solutions can produce solutions fulfilling the constraints obeyed by the physical nature of the investigated systems. An optimization algorithm based on the fulfillment of constraints is proposed to perform such a rotation, and some examples of application in chemistry and environmental chemistry are given. It is shown that the solutions obtained either by alternating least squares or by the newly proposed algorithm are rather similar, and that both lie within the boundaries of the band of feasible solutions obtained by an algorithm previously developed to estimate them.
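The alternating least squares idea behind curve resolution can be sketched on a toy bilinear data set. This is a bare-bones nonnegativity-clipped ALS for D ≈ C·Sᵀ, not the full MCR-ALS nor the rotation-based algorithm of the abstract; profiles and spectra are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 40)
C_true = np.column_stack([np.exp(-3 * t), 1 - np.exp(-3 * t)])  # concentration profiles
S_true = rng.uniform(0.1, 1.0, size=(25, 2))                    # pure spectra
D = C_true @ S_true.T                                           # bilinear data matrix

C = rng.uniform(size=C_true.shape)        # random nonnegative initial guess
for _ in range(200):
    # Alternate: solve for S given C, then C given S, clipping to nonnegative.
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)

rel_err = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
print("relative reconstruction error:", rel_err)
```

The recovered factors are only determined up to scaling and (within the feasible band) rotation, which is the ambiguity both ALS and the proposed constrained-rotation algorithm must contend with.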
Weninger, Bernhard; Jöris, Olaf
2008-11-01
This paper combines the data sets available today for 14C-age calibration of the last 60 ka. By stepwise synchronization of paleoclimate signatures, each of these sets of 14C-ages is compared with the U/Th-dated Chinese Hulu Cave speleothem records, which shows global paleoclimate change in high temporal resolution. By this synchronization we have established an absolute-dated Greenland-Hulu chronological framework, against which global paleoclimate data can be referenced, extending the 14C-age calibration curve back to the limits of the radiocarbon method. Based on this new, U/Th-based Greenland(Hulu) chronology, we confirm that the radiocarbon timescale underestimates calendar ages by several thousand years during most of Oxygen Isotope Stage 3. Major atmospheric 14C variations are observed for the period of the Middle to Upper Paleolithic transition, which has significant implications for dating the demise of the last Neandertals. The early part of "the transition" (with 14C ages > 35.0 ka 14C BP) coincides with the Laschamp geomagnetic excursion. This period is characterized by highly-elevated atmospheric 14C levels. The following period ca. 35.0-32.5 ka 14C BP shows a series of distinct large-scale 14C age inversions and extended plateaus. In consequence, individual archaeological 14C dates older than 35.0 ka 14C BP can be age-calibrated with relatively high precision, while individual dates in the interval 35.0-32.5 ka 14C BP are subject to large systematic age-'distortions,' and chronologies based on large data sets will show apparent age-overlaps of up to ca. 5,000 cal years. Nevertheless, the observed variations in past 14C levels are not as extreme as previously proposed ("Middle to Upper Paleolithic dating anomaly"), and the new chronological framework leaves ample room for application of radiocarbon dating in the age-range 45.0-25.0 ka 14C BP at high temporal resolution.
A Load Shortening Curve Library for Longitudinally Stiffened Panels
2010-08-01
curves. These are compared with similar curves calculated using nonlinear FEA and using design formulas published by the International Association... of parameter values used in load-shortening curve libraries... Table 3: Nonlinear solution strategy... direct assessment with nonlinear finite element analysis (FEA) first reported by Chen et al. [9]. A recent study comparing the ultimate strengths of
BATSE spectroscopy detector calibration
NASA Technical Reports Server (NTRS)
Band, D.; Ford, L.; Matteson, J.; Lestrade, J. P.; Teegarden, B.; Schaefer, B.; Cline, T.; Briggs, M.; Paciesas, W.; Pendleton, G.
1992-01-01
We describe the channel-to-energy calibration of the Spectroscopy Detectors of the Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory (GRO). These detectors consist of NaI(Tl) crystals viewed by photomultiplier tubes whose output is in turn measured by a pulse height analyzer. The calibration of these detectors has been complicated by frequent gain changes and by nonlinearities specific to the BATSE detectors. Nonlinearities in the light output from the NaI crystal and in the pulse height analyzer are shifted relative to each other by changes in the gain of the photomultiplier tube. We present the analytical model which is the basis of our calibration methodology, and outline how the empirical coefficients in this approach were determined. We also describe the complications peculiar to the Spectroscopy Detectors, and how our understanding of the detectors' operation led us to a solution to these problems.
Teglia, Carla M; Cámara, María S; Vera-Candioti, Luciana
2017-02-08
In the previously published part of this study, we detailed a novel strategy based on dispersive liquid-liquid microextraction to extract and preconcentrate nine fluoroquinolones in porcine blood. Moreover, we presented the optimized experimental conditions to obtain complete CE separation between the target analytes. Consequently, this second part reports the validation of the developed method to determine flumequine, difloxacin, enrofloxacin, marbofloxacin, ofloxacin, and ciprofloxacin through univariate calibration, and enoxacin, danofloxacin, and gatifloxacin through multivariate curve resolution analysis. The validation was performed according to FDA guidelines for bioanalytical assay procedures and the European Directive 2002/657 to demonstrate that the results are reliable. The method was applied to the determination of fluoroquinolones in real samples. Results indicated high selectivity and excellent precision, with RSDs of less than 11.9% for the concentrations in intra- and inter-assay precision studies. Linearity was demonstrated over the range 4.00 to 30.00 mg/L, and recovery was investigated at four fortification levels, ranging from 89 to 113%. Several approaches found in the literature were used to determine the LODs and LOQs. Although all of the strategies were appropriate, different methods yielded different values. Estimating the S/N ratio from the mean noise level at the migration time of each fluoroquinolone proved to be the best of the studied methods for evaluating the LODs and LOQs; the values were in the ranges 1.55 to 4.55 mg/L and 5.17 to 9.62 mg/L, respectively.
NASA Astrophysics Data System (ADS)
Charrier, Jessica G.; McFall, Alexander S.; Vu, Kennedy K.-T.; Baroi, James; Olea, Catalina; Hasson, Alam; Anastasio, Cort
2016-11-01
The dithiothreitol (DTT) assay is widely used to measure the oxidative potential of particulate matter. Results are typically presented in mass-normalized units (e.g., pmols DTT lost per minute per microgram PM) to allow for comparison among samples. Use of this unit assumes that the mass-normalized DTT response is constant and independent of the mass concentration of PM added to the DTT assay. However, based on previous work that identified non-linear DTT responses for copper and manganese, this basic assumption (that the mass-normalized DTT response is independent of the concentration of PM added to the assay) should not be true for samples where Cu and Mn contribute significantly to the DTT signal. To test this we measured the DTT response at multiple PM concentrations for eight ambient particulate samples collected at two locations in California. The results confirm that for samples with significant contributions from Cu and Mn, the mass-normalized DTT response can strongly depend on the concentration of PM added to the assay, varying by up to an order of magnitude for PM concentrations between 2 and 34 μg mL-1. This mass dependence confounds useful interpretation of DTT assay data in samples with significant contributions from Cu and Mn, requiring additional quality control steps to check for this bias. To minimize this problem, we discuss two methods to correct the mass-normalized DTT result and we apply those methods to our samples. We find that it is possible to correct the mass-normalized DTT result, although the correction methods have some drawbacks and add uncertainty to DTT analyses. More broadly, other DTT-active species might also have non-linear concentration-responses in the assay and cause a bias. In addition, the same problem of Cu- and Mn-mediated bias in mass-normalized DTT results might affect other measures of acellular redox activity in PM and needs to be addressed.
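The mass-dependence problem described above can be illustrated with a minimal sketch. All numbers and the saturating (Michaelis-Menten-like) dose-response standing in for Cu/Mn chemistry are invented for demonstration, not measured values:

```python
def dtt_loss(pm_ug_ml, vmax=50.0, km=10.0):
    """Synthetic DTT loss rate (pmol/min) vs PM concentration (ug/mL)."""
    return vmax * pm_ug_ml / (km + pm_ug_ml)

def mass_normalized(pm_ug_ml):
    """DTT response per ug of PM (pmol/min/ug, assuming a 1 mL assay)."""
    return dtt_loss(pm_ug_ml) / pm_ug_ml

# Mass-normalized response across the 2-34 ug/mL range discussed above:
responses = {c: mass_normalized(c) for c in (2, 8, 34)}
# The response per ug falls as the PM concentration rises, so a single
# mass-normalized number is not a property of the sample alone.
```

With a saturating dose-response, the per-microgram value at 2 ug/mL is several times larger than at 34 ug/mL, which is the bias the abstract describes.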
Winman, Anders; Juslin, Peter; Lindskog, Marcus; Nilsson, Håkan; Kerimi, Neda
2014-01-01
The purpose of the study was to investigate how numeracy and acuity of the approximate number system (ANS) relate to the calibration and coherence of probability judgments. Based on the literature on number cognition, a first hypothesis was that those with lower numeracy would maintain a less linear use of the probability scale, contributing to overconfidence and nonlinear calibration curves. A second hypothesis was that also poorer acuity of the ANS would be associated with overconfidence and non-linearity. A third hypothesis, in line with dual-systems theory (e.g., Kahneman and Frederick, 2002) was that people higher in numeracy should have better access to the normative probability rules, allowing them to decrease the rate of conjunction fallacies. Data from 213 participants sampled from the Swedish population showed that: (i) in line with the first hypothesis, overconfidence and the linearity of the calibration curves were related to numeracy, where people higher in numeracy were well calibrated with zero overconfidence. (ii) ANS was not associated with overconfidence and non-linearity, disconfirming the second hypothesis. (iii) The rate of conjunction fallacies was slightly, but to a statistically significant degree decreased by numeracy, but still high at all numeracy levels. An unexpected finding was that participants with better ANS acuity gave more realistic estimates of their performance relative to others. PMID:25140163
Photometric Calibration of Consumer Video Cameras
NASA Technical Reports Server (NTRS)
Suggs, Robert; Swift, Wesley, Jr.
2007-01-01
Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to
ERIC Educational Resources Information Center
Rousseau, Ronald
1994-01-01
Discussion of informetric distributions shows that generalized Leimkuhler functions give proper fits to a large variety of Bradford curves, including those exhibiting a Groos droop or a rising tail. The Kolmogorov-Smirnov test is used to test goodness of fit, and least-squares fits are compared with Egghe's method. (Contains 53 references.) (LRW)
NASA Astrophysics Data System (ADS)
Rantakyrö, Fredrik T.
2017-09-01
The Gemini Planet Imager requires a large set of calibrations. These can be split into two major sets: one associated with each observation, and one related to biweekly calibrations. The observation set is used to optimize the correction of microshifts in the IFU spectra; the latter set is for correction of detector and instrument cosmetics.
Psychophysical contrast calibration
To, Long; Woods, Russell L; Goldstein, Robert B; Peli, Eli
2013-01-01
Electronic displays and computer systems offer numerous advantages for clinical vision testing. Laboratory and clinical measurements of various functions and in particular of (letter) contrast sensitivity require accurately calibrated display contrast. In the laboratory this is achieved using expensive light meters. We developed and evaluated a novel method that uses only psychophysical responses of a person with normal vision to calibrate the luminance contrast of displays for experimental and clinical applications. Our method combines psychophysical techniques (1) for detection (and thus elimination or reduction) of display saturating nonlinearities; (2) for luminance (gamma function) estimation and linearization without use of a photometer; and (3) to measure without a photometer the luminance ratios of the display’s three color channels that are used in a bit-stealing procedure to expand the luminance resolution of the display. Using a photometer we verified that the calibration achieved with this procedure is accurate for both LCD and CRT displays enabling testing of letter contrast sensitivity to 0.5%. Our visual calibration procedure enables clinical, internet and home implementation and calibration verification of electronic contrast testing. PMID:23643843
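Once a display gamma has been estimated (psychophysically, as above, or with a photometer), linearization reduces to building an inverse-gamma lookup table. A minimal sketch follows; the gamma value of 2.2 and the 8-bit range are illustrative assumptions, not the paper's measured values:

```python
GAMMA = 2.2   # assumed estimated display gamma (illustrative)
LEVELS = 256  # assumed 8-bit drive levels

def linearizing_lut(gamma=GAMMA, levels=LEVELS):
    """Map a desired linear luminance fraction to a drive level,
    inverting luminance = (level / (levels - 1)) ** gamma."""
    lut = []
    for i in range(levels):
        target = i / (levels - 1)        # desired linear luminance fraction
        drive = target ** (1.0 / gamma)  # inverse gamma
        lut.append(round(drive * (levels - 1)))
    return lut

lut = linearizing_lut()
# Displaying lut[i] yields luminance approximately proportional to i.
```

Requesting level `lut[i]` instead of `i` then produces (approximately) linear luminance steps, which is the precondition for presenting accurate letter contrasts.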
NASA Astrophysics Data System (ADS)
Hulbert, S.; Hodge, P.; Lindler, D.; Shaw, R.; Goudfrooij, P.; Katsanis, R.; Keener, S.; McGrath, M.; Bohlin, R.; Baum, S.
1997-05-01
Routine calibration of STIS observations in the HST data pipeline is performed by the CALSTIS task. CALSTIS can: subtract the over-scan region and a bias image from CCD observations; remove cosmic ray features from CCD observations; correct global nonlinearities for MAMA observations; subtract a dark image; and, apply flat field corrections. In the case of spectral data, CALSTIS can also: assign a wavelength to each pixel; apply a heliocentric correction to the wavelengths; convert counts to absolute flux; process the automatically generated spectral calibration lamp observations to improve the wavelength solution; rectify two-dimensional (longslit) spectra; subtract interorder and sky background; and, extract one-dimensional spectra. CALSTIS differs in significant ways from the current HST calibration tasks. The new code is written in ANSI C and makes use of a new C interface to IRAF. The input data, reference data, and output calibrated data are all in FITS format, using IMAGE or BINTABLE extensions. Error estimates are computed and include contributions from the reference images. The entire calibration can be performed by one task, but many steps can also be performed individually.
Burrows, John
2013-04-01
Calibration curves for use in chromatographic bioassays are normally constructed using least squares regression analysis. However, the effects of changes in the number and distribution of calibrators, together with the choice between linear and nonlinear regressions and their associated weighting factors, are difficult to quantify. A Monte Carlo simulation software package is under development that uses the assay range, concentration versus response relationship and intra-assay measurement precision profile to quantify the errors resulting from different calibration procedures. Criteria for assay batch acceptance, in terms of calibrator and QC sample estimates being within specified limits, are included in the design. Monte Carlo software provides a means to evaluate calibration strategies and maximize assay batches meeting acceptance criteria.
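A hedged sketch of the kind of simulation described: synthetic calibration batches with a known line and proportional (constant-CV) error are fitted with and without 1/x^2 weighting, then scored against a back-calculation acceptance criterion. The slope, intercept, CV, calibrator levels, and the 15% limit are all invented for illustration:

```python
import random

def fit_weighted(xs, ys, w):
    """Weighted least squares for y = a + b*x, with weight function w(x)."""
    ws = [w(x) for x in xs]
    sw = sum(ws)
    mx = sum(wi * x for wi, x in zip(ws, xs)) / sw
    my = sum(wi * y for wi, y in zip(ws, ys)) / sw
    b = (sum(wi * (x - mx) * (y - my) for wi, x, y in zip(ws, xs, ys))
         / sum(wi * (x - mx) ** 2 for wi, x in zip(ws, xs)))
    return my - b * mx, b

def batch_pass_rate(weight, n_batches=500, cv=0.08, seed=1):
    """Fraction of simulated batches whose back-calculated calibrator
    concentrations all stay within +/-15% of nominal."""
    random.seed(seed)
    cals = [1, 2, 5, 10, 50, 100]  # calibrator concentrations (invented)
    passed = 0
    for _ in range(n_batches):
        # true response 0.5 + 2*x, with proportional (constant-CV) noise
        ys = [(0.5 + 2.0 * x) * random.gauss(1.0, cv) for x in cals]
        a, b = fit_weighted(cals, ys, weight)
        back = [(y - a) / b for y in ys]  # back-calculated concentrations
        if all(abs(bc - x) / x <= 0.15 for bc, x in zip(back, cals)):
            passed += 1
    return passed / n_batches

unweighted = batch_pass_rate(lambda x: 1.0)
weighted = batch_pass_rate(lambda x: 1.0 / x ** 2)
# With proportional error, 1/x^2 weighting typically accepts more batches,
# because the low calibrators are no longer dominated by the high ones.
```

Comparing `unweighted` and `weighted` pass rates is exactly the kind of quantitative comparison of calibration strategies the abstract describes the Monte Carlo package performing.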
Prompt energy calibration at RENO
NASA Astrophysics Data System (ADS)
KIM, Sang Yong; RENO Collaboration
2017-09-01
RENO (Reactor Experiment for Neutrino Oscillation) has obtained the first measured value of the effective neutrino mass difference from a spectral analysis of reactor neutrino disappearance. The measurement relies critically on accurate energy calibration. Several radioactive sources, such as 137Cs, 54Mn, 68Ge, 65Zn, 60Co, Po-Be, Am-Be, and Cf-Ni, are used for the energy calibration of the RENO detectors. We obtained an energy conversion function from observed charge to prompt signal energy that describes the non-linear response due to the quenching effect in the liquid scintillator and to Cherenkov radiation. We have verified the performance of the energy calibration using copious beta-decay events from the radioactive isotope 12B produced by cosmic-muon interactions. The energy calibration was performed separately for the target and gamma-catcher regions because of their different energy responses. In this presentation we describe the methods and results of the energy calibration.
NASA Technical Reports Server (NTRS)
Bate, T.; Calkins, D. E.; Price, P.; Veikins, O.
1971-01-01
Calibrator generates accurate flow velocities over wide range of gas pressure, temperature, and composition. Both pressure and flow velocity can be maintained within 0.25 percent. Instrument is essentially closed loop hydraulic system containing positive displacement drive.
A new method for automated dynamic calibration of tipping-bucket rain gauges
Humphrey, M.D.; Istok, J.D.; Lee, J.Y.; Hevesi, J.A.; Flint, A.L.
1997-01-01
Existing methods for dynamic calibration of tipping-bucket rain gauges (TBRs) can be time consuming and labor intensive. A new automated dynamic calibration system has been developed to calibrate TBRs with minimal effort. The system consists of a programmable pump, datalogger, digital balance, and computer. Calibration is performed in two steps: 1) pump calibration and 2) rain gauge calibration. Pump calibration ensures precise control of water flow rates delivered to the rain gauge funnel; rain gauge calibration ensures precise conversion of bucket tip times to actual rainfall rates. Calibration of the pump and one rain gauge for 10 selected pump rates typically requires about 8 h. Data files generated during rain gauge calibration are used to compute rainfall intensities and amounts from a record of bucket tip times collected in the field. The system was tested using 5 types of commercial TBRs (15.2-, 20.3-, and 30.5-cm diameters; 0.1-, 0.2-, and 1.0-mm resolutions) and using 14 TBRs of a single type (20.3-cm diameter; 0.1-mm resolution). Ten pump rates ranging from 3 to 154 mL min-1 were used to calibrate the TBRs and represented rainfall rates between 6 and 254 mm h-1 depending on the rain gauge diameter. All pump calibration results were very linear with R2 values greater than 0.99. All rain gauges exhibited large nonlinear underestimation errors (between 5% and 29%) that decreased with increasing rain gauge resolution and increased with increasing rainfall rate, especially for rates greater than 50 mm h-1. Calibration curves of bucket tip time against the reciprocal of the true pump rate for all rain gauges also were linear with R2 values of 0.99. Calibration data for the 14 rain gauges of the same type were very similar, as indicated by slope values that were within 14% of each other and ranged from about 367 to 417 s mm h-1. The developed system can calibrate TBRs efficiently, accurately, and virtually unattended and could be modified for use with other
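The two-step calibration above, a linear fit of bucket tip time against the reciprocal of the true (pump) rate followed by inversion in the field, can be sketched as follows. The constants in the synthetic relation T = 400/R + 1.5 are invented for illustration, not taken from the paper's data:

```python
def linfit(xs, ys):
    """Ordinary least squares: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic calibration data: true rate R (mm/h) vs bucket tip time T (s).
rates = [6.0, 25.0, 50.0, 100.0, 254.0]
tip_s = [400.0 / r + 1.5 for r in rates]  # assumed T = 400/R + 1.5

m, c = linfit([1.0 / r for r in rates], tip_s)  # T vs 1/R is linear

def rate_from_tip_time(t_seconds):
    """Invert the calibration: R = m / (T - c), in mm/h."""
    return m / (t_seconds - c)

# A field tip interval of 9.5 s maps back to 50 mm/h in this example.
```

A field record of bucket tip times can then be converted to rainfall intensities by applying `rate_from_tip_time` to each tip interval, which is the conversion step the abstract describes.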
Antenna Calibration and Measurement Equipment
NASA Technical Reports Server (NTRS)
Rochblatt, David J.; Cortes, Manuel Vazquez
2012-01-01
A document describes the Antenna Calibration & Measurement Equipment (ACME) system that will provide the Deep Space Network (DSN) with instrumentation enabling a trained RF engineer at each complex to perform antenna calibration measurements and to generate antenna calibration data. This data includes continuous-scan auto-bore-based data acquisition with all-sky data gathering in support of 4th order pointing model generation requirements. Other data includes antenna subreflector focus, system noise temperature and tipping curves, antenna efficiency, reports system linearity, and instrument calibration. The ACME system design is based on the on-the-fly (OTF) mapping technique and architecture. ACME has contributed to the improved RF performance of the DSN by approximately a factor of two. It improved the pointing performances of the DSN antennas and productivity of its personnel and calibration engineers.
Souza, Margarida C; Martins, Valdomiro L; Almeida, Luciano F; Pessoa Neto, Osmundo D; Gaião, Edvaldo N; Araujo, Mario Cesar U
2010-08-15
An automatic method for kinetics-independent spectrometric analysis is proposed in this study. It uses a non-linear calibration model that explores concentration gradients generated by a flow-batch analyser (FBA) for the samples, the dye, and the single standard solution. The procedure for obtaining the gradients of the dye and standard solution is performed once at the beginning of the analysis; the same procedure is then applied for each sample. For illustration, the proposed automatic methodology was applied to determine total protein and albumin in blood serum using the Biuret and Bromocresol Green (BCG) methods. The measurements were made with a laboratory-made photometer based on a red and green bicolour LED (Light-Emitting Diode) and a phototransistor, coupled to a "Z"-form flow cell. The sample throughput was about 50 h-1 for albumin and 60 h-1 for total protein, consuming about 7 microL of sample, 2.6 mL of BCG, and 1.2 mL of biuret reagents for each determination. Applying the paired t-test to results from the proposed analyser and the reference method, no statistically significant differences at the 95% confidence level were found. The absolute standard deviation was usually smaller than 0.2 g dL-1. The proposed method is valuable for the determination of total protein and albumin, and can also be used in other determinations where kinetic effects may or may not exist.
New approach to calibrating bed load samplers
Hubbell, D.W.; Stevens, H.H.; Skinner, J.V.; Beverage, J.P.
1985-01-01
Cyclic variations in bed load discharge at a point, which are an inherent part of the process of bed load movement, complicate calibration of bed load samplers and preclude the use of average rates to define sampling efficiencies. Calibration curves, rather than efficiencies, are derived by two independent methods using data collected with prototype versions of the Helley‐Smith sampler in a large calibration facility capable of continuously measuring transport rates across a 9 ft (2.7 m) width. Results from both methods agree. Composite calibration curves, based on matching probability distribution functions of samples and measured rates from different hydraulic conditions (runs), are obtained for six different versions of the sampler. Sampled rates corrected by the calibration curves agree with measured rates for individual runs.
NASA Astrophysics Data System (ADS)
Wurz, Peter; Balogh, Andre; Coffey, Victoria; Dichter, Bronislaw K.; Kasprzak, Wayne T.; Lazarus, Alan J.; Lennartsson, Walter; McFadden, James P.
Calibration and characterization of particle instruments with supporting flight electronics is necessary for the correct interpretation of the returned data. Generally speaking, the instrument will always return a measurement value (typically in the form of a digital number), for example a count rate, for the measurement of an external quantity, which could be an ambient neutral gas density, an ion composition (species measured and amount), or an electron density. The returned values are then used to derive parameters associated with the distribution, such as temperature, bulk flow speed, differential energy flux, and others. Calibration establishes the direct relationship between the external quantity and the returned measurement value, so that the data recorded during flight can be correctly interpreted. While calibration and characterization of an instrument are usually done in ground-based laboratories prior to integration of the instrument in the spacecraft, they can also be done in space.
NASA Astrophysics Data System (ADS)
Gluzman, Igal; Cohen, Jacob; Oshman, Yaakov
2016-11-01
We introduce a statistical method based on Gaussianization to estimate the nonlinear calibration curve of a hot-wire probe, which relates the input flow velocity to the output (measured) voltage. The method uses as input a measured sequence of voltage samples, corresponding to different unknown flow velocities in the desired operational range, and only two measured voltages along with their known (calibrated) flow velocities. The novel method is validated against standard calibration methods using data acquired by hot-wire probes in wind-tunnel experiments. We demonstrate the new calibration technique by placing the hot-wire probe in a region downstream of a cube-shaped body in a free stream of air. Testing the calibration method relies on flow statistics that hold in a certain region of the turbulent wake formed downstream of the cube-shaped body. Two properties are required: first, the velocity signal in the wake should be as close to Gaussian as possible; second, the signal should cover the velocity range to be calibrated. The appropriate region in which to place the probe is determined by computing the first four statistical moments of the measured signals in different regions of the wake.
Traveling-Load Calibration of Grid-Array Transient Contact Stress Sensors
Kang, Lu; Baer, Thomas E.; Rudert, M. James; Pedersen, Douglas R.; Brown, Thomas D.
2010-01-01
Thin, pliant transducers with grid arrays of sensing elements (sensels) have been widely used for transient measurements of intra-articular contact stresses. Conventional calibration procedures for this class of sensors are based upon spatially uniform scaling of sensel output values so as to recover two known fiducial loads, physically applied with the sensor either compressed between platens or mounted in situ. Because of the nonlinearities involved, it is desirable to have the highest of those two calibration loadings be such that all individual sensels are engaged at/near the peak of their expected functional range. However, for many situations of practical interest, impracticably large total calibration forces would be required. We report development of a novel pneumatically actuated wringer-like calibration device, and companion iterative post-processing software, that bypasses this longstanding difficulty. Sensors passed through the rollers of this device experience constant-distribution traveling fiducial loads propagating across their surface, thus allowing efficient calibration of all sensels individually to contact stress levels that would be impracticably high to simultaneously apply to all sensels. Sensel-specific calibration curves are rapidly and easily generated using this new approach, and compare favorably to those obtained with less expeditious conventional platen-based protocols. PMID:20537651
NASA Technical Reports Server (NTRS)
Peay, Christopher S.; Palacios, David M.
2011-01-01
Calibrate_Image calibrates images obtained from focal plane arrays so that the output image more accurately represents the observed scene. The function takes as input a degraded image along with a flat field image and a dark frame image produced by the focal plane array and outputs a corrected image. The three most prominent sources of image degradation are corrected for: dark current accumulation, gain non-uniformity across the focal plane array, and hot and/or dead pixels in the array. In the corrected output image the dark current is subtracted, the gain variation is equalized, and values for hot and dead pixels are estimated, using bicubic interpolation techniques.
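A minimal sketch of the three corrections on a tiny synthetic array. The abstract says the real function uses bicubic interpolation for bad pixels; a simple neighbor average stands in here, and all pixel values are invented:

```python
def calibrate_image(raw, dark, flat, bad):
    """Dark-subtract, flat-field, and repair the listed bad pixels."""
    rows, cols = len(raw), len(raw[0])
    # 1) dark-current subtraction, 2) gain (flat-field) equalization
    out = [[(raw[r][c] - dark[r][c]) / flat[r][c]
            for c in range(cols)] for r in range(rows)]
    # 3) estimate hot/dead pixels from their valid neighbors
    for (r, c) in bad:
        nbrs = [out[i][j]
                for i in range(max(0, r - 1), min(rows, r + 2))
                for j in range(max(0, c - 1), min(cols, c + 2))
                if (i, j) != (r, c) and (i, j) not in bad]
        out[r][c] = sum(nbrs) / len(nbrs)
    return out

raw = [[12, 12, 12], [12, 99, 12], [12, 12, 12]]  # 99 is a hot pixel
dark = [[2] * 3 for _ in range(3)]                # dark frame
flat = [[1.0] * 3 for _ in range(3)]              # flat field (unit gain)
img = calibrate_image(raw, dark, flat, bad={(1, 1)})
# Every pixel, including the repaired hot one, comes out as 10.0 here.
```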
Self-calibrating multiplexer circuit
Wahl, Chris P.
1997-01-01
A time domain multiplexer system with automatic determination of acceptable multiplexer output limits, error determination, or correction is comprised of a time domain multiplexer, a computer, a constant current source capable of at least three distinct current levels, and two series resistances employed for calibration and testing. A two point linear calibration curve defining acceptable multiplexer voltage limits may be defined by the computer by determining the voltage output of the multiplexer to very accurately known input signals developed from predetermined current levels across the series resistances. Drift in the multiplexer may be detected by the computer when the output voltage limits, expected during normal operation, are exceeded, or the relationship defined by the calibration curve is invalidated.
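In outline, the two-point calibration and drift check might look like the following sketch. The current levels, series resistance, and tolerance are invented values, not the patent's specifications:

```python
R_CAL = 1000.0             # ohms, series calibration resistance (invented)
I_LO, I_HI = 0.001, 0.004  # A, two known source-current levels (invented)

def two_point_cal(v_lo, v_hi):
    """Return (gain, offset) of the line mapping true input volts to
    measured multiplexer output volts, from two known input signals."""
    x_lo, x_hi = I_LO * R_CAL, I_HI * R_CAL  # known inputs: 1 V and 4 V
    gain = (v_hi - v_lo) / (x_hi - x_lo)
    return gain, v_lo - gain * x_lo

def drifted(v_measured, v_true, gain, offset, tol=0.01):
    """Flag drift when a reading leaves the two-point calibration line."""
    return abs(v_measured - (gain * v_true + offset)) > tol

gain, offset = two_point_cal(v_lo=1.002, v_hi=4.005)
ok = not drifted(2.003, 2.0, gain, offset)  # on the line: no drift
bad = drifted(2.50, 2.0, gain, offset)      # well off the line: drift
```

The computer's role in the abstract reduces to exactly this: fit the line from the two precisely known inputs, then flag readings that violate the relationship it defines.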
Nonlinearly viscoelastic analysis of asphalt mixes subjected to shear loading
NASA Astrophysics Data System (ADS)
Huang, Chien-Wei; Masad, Eyad; Muliana, Anastasia H.; Bahia, Hussain
2007-06-01
This study presents the characterization of the nonlinearly viscoelastic behavior of hot mix asphalt (HMA) at different temperatures and strain levels using Schapery’s model. A recursive-iterative numerical algorithm is generated for the nonlinearly viscoelastic response and implemented in a displacement-based finite element (FE) code. Then, this model is employed to describe experimental frequency sweep measurements of two asphalt mixes with fine and coarse gradations under several combined temperatures and shear strain levels. The frequency sweep measurements are converted to creep responses in the time domain using a phenomenological model (Prony series). The master curve is created for each strain level using the time temperature superposition principle (TTSP) with a reference temperature of 40°C. The linear time-dependent parameters of the Prony series are first determined by fitting a master curve created at the lowest strain level, which in this case is 0.01%. The measurements at strain levels higher than 0.01% are analyzed and used to determine the nonlinear parameters. These parameters are shown to increase with increasing strain levels, while the time temperature shift function is found to be independent of strain levels. The FE model with the calibrated time-dependent and nonlinear material parameters is used to simulate the creep experimental tests, and reasonable predictions are shown.
Multipulse phase resetting curves
NASA Astrophysics Data System (ADS)
Krishnan, Giri P.; Bazhenov, Maxim; Pikovsky, Arkady
2013-10-01
In this paper, we introduce and study systematically, in terms of phase response curves, the effect of dual-pulse excitation on the dynamics of an autonomous oscillator. Specifically, we test the deviations from linear summation of phase advances resulting from two small perturbations. We analytically derive a correction term, which generally appears for oscillators whose intrinsic dimensionality is >1. The nonlinear correction term is found to be proportional to the square of the perturbation. We demonstrate this effect in the Stuart-Landau model and in various higher dimensional neuronal models. This deviation from the superposition principle needs to be taken into account in studies of networks of pulse-coupled oscillators. Further, this deviation could be used in the verification of oscillator models via a dual-pulse excitation.
FY2008 Calibration Systems Final Report
Cannon, Bret D.; Myers, Tanya L.; Broocks, Bryan T.
2009-01-01
The Calibrations project has been exploring alternative technologies for calibration of passive sensors in the infrared (IR) spectral region. In particular, we have investigated using quantum cascade lasers (QCLs) because these devices offer several advantages over conventional blackbodies such as reductions in size and weight while providing a spectral source in the IR with high output power. These devices can provide a rapid, multi-level radiance scheme to fit any nonlinear behavior as well as a spectral calibration that includes the fore-optics, which is currently not available for on-board calibration systems.
NASA Astrophysics Data System (ADS)
Regnault, N.
2015-08-01
The Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) is a massive imaging survey, conducted between 2003 and 2008 with the MegaCam instrument mounted on the CFHT 3.6-m telescope. With a focal plane 1 degree wide, made of 36 sensors of 2048 × 4612 pixels each, totalling 340 megapixels, MegaCam was at the time the largest imager on the sky. The Supernova Legacy Survey (SNLS) uses the cadenced observations of the 4 deg2 wide "DEEP" layer of the CFHTLS to search for and follow up Type Ia supernovae (SNe Ia) and study the acceleration of the cosmic expansion. The reduction and calibration of the CFHTLS/SNLS datasets has posed a series of challenges. In what follows, we give a brief account of the photometric calibration work that has been performed on the SNLS data over the last decade.
Statistical calibration via Gaussianization in hot-wire anemometry
NASA Astrophysics Data System (ADS)
Gluzman, Igal; Cohen, Jacob; Oshman, Yaakov
2017-03-01
A statistical method is introduced, that is based on Gaussianization to estimate the nonlinear calibration curve of a hot-wire probe, relating the input flow velocity to the output (measured) voltage. The method uses as input a measured sequence of voltage samples, corresponding to different unknown flow velocities in the desired operational range, and only two measured voltages along with their known (calibrated) flow velocities. The method relies on the conditions that (1) the velocity signal is Gaussian distributed (or has another known distribution), and (2) the measured signal covers the desired velocity range over which the sensor is to be calibrated. The novel calibration method is validated against standard calibration methods using data acquired by hot-wire probes in wind-tunnel experiments. In these experiments, a hot-wire probe is placed at a certain region downstream of a cube-shaped body in a freestream of air flow, properly selected, so that the central limit theorem, when applied to the random velocity increments composing the instantaneous velocity in the wake, roughly holds, and renders the measured signal nearly Gaussian distributed. The statistical distribution of the velocity field in the wake is validated by mapping the first four statistical moments of the measured signals in different regions of the wake and comparing them with corresponding moments of the Gaussian distribution. The experimental data are used to evaluate the sensitivity of the method to the distribution of the measured signal, and the method is demonstrated to possess some robustness with respect to deviations from the Gaussian distribution.
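A simplified sketch of the Gaussianization idea, under the stated assumption of a Gaussian velocity signal: rank the voltage samples, map ranks to standard-normal quantiles, and pin down the affine velocity scale with the two known calibration points. The data, the plotting-position choice, and the helper function are illustrative, not the authors' algorithm:

```python
from statistics import NormalDist

def gaussianize_calibration(voltages, anchors):
    """voltages: measured samples; anchors: two (voltage, velocity) pairs
    that must appear in voltages. Returns {voltage: velocity estimate}."""
    nd = NormalDist()
    order = sorted(voltages)
    n = len(order)
    # rank -> standard-normal quantile (Hazen plotting position)
    z = {v: nd.inv_cdf((order.index(v) + 0.5) / n) for v in order}
    (v1, u1), (v2, u2) = anchors
    a = (u2 - u1) / (z[v2] - z[v1])  # velocity units per unit z
    b = u1 - a * z[v1]
    return {v: a * z[v] + b for v in voltages}

cal = gaussianize_calibration(
    voltages=[1.0, 1.2, 1.5, 1.7, 2.0],  # invented samples (V)
    anchors=((1.2, 3.0), (1.7, 8.0)))    # two known points (V, m/s)
# cal is monotone in voltage and reproduces both anchor velocities.
```

Because a hot-wire response is monotone in velocity, the rank mapping preserves the ordering, and the two anchors fix the remaining scale and offset, which is why only two calibrated velocities are needed.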
NASA Astrophysics Data System (ADS)
Wang, A. L.
2013-12-01
Accuracy of temperature measurements is vital to many experiments. In this project, we design an algorithm to calibrate thermocouples' temperature measurements. To collect data, we rely on incremental heating to calculate the diffusion coefficients of argon through sanidine glasses. These coefficients change according to an Arrhenius equation that depends on temperature, time, and the size and geometry of the glass; thus, by fixing the type of glass and the time of each heating step, we obtain many data points by varying temperature. Because the dimension of temperature is continuous, obtaining data is simpler in noble gas diffusion experiments than in measuring the discrete melting points of various metals. Because of the nature of electrical connections, the need to reference to the freezing point of ice, thermal gradients in the sample, the time-dependent dissipation of heat into the surroundings, and other inaccuracies in thermocouple temperature measurements, it is necessary to calibrate the experimental measurements against the expected or theoretical measurements. Since the diffusion coefficient is exponential in the inverse of temperature, we transform the exponential D vs. T graph into a linear log(D) vs. 1/T graph. A simple linear regression then yields the equation of the line, and we find a mapping function from the experimental temperature to the expected temperature. By relying on the accuracy of the diffusion coefficient measurement, the mapping function provides the temperature calibration.
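The regression step can be sketched as follows. The pre-exponential factor, activation energy, and the affine thermocouple error model are invented for illustration, not the experiment's values:

```python
import math

R_GAS = 8.314         # J/(mol K)
D0, EA = 1e-4, 180e3  # invented pre-exponential factor and activation energy

def true_temp_from_D(d):
    """Invert the Arrhenius law D = D0 * exp(-EA / (R*T)) for T."""
    return EA / (R_GAS * math.log(D0 / d))

def linfit(xs, ys):
    """Ordinary least squares: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Invented thermocouple error model: it reads T_meas = 0.97*T - 5 (K).
t_true = [900.0, 1000.0, 1100.0]
t_meas = [0.97 * t - 5.0 for t in t_true]
d_obs = [D0 * math.exp(-EA / (R_GAS * t)) for t in t_true]

# Recover the true temperatures from the diffusion coefficients, then
# regress them on the thermocouple readings to get the mapping function.
recovered = [true_temp_from_D(d) for d in d_obs]
slope, intercept = linfit(t_meas, recovered)
# slope ~ 1/0.97 and intercept ~ 5/0.97 recover the assumed error model.
```

The fitted `(slope, intercept)` pair is the mapping function from measured to expected temperature, which is the calibration output the abstract describes.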
A Bionic Polarization Navigation Sensor and Its Calibration Method
Zhao, Huijie; Xu, Wujian
2016-01-01
The polarization patterns of skylight which arise due to the scattering of sunlight in the atmosphere can be used by many insects for deriving compass information. Inspired by insects’ polarized light compass, scientists have developed a new kind of navigation method. One of the key techniques in this method is the polarimetric sensor which is used to acquire direction information from skylight. In this paper, a polarization navigation sensor is proposed which imitates the working principles of the polarization vision systems of insects. We introduce the optical design and mathematical model of the sensor. In addition, a calibration method based on variable substitution and non-linear curve fitting is proposed. The results obtained from the outdoor experiments provide support for the feasibility and precision of the sensor. The sensor’s signal processing can be well described using our mathematical model. A relatively high degree of accuracy in polarization measurement can be obtained without any error compensation. PMID:27527171
SOFIE instrument ground calibration
NASA Astrophysics Data System (ADS)
Hansen, Scott; Fish, Chad; Romrell, Devin; Gordley, Larry; Hervig, Mark
2006-08-01
Space Dynamics Laboratory (SDL), in partnership with GATS, Inc., designed and built an instrument to conduct the Solar Occultation for Ice Experiment (SOFIE). SOFIE is the primary infrared sensor in the NASA Aeronomy of Ice in the Mesosphere (AIM) instrument suite. AIM's mission is to study polar mesospheric clouds (PMCs). SOFIE will make measurements in 16 separate spectral bands, arranged in eight pairs between 0.29 and 5.3 μm. Each band pair will provide differential absorption limb-path transmission profiles for an atmospheric component of interest, by observing the sun through the limb of the atmosphere during solar occultation as AIM orbits Earth. A pointing mirror and imaging sun sensor coaligned with the detectors are used to track the sun during occultation events and maintain stable alignment of the sun on the detectors. Ground calibration experiments were performed to measure SOFIE end-to-end relative spectral response, nonlinearity, and spatial characteristics. SDL's multifunction infrared calibrator #1 (MIC1) was used to present sources to the instrument for calibration. Relative spectral response (RSR) measurements were performed using a step-scan Fourier transform spectrometer (FTS). Out-of-band RSR was measured to approximately 0.01% of in-band peak response using the cascaded filter Fourier transform spectrometer (CFFTS) method. Linearity calibration was performed using a calcium fluoride attenuator in combination with a 3000K blackbody. Spatial characterization was accomplished using a point source and the MIC1 pointing mirror. SOFIE sun sensor tracking algorithms were verified using a heliostat and relay mirrors to observe the sun from the ground. These techniques are described in detail, and resulting SOFIE performance parameters are presented.
ERIC Educational Resources Information Center
Yates, Robert C.
This volume, a reprinting of a classic first published in 1952, presents detailed discussions of 26 curves or families of curves, and 17 analytic systems of curves. For each curve the author provides a historical note, a sketch or sketches, a description of the curve, a discussion of pertinent facts, and a bibliography. Depending upon the curve,…
Relative Locality in Curved Spacetime
NASA Astrophysics Data System (ADS)
Kowalski-Glikman, Jerzy; Rosati, Giacomo
2013-07-01
In this paper we construct the action describing the dynamics of a particle moving in curved spacetime with a nontrivial momentum space geometry. Curved momentum space is the core feature of theories in which relative locality effects are present. So far, aspects of nonlinearities in momentum space have been studied only for flat or constantly expanding (de Sitter) spacetimes, relying on their maximally symmetric nature. The extension of curved momentum space frameworks to arbitrary spacetime geometries could be relevant for opportunities to test Planck-scale curvature/deformation of particles' momentum space. As a first example of this construction we describe a particle with κ-Poincaré momentum space on a circular orbit in Schwarzschild spacetime, where the contributions of momentum space curvature turn out to be negligible. The analysis of this problem relies crucially on the solution of the soccer ball problem.
40 CFR 89.321 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3... all normally used instrument ranges. New calibration curves need not be generated each month if the...) Calibrate on each normally used operating range with NO-in-N2 calibration gases with nominal...
The In-Flight Calibration Program for the XRS on Astro-E2
NASA Technical Reports Server (NTRS)
Cottam, J.; Kilbourne, C. A.
2004-01-01
The X-Ray Spectrometer (XRS) will be launched in February 2005 as part of the Astro-E2 mission. It will provide unprecedented throughput and resolving power, particularly at high energies. In this presentation we will describe the in-flight calibration program. The energy scale of the XRS is a complex, non-linear function of the noise and power conditions on the array. It will be calibrated empirically using the bright point sources Capella and GX301-2. Ground calibration of the line spread function shows it to be almost perfectly Gaussian. The in-flight calibration is designed to verify this using the energy scale targets. The effective area curve of the XRS contains discrete edge structure from the mirrors, the optical blocking filters, and the microcalorimeter HgTe absorbers. The effective area calibration program will simultaneously measure these absorption edges and the global effective area properties using the relatively featureless sources 3C273 and Mrk421. Additional monitoring of any ice build-up on the filters will be conducted using observations of the supernova remnants N132D and E0102.
The In-Flight Calibration Program for the XRS on Astro-E2
NASA Astrophysics Data System (ADS)
Cottam, J.; Kilbourne, C. A.; XRS Instrument Team
2004-08-01
The X-ray Spectrometer (XRS) will be launched in February 2005 as part of the Astro-E2 mission. It will provide unprecedented throughput and resolving powers particularly at high energies. In this presentation we will describe the in-flight calibration program. The energy scale of the XRS is a complex non-linear function of the noise and power conditions on the array. It will be calibrated empirically using the bright point sources, Capella and GX301-2. Ground calibration of the line spread function shows it to be almost perfectly Gaussian. The in-flight calibration is designed to verify this using the energy scale targets. The effective area curve of the XRS contains discrete edge structure from the mirrors, the optical blocking filters, and the microcalorimeter HgTe absorbers. The effective area calibration program will simultaneously measure these absorption edges and the global effective area properties using the relatively featureless sources 3C273 and Mrk421. Additional monitoring of any ice build up on the filters will be conducted using observations of the supernova remnants N132D and E0102.
NASA Astrophysics Data System (ADS)
Zaconte, V.; Altea Team
The ALTEA project is aimed at studying the possible functional damage to the Central Nervous System (CNS) due to particle radiation in the space environment. The project is an international and multi-disciplinary collaboration. The ALTEA facility is a helmet-shaped device that will concurrently study the passage of cosmic radiation through the brain, the functional status of the visual system and the electrophysiological dynamics of the cortical activity. The basic instrumentation is composed of six active particle telescopes, one ElectroEncephaloGraph (EEG), a visual stimulator and a pushbutton. The telescopes detect the passage of each particle, measuring its energy, trajectory and the energy released into the brain, and identifying the nuclear species. The EEG and the visual stimulator measure the functional status of the visual system and the cortical electrophysiological activity, and look for correlations between incident particles, brain activity and Light Flash perceptions. These basic instruments can be used separately or in any combination, permitting several different experiments. ALTEA is scheduled to fly to the International Space Station (ISS) on November 15th, 2004. In this paper the calibration of the Flight Model of the silicon telescopes (Silicon Detector Units - SDUs) will be shown. These measurements were taken at the GSI heavy-ion accelerator in Darmstadt. The first calibration was carried out in November 2003 on the SDU-FM1 using C nuclei at different energies: 100, 150, 400 and 600 MeV/n. We performed a complete beam scan of the SDU-FM1 to check the functionality and homogeneity of all strips of the silicon detector planes; for each beam energy we collected data to achieve good statistics, and finally we placed two different thicknesses of aluminium and Plexiglas in front of the detector in order to study fragmentation. This test was carried out with test equipment simulating the Digital Acquisition Unit (DAU). We are scheduled to
CALUX measurements: statistical inferences for the dose-response curve.
Elskens, M; Baston, D S; Stumpf, C; Haedrich, J; Keupers, I; Croes, K; Denison, M S; Baeyens, W; Goeyens, L
2011-09-30
Chemical Activated LUciferase gene eXpression [CALUX] is a reporter-gene mammalian cell bioassay used for detection and semi-quantitative analysis of dioxin-like compounds. CALUX dose-response curves for 2,3,7,8-tetrachlorodibenzo-p-dioxin [TCDD] are typically smooth and sigmoidal when the dose is portrayed on a logarithmic scale. Non-linear regression models are used to calibrate the CALUX response versus TCDD standards and to convert the sample response into Bioanalytical EQuivalents (BEQs). Several complications may arise in terms of statistical inference, most importantly the uncertainty assessment of the predicted BEQ. This paper presents the use of linear calibration functions based on Box-Cox transformations to overcome the issue of uncertainty assessment. The main issues addressed are (i) confidence and prediction intervals for the CALUX response, (ii) confidence and prediction intervals for the predicted BEQ value, and (iii) detection/estimation capabilities for the sigmoid and linearized models. Statistical comparisons between different calculation methods involving inverse prediction, effective concentration ratios (ECR(20-50-80)) and slope ratios were performed with example datasets in order to provide guidance for optimizing BEQ determinations and expanding assay performance with the recombinant mouse hepatoma CALUX cell line H1L6.1c3.
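The transform-then-regress idea can be sketched as below: Box-Cox-transform the response, fit a straight line against log dose, and recover a BEQ by inverse prediction. The power-law test data and all names are invented, and this is only an illustration of the core idea, not the paper's statistical machinery (which also derives confidence and prediction intervals).

```python
import numpy as np
from scipy import stats

def boxcox_linear_calibration(dose, response, lmbda=None):
    """Linearize a monotone (e.g. sigmoid) dose-response curve with a
    Box-Cox transform of the response and fit a straight line against
    log10(dose).  If lmbda is None, the maximum-likelihood lambda is
    used; a known lambda can also be supplied."""
    if lmbda is None:
        y, lmbda = stats.boxcox(response)        # MLE choice of lambda
    else:
        y = stats.boxcox(np.asarray(response), lmbda)
    x = np.log10(dose)
    fit = stats.linregress(x, y)

    def predict_dose(resp):
        # Inverse prediction: transform the sample response with the
        # same lambda, then invert the calibration line to get a BEQ.
        yt = stats.boxcox(np.atleast_1d(resp), lmbda)
        return 10.0 ** ((yt - fit.intercept) / fit.slope)

    return predict_dose, lmbda, fit.rvalue ** 2
```

Inverse prediction on a linear fit is what makes closed-form uncertainty statements tractable here, in contrast to the four-parameter sigmoid fit.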
Morrison, H; Menon, G; Sloboda, R
2014-08-15
The purpose of this study was to investigate the accuracy of radiochromic film calibration procedures used in external beam radiotherapy when applied to I-125 brachytherapy sources delivering higher doses, and to determine any necessary modifications to achieve similar accuracy in absolute dose measurements. GafChromic EBT3 film was used to measure radiation doses upwards of 35 Gy from 6 MV, 75 kVp and (∼28 keV) I-125 photon sources. A custom phantom was used for the I-125 irradiations to obtain a larger film area with nearly constant dose, to reduce the effects of film heterogeneities on the optical density (OD) measurements. RGB transmission images were obtained with an Epson 10000XL flatbed scanner, and calibration curves relating OD and dose using a rational function were determined for each colour channel and at each energy using a non-linear least-squares minimization method. Differences found between the 6 MV calibration curve and those for the lower-energy sources are large enough that 6 MV beams should not be used to calibrate film for low-energy sources. However, differences between the 75 kVp and I-125 calibration curves were quite small, indicating that 75 kVp is a good choice. Compared with I-125 irradiation, this gives the advantages of lower type B uncertainties and markedly reduced irradiation time. To obtain a high-accuracy calibration for the dose range up to 35 Gy, two-segment piecewise fitting was required. This yielded an absolute dose measurement accuracy above 1 Gy of ∼2% for 75 kVp and ∼5% for I-125 seed exposures.
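A rational-function film calibration fitted by non-linear least squares, as described above, can be sketched like this. The abstract does not give the exact functional form, so the particular rational function OD(D) = (a + b·D)/(c + D) and all parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def rational(dose, a, b, c):
    """Assumed rational calibration form OD(D) = (a + b*D) / (c + D)."""
    return (a + b * dose) / (c + dose)

def fit_film_calibration(dose, od):
    """Non-linear least-squares fit of the rational calibration curve,
    returning the parameters and an OD-to-dose inverse function."""
    popt, _ = curve_fit(rational, dose, od, p0=(0.0, 1.0, 10.0))
    a, b, c = popt

    def dose_from_od(od_meas):
        # Invert OD = (a + b*D)/(c + D)  ->  D = (a - c*OD)/(OD - b)
        return (a - c * od_meas) / (od_meas - b)

    return popt, dose_from_od
```

In practice one such fit would be made per colour channel and per beam energy; the two-segment piecewise fitting mentioned in the abstract would apply this over two adjoining dose ranges.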
TS-Dean interactions in curved channel flow
NASA Technical Reports Server (NTRS)
Singer, Bart A.; Zang, Thomas A.; Erlebacher, Gordon
1990-01-01
A weakly nonlinear theory is developed to study the interaction of TS waves and Dean vortices in curved channel flow. The predictions obtained from the theory agree well with results obtained from direct numerical simulations of curved channel flow, especially for low-amplitude disturbances. At low Reynolds numbers the wave interaction is generally stabilizing to both disturbances, though as the Reynolds number increases, many linearly unstable TS waves are further destabilized by the presence of Dean vortices.
NASA Astrophysics Data System (ADS)
Greywall, Dennis S.; Busch, Paul A.
1982-03-01
Precise measurements of the P-T relation along the melting curve of ³He have been made for 8 ≲ T ≲ 330 mK. The results are in excellent agreement with other precise data for temperatures near the extremes of this range. A best-fit relation is provided which describes the melting curve to within ±1 mbar between the superfluid A transition and the pressure minimum. Detailed descriptions of the melting curve and magnetic thermometers used for the calibration are also given.
SOFIE instrument ground calibration update
NASA Astrophysics Data System (ADS)
Hansen, Scott; Fish, Chad; Shumway, Andrew; Gordley, Larry; Hervig, Mark
2007-09-01
Space Dynamics Laboratory (SDL), in partnership with GATS, Inc., designed and built an instrument to conduct the Solar Occultation for Ice Experiment (SOFIE). SOFIE is an infrared sensor in the NASA Aeronomy of Ice in the Mesosphere (AIM) instrument suite. AIM's mission is to study polar mesospheric clouds (PMCs). SOFIE will make measurements in 16 separate spectral bands, arranged in 8 pairs between 0.29 and 5.3 μm. Each band pair will provide differential absorption limb-path transmission profiles for an atmospheric component of interest, by observing the sun through the limb of the atmosphere during solar occultation as AIM orbits Earth. The AIM mission was launched in April 2007. SOFIE originally completed calibration and was delivered in March 2006. The design originally included a steering mirror coaligned with the science detectors to track the sun during occultation events. During spacecraft integration, a test anomaly resulted in damage to the steering mirror mechanism, which led to the removal of this hardware from the instrument. Subsequently, additional ground calibration experiments were performed to validate the sensor performance following the change. Measurements performed in this additional phase of calibration testing included SOFIE end-to-end relative spectral response, nonlinearity, and spatial characterization. SDL's multifunction infrared calibrator #1 (MIC1) was used to present sources to the instrument for calibration. Relative spectral response (RSR) measurements were performed using a step-scan Fourier transform spectrometer (FTS). Out-of-band RSR was measured to approximately 0.01% of in-band peak response using the cascaded filter Fourier transform spectrometer (CFFTS) method. Linearity calibration was performed using a calcium fluoride attenuator in combination with a 3000 K blackbody. Spatial characterization was accomplished using a point source and the MIC1 pointing mirror. These techniques are described in detail, and resulting
Curved Finite Elements and Curve Approximation
NASA Technical Reports Server (NTRS)
Baart, M. L.
1985-01-01
The approximation of parameterized curves by segments of parabolas that pass through the endpoints of each curve segment arises naturally in all quadratic isoparametric transformations. While not as popular as cubics in curve design problems, the use of parabolas allows the introduction of a geometric measure of the discrepancy between given and approximating curves. The free parameters of the parabola may be used to optimize the fit, and constraints that prevent overspill and curve degeneracy are introduced. This leads to a constrained optimization problem in two variables that can be solved quickly and reliably by a simple method that takes advantage of the special structure of the problem. For applications in the field of computer-aided design, the given curves are often cubic polynomials, and the coefficients may be calculated in closed form in terms of the polynomial coefficients by using a symbolic machine language so that families of curves can be approximated with no further integration. For general curves, numerical quadrature may be used, as in the implementation where the Romberg quadrature is applied. The coefficient functions C sub 1 (gamma) and C sub 2 (gamma) are expanded as polynomials in gamma, so that for given A(s) and B(s) the integrations need only be done once. The method was used to find optimal constrained parabolic approximation to a wide variety of given curves.
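The core construction, a parabola constrained to pass through the segment endpoints with its remaining freedom used to optimize the fit, can be sketched as a quadratic Bezier fit. This simplified stand-in minimizes the discrepancy at uniformly spaced parameter values rather than using the paper's geometric discrepancy measure and constraints; all names are invented.

```python
import numpy as np
from scipy.optimize import minimize

def parabola_fit(points):
    """Approximate a sampled curve segment by a quadratic Bezier (a
    parabola) that passes through the segment endpoints, with the middle
    control point free -- a simple stand-in for the paper's constrained
    optimization."""
    points = np.asarray(points, dtype=float)
    p0, p2 = points[0], points[-1]
    t = np.linspace(0.0, 1.0, len(points))

    def residual(c):
        # Quadratic Bezier with fixed endpoints p0, p2 and free
        # middle control point c, evaluated at uniform parameters.
        p1 = np.asarray(c)
        curve = (np.outer((1 - t) ** 2, p0)
                 + np.outer(2 * t * (1 - t), p1)
                 + np.outer(t ** 2, p2))
        return np.sum((curve - points) ** 2)

    res = minimize(residual, (p0 + p2) / 2)
    return p0, res.x, p2
```

The endpoint interpolation constraint is built in by construction, so only the two coordinates of the middle control point are optimized, mirroring the two-variable problem in the abstract.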
Nonlinear Growth Models in Mplus and SAS
ERIC Educational Resources Information Center
Grimm, Kevin J.; Ram, Nilam
2009-01-01
Nonlinear growth curves, or growth curves that follow a specified nonlinear function in time, enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this article we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the nonlinear…
Auto calibration of a cone-beam-CT
Gross, Daniel; Heil, Ulrich; Schulze, Ralf; Schoemer, Elmar; Schwanecke, Ulrich
2012-10-15
Purpose: This paper introduces a novel autocalibration method for cone-beam-CTs (CBCT) or flat-panel CTs, assuming a perfect rotation. The method is based on ellipse-fitting. Autocalibration refers to accurate recovery of the geometric alignment of a CBCT device from projection images alone, without any manual measurements. Methods: The authors use test objects containing small arbitrarily positioned radio-opaque markers. No information regarding the relative positions of the markers is used. In practice, the authors use three to eight metal ball bearings (diameter of 1 mm), e.g., positioned roughly in a vertical line such that their projection image curves on the detector preferably form large ellipses over the circular orbit. From this ellipse-to-curve mapping and also from its inversion the authors derive an explicit formula. Nonlinear optimization based on this mapping enables them to determine the six relevant parameters of the system up to the device rotation angle, which is sufficient to define the geometry of a CBCT-machine assuming a perfect rotational movement. These parameters also include out-of-plane rotations. The authors evaluate their method by simulation based on data used in two similar approaches [L. von Smekal, M. Kachelriess, E. Stepina, and W. A. Kalender, 'Geometric misalignment and calibration in cone-beam tomography,' Med. Phys. 31(12), 3242-3266 (2004); K. Yang, A. L. C. Kwan, D. F. Miller, and J. M. Boone, 'A geometric calibration method for cone beam CT systems,' Med. Phys. 33(6), 1695-1706 (2006)]. This allows a direct comparison of accuracy. Furthermore, the authors present real-world 3D reconstructions of a dry human spine segment and an electronic device. The reconstructions were computed from projections taken with a commercial dental CBCT device having two different focus-to-detector distances that were both calibrated with their method. The authors compare their reconstruction with a reconstruction computed by the manufacturer of the CBCT device to
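The core primitive of such autocalibration is fitting an ellipse to the projected trajectory that each ball bearing traces on the detector over the circular orbit. A minimal sketch of that step is a least-squares fit of a general conic; recovering the six geometric parameters from the ellipse coefficients, the heart of the paper, is not shown and the interface here is invented.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of a general conic
        a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    to projected marker positions (x, y) on the detector."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # The conic coefficients span the (numerical) null space of D:
    # take the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]  # (a, b, c, d, e, f), up to scale
```

For an ellipse the discriminant b² − 4ac of the fitted conic is negative, which provides a quick sanity check on each marker trajectory before the nonlinear geometry optimization.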
Auto calibration of a cone-beam-CT.
Gross, Daniel; Heil, Ulrich; Schulze, Ralf; Schoemer, Elmar; Schwanecke, Ulrich
2012-10-01
This paper introduces a novel autocalibration method for cone-beam-CTs (CBCT) or flat-panel CTs, assuming a perfect rotation. The method is based on ellipse-fitting. Autocalibration refers to accurate recovery of the geometric alignment of a CBCT device from projection images alone, without any manual measurements. The authors use test objects containing small arbitrarily positioned radio-opaque markers. No information regarding the relative positions of the markers is used. In practice, the authors use three to eight metal ball bearings (diameter of 1 mm), e.g., positioned roughly in a vertical line such that their projection image curves on the detector preferably form large ellipses over the circular orbit. From this ellipse-to-curve mapping and also from its inversion the authors derive an explicit formula. Nonlinear optimization based on this mapping enables them to determine the six relevant parameters of the system up to the device rotation angle, which is sufficient to define the geometry of a CBCT-machine assuming a perfect rotational movement. These parameters also include out-of-plane rotations. The authors evaluate their method by simulation based on data used in two similar approaches [L. von Smekal, M. Kachelriess, E. Stepina, and W. A. Kalender, "Geometric misalignment and calibration in cone-beam tomography," Med. Phys. 31(12), 3242-3266 (2004); K. Yang, A. L. C. Kwan, D. F. Miller, and J. M. Boone, "A geometric calibration method for cone beam CT systems," Med. Phys. 33(6), 1695-1706 (2006)]. This allows a direct comparison of accuracy. Furthermore, the authors present real-world 3D reconstructions of a dry human spine segment and an electronic device. The reconstructions were computed from projections taken with a commercial dental CBCT device having two different focus-to-detector distances that were both calibrated with their method. The authors compare their reconstruction with a reconstruction computed by the manufacturer of the CBCT device to demonstrate the
Traceable Pyrgeometer Calibrations
Dooraghi, Mike; Kutchenreiter, Mark; Reda, Ibrahim; Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Newman, Martina
2016-05-02
This poster presents the development, implementation, and operation of the Broadband Outdoor Radiometer Calibrations (BORCAL) Longwave (LW) system at the Southern Great Plains Radiometric Calibration Facility for the calibration of pyrgeometers that provide traceability to the World Infrared Standard Group.
Principal Curves on Riemannian Manifolds.
Hauberg, Soren
2016-09-01
Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both is geodesic and must pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves from Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent space of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.
Calibration of sound calibrators: an overview
NASA Astrophysics Data System (ADS)
Milhomem, T. A. B.; Soares, Z. M. D.
2016-07-01
This paper presents an overview of the calibration of sound calibrators. Initially, traditional calibration methods are presented. Next, the international standard IEC 60942 is discussed, emphasizing parameters, target measurement uncertainty, and criteria for conformance to the requirements of the standard. Finally, Regional Metrology Organization comparisons are summarized.
Calibrated automated thrombin generation measurement in clotting plasma.
Hemker, H Coenraad; Giesen, Peter; Al Dieri, Raed; Regnault, Véronique; de Smedt, Eric; Wagenvoord, Rob; Lecompte, Thomas; Béguin, Suzette
2003-01-01
Calibrated automated thrombography displays the concentration of thrombin in clotting plasma with or without platelets (platelet-rich plasma/platelet-poor plasma, PRP/PPP) in up to 48 samples by monitoring the splitting of a fluorogenic substrate and comparing it to a constant known thrombin activity in a parallel, non-clotting sample. Thus, the non-linearity of the reaction rate with thrombin concentration is compensated for, and adding an excess of substrate can be avoided. Standard conditions were established at which acceptable experimental variation accompanies sensitivity to pathological changes. The coefficients of variation of the surface under the curve (endogenous thrombin potential) are: within experiment approximately 3%; intra-individual: <5% in PPP, <8% in PRP; interindividual 15% in PPP and 19% in PRP. In PPP, calibrated automated thrombography shows all clotting factor deficiencies (except factor XIII) and the effect of all anticoagulants [AVK, heparin(-likes), direct inhibitors]. In PRP, it is diminished in von Willebrand's disease, but it also shows the effect of platelet inhibitors (e.g. aspirin and abciximab). Addition of activated protein C (APC) or thrombomodulin inhibits thrombin generation and reflects disorders of the APC system (congenital and acquired resistance, deficiencies and lupus antibodies) independent of concomitant inhibition of the procoagulant pathway as for example by anticoagulants. Copyright 2003 S. Karger AG, Basel
Methods and guidelines for effective model calibration
Hill, M.C.
2004-01-01
This paper briefly describes nonlinear regression methods, a set of 14 guidelines for model calibration, how they are implemented in and supported by two public domain computer programs, and a demonstration and a test of the methods and guidelines. Copyright ASCE 2004.
Using self-calibrating thermocouples in industry
Ruppel, F.R.
1989-01-01
The self-calibrating thermocouple is a thermocouple with a low-melting-point, high-purity metal encapsulated near, but metallurgically isolated from, its thermojunction. It is designed to provide a single-point calibration of the thermocouple at the melting point of the encapsulated metal, because the time-temperature curve of the thermocouple will plateau at this temperature during heating or cooling. The calibration procedure consists of comparing the plateau temperature with the known melting-point temperature of the encapsulated metal. The difference between these two values is the thermocouple error at the calibration point. The device is commercially available, but to be effective in industry it must be augmented with a data acquisition system with an algorithm that will automatically report the calibration error. 5 refs., 7 figs.
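The plateau-comparison procedure above lends itself to a simple automatic algorithm: find the flattest sustained stretch of the time-temperature record and subtract the known melting point. The sketch below uses a heating-rate threshold to locate the plateau; the threshold, melting point and data are hypothetical, and a production system would add validity checks.

```python
import numpy as np

def plateau_calibration_error(times, temps, melt_point, slope_tol=0.02):
    """Locate the melting plateau in a thermocouple heating curve and
    return the calibration error there (plateau reading minus the known
    melting point of the encapsulated metal).

    Minimal sketch: the plateau is taken as the longest run of samples
    whose local heating rate stays below slope_tol (degrees per time unit).
    """
    rate = np.gradient(temps, times)
    flat = np.abs(rate) < slope_tol
    # Find the longest run of "flat" samples = the melting plateau.
    best_len, best_start, run_start = 0, 0, None
    for i, is_flat in enumerate(np.append(flat, False)):  # sentinel closes last run
        if is_flat and run_start is None:
            run_start = i
        elif not is_flat and run_start is not None:
            if i - run_start > best_len:
                best_len, best_start = i - run_start, run_start
            run_start = None
    plateau = temps[best_start:best_start + best_len]
    return float(np.mean(plateau)) - melt_point
```

The returned value is the thermocouple error at the calibration point, which the data acquisition system would report automatically.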
Calibration system for albedo neutron dosimeters
Rothermich, N.E.
1981-01-01
Albedo neutron dosimeters have proven effective as a method of measuring the dose from neutron exposures that other types of neutron detectors cannot measure. Results of research conducted to calibrate an albedo neutron dosimeter are presented. The calibration procedure consisted of exposing the TLD chips to a 46-curie ²³⁸PuBe source at known distances, dose rates and exposure periods. The response of the TLDs is related to the dose rate measured with a dose rate meter to obtain the calibration factor. This calibration factor is then related to the ratio of the counting rates determined by 9-inch and 3-inch Bonner spheres (also called remmeters), and a calibration curve was determined. 17 references, 10 figures, 3 tables.
Research on calibration method of relative infrared radiometer
NASA Astrophysics Data System (ADS)
Yang, Sen; Li, Chengwei
2016-02-01
The Relative Infrared Radiometer (RIR) is commonly used to measure the irradiance of the Infrared Target Simulator (ITS), and the calibration of the RIR is central to the measurement accuracy. RIR calibration is conventionally performed using the Radiance Based (RB) or Irradiance Based (IB) calibration method, and the relationship between the radiation of the standard source and the response of the RIR is determined by curve fitting. One limitation in the calibration of the RIR is undesirable fluctuation of the calibration voltage, in single measurements or in reproducibility measurements, which reduces the calibration reproducibility and the irradiance measurement accuracy. To address this limitation, the Equivalent Blackbody Temperature Based (EBTB) calibration method is proposed for the calibration of the RIR. The purpose of this study is to compare the proposed EBTB calibration method with the conventional RB and IB calibration methods. The comparison and experimental results show that the EBTB calibration method not only provides a correlation between radiation and response comparable to the other calibration methods (IB and RB) in the irradiance measurement, but also reduces the influence of calibration voltage fluctuation on the irradiance measurement result, improving the calibration reproducibility and irradiance measurement accuracy.
Research on calibration method of relative infrared radiometer.
Yang, Sen; Li, Chengwei
2016-02-01
The Relative Infrared Radiometer (RIR) is commonly used to measure the irradiance of the Infrared Target Simulator (ITS), and the calibration of the RIR is central to the measurement accuracy. RIR calibration is conventionally performed using the Radiance Based (RB) or Irradiance Based (IB) calibration method, and the relationship between the radiation of the standard source and the response of the RIR is determined by curve fitting. One limitation in the calibration of the RIR is undesirable fluctuation of the calibration voltage, in single measurements or in reproducibility measurements, which reduces the calibration reproducibility and the irradiance measurement accuracy. To address this limitation, the Equivalent Blackbody Temperature Based (EBTB) calibration method is proposed for the calibration of the RIR. The purpose of this study is to compare the proposed EBTB calibration method with the conventional RB and IB calibration methods. The comparison and experimental results show that the EBTB calibration method not only provides a correlation between radiation and response comparable to the other calibration methods (IB and RB) in the irradiance measurement, but also reduces the influence of calibration voltage fluctuation on the irradiance measurement result, improving the calibration reproducibility and irradiance measurement accuracy.
Experimental simulation of closed timelike curves.
Ringbauer, Martin; Broome, Matthew A; Myers, Casey R; White, Andrew G; Ralph, Timothy C
2014-06-19
Closed timelike curves are among the most controversial features of modern physics. As legitimate solutions to Einstein's field equations, they allow for time travel, which instinctively seems paradoxical. However, in the quantum regime these paradoxes can be resolved, leaving closed timelike curves consistent with relativity. The study of these systems therefore provides valuable insight into nonlinearities and the emergence of causal structures in quantum mechanics--essential for any formulation of a quantum theory of gravity. Here we experimentally simulate the nonlinear behaviour of a qubit interacting unitarily with an older version of itself, addressing some of the fascinating effects that arise in systems traversing a closed timelike curve. These include perfect discrimination of non-orthogonal states and, most intriguingly, the ability to distinguish nominally equivalent ways of preparing pure quantum states. Finally, we examine the dependence of these effects on the initial qubit state, the form of the unitary interaction and the influence of decoherence.
FTIR Calibration Methods and Issues
NASA Astrophysics Data System (ADS)
Perron, Gaetan
…points, complex calibration algorithm, detector non-linearity, pointing errors, pointing jitter, fringe count errors, spikes, and ice contamination. These will be discussed and illustrated using real data. Finally, an outlook will be given for future missions.
Definition of energy-calibrated spectra for national reachback
Kunz, Christopher L.; Hertz, Kristin L.
2014-01-01
Accurate energy calibration is critical for the timeliness and accuracy of analysis results of spectra submitted to National Reachback, particularly for the detection of threat items. Many spectra submitted for analysis include either a calibration spectrum using ¹³⁷Cs or no calibration spectrum at all. The single line provided by ¹³⁷Cs is insufficient to adequately calibrate nonlinear spectra. A calibration source that provides several lines that are well-spaced, from the low energy cutoff to the full energy range of the detector, is needed for a satisfactory energy calibration. This paper defines the requirements of an energy calibration for the purposes of National Reachback, outlines a method to validate whether a given spectrum meets that definition, discusses general source considerations, and provides a specific operating procedure for calibrating the GR-135.
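The requirement above can be illustrated with a hedged sketch: several well-spaced lines let a quadratic channel-to-energy calibration be fit by least squares, which a single ¹³⁷Cs line cannot constrain. The peak centroids and line energies below are illustrative, not from the paper.

```python
import numpy as np

# Hypothetical peak centroids (channel) for a multi-line check source, paired
# with known gamma energies (keV); one 137Cs line could not constrain the
# quadratic term that models detector nonlinearity.
channels = np.array([120.0, 480.0, 950.0, 1900.0, 2750.0])
energies = np.array([59.5, 238.6, 477.6, 969.0, 1408.0])  # illustrative values

# Quadratic energy calibration: E(ch) = a*ch**2 + b*ch + c
a, b, c = np.polyfit(channels, energies, 2)

residuals = energies - np.polyval((a, b, c), channels)
print("max calibration residual (keV):", np.max(np.abs(residuals)))
```

With only one line, any (a, b, c) passing through that single point would fit equally well, which is exactly why a ¹³⁷Cs-only calibration is insufficient for nonlinear spectra.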
Nonlinear, discrete flood event models, 2. Assessment of statistical nonlinearity
NASA Astrophysics Data System (ADS)
Bates, Bryson C.
1988-05-01
The first paper (Part 1) of this series presented a Bayesian procedure for the estimation of parameters in nonlinear, discrete flood event models. Part 2 begins with a discussion of the concept of nonlinearity in parameter estimation, its consequences, and the need to assess its extent. Three measures of nonlinearity are considered: Beale's measure, a bias calculation, and the maximum curvature measures devised by Bates and Watts (1980). A case study is presented, using the model and data described in Part 1. The results show quite clearly that care is required in the application of all three measures to calibrated flood models, and in the interpretation of the measured values.
A Quick and Easy Multiple Use Calibration Curve Procedure
1988-08-01
…atomic absorption spectroscopy, for which there are twelve data points as shown in Table 1. Figure 2a is a plot of the data of Table 1; Figure 2b is a plot… (ONR Contract N00014-83-K-0005 and NR-042544.) Table 1, Atomic Absorption Spectroscopy (excerpt; six of the twelve (z, y) points legible): z = 0.0, 0.0, 0.0, 0.0, 0.050, 0.050; y = 0.045, 0.047, 0.051, 0.054, 0.084, 0.087.
LWIR Stellar Calibration: Infrared Spectral Curves for 30 Standard Stars
1991-04-10
[Excerpt garbled; recoverable content: spectral curves and data-fit residuals for standard stars, including Alpha Orionis (Betelgeuse; M2-I supergiant with circumstellar dust), Beta Pegasi (Scheat; CO bands, spectral type M2.5 II to III, T_eff ≈ 3470 K), and Alpha Scorpii (Antares; M2-I), plus a table of M-type giants and supergiants.]
Technical Note: Calibrating radiochromic film in beams of uncertain quality.
Peet, Samuel C; Wilks, Rachael; Kairn, Tanya; Trapp, Jamie V; Crowe, Scott B
2016-10-01
The dose-response of radiochromic film has been shown to be dependent on the quality of the incident radiation, particularly at low energies. Difficulty therefore arises when a calibration is required for radiation of uncertain energy. This study investigates the ability of a recently published calibration method [see M. Tamponi et al., "A new form of the calibration curve in radiochromic dosimetry. Properties and results," Med. Phys. 43, 4435-4446 (2016)] to reduce the energy dependence of radiochromic film. This allows for corrections to be applied that may improve the accuracy and precision of measurements taken in beams of uncertain energy or where the beam quality is known but calibration doses cannot be delivered. Gafchromic EBT3 film was irradiated with a range of superficial, orthovoltage, and high-energy photon beams. Calibrations were then applied using a typical net optical density approach and compared with the Tamponi et al. method, which instead defines the response as a ratio of two net optical densities. To quantify the energy dependence, the response at each beam quality and dose was then normalized to the response at a preselected reference quality. This resulted in a relative measure that could be used to correct the calibration curve at the reference beam quality to any other quality of interest. The Tamponi et al. calibration method resulted in substantially less energy dependence compared to the standard net optical density approach, without compromising the calibration fit. The maximum deviation from the reference beam calibration curve was 7% across the range of energies and doses analyzed, reducing to <3% for doses greater than 200 cGy. However, the ability of the calibration curve to fit the data deteriorated as the curve was refitted with measurements at higher doses than those originally studied. The Tamponi et al. calibration method, based on the ratio of two net optical densities, considerably reduces the energy dependence of radiochromic film.
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
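The propagation of individual measurement uncertainties through a defining functional expression can be sketched as a first-order Taylor expansion with a correlation term; the dynamic-pressure example and all numbers below are assumptions for illustration, not from the paper.

```python
import numpy as np

# First-order (Taylor) propagation of uncertainty through a defining functional
# expression, including a covariance term. Illustrative aerodynamic example:
# dynamic pressure q = 0.5 * rho * v**2.
rho, v = 1.225, 50.0       # measured air density (kg/m^3) and velocity (m/s)
u_rho, u_v = 0.010, 0.25   # standard uncertainties
cov_rho_v = 0.0005         # covariance from a shared influence (assumed)

# Partial derivatives of q with respect to each input
dq_drho = 0.5 * v**2
dq_dv = rho * v

u_q = np.sqrt((dq_drho * u_rho)**2 + (dq_dv * u_v)**2
              + 2 * dq_drho * dq_dv * cov_rho_v)
print(f"q = {0.5*rho*v**2:.1f} Pa, u(q) = {u_q:.1f} Pa")
```

Dropping the covariance term when the inputs are actually correlated is exactly the kind of omission the paper's treatment of correlated precision error addresses.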
From principal curves to granular principal curves.
Zhang, Hongyun; Pedrycz, Witold; Miao, Duoqian; Wei, Zhihua
2014-06-01
Principal curves, arising as an essential construct in dimensionality reduction and data analysis, have recently attracted much attention from both theoretical and practical perspectives. In many real-world situations, however, the efficiency of existing principal curves algorithms is often arguable, in particular when dealing with massive data, owing to the associated high computational complexity. A further drawback of these constructs stems from the fact that in several applications principal curves cannot fully capture some essential problem-oriented facets of the data dealing with width, aspect ratio, width change, etc. Information granulation is a powerful tool supporting the processing and interpretation of massive data. In this paper, invoking the underlying ideas of information granulation, we propose a granular principal curves approach, regarded as an extension of principal curves algorithms, to improve efficiency and achieve a sound accuracy-efficiency tradeoff. First, large amounts of numerical data are granulated into C interval information granules, developed with the use of fuzzy C-means clustering and the two criteria of information granulation, which significantly reduces the amount of data to be processed at the later phase of the overall design. Granular principal curves are then constructed by determining the upper and lower bounds of the interval data. Finally, we develop an objective function using the criteria of information confidence and specificity to evaluate the granular output formed by the principal curves. We also optimize the granular principal curves by adjusting the level of information granularity (the number of clusters), realized with the aid of particle swarm optimization. A number of numeric studies completed for synthetic and real-world datasets provide a useful quantifiable insight into the effectiveness of the proposed algorithm.
Method of biodosimeter calibration for orbital flight
NASA Astrophysics Data System (ADS)
Vladislav, Petrov
A biodosimetry method, based on estimating an absorbed dose from the frequency of unstable aberrations in human peripheral blood lymphocytes, is used broadly on the ground for analysis of accidental exposures of personnel and the public. A calibration curve giving the relationship between aberration frequency (generally dicentrics and centric rings) and absorbed dose in blood samples is used for assessment of crewmember exposure. As a rule, gamma rays, corresponding to the character of exposure in such accidents, are used for this purpose. At the same time, space radiation fields are formed mainly by charged particles, whose effect on the body's cells and tissues differs strongly from that of gamma rays. Since biodosimetry is a relative method of dose measurement, it is necessary to obtain a calibration curve corresponding to the conditions in which the measurements will be performed. That is, a calibration curve for space application should give the relationship between aberration frequency and a dose formed by a radiation field equal to that on the spacecraft trajectory. The report contains a method of obtaining a calibration curve for the case of an orbital flight on the ISS trajectory. The radiobiological basis of the method consists of relationships between chromosomal aberration frequency in human blood lymphocytes and absorbed dose of protons at four energies (50 MeV, 150 MeV, 400 MeV, 625 MeV) obtained in accelerator experiments. Because experimental data were available only for protons, the calibration curve was obtained for the proton component on the ISS orbit, which is formed mainly by trapped protons. The dose spectrum for this energy distribution of protons was calculated, and weighting coefficients accounting for the contribution of protons of various energies to the total frequency of chromosomal aberrations were obtained on its basis. The procedure of obtaining such weighting
Boyd, R.W. (Inst. of Optics)
1992-01-01
Nonlinear optics is the study of the interaction of intense laser light with matter. This book is a textbook on nonlinear optics at the level of a beginning graduate student. The intent of the book is to provide an introduction to the field of nonlinear optics that stresses fundamental concepts and that enables the student to go on to perform independent research in this field. This book covers the areas of nonlinear optics, quantum optics, quantum electronics, laser physics, electrooptics, and modern optics.
Interpolation Errors in Thermistor Calibration Equations
NASA Astrophysics Data System (ADS)
White, D. R.
2017-04-01
Thermistors are widely used temperature sensors capable of measurement uncertainties approaching those of standard platinum resistance thermometers. However, the extreme nonlinearity of thermistors means that complicated calibration equations are required to minimize the effects of interpolation errors and achieve low uncertainties. This study investigates the magnitude of interpolation errors as a function of temperature range and the number of terms in the calibration equation. Approximation theory is used to derive an expression for the interpolation error and indicates that the temperature range and the number of terms in the calibration equation are the key influence variables. Numerical experiments based on published resistance-temperature data confirm these conclusions and additionally give guidelines on the maximum and minimum interpolation error likely to occur for a given temperature range and number of terms in the calibration equation.
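A small numerical experiment in the spirit described, though not the paper's own analysis: synthetic resistance-temperature data are generated from the classic Steinhart-Hart equation (textbook coefficients assumed), and the maximum interpolation error is compared for two- and three-term calibration equations.

```python
import numpy as np

# Sketch: interpolation error vs. number of terms in a thermistor calibration
# equation of the form 1/T = a0 + a1*lnR + a3*(lnR)**3. The "true" behaviour is
# the Steinhart-Hart equation with textbook coefficients (assumed values).
a, b, c = 1.129241e-3, 2.341077e-4, 8.775468e-8
lnR = np.linspace(np.log(1e3), np.log(100e3), 9)   # 9 calibration points
invT = a + b*lnR + c*lnR**3                        # 1/T at the calibration points

def fit_and_max_error(n_terms):
    """Fit n_terms of the basis [1, lnR, lnR**3]; return max |T error| in K."""
    powers = [0, 1, 3][:n_terms]
    A = np.column_stack([lnR**p for p in powers])
    coef, *_ = np.linalg.lstsq(A, invT, rcond=None)
    # evaluate on a dense grid between the calibration points
    x = np.linspace(lnR[0], lnR[-1], 500)
    T_fit = 1.0 / np.column_stack([x**p for p in powers]).dot(coef)
    T_true = 1.0 / (a + b*x + c*x**3)
    return np.max(np.abs(T_fit - T_true))

for n in (2, 3):
    print(f"{n} terms: max interpolation error = {fit_and_max_error(n)*1000:.3f} mK")
```

Because the synthetic truth contains a cubic term, the two-term fit leaves an interpolation error of hundreds of millikelvin while the three-term fit is essentially exact, mirroring the paper's point that the number of terms (with the temperature range) controls the error.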
A dynamic calibration method for the pressure transducer
NASA Astrophysics Data System (ADS)
Wang, Zhongyu; Wang, Zhuoran; Li, Qiang
2016-01-01
Pressure transducers are widely used in industry, and a calibrated transducer improves the performance of the precision instruments it serves in a closed mechanical relationship. Calibration is the key to ensuring that a pressure transducer has high precision and good dynamic characteristics. Unfortunately, current calibration methods can usually be applied only under good laboratory conditions, and only one pressure transducer can be calibrated at a time, so the calibration efficiency falls short of the requirements of modern industry. A dynamic and fast calibration technology, with a calibration device and a corresponding data processing method, is proposed in this paper. First, the pressure transducers to be calibrated are placed in a small cavity chamber; the calibration process consists of a single loop, and the outputs of each calibrated transducer are recorded automatically by the control terminal. Second, LabVIEW programming is used for data acquisition and processing, and the repeatability and nonlinearity indicators can be computed directly. Finally, pressure transducers were calibrated simultaneously in an experiment to verify the suggested calibration technology. The experimental results show that this method can be used to calibrate pressure transducers in practical engineering measurement.
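The repeatability and nonlinearity indicators mentioned can be sketched roughly as follows; the data and the exact definitions (percent of full scale, best-fit straight line) are illustrative assumptions, not the paper's LabVIEW implementation.

```python
import numpy as np

# Made-up calibration data: nonlinearity as the maximum deviation from a
# least-squares straight line (% of full scale), repeatability as the largest
# spread across repeated pressure cycles (% of full scale).
pressure = np.array([0, 100, 200, 300, 400, 500.0])          # applied pressure, kPa
runs = np.array([[0.02, 1.01, 2.03, 3.02, 4.00, 4.99],       # output (V), cycle 1
                 [0.03, 1.02, 2.02, 3.03, 4.01, 5.00],       # cycle 2
                 [0.01, 1.00, 2.04, 3.02, 3.99, 4.98]])      # cycle 3

mean_out = runs.mean(axis=0)
full_scale = mean_out[-1] - mean_out[0]

slope, intercept = np.polyfit(pressure, mean_out, 1)
nonlinearity = np.max(np.abs(mean_out - (slope*pressure + intercept))) / full_scale

repeatability = np.max(runs.std(axis=0, ddof=1)) / full_scale

print(f"nonlinearity: {100*nonlinearity:.2f} %FS, repeatability: {100*repeatability:.2f} %FS")
```

Batch-calibrating several transducers then just means repeating this per recorded channel of the single loading loop.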
1989-06-15
…following surprising situation: namely, associated with the integrable nonlinear Schrödinger equations are standard numerical schemes which exhibit at… 36. An Initial Boundary Value Problem for the Nonlinear Schrödinger Equations, A.S. Fokas, Physica D, March 1989. 37. Evolution Theory, Periodic… gravity waves and wave excitation phenomena related to moving pressure distributions; numerical approximation and computation; nonlinear optics; and
Quantum computational complexity in the presence of closed timelike curves
Bacon, Dave
2004-09-01
Quantum computation with quantum data that can traverse closed timelike curves represents a new physical model of computation. We argue that a model of quantum computation in the presence of closed timelike curves can be formulated which represents a valid quantification of resources given the ability to construct compact regions of closed timelike curves. The notion of self-consistent evolution for quantum computers whose components follow closed timelike curves, as pointed out by Deutsch [Phys. Rev. D 44, 3197 (1991)], implies that the evolution of the chronology respecting components which interact with the closed timelike curve components is nonlinear. We demonstrate that this nonlinearity can be used to efficiently solve computational problems which are generally thought to be intractable. In particular we demonstrate that a quantum computer which has access to closed timelike curve qubits can solve NP-complete problems with only a polynomial number of quantum gates.
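Deutsch's self-consistency condition can be illustrated with a short fixed-point iteration for a single CTC qubit; the choice of a CNOT interaction and the plain iteration scheme are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Minimal sketch of Deutsch's self-consistency condition for one CTC qubit:
# find rho_ctc with rho_ctc = Tr_CR[ U (rho_in ⊗ rho_ctc) U† ], here with
# U = CNOT (control = chronology-respecting qubit, target = CTC qubit).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def partial_trace_first(rho4):
    """Trace out the first qubit of a two-qubit density matrix."""
    r = rho4.reshape(2, 2, 2, 2)       # axes: (row q1, row q2, col q1, col q2)
    return np.einsum('ijik->jk', r)

def deutsch_fixed_point(rho_in, U, iters=200):
    rho = np.eye(2, dtype=complex) / 2  # start from the maximally mixed state
    for _ in range(iters):
        joint = np.kron(rho_in, rho)
        rho = partial_trace_first(U @ joint @ U.conj().T)
    return rho

plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+| input
rho_ctc = deutsch_fixed_point(plus, CNOT)
print(np.round(rho_ctc.real, 3))
```

For this interaction the consistent CTC state is the maximally mixed state; the induced map on the chronology-respecting qubit is what becomes nonlinear in the input state, which is the resource the abstract exploits.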
Revised Landsat-5 TM radiometric calibration procedures and postcalibration dynamic ranges
Chander, G.; Markham, B.
2003-01-01
Effective May 5, 2003, Landsat-5 (L5) Thematic Mapper (TM) data processed and distributed by the U.S. Geological Survey (USGS) Earth Resources Observation System (EROS) Data Center (EDC) will be radiometrically calibrated using a new procedure and revised calibration parameters. This change will improve absolute calibration accuracy, consistency over time, and consistency with Landsat-7 (L7) Enhanced Thematic Mapper Plus (ETM+) data. Users will need to use new parameters to convert the calibrated data products to radiance. The new procedure for the reflective bands (1-5,7) is based on a lifetime radiometric calibration curve for the instrument derived from the instrument's internal calibrator, cross-calibration with the ETM+, and vicarious measurements. The thermal band will continue to be calibrated using the internal calibrator. Further updates to improve the relative detector-to-detector calibration and thermal band calibration are being investigated, as is the calibration of the Landsat-4 (L4) TM.
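The conversion of calibrated data products to radiance uses the standard gain/bias form; the sketch below uses placeholder LMIN/LMAX values rather than the revised L5 TM parameters distributed with the new procedure.

```python
# Standard DN-to-radiance conversion:
#   L = ((LMAX - LMIN) / (Qcalmax - Qcalmin)) * (Qcal - Qcalmin) + LMIN
# The lmin/lmax numbers below are placeholders, not the revised L5 TM values.
def dn_to_radiance(qcal, lmin, lmax, qcalmin=1.0, qcalmax=255.0):
    """Convert a calibrated digital number to spectral radiance (W/(m^2 sr um))."""
    gain = (lmax - lmin) / (qcalmax - qcalmin)
    return gain * (qcal - qcalmin) + lmin

print(dn_to_radiance(128, lmin=-1.52, lmax=193.0))
```

Changing the calibration procedure changes only the band-specific lmin/lmax (and any time-dependent gain) that users plug into this formula; the functional form stays the same.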
Boyer, H.E.
1986-01-01
This book contains more than 500 fatigue curves for industrial ferrous and nonferrous alloys. It also includes a thorough explanation of fatigue testing and interpretation of test results. Each curve is presented independently and includes an explanation of its particular importance. The curves are titled by standard industrial designations (AISI, CDA, AA, etc.) of the metals, and a complete reference is given to the original source to facilitate further research. The collection includes standard S-N curves, curves showing the effect of surface hardening on fatigue strength, crack growth-rate curves, curves comparing the fatigue strengths of various alloys, the effect of variables (i.e., temperature, humidity, frequency, aging, environment, etc.), and much more. This one volume consolidates the fatigue data in a single source.
Enßlin, Torsten A; Junklewitz, Henrik; Winderling, Lars; Greiner, Maksim; Selig, Marco
2014-10-01
Response calibration is the process of inferring how much the measured data depend on the signal one is interested in. It is essential for any quantitative signal estimation on the basis of the data. Here, we investigate self-calibration methods for linear signal measurements and linear dependence of the response on the calibration parameters. The common practice is to augment an external calibration solution using a known reference signal with an internal calibration on the unknown measurement signal itself. Contemporary self-calibration schemes try to find a self-consistent solution for signal and calibration by exploiting redundancies in the measurements. This can be understood in terms of maximizing the joint probability of signal and calibration. However, the full uncertainty structure of this joint probability around its maximum is not taken into account by these schemes. Therefore, better schemes, in the sense of minimal squared error, can be designed by accounting for asymmetries in the uncertainty of signal and calibration. We argue that at least a systematic correction of the common self-calibration scheme should be applied in many measurement situations in order to properly treat uncertainties of the signal on which one calibrates. Otherwise, the calibration solutions suffer from a systematic bias, which consequently distorts the signal reconstruction. Furthermore, we argue that nonparametric, signal-to-noise-filtered calibration should provide more accurate reconstructions than the common bin averages, and we provide a new, improved self-calibration scheme. We illustrate our findings with a simplistic numerical example.
Calibration of a slimhole density sonde using MCNPX
NASA Astrophysics Data System (ADS)
Won, Byeongho; Hwang, Seho; Shin, Jehyun; Kim, Jongman
2014-05-01
The density log is a well logging tool that continuously records the bulk density of the formation. It is widely applied in fields such as petroleum exploitation, mineral exploration, and geotechnical surveys. The density log is normally applied to open holes, but difficult conditions frequently arise, such as cased boreholes, variations in borehole diameter, borehole fluid salinity, and stand-off, so density correction curves are needed for the various borehole conditions. The primary calibration curve supplied by the manufacturer is used for the formation density calculation. For density logs used in the oil industry, calibration curves for various borehole environments are applied to the density correction, but commonly used slim-hole density logging sondes normally have only a calibration curve for the variation of borehole diameter. In order to correct for the various borehole environmental conditions, it is necessary to construct the primary calibration curve of the density sonde using numerical modeling, which serves as a low-cost substitute for experimental test pits. We performed numerical modeling using MCNP, a code based on Monte Carlo methods that records the average behavior of radiation particles. In this study, the primary calibration curve of the FDGS (Formation Density Gamma Sonde) for slim boreholes was matched using a 100 mCi ¹³⁷Cs gamma source. On the basis of this work, correction curves for various borehole environments were produced.
Analysis of light curve of LP Camelopardalis
NASA Astrophysics Data System (ADS)
Prudil, Z.; Skarka, M.; Zejda, M.
2016-05-01
We present photometric analysis of the RRab type pulsating star LP Cam. The star was observed at Brno Observatory and Planetarium during nine nights. Measurements were calibrated to the Johnson photometric system. Four captured and thirteen previously published maxima timings allowed us to refine the pulsation period and the zero epoch. The light curve was Fourier decomposed to estimate physical parameters using empirical relations. Our results suggest that LP Cam is a common RR Lyrae star with high, almost solar metallicity.
Rashidian Vaziri, Mohammad Reza
2013-07-10
In this paper, the Z-scan theory for nonlocal nonlinear media has been further developed when nonlinear absorption and nonlinear refraction appear simultaneously. To this end, the nonlinear photoinduced phase shift between the impinging and outgoing Gaussian beams from a nonlocal nonlinear sample has been generalized. It is shown that this kind of phase shift will reduce correctly to its known counterpart for the case of pure refractive nonlinearity. Using this generalized form of phase shift, the basic formulas for closed- and open-aperture beam transmittances in the far field have been provided, and a simple procedure for interpreting the Z-scan results has been proposed. In this procedure, by separately performing open- and closed-aperture Z-scan experiments and using the represented relations for the far-field transmittances, one can measure the nonlinear absorption coefficient and nonlinear index of refraction as well as the order of nonlocality. Theoretically, it is shown that when the absorptive nonlinearity is present in addition to the refractive nonlinearity, the sample nonlocal response can noticeably suppress the peak and enhance the valley of the Z-scan closed-aperture transmittance curves, which is due to the nonlocal action's ability to change the beam transverse dimensions.
NASA Astrophysics Data System (ADS)
Geniet, F.; Leon, J.
2003-05-01
A nonlinear system possessing a natural forbidden band gap can transmit energy of a signal with a frequency in the gap, as recently shown for a nonlinear chain of coupled pendulums (Geniet and Leon 2002 Phys. Rev. Lett. 89 134102). This process of nonlinear supratransmission, occurring at a threshold that is exactly predictable in many cases, is shown to have a simple experimental realization with a mechanical chain of pendulums coupled by a coil spring. It is then analysed in more detail. First we go to different (nonintegrable) systems which do sustain nonlinear supratransmission. Then a Josephson transmission line (a one-dimensional array of short Josephson junctions coupled through superconducting wires) is shown to also sustain nonlinear supratransmission, though being related to a different class of boundary conditions, and despite the presence of damping, finiteness, and discreteness. Finally, the mechanism at the origin of nonlinear supratransmission is found to be a nonlinear instability, and this is briefly discussed here.
NASA Technical Reports Server (NTRS)
Fulton, James P. (Inventor); Namkung, Min (Inventor); Simpson, John W. (Inventor); Wincheski, Russell A. (Inventor); Nath, Shridhar C. (Inventor)
1998-01-01
A thickness gauging instrument uses a flux focusing eddy current probe and two-point nonlinear calibration algorithm. The instrument is small and portable due to the simple interpretation and operational characteristics of the probe. A nonlinear interpolation scheme incorporated into the instrument enables a user to make highly accurate thickness measurements over a fairly wide calibration range from a single side of nonferromagnetic conductive metals. The instrument is very easy to use and can be calibrated quickly.
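A hedged sketch of a two-point nonlinear calibration in the same spirit: the exponential model form and the numbers are assumptions for illustration, not the patented interpolation scheme.

```python
import numpy as np

# Assume the probe output decays exponentially with thickness, V = A*exp(B*t)
# (the model form is an assumption). Two calibration samples of known thickness
# fix A and B; the model is then inverted to read thickness from voltage.
def two_point_calibrate(t1, v1, t2, v2):
    B = np.log(v2 / v1) / (t2 - t1)
    A = v1 / np.exp(B * t1)
    return A, B

def thickness(v, A, B):
    return np.log(v / A) / B

A, B = two_point_calibrate(1.0, 0.80, 3.0, 0.45)   # (thickness mm, reading V)
print(f"{thickness(0.60, A, B):.2f} mm")
```

A two-parameter nonlinear model of this kind is exactly what a two-point calibration can pin down; a linear fit through the same two points would misread every thickness between them.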
Photometer calibration problem for extended astronomical sources
NASA Technical Reports Server (NTRS)
Muscari, J. A.
1975-01-01
Analysis of calibration tests for the Skylab experiment T027 photometer is used to show that if an instrument is focused at infinity, the uniform extended calibration source should be positioned at distances at least equal to the hyperfocal distance and should be large enough to fill the field of view. It is noted that the depth of field can be increased by focusing the optical system at the hyperfocal distance, and that this method of focusing reduces the needed diameter of the calibration source to half that of a system focused at infinity. Other calibration methods discussed include determining the radiance responsivity as a function of distance and extrapolating the curve to larger distances, as well as extensive mapping of the spatial response combined with the irradiance responsivity to obtain the radiance responsivity.
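The hyperfocal distance invoked above has the standard form H = f²/(N·c) + f for focal length f, f-number N, and circle of confusion c; the numbers below are illustrative, not the T027 photometer's optical parameters.

```python
# Standard hyperfocal distance: H = f**2 / (N * c) + f
# f = focal length, N = f-number, c = acceptable circle of confusion.
def hyperfocal(f_mm, f_number, coc_mm):
    """Hyperfocal distance in mm for the given (illustrative) optics."""
    return f_mm**2 / (f_number * coc_mm) + f_mm

print(f"{hyperfocal(50.0, 8.0, 0.03):.0f} mm")
```

Focusing at H renders everything from H/2 to infinity acceptably sharp, which is why a calibration source at H can be half the diameter needed by a system focused at infinity.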
Efficient gradient calibration based on diffusion MRI
Teh, Irvin; Maguire, Mahon L.
2016-01-01
Purpose: To propose a method for calibrating gradient systems and correcting gradient nonlinearities based on diffusion MRI measurements. Methods: The gradient scalings in x, y, and z were first offset by up to 5% from precalibrated values to simulate a poorly calibrated system. Diffusion MRI data were acquired in a phantom filled with cyclooctane, and corrections for gradient scaling errors and nonlinearity were determined. The calibration was assessed with diffusion tensor imaging and independently validated with high-resolution anatomical MRI of a second structured phantom. Results: The errors in apparent diffusion coefficients along orthogonal axes ranged from −9.2% ± 0.4% to +8.8% ± 0.7% before calibration and −0.5% ± 0.4% to +0.8% ± 0.3% after calibration. Concurrently, fractional anisotropy decreased from 0.14 ± 0.03 to 0.03 ± 0.01. Errors in geometric measurements in x, y, and z ranged from −5.5% to +4.5% precalibration and were likewise reduced to −0.97% to +0.23% postcalibration. Image distortions from gradient nonlinearity were markedly reduced. Conclusion: Periodic gradient calibration is an integral part of quality assurance in MRI. The proposed approach is both accurate and efficient, can be set up with readily available materials, and improves accuracy in both anatomical and diffusion MRI to within ±1%. Magn Reson Med 77:170–179, 2017. © 2016 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine. PMID:26749277
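The gradient-scaling correction implied by a phantom diffusion measurement can be sketched as follows: since the b-value scales with the square of the gradient amplitude, a fitted ADC error maps to a square-root scale correction. The values below are illustrative, not the paper's.

```python
import math

# If the true gradient is s times the nominal one, the actual b-value is
# s**2 * b_nominal, so the fitted ADC comes out as s**2 * ADC_true. The
# gradient scale should therefore be corrected by sqrt(ADC_true / ADC_meas).
adc_true = 2.00e-9   # known diffusivity of the phantom liquid, m^2/s (assumed)
adc_meas = 2.18e-9   # ADC measured along one gradient axis (+9% error, assumed)

scale_correction = math.sqrt(adc_true / adc_meas)
print(f"apply gradient scale factor: {scale_correction:.4f}")
```

Repeating this per axis with a temperature-controlled reference liquid is the essence of a diffusion-based gradient calibration.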
Solitons in curved space of constant curvature
Batz, Sascha; Peschel, Ulf
2010-05-15
We consider spatial solitons as, for example, self-confined optical beams in spaces of constant curvature, which are a natural generalization of flat space. Due to the symmetries of these spaces we are able to define respective dynamical parameters, for example, velocity and position. For positively curved space we find stable multiple-hump solitons as a continuation from the linear modes. In the case of negatively curved space we show that no localized solution exists and a bright soliton will always decay through a nonlinear tunneling process.
Meija, Juris; Pagliano, Enea; Mester, Zoltán
2014-09-02
Uncertainty of the result from the method of standard addition is often underestimated due to neglect of the covariance between the intercept and the slope. In order to simplify the data analysis from standard addition experiments, we propose x-y coordinate swapping in conventional linear regression. Unlike the ratio of the intercept and slope, which is the result of the traditional method of standard addition, the result of the inverse standard addition is obtained directly from the intercept of the swapped calibration line. Consequently, the uncertainty evaluation becomes markedly simpler. The method is also applicable to nonlinear curves, such as the quadratic model, without incurring any additional complexity.
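The coordinate swap can be sketched with synthetic data: regressing added amount on signal puts the result (with sign flipped) directly in the intercept, agreeing closely with the traditional intercept/slope ratio for well-behaved data. The numbers below are made up for illustration.

```python
import numpy as np

# Synthetic standard-addition data: signal rises linearly with added standard,
# and the unknown amount is about 2 ug/mL.
x_added = np.array([0.0, 1.0, 2.0, 3.0, 4.0])          # added standard (ug/mL)
signal  = np.array([0.41, 0.61, 0.79, 1.00, 1.22])     # instrument response

# Traditional standard addition: fit y = a + b*x, result = a/b
b, a = np.polyfit(x_added, signal, 1)
print("traditional a/b:   ", a / b)

# Inverse (swapped) standard addition: fit x = c + d*y, result = -c
d, c = np.polyfit(signal, x_added, 1)
print("inverse intercept: ", -c)
```

Because the result is a single regression parameter rather than a ratio, its standard uncertainty follows directly from the intercept's uncertainty, with no intercept-slope covariance term to evaluate.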
Nonlinear oscillatory processes in wheeled vehicles
NASA Astrophysics Data System (ADS)
Mikhlin, Yu. V.; Mitrokhin, S. G.
2011-04-01
The free damped vibrations of a wheeled vehicle with independent suspension are analyzed with allowance for the nonlinear characteristics of the suspension springs and shock absorbers. The vibrations of a wheeled vehicle with a suspension having smooth nonlinear characteristics are studied for a model with seven degrees of freedom. The skeleton curves and nonlinear normal modes are obtained. For a model with two degrees of freedom (quarter-car) that corresponds to axisymmetric vibrations, the nonlinear normal modes are found in the case of a shock absorber with a nonsmooth nonlinear characteristic.
Calibrating Wide Field Surveys
NASA Astrophysics Data System (ADS)
González Fernández, Carlos; Irwin, M.; Lewis, J.; González Solares, E.
2017-09-01
In this talk I will review the strategies in CASU to calibrate wide field surveys, in particular applied to data taken with the VISTA telescope. These include traditional night-by-night calibrations along with the search for a global, coherent calibration of all the data once observations are finished. The difficulties of obtaining photometric accuracy of a few percent and a good absolute calibration will also be discussed.
Analytical multicollimator camera calibration
Tayman, W.P.
1978-01-01
Calibration with the U.S. Geological Survey multicollimator determines the calibrated focal length, the point of symmetry, the radial distortion referred to the point of symmetry, and the asymmetric characteristics of the camera lens. For this project, two cameras were calibrated, a Zeiss RMK A 15/23 and a Wild RC 8. Four test exposures were made with each camera. Results are tabulated for each exposure and averaged for each set. Copies of the standard USGS calibration reports are included. © 1978.
Berger, C.D.; Gupton, E.D.; Lane, B.H.; Miller, J.H.; Nichols, S.W.
1982-08-01
The ORNL Calibrations Facility is operated by the Instrumentation Group of the Industrial Safety and Applied Health Physics Division. Its primary purpose is to maintain radiation calibration standards for calibration of ORNL health physics instruments and personnel dosimeters. This report includes a discussion of the radioactive sources and ancillary equipment in use and a step-by-step procedure for calibration of those survey instruments and personnel dosimeters in routine use at ORNL.
Calibration of higher eigenmodes of cantilevers
Labuda, Aleksander; Kocun, Marta; Walsh, Tim; Meinhold, Jieh; Proksch, Tania; Meinhold, Waiman; Anderson, Caleb; Proksch, Roger; Lysy, Martin
2016-07-15
A method is presented for calibrating the higher eigenmodes (resonant modes) of atomic force microscopy cantilevers that can be performed prior to any tip-sample interaction. The method leverages recent efforts in accurately calibrating the first eigenmode by providing the higher-mode stiffness as a ratio to the first mode stiffness. A one-time calibration routine must be performed for every cantilever type to determine a power-law relationship between stiffness and frequency, which is then stored for future use on similar cantilevers. Then, future calibrations only require a measurement of the ratio of resonant frequencies and the stiffness of the first mode. This method is verified through stiffness measurements using three independent approaches: interferometric measurement, AC approach-curve calibration, and finite element analysis simulation. Power-law values for calibrating higher-mode stiffnesses are reported for several cantilever models. Once the higher-mode stiffnesses are known, the amplitude of each mode can also be calibrated from the thermal spectrum by application of the equipartition theorem.
Understanding signatures in hydrological calibration - A Bayesian perspective
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Fenicia, Fabrizio; Reichert, Peter; Albert, Carlo
2017-04-01
Calibration and prediction using hydrological models has received tremendous attention in the literature. Calibration based on streamflow signatures, such as flow duration curves, is of particular interest - it offers fascinating opportunities to capture hydrological characteristics of interest and to undertake calibration in data-sparse conditions. Despite its clear appeal, signature calibration requires careful development and implementation to produce meaningful results, especially if reliable uncertainty estimates are desired. This talk provides a Bayesian perspective on hydrological calibration using streamflow signatures, and its implementation using Approximate Bayesian Computation (ABC) algorithms. Following a brief theoretical expose, including the relationship to traditional calibration, we provide a series of case studies that elucidate the advantages and limitations of signature calibration under a variety of scenarios.
Calibration of a Thomson scattering diagnostic for fluctuation measurements
Stephens, H. D.; Borchardt, M. T.; Den Hartog, D. J.; Falkowski, A. F.; Holly, D. J.; O'Connell, R.; Reusch, J. A.
2008-10-15
Detailed calibrations of the Madison Symmetric Torus polychromator Thomson scattering system have been made suitable for electron temperature fluctuation measurements. All calibrations have taken place focusing on accuracy, ease of use and repeatability, and in situ measurements wherever possible. Novel calibration processes have been made possible with an insertable integrating sphere (ISIS), using an avalanche photodiode (APD) as a reference detector and optical parametric oscillator (OPO). Discussed are a novel in situ spatial calibration with the use of the ISIS, the use of an APD as a reference detector to streamline the APD calibration process, a standard dc spectral calibration, and in situ pulsed spectral calibration made possible with a combination of an OPO as a light source, the ISIS, and an APD used as a reference detector. In addition a relative quantum efficiency curve for the APDs is obtained to aid in uncertainty analysis.
Calibration of thermocouple psychrometers and moisture measurements in porous materials
NASA Astrophysics Data System (ADS)
Guz, Łukasz; Sobczuk, Henryk; Połednik, Bernard; Guz, Ewa
2016-07-01
The paper presents an in situ method for calibrating Peltier psychrometric sensors, which allows the water potential to be determined. The water potential can easily be recalculated into the moisture content of the porous material. In order to obtain correct water potential results, each probe should be calibrated. NaCl salt solutions with molar concentrations of 0.4 M, 0.7 M, 1.0 M, and 1.4 M were used for calibration, giving osmotic potentials in the range from -1791 kPa to -6487 kPa. Traditionally, the voltage generated on the thermocouples during wet-bulb temperature depression is used to determine the calibration function for psychrometric in situ sensors. In the new calibration method, the area under the psychrometric curve, together with the Peltier cooling current and its duration, was taken into consideration. During calibration, different cooling currents were applied for each salt solution (3, 5, and 8 mA), as well as different cooling durations for each current (from 2 to 100 s in 2 s steps). Afterwards, the shape of each psychrometric curve was thoroughly examined and the area under it was computed. The experimental results indicate a robust correlation between the area under the psychrometric curve and the water potential. Calibration formulas were derived on the basis of these features.
Pan, Congyuan; Du, Xuewei; An, Ning; Zeng, Qiang; Wang, Shengbo; Wang, Qiuping
2016-04-01
A multi-line internal standard calibration method is proposed for the quantitative analysis of carbon steel using laser-induced breakdown spectroscopy (LIBS). A procedure based on the method was adopted to select the best calibration curves and the corresponding emission lines pairs automatically. Laser-induced breakdown spectroscopy experiments with carbon steel samples were performed, and C, Cr, and Mn were analyzed via the proposed method. Calibration curves of these elements were constructed via a traditional single line internal standard calibration method and a multi-line internal standard calibration method. The calibration curves obtained were evaluated with the determination coefficient, the root mean square error of cross-validation, and the average relative error of cross-validation. All of the parameters were improved significantly with the proposed method. The results show that accurate and stable calibration curves can be obtained efficiently via the multi-line internal standard calibration method. © The Author(s) 2016.
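The automatic selection described can be sketched as an exhaustive search over analyte/internal-standard line pairs, scoring each candidate calibration curve. This is a minimal sketch scoring by R² only (the paper also uses RMSECV and the average relative error of cross-validation); the function name and data are illustrative.

```python
import numpy as np
from itertools import product

def best_line_pair(conc, analyte, internal):
    """Try every (analyte line, internal-standard line) pair, build a
    linear calibration curve from the intensity ratio, and keep the
    pair with the highest coefficient of determination R^2."""
    best = None
    for i, j in product(range(analyte.shape[1]), range(internal.shape[1])):
        ratio = analyte[:, i] / internal[:, j]
        slope, icept = np.polyfit(conc, ratio, 1)
        resid = ratio - (slope * conc + icept)
        r2 = 1.0 - np.sum(resid**2) / np.sum((ratio - ratio.mean())**2)
        if best is None or r2 > best[0]:
            best = (r2, i, j, slope, icept)
    return best

conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
internal = np.column_stack([np.ones(5),                           # stable line
                            1.0 + 0.1 * np.array([1, -3, 2, -1, 3.0])])
analyte = np.column_stack([2.0 * conc,                            # clean line
                           2.0 * conc + np.array([0.3, -0.2, 0.4, -0.3, 0.1])])
r2, i, j, slope, icept = best_line_pair(conc, analyte, internal)
# The search picks the clean analyte line ratioed to the stable line.
```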
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.
Bilinear modelling of cellulosic orthotropic nonlinear materials
E.P. Saliklis; T. J. Urbanik; B. Tokyay
2003-01-01
The proposed method of modelling orthotropic solids that have a nonlinear constitutive material relationship affords several advantages. The first advantage is the application of a simple bilinear stress-strain curve to represent the material response on two orthogonal axes as well as in shear, even for markedly nonlinear materials. The second advantage is that this...
Calibration of a Modified Californium Shuffler
Sadowski, E.T.; Armstrong, F.; Oldham, R.; Ceo, R.; Williams, N.
1995-06-01
A californium shuffler originally designed to assay hollow cylindrical pieces of UA1 has been modified to assay solid cylinders. Calibration standards were characterized via chemical analysis of the molten UA1 taken during casting of the standards. The melt samples yielded much more reliable characterization data than drill samples taken from standards after the standards had solidified. By normalizing one well-characterized calibration curve to several standards at different enrichments, a relatively small number of standards was required to develop an enrichment-dependent calibration. The precision of this shuffler is 0.65%, and the typical random and systematic uncertainties are 0.53% and 0.73%, respectively, for a six minute assay of an ingot containing approximately 700 grams of ²³⁵U. This paper will discuss (1) the discrepancies encountered when UA1 standards were characterized via melt samples versus drill samples, (2) a calibration methodology employing a small number of standards, and (3) a comparison of results from a previously unused shuffler with an existing shuffler. A small number of UA1 standards have been characterized using samples from the homogeneous molten state and have yielded enrichment-dependent and enrichment-independent calibration curves on two different shufflers.
ERIC Educational Resources Information Center
Martínez, Sol Sáez; de la Rosa, Félix Martínez; Rojas, Sergio
2017-01-01
In Advanced Calculus, our students wonder if it is possible to graphically represent a tornado by means of a three-dimensional curve. In this paper, we show it is possible by providing the parametric equations of such tornado-shaped curves.
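A curve of this kind is easy to generate: a helix whose radius grows with height. The parameters below are illustrative choices, not the authors' own equations.

```python
import numpy as np

# Tornado-shaped space curve: a conical spiral that widens as it rises.
t = np.linspace(0.0, 12.0 * np.pi, 2000)   # many turns
r = 0.1 + 0.05 * t                         # radius grows along the curve
x = r * np.cos(t)
y = r * np.sin(t)
z = t / (12.0 * np.pi)                     # height normalized to [0, 1]
```

Plotting (x, y, z) with any 3-D line plotter shows the funnel shape.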
ERIC Educational Resources Information Center
Nordmark, Arne; Essen, Hanno
2007-01-01
The equilibrium of a flexible inextensible string, or chain, in the centrifugal force field of a rotating reference frame is investigated. It is assumed that the end points are fixed on the rotation axis. The shape of the curve, the skipping rope curve or "troposkien", is given by the Jacobi elliptic function sn. (Contains 3 figures.)
Simulating Supernova Light Curves
Even, Wesley Paul; Dolence, Joshua C.
2016-05-05
This report discusses supernova light simulations. A brief review of supernovae, basics of supernova light curves, simulation tools used at LANL, and supernova results are included. Further, it happens that many of the same methods used to generate simulated supernova light curves can also be used to model the emission from fireballs generated by explosions in the earth’s atmosphere.
Searcy, James Kincheon
1959-01-01
The flow-duration curve is a cumulative frequency curve that shows the percent of time specified discharges were equaled or exceeded during a given period. It combines in one curve the flow characteristics of a stream throughout the range of discharge, without regard to the sequence of occurrence. If the period upon which the curve is based represents the long-term flow of a stream, the curve may be used to predict the distribution of future flows for water- power, water-supply, and pollution studies. This report shows that differences in geology affect the low-flow ends of flow-duration curves of streams in adjacent basins. Thus, duration curves are useful in appraising the geologic characteristics of drainage basins. A method for adjusting flow-duration curves of short periods to represent long-term conditions is presented. The adjustment is made by correlating the records of a short-term station with those of a long-term station.
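The cumulative frequency curve defined here is straightforward to compute from a discharge record. This is a minimal sketch; the Weibull plotting position used below is one common convention, not necessarily the report's.

```python
import numpy as np

def flow_duration_curve(discharge):
    """Return discharges sorted high-to-low and, for each, the percent
    of time that discharge was equaled or exceeded."""
    q = np.sort(np.asarray(discharge, dtype=float))[::-1]
    rank = np.arange(1, q.size + 1)
    exceedance = 100.0 * rank / (q.size + 1)   # Weibull plotting position
    return q, exceedance

q, p = flow_duration_curve([12, 3, 7, 25, 5, 9, 1, 14, 6, 4])
# q[0] is the largest flow (exceeded least often), q[-1] the smallest.
```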
Anodic Polarization Curves Revisited
ERIC Educational Resources Information Center
Liu, Yue; Drew, Michael G. B.; Liu, Ying; Liu, Lin
2013-01-01
An experiment published in this "Journal" has been revisited and it is found that the curve pattern of the anodic polarization curve for iron repeats itself successively when the potential scan is repeated. It is surprising that this observation has not been reported previously in the literature because it immediately brings into…
Spectrophotometer spectral bandwidth calibration with absorption bands crystal standard.
Soares, O D; Costa, J L
1999-04-01
A procedure for calibration of a spectral bandwidth standard for high-resolution spectrophotometers is described. Symmetrical absorption bands for a crystal standard are adopted. The method relies on spectral band shape fitting followed by a convolution with the slit function of the spectrophotometer. A reference spectrophotometer is used to calibrate the spectral bandwidth standard. Bandwidth calibration curves for a minimum spectral transmission factor relative to the spectral bandwidth of the reference spectrophotometer are derived for the absorption bands at the wavelength of the band absorption maximum. The family of these calibration curves characterizes the spectral bandwidth standard. We calibrate the spectral bandwidth of a spectrophotometer with respect to the reference spectrophotometer by determining the spectral transmission factor minimum at every calibrated absorption band of the bandwidth standard for the nominal instrument values of the spectral bandwidth. With reference to the standard spectral bandwidth calibration curves, the relation of the spectral bandwidth to the reference spectrophotometer is determined. We determine the discrepancy in the spectrophotometers' spectral bandwidths by averaging the spectral bandwidth discrepancies relative to the standard calibrated values found at the absorption bands considered. A weighted average of the uncertainties is taken.
CURVES: curve evolution for vessel segmentation.
Lorigo, L M; Faugeras, O D; Grimson, W E; Keriven, R; Kikinis, R; Nabavi, A; Westin, C F
2001-09-01
The vasculature is of utmost importance in neurosurgery. Direct visualization of images acquired with current imaging modalities, however, cannot provide a spatial representation of small vessels. These vessels, and their branches which show considerable variations, are most important in planning and performing neurosurgical procedures. In planning they provide information on where the lesion draws its blood supply and where it drains. During surgery the vessels serve as landmarks and guidelines to the lesion. The more minute the information is, the more precise the navigation and localization of computer guided procedures. Beyond neurosurgery and neurological study, vascular information is also crucial in cardiovascular surgery, diagnosis, and research. This paper addresses the problem of automatic segmentation of complicated curvilinear structures in three-dimensional imagery, with the primary application of segmenting vasculature in magnetic resonance angiography (MRA) images. The method presented is based on recent curve and surface evolution work in the computer vision community which models the object boundary as a manifold that evolves iteratively to minimize an energy criterion. This energy criterion is based both on intensity values in the image and on local smoothness properties of the object boundary, which is the vessel wall in this application. In particular, the method handles curves evolving in 3D, in contrast with previous work that has dealt with curves in 2D and surfaces in 3D. Results are presented on cerebral and aortic MRA data as well as lung computed tomography (CT) data.
Assessment of opacimeter calibration according to International Standard Organization 10155.
Gomes, J F
2001-01-01
This paper compares the calibration method for opacimeters issued by the International Standard Organization (ISO) 10155 with the manual reference method for determination of dust content in stack gases. ISO 10155 requires at least nine operational measurements, corresponding to three operational measurements per each dust emission range within the stack. The procedure is assessed by comparison with previous calibration methods for opacimeters using only two operational measurements from a set of measurements made at stacks from pulp mills. The results show that even if the international standard for opacimeter calibration requires that the calibration curve is to be obtained using 3 x 3 points, a calibration curve derived using 3 points could be, at times, acceptable in statistical terms, provided that the amplitude of individual measurements is low.
ERIC Educational Resources Information Center
Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill
Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…
Residual gas analyzer calibration
NASA Technical Reports Server (NTRS)
Lilienkamp, R. H.
1972-01-01
A technique which employs known gas mixtures to calibrate the residual gas analyzer (RGA) is described. The mass spectrum from the RGA is recorded for each gas mixture. These mass spectra data and the mixture composition data each form a matrix. From the two matrices the calibration matrix may be computed. The matrix mathematics requires that the number of calibration gas mixtures be equal to or greater than the number of gases included in the calibration. This technique was evaluated using a mathematical model of an RGA to generate the mass spectra. This model included shot-noise errors in the mass spectra. Errors in the gas concentrations were also included in the evaluation. The effects of these errors were studied by varying their magnitudes and comparing the resulting calibrations. Several methods of evaluating an actual calibration are presented. The effects of the number of gases included, the composition of the calibration mixtures, and the number of mixtures used are discussed.
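The matrix computation outlined above can be sketched in a few lines (the cracking-pattern numbers are invented for illustration): with at least as many mixtures as gases, the calibration matrix is the least-squares solution mapping recorded spectra to known compositions.

```python
import numpy as np

# Known mole fractions of each gas in three calibration mixtures.
C = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])

# Invented per-gas cracking patterns (gas x mass peak), used to
# simulate the RGA spectrum of each mixture (mixture x mass peak).
pattern = np.array([[1.0, 0.2, 0.0],
                    [0.1, 0.8, 0.3]])
S = C @ pattern

# Calibration matrix K: the least-squares solve requires
# n_mixtures >= n_gases, the condition stated in the abstract.
K, *_ = np.linalg.lstsq(S, C, rcond=None)

# An unknown spectrum is converted to composition by one product.
unknown = np.array([0.55, 0.50, 0.15])   # simulated 50/50 mixture
print(unknown @ K)                       # ≈ [0.5, 0.5]
```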
A Comparison of Linking and Concurrent Calibration under the Graded Response Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho; Cohen, Allan S.
2002-01-01
Compared two methods for developing a common metric for the graded response model under item response theory: (1) linking separate calibration runs using equating coefficients from the characteristic curve method; and (2) concurrent calibration using the combined data of the base and target groups. Concurrent calibration yielded consistently,…
System and Method for Determining Gas Optical Density Changes in a Non-Linear Measurement Regime
NASA Technical Reports Server (NTRS)
Sachse, Glen W. (Inventor); Rana, Mauro (Inventor)
2007-01-01
Each of two sensors, positioned to simultaneously detect electromagnetic radiation absorption along a path, is calibrated to define a unique response curve associated therewith that relates a change in voltage output for each sensor to a change in optical density. A ratio-of-responses curve is defined by a ratio of the response curve associated with the first sensor to the response curve associated with the second sensor. A ratio of sensor output changes is generated using outputs from the sensors. An operating point on the ratio-of-responses curve is established using the ratio of sensor output changes. The established operating point is indicative of an optical density. When the operating point is in the non-linear response region of at least one of the sensors, the operating point and optical density corresponding thereto can be used to establish an actual response of at least one of the sensors whereby the actual sensor output can be used in determining changes in the optical density.
Calibrating Images from the MINERVA Cameras
NASA Astrophysics Data System (ADS)
Mercedes Colón, Ana
2016-01-01
The MINiature Exoplanet Radial Velocity Array (MINERVA) consists of an array of robotic telescopes located on Mount Hopkins, Arizona with the purpose of performing transit photometry and spectroscopy to find Earth-like planets around Sun-like stars. In order to make photometric observations, it is necessary to perform calibrations on the CCD cameras of the telescopes to take into account possible instrument error on the data. In this project, we developed a pipeline that takes optical images, calibrates them using sky flats, darks, and biases to generate a transit light curve.
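The reduction steps named in the abstract (biases, darks, sky flats) follow the conventional CCD calibration recipe. A minimal sketch of that recipe, not MINERVA's actual pipeline:

```python
import numpy as np

def calibrate_frame(raw, bias, dark, flat, exptime, dark_exptime):
    """Conventional CCD reduction: subtract the bias and the
    exposure-scaled dark current, then divide by the normalized
    flat field to remove pixel-to-pixel sensitivity variations."""
    dark_current = (dark - bias) * (exptime / dark_exptime)
    flat_corr = flat - bias
    flat_norm = flat_corr / np.median(flat_corr)
    return (raw - bias - dark_current) / flat_norm
```

Normalizing the flat to unit median preserves the overall count level of the science frame while flattening the sensitivity pattern.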
Fitting Richards' curve to data of diverse origins
Johnson, D.H.; Sargeant, A.B.; Allen, S.H.
1975-01-01
Published techniques for fitting data to nonlinear growth curves are briefly reviewed; most techniques require knowledge of the shape of the curve. A flexible growth curve developed by Richards (1959) is discussed as an alternative when the shape is unknown. The shape of this curve is governed by a specific parameter which can be estimated from the data. We describe in detail the fitting of a diverse set of longitudinal and cross-sectional data to Richards' growth curve for the purpose of determining the age of red fox (Vulpes vulpes) pups on the basis of right hind foot length. The fitted curve is found suitable for pups less than approximately 80 days old. The curve is extrapolated to pre-natal growth and shown to be appropriate only for about 10 days prior to birth.
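Fitting data of unknown shape to Richards' curve is routine with a nonlinear least-squares routine. The parameterization below is one common form of the generalized logistic (the paper's exact form and the fox-pup data are not reproduced here); `m` is the shape parameter estimated from the data.

```python
import numpy as np
from scipy.optimize import curve_fit

def richards(t, A, k, t0, m):
    """Generalized logistic (Richards 1959): A is the asymptote, k the
    growth rate, t0 the inflection location, m the shape parameter."""
    return A * (1.0 + (m - 1.0) * np.exp(-k * (t - t0))) ** (1.0 / (1.0 - m))

# Synthetic growth data standing in for foot-length measurements.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 80.0, 40)
y = richards(t, 140.0, 0.08, 25.0, 1.5) + rng.normal(0.0, 1.0, t.size)

popt, pcov = curve_fit(richards, t, y, p0=[150.0, 0.1, 20.0, 1.5],
                       maxfev=10000)
# popt recovers (A, k, t0, m) close to the generating values.
```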
Development and calibration of a pedal with force and moment sensors.
Gurgel, Jonas; Porto, Flávia; Russomano, Thais; Cambraia, Rodrigo; de Azevedo, Dario F G; Glock, Flávio S; Beck, João Carlos Pinheiro; Helegda, Sergio
2006-01-01
An instrumented bicycle pedal was built and calibrated. The pedal has good linearity and sensitivity, comparable to other instruments in the literature. This study aimed to perform an accurate calibration of a tri-axial pedal, including the applied forces, deformations, nonlinearities, hysteresis, and the standard error for each axis. Calibration was based on the Hull and Davis method, which applies known loads to the pedal in order to construct a calibration matrix.
Application of Composite Small Calibration Objects in Traffic Accident Scene Photogrammetry
Chen, Qiang; Xu, Hongguo; Tan, Lidong
2015-01-01
In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies. PMID:26011052
Calibrated Faraday Current And Magnetic Field Sensor
NASA Astrophysics Data System (ADS)
Neyer, B. T.; Chang, J.; Ruggles, L. E.
1986-01-01
We have developed a calibrated optical fiber Faraday rotation current sensor. A strong magnetic field in an optical fiber introduces circular birefringence, causing the plane of polarization of light to rotate by an amount proportional to the magnetic field. Faraday loops used in the past were nonlinear due to the stress-induced linear birefringence caused by bending the loop. This linear birefringence interfered with the Faraday rotation, yielding a complicated relationship between the current and detected light signal. We have found a way to overcome the effects of the unwanted linear birefringence and produce a calibrated current waveform. The calibration is limited only by the accurate knowledge of the Verdet constant of the optical fiber. Results of recent experiments as well as planned measurements will be presented.
NASA Astrophysics Data System (ADS)
Dias, Marcelo A.; Santangelo, Christian D.
2011-03-01
Despite an almost two thousand year history, origami, the art of folding paper, remains a challenge both artistically and scientifically. Traditionally, origami is practiced by folding along straight creases. A whole new set of shapes can be explored, however, if, instead of straight creases, one folds along arbitrary curves. We present a mechanical model for curved fold origami in which the energy of a plastically-deformed crease is balanced by the bending energy of developable regions on either side of the crease. Though geometry requires that a sheet buckle when folded along a closed curve, its shape depends on the elasticity of the sheet. NSF DMR-0846582.
Revised landsat-5 thematic mapper radiometric calibration
Chander, G.; Markham, B.L.; Barsi, J.A.
2007-01-01
Effective April 2, 2007, the radiometric calibration of Landsat-5 (L5) Thematic Mapper (TM) data that are processed and distributed by the U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS) will be updated. The lifetime gain model that was implemented on May 5, 2003, for the reflective bands (1-5, 7) will be replaced by a new lifetime radiometric-calibration curve that is derived from the instrument's response to pseudoinvariant desert sites and from cross calibration with the Landsat-7 (L7) Enhanced TM Plus (ETM+). Although this calibration update applies to all archived and future L5 TM data, the principal improvements in the calibration are for the data acquired during the first eight years of the mission (1984-1991), where the changes in the instrument-gain values are as much as 15%. The radiometric scaling coefficients for bands 1 and 2 for approximately the first eight years of the mission have also been changed. Users will need to apply these new coefficients to convert the calibrated data product digital numbers to radiance. The scaling coefficients for the other bands have not changed.
ERIC Educational Resources Information Center
Muscat, Jean-Paul
1992-01-01
Uses LOGO to enhance the applicability of curve stitching in the mathematics curriculum. Presents the formulas and computer programs for the construction of parabolas, concentric circles, and epicycloids. Diagrams of constructed figures are provided. (MDH)
Crystallography on Curved Surfaces
NASA Astrophysics Data System (ADS)
Vitelli, Vincenzo; Lucks, Julius; Nelson, David
2007-03-01
We present a theoretical and numerical study of the static and dynamical properties that distinguish two dimensional curved crystals from their flat space counterparts. Experimental realizations include block copolymer mono-layers on lithographically patterned substrates and self-assembled colloidal particles on a curved interface. At the heart of our approach lies a simple observation: the packing of interacting spheres constrained to lie on a curved surface is necessarily frustrated even in the absence of defects. As a result, whenever lattice imperfections or topological defects are introduced in the curved crystal they couple to the pre-stress of geometric frustration giving rise to elastic potentials. These geometric potentials are non-local functions of the Gaussian curvature and depend on the position of the defects. They play an important role in stress relaxation dynamics, elastic instabilities and melting.
SAR calibration technology review
NASA Technical Reports Server (NTRS)
Walker, J. L.; Larson, R. W.
1981-01-01
Synthetic Aperture Radar (SAR) calibration technology including a general description of the primary calibration techniques and some of the factors which affect the performance of calibrated SAR systems are reviewed. The use of reference reflectors for measurement of the total system transfer function along with an on-board calibration signal generator for monitoring the temporal variations of the receiver to processor output is a practical approach for SAR calibration. However, preliminary error analysis and previous experimental measurements indicate that reflectivity measurement accuracies of better than 3 dB will be difficult to achieve. This is not adequate for many applications and, therefore, improved end-to-end SAR calibration techniques are required.
Localized Turing patterns in nonlinear optical cavities
NASA Astrophysics Data System (ADS)
Kozyreff, G.
2012-05-01
The subcritical Turing instability is studied in two classes of models for laser-driven nonlinear optical cavities. In the first class of models, the nonlinearity is purely absorptive, with arbitrary intensity-dependent losses. In the second class, the refractive index is real and is an arbitrary function of the intracavity intensity. Through a weakly nonlinear analysis, a Ginzburg-Landau equation with quintic nonlinearity is derived. Thus, the Maxwell curve, which marks the existence of localized patterns in parameter space, is determined. In the particular case of the Lugiato-Lefever model, the analysis is continued to seventh order, yielding a refined formula for the Maxwell curve, and the theoretical curve is compared with recent numerical simulations by Gomila et al. [D. Gomila, A. Scroggie, W. Firth, Bifurcation structure of dissipative solitons, Physica D 227 (2007) 70-77].
1984-11-01
the Mahalanobis distance defined in terms of Σ. In particular, when Σ is diagonal the procedure amounts to finding the line that minimizes the weighted... the middle of a p-dimensional data set. They minimize the distance from the points, and provide a nonlinear summary of the data. The curves are... project there. The main theorems prove that principal curves are critical values of the expected squared distance between the points and the curve
Techniques for precise energy calibration of particle pixel detectors.
Kroupa, M; Campbell-Ricketts, T; Bahadori, A; Empl, A
2017-03-01
We demonstrate techniques to improve the accuracy of the energy calibration of Timepix pixel detectors, used for the measurement of energetic particles. The typical signal from such particles spreads among many pixels due to charge sharing effects. As a consequence, the deposited energy in each pixel cannot be reconstructed unless the detector is calibrated, limiting the usability of such signals for calibration. To avoid this shortcoming, we calibrate using low energy X-rays. However, charge sharing effects still occur, resulting in part of the energy being deposited in adjacent pixels and possibly lost. This systematic error in the calibration process results in an error of about 5% in the energy measurements of calibrated devices. We use FLUKA simulations to assess the magnitude of charge sharing effects, allowing a corrected energy calibration to be performed on several Timepix pixel detectors and resulting in substantial improvement in energy deposition measurements. Next, we address shortcomings in calibration associated with the huge range (from kiloelectron-volts to megaelectron-volts) of energy deposited per pixel which result in a nonlinear energy response over the full range. We introduce a new method to characterize the non-linear response of the Timepix detectors at high input energies. We demonstrate improvement using a broad range of particle types and energies, showing that the new method reduces the energy measurement errors, in some cases by more than 90%.
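The per-pixel calibration described above is commonly modeled, for Timepix-type detectors, by a surrogate function that is linear in energy at high input with a 1/(E − t) term near the detection threshold. The sketch below fits such a function to synthetic points and inverts it numerically; the functional form, parameter values, and data are illustrative assumptions rather than the paper's measured calibration.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

def pixel_response(energy, a, b, c, t):
    # Surrogate per-pixel calibration: linear in energy at high input,
    # with a 1/(E - t) nonlinearity near the detection threshold t.
    return a * energy + b - c / (energy - t)

# Synthetic "measured" signals for one pixel, generated from assumed
# parameters; in practice the points come from low-energy X-ray lines.
true = (20.0, 5.0, 100.0, 1.0)
energies = np.array([5.9, 8.0, 13.9, 17.5, 22.1, 59.5])  # keV
measured = pixel_response(energies, *true)

fit, _ = curve_fit(pixel_response, energies, measured,
                   p0=(15.0, 0.0, 50.0, 0.5),
                   bounds=((0.0, -100.0, 0.0, 0.0),
                           (100.0, 100.0, 1000.0, 5.0)))

def energy_from_signal(y, params):
    """Invert the fitted calibration numerically to recover energy."""
    a, b, c, t = params
    return brentq(lambda e: pixel_response(e, a, b, c, t) - y, t + 0.1, 1e4)
```

Because the response is strictly increasing for E > t (its derivative a + c/(E − t)² is positive), the numerical inversion has a unique root.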
Highly curved microchannel plates
NASA Technical Reports Server (NTRS)
Siegmund, O. H. W.; Cully, S.; Warren, J.; Gaines, G. A.; Priedhorsky, W.; Bloch, J.
1990-01-01
Several spherically curved microchannel plate (MCP) stack configurations were studied as part of an ongoing astrophysical detector development program, and as part of the development of the ALEXIS satellite payload. MCP pairs with surface radii of curvature as small as 7 cm, and diameters up to 46 mm have been evaluated. The experiments show that the gain (greater than 1.5 x 10 exp 7) and background characteristics (about 0.5 events/sq cm per sec) of highly curved MCP stacks are in general equivalent to the performance achieved with flat MCP stacks of similar configuration. However, gain variations across the curved MCP's due to variations in the channel length to diameter ratio are observed. The overall pulse height distribution of a highly curved surface MCP stack (greater than 50 percent FWHM) is thus broader than its flat counterpart (less than 30 percent). Preconditioning of curved MCP stacks gives comparable results to flat MCP stacks, but it also decreases the overall gain variations. Flat fields of curved MCP stacks have the same general characteristics as flat MCP stacks.
Influence of precompensation curves on multidimensional color modeling
NASA Astrophysics Data System (ADS)
Tuijn, Chris
1996-03-01
One of the major challenges in the prepress environment consists of controlling the electronic color reproduction process such that a perfect match of any original can be realized. Whether this goal can be reached depends on many factors such as the dynamic range of the input device (scanner, camera), the color gamut of the output device (dye sublimation printer, ink- jet printer, offset), the color management software etc. It is obvious that the reliability or, rather, the reproducibility of a particular device is of extreme importance in order to have a permanently correct color characterization. A technique which is often used to ensure this reliability is to carry out a local 1D calibration. Through this 1D calibration the particular device is brought into a reliable and generic state. Applying 1D calibration curves is not only useful to create reliable devices but can also be used to model devices more accurately, at least, if these calibration curves are carefully selected. In this article, we will discuss the overall suitability of applying 1D precompensation curves before applying colorimetric characterization. More specifically, we address problems related to the reliability of devices and the quality of the color characterization. The use of precompensation curves for calibration purposes is merely restricted to output devices. For input devices, precompensation curves are mainly used for quality purposes. Indeed, the careful selection of so-called input luts (lookup tables) is very important to have good-quality scans. In addition, we discuss how the so-called gamma curves relate to these precompensation curves for both scanners and monitors. This article is organized as follows. In the first section, we discuss the benefits of 1D precompensation curves for modeling output devices. We will cover both topics related to the calibration and the mathematical modeling of output devices. In the second section, we address several issues related to the
RF impedance measurement calibration
Matthews, P.J.; Song, J.J.
1993-02-12
The intent of this note is not to explain all of the available calibration methods in detail. Instead, we will focus on the calibration methods of interest for RF impedance coupling measurements and attempt to explain: (1) the standards and measurements necessary for the various calibration techniques; (2) the advantages and disadvantages of each technique; (3) the mathematical manipulations that need to be applied to the measured standards and devices; and (4) an outline of the steps needed for writing a calibration routine that operates from a remote computer. For further details of the various techniques presented in this note, the reader should consult the references.
CALIBRATION OF PHOTOELASTIC MODULATORS IN THE VACUUM UV.
OAKBERG, T.C.; TRUNK, J.; SUTHERLAND, J.C.
2000-02-15
Measurements of circular dichroism (CD) in the UV and vacuum UV have used photoelastic modulators (PEMs) for high sensitivity (to about 10⁻⁶). While a simple technique for wavelength calibration of the PEMs has been used with good results, several features of these calibration curves have not been understood. The authors have calibrated a calcium fluoride PEM and a lithium fluoride PEM using the National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory as a light source. These experiments showed calibration graphs that are linear but do not pass through the graph origin. A second "multiple pass" experiment with laser light of a single wavelength, performed on the calcium fluoride PEM, demonstrates the linearity of the PEM electronics. This implies that the calibration behavior results from intrinsic physical properties of the PEM optical element material. An algorithm for generating calibration curves for calcium fluoride and lithium fluoride PEMs has been developed. The calibration curves for circular dichroism measurement for the two PEMs investigated in this study are given as examples.
Fast Field Calibration of MIMU Based on the Powell Algorithm
Ma, Lin; Chen, Wanwan; Li, Bin; You, Zheng; Chen, Zhigang
2014-01-01
The calibration of micro inertial measurement units is important in ensuring the precision of navigation systems, which are equipped with microelectromechanical system sensors that suffer from various errors. However, traditional calibration methods cannot meet the demand for fast field calibration. This paper presents a fast field calibration method based on the Powell algorithm. The key points of this calibration are that the norm of the accelerometer measurement vector is equal to the gravity magnitude, and the norm of the gyro measurement vector is equal to the rotational velocity input. To resolve the error parameters by judging the convergence of the nonlinear equations, the Powell algorithm is applied by establishing a mathematical error model of the novel calibration. All parameters can then be obtained in this manner. A comparison of the proposed method with the traditional calibration method through navigation tests demonstrates the performance of the proposed calibration method, which also requires less time than the traditional method. PMID:25177801
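The accelerometer part of such a field calibration can be sketched as a norm-constraint fit: estimate per-axis scale factors and biases so that every static measurement, once calibrated, has magnitude g. The sketch below uses SciPy's Powell minimizer on synthetic data; the parameterization (three scales, three biases, no cross-axis misalignment terms) is a simplification of the paper's full error model.

```python
import numpy as np
from scipy.optimize import minimize

G = 9.80665  # gravity magnitude (m/s^2)

def calibrate_accel(raw):
    """Estimate per-axis scale factors and biases such that every static
    measurement, after calibration, has norm equal to g (Powell search)."""
    def cost(p):
        scale, bias = p[:3], p[3:]
        cal = (raw - bias) * scale
        return np.sum((np.linalg.norm(cal, axis=1) - G) ** 2)
    p0 = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])  # ideal sensor as start
    res = minimize(cost, p0, method="Powell")
    return res.x[:3], res.x[3:]

# Synthetic static poses: known scale/bias errors applied to gravity vectors
rng = np.random.default_rng(0)
dirs = rng.normal(size=(24, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
true_scale = np.array([1.02, 0.98, 1.05])
true_bias = np.array([0.15, -0.10, 0.05])
raw = (dirs * G) / true_scale + true_bias

scale, bias = calibrate_accel(raw)
```

The gyro norm constraint is handled the same way, with the known rotational velocity input replacing g.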
Olson, Matthew T.; Breaud, Autumn; Harlan, Robert; Emezienna, Nkechinyere; Schools, Sabitha; Yergey, Alfred L.; Clarke, William
2014-01-01
BACKGROUND The addition of a calibration curve with every run is both time-consuming and expensive for clinical mass spectrometry assays. We present alternative calibration strategies that eliminate the need for a calibration curve except as required by laboratory regulations. METHODS We measured serum nortriptyline concentrations prospectively in 68 patients on 16 days over a 2-month period using a method employing calibration schemes that relied on the measurement of the response ratio (RR) corrected by the response factor (RF), i.e., a measurement of the RR for an equimolar mixture of the analyte and internal standard. Measurements were taken with contemporaneous RF (cRF) measurements as well as sporadic RF (sRF) measurements. The measurements with these alternative calibration schemes were compared against the clinical results obtained by interpolation on a calibration curve, and those differences were evaluated for analytical and clinical significance. RESULTS The differences between the values measured by cRF and sRF calibration and interpolation on a calibration curve were not significant (P = 0.088 and P = 0.091, respectively). Both the cRF- and sRF-based calibration results demonstrated a low mean bias against the calibration curve interpolations of 3.69% (95% CI, −15.8% to 23.2%) and 3.11% (95% CI, −16.4% to 22.6%), respectively. When these results were classified as subtherapeutic, therapeutic, or supratherapeutic, there was categorical agreement in 95.6% of the cRF calibration results and 94.1% of the sRF results. CONCLUSIONS cRF and sRF calibration in a clinically validated liquid chromatography–tandem mass spectrometry assay yields results that are analytically and clinically commensurate to those produced by interpolation with a calibration curve. PMID:23426427
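The RR/RF scheme described above reduces to a single-point relation: the measured response ratio divided by the response factor (the RR of an equimolar analyte/internal-standard mixture) scales the known internal-standard concentration. A minimal sketch, with illustrative peak areas and assuming molar concentrations:

```python
def response_factor(area_analyte_eq, area_is_eq):
    """RF: response ratio measured for an equimolar analyte/IS mixture."""
    return area_analyte_eq / area_is_eq

def concentration(area_analyte, area_is, rf, c_internal_standard):
    """Single-point calibration: the response ratio (RR) corrected by the
    response factor (RF) scales the internal-standard concentration."""
    rr = area_analyte / area_is
    return (rr / rf) * c_internal_standard

# Illustrative values (not from the study):
rf = response_factor(1.15e6, 1.00e6)          # RF = 1.15
c = concentration(8.05e5, 1.00e6, rf, 100.0)  # IS at 100 ng/mL -> 70 ng/mL
```

This is why no interpolation on a multi-point curve is needed: one equimolar measurement (contemporaneous or sporadic) fixes the proportionality.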
Improved Z-scan adjustment to thermal nonlinearities by including nonlinear absorption
NASA Astrophysics Data System (ADS)
Severiano-Carrillo, I.; Alvarado-Méndez, E.; Trejo-Durán, M.; Méndez-Otero, M. M.
2017-08-01
We propose a modified mathematical model of thermal optical nonlinearities which allows us to obtain the nonlinear refraction index and the nonlinear absorption coefficient with only one measurement. This modification is motivated by the influence that nonlinear absorption has on the measurement of the nonlinear refraction index at far field when the material presents a large nonlinearity. This model, in which nonlinear absorption is considered in order to adjust the curves of the nonlinear refraction index obtained by the Z-scan technique, has the best agreement with experimental data. The model is validated with two ionic liquids and the organic material Eysenhardtia polystachya, in thin media. We present these results after comparing our proposed model to other reported models.
Linearization of dose-response curve of the radiochromic film dosimetry system
Devic, Slobodan; Tomic, Nada; Aldelaijan, Saad; DeBlois, Francois; Seuntjens, Jan; Chan, Maria F.; Lewis, Dave
2012-08-15
Purpose: Despite the numerous advantages of the radiochromic film dosimeter (high spatial resolution, near-tissue equivalence, low energy dependence), to measure a relative dose distribution with film one must first measure an absolute dose (following a previously established reference dosimetry protocol) and then convert the measured absolute dose values into relative doses. In this work, we present the results of our efforts to obtain a functional form that linearizes the inherently nonlinear dose-response curve of the radiochromic film dosimetry system. Methods: The functional form [ζ = −netOD^(2/3)/ln(netOD)] was derived from calibration curves of various previously established radiochromic film dosimetry systems. In order to test the invariance of the proposed functional form with respect to the film model used, we tested it with three different GAFCHROMIC™ film models (EBT, EBT2, and EBT3) irradiated to various doses and scanned on the same scanner. For one of the film models (EBT2), we tested the invariance of the functional form to the scanner model used by scanning irradiated film pieces with three different flatbed scanner models (Epson V700, 1680, and 10000XL). To test our hypothesis that the proposed functional argument linearizes the response of the radiochromic film dosimetry system, verification tests were performed in clinical applications: percent depth dose measurements, IMRT quality assurance (QA), and brachytherapy QA. Results: The obtained R² values indicate that the choice of the functional form of the new argument appropriately linearizes the dose response of the radiochromic film dosimetry system we used. The linear behavior was insensitive to both the film model and the flatbed scanner model used. Measured PDD values using the green channel response of the GAFCHROMIC™ EBT3 film model are well within a ±2% window of the local relative dose value when compared to the tabulated Cobalt-60 data. It was also
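The linearizing argument can be applied directly: compute ζ = −netOD^(2/3)/ln(netOD) for each film reading and fit dose against ζ with a straight line. The netOD values below are illustrative, not the paper's measured data:

```python
import numpy as np

def zeta(net_od):
    """Linearizing argument: zeta = -netOD^(2/3) / ln(netOD).
    For 0 < netOD < 1, ln(netOD) < 0, so zeta is positive."""
    net_od = np.asarray(net_od, dtype=float)
    return -net_od ** (2.0 / 3.0) / np.log(net_od)

# Illustrative netOD readings for a set of known doses (Gy)
doses = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
net_od = np.array([0.05, 0.09, 0.16, 0.28, 0.47])

z = zeta(net_od)
slope, intercept = np.polyfit(z, doses, 1)  # linear calibration in zeta
r2 = np.corrcoef(z, doses)[0, 1] ** 2       # check of linearity
```

A high R² for the dose-versus-ζ fit is the paper's criterion that the argument has linearized the film response.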
Calibration of CR-39 with monoenergetic protons
NASA Astrophysics Data System (ADS)
Xiaojiao, Duan; Xiaofei, Lan; Zhixin, Tan; Yongsheng, Huang; Shilun, Guo; Dawei, Yang; Naiyan, Wang
2009-10-01
Calibration of the solid state nuclear track detector CR-39 was carried out with very low-energy monoenergetic protons of 20-100 keV from a Cockcroft-Walton accelerator. To reduce the proton beam from the accelerator, a novel method was adopted by means of a high-voltage pulse generator. The irradiation time of the proton beam on each CR-39 sheet was shortened to one pulse with a duration of 100 ns, so that well-separated proton tracks at around 10⁴ cm⁻² could be irradiated, observed, and measured on the surface of the CR-39 detector after etching. The variation of track diameter with etching time, as well as the proton energy response curve, has been carefully calibrated for the first time in this very low energy region. The calibration shows that the optical limit for the observation of etched proton tracks in CR-39 is about, or a little lower than, 20 keV, above which the proton tracks can be seen clearly and the response curve can be used to distinguish protons from other ions and determine the energy of the protons. The extension of the proton response curve in CR-39 from the traditional 100 keV down to 20 keV is significant in retrieving information on protons produced in studies of nuclear physics, plasma physics, ultrahigh-intensity laser physics and laser acceleration.
Nonlinearly stacked low noise turbofan stator
NASA Technical Reports Server (NTRS)
Schuster, William B. (Inventor); Kontos, Karen B. (Inventor); Weir, Donald S. (Inventor); Nolcheff, Nick A. (Inventor); Gunaraj, John A. (Inventor)
2009-01-01
A nonlinearly stacked low noise turbofan stator vane having a characteristic curve that is characterized by a nonlinear sweep and a nonlinear lean is provided. The stator is in an axial fan or compressor turbomachinery stage that is comprised of a collection of vanes whose highly three-dimensional shape is selected to reduce rotor-stator and rotor-strut interaction noise while maintaining the aerodynamic and mechanical performance of the vane. The nonlinearly stacked low noise turbofan stator vane reduces noise associated with the fan stage of turbomachinery to improve environmental compatibility.
NASA Astrophysics Data System (ADS)
Vassiliou, Peter J.
2009-10-01
Cartan's method of moving frames is briefly recalled in the context of immersed curves in the homogeneous space of a Lie group G. The contact geometry of curves in low dimensional equi-affine geometry is then made explicit. This delivers the complete set of invariant data which solves the G-equivalence problem via a straightforward procedure, and which is, in some sense, a supplement to the equivariant method of Fels and Olver. Next, the contact geometry of curves in general Riemannian manifolds (M,g) is described. For the special case in which the isometries of (M,g) act transitively, it is shown that the contact geometry provides an explicit algorithmic construction of the differential invariants for curves in M. The inputs required for the construction consist only of the metric g and a parametrisation of the structure group SO(n); the group action is not required and no integration is involved. To illustrate the algorithm we explicitly construct complete sets of differential invariants for curves in the Poincaré half-space H3 and in a family of constant curvature 3-metrics. It is conjectured that similar results are possible in other Cartan geometries.
NASA Astrophysics Data System (ADS)
Kartutik, K.; Wibowo, W. E.; Pawiro, S. A.
2016-03-01
Accurate calculation of the dose distribution affected by tissue inhomogeneity is required in radiotherapy planning. This study was performed to compare radiotherapy plans generated with 3D-CRT, IMRT, and SBRT on the basis of CT-number calibration curves, for targets of different shapes in the lung (3D-CRT and IMRT) and in the spinal cord (SBRT). CT-number calibration curves were generated from measurements and introduced into the TPS; planning was then performed for 3D-CRT, IMRT, and SBRT with 7 and 15 radiation fields. Afterwards, plan evaluation was performed by comparing the DVH curves, the homogeneity index (HI), and the conformity index (CI). 3D-CRT and IMRT produced the lowest HI with the calibration curve of the CIRS 002LFC phantom, with values of 0.24 and 10, whereas SBRT produced the lowest HI on a linear calibration curve, with a value of 0.361. The highest CI in the IMRT and SBRT techniques, achieved using a linear calibration curve, was 0.97 and 1.77, respectively. For 3D-CRT, the highest CI was obtained using the calibration curve of the CIRS 062M phantom, with a value of 0.45. From the CI and HI results, it is concluded that the measured CT-number calibration curves do not differ significantly from Schneider's calibration curve, and that inverse planning gives better results than forward planning.
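The HI and CI used above are standard plan-quality metrics; common definitions (which may differ in detail from those used in this study) are the ICRU-83 homogeneity index (D2% − D98%)/D50% and the RTOG conformity index, the prescription-isodose volume divided by the target volume. A minimal sketch on voxel arrays:

```python
import numpy as np

def dose_percentile(dose_voxels, pct):
    """D_pct: minimum dose received by the hottest pct% of the volume."""
    return np.percentile(dose_voxels, 100.0 - pct)

def homogeneity_index(dose_voxels):
    """ICRU-83 homogeneity index: (D2 - D98) / D50 (0 = perfectly uniform)."""
    d2 = dose_percentile(dose_voxels, 2)
    d98 = dose_percentile(dose_voxels, 98)
    d50 = dose_percentile(dose_voxels, 50)
    return (d2 - d98) / d50

def conformity_index(target_mask, dose, prescription):
    """RTOG conformity index: prescription-isodose volume / target volume."""
    v_ri = np.count_nonzero(dose >= prescription)
    tv = np.count_nonzero(target_mask)
    return v_ri / tv
```

A perfectly uniform target dose gives HI = 0, and a prescription isodose exactly covering the target gives CI = 1; values far from these indicate hot/cold spots or over-/under-coverage.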
Calibration facility safety plan
NASA Technical Reports Server (NTRS)
Fastie, W. G.
1971-01-01
A set of requirements is presented to insure the highest practical standard of safety for the Apollo 17 Calibration Facility in terms of identifying all critical or catastrophic type hazard areas. Plans for either counteracting or eliminating these areas are presented. All functional operations in calibrating the ultraviolet spectrometer and the testing of its components are described.
NASA Technical Reports Server (NTRS)
Markham, Brian; Morfitt, Ron; Kvaran, Geir; Biggar, Stuart; Leisso, Nathan; Czapla-Myers, Jeff
2011-01-01
Goals: (1) Present an overview of the pre-launch radiance, reflectance & uniformity calibration of the Operational Land Imager (OLI) (1a) Transfer to orbit/heliostat (1b) Linearity (2) Discuss on-orbit plans for radiance, reflectance and uniformity calibration of the OLI
Photogrammetric camera calibration
Tayman, W.P.; Ziemann, H.
1984-01-01
Section 2 (Calibration) of the document "Recommended Procedures for Calibrating Photogrammetric Cameras and Related Optical Tests" from the International Archives of Photogrammetry, Vol. XIII, Part 4, is reviewed in the light of recent practical work, and suggestions for changes are made. These suggestions are intended as a basis for further discussion. © 1984.
Sandia WIPP calibration traceability
Schuhen, M.D.; Dean, T.A.
1996-05-01
This report summarizes the work performed to establish calibration traceability for the instrumentation used by Sandia National Laboratories at the Waste Isolation Pilot Plant (WIPP) during testing from 1980-1985. Identifying the calibration traceability is an important part of establishing a pedigree for the data and is part of the qualification of existing data. In general, the requirement states that the calibration of Measuring and Test equipment must have a valid relationship to nationally recognized standards or the basis for the calibration must be documented. Sandia recognized that just establishing calibration traceability would not necessarily mean that all QA requirements were met during the certification of test instrumentation. To address this concern, the assessment was expanded to include various activities.
Leslie, Mark; Holloway, Charles A
2006-01-01
When a company launches a new product into a new market, the temptation is to immediately ramp up sales force capacity to gain customers as quickly as possible. But hiring a full sales force too early just causes the firm to burn through cash and fail to meet revenue expectations. Before it can sell an innovative product efficiently, the entire organization needs to learn how customers will acquire and use it, a process the authors call the sales learning curve. The concept of a learning curve is well understood in manufacturing. Employees transfer knowledge and experience back and forth between the production line and purchasing, manufacturing, engineering, planning, and operations. The sales learning curve unfolds similarly through the give-and-take between the company--marketing, sales, product support, and product development--and its customers. As customers adopt the product, the firm modifies both the offering and the processes associated with making and selling it. Progress along the manufacturing curve is measured by tracking cost per unit: The more a firm learns about the manufacturing process, the more efficient it becomes, and the lower the unit cost goes. Progress along the sales learning curve is measured in an analogous way: The more a company learns about the sales process, the more efficient it becomes at selling, and the higher the sales yield. As the sales yield increases, the sales learning process unfolds in three distinct phases--initiation, transition, and execution. Each phase requires a different size--and kind--of sales force and represents a different stage in a company's production, marketing, and sales strategies. Adjusting those strategies as the firm progresses along the sales learning curve allows managers to plan resource allocation more accurately, set appropriate expectations, avoid disastrous cash shortfalls, and reduce both the time and money required to turn a profit.
Escudero, Carlos
2009-08-15
Stochastic growth phenomena on curved interfaces are studied by means of stochastic partial differential equations. These are derived as counterparts of linear planar equations on a curved geometry after a reparametrization invariance principle has been applied. We examine differences and similarities with the classical planar equations. Some characteristic features are the loss of correlation through time and a particular behavior of the average fluctuations. Dependence on the metric is also explored. The diffusive model that propagates correlations ballistically in the planar situation is particularly interesting, as this propagation becomes nonuniversal in the new regime.
Calibration method for spectroscopic systems
Sandison, David R.
1998-01-01
Calibration spots of optically-characterized material placed in the field of view of a spectroscopic system allow calibration of the spectroscopic system. Response from the calibration spots is measured and used to calibrate for varying spectroscopic system operating parameters. The accurate calibration achieved allows quantitative spectroscopic analysis of responses taken at different times, different excitation conditions, and of different targets.
A Test Characteristic Curve Linking Method for the Testlet Model
ERIC Educational Resources Information Center
Li, Yanmei; Bolt, Daniel M.; Fu, Jianbin
2005-01-01
When tests are made up of testlets, a testlet-based item response theory (IRT) model may be used to account for local dependence among items from a common testlet. This study presents a new test characteristic curve method to link calibrations based on the Bradlow, Wainer, and Wang (1999) testlet model. Procedures for calculating the test…
Nonlinear coherent destruction of tunneling
Luo Xiaobing; Xie Qiongtao; Wu Biao
2007-11-15
We study theoretically two coupled periodically curved optical waveguides with Kerr nonlinearity. We find that the tunneling between the waveguides can be suppressed in a wide range of parameters. This suppression of tunneling is found to be related to the coherent destruction of tunneling in a linear medium, which in contrast occurs only at isolated parameter points. Therefore, we call this suppression nonlinear coherent destruction of tunneling. This localization phenomenon can be observed readily with current experimental capability; it may also be observable in a different physical system, the Bose-Einstein condensate.
Forward model nonlinearity versus inverse model nonlinearity
Mehl, S.
2007-01-01
The issue of concern is the impact of forward model nonlinearity on the nonlinearity of the inverse model. The question posed is, "Does increased nonlinearity in the head solution (forward model) always result in increased nonlinearity in the inverse solution (estimation of hydraulic conductivity)?" It is shown that the two nonlinearities are separate, and it is not universally true that increased forward model nonlinearity increases inverse model nonlinearity. © 2007 National Ground Water Association.
In, Visarath; Longhini, Patrick; Kho, Andy; Neff, Joseph D; Leung, Daniel; Liu, Norman; Meadows, Brian K; Gordon, Frank; Bulsara, Adi R; Palacios, Antonio
2012-12-01
The nonlinear channelizer is an integrated circuit made up of large parallel arrays of analog nonlinear oscillators, which, collectively, serve as a broad-spectrum analyzer with the ability to receive complex signals containing multiple frequencies and instantaneously lock-on or respond to a received signal in a few oscillation cycles. The concept is based on the generation of internal oscillations in coupled nonlinear systems that do not normally oscillate in the absence of coupling. In particular, the system consists of unidirectionally coupled bistable nonlinear elements, where the frequency and other dynamical characteristics of the emergent oscillations depend on the system's internal parameters and the received signal. These properties and characteristics are being employed to develop a system capable of locking onto any arbitrary input radio frequency signal. The system is efficient by eliminating the need for high-speed, high-accuracy analog-to-digital converters, and compact by making use of nonlinear coupled systems to act as a channelizer (frequency binning and channeling), a low noise amplifier, and a frequency down-converter in a single step which, in turn, will reduce the size, weight, power, and cost of the entire communication system. This paper covers the theory, numerical simulations, and some engineering details that validate the concept at the frequency band of 1-4 GHz.
Calibration-free optical chemical sensors
DeGrandpre, Michael D.
2006-04-11
An apparatus and method for taking absorbance-based chemical measurements are described. In a specific embodiment, an indicator-based pCO2 (partial pressure of CO2) sensor displays sensor-to-sensor reproducibility and measurement stability. These qualities are achieved by: 1) renewing the sensing solution, 2) allowing the sensing solution to reach equilibrium with the analyte, and 3) calculating the response from a ratio of the indicator solution absorbances, which are determined relative to a blank solution. Careful solution preparation, wavelength calibration, and stray light rejection also contribute to this calibration-free system. Three pCO2 sensors were calibrated and each had response curves which were essentially identical within the uncertainty of the calibration. Long-term laboratory and field studies showed the response had no drift over extended periods (months). The theoretical response, determined from thermodynamic characterization of the indicator solution, also predicted the observed calibration-free performance.
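The response calculation in step 3 is, at bottom, Beer-Lambert absorbance measured against a blank. A minimal sketch, with all intensity values invented for illustration:

```python
import math

def absorbance(i_sample: float, i_blank: float, i_dark: float = 0.0) -> float:
    """Absorbance relative to a blank, A = -log10(T), with the
    transmittance T computed after dark-signal subtraction."""
    t = (i_sample - i_dark) / (i_blank - i_dark)
    return -math.log10(t)

# Ratio of two indicator absorbances, which makes the response
# insensitive to path length and total indicator concentration.
a1 = absorbance(2500.0, 10000.0)   # hypothetical channel 1: A ~ 0.602
a2 = absorbance(7943.0, 10000.0)   # hypothetical channel 2: A ~ 0.100
print(round(a1 / a2, 2))
```

Because both absorbances are referenced to the same blank, instrument throughput drifts cancel in the ratio, which is the core of the sensor's calibration-free behavior.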
Global calibration of unleveled theodolite using angular distance constraints.
Zheng, Xuehan; Wei, Zhenzhong; Zhang, Guangjun
2016-11-20
The theodolite is an important optical measurement instrument in many applications. Its global calibration, including position and orientation, is a prerequisite for measurement. Most global calibration methods require the theodolite to be leveled precisely, which is time-consuming and susceptible to error. We propose a global calibration method without leveling: it solves for position using the angular distances of control points by nonlinear optimization and then computes the orientation parameters (rotation matrix) linearly from the position results. Furthermore, global calibration of multiple theodolites is also introduced. In addition, we introduce a method that computes the dip direction and tilt angle by decomposing the rotation matrix. We evaluate the calibration algorithms on both computer simulations and real data, demonstrating the effectiveness of the techniques.
NASA Astrophysics Data System (ADS)
Hodge, Philip E.; Kaiser, M. E.; Keyes, C. D.; Ake, T. B.; Aloisi, A.; Friedman, S. D.; Oliveira, C. M.; Shaw, B.; Sahnow, D. J.; Penton, S. V.; Froning, C. S.; Beland, S.; Osterman, S.; Green, J.; COS/STIS STScI Team; IDT, COS
2008-05-01
The Cosmic Origins Spectrograph (COS; Green, J., et al. 2000, Proc. SPIE, 4013) will be installed in the Hubble Space Telescope (HST) during the next servicing mission. This will be the most sensitive ultraviolet spectrograph ever flown aboard HST. The program (CALCOS) for pipeline calibration of HST/COS data has been developed by the Space Telescope Science Institute. As with other HST pipelines, CALCOS uses an association table to list the data files to be included, and it employs header keywords to specify the calibration steps to be performed and the reference files to be used. COS includes both a cross delay line detector for the far ultraviolet (FUV) and a MAMA detector for the near ultraviolet (NUV). CALCOS uses a common structure for both channels, but the specific calibration steps differ. The calibration steps include pulse-height filtering and geometric correction for FUV, and flat-field, deadtime, and Doppler correction for both detectors. A 1-D spectrum will be extracted and flux calibrated. Data will normally be taken in TIME-TAG mode, recording the time and location of each detected photon, although ACCUM mode will also be supported. The wavelength calibration uses an on-board spectral line lamp. To enable precise wavelength calibration, default operations will simultaneously record the science target and lamp spectrum by executing brief (tag-flash) lamp exposures at least once per external target exposure.
ERIC Educational Resources Information Center
Horton, Dawn M.
2001-01-01
This article reviews the history of the bell curve and its application to gifted education and suggests rejection of this paradigm in favor of a focus on criteria rather than norms and a better understanding of the distribution and structure of intelligence. (Contains references.) (DB)
ERIC Educational Resources Information Center
Bausell, R. Barker
1995-01-01
This editorial provides an informal review of "The Bell Curve" (Herrnstein and Murray, 1994). The book, packaged as scientific writing, is an attack on affirmative action and on government attempts to foster egalitarianism. It is a political treatise that assumes that racial differences in intelligence are valid and genetic. (SLD)
ERIC Educational Resources Information Center
Lawes, Jonathan F.
2013-01-01
Graphing polar curves typically involves a combination of three traditional techniques, all of which can be time-consuming and tedious. However, an alternative method--graphing the polar function on a rectangular plane--simplifies graphing, increases student understanding of the polar coordinate system, and reinforces graphing techniques learned…
Textbook Factor Demand Curves.
ERIC Educational Resources Information Center
Davis, Joe C.
1994-01-01
Maintains that teachers and textbook graphics follow the same basic pattern in illustrating changes in demand curves when product prices increase. Asserts that the use of computer graphics will enable teachers to be more precise in their graphic presentation of price elasticity. (CFR)
ERIC Educational Resources Information Center
Harper, Suzanne R.; Driskell, Shannon
2005-01-01
Graphic tips for using the Geometer's Sketchpad (GSP) are described. The methods to import an image into GSP, define a coordinate system, plot points and curve fit the function using a graphical calculator are demonstrated where the graphic features of GSP allow teachers to expand the use of the technology application beyond the classroom.
ERIC Educational Resources Information Center
Hunter, Walter M.
This document contains detailed directions for constructing a device that mechanically produces the three-dimensional shape resulting from the rotation of any algebraic line or curve around either axis on the coordinate plane. The device was developed in response to student difficulty in visualizing, and thus grasping, the mathematical principles…
Straightening Out Learning Curves
ERIC Educational Resources Information Center
Corlett, E. N.; Morecombe, V. J.
1970-01-01
The basic mathematical theory behind learning curves is explained, together with implications for clerical and industrial training, evaluation of skill development, and prediction of future performance. Brief studies of textile worker and typist training are presented to illustrate such concepts as the reduction fraction (a consistent decrease in…
ERIC Educational Resources Information Center
Paulton, Richard J. L.
1991-01-01
A procedure that allows students to view an entire bacterial growth curve during a two- to three-hour student laboratory period is described. Observations of the lag phase, logarithmic phase, maximum stationary phase, and phase of decline are possible. A nonpathogenic, marine bacterium is used in the investigation. (KR)
Nonlinear acoustic propagation in two-dimensional ducts
NASA Technical Reports Server (NTRS)
Nayfeh, A. H.; Tsai, M.-S.
1974-01-01
The method of multiple scales is used to obtain a second-order uniformly valid expansion for the nonlinear acoustic wave propagation in a two-dimensional duct whose walls are treated with a nonlinear acoustic material. The wave propagation in the duct is characterized by the unsteady nonlinear Euler equations. The results show that nonlinear effects tend to flatten and broaden the absorption versus frequency curve, in qualitative agreement with the experimental observations. Moreover, the effect of the gas nonlinearity increases with increasing sound frequency, whereas the effect of the material nonlinearity decreases with increasing sound frequency.
DIRBE External Calibrator (DEC)
NASA Technical Reports Server (NTRS)
Wyatt, Clair L.; Thurgood, V. Alan; Allred, Glenn D.
1987-01-01
Under NASA Contract No. NAS5-28185, the Center for Space Engineering at Utah State University has produced a calibration instrument for the Diffuse Infrared Background Experiment (DIRBE). DIRBE is one of the instruments aboard the Cosmic Background Experiment Observatory (COBE). The calibration instrument is referred to as the DEC (Dirbe External Calibrator). DEC produces a steerable, infrared beam of controlled spectral content and intensity and with selectable point source or diffuse source characteristics, that can be directed into the DIRBE to map fields and determine response characteristics. This report discusses the design of the DEC instrument, its operation and characteristics, and provides an analysis of the systems capabilities and performance.
Airdata Measurement and Calibration
NASA Technical Reports Server (NTRS)
Haering, Edward A., Jr.
1995-01-01
This memorandum provides a brief introduction to airdata measurement and calibration. Readers will learn about typical test objectives, quantities to measure, and flight maneuvers and operations for calibration. The memorandum informs readers about tower-flyby, trailing cone, pacer, radar-tracking, and dynamic airdata calibration maneuvers. Readers will also begin to understand how some data analysis considerations and special airdata cases, including high-angle-of-attack flight, high-speed flight, and nonobtrusive sensors are handled. This memorandum is not intended to be all inclusive; this paper contains extensive reference and bibliography sections.
Dynamic Pressure Calibration Standard
NASA Technical Reports Server (NTRS)
Schutte, P. C.; Cate, K. H.; Young, S. D.
1986-01-01
Vibrating columns of fluid used to calibrate transducers. Dynamic pressure calibration standard developed for calibrating flush diaphragm-mounted pressure transducers. Pressures up to 20 kPa (3 psi) accurately generated over frequency range of 50 to 1,800 Hz. System includes two conically shaped aluminum columns one 5 cm (2 in.) high for low pressures and another 11 cm (4.3 in.) high for higher pressures, each filled with viscous fluid. Each column mounted on armature of vibration exciter, which imparts sinusoidally varying acceleration to fluid column. Signal noise low, and waveform highly dependent on quality of drive signal in vibration exciter.
NASA Astrophysics Data System (ADS)
Pappalardo, Gelsomina; Freudenthaler, Volker; Nicolae, Doina; Mona, Lucia; Belegante, Livio; D'Amico, Giuseppe
2016-06-01
This paper presents the newly established Lidar Calibration Centre, a distributed infrastructure in Europe, whose goal is to offer services for complete characterization and calibration of lidars and ceilometers. Mobile reference lidars, laboratories for testing and characterization of optics and electronics, facilities for inspection and debugging of instruments, as well as for training in good practices are open to users from the scientific community, operational services and private sector. The Lidar Calibration Centre offers support for trans-national access through the EC HORIZON2020 project ACTRIS-2.
Compact radiometric microwave calibrator
Fixsen, D. J.; Wollack, E. J.; Kogut, A.; Limon, M.; Mirel, P.; Singal, J.; Fixsen, S. M.
2006-06-15
The calibration methods for the ARCADE II instrument are described and the accuracy estimated. The Steelcast coated aluminum cones which comprise the calibrator have a low reflection while maintaining 94% of the absorber volume within 5 mK of the base temperature (modeled). The calibrator demonstrates an absorber with the active part less than one wavelength thick and only marginally larger than the mouth of the largest horn and yet black (less than -40 dB or 0.01% reflection) over five octaves in frequency.
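The quoted "-40 dB or 0.01% reflection" figure is a direct decibel-to-power-fraction conversion; a quick check:

```python
def db_to_power_fraction(db: float) -> float:
    """Convert a level in dB to a power fraction: 10**(dB/10)."""
    return 10.0 ** (db / 10.0)

r = db_to_power_fraction(-40.0)
print(f"{r:.4%}")   # -40 dB is a power fraction of 1e-4, i.e. 0.01%
```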
Method for calibration accuracy improvement of projector-camera-based structured light system
NASA Astrophysics Data System (ADS)
Nie, Lei; Ye, Yuping; Song, Zhan
2017-07-01
Calibration is a critical step for the projector-camera-based structured light system (SLS). Conventional SLS calibration means usually use the calibrated camera to calibrate the projector device, and the optimization of calibration parameters is applied to minimize the two-dimensional (2-D) reprojection errors. A three-dimensional (3-D)-based method is proposed for the optimization of SLS calibration parameters. The system is first calibrated with traditional calibration methods to obtain the primary calibration parameters. Then, a reference plane with some precisely printed markers is used for the optimization of primary calibration results. Three metric error criteria are introduced to evaluate the 3-D reconstruction accuracy of the reference plane. By treating all the system parameters as a global optimization problem and using the primary calibration parameters as initial values, a nonlinear multiobjective optimization problem can be established and solved. Compared with conventional calibration methods that adopt the 2-D reprojection errors for the camera and projector separately, a global optimal calibration result can be obtained by the proposed calibration procedure. Experimental results showed that, with the optimized calibration parameters, measurement accuracy and 3-D reconstruction quality of the system can be greatly improved.
Marine04 Marine radiocarbon age calibration, 26-0 ka BP
Hughen, K; Baille, M; Bard, E; Beck, J; Bertrand, C; Blackwell, P; Buck, C; Burr, G; Cutler, K; Damon, P; Edwards, R; Fairbanks, R; Friedrich, M; Guilderson, T; Kromer, B; McCormac, F; Manning, S; Bronk-Ramsey, C; Reimer, P; Reimer, R; Remmele, S; Southon, J; Stuiver, M; Talamo, S; Taylor, F; der Plicht, J v; Weyhenmeyer, C
2004-11-01
New radiocarbon calibration curves, IntCal04 and Marine04, have been constructed and internationally ratified to replace the terrestrial and marine components of IntCal98. The new calibration datasets extend an additional 2000 years, from 0-26 ka cal BP (Before Present, 0 cal BP = AD 1950), and provide much higher resolution, greater precision and more detailed structure than IntCal98. For the Marine04 curve, dendrochronologically dated tree-ring samples, converted with a box-diffusion model to marine mixed-layer ages, cover the period from 0-10.5 ka cal BP. Beyond 10.5 ka cal BP, high-resolution marine data become available from foraminifera in varved sediments and U/Th-dated corals. The marine records are corrected with site-specific ¹⁴C reservoir age information to provide a single global marine mixed-layer calibration from 10.5-26.0 ka cal BP. A substantial enhancement relative to IntCal98 is the introduction of a random walk model, which takes into account the uncertainty in both the calendar age and the radiocarbon age to calculate the underlying calibration curve. The marine datasets and calibration curve for marine samples from the surface mixed layer (Marine04) are discussed here. The tree-ring datasets, sources of uncertainty, and regional offsets are presented in detail in a companion paper by Reimer et al.
Calibration Fixture For Anemometer Probes
NASA Technical Reports Server (NTRS)
Lewis, Charles R.; Nagel, Robert T.
1993-01-01
Fixture facilitates calibration of three-dimensional sideflow thermal anemometer probes. With fixture, probe oriented at number of angles throughout its design range. Readings calibrated as function of orientation in airflow. Calibration repeatable and verifiable.
Progress Report for Adapting APASS Data Releases for the Calibration of Harvard Plates
NASA Astrophysics Data System (ADS)
Los, E. J.
2012-06-01
The Digital Access to a Sky Century @ Harvard (DASCH) has scanned over 19,000 plates and developed a pipeline to calibrate these plates using existing photometric catalogues. This paper presents preliminary results from the use of the AAVSO Photometric All Sky Survey (APASS) catalogue releases DR1, DR2, and DR3 for DASCH plate calibration. In the optimum magnitude 10-12 range of the DASCH patrol plates, the median light curve RMS with APASS calibration is 0.10-0.12 magnitude, an improvement from the 0.1-0.15 magnitude median light curve RMS with GSC 2.2.3 calibration.
Diffusion Geometry Based Nonlinear Methods for Hyperspectral Change Detection
2010-05-12
automatically independent components of the spectrum building an empirical model of the constituents of the scene. It is precisely through this model that...This method enables coherent analysis of data from a multiplicity of sources generalizing signal processing to a nonlinear setting. By building ...local whitening to build a global explicit parameterization invariant under nonlinear perturbations of the spectrum, calibrating the data
Relativistic electron in curved magnetic fields
NASA Technical Reports Server (NTRS)
An, S.
1985-01-01
Making use of the perturbation method based on nonlinear differential equation theory, the author investigates the classical motion of a relativistic electron in a class of curved magnetic fields of the form B = (0, B_phi, 0) in cylindrical coordinates (R, phi, Z). Under general astrophysical conditions the author derives analytical expressions for the orbit, pitch angle, etc., of the electron in their dependence on parameters characterizing the magnetic field and the electron. The effects of the non-zero curvature of magnetic field lines on the motion of electrons and the applicability of these results to astrophysics are also discussed.
Roundness calibration standard
Burrus, Brice M.
1984-01-01
A roundness calibration standard is provided with a first arc constituting the major portion of a circle and a second arc lying between the remainder of the circle and the chord extending between the ends of said first arc.
NASA Technical Reports Server (NTRS)
Soli, G. A.; Blaes, B. R.; Beuhler, M. G.
1994-01-01
Custom proton sensitive SRAM chips are being flown on the BMDO Clementine missions and Space Technology Research Vehicle experiments. This paper describes the calibration procedure for the SRAM proton detectors and their response to the space environment.
C. Ahlers; H. Liu
2000-03-12
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, and they are used by Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
Meteorological radar calibration
NASA Technical Reports Server (NTRS)
Hodge, D. B.
1978-01-01
A meteorological radar calibration technique is developed. It is found that the integrated, range corrected, received power saturates under intense rain conditions in a manner analogous to that encountered for the radiometric path temperature. Furthermore, it is found that this saturation condition establishes a bound which may be used to determine an absolute radar calibration for the case of radars operating at attenuating wavelengths. In the case of less intense rainfall or for radars at nonattenuating wavelengths, the relationship for direct calibration in terms of an independent measurement of radiometric path temperature is developed. This approach offers the advantage that the calibration is in terms of an independent measurement of the rainfall through the same elevated region as that viewed by the radar.
Traceable Pyrgeometer Calibrations
Dooraghi, Mike; Kutchenreiter, Mark; Reda, Ibrahim; Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Newman, Martina; Webb, Craig
2016-05-02
This presentation provides a high-level overview of the progress on the Broadband Outdoor Radiometer Calibrations for all shortwave and longwave radiometers that are deployed by the Atmospheric Radiation Measurement program.
Scanner calibration revisited.
Pozhitkov, Alexander E
2010-07-01
Calibration of a microarray scanner is critical for accurate interpretation of microarray results. Shi et al. (BMC Bioinformatics, 2005, 6, Art. No. S11 Suppl. 2.) reported usage of a Full Moon BioSystems slide for calibration. Inspired by the Shi et al. work, we have calibrated microarray scanners in our previous research. We were puzzled however, that most of the signal intensities from a biological sample fell below the sensitivity threshold level determined by the calibration slide. This conundrum led us to re-investigate the quality of calibration provided by the Full Moon BioSystems slide as well as the accuracy of the analysis performed by Shi et al. Signal intensities were recorded on three different microarray scanners at various photomultiplier gain levels using the same calibration slide from Full Moon BioSystems. Data analysis was conducted on raw signal intensities without normalization or transformation of any kind. Weighted least-squares method was used to fit the data. We found that initial analysis performed by Shi et al. did not take into account autofluorescence of the Full Moon BioSystems slide, which led to a grossly distorted microarray scanner response. Our analysis revealed that a power-law function, which is explicitly accounting for the slide autofluorescence, perfectly described a relationship between signal intensities and fluorophore quantities. Microarray scanners respond in a much less distorted fashion than was reported by Shi et al. Full Moon BioSystems calibration slides are inadequate for performing calibration. We recommend against using these slides.
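The power-law relationship with an explicit autofluorescence term can be sketched as follows. This is an illustrative reconstruction on synthetic data, not Pozhitkov's code: it uses an ordinary (unweighted) log-log least-squares fit rather than the weighted fit used in the paper, and assumes the autofluorescence offset can be read directly from blank spots:

```python
import math

def fit_power_law(quantities, intensities, autofluorescence):
    """Fit I = a*Q**b + c, with c (slide autofluorescence) taken as
    known, by ordinary least squares on the linearized form
    log(I - c) = log(a) + b*log(Q)."""
    xs = [math.log(q) for q in quantities]
    ys = [math.log(i - autofluorescence) for i in intensities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic scanner response: a=50, b=0.9, autofluorescence c=200 counts.
qs = [1, 2, 5, 10, 20, 50, 100]
data = [50 * q ** 0.9 + 200 for q in qs]
a, b = fit_power_law(qs, data, autofluorescence=200)
print(round(a, 3), round(b, 3))  # recovers (50.0, 0.9)
```

The point of the exercise mirrors the abstract's finding: ignoring the offset `c` and fitting a plain power law (or a line) to the raw intensities distorts the apparent scanner response, especially near the blank level.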
A simplified approach to calibrating ¹⁴C dates
Talma, A.S.; Vogel, J.C.
1993-01-01
The authors propose a simplified approach to the calibration of radiocarbon dates. They use splines through the tree-ring data as calibration curves, thereby eliminating a large part of the statistical scatter of the actual data points. To express the age range, they transform the ±1σ and ±2σ values of the BP age to calendar dates and interpret them as the 68% and 95% confidence intervals. This approach by-passes the conceptual problems of the transfer of individual probability values from the radiocarbon to the calendar age. They have adapted software to make this calibration possible.
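The proposed transfer of ±1σ limits through the calibration curve can be illustrated with a mock curve. The points below are invented for illustration (not real IntCal data), and piecewise-linear interpolation stands in for the spline:

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation (a stand-in for the spline)."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("outside calibration range")

# Mock calibration curve: radiocarbon age (BP) -> calendar age (cal BP).
bp  = [2000, 2100, 2200, 2300, 2400]
cal = [1950, 2090, 2240, 2350, 2470]

age, sigma = 2200, 50   # a measured radiocarbon age of 2200 +/- 50 BP
# Transform the +/-1 sigma BP limits to calendar dates and read them
# as the 68% confidence interval, as the authors propose.
lo = interp(age - sigma, bp, cal)
hi = interp(age + sigma, bp, cal)
print(lo, hi)  # 2165.0 2295.0
```

Because only the interval endpoints are mapped, the approach sidesteps transferring individual probability values, at the cost of ignoring wiggles inside the interval.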
A simple method to calibrate intensities of photographic slit spectrograms
NASA Astrophysics Data System (ADS)
Vogt, N.; Barrera, L. H.
1985-07-01
A wavelength-dependent intensity calibration of photographic spectrograms can be obtained through the spectrograph without any additional equipment beyond a simple neutral density filter of known transparency. This filter is introduced in the focal plane of the telescope covering part of the spectrograph slit. Exposure of the comparison lamps through the entire slit yields a calibration plate which shows a well defined density jump within each line. From the height of this jump (for many lines of widely ranging strengths) the characteristic curve can be derived. The method is described and compared to the classical calibration method with a tube sensitometer.
40 CFR 89.324 - Calibration of other equipment.
Code of Federal Regulations, 2010 CFR
2010-07-01
...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3...-grade air. (3) Calibrate on each normally used operating range with CH4 in air with nominal... factor for that range. If the deviation exceeds these limits, the best-fit non-linear equation...
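The quoted acceptance rule (deviation of each non-zero data point from the best-fit straight line within 2 percent of the point's value) can be sketched as a simple check. The data below are hypothetical analyzer readings, not from the regulation:

```python
def passes_linearity(points, tol=0.02):
    """Check the 2-percent rule: fit a straight line to (x, y)
    calibration points and require each non-zero point to deviate
    from the line by no more than tol * value."""
    xs, ys = zip(*points)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in points) \
            / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return all(abs(slope * x + intercept - y) <= tol * abs(y)
               for x, y in points if y != 0)

# A nearly linear response passes; a strongly curved one fails.
good = [(0, 0.0), (10, 20.0), (20, 40.1), (30, 59.9), (40, 80.0)]
bad  = [(0, 0.0), (10, 20.0), (20, 45.0), (30, 75.0), (40, 110.0)]
print(passes_linearity(good), passes_linearity(bad))  # True False
```

When the deviation limit is exceeded, the regulation falls back to a best-fit non-linear equation for that range rather than the straight-line calibration factor.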
A stochastic approximation method for assigning values to calibrators.
Schlain, B
1998-04-01
A new procedure is provided for transferring analyte concentration values from a reference material to production calibrators. This method is robust to calibration curve-fitting errors and can be accomplished using only one instrument and one set of reagents. An easily implemented stochastic approximation algorithm iteratively finds the appropriate analyte level of a standard prepared from a reference material that will yield the same average signal response as the new production calibrator. Alternatively, a production bulk calibrator material can be iteratively adjusted to give the same average signal response as some prespecified, fixed reference standard. In either case, the outputted value assignment of the production calibrator is the analyte concentration of the reference standard in the final iteration of the algorithm. Sample sizes are statistically determined as functions of known within-run signal response precisions and user-specified accuracy tolerances.
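The iterative matching of average signal responses can be sketched with a Robbins-Monro-style iteration. This is a sketch of the general idea, not Schlain's exact algorithm; the instrument response, noise level, gain schedule, and all numbers are hypothetical:

```python
import random

def assign_value(measure, y_target, x0, n_iter=2000, gain=0.5):
    """Stochastic approximation: iteratively adjust the analyte
    level x of a standard until its (noisy) signal response matches
    y_target on average. The gain/n step size is the classic
    Robbins-Monro schedule, which averages out measurement noise."""
    x = x0
    for n in range(1, n_iter + 1):
        x += (gain / n) * (y_target - measure(x))
    return x

random.seed(42)
# Hypothetical instrument: linear response plus measurement noise.
measure = lambda x: 2.0 * x + random.gauss(0.0, 0.05)
# Observed mean signal of the production calibrator, whose true
# analyte concentration (3.7, unknown to the algorithm) we want.
y_prod = 2.0 * 3.7
x_assigned = assign_value(measure, y_prod, x0=1.0)
print(round(x_assigned, 2))  # converges to 3.7
```

Because only signal responses are compared on the same instrument and reagent lot, the value transfer is insensitive to calibration curve-fitting errors, which is the robustness the abstract claims.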
Boyer, H.E.
1986-01-01
This Atlas was developed to serve engineers who are looking for fatigue data on a particular metal or alloy. Having these curves compiled in a single book will also facilitate the computerization of the involved data. It is pointed out that plans are under way to make the data in this book available in ASCII files for analysis by computer programs. S-N curves which typify effects of major variables are considered along with low-carbon steels, medium-carbon steels, alloy steels, HSLA steels, high-strength alloy steels, heat-resisting steels, stainless steels, maraging steels, cast irons, and heat-resisting alloys. Attention is also given to aluminum alloys, copper alloys, magnesium alloys, molybdenum, tin alloys, titanium and titanium alloys, zirconium, steel castings, closed-die forgings, powder metallurgy parts, composites, effects of surface treatments, and test results for component parts.
NASA Astrophysics Data System (ADS)
Brandenburg, J. P.
2013-08-01
Fault-propagation folds form an important trapping element in both onshore and offshore fold-thrust belts, and as such benefit from reliable interpretation. Building an accurate geologic interpretation of such structures requires palinspastic restorations, which are made more challenging by the interplay between folding and faulting. Trishear (Erslev, 1991; Allmendinger, 1998) is a useful tool to unravel this relationship kinematically, but is limited by a restriction to planar fault geometries, or at least planar fault segments. Here, new methods are presented for trishear along continuously curved reverse faults defining a flat-ramp transition. In these methods, rotation of the hanging wall above a curved fault is coupled to translation along a horizontal detachment. Including hanging wall rotation allows for investigation of structures with progressive backlimb rotation. Applications of the new algorithms are shown for two fault-propagation fold structures: the Turner Valley Anticline in Southwestern Alberta, and the Alpha Structure in the Niger Delta.
ERIC Educational Resources Information Center
Seider, Warren D.; Ungar, Lyle H.
1987-01-01
Describes a course in nonlinear mathematics courses offered at the University of Pennsylvania which provides an opportunity for students to examine the complex solution spaces that chemical engineers encounter. Topics include modeling many chemical processes, especially those involving reaction and diffusion, auto catalytic reactions, phase…
NASA Astrophysics Data System (ADS)
Kevorkian, J.
This report discusses research in the area of slowly varying nonlinear oscillatory systems. Some of the topics discussed are as follows: adiabatic invariants and transient resonance in very slowly varying Hamiltonian systems; sustained resonance in very slowly varying Hamiltonian systems; free-electron lasers with very slow wiggler taper; and bursting oscillators.
Simple Chaotic Flows with a Curve of Equilibria
NASA Astrophysics Data System (ADS)
Barati, Kosar; Jafari, Sajad; Sprott, Julien Clinton; Pham, Viet-Thanh
Using a systematic computer search, four simple chaotic flows with cubic nonlinearities were found that have the unusual feature of having a curve of equilibria. Such systems belong to a newly introduced category of chaotic systems with hidden attractors that are important and potentially problematic in engineering applications.
Symmetries for Galileons and DBI scalars on curved space
Goon, Garrett; Hinterbichler, Kurt; Trodden, Mark
2011-07-08
We introduced a general class of four-dimensional effective field theories which include curved space Galileons and DBI theories possessing nonlinear shift-like symmetries. These effective theories arise from purely gravitational actions and may prove relevant to the cosmology of both the early and late universe.
NASA Astrophysics Data System (ADS)
Vo, Martin
2017-08-01
Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying features which describe the objects to be searched, the software trains on a given training sample, and can then be used for unsupervised clustering to visualize the natural separation of the sample. The package can also be used for automatic tuning of method parameters (for example, the number of hidden neurons or the binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curves Classifier can also be used for simple downloading of light curves and all available information about queried stars. It can natively connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina, and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and a command line UI, the program can be used through a web interface. Users can create jobs for "training" methods on given objects, querying databases, and filtering outputs by trained filters. Pre-implemented descriptors, classifiers, and connectors can be picked by simple clicks and their parameters can be tuned by giving ranges of these values. All combinations are then calculated and the best one is used for creating the filter. Natural separation of the data can be visualized by unsupervised clustering.
Schulz, Douglas A.
2007-10-08
A biometric system suitable for validating user identity using only mouse movements and no specialized equipment is presented. Mouse curves (mouse movements with little or no pause between them) are individually classified and used to develop classification histograms, which are representative of an individual's typical mouse use. These classification histograms can then be compared to validate identity. This classification approach is suitable for providing continuous identity validation during an entire user session.
Torello, David; Kim, Jin-Yeon; Qu, Jianmin; Jacobs, Laurence J.
2015-03-31
This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β₁₁ is proposed based on a nonlinear least squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens, and a measured ratio β₁₁(7075)/β₁₁(2024) of 1.363 agrees well with previous literature and earlier work.
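The nonlinear least squares extraction described in this abstract can be illustrated with a minimal sketch. The model below (a second-harmonic amplitude growing linearly with propagation distance and damped by attenuation) and every symbol in it are assumptions chosen for illustration, not the authors' actual Rayleigh-wave model; the fit itself is a plain Gauss-Newton iteration.

```python
import numpy as np

def fit_beta_alpha(x, a1, y, beta0, alpha0, iters=50):
    """Gauss-Newton fit of the assumed model y = beta * a1**2 * x * exp(-alpha*x).

    x: propagation distances, a1: fundamental amplitude (assumed constant),
    y: measured second-harmonic amplitudes.  Returns (beta, alpha).
    """
    p = np.array([beta0, alpha0], dtype=float)
    for _ in range(iters):
        base = a1**2 * x * np.exp(-p[1] * x)            # model with beta factored out
        resid = y - p[0] * base
        # Jacobian columns: d(model)/d(beta), d(model)/d(alpha)
        J = np.column_stack([base, -p[0] * x * base])
        p += np.linalg.solve(J.T @ J, J.T @ resid)      # normal-equation GN step
    return p
```

In practice one would fit each specimen separately and report the ratio of the two fitted β values, as the abstract does for the 7075/2024 pair.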
NASA Astrophysics Data System (ADS)
Frønsdal, Christian; Kontsevich, Maxim
2007-02-01
Deformation quantization on varieties with singularities offers perspectives that are not found on manifolds. The Harrison component of Hochschild cohomology, vanishing on smooth manifolds, reflects information about singularities. The Harrison 2-cochains are symmetric and are interpreted in terms of abelian *-products. This paper begins a study of abelian quantization on plane curves over ℂ, being algebraic varieties of the form ℂ²/R, where R is a polynomial in two variables; that is, abelian deformations of the coordinate algebra ℂ[x,y]/(R). To understand the connection between the singularities of a variety and cohomology we determine the algebraic Hochschild (co)homology and its Barr-Gerstenhaber-Schack decomposition. Homology is the same for all plane curves ℂ[x,y]/(R), but the cohomology depends on the local algebra of the singularity of R at the origin. The Appendix, by Maxim Kontsevich, explains in modern mathematical language a way to calculate Hochschild and Harrison cohomology groups for algebras of functions on singular planar curves etc., based on Koszul resolutions.
DC-offset-free homodyne interferometer and its nonlinearity compensation.
Hu, Pengcheng; Zhu, Jinghao; Zhai, Xiaoyu; Tan, JiuBin
2015-04-06
This study presents an analysis of the cyclic nonlinearity in the homodyne interferometer starting from the interference principle. We present the design for an enhanced homodyne interferometer without DC offset, for which the nonlinearity model will not be influenced by the intensity of the measurement beam. Our experimental results show that the enhanced interferometer can suppress the nonlinearity to less than 0.5 nm with a system calibration involving gain adjustment and phase-correction methods.
Invited Article: Deep Impact instrument calibration
Klaasen, Kenneth P.; Mastrodemos, Nickolaos; A'Hearn, Michael F.; Farnham, Tony; Groussin, Olivier; Ipatov, Sergei; Li Jianyang; McLaughlin, Stephanie; Sunshine, Jessica; Wellnitz, Dennis; Baca, Michael; Delamere, Alan; Desnoyer, Mark; Thomas, Peter; Hampton, Donald; Lisse, Carey
2008-09-15
Calibration of NASA's Deep Impact spacecraft instruments allows reliable scientific interpretation of the images and spectra returned from comet Tempel 1. Calibrations of the four onboard remote sensing imaging instruments have been performed in the areas of geometric calibration, spatial resolution, spectral resolution, and radiometric response. Error sources such as noise (random, coherent, encoding, data compression), detector readout artifacts, scattered light, and radiation interactions have been quantified. The point spread functions (PSFs) of the medium resolution instrument and its twin impactor targeting sensor are near the theoretical minimum [≈1.7 pixels full width at half maximum (FWHM)]. However, the high resolution instrument camera was found to be out of focus with a PSF FWHM of ≈9 pixels. The charge coupled device (CCD) read noise is ≈1 DN. Electrical cross-talk between the CCD detector quadrants is correctable to <2 DN. The IR spectrometer response nonlinearity is correctable to ≈1%. Spectrometer read noise is ≈2 DN. The variation in zero-exposure signal level with time and spectrometer temperature is not fully characterized; currently corrections are good to ≈10 DN at best. Wavelength mapping onto the detector is known within 1 pixel; spectral lines have a FWHM of ≈2 pixels. About 1% of the IR detector pixels behave badly and remain uncalibrated. The spectrometer exhibits a faint ghost image from reflection off a beamsplitter. Instrument absolute radiometric calibration accuracies were determined generally to <10% using star imaging. Flat-field calibration reduces pixel-to-pixel response differences to ≈0.5% for the cameras and <2% for the spectrometer. A standard calibration image processing pipeline is used to produce archival image files for analysis by researchers.
Curved mesh generation and mesh refinement using Lagrangian solid mechanics
Persson, P.-O.; Peraire, J.
2008-12-31
We propose a method for generating well-shaped curved unstructured meshes using a nonlinear elasticity analogy. The geometry of the domain to be meshed is represented as an elastic solid. The undeformed geometry is the initial mesh of linear triangular or tetrahedral elements. The external loading results from prescribing a boundary displacement to be that of the curved geometry, and the final configuration is determined by solving for the equilibrium configuration. The deformations are represented using piecewise polynomials within each element of the original mesh. When the mesh is sufficiently fine to resolve the solid deformation, this method guarantees non-intersecting elements even for highly distorted or anisotropic initial meshes. We describe the method and the solution procedures, and we show a number of examples of two and three dimensional simplex meshes with curved boundaries. We also demonstrate how to use the technique for local refinement of non-curved meshes in the presence of curved boundaries.
Nonlinear Acoustical Assessment of Precipitate Nucleation
NASA Technical Reports Server (NTRS)
Cantrell, John H.; Yost, William T.
2004-01-01
The purpose of the present work is to show that measurements of the acoustic nonlinearity parameter in heat-treatable alloys as a function of heat treatment time can provide quantitative information about the kinetics of precipitate nucleation and growth in such alloys. Generally, information on the kinetics of phase transformations is obtained from time-sequenced electron microscopy and differential scanning microcalorimetry. The present nonlinear acoustical assessment of precipitation kinetics is based on the development of a multiparameter analytical model of the effects of precipitate nucleation and growth on the nonlinearity parameter of the alloy system. A nonlinear curve fit of the model equation to the experimental data is then used to extract the kinetic parameters related to the nucleation and growth of the targeted precipitate. The analytical model and curve fit are applied to the assessment of S' precipitation in aluminum alloy 2024 during artificial aging from the T4 to the T6 temper.
Clifford, Harry J. [Los Alamos, NM]
2011-03-22
A method and apparatus for mounting a calibration sphere to a calibration fixture for Coordinate Measurement Machine (CMM) calibration and qualification is described, decreasing the time required for such qualification, thus allowing the CMM to be used more productively. A number of embodiments are disclosed that allow for new and retrofit manufacture to perform as integrated calibration sphere and calibration fixture devices. This invention renders unnecessary the removal of a calibration sphere prior to CMM measurement of calibration features on calibration fixtures, thereby greatly reducing the time spent qualifying a CMM.
Fine Sun Sensor Field of View Calibration
NASA Technical Reports Server (NTRS)
Sedlak, Joseph E.; Hashmall, J.; Harman, Richard (Technical Monitor)
2002-01-01
The fine Sun sensor (FSS) used on many spacecraft consists of two independent single-axis sensors, nominally mounted perpendicularly, that detect Sun angle across a typical field of view of +/- 32 degrees. The nonlinear function that maps the measured counts into an observed angle is called the transfer function. The FSS transfer function provided by the manufacturer consists of nine parameters for each of the two sensitive axes. An improved transfer function has been previously reported that achieves a significant accuracy improvement across the entire field of view. This new function expands the parameter set to 12 coefficients per axis and includes cross terms combining counts from both axes. To make best use of the FSS for spacecraft attitude determination, it must be calibrated after launch. We are interested in simplifying the postlaunch calibration procedure for estimating improvements to the 24 parameters in the transfer function. This paper discusses how to recombine the terms of the transfer function to reduce their redundancy without decreasing its accuracy and then presents an attitude dependent procedure for estimating the parameters. The end result is a calibration algorithm that is easier to use and does not sacrifice accuracy. Results of calibration using on-orbit data are presented.
Curve Number estimation from rainfall-runoff data in the Brazilian Cerrado Biome
NASA Astrophysics Data System (ADS)
Oliveira, P. S.; Nearing, M.; Rodrigues, D. B.; Panachuki, E.; Wendland, E.
2013-12-01
The Brazilian Cerrado (Savanna) is considered one of the most important biomes for Brazilian water resources; meanwhile, it is experiencing major losses of its natural landscapes due to the pressures of food and energy production, which has caused changes in hydrological processes. To evaluate these changes, hydrologic models have been used. The Curve Number (SCS-CN) method has been widely employed to estimate direct runoff from a given rainfall event; however, there are some uncertainties in estimating this parameter, particularly for use in areas with native vegetation. The objectives of this study were to measure natural rainfall-driven rates of runoff under native Cerrado vegetation and under the main crops found in this biome, and to derive associated CN values from five methods. We used six plots of 5 × 20 m (100 m²) in size, with three replications of undisturbed Cerrado and three under bare soil (Ortic Quartzarenic Neosol, hydrological soil class A), and 10 plots of 3.5 × 22.15 m (77.5 m²), with two replications for pasture, soy, millet, sugarcane, and bare soil (Dystrophic Red Argisol, hydrological soil class B). Plots were monitored between October 2011 and April 2013. The five methods used to obtain CN values were the median, geometric mean, arithmetic mean, nonlinear least squares fit, and standard asymptotic fit. We found reasonable results for CN calibration for the undisturbed Cerrado only by using the nonlinear least squares fit. CN obtained from the standard table values was not adequate to estimate runoff for this condition. The standard table and the five CN methods presented satisfactory results for the other land covers studied. From our results we can suggest the best CN values for each land cover: Cerrado 49.8 (47.9-51.1), bare soil class-A 83.9 (74.4-93.4), bare soil class-B 88.3 (81.7-94.8), pasture 73.7 (62.9-84.5), soy 83.5 (80.6-86.4), millet 73.9 (67.4-80.4) and sugarcane 83.9 (80.6-87.3). These CN values and ranges provide guidance for
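The runoff equation behind this abstract is the standard SCS-CN relation; a minimal sketch (metric units, the conventional initial abstraction Ia = 0.2·S) of the forward equation, its per-event inversion, and the "median" estimator the abstract lists. The event data in the test are invented; only the equations are standard.

```python
def scs_runoff(P, CN):
    """Direct runoff Q (mm) for rainfall P (mm); standard SCS-CN with Ia = 0.2*S."""
    S = 25400.0 / CN - 254.0              # potential maximum retention (mm)
    Ia = 0.2 * S                          # initial abstraction
    return 0.0 if P <= Ia else (P - Ia) ** 2 / (P - Ia + S)

def cn_from_event(P, Q):
    """Invert the runoff equation for one (P, Q) event (closed form for Ia = 0.2*S)."""
    S = 5.0 * (P + 2.0 * Q - (4.0 * Q * Q + 5.0 * P * Q) ** 0.5)
    return 25400.0 / (S + 254.0)

def cn_median(events):
    """The 'median' estimator from the abstract: per-event CNs, then their median."""
    cns = sorted(cn_from_event(P, Q) for P, Q in events)
    n = len(cns)
    return cns[n // 2] if n % 2 else 0.5 * (cns[n // 2 - 1] + cns[n // 2])
```

The nonlinear least squares variant favored for the Cerrado plots would instead fit a single CN minimizing the squared misfit between observed and predicted Q over all events.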
Online Sensor Calibration Monitoring Uncertainty Estimation
Hines, J. Wesley; Rasmussen, Brandon
2005-09-15
Empirical modeling techniques have been applied to online process monitoring to detect equipment and instrumentation degradations. However, few applications provide prediction uncertainty estimates, which can provide a measure of confidence in decisions. This paper presents the development of analytical prediction interval estimation methods for three common nonlinear empirical modeling strategies: artificial neural networks, neural network partial least squares, and local polynomial regression. The techniques are applied to nuclear power plant operational data for sensor calibration monitoring, and the prediction intervals are verified via bootstrap simulation studies.
Calibration Under Uncertainty.
Swiler, Laura Painton; Trucano, Timothy Guy
2005-03-01
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
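The deterministic formulation the report critiques fits in a few lines. This is a hedged sketch only: the linear model, the candidate grid, and the "experimental" values are invented for illustration, and a real calibration would use a proper optimizer rather than a grid.

```python
import numpy as np

def calibrate(model, candidates, x, y_obs):
    """Deterministic calibration: return the candidate parameter minimizing
    the sum of squared differences between model output and observed data."""
    sse = lambda p: float(np.sum((model(x, p) - y_obs) ** 2))
    return min(candidates, key=sse)

# Illustrative use: recover the slope of an assumed linear model y = p*x.
x = np.array([0.0, 1.0, 2.0, 3.0])
y_obs = 2.0 * x + np.array([0.05, -0.03, 0.02, -0.04])  # data with measurement error
best = calibrate(lambda x, p: p * x, [1.8, 1.9, 2.0, 2.1, 2.2], x, y_obs)
```

Note that this treats the model as exact: all misfit is attributed to the parameter, which is precisely the limitation the CUU approach addresses.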
Polarimetric Palsar Calibration
NASA Astrophysics Data System (ADS)
Touzi, R.; Shimada, M.
2008-11-01
Polarimetric PALSAR system parameters are assessed using data sets collected over various calibration sites. The data collected over the Amazonian forest permit validation of the zero Faraday rotation hypothesis near the equator. The analysis of the Amazonian forest data and the response of the corner reflectors deployed during the PALSAR acquisitions lead to the conclusion that the antenna is highly isolated (better than -35 dB). These results are confirmed using data collected over the Sweden and Ottawa calibration sites. The 5-m trihedrals deployed at the Sweden calibration site by the Chalmers University of Technology permit accurate measurement of antenna parameters, and detection of 2-3 degrees of Faraday rotation during day acquisitions, whereas no Faraday rotation was noted during night acquisitions. Small Faraday rotation angles (2-3 degrees) have been measured using acquisitions over the DLR Oberpfaffenhofen and the Ottawa calibration sites. The presence of small but still significant Faraday rotation (2-3 degrees) induces a CR return at the cross-polarizations HV and VH that should not be interpreted as the actual antenna cross-talk. The PALSAR antenna is highly isolated (better than -35 dB), and diagonal antenna distortion matrices (with zero cross-talk terms) can be used for accurate calibration of PALSAR polarimetric data.
NASA Astrophysics Data System (ADS)
di Cesare, M. A.; Hammersley, P. L.; Rodriguez Espinosa, J. M.
2006-06-01
We are currently developing the calibration programme for GTC using techniques similar to the ones used for space telescope calibration (Hammersley et al. 1998, A&AS, 128, 207; Cohen et al. 1999, AJ, 117, 1864). We are planning to produce a catalogue of calibration stars which are suitable for a 10-m telescope. These sources will be non-variable, non-binary, and, if they are to be used in the infrared, free of infrared excesses. The GTC science instruments require photometric calibration between 0.35 and 2.5 microns. The instruments are: OSIRIS (Optical System for Imaging low Resolution Integrated Spectroscopy), ELMER and EMIR (Espectrógrafo Multiobjeto Infrarrojo), and the Acquisition and Guiding boxes (Di Césare, Hammersley, & Rodriguez Espinosa 2005, RevMexAA Ser. Conf., 24, 231). The catalogue will consist of 30 star fields distributed across the Northern Hemisphere. We will use fields containing sources over the range 12 to 22 magnitudes, spanning a wide range of spectral types (A to M) for the visible and near infrared. In the poster we show the method used for selecting these fields and present the analysis of the data on the first calibration fields observed.
Curved PVDF airborne transducer.
Wang, H; Toda, M
1999-01-01
In airborne ultrasonic ranging measurements, a partially cylindrical (curved) PVDF transducer can effectively couple ultrasound into the air and generate strong sound pressure. Because of its geometrical features, the ultrasound beam angles of a curved PVDF transducer can be unsymmetrical (i.e., broad horizontally and narrow vertically). This feature is desired in some applications. In this work, a curved PVDF air transducer is investigated both theoretically and experimentally. Two resonances were observed in this transducer: the length extensional mode and the flexural bending mode. Surface vibration profiles of these two modes were measured by a laser vibrometer. It was found from the experiment that the surface vibration was not uniform along the curvature direction for either vibration mode. Theoretical calculations based on a model developed in this work confirmed the experimental results. Two displacement peaks were found in the piezoelectrically active direction of the PVDF film for the length extensional mode; three peaks were found for the flexural bending mode. The observed peak positions were in good agreement with the calculation results. Transient surface displacement measurements revealed that the vibration peaks were in phase for the length extensional mode and out of phase for the flexural bending mode. Therefore, the length extensional mode can generate a stronger ultrasound wave than the flexural bending mode. The resonance frequencies and vibration amplitudes of the two modes depend strongly on the structure parameters as well as the material properties. For transducer design, the theoretical model developed in this work can be used to optimize the ultrasound performance.
Magnetism in curved geometries
Streubel, Robert; Fischer, Peter; Kronast, Florian; Kravchuk, Volodymyr P.; Sheka, Denis D.; Gaididei, Yuri; Schmidt, Oliver G.; Makarov, Denys
2016-08-17
Extending planar two-dimensional structures into the three-dimensional space has become a general trend in multiple disciplines, including electronics, photonics, plasmonics and magnetics. This approach provides means to modify conventional or to launch novel functionalities by tailoring the geometry of an object, e.g. its local curvature. In a generic electronic system, curvature results in the appearance of scalar and vector geometric potentials inducing anisotropic and chiral effects. In the specific case of magnetism, even in the simplest case of a curved anisotropic Heisenberg magnet, the curvilinear geometry manifests two exchange-driven interactions, namely effective anisotropy and antisymmetric exchange, i.e. Dzyaloshinskii–Moriya-like interaction. As a consequence, a family of novel curvature-driven effects emerges, which includes magnetochiral effects and topologically induced magnetization patterning, resulting in theoretically predicted unlimited domain wall velocities, chirality symmetry breaking and Cherenkov-like effects for magnons. The broad range of altered physical properties makes these curved architectures appealing in view of fundamental research on e.g. skyrmionic systems, magnonic crystals or exotic spin configurations. In addition to these rich physics, the application potential of three-dimensionally shaped objects is currently being explored as magnetic field sensorics for magnetofluidic applications, spin-wave filters, advanced magneto-encephalography devices for diagnosis of epilepsy or for energy-efficient racetrack memory devices. Finally, these recent developments ranging from theoretical predictions over fabrication of three-dimensionally curved magnetic thin films, hollow cylinders or wires, to their characterization using integral means as well as the development of advanced tomography approaches are in the focus of this review.
NASA Astrophysics Data System (ADS)
Kalnajs, Agris J.
One can obtain a fairly good understanding of the relation between axially symmetric mass distributions and the rotation curves they produce without resorting to calculations. However, it does require a break with tradition. The first step consists of replacing quantities such as surface density, volume density, and circular velocity with the mass in a ring, the mass in a spherical shell, and the square of the circular velocity, or more precisely with 2 pi G r mu(r), 4 pi G r^2 rho(r), and Vc^2(r). These three quantities all have the same dimensions and are related to each other by scale-free linear operators. The second step consists of introducing ln(r) as the radial coordinate. On the log scale the scale-free operators become the more familiar convolution operations. Convolutions are easily handled by Fourier techniques, and a surface density can be converted into a rotation curve or volume density in a small fraction of a second. A simple plot of 2 pi G r mu(r) as a function of ln(r) reveals the relative contributions of different radii to Vc^2(r). Such a plot also constitutes a sanity test for the fitting of various laws to photometric data: there are numerous examples in the literature of fits that are excellent in the tails, where data are lacking, but poor around the maximum of 2 pi G r mu(r). I will discuss some exact relations between the above three quantities as well as some empirical observations, such as the near equality of the maxima of the 2 pi G r mu(r) and Vc^2(r) curves for flat mass distributions.
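As a minimal numerical illustration of the quantity the abstract emphasizes (a sketch, not the author's own code): for a hypothetical exponential disk mu(r) = mu0*exp(-r/h), the curve 2 pi G r mu(r) evaluated on a uniform ln(r) grid peaks at the disk scale length h.

```python
import numpy as np

# Sketch: evaluate 2*pi*G*r*mu(r) on a logarithmic radial grid for a
# hypothetical exponential disk, mu(r) = mu0*exp(-r/h), and locate the
# radius of its maximum. G, mu0 and h are illustrative values in
# arbitrary consistent units.
G, mu0, h = 1.0, 1.0, 1.0

ln_r = np.linspace(-4.0, 4.0, 2001)   # uniform grid in ln(r)
r = np.exp(ln_r)
ring_mass = 2.0 * np.pi * G * r * mu0 * np.exp(-r / h)

# For an exponential disk, 2*pi*G*r*mu(r) peaks at r = h, since
# d/dr [r*exp(-r/h)] = 0 gives r = h.
r_peak = r[np.argmax(ring_mass)]
print(round(r_peak, 2))  # close to h = 1.0
```

Plotting `ring_mass` against `ln_r` reproduces the kind of diagnostic plot the abstract recommends as a sanity test for photometric fits.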
NASA Astrophysics Data System (ADS)
Oja, T.; Nikolenko, T.; Türk, K.; Ellmann, A.; Jürgenson, H.
2010-05-01
Rigorous calibration of relative spring gravimeters is always needed to obtain reliable results from terrestrial gravimetric surveys. This study was based on data from relative gravimeters, observed repeatedly since 2001 on several specially designated calibration lines in Estonia. Two types of gravimeters - LaCoste&Romberg (LCR) G-type (metal spring) and Scintrex CG5 (quartz spring) systems - were investigated. First, the manufacturer's calibration function was used to convert the field gravity readings into centimetre-gram-second system (CGS) units. After reduction of the converted readings (corrected for tides, atmosphere, observation elevation, etc.), both the linear and nonlinear correction components of the calibration function were parameterized and estimated through a linearised least squares (LS) adjustment. The LS estimates were tested statistically, and only the significant corrections to the calibration function were included in the subsequent data conversion. As a result of the study, remarkably improved uncertainty estimates for the results collected with the LCR-G (No 4, 113, 115, 191) and CG5 (No 36, 10092) gravimeters in Estonia are presented.
Complementary Curves of Descent
2012-11-16
[Standard report documentation page; performing organization: US Naval Academy, Physics Department, Annapolis, MD 21402-1363.]
1975-07-01
…agree to say four places by successive choices of finer subdivisions of the grid. The accuracy obtained by this method is not quite unexpected.
Calibration of Contactless Pulse Oximetry
Bartula, Marek; Bresch, Erik; Rocque, Mukul; Meftah, Mohammed; Kirenko, Ihor
2017-01-01
BACKGROUND: Contactless, camera-based photoplethysmography (PPG) interrogates shallower skin layers than conventional contact probes, either transmissive or reflective. This raises questions on the calibratability of camera-based pulse oximetry. METHODS: We made video recordings of the foreheads of 41 healthy adults at 660 and 840 nm, and remote PPG signals were extracted. Subjects were in normoxic, hypoxic, and low temperature conditions. Ratio-of-ratios were compared to reference Spo2 from 4 contact probes. RESULTS: A calibration curve based on artifact-free data was determined for a population of 26 individuals. For an Spo2 range of approximately 83% to 100% and discarding short-term errors, a root mean square error of 1.15% was found with an upper 99% one-sided confidence limit of 1.65%. Under normoxic conditions, a decrease in ambient temperature from 23 to 7°C resulted in a calibration error of 0.1% (±1.3%, 99% confidence interval) based on measurements for 3 subjects. PPG signal strengths varied strongly among individuals from about 0.9 × 10−3 to 4.6 × 10−3 for the infrared wavelength. CONCLUSIONS: For healthy adults, the results present strong evidence that camera-based contactless pulse oximetry is fundamentally feasible because long-term (eg, 10 minutes) error stemming from variation among individuals expressed as A*rms is significantly lower (<1.65%) than that required by the International Organization for Standardization standard (<4%) with the notion that short-term errors should be added. A first illustration of such errors has been provided with A**rms = 2.54% for 40 individuals, including 6 with dark skin. Low signal strength and subject motion present critical challenges that will have to be addressed to make camera-based pulse oximetry practically feasible. PMID:27258081
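The ratio-of-ratios quantity that the abstract compares against reference Spo2 can be sketched as follows. The linear mapping and its coefficients a and b are illustrative placeholders, not the paper's fitted calibration curve.

```python
# Sketch of the ratio-of-ratios principle behind camera-based pulse
# oximetry calibration. Coefficients a, b below are illustrative
# assumptions, not the study's fitted values.
def ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    """Compute R = (AC/DC)_red / (AC/DC)_infrared from PPG components."""
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def spo2_from_r(r, a=110.0, b=25.0):
    """Map R to SpO2 via an illustrative linear calibration SpO2 = a - b*R."""
    return a - b * r

r = ratio_of_ratios(ac_red=0.02, dc_red=1.0, ac_ir=0.04, dc_ir=1.0)
print(spo2_from_r(r))  # R = 0.5 -> 97.5
```

In practice the calibration coefficients must be fitted against a reference, which is exactly what the study does with contact-probe Spo2 data.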
A force calibration standard for magnetic tweezers
NASA Astrophysics Data System (ADS)
Yu, Zhongbo; Dulin, David; Cnossen, Jelmer; Köber, Mariana; van Oene, Maarten M.; Ordu, Orkide; Berghuis, Bojk A.; Hensgens, Toivo; Lipfert, Jan; Dekker, Nynke H.
2014-12-01
To study the behavior of biological macromolecules and enzymatic reactions under force, advances in single-molecule force spectroscopy have proven instrumental. Magnetic tweezers form one of the most powerful of these techniques, due to their overall simplicity, non-invasive character, potential for high throughput measurements, and large force range. Drawbacks of magnetic tweezers, however, are that accurate determination of the applied forces can be challenging for short biomolecules at high forces and very time-consuming for long tethers at low forces below ~1 piconewton. Here, we address these drawbacks by presenting a calibration standard for magnetic tweezers consisting of measured forces for four magnet configurations. Each such configuration is calibrated for two commonly employed commercially available magnetic microspheres. We calculate forces in both time and spectral domains by analyzing bead fluctuations. The resulting calibration curves, validated through the use of different algorithms that yield close agreement in their determination of the applied forces, span a range from 100 piconewtons down to tens of femtonewtons. These generalized force calibrations will serve as a convenient resource for magnetic tweezers users and diminish variations between different experimental configurations or laboratories.
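The bead-fluctuation analysis mentioned above can be sketched in its simplest equipartition-style, time-domain form. The tether length, temperature, and fluctuation amplitude below are synthetic assumptions, not the paper's calibration data.

```python
import numpy as np

# Sketch of the standard fluctuation-based force estimate in magnetic
# tweezers: for a bead on a tether of length L, the variance of the
# transverse bead position gives F = kB*T*L / <dx^2>. The numbers below
# are synthetic, not the paper's measured data.
kB = 1.380649e-23   # Boltzmann constant, J/K

def force_from_fluctuations(x, tether_length, temperature=298.0):
    """Estimate the stretching force (N) from transverse bead positions x (m)."""
    var = np.var(x)
    return kB * temperature * tether_length / var

# Synthetic example: 100 nm RMS transverse fluctuations on a 1 um tether.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 100e-9, size=200_000)
f = force_from_fluctuations(x, tether_length=1e-6)
print(f)  # ~ kB*T*1e-6 / 1e-14 ≈ 0.4 pN
```

The spectral-domain analysis the abstract also mentions fits the power spectrum of the same fluctuations, which additionally corrects for camera blurring and aliasing.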
Markward, Nathan J; Fisher, William P
2004-01-01
This project demonstrates how to calibrate different samples and scales of genomic information to a common scale of genomic measurement. 1,113 persons were genotyped at the 13 Combined DNA Index System (CODIS) short tandem repeat (STR) marker loci used by the Federal Bureau of Investigation (FBI) for human identity testing. A measurement model of the form ln[P_nik / (1 - P_nik)] = B_n - D_i - L_k is used to construct person measures and locus calibrations from information contained in the CODIS database. Winsteps (Wright and Linacre, 2003) is employed to maximize initial estimates and to investigate the necessity and sufficiency of different rating classification schema. Model fit is satisfactory in all analyses. Study outcomes are found in Tables 1-6. Additive, divisible, and interchangeable measures and calibrations can be created from raw genomic information that transcend the sample- and scale-dependencies associated with racial and ethnic descent, chromosomal location, and locus-specific allele expansion structures.
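The quoted measurement model can be evaluated directly once its parameters are estimated: solving the logit equation for P gives the inverse-logit form below. The person measure, locus calibration, and threshold values used here are illustrative, not estimates from the CODIS data.

```python
import math

# Sketch of the measurement model quoted above,
# ln[P_nik / (1 - P_nik)] = B_n - D_i - L_k:
# given a person measure B_n, a locus calibration D_i, and a rating-category
# threshold L_k (all in logits), the model probability is the inverse logit.
def rasch_probability(b_n, d_i, l_k):
    """P_nik = 1 / (1 + exp(-(B_n - D_i - L_k)))."""
    return 1.0 / (1.0 + math.exp(-(b_n - d_i - l_k)))

# When the person measure exactly matches the combined difficulty, P = 0.5.
print(rasch_probability(b_n=1.0, d_i=0.6, l_k=0.4))  # 0.5
```

Because the model is additive in logits, measures and calibrations estimated from different samples or marker subsets live on the same interval scale, which is the "common scale" the abstract refers to.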
Calibration Systems Final Report
Myers, Tanya L.; Broocks, Bryan T.; Phillips, Mark C.
2006-02-01
The Calibration Systems project at Pacific Northwest National Laboratory (PNNL) is aimed at developing and demonstrating compact Quantum Cascade (QC) laser-based calibration systems for infrared imaging systems. These on-board systems will improve the calibration technology for passive sensors, which enable stand-off detection of the proliferation or use of weapons of mass destruction, by replacing on-board blackbodies with QC laser-based systems. This alternative technology can minimize the impact on instrument size and weight while improving the quality of instruments for a variety of missions. Replacing flight blackbodies is made feasible by the high output, stability, and repeatability of the QC laser spectral radiance.
LeBlanc, R.
1987-08-01
The TA489A Calibrator, designed to operate in the MA164 Digital Data Acquisition System, is used to calibrate up to 128 analog-to-digital recording channels. The TA489A calibrates using a dc Voltage Source or any of several special calibration modes. Calibration schemes are stored in the TA489A memory and are initiated locally or remotely through a Command Link.
Energy calibration of the fly's eye detector
NASA Technical Reports Server (NTRS)
Baltrusaitis, R. M.; Cassiday, G. L.; Cooper, R.; Elbert, J. W.; Gerhardy, P. R.; Ko, S.; Loh, E. C.; Mizumoto, Y.; Sokolsky, P.; Steck, D.
1985-01-01
The methods used to calibrate the Fly's Eye detector to evaluate the energy of extensive air showers (EAS) are discussed. The energy of EAS as seen by the Fly's Eye detector is obtained from track length integrals of observed shower development curves. The energy of the parent cosmic ray primary is estimated by applying corrections to account for undetected energy in the muon, neutrino and hadronic channels. Absolute values for E depend upon the measurement of shower sizes N_e(x). The following items are necessary to convert apparent optical brightness into intrinsic optical brightness: (1) an assessment of the factors responsible for light production by the relativistic electrons in an EAS and the transmission of that light through the atmosphere, (2) calibration of the optical detection system, and (3) a knowledge of the trajectory of the shower.
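The track length integral idea can be sketched as follows. The Gaussian shower profile and the quoted constants (critical energy and radiation length of air) are illustrative assumptions, not the Fly's Eye calibration itself.

```python
import numpy as np

# Sketch of a track-length-integral energy estimate: the electromagnetic
# energy is proportional to the integral of the shower size profile,
# E_em ≈ (eps_c / X0) * ∫ N_e(X) dX, using commonly quoted approximate
# values for the critical energy and radiation length in air. The Gaussian
# profile below is a synthetic stand-in for a measured N_e(X).
eps_c = 2.2e6    # critical energy in air, eV (approximate)
X0 = 37.1        # radiation length in air, g/cm^2 (approximate)

X = np.linspace(0.0, 1500.0, 600)                     # slant depth, g/cm^2
N_e = 1e9 * np.exp(-0.5 * ((X - 700.0) / 180.0)**2)   # synthetic profile

# Trapezoidal integration of the profile over slant depth.
integral = np.sum(0.5 * (N_e[1:] + N_e[:-1]) * np.diff(X))
E_em = (eps_c / X0) * integral                        # eV
print(f"{E_em:.2e} eV")
```

The corrections the abstract mentions (muon, neutrino and hadronic channels) are then applied on top of this electromagnetic estimate to recover the primary energy.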
Calibrated nanoscale dopant profiling using a scanning microwave microscope
Huber, H. P.; Hochleitner, M.; Hinterdorfer, P.; Humer, I.; Smoliner, J.; Fenner, M.; Moertelmaier, M.; Rankl, C.; Tanbakuchi, H.; Kienberger, F.; Imtiaz, A.; Wallis, T. M.; Kabos, P.; Kopanski, J. J.
2012-01-01
The scanning microwave microscope is used for calibrated capacitance spectroscopy and spatially resolved dopant profiling measurements. It consists of an atomic force microscope combined with a vector network analyzer operating between 1 and 20 GHz. On silicon semiconductor calibration samples with doping concentrations ranging from 10^15 to 10^20 atoms/cm^3, calibrated capacitance-voltage curves as well as derivative dC/dV curves were acquired. The change of the capacitance and of the dC/dV signal is directly related to the dopant concentration, allowing for quantitative dopant profiling. The method was tested on various samples with known dopant concentration, and the resolution of dopant profiling was determined to be 20%, while the absolute accuracy is within an order of magnitude. Using a modeling approach, the dopant-profiling calibration curves were analyzed with respect to varying tip diameter and oxide thickness, allowing for improvements in the calibration accuracy. Bipolar samples were investigated, and nanoscale defect structures and p-n junction interfaces were imaged, showing potential applications for the study of semiconductor device performance and failure analysis.
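The use of such a calibration curve, turning a measured dC/dV signal into a doping estimate via reference samples of known concentration, can be sketched as follows. The calibration points are synthetic placeholders, not the paper's data.

```python
import numpy as np

# Sketch of applying a dopant-profiling calibration curve: given dC/dV
# measured on reference samples of known doping, estimate the doping of an
# unknown region by interpolating in log-log space. All numbers below are
# synthetic placeholders, not the paper's calibration data.
cal_doping = np.array([1e15, 1e16, 1e17, 1e18, 1e19, 1e20])  # atoms/cm^3
cal_dcdv = np.array([8.0, 5.0, 3.0, 1.8, 1.0, 0.6])          # arbitrary units

def doping_from_dcdv(dcdv):
    """Interpolate log10(doping) against log10(dC/dV)."""
    # np.interp requires increasing x, so flip the decreasing calibration curve.
    return 10 ** np.interp(np.log10(dcdv),
                           np.log10(cal_dcdv[::-1]),
                           np.log10(cal_doping[::-1]))

d = doping_from_dcdv(2.0)
print(f"{d:.2e}")  # between 1e17 and 1e18 atoms/cm^3
```

The order-of-magnitude absolute accuracy quoted in the abstract reflects how strongly such a curve depends on tip diameter and oxide thickness, which is why the modeling of those quantities matters.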
Objective calibration of regional climate models
NASA Astrophysics Data System (ADS)
Bellprat, O.; Kotlarski, S.; Lüthi, D.; SchäR, C.
2012-12-01
Climate models are subject to high parametric uncertainty induced by poorly confined parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often hamper model inter-comparisons and hinder the implementation of new model parameterizations. Methods that would allow model parameters to be calibrated systematically are unfortunately often not applicable to state-of-the-art climate models, due to the computational constraints imposed by the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation and total cloud cover, with an uncertainty of similar magnitude to the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only a limited number of 20-50 simulations are needed to estimate optimal parameter settings. Parameter interactions are small, which allows the number of simulations to be reduced further. In comparison to an ensemble of the same model which has undergone expert tuning, the calibration yields similar optimal model configurations, but with an additional reduction of the model error. The performance range captured is much wider than that sampled with the expert-tuned ensemble and the presented
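The quadratic-metamodel strategy described above can be sketched generically: fit a second-order polynomial surrogate to a small set of expensive simulations, then optimize on the cheap surrogate. The two-parameter "score" function below stands in for full climate-model runs and is purely illustrative.

```python
import numpy as np

# Sketch of the quadratic-metamodel idea: fit a second-order polynomial
# surrogate to a performance score as a function of two uncertain
# parameters, then locate the optimum on the cheap surrogate. The score
# function below is a synthetic stand-in for expensive climate simulations.
rng = np.random.default_rng(1)

def expensive_score(p):                # pretend each call is a full RCM run
    return (p[0] - 0.3)**2 + 2.0 * (p[1] + 0.1)**2

# A small design of experiments: ~20 simulations, as in the abstract.
P = rng.uniform(-1.0, 1.0, size=(20, 2))
y = np.array([expensive_score(p) for p in P])

# Quadratic design matrix: 1, p1, p2, p1^2, p2^2, p1*p2
A = np.column_stack([np.ones(len(P)), P[:, 0], P[:, 1],
                     P[:, 0]**2, P[:, 1]**2, P[:, 0] * P[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Minimize the fitted quadratic on a dense grid (the surrogate is cheap).
g = np.linspace(-1.0, 1.0, 201)
G1, G2 = np.meshgrid(g, g)
S = (coef[0] + coef[1] * G1 + coef[2] * G2 +
     coef[3] * G1**2 + coef[4] * G2**2 + coef[5] * G1 * G2)
i = np.unravel_index(np.argmin(S), S.shape)
print(G1[i], G2[i])  # near the true optimum (0.3, -0.1)
```

Because the surrogate needs only a handful of coefficients per objective, a few dozen simulations suffice, which is the computational saving the abstract emphasizes.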