NASA Astrophysics Data System (ADS)
Jiang, Shyh-Biau; Yeh, Tse-Liang; Chen, Li-Wu; Liu, Jann-Yenq; Yu, Ming-Hsuan; Huang, Yu-Qin; Chiang, Chen-Kiang; Chou, Chung-Jen
2018-05-01
In this study, we construct a photomultiplier calibration system. This calibration system helps scientists measure and establish the characteristic curve of photon count versus light intensity. The system uses an innovative 10-fold optical attenuator to enable an optical power meter to calibrate photomultiplier tubes whose resolution is much greater than that of the optical power meter. A simulation is first conducted to validate the feasibility of the system, and then the system construction, including optical design, circuit design, and software algorithm, is realized. The simulation generally agrees with measurement data from the constructed system, which are further used to establish the characteristic curve of photon count versus light intensity.
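As a rough illustration of the attenuator idea, the Python sketch below uses entirely assumed numbers (source power, attenuator transmission, detector sensitivity, and dark counts are all hypothetical): the ten-fold attenuation ratio is calibrated once at intensities the power meter can read, then cascaded to infer intensities below the meter's floor while photomultiplier counts are recorded.

```python
import numpy as np

rng = np.random.default_rng(0)

# All numbers below are assumptions for illustration only.
I0 = 1e-6                # W, source intensity the power meter can read
r_nominal = 0.1          # nominal ten-fold attenuator transmission

# Calibrate the attenuator ratio once, at meter-readable intensities
# (1% relative read noise assumed on the power meter).
meas_hi = I0 * (1 + rng.normal(0, 0.01))
meas_lo = I0 * r_nominal * (1 + rng.normal(0, 0.01))
r_cal = meas_lo / meas_hi

def pmt_counts(intensity, sens=1e12, dark=50.0):
    """Toy PMT: dark counts plus a linear response (counts per W)."""
    return dark + sens * intensity

# Cascade n attenuations to reach photon-counting intensities that the
# power meter itself could not resolve, and record counts at each step.
levels = np.arange(8)
intensities = I0 * r_cal ** levels       # inferred, not directly measured
counts = pmt_counts(intensities)

# Characteristic-curve check: dark-subtracted counts versus inferred
# intensity should have log-log slope 1 in the linear regime.
slope = np.polyfit(np.log10(intensities), np.log10(counts - 50.0), 1)[0]
```

In the linear regime the fitted log-log slope is 1, which is the sanity check the last line performs.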
Peng, Rong-fei; He, Jia-yao; Zhang, Zhan-xia
2002-02-01
The performances of a self-constructed visible AOTF spectrophotometer are presented. The wavelength calibration of AOTF1 and AOTF2 is performed with a didymium glass using a fourth-order polynomial curve-fitting method. The absolute error of the peak position is usually less than 0.7 nm. Compared with the commercial UV1100 spectrophotometer, the scanning speed of the AOTF spectrophotometer is much faster, but the resolution depends on the quality of the AOTF. The absorption spectra and the calibration curves of copper sulfate and alizarin red obtained with AOTF1 (Institute for Silicate, Shanghai, China) and AOTF2 (Brimrose, USA), respectively, are presented. The corresponding correlation coefficients of the calibration curves are 0.9991 and 0.9990, respectively. Preliminary results show that the self-constructed AOTF spectrophotometer is feasible.
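A fourth-order polynomial wavelength calibration of this kind can be sketched as below; the frequency-wavelength peak table is synthetic (a quartic relationship plus 0.1 nm of read noise), since the actual didymium peak list is not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical peak table: AOTF drive frequency (MHz) paired with the
# reference wavelength (nm) of each didymium-glass peak. The underlying
# relationship here is a synthetic quartic plus 0.1 nm of read noise.
freq = np.array([60.0, 70.0, 80.0, 90.0, 100.0, 110.0, 120.0])
true_nm = (700.0 - 4.0 * (freq - 60.0) + 0.015 * (freq - 60.0) ** 2
           - 1e-4 * (freq - 60.0) ** 3)
peaks_nm = true_nm + rng.normal(0.0, 0.1, freq.size)

# Fourth-order polynomial wavelength calibration, as in the abstract.
coeffs = np.polyfit(freq, peaks_nm, 4)
resid = np.polyval(coeffs, peaks_nm * 0 + freq) - peaks_nm
max_err = np.max(np.abs(resid))
```

With noise at the assumed 0.1 nm level, the residual peak-position error stays well under the 0.7 nm figure quoted in the abstract.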
He, Jia-yao; Peng, Rong-fei; Zhang, Zhan-xia
2002-02-01
A self-constructed visible spectrophotometer using an acousto-optic tunable filter(AOTF) as a dispersing element is described. Two different AOTFs (one from The Institute for Silicate (Shanghai, China) and the other from Brimrose(USA)) are tested. The software written with visual C++ and operated on a Window98 platform is an applied program with dual database and multi-windows. Four independent windows, namely scanning, quantitative, calibration and result are incorporated. The Fourier self-deconvolution algorithm is also incorporated to improve the spectral resolution. The wavelengths are calibrated using the polynomial curve fitting method. The spectra and calibration curves of soluble aniline blue and phenol red are presented to show the feasibility of the constructed spectrophotometer.
Unthank, Michael D.; Newson, Jeremy K.; Williamson, Tanja N.; Nelson, Hugh L.
2012-01-01
Flow- and load-duration curves were constructed from the model outputs of the U.S. Geological Survey's Water Availability Tool for Environmental Resources (WATER) application for streams in Kentucky. The WATER application was designed to access multiple geospatial datasets to generate more than 60 years of statistically based streamflow data for Kentucky. The WATER application enables a user to graphically select a site on a stream and generate an estimated hydrograph and flow-duration curve for the watershed upstream of that point. The flow-duration curves are constructed by calculating the exceedance probability of the modeled daily streamflows. User-defined water-quality criteria and (or) sampling results can be loaded into the WATER application to construct load-duration curves that are based on the modeled streamflow results. Estimates of flow and streamflow statistics were derived from TOPographically Based Hydrological MODEL (TOPMODEL) simulations in the WATER application. A modified TOPMODEL code, SDP-TOPMODEL (Sinkhole Drainage Process-TOPMODEL), was used to simulate daily mean discharges over the period of record for 5 karst and 5 non-karst watersheds in Kentucky in order to verify the calibrated model. A statistical evaluation of the model's verification simulations shows that calibration criteria, established by previous WATER application reports, were met, thus ensuring the model's ability to provide acceptably accurate estimates of discharge at gaged and ungaged sites throughout Kentucky. The flow-duration intervals are expressed as a percentage, with zero corresponding to the highest stream discharge in the streamflow record. Load-duration curves are constructed by applying the loading equation (Load = Flow × Water-quality criterion) at each flow interval.
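The flow-duration and load-duration constructions described above reduce to a sort and an exceedance-probability ranking; the sketch below uses synthetic lognormal daily flows in place of WATER/TOPMODEL output, and a hypothetical water-quality criterion (unit conversions omitted).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily mean streamflow record, a stand-in for WATER output.
flow = rng.lognormal(mean=3.0, sigma=1.0, size=365 * 10)

# Flow-duration curve: rank flows from highest to lowest; 0% exceedance
# corresponds to the highest discharge in the record.
flow_sorted = np.sort(flow)[::-1]
n = flow_sorted.size
exceed_pct = 100.0 * np.arange(1, n + 1) / (n + 1)

# Load-duration curve: Load = Flow * water-quality criterion at each
# flow interval (criterion value hypothetical; unit conversion omitted).
criterion = 0.5
load_curve = flow_sorted * criterion
```

Plotting `load_curve` against `exceed_pct` gives the load-duration curve; observed loads falling above it at a given exceedance interval indicate criterion exceedances in that flow regime.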
Aero-Thermal Calibration of the NASA Glenn Icing Research Tunnel (2012 Tests)
NASA Technical Reports Server (NTRS)
Pastor-Barsi, Christine; Arrington, E. Allen
2013-01-01
A full aero-thermal calibration of the NASA Glenn Icing Research Tunnel (IRT) was completed in 2012 following the major modifications to the facility that included replacement of the refrigeration plant and heat exchanger. The calibration test provided data used to fully document the aero-thermal flow quality in the IRT test section and to construct calibration curves for the operation of the IRT.
Pajic, J; Rakic, B; Jovicic, D; Milovanovic, A
2014-10-01
Biological dosimetry using chromosome damage biomarkers is a valuable dose assessment method in cases of radiation overexposure with or without physical dosimetry data. In order to estimate dose by biodosimetry, any biological dosimetry service has to have its own dose-response calibration curve. This paper presents the results obtained after irradiation of blood samples from fourteen healthy male and female volunteers in order to establish biodosimetry in Serbia and produce dose-response calibration curves for dicentrics and micronuclei. Taking into account pooled data from all the donors, the resultant fitted curve for dicentrics is Ydic = 0.0009 (±0.0003) + 0.0421 (±0.0042)×D + 0.0602 (±0.0022)×D²; and for micronuclei, Ymn = 0.0104 (±0.0015) + 0.0824 (±0.0050)×D + 0.0189 (±0.0017)×D². Following establishment of the dose-response curves, a validation experiment was carried out with four blood samples. Applied and estimated doses were in good agreement. On this basis, the results reported here give us confidence to apply both calibration curves for future biological dosimetry requirements in Serbia.
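The fitted dicentric curve quoted above is linear-quadratic in dose; a minimal sketch of evaluating it, and of refitting the same model to (dose, yield) pairs by least squares, is given below (the coefficients come from the abstract; the refit simply recovers them from noiseless points).

```python
import numpy as np

# Coefficients of the pooled dicentric curve quoted in the abstract.
c0, c1, c2 = 0.0009, 0.0421, 0.0602

def dicentric_yield(dose_gy):
    """Expected dicentrics per cell: Y = c0 + c1*D + c2*D**2."""
    return c0 + c1 * dose_gy + c2 * dose_gy ** 2

# Refitting sketch: a laboratory constructing its own curve would fit
# the same linear-quadratic model to measured (dose, yield) pairs.
doses = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])
yields = dicentric_yield(doses)                  # noiseless here, so the
c2_fit, c1_fit, c0_fit = np.polyfit(doses, yields, 2)  # fit recovers c0..c2
```

In practice the yields carry Poisson scatter and the fit is usually done by maximum likelihood rather than ordinary least squares; the polynomial fit here only illustrates the model form.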
Aero-Thermal Calibration of the NASA Glenn Icing Research Tunnel (2004 and 2005 Tests)
NASA Technical Reports Server (NTRS)
Arrington, E. Allen; Pastor, Christine M.; Gonsalez, Jose C.; Curry, Monroe R., III
2010-01-01
A full aero-thermal calibration of the NASA Glenn Icing Research Tunnel was completed in 2004 following the replacement of the inlet guide vanes upstream of the tunnel drive system and improvement to the facility total temperature instrumentation. This calibration test provided data used to fully document the aero-thermal flow quality in the IRT test section and to construct calibration curves for the operation of the IRT. The 2004 test was also the first to use the 2-D RTD array, an improved total temperature calibration measurement platform.
Marine04 Marine radiocarbon age calibration, 26–0 ka BP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughen, K; Baillie, M; Bard, E
2004-11-01
New radiocarbon calibration curves, IntCal04 and Marine04, have been constructed and internationally ratified to replace the terrestrial and marine components of IntCal98. The new calibration datasets extend an additional 2000 years, from 0-26 ka cal BP (Before Present, 0 cal BP = AD 1950), and provide much higher resolution, greater precision and more detailed structure than IntCal98. For the Marine04 curve, dendrochronologically dated tree-ring samples, converted with a box-diffusion model to marine mixed-layer ages, cover the period from 0-10.5 ka cal BP. Beyond 10.5 ka cal BP, high-resolution marine data become available from foraminifera in varved sediments and U/Th-dated corals. The marine records are corrected with site-specific ¹⁴C reservoir age information to provide a single global marine mixed-layer calibration from 10.5-26.0 ka cal BP. A substantial enhancement relative to IntCal98 is the introduction of a random walk model, which takes into account the uncertainty in both the calendar age and the radiocarbon age to calculate the underlying calibration curve. The marine datasets and calibration curve for marine samples from the surface mixed layer (Marine04) are discussed here. The tree-ring datasets, sources of uncertainty, and regional offsets are presented in detail in a companion paper by Reimer et al.
Rastkhah, E; Zakeri, F; Ghoranneviss, M; Rajabpour, M R; Farshidpour, M R; Mianji, F; Bayat, M
2016-03-01
An in vitro study of the dose responses of human peripheral blood lymphocytes was conducted with the aim of creating calibrated dose-response curves for biodosimetry measuring up to 4 Gy (0.25-4 Gy) of gamma radiation. The cytokinesis-blocked micronucleus (CBMN) assay was employed to obtain the frequencies of micronuclei (MN) per binucleated cell in blood samples from 16 healthy donors (eight males and eight females) in two age ranges of 20-34 and 35-50 years. The data were used to construct the calibration curves for men and women in two age groups, separately. An increase in micronuclei yield with the dose in a linear-quadratic way was observed in all groups. To verify the applicability of the constructed calibration curve, MN yields were measured in peripheral blood lymphocytes of two real overexposed subjects and three irradiated samples with unknown dose, and the results were compared with dose values obtained from measuring dicentric chromosomes. The comparison of the results obtained by the two techniques indicated a good agreement between dose estimates. The average baseline frequency of MN for the 130 healthy non-exposed donors (77 men and 55 women, 20-60 years old divided into four age groups) ranged from 6 to 21 micronuclei per 1000 binucleated cells. Baseline MN frequencies were higher for women and for the older age group. The results presented in this study point out that the CBMN assay is a reliable, easier and valuable alternative method for biological dosimetry.
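Dose estimation from an observed micronucleus yield inverts the linear-quadratic curve, taking the positive root of the quadratic; the sketch below uses illustrative coefficients, not the study's fitted values.

```python
import math

# Hypothetical linear-quadratic MN curve, Y = a + b*D + c*D**2.
# Coefficients are invented for illustration; they are NOT the paper's.
a, b, c = 0.010, 0.08, 0.02

def dose_from_yield(y):
    """Solve c*D**2 + b*D + (a - y) = 0 for the positive root (Gy)."""
    disc = b * b - 4.0 * c * (a - y)
    return (-b + math.sqrt(disc)) / (2.0 * c)

# Round-trip check: the yield predicted at 2 Gy maps back to 2 Gy.
y2 = a + b * 2.0 + c * 2.0 ** 2
d = dose_from_yield(y2)
```

Uncertainty on the estimated dose is normally propagated from both the Poisson scatter of the observed yield and the covariance of the fitted coefficients; the root-finding step itself is just this quadratic formula.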
NASA Astrophysics Data System (ADS)
Zafiropoulos, Demetre; Facco, E.; Sarchiapone, Lucia
2016-09-01
In case of a radiation accident, it is well known that in the absence of physical dosimetry biological dosimetry based on cytogenetic methods is a unique tool to estimate individual absorbed dose. Moreover, even when physical dosimetry indicates an overexposure, scoring chromosome aberrations (dicentrics and rings) in human peripheral blood lymphocytes (PBLs) at metaphase is presently the most widely used method to confirm dose assessment. The analysis of dicentrics and rings in PBLs after Giemsa staining of metaphase cells is considered the most valid assay for radiation injury. This work shows that applying the fluorescence in situ hybridization (FISH) technique, using telomeric/centromeric peptide nucleic acid (PNA) probes in metaphase chromosomes for radiation dosimetry, could become a fast scoring, reliable and precise method for biological dosimetry after accidental radiation exposures. In both in vitro methods described above, lymphocyte stimulation is needed, and this limits the application in radiation emergency medicine where speed is considered to be a high priority. Using premature chromosome condensation (PCC), irradiated human PBLs (non-stimulated) were fused with mitotic CHO cells, and the yield of excess PCC fragments in Giemsa stained cells was scored. To score dicentrics and rings under PCC conditions, the necessary centromere and telomere detection of the chromosomes was obtained using FISH and specific PNA probes. Of course, a prerequisite for dose assessment in all cases is a dose-effect calibration curve. This work illustrates the various methods used; dose response calibration curves, with 95% confidence limits used to estimate dose uncertainties, have been constructed for conventional metaphase analysis and FISH. We also compare the dose-response curve constructed after scoring of dicentrics and rings using PCC combined with FISH and PNA probes. 
Also reported are dose-response curves showing scored dicentrics and rings per cell, combining PCC of lymphocytes and CHO cells with FISH using PNA probes 10 h and 24 h after irradiation, and, finally, calibration data for excess PCC fragments (Giemsa staining) to be used if human blood is available immediately after irradiation or within 24 h.
Debode, Frédéric; Marien, Aline; Janssen, Eric; Berben, Gilbert
2010-03-01
Five double-target multiplex plasmids to be used as calibrants for GMO quantification were constructed. They were composed of two modified targets associated in tandem in the same plasmid: (1) a part of the soybean lectin gene and (2) a part of the transgenic construction of the GTS40-3-2 event. Modifications were performed in such a way that each target could be amplified with the same primers as those for the original target from which they were derived but such that each was specifically detected with an appropriate probe. Sequence modifications were done to keep the parameters of the new target as similar as possible to those of its original sequence. The plasmids were designed to be used either in separate reactions or in multiplex reactions. Evidence is given that with each of the five different plasmids used in separate wells as a calibrant for a different copy number, a calibration curve can be built. When the targets were amplified together (in multiplex) and at different concentrations inside the same well, the calibration curves showed that there was a competition effect between the targets and this limits the range of copy numbers for calibration over a maximum of 2 orders of magnitude. Another possible application of multiplex plasmids is discussed.
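When each plasmid level supplies a known copy number in a separate well, the calibration curve is the standard qPCR curve of Cq versus log10(copies); the sketch below (hypothetical Cq values, not the paper's data) fits it and derives the amplification efficiency. This is standard qPCR practice rather than anything specific to these plasmids.

```python
import numpy as np

# Hypothetical dilution series: each plasmid level supplies one known
# copy number, and Cq is measured for the target in that well.
copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5])
cq = np.array([33.1, 29.8, 26.4, 23.1, 19.8])   # illustrative values

# Standard curve: Cq = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(copies), cq, 1)

# Amplification efficiency: slope of -3.32 corresponds to 100%.
efficiency = 10.0 ** (-1.0 / slope) - 1.0

def copies_from_cq(c):
    """Estimate target copy number in an unknown from its Cq."""
    return 10.0 ** ((c - intercept) / slope)
```

The competition effect reported for the multiplex case would show up as a loss of linearity of this curve beyond roughly two orders of magnitude of copy number.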
Influence of Individual Differences on the Calculation Method for FBG-Type Blood Pressure Sensors
Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun; Kobayashi, Yuka
2016-01-01
In this paper, we propose a blood pressure calculation and associated measurement method using a fiber Bragg grating (FBG) sensor. There are several points at which the pulse can be measured on the surface of the human body, and when an FBG sensor is located at any of these points, the pulse wave signal can be measured. The measured waveform is similar to the acceleration pulse wave. The pulse wave signal changes depending on several factors, including whether or not the individual is healthy and/or elderly. The measured pulse wave signal can be used to calculate the blood pressure using a calibration curve, which is constructed by a partial least squares (PLS) regression analysis using a reference blood pressure and the pulse wave signal. In this paper, we focus on the influence of individual differences on the blood pressure calculated from each calibration curve. In our study, the blood pressures calculated from the individual and overall calibration curves were compared, and our results show that the calculated blood pressure based on the overall calibration curve had a lower measurement accuracy than that based on an individual calibration curve. We also found that the influence of individual differences on the calculated blood pressure when using the FBG sensor method was very low. Therefore, the FBG sensor method that we developed for measuring the blood pressure was found to be suitable for use by many people.
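The calibration step pairs a reference blood pressure with pulse-wave signals through PLS regression. A minimal one-component PLS1 (NIPALS-style) on synthetic data is sketched below; the waveform matrix, pressure values, and noise levels are all invented for illustration and are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic pulse-wave "signals": 200 beats x 50 sample points, with a
# latent feature linearly related to systolic pressure (illustration).
n, p = 200, 50
latent = rng.normal(size=n)
X = np.outer(latent, rng.normal(size=p)) + 0.1 * rng.normal(size=(n, p))
y = 120.0 + 15.0 * latent + rng.normal(0.0, 1.0, size=n)

# One-component PLS1: weight vector from the X-y covariance, then a
# univariate regression of y on the resulting score vector t.
Xc, yc = X - X.mean(axis=0), y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w
b = (t @ yc) / (t @ t)
y_hat = y.mean() + b * t
rmse = np.sqrt(np.mean((y_hat - y) ** 2))
```

Real pulse-wave calibration would use several PLS components and cross-validation; one component suffices here because the synthetic data have a single latent factor.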
A dose-response curve for biodosimetry from a 6 MV electron linear accelerator
Lemos-Pinto, M.M.P.; Cadena, M.; Santos, N.; Fernandes, T.S.; Borges, E.; Amaral, A.
2015-01-01
Biological dosimetry (biodosimetry) is based on the investigation of radiation-induced biological effects (biomarkers), mainly dicentric chromosomes, in order to correlate them with radiation dose. To interpret the dicentric score in terms of absorbed dose, a calibration curve is needed. Each curve should be constructed with respect to basic physical parameters, such as the type of ionizing radiation characterized by low or high linear energy transfer (LET) and dose rate. This study was designed to obtain dose calibration curves by scoring of dicentric chromosomes in peripheral blood lymphocytes irradiated in vitro with a 6 MV electron linear accelerator (Mevatron M, Siemens, USA). Two software programs, CABAS (Chromosomal Aberration Calculation Software) and Dose Estimate, were used to generate the curve. The two software programs are discussed; the results obtained were compared with each other and with other published low-LET radiation curves. Both software programs resulted in identical linear and quadratic terms for the curve presented here, which was in good agreement with published curves for similar radiation quality and dose rates.
Water content determination of superdisintegrants by means of ATR-FTIR spectroscopy.
Szakonyi, G; Zelkó, R
2012-04-07
Water contents of superdisintegrant pharmaceutical excipients were determined by attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy using simple linear regression. Water contents of the three investigated common superdisintegrants (crospovidone, croscarmellose sodium, sodium starch glycolate) varied over a wide range (0-24%, w/w). In the case of crospovidone, three different samples from two manufacturers were examined in order to study the effects of different grades on the calibration curves. Water content determinations were based on the strong absorption of water between 3700 and 2800 cm⁻¹; other spectral changes, associated with the different compaction of samples on the ATR crystal under the same pressure, were followed in the infrared region between 1510 and 1050 cm⁻¹. The calibration curves were constructed using the ratio of absorbance intensities in the two investigated regions. Using appropriate baseline correction, the linearity of the calibration curves was maintained over the entire investigated water content range, and the effect of particle size on the calibration was not significant in the case of crospovidones from the same manufacturer. The described method enables the water content determination of powdered hygroscopic materials containing homogeneously distributed water.
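The ratio construction, the water band normalized by a compaction-sensitive reference band, can be sketched with hypothetical band intensities (none of the values below come from the paper):

```python
import numpy as np

# Illustrative ATR-FTIR band intensities: a water band (3700-2800 cm-1)
# and a compaction-reference band (1510-1050 cm-1), at known water
# contents (% w/w). All values are invented for this sketch.
water_pct = np.array([0.0, 4.0, 8.0, 12.0, 16.0, 20.0, 24.0])
a_water = np.array([0.02, 0.11, 0.20, 0.30, 0.39, 0.49, 0.58])
a_ref = np.array([0.95, 0.96, 0.94, 0.97, 0.95, 0.96, 0.95])

# Calibrating on the intensity ratio cancels sample-to-sample
# differences in compaction against the ATR crystal.
ratio = a_water / a_ref
slope, intercept = np.polyfit(water_pct, ratio, 1)

def water_content(r):
    """Invert the linear calibration: ratio -> water content (% w/w)."""
    return (r - intercept) / slope
```

The point of dividing by the reference band is visible in `a_ref`: its scatter mimics compaction variability, and the ratio removes most of it before the linear fit.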
Direct Estimate of Cocoa Powder Content in Cakes by Colorimetry and Photoacoustic Spectroscopy
NASA Astrophysics Data System (ADS)
Dóka, O.; Bicanic, D.; Kulcsár, R.
2014-12-01
Cocoa is a very important ingredient in the food industry and is largely consumed worldwide. In this investigation, colorimetry and photoacoustic spectroscopy (PAS) were used to directly assess the content of cocoa powder in cakes; both methods provided satisfactory results. The calibration curve was constructed using a series of home-made cakes containing varying amounts of cocoa powder. Then, at a later stage, the same calibration curve was used to quantify the cocoa content of several commercially available cakes. For the home-made cakes, the relationship between the PAS signal and the content of cocoa powder was linear, while a quadratic dependence was obtained for the colorimetric index (brightness) and the total color difference.
NASA Technical Reports Server (NTRS)
Demoss, J. F. (Compiler)
1971-01-01
Calibration curves for the Apollo 16 command service module pulse code modulation downlink and onboard display are presented. Subjects discussed are: (1) measurement calibration curve format, (2) measurement identification, (3) multi-mode calibration data summary, (4) pulse code modulation bilevel events listing, and (5) calibration curves for instrumentation downlink and meter link.
The Sloan Digital Sky Survey-II: Photometry and Supernova Ia Light Curves from the 2005 Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holtzman, Jon A.; /New Mexico State U.; Marriner, John
2010-08-26
We present ugriz light curves for 146 spectroscopically confirmed or spectroscopically probable Type Ia supernovae from the 2005 season of the SDSS-II Supernova Survey. The light curves have been constructed using a photometric technique that we call scene modeling, which is described in detail here; the major feature is that supernova brightnesses are extracted from a stack of images without spatial resampling or convolution of the image data. This procedure produces accurate photometry along with accurate estimates of the statistical uncertainty, and can be used to derive photometry taken with multiple telescopes. We discuss various tests of this technique that demonstrate its capabilities. We also describe the methodology used for the calibration of the photometry, and present calibrated magnitudes and fluxes for all of the spectroscopic SNe Ia from the 2005 season.
Photometric behavior and general characteristics of the nova HR Delphini
NASA Astrophysics Data System (ADS)
Raikova, D.
The light curve and the B-V color-index curve of HR Del were constructed on the basis of published UBV observations. From the normal color indices, the effective photosphere temperature and radius were determined using calibrations for normal stars. As the brightness reached its peak, the effective photosphere was expanding with a velocity of approximately 23 km/s, more than a factor of 10 lower than the gas velocity. This phenomenon is explained by the decreasing continuous opacity as the ejected gas expands.
Statistical behavior of ten million experimental detection limits
NASA Astrophysics Data System (ADS)
Voigtman, Edward; Abraham, Kevin T.
2011-02-01
Using a lab-constructed laser-excited fluorimeter, together with bootstrapping methodology, the authors have generated many millions of experimental linear calibration curves for the detection of rhodamine 6G tetrafluoroborate in ethanol solutions. The detection limits computed from them are in excellent agreement with both previously published theory and with comprehensive Monte Carlo computer simulations. Currie decision levels and Currie detection limits, each in the theoretical, chemical content domain, were found to be simply scaled reciprocals of the non-centrality parameter of the non-central t distribution that characterizes univariate linear calibration curves that have homoscedastic, additive Gaussian white noise. Accurate and precise estimates of the theoretical, content domain Currie detection limit for the experimental system, with 5% (each) probabilities of false positives and false negatives, are presented.
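In the simplified case where the calibration noise standard deviation is known, the content-domain Currie decision level and detection limit for a straight-line calibration are simple scaled quantities; the experimental case treated in the paper replaces the Gaussian quantiles with noncentral-t based factors, since sigma must be estimated. The sigma and slope below are assumed illustrative values.

```python
from statistics import NormalDist

# Straight-line calibration y = b0 + m*x with homoscedastic additive
# Gaussian white noise of KNOWN standard deviation sigma (a textbook
# simplification of the problem studied in the paper).
sigma = 0.05         # noise standard deviation, signal units (assumed)
m = 2.0              # calibration slope, signal per content unit (assumed)
alpha = beta = 0.05  # false-positive and false-negative probabilities

z_a = NormalDist().inv_cdf(1 - alpha)
z_b = NormalDist().inv_cdf(1 - beta)
x_c = z_a * sigma / m            # content-domain Currie decision level
x_d = (z_a + z_b) * sigma / m    # content-domain Currie detection limit
```

With alpha = beta = 0.05 the detection limit is exactly twice the decision level, i.e. roughly 3.29 sigma/m; the noncentral-t treatment inflates these factors when sigma is estimated from few calibration points.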
Germanium resistance thermometer calibration at superfluid helium temperatures
NASA Technical Reports Server (NTRS)
Mason, F. C.
1985-01-01
The rapid increase in resistance of high-purity semiconducting germanium with decreasing temperature in the superfluid helium range of temperatures makes this material highly adaptable as a very sensitive thermometer. Also, a germanium thermometer exhibits a highly reproducible resistance-versus-temperature characteristic curve upon cycling between liquid helium temperatures and room temperature. These two factors combine to make germanium thermometers ideally suited for measuring temperatures in many cryogenic studies at superfluid helium temperatures. One disadvantage, however, is the relatively high cost of calibrated germanium thermometers. In space helium cryogenic systems, many such thermometers are often required, leading to a high cost for calibrated thermometers. The construction of a thermometer calibration cryostat and probe that allows six germanium thermometers to be calibrated at one time, thus effecting substantial savings in the purchase of thermometers, is considered.
1989-09-01
[List-of-figures excerpt: interconnection wiring diagram for the ESA; typical gain versus total count curve for the CEM; calibration curve for energy bin 12 of the ion ESA; flight ESA S/N001; calibration curves for SPM S/N001 and S/N002.]
Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki
2016-01-01
Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were -32.336 and -33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.
Rondeau, M; Rouleau, M
1981-06-01
Using semen from bull, boar and stallion, as well as different spectrophotometers, we established the calibration curves relating the optical density of a sperm sample to the sperm count obtained on the hemacytometer. The results show that, for a given spectrophotometer, the calibration curve is not characteristic of the animal species we studied. The differences in size of the spermatozoa are probably too small to account for the anticipated specificity of the calibration curve. Furthermore, the fact that different dilution rates must be used, because of the vastly different sperm concentrations characteristic of these species, has no effect on the calibration curves, since the effect of the dilution rate is shown to be artefactual. On the other hand, for a given semen, the calibration curve varies depending upon the spectrophotometer used. However, if two instruments have the same characteristics in terms of spectral bandwidth, the calibration curves are not statistically different.
[Developing a predictive model for the caregiver strain index].
Álvarez-Tello, Margarita; Casado-Mejía, Rosa; Praena-Fernández, Juan Manuel; Ortega-Calvo, Manuel
Home care of patients with multiple morbidities is increasingly common. The caregiver strain index is a tool, in the form of a questionnaire, designed to measure the burden perceived by those who care for their family members. The aim of this study is to construct a diagnostic nomogram of informal caregiver burden using data from a predictive model. The model was drawn up using binary logistic regression with the questionnaire items as dichotomous factors. The dependent variable was the final score obtained with the questionnaire, categorised in accordance with the literature: scores between 0 and 6 were labelled as "no" (no caregiver stress) and scores of 7 or greater as "yes". The R statistical software (version 3.1.1) was used. To construct confidence intervals for the ROC curve, 2000 bootstrap replicates were used. A sample of 67 caregivers was obtained. A diagnostic nomogram was constructed with its calibration graph (scaled Brier score = 0.686, Nagelkerke R² = 0.791) and the corresponding ROC curve (area under the curve = 0.962). The predictive model generated using binary logistic regression and the nomogram contain four items (1, 4, 5 and 9) of the questionnaire. R plotting functions allow a very good solution for validating a model like this. The area under the ROC curve (0.96; 95% CI: 0.941-0.994) achieves a high discriminative value. Calibration also shows high goodness-of-fit values, suggesting that the model may be clinically useful in community nursing and geriatric establishments.
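The modelling pipeline (binary logistic regression on dichotomous items, discrimination summarized by the area under the ROC curve) can be sketched on synthetic data. The item effects, sample size, and prevalence below are invented; the regression is plain gradient descent and the AUC uses the rank (Mann-Whitney) formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic dichotomous items (4 predictors, as in the final model) and
# a burden outcome driven by them; all effect sizes are invented.
n = 300
X = rng.integers(0, 2, size=(n, 4)).astype(float)
logit = -2.0 + X @ np.array([1.5, 1.2, 2.0, 1.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Binary logistic regression fitted by plain gradient descent on the
# average negative log-likelihood gradient.
Xb = np.hstack([np.ones((n, 1)), X])
w = np.zeros(5)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
    w -= 0.1 * Xb.T @ (p - y) / n

# ROC AUC via the rank (Mann-Whitney) formulation, ties counted half.
scores = Xb @ w
pos, neg = scores[y == 1], scores[y == 0]
auc = (np.mean(pos[:, None] > neg[None, :])
       + 0.5 * np.mean(pos[:, None] == neg[None, :]))
```

A nomogram is then just a graphical rendering of the fitted coefficients in `w`; bootstrap confidence intervals for the AUC would repeat the AUC computation over resampled datasets.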
McCabe, Bradley P; Speidel, Michael A; Pike, Tina L; Van Lysel, Michael S
2011-04-01
In this study, newly formulated XR-RV3 GafChromic film was calibrated with National Institute of Standards and Technology (NIST) traceability for measurement of patient skin dose during fluoroscopically guided interventional procedures. The film was calibrated free-in-air to air kerma levels between 15 and 1100 cGy using four moderately filtered x-ray beam qualities (60, 80, 100, and 120 kVp). The calibration films were scanned with a commercial flatbed document scanner. Film reflective density-to-air kerma calibration curves were constructed for each beam quality, with both the orange and white sides facing the x-ray source. A method to correct for nonuniformity in scanner response (up to 25% depending on position) was developed to enable dose measurement with large films. The response of XR-RV3 film under patient backscattering conditions was examined using on-phantom film exposures and Monte Carlo simulations. The response of XR-RV3 film to a given air kerma depended on kVp and film orientation. For a 200 cGy air kerma exposure with the orange side of the film facing the source, the film response increased by 20% from 60 to 120 kVp. At 500 cGy, the increase was 12%. When 500 cGy exposures were performed with the white side facing the x-ray source, the film response increased by 4.0% (60 kVp) to 9.9% (120 kVp) compared to the orange-facing orientation. On-phantom film measurements and Monte Carlo simulations show that using a NIST-traceable free-in-air calibration curve to determine air kerma in the presence of backscatter results in an error from 2% up to 8% depending on beam quality. The combined uncertainty in the air kerma measurement from the calibration curves and scanner nonuniformity correction was ±7.1% (95% C.I.). The film showed notable stability. Calibrations of film and scanner separated by 1 yr differed by 1.0%. XR-RV3 radiochromic film response to a given air kerma shows dependence on beam quality and film orientation.
The presence of backscatter slightly modifies the x-ray energy spectrum; however, the increase in film response can be attributed primarily to the increase in total photon fluence at the sensitive layer. Film calibration curves created under free-in-air conditions may be used to measure dose from fluoroscopic quality x-ray beams, including patient backscatter with an error less than the uncertainty of the calibration in most cases.
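A calibration of this kind reduces, at the point of use, to looking up a measured film density on the per-beam-quality curve. The sketch below uses hypothetical (density, air kerma) calibration points and simple piecewise-linear interpolation; it is an illustration, not the published fit.

```python
# Sketch: converting scanned film density to air kerma via a per-kVp
# calibration table (hypothetical data points, not the published values).
from bisect import bisect_left

# (reflective density, air kerma in cGy) pairs for one beam quality,
# assumed monotonically increasing in density
CAL_80KVP = [(0.05, 15.0), (0.20, 100.0), (0.45, 300.0), (0.70, 700.0), (0.85, 1100.0)]

def density_to_kerma(density, table):
    """Piecewise-linear interpolation of the calibration curve."""
    xs = [d for d, _ in table]
    if not xs[0] <= density <= xs[-1]:
        raise ValueError("density outside calibrated range")
    i = bisect_left(xs, density)
    if xs[i] == density:
        return table[i][1]
    (x0, y0), (x1, y1) = table[i - 1], table[i]
    return y0 + (y1 - y0) * (density - x0) / (x1 - x0)

print(density_to_kerma(0.45, CAL_80KVP))  # exact table point -> 300.0
```

In practice one such table would be built per beam quality and film orientation, as the abstract describes.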
Waters of Hydration of Cupric Hydrates: A Comparison between Heating and Absorbance Methods
ERIC Educational Resources Information Center
Barlag, Rebecca; Nyasulu, Frazier
2011-01-01
The empirical formulas of four cupric hydrates are determined by measuring the absorbance in aqueous solution. The Beer-Lambert Law is verified by constructing a calibration curve of absorbance versus known Cu²⁺(aq) concentration. A solution of the unknown hydrate is prepared by using 0.2-0.3 g of hydrate, and water is added such…
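The Beer-Lambert calibration described above amounts to fitting a straight line through standards of known concentration and reading an unknown off the line. A minimal sketch, with invented absorbance data and a through-the-origin fit:

```python
# Sketch: Beer-Lambert calibration line A = k*c fit through standards,
# then concentration of an unknown from its absorbance (made-up numbers).
standards = [(0.02, 0.110), (0.04, 0.221), (0.06, 0.330), (0.08, 0.442)]  # (M, A)

# slope through the origin minimizing sum (A - k*c)^2  ->  k = sum(cA)/sum(c^2)
k = sum(c * a for c, a in standards) / sum(c * c for c, _ in standards)

def concentration(absorbance):
    """Invert the calibration line for an unknown sample."""
    return absorbance / k

A_unknown = 0.276
print(round(concentration(A_unknown), 4))  # ~0.05 M
```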
NASA Astrophysics Data System (ADS)
Li, Zhengxiang; Gonzalez, J. E.; Yu, Hongwei; Zhu, Zong-Hong; Alcaniz, J. S.
2016-02-01
We apply two methods, i.e., Gaussian processes and a nonparametric smoothing procedure, to reconstruct the Hubble parameter H(z) as a function of redshift from 15 measurements of the expansion rate obtained from age estimates of passively evolving galaxies. These reconstructions enable us to derive the luminosity distance to a given redshift z, calibrate the light-curve fitting parameters accounting for the (unknown) intrinsic magnitude of type Ia supernovae (SNe Ia), and construct cosmological model-independent Hubble diagrams of SNe Ia. In order to test the compatibility between the reconstructed functions of H(z), we perform a statistical analysis considering the latest SNe Ia sample, the so-called joint light-curve compilation. We find that, for the Gaussian processes, the reconstructed functions of the Hubble parameter versus redshift, and thus the subsequent analysis of SNe Ia calibrations and cosmological implications, are sensitive to the prior mean functions. For the nonparametric smoothing method, however, the reconstructed functions do not depend on the initial guess models, and they consistently require high values of H0, in excellent agreement with recent measurements of this quantity from Cepheids and other local distance indicators.
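Once H(z) is reconstructed, the model-independent luminosity distance in a flat universe follows from d_L(z) = (1+z) c ∫₀^z dz′/H(z′). A sketch with a hypothetical smooth H(z) standing in for the reconstructed function:

```python
# Sketch: luminosity distance from a reconstructed expansion history.
# The H(z) below is a placeholder (flat LCDM-like shape), not the
# Gaussian-process reconstruction of the paper.
C_KMS = 299792.458  # speed of light, km/s

def H(z):
    """Placeholder reconstruction, km/s/Mpc."""
    H0, om = 70.0, 0.3
    return H0 * (om * (1 + z) ** 3 + 1 - om) ** 0.5

def luminosity_distance(z, n=1000):
    """d_L = (1+z) * c * trapezoidal integral of 1/H from 0 to z; Mpc."""
    h = z / n
    s = 0.5 * (1 / H(0) + 1 / H(z)) + sum(1 / H(i * h) for i in range(1, n))
    return (1 + z) * C_KMS * h * s

print(round(luminosity_distance(0.5)))  # a couple of thousand Mpc at z=0.5
```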
Refinement of moisture calibration curves for nuclear gage : interim report no. 1.
DOT National Transportation Integrated Search
1972-01-01
This study was initiated to determine the correct moisture calibration curves for different nuclear gages. It was found that the Troxler Model 227 had a linear response between count ratio and moisture content. Also, the two calibration curves for th...
X-Ray Fluorescence Determination of the Surface Density of Chromium Nanolayers
NASA Astrophysics Data System (ADS)
Mashin, N. I.; Chernjaeva, E. A.; Tumanova, A. N.; Ershov, A. A.
2014-01-01
An auxiliary system consisting of thin-film layers of chromium deposited on a polymer film substrate is used to construct calibration curves for the relative intensities of the K α lines of chromium on bulk substrates of different elements as functions of the chromium surface density in the reference samples. Correction coefficients are calculated to take into account the absorption of primary radiation from an x-ray tube and analytical lines of the constituent elements of the substrate. A method is developed for determining the surface density of thin films of chromium when test and calibration samples are deposited on substrates of different materials.
Utsumi, Takanobu; Oka, Ryo; Endo, Takumi; Yano, Masashi; Kamijima, Shuichi; Kamiya, Naoto; Fujimura, Masaaki; Sekita, Nobuyuki; Mikami, Kazuo; Hiruta, Nobuyuki; Suzuki, Hiroyoshi
2015-11-01
The aim of this study is to validate and compare the predictive accuracy of two nomograms predicting the probability of Gleason sum upgrading between biopsy and radical prostatectomy pathology among representative patients with prostate cancer. We previously developed a nomogram, as did Chun et al. In this validation study, patients originated from two centers: Toho University Sakura Medical Center (n = 214) and Chibaken Saiseikai Narashino Hospital (n = 216). We assessed predictive accuracy using area-under-the-curve values and constructed calibration plots to characterize each nomogram's behavior at each institution. Both nomograms showed high predictive accuracy at each institution, although the calibration plots of the two nomograms underestimated the actual probability at Toho University Sakura Medical Center. Clinicians should examine calibration plots for their own institution to understand how each nomogram behaves for their patients, even when the nomogram has good overall predictive accuracy.
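A calibration plot of the kind used here bins patients by predicted probability and compares the mean prediction in each bin with the observed event rate. A minimal sketch with invented predictions and outcomes:

```python
# Sketch: points for a calibration plot (hypothetical data).
def calibration_points(preds, outcomes, n_bins=4):
    """Return (mean predicted probability, observed rate) per occupied bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(preds, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p = 1.0 into last bin
        bins[i].append((p, y))
    pts = []
    for b in bins:
        if b:
            pts.append((sum(p for p, _ in b) / len(b),
                        sum(y for _, y in b) / len(b)))
    return pts

preds = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.85, 0.9]   # nomogram outputs
outcomes = [0, 0, 1, 0, 1, 1, 1, 1]                  # upgraded at surgery?
print(calibration_points(preds, outcomes))
```

Points falling below the diagonal indicate the kind of underestimation the study observed at one institution.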
2017-11-01
sent from light-emitting diodes (LEDs) of 5 colors (green, red, white, amber, and blue). Experiment 1 involved controlled laboratory measurements of... [Appendix figures and tables report red and green LED calibration measurements and calibration curves with quadratic curve fits and R² values.]
A new form of the calibration curve in radiochromic dosimetry. Properties and results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tamponi, Matteo, E-mail: mtamponi@aslsassari.it; Bona, Rossana; Poggiu, Angela; Marini, Piergiorgio
2016-07-01
Purpose: This work describes a new form of the calibration curve for radiochromic dosimetry that depends on one fit parameter. Some results are reported to show that the new curve performs as well as those previously used and, more importantly, significantly reduces the dependence on the lot of films, the film orientation on the scanner, and the time after exposure. Methods: The form of the response curve makes use of the net optical densities ratio against the dose and has been studied by means of the Beer–Lambert law and a simple modeling of the film. The new calibration curve has been applied to EBT3 films exposed at 6 and 15 MV energy beams of linear accelerators and read-out in transmission mode by means of a flatbed color scanner. Its performance has been compared to that of two established forms of the calibration curve, which use the optical density and the net optical density against the dose. Four series of measurements with four lots of EBT3 films were used to evaluate the precision, accuracy, and dependence on the time after exposure, orientation on the scanner and lot of films. Results: The new calibration curve is roughly subject to the same dose uncertainty, about 2% (1 standard deviation), and has the same accuracy, about 1.5% (dose values between 50 and 450 cGy), as the other calibration curves when films of the same lot are used. Moreover, the new calibration curve, albeit obtained from only one lot of film, shows a good agreement with experimental data from all other lots of EBT3 films used, with an accuracy of about 2% and a relative dose precision of 2.4% (1 standard deviation). The agreement also holds for changes of the film orientation and of the time after exposure. Conclusions: The dose accuracy of this new form of the calibration curve is always equal to or better than those obtained from the two types of curves previously used.
The use of the net optical densities ratio considerably reduces the dependence on the lot of films, the landscape/portrait orientation, and the time after exposure. This form of the calibration curve could become even more useful with new optical digital devices using monochromatic light.
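As an illustration of what a one-fit-parameter calibration looks like in code, the sketch below fits dose against a function of the net optical density ratio. The functional form D = k·r/(1−r) and all numbers are assumptions for illustration, not the form proposed in the paper:

```python
# Sketch: one-parameter calibration of dose vs. net optical density
# ratio. Form D = k * r/(1-r) is an illustrative assumption; data are
# made up.
def fit_k(ratios, doses):
    """Least-squares k for D = k*x with x = r/(1-r): k = sum(xD)/sum(x^2)."""
    xs = [r / (1 - r) for r in ratios]
    return sum(x * d for x, d in zip(xs, doses)) / sum(x * x for x in xs)

ratios = [0.10, 0.20, 0.30, 0.40]      # net optical density ratios
doses = [55.6, 125.0, 214.3, 333.3]    # cGy, generated near k = 500
k = fit_k(ratios, doses)

def dose(ratio):
    """Dose from a measured ratio via the single fitted parameter."""
    return k * ratio / (1 - ratio)

print(round(k, 1))  # -> 500.0
```

The practical appeal of a single parameter is that re-fitting for a new film lot or scanner orientation needs very few measurements.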
The use of the dicentric assay for biological dosimetry for radiation accidents in Bulgaria.
Hadjidekova, Valeria; Hristova, Rositsa; Ainsbury, Elizabeth A; Atanasova, Petya; Popova, Ljubomira; Staynova, Albena
2010-02-01
This paper details the construction of a 137Cs gamma calibration curve that has been established for dicentric assay and the testing and validation of the curve through biological dosimetry in three situations of suspected workplace overexposure that arose accidentally or through negligence or lack of appropriate safety measures. The three situations were: (1) suspected 137Cs contamination in a factory air supply; (2) suspected exposure to an industrial 192Ir source; and (3) accidental exposure of construction workers to radiation from a 60Co radiotherapy source in a hospital medical physics department. From a total of 24 potentially-exposed subjects, only one worker was found to have a statistically significant dose (0.16 Gy, 95% confidence intervals 0.02-0.43 Gy). In all other cases, the main function of the biological dosimetry was to reassure the subjects that any dose received was low.
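Dicentric dose estimation conventionally models the yield as Y(D) = C + αD + βD², and an observed yield is converted to dose by inverting the quadratic. The coefficients below are illustrative, not the study's 137Cs fit:

```python
# Sketch: inverting a linear-quadratic dicentric dose-response curve.
# Coefficients are illustrative, not the published 137Cs calibration.
import math

C, ALPHA, BETA = 0.001, 0.02, 0.06  # dic/cell, per Gy, per Gy^2

def dose_from_yield(y):
    """Positive root of beta*D^2 + alpha*D + (C - y) = 0."""
    disc = ALPHA ** 2 - 4 * BETA * (C - y)
    return (-ALPHA + math.sqrt(disc)) / (2 * BETA)

# yield constructed at D = 0.16 Gy, the dose scale reported in the study
y_obs = 0.001 + 0.02 * 0.16 + 0.06 * 0.16 ** 2
print(round(dose_from_yield(y_obs), 2))  # -> 0.16
```

Confidence intervals on the dose, as quoted in the abstract, would come from propagating the Poisson uncertainty of the dicentric count through this inversion.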
Ozcan, Hakki Mevlut; Sagiroglu, Ayten
2010-08-01
In this study, a biosensor was constructed by immobilizing tissue homogenate of banana peel onto a glassy carbon electrode surface. The effects of the amounts of immobilization materials, pH, buffer concentration, and temperature on the biosensor response were studied. In addition, the detection ranges of 13 phenolic compounds were obtained from calibration graphs. Storage stability, repeatability, inhibitory effects, and sample applications were also investigated. A typical calibration curve for the sensor revealed a linear range of 10-80 microM catechol. In reproducibility studies, the coefficient of variation and standard deviation were calculated as 2.69% and 1.44 x 10(-3) microM, respectively.
New approach to calibrating bed load samplers
Hubbell, D.W.; Stevens, H.H.; Skinner, J.V.; Beverage, J.P.
1985-01-01
Cyclic variations in bed load discharge at a point, which are an inherent part of the process of bed load movement, complicate calibration of bed load samplers and preclude the use of average rates to define sampling efficiencies. Calibration curves, rather than efficiencies, are derived by two independent methods using data collected with prototype versions of the Helley‐Smith sampler in a large calibration facility capable of continuously measuring transport rates across a 9 ft (2.7 m) width. Results from both methods agree. Composite calibration curves, based on matching probability distribution functions of samples and measured rates from different hydraulic conditions (runs), are obtained for six different versions of the sampler. Sampled rates corrected by the calibration curves agree with measured rates for individual runs.
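Matching probability distribution functions of sampled and measured rates can be sketched as a quantile-quantile mapping: sorted sampled rates are paired with sorted measured rates, and the resulting curve corrects future samples. Toy data only; the facility data behind the published curves were far richer:

```python
# Sketch: sampler calibration curve from matched empirical quantiles
# (toy data, arbitrary units).
def quantile_curve(sampled, measured):
    """Pair sorted sampled rates with sorted measured rates (Q-Q match)."""
    return list(zip(sorted(sampled), sorted(measured)))

def correct(rate, curve):
    """Piecewise-linear lookup of a sampled rate on the Q-Q curve."""
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if x0 <= rate <= x1:
            return y0 + (y1 - y0) * (rate - x0) / (x1 - x0)
    raise ValueError("rate outside calibration range")

sampled = [2.0, 3.5, 5.0, 8.0]    # rates caught by the sampler
measured = [1.5, 3.0, 4.8, 8.5]   # flume-measured transport rates
curve = quantile_curve(sampled, measured)
print(correct(4.25, curve))  # -> 3.9
```

Unlike a single efficiency factor, a curve of this kind can correct differently at low and high transport rates, which is the point the abstract makes.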
NASA Astrophysics Data System (ADS)
Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia
2017-09-01
The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either empirically, using animals as experimental subjects, or derived from mathematical equations. However, determining the calibration curve with animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques are widely used in the biomedical optics field because of their capability to reproduce real tissue behavior. The mathematical relationship between optical density (OD) and the optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. Optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method for extended calibration curve studies using other wavelength pairs.
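The OD/ODR quantities underlying the calibration curve can be computed directly from detected fluxes. The values below are illustrative; in the paper they come from surface-based Monte Carlo ray tracing at two wavelengths:

```python
# Sketch: optical density and optical density ratio, the quantities
# mapped to SaO2 by the calibration curve (illustrative fluxes).
import math

def optical_density(i_incident, i_detected):
    """OD = log10(I0 / I)."""
    return math.log10(i_incident / i_detected)

# hypothetical detected fractions for two wavelength channels
od_red = optical_density(1.0, 0.40)
od_ir = optical_density(1.0, 0.55)
odr = od_red / od_ir  # this ratio is what the calibration curve maps to SaO2
print(round(odr, 3))
```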
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban
2018-05-01
We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear) E_t largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
Chan, George C. Y. [Bloomington, IN]; Hieftje, Gary M. [Bloomington, IN]
2010-08-03
A method for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry (ICP-AES). ICP-AES analysis is performed across a plurality of selected locations in the plasma on an unknown sample, collecting the light intensity at one or more selected wavelengths of one or more sought-for analytes, creating a first dataset. The first dataset is then calibrated with a calibration dataset, creating a calibrated first dataset curve. If the calibrated first dataset curve varies with location in the plasma for a selected wavelength, errors are present. Plasma-related errors are then corrected by diluting the unknown sample and performing the same ICP-AES analysis on the diluted unknown sample, creating a calibrated second dataset curve (accounting for the dilution) for the one or more sought-for analytes. The cross-over point of the calibrated dataset curves yields the corrected value (free from plasma-related errors) for each sought-for analyte.
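The cross-over correction can be sketched by modeling the calibrated results of the original and the dilution-corrected sample as two curves in plasma viewing position and solving for their intersection. The straight-line curves and all numbers below are purely illustrative:

```python
# Sketch: cross-over of two calibrated concentration-vs-position curves,
# modeled here as straight lines (m, b) for illustration only.
def crossover(curve_a, curve_b):
    """Intersection of y = m*x + b lines given as (m, b) tuples."""
    (m1, b1), (m2, b2) = curve_a, curve_b
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

original = (0.8, 10.0)   # apparent concentration vs. viewing position
diluted = (-0.4, 13.0)   # diluted run, already scaled by dilution factor
pos, conc = crossover(original, diluted)
print(round(pos, 2), round(conc, 2))  # -> 2.5 12.0
```

Where the two determinations agree is, by the patent's argument, the value free of plasma-related matrix effects.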
Zhu, Lin; Ruan, Jian-Qing; Li, Na; Fu, Peter P; Ye, Yang; Lin, Ge
2016-03-01
Nearly 50% of naturally-occurring pyrrolizidine alkaloids (PAs) are hepatotoxic, and the majority of hepatotoxic PAs are retronecine-type PAs (RET-PAs). However, quantitative measurement of PAs in herbs/foodstuffs is often difficult because most reference PAs are unavailable. In this study, a rapid, selective, and sensitive UHPLC-QTOF-MS method was developed for the estimation of RET-PAs in herbs without requiring corresponding standards. This method is based on our previously established characteristic and diagnostic mass fragmentation patterns and the use of retrorsine for calibration. The use of a single RET-PA (retrorsine) for constructing the calibration curve was supported by the high similarity, with no significant differences, among the calibration curves constructed from the peak areas of extracted ion chromatograms of the fragment ions at m/z 120.0813 or 138.0919 versus the concentrations of five representative RET-PAs. The developed method was successfully applied to measure the total content of structurally diverse toxic RET-PAs in fifteen potential PA-containing herbs.
Kosaka, Ryo; Fukuda, Kyohei; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi
2013-01-01
In order to monitor the condition of a patient using a left ventricular assist system (LVAS), blood flow should be measured. However, a reliable method of determining the blood-flow rate has not been established. The purpose of the present study is to develop a noninvasive blood-flow meter using a curved cannula with zero compensation for an axial flow blood pump. The flow meter uses the centrifugal force generated by the flow in the curved cannula. Two sets of strain gauges served as sensors. The first gauges were attached to the curved area to measure static pressure and centrifugal force, and the second gauges were attached to the straight area to measure static pressure. The flow rate was determined from the difference in output between the two sets of gauges. The zero compensation was based on the consideration that the flow rate could be estimated during the initial driving condition and the ventricular suction condition without using the flow meter. A mock circulation loop was constructed to evaluate the measurement performance of the developed flow meter with zero compensation. As a result, the zero compensation worked effectively for the initial calibration and for the zero-drift of the measured flow rate. We confirmed that the developed flow meter using a curved cannula with zero compensation was able to measure the flow rate continuously, accurately, and noninvasively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-01-01
A sophisticated non-linear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the Chi-Squared Matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 sec.
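Treating the standard masses as uncertain quantities turns the fit into an errors-in-variables problem rather than ordinary least squares. A toy sketch for a straight-line response y = k·m, minimizing a chi-square whose denominator propagates both error sources via a brute-force scan (the original work used the VA02A minimizer and a full multiparameter model):

```python
# Sketch: errors-in-variables fit of y = k*m with uncertain standard
# masses, by minimizing an effective chi-square over a grid of k
# (toy data, arbitrary response units).
def chi2(k, masses, counts, sig_m, sig_y):
    """Chi-square with both mass and response errors in the denominator."""
    return sum((y - k * m) ** 2 / (sig_y ** 2 + (k * sig_m) ** 2)
               for m, y in zip(masses, counts))

def best_k(masses, counts, sig_m, sig_y, lo=0.0, hi=10.0, steps=100000):
    """Brute-force 1-D scan standing in for a proper minimizer."""
    ks = (lo + (hi - lo) * i / steps for i in range(steps + 1))
    return min(ks, key=lambda k: chi2(k, masses, counts, sig_m, sig_y))

masses = [0.1, 0.3, 0.5, 1.0]          # mg, gravimetric standards
counts = [0.205, 0.597, 1.010, 1.985]  # detector response
k = best_k(masses, counts, sig_m=0.002, sig_y=0.01)
print(round(k, 2))
```

The denominator is what lets the fit "move" the standards within their known mass errors instead of forcing the line through them exactly.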
Ogawa, Kazuma; Kaneta, Takashi
2016-01-01
Microfluidic paper-based analytical devices (μPADs) were used to detect the iron content in the water of a natural hot spring in order to assess the applicability of this process to the environmental analysis of natural water. The μPADs were fabricated using a wax printer after the addition of hydroxylamine into the detection reservoirs to reduce Fe(3+) to Fe(2+), 1,10-phenanthroline for complex formation, and poly(acrylic acid) for ion-pair formation with an acetate buffer (pH 4.7). The calibration curve of Fe(3+) showed linearity ranging from 100 to 1000 ppm in a semi-log plot, whereas the color intensity was proportional to the concentration of Fe(3+) from 40 to 350 ppm. The calibration curve fluctuated from day to day over four days of successive experiments, indicating that a calibration curve must be constructed each day. When freshly prepared μPADs were compared with stored ones, no significant difference was found. The μPADs were applied to the determination of Fe(3+) in a sample of water from a natural hot spring. Both the accuracy and the precision of the μPAD method were evaluated by comparison with results obtained via conventional spectrophotometry. The results of the μPADs were in good agreement with, but less precise than, those obtained via conventional spectrophotometry. Consequently, the μPADs offer advantages that include rapid and miniaturized operation, although the precision was poorer than that of conventional spectrophotometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rinaldi, I; Ludwig Maximilian University, Garching, DE; Heidelberg University Hospital, Heidelberg, DE
2015-06-15
Purpose: We present an improved method to calculate patient-specific calibration curves to convert X-ray computed tomography (CT) Hounsfield Unit (HU) to relative stopping powers (RSP) for proton therapy treatment planning. Methods: By optimizing the HU-RSP calibration curve, the difference between a proton radiographic image and a digitally reconstructed X-ray radiography (DRR) is minimized. The feasibility of this approach has previously been demonstrated. This scenario assumes that all discrepancies between proton radiography and DRR originate from uncertainties in the HU-RSP curve. In reality, external factors cause imperfections in the proton radiography, such as misalignment compared to the DRR and unfaithful representation of geometric structures (“blurring”). We analyze these effects based on synthetic datasets of anthropomorphic phantoms and suggest an extended optimization scheme which explicitly accounts for these effects. Performance of the method has been tested for various simulated irradiation parameters. The ultimate purpose of the optimization is to minimize uncertainties in the HU-RSP calibration curve. We therefore suggest and perform a thorough statistical treatment to quantify the accuracy of the optimized HU-RSP curve. Results: We demonstrate that without extending the optimization scheme, spatial blurring (equivalent to FWHM=3mm convolution) in the proton radiographies can cause up to 10% deviation between the optimized and the ground truth HU-RSP calibration curve. Instead, results obtained with our extended method reach 1% or better correspondence. We have further calculated gamma index maps for different acceptance levels. With DTA=0.5mm and RD=0.5%, a passing ratio of 100% is obtained with the extended method, while an optimization neglecting effects of spatial blurring only reaches ∼90%.
Conclusion: Our contribution underlines the potential of a single proton radiograph to generate a patient-specific calibration curve and to improve dose delivery by optimizing the HU-RSP calibration curve, as long as all sources of systematic incongruence are properly modeled.
Dried blood spot analysis of creatinine with LC-MS/MS in addition to immunosuppressants analysis.
Koster, Remco A; Greijdanus, Ben; Alffenaar, Jan-Willem C; Touw, Daan J
2015-02-01
In order to monitor creatinine levels or to adjust the dosage of renally excreted or nephrotoxic drugs, the analysis of creatinine in dried blood spots (DBS) could be a useful addition to DBS analysis of immunosuppressants. We developed a LC-MS/MS method for the analysis of creatinine in the same DBS extract that was used for the analysis of tacrolimus, sirolimus, everolimus, and cyclosporine A in transplant patients with the use of Whatman FTA DMPK-C cards. The method was validated using three different strategies: a seven-point calibration curve using the intercept of the calibration to correct for the natural presence of creatinine in reference samples, a one-point calibration curve at an extremely high concentration in order to diminish the contribution of the natural presence of creatinine, and the use of creatinine-[(2)H3] with an eight-point calibration curve. The validated range for creatinine was 120 to 480 μmol/L (seven-point calibration curve), 116 to 7000 μmol/L (one-point calibration curve), and 1.00 to 400.0 μmol/L for creatinine-[(2)H3] (eight-point calibration curve). The precision and accuracy results for all three validations showed a maximum CV of 14.0% and a maximum bias of -5.9%. Creatinine in DBS was found to be stable at ambient temperature and 32 °C for 1 week and at -20 °C for 29 weeks. Good correlations were observed between patient DBS samples and routine enzymatic plasma analysis, demonstrating the capability of the DBS method to serve as an alternative to creatinine plasma measurement.
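The first validation strategy above (a multi-point curve whose intercept corrects for the endogenous creatinine already present in reference samples) can be sketched as follows. The regression helper and all concentrations and responses are invented for illustration; they are not the study's data.

```python
# Sketch of an intercept-corrected calibration: the blank matrix already
# contains the analyte, so the fitted intercept estimates its endogenous level.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Spiked concentrations (added on top of endogenous creatinine) vs. response.
added = [0, 80, 160, 240, 320, 400, 480]           # μmol/L added
resp  = [0.52, 0.92, 1.32, 1.72, 2.12, 2.52, 2.92]  # instrument response (a.u.)

slope, intercept = fit_line(added, resp)
endogenous = intercept / slope  # concentration implied by the blank's signal

def concentration(response):
    """Total creatinine implied by a response, endogenous share included."""
    return response / slope

print(round(endogenous, 1), round(concentration(1.52), 1))
```

In this toy series the blank's response implies an endogenous level of 104 μmol/L, which the intercept absorbs so that spiked standards still fall on one line.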
Calibration of thermocouple psychrometers and moisture measurements in porous materials
NASA Astrophysics Data System (ADS)
Guz, Łukasz; Sobczuk, Henryk; Połednik, Bernard; Guz, Ewa
2016-07-01
The paper presents an in situ calibration method for Peltier psychrometric sensors which allows the water potential to be determined. Water potential can be easily recalculated into the moisture content of the porous material. In order to obtain correct results of water potential, each probe should be calibrated. NaCl salt solutions with molar concentrations of 0.4 M, 0.7 M, 1.0 M and 1.4 M were used for calibration, which enabled osmotic potentials in the range -1791 kPa to -6487 kPa to be obtained. Traditionally, the value of voltage generated on thermocouples during wet-bulb temperature depression is calculated in order to determine the calibration function for psychrometric in situ sensors. In the new method of calibration, the area under the psychrometric curve, along with the Peltier cooling current and its duration, was taken into consideration. During calibration, different cooling currents were applied for each salt solution, i.e. 3, 5 and 8 mA, as well as different cooling durations for each current (from 2 to 100 s in 2 s steps). Afterwards, the shape of each psychrometric curve was thoroughly examined and the area under the curve was computed. Results of the experiment indicate a robust correlation between the area under the psychrometric curve and the water potential. Calibration formulas were derived on the basis of these features.
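The key feature above, the area ("field") under the psychrometric curve, is a simple numerical integral of the sensor output over the cooling/evaporation cycle. A minimal sketch with a trapezoidal rule; the voltage samples are invented:

```python
# Trapezoidal area under a psychrometric voltage-vs-time curve. In the paper's
# scheme, areas from the known NaCl solutions would then be regressed against
# their osmotic potentials to obtain the calibration formula.

def trapezoid_area(times, voltages):
    """Area under the sampled curve, μV·s."""
    return sum((t2 - t1) * (v1 + v2) / 2
               for t1, t2, v1, v2 in zip(times, times[1:], voltages, voltages[1:]))

times = [0, 2, 4, 6, 8]            # s
volts = [0.0, 4.0, 6.0, 4.0, 1.0]  # μV

area = trapezoid_area(times, volts)
print(area)  # 29.0
```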
40 CFR 89.323 - NDIR analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...
40 CFR 89.323 - NDIR analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...
40 CFR 89.323 - NDIR analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...
40 CFR 89.323 - NDIR analyzer calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...
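The CFR excerpts above describe the same recurring procedure: zero and span the analyzer, fit each range, and accept a simple linear calibration if the range is within 2 percent of being linear. A rough sketch of that acceptance check; the span points, readings, and the exact acceptance arithmetic are assumptions for illustration, not regulatory text:

```python
# Fit a linear calibration for one analyzer range and test whether the worst
# deviation from the line stays within 2 percent of full scale.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return slope, my - slope * mx

span_points = [0, 25, 50, 75, 100]            # span gas, % of full scale
readings    = [0.0, 24.5, 50.0, 75.6, 100.0]  # analyzer response

slope, intercept = fit_line(span_points, readings)
full_scale = 100.0
worst = max(abs(y - (slope * x + intercept)) for x, y in zip(span_points, readings))
linear_ok = worst / full_scale * 100 <= 2.0   # within 2% of linear?
print(linear_ok)  # True
```

If the check failed, the regulation's fallback would be a higher-order polynomial fit for that range.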
Balss, K M; Llanos, G; Papandreou, G; Maryanoff, C A
2008-04-01
Raman spectroscopy was used to differentiate each component found in the CYPHER Sirolimus-eluting Coronary Stent. The unique spectral features identified for each component were then used to develop three separate calibration curves to describe the solid phase distribution found on drug-polymer coated stents. The calibration curves were obtained by analyzing confocal Raman spectral depth profiles from a set of 16 unique formulations of drug-polymer coatings sprayed onto stents and planar substrates. The sirolimus model was linear from 0 to 100 wt % of drug. The individual polymer calibration curves for poly(ethylene-co-vinyl acetate) [PEVA] and poly(n-butyl methacrylate) [PBMA] were also linear from 0 to 100 wt %. The calibration curves were tested on three independent drug-polymer coated stents. The sirolimus calibration predicted the drug content within 1 wt % of the laboratory assay value. The polymer calibrations predicted the content within 7 wt % of the formulation solution content. Attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectra from five formulations confirmed a linear response to changes in sirolimus and polymer content. Copyright 2007 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno
2018-03-01
This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas are used to identify the starting and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparably to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where other calibration data sets than discharge are often not available or very costly to obtain.
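One HPC ingredient named above, the annual cumulative curve of daily temperature minus the melt threshold, can be sketched as follows. This is our own minimal reading, not the authors' code; the threshold and daily temperatures are invented:

```python
# Cumulative curve of (daily mean temperature - melt threshold). The day where
# the curve bottoms out marks the turn toward sustained melt and is used here
# as a proxy for the start of the ablation period.

T_MELT = 0.0  # deg C, assumed melt-threshold temperature

daily_temp = [-6, -4, -3, -1, 1, 2, 4, 5, 3, 6]  # mean daily temperature, deg C

cumulative, total = [], 0.0
for t in daily_temp:
    total += t - T_MELT
    cumulative.append(total)

# 1-based day of the curve minimum; ablation is taken to begin after this day.
ablation_turn = cumulative.index(min(cumulative)) + 1
print(ablation_turn)  # day 4 in this toy series
```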
Chen, Rui; Xie, Liping; Xue, Wei; Ye, Zhangqun; Ma, Lulin; Gao, Xu; Ren, Shancheng; Wang, Fubo; Zhao, Lin; Xu, Chuanliang; Sun, Yinghao
2016-09-01
Substantial differences exist in the relationship of prostate cancer (PCa) detection rate and prostate-specific antigen (PSA) level between Western and Asian populations. Classic Western risk calculators, the European Randomized Study for Screening of Prostate Cancer Risk Calculator and the Prostate Cancer Prevention Trial Risk Calculator, were shown not to be applicable in Asian populations. We aimed to develop and validate a risk calculator for predicting the probability of PCa and high-grade PCa (defined as Gleason Score sum 7 or higher) at initial prostate biopsy in Chinese men. Urology outpatients who underwent initial prostate biopsy according to the inclusion criteria were included. The multivariate logistic regression-based Chinese Prostate Cancer Consortium Risk Calculator (CPCC-RC) was constructed with cases from 2 hospitals in Shanghai. Discriminative ability, calibration, and decision curve analysis were externally validated in 3 CPCC member hospitals. Of the 1,835 patients involved, PCa was identified in 338/924 (36.6%) and 294/911 (32.3%) men in the development and validation cohort, respectively. Multivariate logistic regression analyses showed that 5 predictors (age, logPSA, logPV, free PSA ratio, and digital rectal examination) were associated with PCa (Model 1) or high-grade PCa (Model 2), respectively. The area under the curve of Model 1 and Model 2 was 0.801 (95% CI: 0.771-0.831) and 0.826 (95% CI: 0.796-0.857), respectively. Both models illustrated good calibration and substantial improvement in decision curve analyses over any single predictor at all threshold probabilities. Higher predictive accuracy, better calibration, and greater clinical benefit were achieved by CPCC-RC, compared with the European Randomized Study for Screening of Prostate Cancer Risk Calculator and the Prostate Cancer Prevention Trial Risk Calculator in predicting PCa.
In external validation, CPCC-RC performed well in discrimination, calibration, and decision curve analysis compared with Western risk calculators. CPCC-RC may aid in decision-making on prostate biopsy in Chinese men or in other Asian populations with similar genetic and environmental backgrounds. Copyright © 2016 Elsevier Inc. All rights reserved.
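The discrimination statistic reported above (area under the ROC curve) can be computed directly via the Mann-Whitney formulation: the probability that a randomly chosen biopsy-positive man receives a higher predicted risk than a randomly chosen biopsy-negative man. The scores below are invented:

```python
# Empirical AUC with the usual half-credit for tied scores.

def auc(case_scores, control_scores):
    wins = ties = 0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1
            elif c == k:
                ties += 1
    return (wins + 0.5 * ties) / (len(case_scores) * len(control_scores))

cases    = [0.9, 0.8, 0.7, 0.6]  # predicted risk, cancer found
controls = [0.7, 0.5, 0.4, 0.2]  # predicted risk, no cancer found

print(auc(cases, controls))  # 0.90625
```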
Comparison of salivary collection and processing methods for quantitative HHV-8 detection.
Speicher, D J; Johnson, N W
2014-10-01
Saliva is a proven diagnostic fluid for the qualitative detection of infectious agents, but the accuracy of viral load determinations is unknown. Stabilising fluids impede nucleic acid degradation compared with collection onto ice and then freezing, and we have shown that the DNA Genotek P-021 prototype kit (P-021) can produce high-quality DNA after 14 months of storage at room temperature. Here we evaluate the quantitative capability of 10 collection/processing methods. Unstimulated whole mouth fluid was spiked with a mixture of HHV-8 cloned constructs, 10-fold serial dilutions were produced, and samples were extracted and then examined with quantitative PCR (qPCR). Calibration curves were compared by linear regression and qPCR dynamics. All methods extracted with commercial spin columns produced linear calibration curves with large dynamic range and gave accurate viral loads. Ethanol precipitation of the P-021 does not produce a linear standard curve, and virus is lost in the cell pellet. DNA extractions from the P-021 using commercial spin columns produced linear standard curves with wide dynamic range and excellent limit of detection. When extracted with spin columns, the P-021 enables accurate viral loads down to 23 copies μl(-1) DNA. The quantitative and long-term storage capability of this system makes it ideal for study of salivary DNA viruses in resource-poor settings. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
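A qPCR standard curve from a 10-fold serial dilution, as used above, is a regression of threshold cycle (Ct) on log10 input copies; its slope also gives amplification efficiency. The Ct values and the regression helper below are invented for illustration, not the study's data:

```python
# Standard curve: Ct = slope*log10(copies) + intercept. A slope near -3.32
# corresponds to ~100% efficiency (perfect doubling each cycle).

def fit_line(xs, ys):
    """Ordinary least-squares fit y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return slope, my - slope * mx

log_copies = [6, 5, 4, 3, 2]                 # log10 input copies per reaction
ct         = [18.1, 21.4, 24.8, 28.2, 31.5]  # threshold cycles

slope, intercept = fit_line(log_copies, ct)
efficiency = 10 ** (-1 / slope) - 1          # ~1.0 means perfect doubling

def copies_from_ct(c):
    """Read an unknown sample's copy number off the standard curve."""
    return 10 ** ((c - intercept) / slope)

print(round(slope, 2), round(efficiency, 2))
```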
Effect of a Hypocretin/Orexin Antagonist on Neurocognitive Performance
2013-09-01
Calibration curves were constructed using Chromeleon 6.8.0 software (Dionex Corp.). Amino acids, glutamate and GABA were assayed using HPLC-EC. The ... mobile phase consisted of 100 mM Na2HPO4, 22% MEOH, and 3.5% acetonitrile, pH 6.75, and was set to a flow rate of 0.4 mL/min. The amino acids were detected ... the basal forebrain are affected by ALM and ZOL, neurons that express choline acetyltransferase (ChAT), a marker for ACh, were scored for Fos co
Numerical simulations of flow fields through conventionally controlled wind turbines & wind farms
NASA Astrophysics Data System (ADS)
Emre Yilmaz, Ali; Meyers, Johan
2014-06-01
In the current study, an Actuator-Line Model (ALM) is implemented in our in-house pseudo-spectral LES solver SP-WIND, including a turbine controller. Below rated wind speed, turbines are controlled by a standard torque controller aiming at maximum power extraction from the wind. Above rated wind speed, the extracted power is limited by a blade pitch controller based on a proportional-integral type control algorithm. This model is used to perform a series of single-turbine and wind-farm simulations using the NREL 5MW turbine. First, we focus on below-rated wind speed and investigate the effect of the farm layout on the controller calibration curves. These calibration curves are expressed in terms of nondimensional torque and rotational speed, using the mean turbine-disk velocity as reference. We show that this normalization leads to calibration curves that are independent of wind speed, but the calibration curves do depend on the farm layout, in particular for tightly spaced farms. Compared to turbines in a stand-alone set-up, turbines in a farm experience a different wind distribution over the rotor due to the farm boundary-layer interaction. We demonstrate this for fully developed wind-farm boundary layers with aligned turbine arrangements at different spacings (5D, 7D, 9D). Furthermore, we compare calibration curves obtained from full farm simulations with calibration curves that can be obtained at a much lower cost using a minimal flow unit.
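The "standard torque controller" named above follows the usual below-rated law: generator torque proportional to the square of rotor speed, which drives the rotor toward its optimal tip-speed ratio. A hedged sketch; both the gain and the rated-torque cap are illustrative values, not the NREL 5MW controller's actual settings:

```python
# Below-rated control law tau = K * omega^2, with a simple saturation at an
# assumed rated generator torque.

K_GAIN = 2.33           # N*m*s^2/rad^2, assumed optimal-tracking gain
RATED_TORQUE = 43094.0  # N*m, assumed rated generator torque

def generator_torque(omega):
    """Generator torque demand (N*m) for rotor speed omega (rad/s)."""
    return min(K_GAIN * omega ** 2, RATED_TORQUE)

print(round(generator_torque(10.0), 2))  # 233.0 N*m at 10 rad/s
```

Nondimensionalizing this torque and the rotor speed by the mean turbine-disk velocity, as the abstract describes, is what collapses the curves across wind speeds.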
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reimer, P J; Baillie, M L; Bard, E
2005-10-02
Radiocarbon calibration curves are essential for converting radiocarbon dated chronologies to the calendar timescale. Prior to the 1980s, numerous differently derived calibration curves based on radiocarbon ages of known age material were in use, resulting in "apples and oranges" comparisons between various records (Klein et al., 1982), further complicated by until then unappreciated inter-laboratory variations (International Study Group, 1982). The solution was to produce an internationally-agreed calibration curve based on carefully screened data with updates at 4-6 year intervals (Klein et al., 1982; Stuiver and Reimer, 1986; Stuiver and Reimer, 1993; Stuiver et al., 1998). The IntCal working group has continued this tradition with the active participation of researchers who produced the records that were considered for incorporation into the current, internationally-ratified calibration curves, IntCal04, SHCal04, and Marine04, for Northern Hemisphere terrestrial, Southern Hemisphere terrestrial, and marine samples, respectively (Reimer et al., 2004; Hughen et al., 2004; McCormac et al., 2004). Fairbanks et al. (2005), accompanied by a more technical paper, Chiu et al. (2005), and an introductory comment, Adkins (2005), recently published a "calibration curve spanning 0-50,000 years". Fairbanks et al. (2005) and Chiu et al. (2005) have made a significant contribution to the database on which the IntCal04 and Marine04 calibration curves are based. These authors have now taken the further step to derive their own radiocarbon calibration extending to 50,000 cal BP, which they claim is superior to that generated by the IntCal working group. In their papers, these authors are strongly critical of the IntCal calibration efforts for what they claim to be inadequate screening and sample pretreatment methods.
While these criticisms may ultimately be helpful in identifying a better set of protocols, we feel that there are also several erroneous and misleading statements made by these authors which require a response by the IntCal working group. Furthermore, we would like to comment on the sample selection criteria, pretreatment methods, and statistical methods utilized by Fairbanks et al. in derivation of their own radiocarbon calibration.
Hao, Z Q; Li, C M; Shen, M; Yang, X Y; Li, K H; Guo, L B; Li, X Y; Lu, Y F; Zeng, X Y
2015-03-23
Laser-induced breakdown spectroscopy (LIBS) with partial least squares regression (PLSR) has been applied to measuring the acidity of iron ore, which can be defined by the concentrations of oxides: CaO, MgO, Al₂O₃, and SiO₂. With the conventional internal standard calibration, it is difficult to establish the calibration curves of CaO, MgO, Al₂O₃, and SiO₂ in iron ore due to the serious matrix effects. PLSR is effective to address this problem due to its excellent performance in compensating the matrix effects. In this work, fifty samples were used to construct the PLSR calibration models for the above-mentioned oxides. These calibration models were validated by the 10-fold cross-validation method with the minimum root-mean-square errors (RMSE). Another ten samples were used as a test set. The acidities were calculated according to the estimated concentrations of CaO, MgO, Al₂O₃, and SiO₂ using the PLSR models. The average relative error (ARE) and RMSE of the acidity achieved 3.65% and 0.0048, respectively, for the test samples.
Desroches, Joannie; Bouchard, Hugo; Lacroix, Frédéric
2010-04-01
The purpose of this study is to determine the effect on the measured optical density of scanning on either side of a Gafchromic EBT and EBT2 film using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flat bed scanner. Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner. Calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Scanning the film face up or face down on the scanner bed while keeping the film angular orientation constant affects the measured optical density when scanning in reflection mode. In contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate that inverting the film scanning side produces potential errors of 17.8% in the gamma index for 3%-3 mm criteria on a head and neck intensity-modulated radiotherapy plan, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry. When scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.
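The gamma index used above combines a dose-difference criterion with a distance-to-agreement criterion. A minimal 1-D sketch of the generic formula (3%/3 mm style, local dose normalization); this is not the authors' software, and the dose profiles are invented:

```python
# 1-D gamma index: for each reference point, the minimum combined dose/distance
# deviation over the evaluated profile; a point passes when gamma <= 1.

import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dta=3.0, dd=0.03):
    """Gamma value at each reference point (dta in mm, dd as a fraction)."""
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        best = min(
            math.sqrt(((ep - rp) / dta) ** 2 + ((ed - rd) / (dd * rd)) ** 2)
            for ep, ed in zip(eval_pos, eval_dose)
        )
        gammas.append(best)
    return gammas

pos = [0.0, 1.0, 2.0, 3.0]       # mm
ref = [2.00, 2.10, 2.20, 2.10]   # Gy, reference profile
ev  = [2.02, 2.12, 2.18, 2.12]   # Gy, evaluated profile

g = gamma_1d(pos, ref, pos, ev)
passing = sum(1 for x in g if x <= 1.0) / len(g)
print(passing)  # 1.0
```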
Optical Rotation Curves and Linewidths for Tully-Fisher Applications
NASA Astrophysics Data System (ADS)
Courteau, Stephane
1997-12-01
We present optical long-slit rotation curves for 304 northern Sb-Sc UGC galaxies from a sample designed for Tully-Fisher (TF) applications. Matching r-band photometry exists for each galaxy. We describe the procedures of rotation curve (RC) extraction and the construction of optical profiles analogous to 21 cm integrated linewidths. More than 20% of the galaxies were observed twice or more, allowing for a proper determination of systematic errors. Various measures of maximum rotational velocity to be used as input in the TF relation are tested on the basis of their repeatability, minimization of TF scatter, and match with 21 cm linewidths. The best measure of TF velocity, V2.2, is measured at the location of the peak rotational velocity of a pure exponential disk. An alternative measure to V2.2, which makes no assumption about the luminosity profile or shape of the rotation curve, is Vhist, the 20% width of the velocity histogram, though its match with 21 cm linewidths is not as good. We show that optical TF calibrations yield internal scatter comparable to, if not smaller than, the best calibrations based on single-dish 21 cm radio linewidths. Even though resolved H I RCs are more extended than their optical counterparts, a tight match between optical and radio linewidths exists since the bulk of the H I surface density is enclosed within the optical radius. We model the 304 RCs presented here plus a sample of 958 curves from Mathewson et al. (1992, ApJS, 81, 413) with various fitting functions. An arctan function provides an adequate simple fit (not accounting for non-circular motions and spiral arms). More elaborate, empirical models may yield a better match at the expense of strong covariances. We caution against physical or "universal" parametrizations for TF applications.
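The arctan fitting function mentioned above is commonly written V(r) = V0 + (2/π)·Vc·arctan(r/rt), rising steeply through a turnover radius rt and flattening toward Vc. A sketch with illustrative parameter values (an actual fit would adjust V0, Vc, and rt by least squares against the observed curve):

```python
# Evaluate the arctan rotation-curve model at a few radii.

import math

def arctan_rc(r, v0=0.0, vc=200.0, rt=2.0):
    """Model rotation velocity (km/s) at radius r (kpc), assumed parameters."""
    return v0 + (2.0 / math.pi) * vc * math.atan(r / rt)

for r in (1.0, 5.0, 20.0):
    print(round(arctan_rc(r), 1))
```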
Sezer, Banu; Velioglu, Hasan Murat; Bilge, Gonca; Berkkan, Aysel; Ozdinc, Nese; Tamer, Ugur; Boyaci, Ismail Hakkı
2018-01-01
The use of Li salts in foods has been prohibited due to their negative effects on the central nervous system; however, they might still be used, especially in meat products, as Na substitutes. Lithium can be toxic and even lethal at higher concentrations, and it is not approved in foods. The present study focuses on Li analysis in meatballs by using laser induced breakdown spectroscopy (LIBS). Meatball samples were analyzed using LIBS and flame atomic absorption spectroscopy. Calibration curves were obtained by utilizing Li emission lines at 610 nm and 670 nm for univariate calibration. The results showed that the Li calibration curve at 670 nm provided successful determination of Li, with an R² of 0.965 and a limit of detection (LOD) of 4.64 ppm. The calibration curve obtained using the emission line at 610 nm gave an R² of 0.991 and an LOD of 22.6 ppm, while the curve obtained at 670 nm below 1300 ppm gave an R² of 0.965 and an LOD of 4.64 ppm. Copyright © 2017. Published by Elsevier Ltd.
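The figures of merit quoted above, R² of the calibration line and LOD, can be sketched from their standard definitions. We assume the common LOD = 3·σ(blank)/slope rule here; the paper may define LOD differently, and all numbers below are invented:

```python
# R-squared of a univariate calibration line and a 3-sigma detection limit.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return slope, my - slope * mx

def r_squared(xs, ys, slope, intercept):
    my = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

conc   = [0, 250, 500, 750, 1000]           # ppm Li
signal = [5.0, 130.0, 262.0, 380.0, 505.0]  # emission intensity (a.u.)

slope, intercept = fit_line(conc, signal)
r2 = r_squared(conc, signal, slope, intercept)
sigma_blank = 2.1                 # std. dev. of blank signal (assumed)
lod = 3 * sigma_blank / slope     # ppm

print(round(r2, 4), round(lod, 1))
```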
Pilkonis, Paul A.; Choi, Seung W.; Reise, Steven P.; Stover, Angela M.; Riley, William T.; Cella, David
2011-01-01
The authors report on the development and calibration of item banks for depression, anxiety, and anger as part of the Patient-Reported Outcomes Measurement Information System (PROMIS®). Comprehensive literature searches yielded an initial bank of 1,404 items from 305 instruments. After qualitative item analysis (including focus groups and cognitive interviewing), 168 items (56 for each construct) were written in a first person, past tense format with a 7-day time frame and five response options reflecting frequency. The calibration sample included nearly 15,000 respondents. Final banks of 28, 29, and 29 items were calibrated for depression, anxiety, and anger, respectively, using item response theory. Test information curves showed that the PROMIS item banks provided more information than conventional measures in a range of severity from approximately −1 to +3 standard deviations (with higher scores indicating greater distress). Short forms consisting of seven to eight items provided information comparable to legacy measures containing more items. PMID:21697139
Schühle, U; Curdt, W; Hollandt, J; Feldman, U; Lemaire, P; Wilhelm, K
2000-01-20
The Solar Ultraviolet Measurement of Emitted Radiation (SUMER) vacuum-ultraviolet spectrograph was calibrated in the laboratory before the integration of the instrument on the Solar and Heliospheric Observatory (SOHO) spacecraft in 1995. During the scientific operation of the SOHO it has been possible to track the radiometric calibration of the SUMER spectrograph since March 1996 by a strategy that employs various methods to update the calibration status and improve the coverage of the spectral calibration curve. The results for the A Detector were published previously [Appl. Opt. 36, 6416 (1997)]. During three years of operation in space, the B detector was used for two and one-half years. We describe the characteristics of the B detector and present results of the tracking and refinement of the spectral calibration curves with it. Observations of the spectra of the stars alpha and rho Leonis permit an extrapolation of the calibration curves in the range from 125 to 149.0 nm. Using a solar coronal spectrum observed above the solar disk, we can extrapolate the calibration curves by measuring emission line pairs with well-known intensity ratios. The sensitivity ratio of the two photocathode areas can be obtained by registration of many emission lines in the entire spectral range on both KBr-coated and bare parts of the detector's active surface. The results are found to be consistent with the published calibration performed in the laboratory in the wavelength range from 53 to 124 nm. We can extrapolate the calibration outside this range to 147 nm with a relative uncertainty of ?30% (1varsigma) for wavelengths longer than 125 nm and to 46.5 nm with 50% uncertainty for the short-wavelength range below 53 nm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, J; Li, X; Liu, G
Purpose: We compare and investigate the dosimetric impacts on pencil beam scanning (PBS) proton treatment plans generated with CT calibration curves from four different CT scanners and one averaged ‘global’ CT calibration curve. Methods: The four CT scanners are located at three different hospital locations within the same health system. CT density calibration curves were collected from these scanners using the same CT calibration phantom and acquisition parameters. Mass density to HU value tables were then commissioned in a commercial treatment planning system. Five disease sites were chosen for dosimetric comparisons: brain, lung, head and neck, adrenal, and prostate. Three types of PBS plans were generated at each treatment site using SFUD, IMPT, and robustness optimized IMPT techniques. 3D dose differences were investigated using 3D Gamma analysis. Results: The CT calibration curves for all four scanners display very similar shapes. Large HU differences were observed at both the high HU and low HU regions of the curves. Large dose differences were generally observed at the distal edges of the beams, and they are beam angle dependent. Of the five treatment sites, lung plans exhibit the greatest overall range uncertainty and prostate plans have the greatest dose discrepancy. There are no significant differences between the SFUD, IMPT, and RO-IMPT methods. 3D gamma analysis with 3%, 3 mm criteria showed all plans with greater than 95% passing rate. Two of the scanners with close HU values have negligible dose differences except for lung. Conclusion: Our study shows that there are more than 5% dosimetric differences between different CT calibration curves. PBS treatment plans generated with SFUD, IMPT, and robustness optimized IMPT have similar sensitivity to the CT density uncertainty. More patient data and tighter gamma criteria based on structure location and size will be used for further investigation.
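At planning time, a commissioned HU-to-density (or HU-to-RSP) table like those above is typically applied by piecewise-linear interpolation between calibration points. A sketch; the table values are illustrative, not any scanner's commissioned curve:

```python
# Piecewise-linear lookup from Hounsfield Units to relative stopping power,
# clamped at the table ends. HU breakpoints must be sorted ascending.

import bisect

HU  = [-1000, -100, 0, 100, 1000, 3000]      # calibration breakpoints
RSP = [0.001, 0.93, 1.00, 1.07, 1.55, 2.40]  # assumed RSP at each breakpoint

def hu_to_rsp(hu):
    if hu <= HU[0]:
        return RSP[0]
    if hu >= HU[-1]:
        return RSP[-1]
    i = bisect.bisect_right(HU, hu)
    frac = (hu - HU[i - 1]) / (HU[i] - HU[i - 1])
    return RSP[i - 1] + frac * (RSP[i] - RSP[i - 1])

print(round(hu_to_rsp(50), 3))  # midway between the 0 HU and 100 HU points
```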
Radke, Wolfgang
2004-03-05
Simulations of the distribution coefficients of linear polymers and regular combs with various spacings between the arms have been performed. The distribution coefficients were plotted as a function of the number of segments in order to compare the size exclusion chromatography (SEC) elution behavior of combs relative to linear molecules. By comparing the simulated SEC calibration curves it is possible to predict the elution behavior of comb-shaped polymers relative to linear ones. In order to compare the results obtained by computer simulations with experimental data, a variety of comb-shaped polymers varying in side chain length, spacing between the side chains, and molecular weight of the backbone were analyzed by SEC with light-scattering detection. It was found that the computer simulations could predict the molecular weights of linear molecules having the same retention volume with an accuracy of about 10%; i.e., the error made by calculating the molecular weight of a comb polymer from a calibration curve constructed using linear standards and the simulation results is of the same magnitude as the experimental error of absolute molecular weight determination.
Use of armored RNA as a standard to construct a calibration curve for real-time RT-PCR.
Donia, D; Divizia, M; Pana', A
2005-06-01
Armored Enterovirus RNA was used to standardize a real-time reverse transcription (RT)-PCR for environmental testing. Armored technology is a system for producing a robust and stable RNA standard, trapped in phage proteins, to be used as an internal control. The Armored Enterovirus RNA protected sequence includes 263 bp of highly conserved sequences in the 5' UTR region. During these tests, Armored RNA was used to produce a calibration curve, comparing three different fluorogenic chemistries: the TaqMan system, SYBR Green I, and LUX primers. Three commercial amplification reagent kits used to carry out real-time RT-PCR, as well as several extraction procedures for the protected viral RNA, were also evaluated. The highest Armored RNA recovery was obtained by heat treatment, while chemical extraction may decrease the quantity of RNA. The best sensitivity and specificity were obtained using the SYBR Green I technique, which is reproducible, easy to use, and the cheapest. TaqMan and LUX-primer assays provide good RT-PCR efficiency with the several extraction methods used, but the labelled probe or primer required by these chemistries increases the cost of testing.
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.; Lock, James A.
1993-01-01
Scattering calculations using a detailed model of the multimode laser beam in the forward-scattering spectrometer probe (FSSP) were carried out using a recently developed extension to Mie scattering theory. From this model, new calibration curves for the FSSP were calculated. The difference between the old calibration curves and the new ones is small for droplet diameters less than 10 microns, but the difference increases to approximately 10 percent at diameters of 50 microns. When using glass beads to calibrate the FSSP, calibration errors can be minimized by using glass beads of many different diameters, over the entire range of the FSSP. If the FSSP is calibrated using one-diameter glass beads, then the new formalism is necessary to extrapolate the calibration over the entire range.
GIADA extended calibration activity: the Electrostatic Micromanipulator
NASA Astrophysics Data System (ADS)
Sordini, R.; Accolla, M.; Della Corte, V.; Rotundi, A.
GIADA (Grain Impact Analyser and Dust Accumulator), one of the scientific instruments on board the ESA Rosetta mission, is devoted to studying the dynamical properties of dust particles ejected by the short-period comet 67P/Churyumov-Gerasimenko. In preparation for the scientific phase of the mission, we are performing laboratory calibration activities on the GIADA Proto Flight Model (PFM), housed in a clean room in our laboratory. The aim of the calibration activity is to characterize the response curves of the GIADA measurement sub-systems. These curves are then correlated with the calibration curves obtained for the GIADA payload on board the Rosetta spacecraft. The calibration activity involves two of the three sub-systems constituting GIADA: the Grain Detection System (GDS) and the Impact Sensor (IS). To obtain reliable calibration curves, a statistically significant number of grains has to be dropped or shot into the GIADA instrument. Particle composition, structure, size, optical properties and porosity were selected to obtain realistic cometary dust analogues. For each selected type of grain, we estimated that at least one hundred shots are needed to obtain a calibration curve. In order to manipulate such a large number of particles, we have designed and developed an innovative electrostatic system able to capture, manipulate and shoot particles with sizes in the range 20-500 μm. The Electrostatic Micromanipulator (EM) is installed on a manual handling system composed of X-Y-Z micrometric slides with a 360° rotational stage about Z, mounted on an optical bench. In the present work, we present tests of the EM using ten different materials with dimensions in the range 50-500 μm; the experimental results comply with the requirements.
Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L
2010-08-05
Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film), taken at 95 cm SSD and 5 cm depth, was used as a reference for each experiment. The delivered doses were measured using an Attix parallel-plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth; it decreases with increasing depth below 0.5 cm for the three field sizes and increases with increasing depth above 0.5 cm. A calibration curve with one to three dose points fitted with the model can achieve 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing an energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
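The single-target single-hit response model named above can be sketched numerically: net optical density rises with dose toward a saturation level, governed by three parameters (background, saturation, slope). The functional form below is the standard saturating exponential commonly associated with that model, and the dose points and parameter values are illustrative assumptions, not the authors' measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def film_response(dose, background, saturation, slope):
    """Single-target single-hit form: OD saturates exponentially with dose."""
    return background + saturation * (1.0 - np.exp(-slope * dose))

# synthetic calibration points spanning the 16-128 cGy range from the abstract
dose = np.array([16.0, 32.0, 48.0, 64.0, 96.0, 128.0])
true = (0.2, 2.5, 0.01)            # assumed background, saturation, slope
od = film_response(dose, *true)    # noiseless synthetic optical densities

# recover the three model parameters from the calibration points
popt, _ = curve_fit(film_response, dose, od, p0=(0.1, 1.0, 0.02))
```

On noiseless data the fit recovers the generating parameters, which is the sanity check one would run before fitting real scanned-film densities.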
Effects of experimental design on calibration curve precision in routine analysis
Pimentel, Maria Fernanda; Neto, Benício de Barros; Saldanha, Teresa Cristina B.
1998-01-01
A computational program is described that compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs). The program produces confidence-interval plots for a calibration curve and provides information about the number of standard solutions, the concentration levels and suitable concentration ranges to achieve an optimum calibration. Examples of the application of this computational program are given, using both simulated and real data. PMID:18924816
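For a straight-line calibration, design efficiency can be compared through the determinant of the information matrix XᵀX, which D-optimal designs maximize; this is a minimal numpy sketch of that comparison, not the cited program. The six-standard layouts are illustrative assumptions.

```python
import numpy as np

def d_criterion(levels):
    """Determinant of the information matrix X'X for a straight-line model."""
    X = np.column_stack([np.ones_like(levels), levels])
    return np.linalg.det(X.T @ X)

# six standards on a normalized 0..1 concentration range
equal = np.linspace(0.0, 1.0, 6)            # equally spaced levels
extreme = np.array([0.0] * 3 + [1.0] * 3)   # D-optimal for a straight line
```

Placing replicates at the two extremes maximizes det(XᵀX) for a first-order model, which is why equally spaced designs are less precise per standard; in practice extra interior levels are kept to check linearity.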
Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E
2014-09-16
A simple procedure for selecting the correct weighting factor for linear and quadratic calibration curves with the least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of the instrument response (σ) and the concentration (x): the weighting factor 1, 1/x, or 1/x² should be selected if, over the entire concentration range, σ is constant, σ² is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x² should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. The most stable curve was obtained when the correct weighting factor was used, whereas curves using incorrect weighting factors were unstable. The impact on the reported concentrations was very insignificant, because concentrations were always reported with passing curves, which overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact assay performance significantly. Finally, the difference between the weighting factors 1/x² and 1/y² is discussed. All of these findings can be generalized and applied to other quantitative analysis techniques that use calibration curves with a weighted least-squares regression algorithm.
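The weighting rule above can be sketched with an explicit weighted least-squares fit. Note that `numpy.polyfit`'s `w` argument weights residuals rather than squared residuals, so the weighted normal equations are written out directly here; the example data are invented for illustration.

```python
import numpy as np

def weighted_linear_fit(x, y, weighting="1/x^2"):
    """Weighted least squares for y = a + b*x.

    Weight choice per the rule in the abstract: w = 1 if sigma is constant,
    w = 1/x if sigma^2 is proportional to x, and w = 1/x^2 if sigma is
    proportional to x (the case argued for in bioanalytical LC-MS/MS).
    """
    w = {"1": np.ones_like(x), "1/x": 1.0 / x, "1/x^2": 1.0 / x**2}[weighting]
    X = np.column_stack([np.ones_like(x), x])
    XtW = X.T * w  # scales column j of X.T by w[j], i.e. X^T W
    # solve (X^T W X) beta = X^T W y  ->  beta = (intercept, slope)
    return np.linalg.solve(XtW @ X, XtW @ y)

# illustrative calibration standards over a wide dynamic range
x = np.array([1.0, 2.0, 5.0, 10.0, 50.0, 100.0])
y = 2.0 + 3.0 * x
intercept, slope = weighted_linear_fit(x, y)
```

On exact data every weighting recovers the same line; with real heteroscedastic noise, 1/x² keeps the low-concentration standards from being swamped by the high ones.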
INFLUENCE OF IRON CHELATION ON R1 AND R2 CALIBRATION CURVES IN GERBIL LIVER AND HEART
Wood, John C.; Aguilar, Michelle; Otto-Duessel, Maya; Nick, Hanspeter; Nelson, Marvin D.; Moats, Rex
2008-01-01
MRI is gaining increasing importance for the noninvasive quantification of organ iron burden. Since transverse relaxation rates depend on iron distribution as well as iron concentration, physiologic and pharmacologic processes that alter iron distribution could change MRI calibration curves. This paper compares the effects of three iron chelators, deferoxamine, deferiprone, and deferasirox, on R1 and R2 calibration curves under two iron loading and chelation strategies. Thirty-three Mongolian gerbils underwent iron loading (iron dextran 500 mg/kg/wk) for 3 weeks followed by 4 weeks of chelation. An additional 56 animals received less aggressive loading (200 mg/kg/wk) for 10 weeks, followed by 12 weeks of chelation. R1 and R2 calibration curves were compared with results from 23 iron-loaded animals that had not received chelation. Acute iron loading and chelation biased R1 and R2 from the unchelated reference calibration curves, but chelator-specific changes were not observed, suggesting physiologic rather than pharmacologic differences in iron distribution. Long-term deferiprone treatment increased liver R1 by 50% (p<0.01), while long-term deferasirox lowered liver R2 by 30.9% (p<0.0001). The relationship between R1 and R2 and organ iron concentration may depend on the acuity of iron loading and unloading as well as on the iron chelator administered. PMID:18581418
Quantification of calcium using localized normalization on laser-induced breakdown spectroscopy data
NASA Astrophysics Data System (ADS)
Sabri, Nursalwanie Mohd; Haider, Zuhaib; Tufail, Kashif; Aziz, Safwan; Ali, Jalil; Wahab, Zaidan Abdul; Abbas, Zulkifly
2017-03-01
This paper focuses on localized normalization for improved calibration curves in laser-induced breakdown spectroscopy (LIBS) measurements. The calibration curves were obtained using five samples containing different concentrations of calcium (Ca) in a potassium bromide (KBr) matrix. The work utilized a Q-switched Nd:YAG laser in a LIBS2500plus system, operated at the fundamental wavelength with a laser energy of 650 mJ. The gate delay was optimized using the signal-to-background ratio (SBR) of the Ca II 315.9 and 317.9 nm lines, seeking conditions with both high spectral intensity and high SBR. The ionic Ca emission lines were most intense at a gate delay of 0.83 µs, while the SBR-optimized gate delay was 5.42 µs for both Ca II spectral lines. Calibration curves were constructed in three ways: from the original LIBS intensities, from normalized intensities, and from locally normalized intensities of the spectral lines. The R² values of the calibration curves plotted using locally normalized intensities of the Ca I 610.3, 612.2 and 616.2 nm spectral lines are 0.96329, 0.97042, and 0.96131, respectively. The improvement in the regression coefficients of the calibration curves allows more accurate LIBS analysis. At the request of all authors of the paper, and with the agreement of the Proceedings Editor, an updated version of this article was published on 24 May 2017.
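The idea of localized normalization, dividing a line intensity by the continuum measured just around that line rather than by a global reference, can be sketched as follows. The window widths and the synthetic spectrum are assumptions for illustration, not values from the paper.

```python
import numpy as np

def local_normalize(wavelength, intensity, line_nm, window_nm=1.0, bg_nm=3.0):
    """Normalize a spectral line by the mean continuum just outside the line window.

    window_nm bounds the line region; bg_nm bounds the local background annulus.
    Both widths are illustrative assumptions.
    """
    dist = np.abs(wavelength - line_nm)
    in_line = dist <= window_nm
    in_bg = (dist > window_nm) & (dist <= bg_nm)
    return intensity[in_line].max() / intensity[in_bg].mean()

# synthetic spectrum: flat continuum of 10 with a single peak of 100 at Ca I 610.3 nm
wl = np.linspace(605.0, 615.0, 1001)
spec = np.full_like(wl, 10.0)
spec[np.argmin(np.abs(wl - 610.3))] = 100.0
ratio = local_normalize(wl, spec, 610.3)
```

Because the background is sampled locally, shot-to-shot changes in overall plasma brightness cancel out of the calibration metric.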
Calibration and accuracy analysis of a focused plenoptic camera
NASA Astrophysics Data System (ADS)
Zeller, N.; Quint, F.; Stilla, U.
2014-08-01
In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression for the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve-fitting approach based on a Taylor-series approximation. Both model-based methods show significant advantages over the curve-fitting method: they need fewer reference points for calibration and, moreover, supply a function that remains valid beyond the calibration range. In addition, the depth-map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.
Chirila, Madalina M; Sarkisian, Khachatur; Andrew, Michael E; Kwon, Cheol-Woong; Rando, Roy J; Harper, Martin
2015-04-01
The current measurement method for occupational exposure to wood dust is gravimetric analysis and is thus non-specific. In this work, diffuse reflection infrared Fourier transform spectroscopy (DRIFTS) for the analysis of only the wood component of dust was further evaluated by analysis of the same samples in two laboratories. Field samples were collected from six wood product factories using 25-mm glass fiber filters with the Button aerosol sampler. Gravimetric mass was determined in one laboratory by weighing the filters before and after aerosol collection. Diffuse reflection mid-infrared spectra were obtained from the wood dust on the filter, which was placed on a motorized stage inside the spectrometer. The metric used for the DRIFTS analysis was the intensity of the carbonyl band in cellulose and hemicellulose at ~1735 cm⁻¹. Calibration curves were constructed separately in both laboratories using the same sets of filters prepared from the inhalable sampling fraction of red oak, southern yellow pine, and western red cedar in the range of 0.125-4 mg of wood dust. Using the same procedure in both laboratories to build the calibration curve and analyze the field samples, 62.3% of the samples measured within 25% of the average result, with a mean difference between the laboratories of 18.5%. Some observations are included as to how the calibration and analysis can be improved. In particular, determining the wood type on each sample, to allow matching to the most appropriate calibration, increases the apparent proportion of wood dust in the sample and likely provides more realistic DRIFTS results. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2014.
In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignore...
Increasing the sensitivity of the Jaffe reaction for creatinine
NASA Technical Reports Server (NTRS)
Tom, H. Y.
1973-01-01
A study of the analytical procedure revealed that the linearity of the creatinine calibration curve can be extended by using a 0.03 molar picric acid solution made up in 70 percent ethanol instead of water. Three to five times more creatinine concentration can then be encompassed within the linear portion of the calibration curve.
Carbon-14 wiggle-match dating of peat deposits: advantages and limitations
NASA Astrophysics Data System (ADS)
Blaauw, Maarten; van Geel, Bas; Mauquoy, Dmitri; van der Plicht, Johannes
2004-02-01
Carbon-14 wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a series of closely spaced peat 14C dates to the 14C calibration curve. The method of WMD is discussed, and its advantages and limitations are compared with calibration of individual dates. A numerical approach to WMD is introduced that makes it possible to assess the precision of WMD chronologies. During several intervals of the Holocene, the 14C calibration curve shows less pronounced fluctuations, and we assess whether wiggle-matching is also a feasible strategy for these parts of the calibration curve. High-precision chronologies, such as those obtainable with WMD, are needed for studies of rapid climate changes and their possible causes during the Holocene.
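The core of wiggle-matching, sliding a stratigraphically ordered series of 14C dates along the calendar axis until its shape best fits the calibration curve, can be sketched as a grid search. The synthetic wiggly curve, the fixed accumulation rate, and the unweighted least-squares score are illustrative assumptions; real WMD uses a measured calibration curve (e.g. IntCal) and proper error weighting.

```python
import numpy as np

def wiggle_match(depths, c14_ages, cal_age_grid, cal_c14_grid, acc_rate, shifts):
    """Grid-search the calendar-age offset of a dated sequence.

    acc_rate (yr per depth unit) is assumed known and constant; each trial
    shift is scored by the summed squared mismatch with the calibration curve.
    """
    best_shift, best_score = None, np.inf
    for s in shifts:
        sample_cal_ages = s + acc_rate * depths
        curve = np.interp(sample_cal_ages, cal_age_grid, cal_c14_grid)
        score = float(np.sum((c14_ages - curve) ** 2))
        if score < best_score:
            best_shift, best_score = s, score
    return best_shift, best_score

# synthetic wiggly calibration curve and a sequence 'measured' from it
cal_age = np.arange(0.0, 3001.0)
cal_c14 = cal_age + 50.0 * np.sin(cal_age / 100.0)
depths = np.arange(11.0)              # 11 contiguous peat samples
true_shift, rate = 1000.0, 20.0       # assumed calendar offset and accumulation
obs = np.interp(true_shift + rate * depths, cal_age, cal_c14)
shift, score = wiggle_match(depths, obs, cal_age, cal_c14, rate,
                            np.arange(0.0, 2801.0, 10.0))
```

Because the whole sequence is matched at once, the fit exploits the wiggles that make single-date calibration ambiguous, which is exactly why WMD loses power where the curve is smooth.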
Spectral characterization and calibration of AOTF spectrometers and hyper-spectral imaging system
NASA Astrophysics Data System (ADS)
Katrašnik, Jaka; Pernuš, Franjo; Likar, Boštjan
2010-02-01
The goal of this article is to present a novel method for the spectral characterization and calibration of spectrometers and hyper-spectral imaging systems based on non-collinear acousto-optic tunable filters. The method characterizes the spectral tuning curve (frequency-wavelength characteristic) of the AOTF (Acousto-Optic Tunable Filter) by matching the acquired and modeled spectra of an HgAr calibration lamp, which emits a line spectrum that can be well modeled via the AOTF transfer function. In this way, not only tuning-curve characterization and the corresponding spectral calibration but also spectral-resolution assessment is performed. The results indicate that the proposed method is efficient, accurate and feasible for the routine calibration of AOTF spectrometers and hyper-spectral imaging systems, and is thereby a highly competitive alternative to existing calibration methods.
Assessment of opacimeter calibration according to International Standard Organization 10155.
Gomes, J F
2001-01-01
This paper compares the calibration method for opacimeters issued in International Organization for Standardization (ISO) standard 10155 with the manual reference method for determination of dust content in stack gases. ISO 10155 requires at least nine operational measurements, three for each dust emission range within the stack. The procedure is assessed by comparison with previous opacimeter calibration methods using only two operational measurements, based on a set of measurements made at pulp mill stacks. The results show that although the international standard requires the calibration curve to be obtained using 3 x 3 points, a calibration curve derived using 3 points can at times be statistically acceptable, provided that the amplitude of the individual measurements is low.
40 CFR 90.321 - NDIR analyzer calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... the form of the following equation (1) or (2). Include zero as a data point. Compensation for known...
Laser projection positioning of spatial contour curves via a galvanometric scanner
NASA Astrophysics Data System (ADS)
Tu, Junchao; Zhang, Liyan
2018-04-01
The technology of laser projection positioning is widely applied in advanced manufacturing (e.g. composite plying, part location and installation). To exploit it better, a laser projection positioning (LPP) system is designed and implemented. First, the LPP system is built from a laser galvanometric scanning (LGS) system and a binocular vision system, and the system model is constructed using a single-hidden-layer feed-forward neural network (SLFN). Second, the LGS and binocular systems, which are mutually independent, are integrated through a data-driven calibration method based on the extreme learning machine (ELM) algorithm. Finally, a projection positioning method is proposed within the framework of the calibrated SLFN system model. A well-designed experiment verifies the viability and effectiveness of the proposed system, and the projection positioning accuracy is evaluated, showing that the LPP system achieves good localization.
A short static-pressure probe design for supersonic flow
NASA Technical Reports Server (NTRS)
Pinckney, S. Z.
1975-01-01
A static-pressure probe design concept was developed which has the static holes located close to the probe tip and is relatively insensitive to probe angle of attack and circumferential static hole location. Probes were constructed with 10 and 20 deg half-angle cone tips followed by a tangent conic curve section and a tangent cone section of 2, 3, or 3.5 deg, and were tested at Mach numbers of 2.5 and 4.0 and angles of attack up to 12 deg. Experimental results indicate that for stream Mach numbers of 2.5 and 4.0 and probe angles of attack within + or - 10 deg, values of stream static pressure can be determined from the probe calibration to within about + or - 4 percent. If the probe is aligned within about 7 deg of the flow, the experimental results indicate that the stream static pressure can be determined to within 2 percent from the probe calibration.
Seon, C R; Hong, J H; Jang, J; Lee, S H; Choe, W; Lee, H H; Cheon, M S; Pak, S; Lee, H G; Biel, W; Barnsley, R
2014-11-01
To optimize the design of the ITER vacuum ultraviolet (VUV) spectrometer, a prototype VUV spectrometer was developed. The sensitivity calibration curve of the spectrometer was calculated from the mirror reflectivity, the grating efficiency, and the detector efficiency, and was consistent with the calibration points derived experimentally using a calibrated hollow cathode lamp. The prototype spectrometer was then installed at KSTAR, where various impurity emission lines could be measured. By analyzing about 100 shots, a strong positive correlation between the O VI and C IV emission intensities was found.
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency-hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese yen swaps market and the US dollar yield market.
Linking Item Parameters to a Base Scale. ACT Research Report Series, 2009-2
ERIC Educational Resources Information Center
Kang, Taehoon; Petersen, Nancy S.
2009-01-01
This paper compares three methods of item calibration--concurrent calibration, separate calibration with linking, and fixed item parameter calibration--that are frequently used for linking item parameters to a base scale. Concurrent and separate calibrations were implemented using BILOG-MG. The Stocking and Lord (1983) characteristic curve method…
Molar mass characterization of sodium carboxymethyl cellulose by SEC-MALLS.
Shakun, Maryia; Maier, Helena; Heinze, Thomas; Kilz, Peter; Radke, Wolfgang
2013-06-05
Two series of sodium carboxymethyl celluloses (NaCMCs) derived from microcrystalline cellulose (Avicel samples) and cotton linters (BWL samples), with average degrees of substitution (DS) ranging from DS = 0.45 to DS = 1.55, were characterized by size exclusion chromatography with multi-angle laser light scattering detection (SEC-MALLS) in 100 mmol/L aqueous ammonium acetate (NH4OAc) as a vaporizable eluent system. The use of vaporizable NH4OAc allows future use of the eluent system in two-dimensional separations employing evaporative light scattering detection (ELSD). The losses of samples during filtration and during the chromatographic experiment were determined. The scaling exponent a_s of the relation [Formula: see text] was approx. 0.61, showing that NaCMCs adopt an expanded coil conformation in solution. No systematic dependence of a_s on DS was observed. The dependence of molar mass on SEC elution volume for samples of different DS can be well described by a common calibration curve, which is advantageous, as it allows the molar masses of unknown samples to be determined using the same calibration curve irrespective of the DS of the NaCMC sample. Since no commercial NaCMC standards are available, correction factors were determined that allow a pullulan-based calibration curve to be converted into a NaCMC calibration using the broad calibration approach. The weight-average molar masses derived using the calibration curve so established agree closely with those determined by light scattering, proving the accuracy of the correction factors. Copyright © 2013 Elsevier Ltd. All rights reserved.
Calibrating photometric redshifts of luminous red galaxies
Padmanabhan, Nikhil; Budavari, Tamas; Schlegel, David J.; ...
2005-05-01
We discuss the construction of a photometric redshift catalogue of luminous red galaxies (LRGs) from the Sloan Digital Sky Survey (SDSS), emphasizing the principal steps necessary for constructing such a catalogue: (i) photometrically selecting the sample, (ii) measuring photometric redshifts and their error distributions, and (iii) estimating the true redshift distribution. We compare two photometric redshift algorithms for these data and find that they give comparable results. Calibrating against the SDSS and SDSS-2dF (Two Degree Field) spectroscopic surveys, we find that the photometric redshift accuracy is σ ~ 0.03 for redshifts less than 0.55 and worsens at higher redshift (~0.06 for z < 0.7). These errors are caused by photometric scatter, as well as systematic errors in the templates, filter curves and photometric zero-points. We also parametrize the photometric redshift error distribution with a sum of Gaussians and use this model to deconvolve the errors from the measured photometric redshift distribution to estimate the true redshift distribution. We pay special attention to the stability of this deconvolution, regularizing the method with a prior on the smoothness of the true redshift distribution. The methods that we develop are applicable to general photometric redshift surveys.
A rapid tool for determination of titanium dioxide content in white chickpea samples.
Sezer, Banu; Bilge, Gonca; Berkkan, Aysel; Tamer, Ugur; Hakki Boyaci, Ismail
2018-02-01
Titanium dioxide (TiO2) is a widely used additive in foods; however, there is an ongoing debate in the scientific community about health concerns over TiO2. The main goal of this study is to determine TiO2 content by using laser-induced breakdown spectroscopy (LIBS). To this end, different amounts of TiO2 were added to white chickpeas and analyzed by LIBS. A univariate calibration curve was obtained by following the Ti emission at 390.11 nm, and a partial least squares (PLS) calibration was obtained by evaluating the whole spectrum. The results showed that the Ti calibration curve at 390.11 nm provides successful determination of the Ti level with an R² of 0.985 and a limit of detection (LOD) of 33.9 ppm, while PLS gave an R² of 0.989 and an LOD of 60.9 ppm. Furthermore, commercial white chickpea samples were used to validate the method; the validation R² values for the simple calibration and PLS were calculated as 0.989 and 0.951, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
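The univariate side of such a workflow, a straight-line calibration from line intensity plus detection limits from the residual standard deviation, can be sketched as below. The intensity values are invented for illustration, and the common 3.3σ/slope and 10σ/slope conventions for LOD and LOQ are assumed (conventions vary; the paper's own LOD procedure is not specified here).

```python
import numpy as np

# illustrative (concentration [ppm], line intensity) calibration points
conc = np.array([0.0, 100.0, 250.0, 500.0, 1000.0])
intensity = np.array([5.0, 215.0, 530.0, 1050.0, 2105.0])

# straight-line calibration: intensity = slope * conc + intercept
slope, intercept = np.polyfit(conc, intensity, 1)
residuals = intensity - (slope * conc + intercept)
s_res = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))  # residual std dev

lod = 3.3 * s_res / slope   # assumed 3.3*sigma/slope convention
loq = 10.0 * s_res / slope  # assumed 10*sigma/slope convention

def predict_conc(signal):
    """Invert the univariate calibration curve for an unknown sample."""
    return (signal - intercept) / slope
```

The same residual statistic also feeds the confidence band of the curve, which is what ultimately limits quantitation near the LOD.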
Kurasawa, Shintaro; Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun
2017-11-23
This paper describes and verifies a non-invasive blood glucose measurement method using a fiber Bragg grating (FBG) sensor system. The FBG sensor is installed on the radial artery, and the strain (pulse wave) propagated from the heartbeat is measured. The measured pulse wave signal was used as a collection of feature vectors for multivariate analysis aiming to determine the blood glucose level. The time axis of the pulse wave signal was normalized by two signal processing methods, a shortest-time-cut process and a 1-s-normalization process, and the measurement accuracy of the calculated blood glucose level was compared between them. It was impossible to calculate blood glucose levels exceeding 200 mg/dL with the calibration curve constructed using the shortest-time-cut process. With the 1-s-normalization process, the measurement accuracy of the blood glucose level was improved, and blood glucose levels exceeding 200 mg/dL could be calculated. By examining the loading vectors of the calibration curves that calculated the blood glucose level with high measurement accuracy, we found that the gradient of the pulse-wave peak in the acceleration plethysmogram had a strong influence.
NASA Astrophysics Data System (ADS)
Zananiri, I.; Batt, C. M.; Lanos, Ph.; Tarling, D. H.; Linford, P.
2007-02-01
This paper examines the limitations and deficiencies of the current British archaeomagnetic calibration curve and applies several mathematical approaches in an attempt to produce an improved secular variation curve for the UK for use in archaeomagnetic dating. The dataset compiled is the most complete available in the UK, incorporating published results, PhD theses and unpublished laboratory reports. It comprises 620 archaeomagnetic (directional) data and 238 direct observations of the geomagnetic field, and includes all relevant information available about the site, the archaeomagnetic direction and the archaeological age. A thorough examination of the data was performed to assess their quality and reliability. Various techniques were employed in order to use the data to construct a secular variation (SV) record: moving window with averaging and median, as well as Bayesian statistical modelling. The SV reference curve obtained for the past 4000 years is very similar to that from France, most differences occurring during the early medieval period (or Dark Ages). Two examples of dating of archaeological structures, medieval and pre-Roman, are presented based on the new SV curve for the UK and the implications for archaeomagnetic dating are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Z; Reyhan, M; Huang, Q
Purpose: The calibration of Hounsfield units (HU) to relative proton stopping powers (RSP) is a crucial component in assuring the accurate delivery of proton therapy dose distributions to patients. The purpose of this work is to assess the uncertainty of CT calibration considering the impact of CT slice thickness, position of the plug within the phantom, and phantom size. Methods: The stoichiometric calibration method was employed to develop the CT calibration curve. A Gammex 467 tissue characterization phantom was scanned in the Tomotherapy Cheese phantom and the Gammex 451 phantom using a GE CT scanner. Each plug was inserted individually into the same position in the inner and outer rings of each phantom. Slice thicknesses of 1.25 mm and 2.5 mm were used; other parameters were kept the same. Results: HU of selected human tissues were calculated based on the fitted coefficients (K_ph, K_coh and K_KN), and RSP were calculated according to the Bethe-Bloch equation. The calibration curve was obtained by fitting the Cheese phantom data at 1.25 mm slice thickness. For soft tissue, there is no significant difference when the slice thickness, phantom size, or plug position is changed. For bony structures, RSP increases by up to 1% if the phantom size and plug position change while the slice thickness is kept the same; however, if the slice thickness differs from that used for the calibration curve, a 0.5%-3% deviation is expected depending on the plug position, with the inner position showing the largest deviation (on average about 2.5%). Conclusion: RSP shows a clinically insignificant deviation in the soft tissue region. Special attention may be required when using a slice thickness different from that of the calibration curve for bony structures. It is clinically practical to address a 3% deviation due to differing slice thickness in the definition of clinical margins.
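Once a stoichiometric calibration has produced HU-RSP pairs, treatment planning systems typically evaluate the curve by piecewise-linear interpolation between node points. A minimal sketch of that lookup follows; the node values are invented placeholders, not the fitted curve from this work.

```python
import numpy as np

# illustrative HU -> RSP node points (placeholder values, NOT measured data)
hu_nodes = np.array([-1000.0, -100.0, 0.0, 100.0, 1000.0, 2000.0])
rsp_nodes = np.array([0.001, 0.93, 1.0, 1.07, 1.55, 2.0])

def hu_to_rsp(hu):
    """Piecewise-linear calibration-curve lookup, clamped at the table ends."""
    return np.interp(hu, hu_nodes, rsp_nodes)
```

Note that `np.interp` clamps outside the table, so HU values beyond the calibrated range silently take the end-point RSP; a planning system would normally flag those rather than clamp.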
Eslamizad, Samira; Yazdanpanah, Hassan; Javidnia, Katayon; Sadeghi, Ramezan; Bayat, Mitra; Shahabipour, Sara; Khalighian, Najmeh; Kobarfard, Farzad
2016-01-01
A fast and simple modified QuEChERS (quick, easy, cheap, effective, rugged and safe) extraction method based on spiked calibration curves and direct sample introduction was developed for the determination of benzo[a]pyrene (BaP) in bread by gas chromatography-mass spectrometry with single-quadrupole selected ion monitoring (GC/MS-SQ-SIM). Sample preparation includes extraction of BaP into acetone followed by cleanup with dispersive solid-phase extraction. The use of spiked samples for constructing the calibration curve substantially reduced adverse matrix-related effects. The average recovery of BaP at six concentration levels was in the range of 95-120%. The method proved reproducible, with relative standard deviations less than 14.5% at all concentration levels. The limit of detection and limit of quantification were 0.3 ng/g and 0.5 ng/g, respectively. A correlation coefficient of 0.997 was obtained for spiked calibration standards over the concentration range of 0.5-20 ng/g. To the best of our knowledge, this is the first time that a QuEChERS method has been used for the analysis of BaP in bread. The developed method was used for the determination of BaP in 29 traditional (Sangak) and industrial (Senan) bread samples collected in Tehran in 2014. The results showed that two Sangak samples were contaminated with BaP; a comprehensive survey monitoring BaP in Sangak bread samples therefore seems to be needed. This is the first report concerning contamination of bread samples with BaP in Iran. PMID:27642317
Skuratovsky, Aleksander; Soto, Robert J; Porter, Marc D
2018-06-19
This paper presents a method for immunometric biomarker quantitation that uses standard flow-through assay reagents and obviates the need for constructing a calibration curve. The approach relies on a nitrocellulose immunoassay substrate with multiple physical addresses for analyte capture, each modified with different amounts of an analyte-specific capture antibody. As such, each address generates a distinctly different readout signal that is proportional to the analyte concentration in the sample. To establish the feasibility of this concept, equations derived from antibody-antigen binding equilibrium were first applied in modeling experiments. Next, nitrocellulose membranes with multiple capture antibody addresses were fabricated for detection of a model analyte, human immunoglobulin G (hIgG), by a heterogeneous sandwich immunoassay using antibody-modified gold nanoparticles (AuNPs) as the immunolabel. Counting the number of colored capture addresses visible to the unassisted eye enabled semiquantitative hIgG determination. We then demonstrated that, by leveraging the localized surface plasmon resonance of the AuNPs, surface-enhanced Raman spectroscopy (SERS) can be used for quantitative readout. By comparing the SERS signal intensities from each capture address with values predicted using immunoassay equilibrium theory, the concentration of hIgG can be determined (∼30% average absolute deviation) without reference to a calibration curve. This work also demonstrates the ability to manipulate the dynamic range of the assay over ∼4 orders of magnitude (from 2 ng mL⁻¹ to 10 μg mL⁻¹). The potential prospects in applying this concept to point-of-need diagnostics are also discussed.
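The calibration-free idea can be sketched with a simple Langmuir-type binding model: if address i carries antibody loading B_i, its signal is S_i = a·B_i·C/(Kd + C), and the concentration C is the value whose predicted pattern across addresses best matches the measured signals. All constants below (Kd, loadings, proportionality) are hypothetical, not the paper's fitted equilibrium parameters.

```python
import numpy as np

KD = 5.0   # hypothetical dissociation constant (arbitrary units)
A  = 1.0   # hypothetical instrument proportionality constant
B  = np.array([1.0, 0.3, 0.1, 0.03])  # relative antibody loading per address

def model(c):
    """Predicted signal at each capture address for concentration c."""
    return A * B * c / (KD + c)

def estimate_conc(signals, grid=np.logspace(-2, 3, 2000)):
    """Pick the concentration whose predicted address pattern best
    matches the measured signals (least squares over a log grid)."""
    errs = [np.sum((model(c) - signals) ** 2) for c in grid]
    return grid[int(np.argmin(errs))]

true_c = 12.0
print(estimate_conc(model(true_c)))  # recovers a value close to 12
```

The multi-address pattern replaces the calibration curve: no external standards are needed once the equilibrium constants are known.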
Powder X-ray diffraction method for the quantification of cocrystals in the crystallization mixture.
Padrela, Luis; de Azevedo, Edmundo Gomes; Velaga, Sitaram P
2012-08-01
The solid state purity of cocrystals critically affects their performance. Thus, it is important to accurately quantify the purity of cocrystals in the final crystallization product. The aim of this study was to develop a powder X-ray diffraction (PXRD) quantification method for investigating the purity of cocrystals. The method developed was employed to study the formation of indomethacin-saccharin (IND-SAC) cocrystals by mechanochemical methods. Pure IND-SAC cocrystals were geometrically mixed with a 1:1 (w/w) mixture of indomethacin/saccharin in various proportions. An accurately measured amount (550 mg) of the mixture was used for the PXRD measurements. The most intense, non-overlapping, characteristic diffraction peak of IND-SAC was used to construct the calibration curve in the range 0-100% (w/w). This calibration model was validated and used to monitor the formation of IND-SAC cocrystals by liquid-assisted grinding (LAG). The IND-SAC cocrystal calibration curve showed excellent linearity (R² = 0.9996) over the entire concentration range, displaying limit of detection (LOD) and limit of quantification (LOQ) values of 1.23% (w/w) and 3.74% (w/w), respectively. Validation results showed excellent correlations between actual and predicted concentrations of IND-SAC cocrystals (R² = 0.9981). The accuracy and reliability of the PXRD quantification method depend on the methods of sample preparation and handling. The crystallinity of the IND-SAC cocrystals was higher when larger amounts of methanol were used in the LAG method. The PXRD quantification method is suitable and reliable for verifying the purity of cocrystals in the final crystallization product.
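LOD and LOQ values of this kind are commonly derived from the calibration fit via the ICH-style formulas LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation and S the slope. A short sketch, with invented peak-area data standing in for the diffraction measurements.

```python
import numpy as np

# Illustrative calibration data: % w/w cocrystal vs. integrated peak area.
frac = np.array([0.0, 10.0, 25.0, 50.0, 75.0, 100.0])
peak = np.array([1.0, 105.0, 252.0, 498.0, 751.0, 1003.0])

slope, intercept = np.polyfit(frac, peak, 1)
resid = peak - (slope * frac + intercept)
sigma = np.std(resid, ddof=2)   # ddof=2: two fitted parameters

lod = 3.3 * sigma / slope   # limit of detection, % w/w
loq = 10.0 * sigma / slope  # limit of quantification, % w/w
print(lod, loq)
```

By construction LOQ/LOD = 10/3.3 ≈ 3.0, consistent with the roughly threefold gap between the 1.23% and 3.74% values reported above.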
Calibration of streamflow gauging stations at the Tenderfoot Creek Experimental Forest
Scott W. Woods
2007-01-01
We used tracer based methods to calibrate eleven streamflow gauging stations at the Tenderfoot Creek Experimental Forest in western Montana. At six of the stations the measured flows were consistent with the existing rating curves. At Lower and Upper Stringer Creek, Upper Sun Creek and Upper Tenderfoot Creek the published flows, based on the existing rating curves,...
Self-calibrating multiplexer circuit
Wahl, Chris P.
1997-01-01
A time domain multiplexer system with automatic determination of acceptable multiplexer output limits, error determination, or correction comprises a time domain multiplexer, a computer, a constant current source capable of at least three distinct current levels, and two series resistances employed for calibration and testing. A two-point linear calibration curve defining acceptable multiplexer voltage limits may be defined by the computer by determining the voltage output of the multiplexer for very accurately known input signals developed from predetermined current levels across the series resistances. Drift in the multiplexer may be detected by the computer when the output voltage limits expected during normal operation are exceeded, or when the relationship defined by the calibration curve is invalidated.
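The two-point calibration described above reduces to solving for a gain and offset from two precisely known input voltages (current level × series resistance) and their measured outputs. A minimal sketch; all voltage values are illustrative.

```python
def two_point_cal(v_in1, v_out1, v_in2, v_out2):
    """Gain and offset of the channel from two known input/output pairs."""
    gain = (v_out2 - v_out1) / (v_in2 - v_in1)
    offset = v_out1 - gain * v_in1
    return gain, offset

def correct(v_out, gain, offset):
    """Map a raw multiplexer reading back to the true input voltage."""
    return (v_out - offset) / gain

gain, offset = two_point_cal(0.1, 0.205, 1.0, 2.005)
print(correct(1.105, gain, offset))  # -> 0.55
```

Drift detection then amounts to re-measuring the two known points and flagging the channel when the recovered gain/offset (or the raw readings) fall outside the expected limits.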
Li, Xiaochuan; Bai, Xuedong; Wu, Yaohong; Ruan, Dike
2016-03-15
To construct and validate a model to predict responsible nerve roots in lumbar degenerative disease with diagnostic doubt (DD). From January 2009 to January 2013, 163 patients with DD were assigned to the construction (n = 106) or validation sample (n = 57) according to their admission times to hospital. Outcome was assessed according to the Japanese Orthopedic Association (JOA) recovery rate as excellent, good, fair, or poor; the first two results were considered an effective clinical outcome (ECO). Baseline patient and clinical characteristics were considered as secondary variables. A multivariate logistic regression model was used to construct a model with ECO as the dependent variable and the other factors as explanatory variables. The odds ratios (ORs) of each risk factor were adjusted and transformed into a scoring system. The area under the curve (AUC) was calculated and validated in both internal and external samples. Moreover, the calibration plot and predictive ability of this scoring system were also tested for further validation. The proportion of patients with DD achieving an ECO was around 76% in both the construction and validation samples (76.4% and 75.5%, respectively). The associated risk factors were a higher preoperative visual analog pain scale (VAS) score (OR = 1.56, p < 0.01), stenosis levels of L4/5 or L5/S1 (OR = 1.44, p = 0.04), stenosis located at the neuroforamen (OR = 1.95, p = 0.01), neurological deficit (OR = 1.62, p = 0.01), and greater VAS improvement after selective nerve root block (SNRB) (OR = 3.42, p = 0.02). The internal AUC was 0.85 and the external AUC was 0.72, with a good calibration plot of prediction accuracy. Moreover, the predicted ECOs did not differ from the actual results (p = 0.532). We have constructed and validated a predictive model for confirming responsible nerve roots in patients with DD.
The associated risk factors were preoperative VAS score, stenosis levels of L4/5 or L5/S1, stenosis located at the neuroforamen, neurological deficit, and VAS improvement after SNRB. A tool such as this is beneficial in the preoperative counseling of patients, shared surgical decision making, and ultimately improving safety in spine surgery.
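The discrimination metric used here, the AUC, is equivalent to the Mann-Whitney U statistic: the probability that a randomly chosen patient with the outcome scores higher than one without it. A minimal sketch with made-up scores, not the study's data.

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic: fraction of (positive, negative)
    pairs where the positive case scores higher (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

pos = [7, 9, 6, 8]  # hypothetical risk scores, patients with ECO
neg = [3, 5, 6, 2]  # hypothetical risk scores, patients without ECO
print(auc(pos, neg))  # -> 0.96875
```

An AUC of 0.85 (internal) vs. 0.72 (external), as reported above, is the usual pattern of optimism shrinking on new data.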
ESR/Alanine gamma-dosimetry in the 10-30 Gy range.
Fainstein, C; Winkler, E; Saravi, M
2000-05-01
We report alanine dosimeter preparation, procedures for using the ESR dosimetry method, and the resulting calibration curve for gamma irradiation in the 10-30 Gy range. We use the calibration curve to measure the dose in gamma irradiation of human blood, as required in blood transfusion therapy. The ESR/Alanine results are compared against those obtained using the thermoluminescent dosimetry (TLD) method.
Errors introduced by dose scaling for relative dosimetry
Watanabe, Yoichi; Hayashi, Naoki
2012-01-01
Some dosimeters require a relationship between detector signal and delivered dose. The relationship (characteristic curve or calibration equation) usually depends on the environment under which the dosimeters are manufactured or stored. To compensate for the difference in radiation response among different batches of dosimeters, the measured dose can be scaled by normalizing the measured dose to a specific dose. Such a procedure, often called “relative dosimetry”, allows us to skip the time‐consuming production of a calibration curve for each irradiation. In this study, the magnitudes of errors due to the dose scaling procedure were evaluated by using the characteristic curves of BANG3 polymer gel dosimeter, radiographic EDR2 films, and GAFCHROMIC EBT2 films. Several sets of calibration data were obtained for each type of dosimeters, and a calibration equation of one set of data was used to estimate doses of the other dosimeters from different batches. The scaled doses were then compared with expected doses, which were obtained by using the true calibration equation specific to each batch. In general, the magnitude of errors increased with increasing deviation of the dose scaling factor from unity. Also, the errors strongly depended on the difference in the shape of the true and reference calibration curves. For example, for the BANG3 polymer gel, of which the characteristic curve can be approximated with a linear equation, the error for a batch requiring a dose scaling factor of 0.87 was larger than the errors for other batches requiring smaller magnitudes of dose scaling, or scaling factors of 0.93 or 1.02. The characteristic curves of EDR2 and EBT2 films required nonlinear equations. With those dosimeters, errors larger than 5% were commonly observed in the dose ranges of below 50% and above 150% of the normalization dose. 
In conclusion, the dose scaling for relative dosimetry introduces large errors in the measured doses when a large dose scaling is applied, and this procedure should be applied with special care. PACS numbers: 87.56.Da, 06.20.Dk, 06.20.fb PMID:22955658
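The dose-scaling error mechanism described above can be reproduced numerically: a reference calibration curve is scaled to match another batch at the normalization dose, then used in place of that batch's true curve, and the discrepancy grows away from the normalization point. Both curves below are hypothetical, not the BANG3/EDR2/EBT2 fits.

```python
def ref_cal(signal):
    """Reference batch calibration: dose = 2.0 * signal (hypothetical)."""
    return 2.0 * signal

def true_cal(signal):
    """Other batch's true calibration, slightly nonlinear (hypothetical)."""
    return 2.3 * signal - 0.02 * signal ** 2

norm_signal = 10.0  # signal at the normalization dose
scale = true_cal(norm_signal) / ref_cal(norm_signal)  # dose scaling factor

for s in (2.0, 10.0, 18.0):
    scaled = scale * ref_cal(s)
    err = 100.0 * (scaled - true_cal(s)) / true_cal(s)
    print(f"signal {s}: error {err:+.1f}%")
```

The error vanishes at the normalization signal and grows in both directions, mirroring the >5% errors reported below 50% and above 150% of the normalization dose.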
Roelen, Corné A M; van Rhenen, Willem; Groothoff, Johan W; van der Klink, Jac J L; Twisk, Jos W R; Heymans, Martijn W
2014-07-01
Work ability predicts future disability pension (DP). A single-item work ability score (WAS) is emerging as a measure for work ability. This study compared single-item WAS with the multi-item work ability index (WAI) in its ability to identify workers at risk of DP. This prospective cohort study comprised 11 537 male construction workers, who completed the WAI at baseline and reported DP after a mean 2.3 years of follow-up. WAS and WAI were calibrated for DP risk predictions with the Hosmer-Lemeshow (H-L) test and their ability to discriminate between high- and low-risk construction workers was investigated with the area under the receiver operating characteristic curve (AUC). At follow-up, 336 (3%) construction workers reported DP. Both WAS [odds ratio (OR) 0.72, 95% confidence interval (95% CI) 0.66-0.78] and WAI (OR 0.57, 95% CI 0.52-0.63) scores were associated with DP at follow-up. The WAS showed miscalibration (H-L model χ²=10.60; df=3; P=0.01) and poorly discriminated between high- and low-risk construction workers (AUC 0.67, 95% CI 0.64-0.70). In contrast, calibration (H-L model χ²=8.20; df=8; P=0.41) and discrimination (AUC 0.78, 95% CI 0.75-0.80) were both adequate for the WAI. Although associated with the risk of future DP, the single-item WAS poorly identified male construction workers at risk of DP. We recommend using the multi-item WAI to screen for risk of DP in occupational health practice.
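The Hosmer-Lemeshow statistic used here compares observed and expected event counts within risk groups: χ² = Σ (O − E)² / [E(1 − p̄)] over the groups. A minimal sketch with invented group counts, not the cohort's data.

```python
def hosmer_lemeshow(groups):
    """groups: list of (n, observed_events, mean_predicted_risk).
    Returns the H-L chi-square statistic."""
    chi2 = 0.0
    for n, obs, p in groups:
        exp = n * p
        chi2 += (obs - exp) ** 2 / (exp * (1 - p))
    return chi2

# Three hypothetical risk groups of 100 workers each.
groups = [(100, 4, 0.03), (100, 6, 0.05), (100, 11, 0.10)]
print(round(hosmer_lemeshow(groups), 3))
```

The statistic is referred to a chi-square distribution with df = groups − 2 (hence the df values quoted above); a small P value signals miscalibration.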
Linearization of Positional Response Curve of a Fiber-optic Displacement Sensor
NASA Astrophysics Data System (ADS)
Babaev, O. G.; Matyunin, S. A.; Paranin, V. D.
2018-01-01
Currently, the creation of optical measuring instruments and sensors for measuring linear displacement is one of the most relevant problems in the area of instrumentation. Fiber-optic contactless sensors based on the magneto-optical effect are of special interest. They are essentially contactless, non-electrical and have a closed optical channel not subject to contamination. The main problem of this type of sensors is the non-linearity of their positional response curve due to the hyperbolic nature of the magnetic field intensity variation induced by moving the magnetic source mounted on the controlled object relative to the sensing element. This paper discusses an algorithmic method of linearizing the positional response curve of fiber-optic displacement sensors in any selected range of the displacements to be measured. The method is divided into two stages: 1 - definition of the calibration function, 2 - measurement and linearization of the positional response curve (including its temperature stabilization). The algorithm under consideration significantly reduces the number of points of the calibration function, which is essential for the calibration of temperature dependence, due to the use of the points that randomly deviate from the grid points with uniform spacing. Subsequent interpolation of the deviating points and piecewise linear-plane approximation of the calibration function reduces the microcontroller storage capacity for storing the calibration function and the time required to process the measurement results. The paper also presents experimental results of testing real samples of fiber-optic displacement sensors.
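The two-stage scheme above, a stored calibration function plus runtime inversion, can be sketched with piecewise-linear interpolation over a sparse calibration table. The hyperbolic-like response values below are illustrative, not the sensor's measured data.

```python
import numpy as np

# Stage 1: sparse calibration function, true displacement -> sensor reading.
disp = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])          # mm
reading = np.array([1.00, 0.52, 0.36, 0.22, 0.13, 0.08])  # normalized signal

def linearize(r):
    """Stage 2: invert the monotone response curve at runtime.
    np.interp needs ascending x, so both arrays are reversed."""
    return np.interp(r, reading[::-1], disp[::-1])

print(linearize(0.36))  # -> 2.0 mm
```

Storing only the sparse nodes and interpolating between them is exactly what keeps the microcontroller's calibration-table footprint small.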
2007-09-01
Calibration curves for CT number (Hounsfield units) vs. mineral density (g/cc) and, in Figure 3.4, vs. apparent density (g/cc). The CT number is named in Hounsfield units (HU) after Sir Godfrey Hounsfield and is defined as K[(μ − μw)/μw], where K is a magnifying constant that depends on the make of the CT scanner, μ is the linear attenuation coefficient of the scanned material, and μw that of water.
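The CT-number definition above is a one-line computation; with K = 1000 it yields the familiar Hounsfield scale (water at 0 HU, air near −1000 HU). The attenuation coefficients below are illustrative.

```python
def ct_number(mu, mu_water, K=1000.0):
    """CT number K*(mu - mu_w)/mu_w; K=1000 gives Hounsfield units."""
    return K * (mu - mu_water) / mu_water

print(ct_number(0.190, 0.190))   # water -> 0 HU
print(ct_number(0.0002, 0.190))  # air   -> close to -1000 HU
```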
Fu, J; Li, L; Yang, X Q; Zhu, M J
2011-01-01
Leucine carboxypeptidase (EC 3.4.16) activity in Actinomucor elegans bran koji was investigated via absorbance at 507 nm after stained by Cd-nihydrin solution, with calibration curve A, which was made by a set of known concentration standard leucine, calibration B, which was made by three sets of known concentration standard leucine solutions with the addition of three concentrations inactive crude enzyme extract, and calibration C, which was made by three sets of known concentration standard leucine solutions with the addition of three concentrations crude enzyme extract. The results indicated that application of pure amino acid standard curve was not a suitable way to determine carboxypeptidase in complicate mixture, and it probably led to overestimated carboxypeptidase activity. It was found that addition of crude exact into pure amino acid standard curve had a significant difference from pure amino acid standard curve method (p < 0.05). There was no significant enzyme activity difference (p > 0.05) between addition of active crude exact and addition of inactive crude kind, when the proper dilute multiple was used. It was concluded that the addition of crude enzyme extract to the calibration was needed to eliminate the interference of free amino acids and related compounds presented in crude enzyme extract.
Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; ...
2017-09-07
In this paper, we demonstrate a statistical procedure for learning a high-order eddy viscosity model (EVM) from experimental data and using it to improve the predictive skill of a Reynolds-averaged Navier–Stokes (RANS) simulator. The method is tested in a three-dimensional (3D), transonic jet-in-crossflow (JIC) configuration. The process starts with a cubic eddy viscosity model (CEVM) developed for incompressible flows. It is fitted to limited experimental JIC data using shrinkage regression. The shrinkage process removes all the terms from the model, except an intercept, a linear term, and a quadratic one involving the square of the vorticity. The shrunk eddy viscosity model is implemented in an RANS simulator and calibrated, using vorticity measurements, to infer three parameters. The calibration is Bayesian and is solved using a Markov chain Monte Carlo (MCMC) method. A 3D probability density distribution for the inferred parameters is constructed, thus quantifying the uncertainty in the estimate. The phenomenal cost of using a 3D flow simulator inside an MCMC loop is mitigated by using surrogate models ("curve-fits"). A support vector machine classifier (SVMC) is used to impose our prior belief regarding parameter values, specifically to exclude nonphysical parameter combinations. The calibrated model is compared, in terms of its predictive skill, to simulations using uncalibrated linear and CEVMs. Finally, we find that the calibrated model, with one quadratic term, is more accurate than the uncalibrated simulator. The model is also checked at a flow condition at which the model was not calibrated.
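The MCMC-with-surrogate idea can be illustrated at toy scale: a Metropolis sampler calibrates one parameter of a cheap surrogate against noisy synthetic "measurements", with a bounded prior playing the role of the SVMC's exclusion of nonphysical values. This is a sketch of the generic technique, not the paper's RANS/EVM setup.

```python
import math, random

random.seed(1)

def surrogate(theta, x):
    """Cheap stand-in 'curve fit' replacing the expensive simulator."""
    return theta * x * x

# Synthetic measurements generated at theta = 2.0 with Gaussian noise.
xs = [0.5, 1.0, 1.5, 2.0]
data = [surrogate(2.0, x) + random.gauss(0, 0.05) for x in xs]

def log_post(theta):
    if not (0.0 < theta < 10.0):  # prior support (cf. SVMC constraint)
        return -math.inf
    sse = sum((surrogate(theta, x) - d) ** 2 for x, d in zip(xs, data))
    return -sse / (2 * 0.05 ** 2)

theta, samples = 1.0, []
for _ in range(5000):
    prop = theta + random.gauss(0, 0.1)       # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop                          # Metropolis accept
    samples.append(theta)

post_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
print(round(post_mean, 2))  # posterior mean near the true value 2.0
```

The retained samples approximate the posterior density, which is exactly the uncertainty quantification described above, here in 1D rather than 3D.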
Brunet, Bertrand R.; Barnes, Allan J.; Scheidweiler, Karl B.; Mura, Patrick
2009-01-01
A sensitive and specific method is presented to simultaneously quantify methadone, heroin, cocaine and metabolites in sweat. Drugs were eluted from sweat patches with sodium acetate buffer, followed by SPE and quantification by GC/MS with electron impact ionization and selected ion monitoring. Daily calibration for anhydroecgonine methyl ester, ecgonine methyl ester, cocaine, benzoylecgonine (BE), codeine, morphine, 6-acetylcodeine, 6-acetylmorphine (6AM), heroin (5–1000 ng/patch) and methadone (10–1000 ng/patch) achieved determination coefficients of >0.995, and calibrators quantified to within ±20% of the target concentrations. Extended calibration curves (1000–10,000 ng/patch) were constructed for methadone, cocaine, BE and 6AM by modifying injection techniques. Within-run (N=5) and between-run (N=20) imprecision was calculated at six control levels across the dynamic ranges, with coefficients of variation of <6.5%. Accuracy at these concentrations was within ±11.9% of target. Heroin hydrolysis during specimen processing was <11%. This novel assay offers effective monitoring of drug exposure during drug treatment, workplace and criminal justice monitoring programs. PMID:18607576
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Sun, B; Li, H
Purpose: The current standard for calculation of photon and electron dose requires conversion of Hounsfield units (HU) to electron density (ED) by applying a calibration curve specifically constructed for the corresponding CT tube voltage. This practice limits the use of the CT scanner to a single tube voltage and hinders the freedom in the selection of optimal tube voltage for better image quality. The objective of this study is to report a prototype CT reconstruction algorithm that provides direct ED images from the raw CT data independently of the tube voltage used during acquisition. Methods: A tissue substitute phantom was scanned for stoichiometric CT calibrations at tube voltages of 70 kV, 80 kV, 100 kV, 120 kV and 140 kV, respectively. HU images and direct ED images were acquired sequentially on a thoracic anthropomorphic phantom at the same tube voltages. Electron densities converted from the HU images were compared to ED obtained from the direct ED images. A 7-field treatment plan was made on all HU and ED images. Gamma analysis was performed to quantify the dosimetric difference between the two schemes for acquiring ED. Results: The average deviation of EDs obtained from the direct ED images was −1.5%±2.1% from the EDs from HU images with the corresponding CT calibration curves applied. Gamma analysis on dose calculated on the direct ED images and the HU images acquired at the same tube voltage indicated negligible difference, with the lowest passing rate at 99.9%. Conclusion: Direct ED images require no CT calibration while demonstrating dosimetry equivalent to that obtained from standard HU images. The ability to acquire direct ED images simplifies the current practice at a safer level by eliminating CT calibration and HU conversion from commissioning and treatment planning, respectively. Furthermore, it unlocks a wider range of tube voltages in the CT scanner for better imaging quality while maintaining similar dosimetric accuracy.
Calibrations between the variables of microbial TTI response and ground pork qualities.
Kim, Eunji; Choi, Dong Yeol; Kim, Hyun Chul; Kim, Keehyuk; Lee, Seung Ju
2013-10-01
A time-temperature indicator (TTI) based on a lactic acid bacterium, Weissella cibaria CIFP009, was applied to ground pork packaging. Calibration curves between TTI response and pork quality variables were obtained from storage tests at 2°C, 10°C, and 13°C. The curves of TTI response vs. total cell number at different temperatures coincided most closely, showing the smallest coefficient of variation (CV = 11%) among the quality variables at a given TTI response (titratable acidity) and hence the highest representativeness of calibration, followed by pH (23%), volatile basic nitrogen (VBN) (25%), and thiobarbituric acid-reactive substances (TBARS) (47%). Similarity of the Arrhenius activation energy (Ea) could also reflect the representativeness of calibration: the total cell number (104.9 kJ/mol) was the most similar to the TTI response (106.2 kJ/mol), followed by pH (113.6 kJ/mol), VBN (77.4 kJ/mol), and TBARS (55.0 kJ/mol). Copyright © 2013 Elsevier Ltd. All rights reserved.
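Activation energies like those quoted above come from an Arrhenius fit: regressing ln(k) against 1/T gives a slope of −Ea/R. A minimal sketch using hypothetical rate constants at the three storage temperatures, not the measured TTI data.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)
T = np.array([275.15, 283.15, 286.15])  # 2, 10, 13 degC in kelvin
k = np.array([0.010, 0.045, 0.080])     # hypothetical rate constants

# Arrhenius: ln k = ln A - Ea/(R*T), so the slope of ln k vs 1/T is -Ea/R.
slope, _ = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R
print(round(Ea / 1000.0, 1), "kJ/mol")
```

Matching Ea between the TTI and a quality attribute (as with total cell number above) means both respond to temperature history in the same way, which is what makes the TTI a faithful proxy.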
Flight calibration tests of a nose-boom-mounted fixed hemispherical flow-direction sensor
NASA Technical Reports Server (NTRS)
Armistead, K. H.; Webb, L. D.
1973-01-01
Flight calibrations of a fixed hemispherical flow angle-of-attack and angle-of-sideslip sensor were made from Mach numbers of 0.5 to 1.8. Maneuvers were performed by an F-104 airplane at selected altitudes to compare the measurement of flow angle of attack from the fixed hemispherical sensor with that from a standard angle-of-attack vane. The hemispherical flow-direction sensor measured differential pressure at two angle-of-attack ports and two angle-of-sideslip ports in diametrically opposed positions. Stagnation pressure was measured at a center port. The results of these tests showed that the calibration curves for the hemispherical flow-direction sensor were linear for angles of attack up to 13 deg. The overall uncertainty in determining angle of attack from these curves was plus or minus 0.35 deg or less. A Mach number position error calibration curve was also obtained for the hemispherical flow-direction sensor. The hemispherical flow-direction sensor exhibited a much larger position error than a standard uncompensated pitot-static probe.
Preliminary calibration of the ACP safeguards neutron counter
NASA Astrophysics Data System (ADS)
Lee, T. H.; Kim, H. D.; Yoon, J. S.; Lee, S. Y.; Swinhoe, M.; Menlove, H. O.
2007-10-01
The Advanced Spent Fuel Conditioning Process (ACP), a kind of pyroprocess, has been developed at the Korea Atomic Energy Research Institute (KAERI). Since there is no IAEA safeguards criteria for this process, KAERI has developed a neutron coincidence counter to make it possible to perform a material control and accounting (MC&A) for its ACP materials for the purpose of a transparency in the peaceful uses of nuclear materials at KAERI. The test results of the ACP Safeguards Neutron Counter (ASNC) show a satisfactory performance for the Doubles count measurement with a low measurement error for its cylindrical sample cavity. The neutron detection efficiency is about 21% with an error of ±1.32% along the axial direction of the cavity. Using two 252Cf neutron sources, we obtained various parameters for the Singles and Doubles rates for the ASNC. The Singles, Doubles, and Triples rates for a 252Cf point source were obtained by using the MCNPX code and the results for the FT8 CAP multiplicity tally option with the values of ε, fd, and ft measured with a strong source most closely match the measurement results to within a 1% error. A preliminary calibration curve for the ASNC was generated by using the point model equation relationship between 244Cm and 252Cf and the calibration coefficient for the non-multiplying sample is 2.78×10^5 (Doubles counts/s/g 244Cm). The preliminary calibration curves for the ACP samples were also obtained by using an MCNPX simulation. A neutron multiplication influence on an increase of the Doubles rate for a metal ingot and UO2 powder is clearly observed. These calibration curves will be modified and complemented, when hot calibration samples become available. To verify the validity of this calibration curve, a measurement of spent fuel standards for a known 244Cm mass will be performed in the near future.
Skrdla, Peter J; Zhang, Dan
2014-03-01
The crystalline citrate salt (CS) of a developmental pharmaceutical compound, MK-Q, was investigated in this work from two different, but related, perspectives. In the first part of the paper, the apparent disproportionation kinetics were surveyed using two different slurry systems, one containing water and the other a pH 6.9 phosphate buffer, using time-dependent measurements of the solution pH or by acquiring online Raman spectra of the solids. While the CS is generally stable when stored as a solid under ambient conditions of temperature and humidity, its low pHmax (<3) facilitates rapid disproportionation in aqueous solution, particularly at higher pH values. The rate of disappearance of the CS was found to obey first-order (Noyes-Whitney/dissolution rate-limited) kinetics; however, the formation of the crystalline product form in the slurry system was observed to exhibit kinetics consistent with a heterogeneous nucleation-and-growth mechanism. In the second part of this paper, more sensitive offline measurements made using XRPD, DSC and FT-Raman spectroscopy were applied to the characterization of binary physical mixtures of the CS and free base (FB) crystalline forms of MK-Q to obtain a calibration curve for each technique. It was found that all calibration plots exhibited good linearity of response, with the limit of detection (LOD) for each technique estimated to be ≤7 wt% FB. While additional calibration curves would need to be constructed to allow for accurate quantitation in various slurry systems, the general feasibility of these techniques is demonstrated for detecting low levels of CS disproportionation. Copyright © 2013 Elsevier B.V. All rights reserved.
Rahman, Md Musfiqur; Abd El-Aty, A M; Na, Tae-Woong; Park, Joon-Seong; Kabir, Md Humayun; Chung, Hyung Suk; Lee, Han Sol; Shin, Ho-Chul; Shim, Jae-Han
2017-08-15
A simultaneous analytical method was developed for the determination of methiocarb and its metabolites, methiocarb sulfoxide and methiocarb sulfone, in five livestock products (chicken, pork, beef, table egg, and milk) using liquid chromatography-tandem mass spectrometry. Due to the rapid degradation of methiocarb and its metabolites, a quick sample preparation method was developed using acetonitrile and salts followed by purification via dispersive solid-phase extraction (d-SPE). Seven-point calibration curves were constructed separately in each matrix, and good linearity was observed in each matrix-matched calibration curve with a coefficient of determination (R²) ≥ 0.991. The limits of detection and quantification were 0.0016 and 0.005 mg/kg, respectively, for all tested analytes in various matrices. The method was validated in triplicate at three fortification levels (equivalent to 1, 2, and 10 times the limit of quantification) with a recovery rate ranging between 76.4% and 118.0% and a relative standard deviation ≤ 10.0%. The developed method was successfully applied to market samples, and no residues of methiocarb and/or its metabolites were observed in the tested samples. In sum, this method can be applied for the routine analysis of methiocarb and its metabolites in foods of animal origin. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zegers, R. P. C.; Yu, M.; Bekdemir, C.; Dam, N. J.; Luijten, C. C. M.; de Goey, L. P. H.
2013-08-01
Planar laser-induced fluorescence (LIF) of toluene has been applied in an optical engine and a high-pressure cell, to determine temperatures of fuel sprays and in-cylinder vapors. The method relies on a redshift of the toluene LIF emission spectrum with increasing temperature. Toluene fluorescence is recorded simultaneously in two disjunct wavelength bands by a two-camera setup. After calibration, the pixel-by-pixel LIF signal ratio is a proxy for the local temperature. A detailed measurement procedure is presented to minimize measurement inaccuracies and to improve precision. n-Heptane is used as the base fuel and 10 % of toluene is added as a tracer. The toluene LIF method is capable of measuring temperatures up to 700 K; above that the signal becomes too weak. The precision of the spray temperature measurements is 4 % and the spatial resolution 1.3 mm. We pay particular attention to the construction of the calibration curve that is required to translate LIF signal ratios into temperature, and to possible limitations in the portability of this curve between different setups. The engine results are compared to those obtained in a constant-volume high-pressure cell, and the fuel spray results obtained in the high-pressure cell are also compared to LES simulations. We find that the hot ambient gas entrained by the head vortex gives rise to a hot zone on the spray axis.
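Two-color ratio thermometry of this kind reduces, after calibration, to inverting a monotone ratio-to-temperature curve pixel by pixel. A minimal sketch; the calibration points below are invented for illustration, not the paper's toluene-LIF calibration.

```python
import numpy as np

# Hypothetical calibration: red/blue band intensity ratio vs. temperature.
T_cal = np.array([400.0, 500.0, 600.0, 700.0])  # K
ratio_cal = np.array([0.8, 1.3, 2.1, 3.4])      # redshifted/blue band ratio

def ratio_to_temperature(ratio):
    """Pixel-by-pixel inversion of the calibration curve by interpolation."""
    return np.interp(ratio, ratio_cal, T_cal)

pixel_ratios = np.array([1.3, 2.75, 0.95])
print(ratio_to_temperature(pixel_ratios))  # -> [500. 650. 430.]
```

The 700 K ceiling reported above corresponds to the upper end of such a curve, where the absolute signal becomes too weak for a reliable ratio.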
Quantification of chitinase and thaumatin-like proteins in grape juices and wines.
Le Bourse, D; Conreux, A; Villaume, S; Lameiras, P; Nuzillard, J-M; Jeandet, P
2011-09-01
Chitinases and thaumatin-like proteins are important grape proteins as they have a great influence on wine quality. The quantification of these proteins in grape juices and wines, along with their purification, is therefore crucial to study their intrinsic characteristics and the exact role they play in wines. The main isoforms of these two proteins from Chardonnay grape juice were thus purified by liquid chromatography. Two fast protein liquid chromatography (FPLC) steps allowed the fractionation and purification of the juice proteins, using cation exchange and hydrophobic interaction media. A further high-performance liquid chromatography (HPLC) step was used to achieve higher purity levels. Fraction assessment was achieved by mass spectrometry. Fraction purity was determined by HPLC to detect the presence of protein contaminants, and by nuclear magnetic resonance (NMR) spectroscopy to detect the presence of organic contaminants. Once pure fractions of lyophilized chitinase and thaumatin-like protein were obtained, ultra-HPLC (UHPLC) and enzyme-linked immunosorbent assay (ELISA) calibration curves were constructed. The quantification of these proteins in different grape juice and wine samples was thus achieved for the first time with both techniques through comparison with the purified protein calibration curve. UHPLC and ELISA showed very consistent results (less than 16% deviation for both proteins) and either could be considered to provide an accurate and reliable quantification of proteins in the oenology field.
Wu, Cai; Li, Liang
2018-05-15
This paper focuses on quantifying and estimating the predictive accuracy of prognostic models for time-to-event outcomes with competing events. We consider time-dependent discrimination and calibration metrics, including the receiver operating characteristic curve and the Brier score, in the context of competing risks. To address censoring, we propose a unified nonparametric estimation framework for both discrimination and calibration measures, by weighting the censored subjects with the conditional probability of the event of interest given the observed data. The proposed method can be extended to time-dependent predictive accuracy metrics constructed from a general class of loss functions. We apply the methodology to a data set from the African American Study of Kidney Disease and Hypertension to evaluate the predictive accuracy of a prognostic risk score in predicting end-stage renal disease, accounting for the competing risk of pre-end-stage renal disease death, and evaluate its numerical performance in extensive simulation studies. Copyright © 2018 John Wiley & Sons, Ltd.
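The censoring-weighted idea can be illustrated with a minimal sketch of an inverse-probability-of-censoring-weighted (IPCW) Brier score for the event of interest under competing risks. The simple 1/G weights below are a standard simplification, not the paper's conditional-probability weights, and all inputs are illustrative:

```python
# Sketch of an IPCW Brier score at horizon t for event type 1 with a
# competing event type 2. G(u) would normally be a Kaplan-Meier estimate
# of the censoring survival function; here it is passed in as a callable.
def ipcw_brier(time, event, pred_cif, t, G):
    """time: observed times; event: 0=censored, 1=event of interest,
    2=competing event; pred_cif: predicted cumulative incidence of event 1
    at t for each subject; G: censoring survival function."""
    n = len(time)
    score = 0.0
    for i in range(n):
        if time[i] <= t and event[i] != 0:
            # an event (of either type) occurred by t: the outcome is known
            y = 1.0 if event[i] == 1 else 0.0
            score += (y - pred_cif[i]) ** 2 / G(time[i])
        elif time[i] > t:
            # still event-free at t: outcome (no event 1 yet) is known
            score += (0.0 - pred_cif[i]) ** 2 / G(t)
        # subjects censored before t contribute nothing directly; their
        # probability mass is redistributed through the 1/G weights
    return score / n
```

With no censoring (G ≡ 1) and perfect predictions the score is zero, which is a useful sanity check for any implementation.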
Use of Vertically Integrated Ice in WRF-Based Forecasts of Lightning Threat
NASA Technical Reports Server (NTRS)
McCaul, E. W., jr.; Goodman, S. J.
2008-01-01
Previously reported methods of forecasting lightning threat using fields of graupel flux from WRF simulations are extended to include the simulated field of vertically integrated ice within storms. Although the ice integral shows less temporal variability than graupel flux, it provides more areal coverage, and can thus be used to create a lightning forecast that better matches the areal coverage of the lightning threat found in observations of flash extent density. A blended lightning forecast threat can be constructed that retains much of the desirable temporal sensitivity of the graupel flux method, while also incorporating the coverage benefits of the ice integral method. The graupel flux and ice integral fields contributing to the blended forecast are calibrated against observed lightning flash origin density data, based on Lightning Mapping Array observations from a series of case studies chosen to cover a wide range of flash rate conditions. Linear curve fits that pass through the origin are found to be statistically robust for the calibration procedures.
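The calibration-and-blending step can be sketched as follows. The proxy fields, observed rates, and blend weight are invented for illustration; only the zero-intercept linear fit is taken from the text:

```python
import numpy as np

def fit_through_origin(x, y):
    """Least-squares slope for y = a*x (line forced through the origin)."""
    return float(np.dot(x, y) / np.dot(x, x))

# Hypothetical co-located values of the two proxy fields and the observed
# flash origin density (arbitrary units).
graupel_flux = np.array([0.0, 1.0, 2.0, 4.0])
ice_integral = np.array([0.0, 2.0, 3.0, 5.0])
observed_rate = np.array([0.0, 2.1, 3.9, 8.2])

# Calibrate each proxy against the observations with a zero-intercept fit,
# then blend the two calibrated threats. The weight w is illustrative,
# not the value used in the study.
a1 = fit_through_origin(graupel_flux, observed_rate)
a2 = fit_through_origin(ice_integral, observed_rate)
w = 0.5
threat = w * a1 * graupel_flux + (1.0 - w) * a2 * ice_integral
```

Forcing the fit through the origin encodes the physical constraint that zero ice and zero graupel flux imply zero lightning threat.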
Analysis of bakery products by laser-induced breakdown spectroscopy.
Bilge, Gonca; Boyacı, İsmail Hakkı; Eseller, Kemal Efe; Tamer, Uğur; Çakır, Serhat
2015-08-15
In this study, we focused on the detection of Na in bakery products using laser-induced breakdown spectroscopy (LIBS) as a quick and simple method. LIBS experiments were performed to examine the Na line at 589 nm in order to quantify NaCl. A series of standard bread sample pellets containing various concentrations of NaCl (0.025-3.5%) were used to construct the calibration curves and to determine the detection limits of the measurements. Calibration graphs were constructed as functions of NaCl and Na concentration and showed good linearity in the ranges of 0.025-3.5% NaCl and 0.01-1.4% Na, with correlation coefficients (R²) greater than 0.98 and 0.96, respectively. The detection limits obtained for NaCl and Na were 175 and 69 ppm, respectively. These experiments showed that LIBS is a convenient, rapid, in situ method for quantifying NaCl concentrations in commercial bakery products. Copyright © 2015 Elsevier Ltd. All rights reserved.
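A generic univariate calibration of this kind, with a 3σ/slope detection limit, might look like the sketch below. The intensities, concentrations, and blank noise level are invented, not the paper's measurements:

```python
import numpy as np

# Hypothetical calibration data: emission-line intensity (a.u.) versus
# analyte concentration (% NaCl) for a set of standard pellets.
conc = np.array([0.025, 0.1, 0.5, 1.0, 2.0, 3.5])
intensity = np.array([120.0, 450.0, 2150.0, 4300.0, 8500.0, 14800.0])

# Linear calibration curve and its coefficient of determination.
slope, intercept = np.polyfit(conc, intensity, 1)
residuals = intensity - (slope * conc + intercept)
r2 = 1.0 - residuals.var() / intensity.var()

# Detection limit from the common 3*sigma_blank / slope criterion;
# the blank standard deviation is an assumed value.
sigma_blank = 25.0
lod = 3.0 * sigma_blank / slope  # in % NaCl
```

The same recipe (linear fit, R², 3σ/slope) is the usual way LIBS calibration curves and detection limits are reported.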
Fresh broad (Vicia faba) tissue homogenate-based biosensor for determination of phenolic compounds.
Ozcan, Hakki Mevlut; Sagiroglu, Ayten
2014-08-01
In this study, a novel fresh broad (Vicia faba) tissue homogenate-based biosensor for the determination of phenolic compounds was developed. The biosensor was constructed by immobilizing tissue homogenate of fresh broad (Vicia faba) onto a glassy carbon electrode. For the stability of the biosensor, the tissue homogenate was secured in a gelatin-glutaraldehyde cross-linking matrix. In the optimization and characterization studies, the amounts of tissue homogenate and gelatin, the glutaraldehyde percentage, optimum pH, optimum temperature, optimum buffer concentration, thermal stability, interference effects, linear range, storage stability, repeatability, and sample applications (wine, beer, fruit juices) were investigated. In addition, the detection ranges of thirteen phenolic compounds were obtained from the calibration graphs. A typical calibration curve for the sensor revealed a linear range of 5-60 μM catechol. In reproducibility studies, the coefficient of variation (CV) and standard deviation (SD) were calculated as 1.59% and 0.64×10⁻³ μM, respectively.
ERIC Educational Resources Information Center
Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill
Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…
Direct Breakthrough Curve Prediction From Statistics of Heterogeneous Conductivity Fields
NASA Astrophysics Data System (ADS)
Hansen, Scott K.; Haslauer, Claus P.; Cirpka, Olaf A.; Vesselinov, Velimir V.
2018-01-01
This paper presents a methodology to predict the shape of solute breakthrough curves in heterogeneous aquifers at early times and/or under high degrees of heterogeneity, both cases in which the classical macrodispersion theory may not be applicable. The methodology relies on the observation that breakthrough curves in heterogeneous media are generally well described by lognormal distributions, and mean breakthrough times can be predicted analytically. The log-variance of solute arrival is thus sufficient to completely specify the breakthrough curves, and this is calibrated as a function of aquifer heterogeneity and dimensionless distance from a source plane by means of Monte Carlo analysis and statistical regression. Using the ensemble of simulated groundwater flow and solute transport realizations employed to calibrate the predictive regression, reliability estimates for the prediction are also developed. Additional theoretical contributions include heuristics for the time until an effective macrodispersion coefficient becomes applicable, and an expression for its magnitude that applies in highly heterogeneous systems. The results represent a way to derive continuous time random walk transition distributions from physical considerations rather than from empirical field calibration.
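The core prediction step, reconstructing a breakthrough curve from its mean arrival time and log-variance under the lognormal assumption, can be sketched as below. The parameter values are hypothetical, not regression results from the paper:

```python
import numpy as np
from math import erf, log, sqrt

def lognormal_breakthrough(t, mean_time, log_var):
    """Cumulative breakthrough (fraction of mass arrived) at time t,
    assuming lognormally distributed arrival times."""
    sigma2 = log_var
    # Choose mu so the lognormal's arithmetic mean matches mean_time:
    # E[T] = exp(mu + sigma^2/2)  =>  mu = ln(E[T]) - sigma^2/2
    mu = log(mean_time) - sigma2 / 2.0
    return 0.5 * (1.0 + erf((log(t) - mu) / sqrt(2.0 * sigma2)))

# Example: mean breakthrough time of 100 time units, log-variance 0.5
# (the latter is what the paper's regression would supply).
times = np.linspace(1.0, 500.0, 5)
btc = [lognormal_breakthrough(t, mean_time=100.0, log_var=0.5) for t in times]
```

Only two numbers (mean time and log-variance) are needed, which is exactly why the regression-based calibration of the log-variance makes direct prediction possible.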
NASA Technical Reports Server (NTRS)
Everhart, Joel L.
1996-01-01
Orifice-to-orifice inconsistencies in data acquired with an electronically scanned pressure system at the beginning of a wind tunnel experiment forced modifications to the standard instrument-calibration procedures. These modifications included a large increase in the number of calibration points, which allowed a critical examination of the calibration curve-fit process and a subsequent post-test reduction of the pressure data. Evaluation of these data has resulted in an improved functional representation of the pressure-voltage signature for electronically scanned pressure sensors, which can reduce the errors due to calibration curve fit to under 0.10 percent of reading, compared to the manufacturer-specified 0.10 percent of full scale. Application of the improved calibration function allows a more rational selection of the calibration set-point pressures: these pressures should be adjusted to achieve a voltage output that matches the physical shape of the pressure-voltage signature of the sensor, in lieu of the more traditional approach in which a calibration pressure is specified and the resulting sensor voltage is recorded. The fifteen calibrations acquired over the two-week duration of the wind tunnel test were further used to perform a preliminary statistical assessment of the variation in the calibration process. The results allowed the estimation of the bias uncertainty for a single instrument calibration, and they form the precursor for more extensive and more controlled studies in the laboratory.
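The curve-fit evaluation idea, fitting pressure as a polynomial in sensor voltage and reporting the fit error as percent of reading rather than percent of full scale, can be sketched with synthetic data; the voltages, pressures, and fit order below are illustrative:

```python
import numpy as np

# Synthetic, slightly nonlinear pressure-voltage signature (kPa vs. volts).
volts = np.linspace(-4.0, 4.0, 15)
pressure = 50.0 + 12.0 * volts + 0.15 * volts**3

# Calibration curve fit: pressure as a polynomial in voltage.
coeffs = np.polyfit(volts, pressure, 4)
fitted = np.polyval(coeffs, volts)

# Percent-of-reading error, the metric the abstract argues for: the error
# is normalized by each local reading, not by the full-scale range.
pct_of_reading = 100.0 * np.abs(fitted - pressure) / np.abs(pressure)
worst = pct_of_reading.max()
```

Percent-of-reading is the stricter metric at low pressures, where a percent-of-full-scale specification hides large relative errors.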
Fast and robust curve skeletonization for real-world elongated objects
USDA-ARS?s Scientific Manuscript database
These datasets were generated for calibrating robot-camera systems. In an extension, we also considered the problem of calibrating robots with more than one camera. These datasets are provided as a companion to the paper, "Solving the Robot-World Hand-Eye(s) Calibration Problem with Iterative Meth...
Takegami, Kazuki; Hayashi, Hiroaki; Okino, Hiroki; Kimoto, Natsumi; Maehata, Itsumi; Kanazawa, Yuki; Okazaki, Tohru; Kobayashi, Ikuo
2015-07-01
For X-ray diagnosis, proper management of the entrance skin dose (ESD) is important. Recently, a small optically stimulated luminescence dosimeter (nanoDot OSL dosimeter) was made commercially available by Landauer, and it is hoped that it will be used for ESD measurements in clinical settings. Our objectives in the present study were to propose a method for calibrating the ESD measured with the nanoDot OSL dosimeter and to evaluate its accuracy. The reference ESD is assumed to be based on an air kerma with consideration of a well-known backscatter factor. We examined the characteristics of the nanoDot OSL dosimeter under two experimental conditions: free-in-air irradiation to derive the air kerma, and a phantom experiment to determine the ESD. To evaluate the ability to measure the ESD, a calibration curve for the nanoDot OSL dosimeter was determined in which the air kerma and/or the ESD measured with an ionization chamber were used as references. As a result, we found that the calibration curve for the air kerma was determined with an accuracy of 5%. Furthermore, the calibration curve was applied to the ESD estimation; the accuracy of the ESD obtained was estimated to be 15%. The origin of these uncertainties was examined based on published papers and Monte Carlo simulation. Most of the uncertainty was caused by the systematic uncertainty of the reading system and by differences in efficiency at different X-ray energies.
NASA Astrophysics Data System (ADS)
de Jesus, Alexandre; Zmozinski, Ariane Vanessa; Damin, Isabel Cristina Ferreira; Silva, Márcia Messias; Vale, Maria Goreti Rodrigues
2012-05-01
In this work, a direct sampling graphite furnace atomic absorption spectrometry method has been developed for the determination of arsenic and cadmium in crude oil samples. The samples were weighed directly on the solid sampling platforms and introduced into the graphite tube for analysis. The chemical modifier used for both analytes was a mixture of 0.1% Pd + 0.06% Mg + 0.06% Triton X-100. Pyrolysis and atomization curves were obtained for both analytes using standards and samples. Calibration curves with aqueous standards could be used for both analytes. The limits of detection obtained were 5.1 μg kg⁻¹ for arsenic and 0.2 μg kg⁻¹ for cadmium, calculated for the maximum amount of sample that can be analyzed (8 mg for arsenic and 10 mg for cadmium). Relative standard deviations lower than 20% were obtained. For validation purposes, a calibration curve was constructed with the SRM 1634c and aqueous standards for arsenic, and the results obtained for several crude oil samples were in agreement according to a paired t-test. The result obtained for the determination of arsenic in the SRM against aqueous standards was also in agreement with the certified value. As there is no crude oil or similar reference material available with a certified value for cadmium, digestion in an open vessel under reflux using a "cold finger" was adopted for validation purposes. A paired t-test showed that the results obtained by direct sampling and digestion were in agreement at the 95% confidence level. Recovery tests were carried out with inorganic and organic standards, and the results were between 88% and 109%. The proposed method is simple, fast, and reliable, being appropriate for routine analysis.
NASA Astrophysics Data System (ADS)
da Silva, Rodrigo; Pearce, Jonathan V.; Machin, Graham
2017-06-01
The fixed points of the International Temperature Scale of 1990 (ITS-90) are the basis of the calibration of standard platinum resistance thermometers (SPRTs). Impurities in the fixed point material at the level of parts per million can give rise to an elevation or depression of the fixed point temperature of the order of millikelvins, which often represents the most significant contribution to the uncertainty of SPRT calibrations. A number of methods for correcting for the effect of impurities have been advocated, but it is becoming increasingly evident that no single method can be used in isolation. In this investigation, a suite of five aluminium fixed point cells (defined ITS-90 freezing temperature 660.323 °C) has been constructed, each cell using metal sourced from a different supplier. The five cells have very different levels and types of impurities. For each cell, chemical assays based on the glow discharge mass spectroscopy (GDMS) technique have been obtained from three separate laboratories. In addition, a series of high quality, long duration freezing curves have been obtained for each cell, using three different high quality SPRTs, all measured under nominally identical conditions. The set of GDMS analyses and freezing curves were then used to compare the different proposed impurity correction methods. It was found that the most consistent corrections were obtained with a hybrid correction method based on the sum of individual estimates (SIE) and overall maximum estimate (OME), namely the SIE/Modified-OME method. Also highly consistent was the correction technique based on fitting a Scheil solidification model to the measured freezing curves, provided certain well-defined constraints are applied. Importantly, the most consistent methods are those which do not depend significantly on the chemical assay.
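The SIE part of the hybrid correction can be sketched in a few lines: each impurity's mole fraction (from a GDMS assay) is multiplied by its liquidus slope and the contributions are summed. The impurity list, mole fractions, and slopes below are invented for illustration, not assay results from the study:

```python
# Sum-of-individual-estimates (SIE) sketch for a freezing-point correction.
# Each entry: impurity -> (mole fraction, liquidus slope in mK per unit
# mole fraction). All numbers are hypothetical placeholders.
impurities = {
    "Fe": (2.0e-7, -7.0e5),
    "Si": (5.0e-7, -3.0e5),
    "Cu": (1.0e-7, -2.5e5),
}

# Total estimated depression of the freezing temperature (mK); negative
# values mean the observed plateau lies below the ideal fixed point.
delta_T_mK = sum(x * m for x, m in impurities.values())

# The correction applied to the observed freezing temperature.
correction_mK = -delta_T_mK
```

In the hybrid SIE/Modified-OME method this assay-based sum is combined with an overall bound for impurities the assay cannot resolve.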
Bennett, B. N.; Martin, M. Z.; Leonard, D. N.; ...
2018-02-13
Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a SciAps Z-500, commercially available, that contains a class-1 solid-state laser with an output wavelength of 1532 nm, a laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys, and data were collected from the samples' surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument's ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements at the subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.
NASA Astrophysics Data System (ADS)
Bennett, B. N.; Martin, M. Z.; Leonard, D. N.; Garlea, E.
2018-03-01
Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a SciAps Z-500, commercially available, that contains a class-1 solid-state laser with an output wavelength of 1532 nm, laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys and data were collected from the samples' surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument's ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements related to subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.
Nieć, Dawid; Kunicki, Paweł K
2015-10-01
Measurements of plasma concentrations of free normetanephrine (NMN), metanephrine (MN) and methoxytyramine (MTY) constitute the most diagnostically accurate screening test for pheochromocytomas and paragangliomas. The aim of this article is to present the results from a validation of an analytical method utilizing high-performance liquid chromatography with coulometric detection (HPLC-CD) for quantifying plasma free NMN, MN and MTY. Additionally, peak integration by height and by area, and the use of a single calibration curve for all batches versus an individual calibration curve for each batch of samples, were explored to determine the optimal approach with regard to accuracy and precision. The method was validated using charcoal-stripped plasma spiked with solutions of NMN, MN, MTY and internal standard (4-hydroxy-3-methoxybenzylamine), with the exception of selectivity, which was evaluated by analysis of real plasma samples. Calibration curve performance, accuracy, precision and recovery were determined following both peak-area and peak-height measurements, and the results were compared. The most accurate and precise method of calibration was evaluated by analyzing quality control samples at three concentration levels in 30 analytical runs. The detector response was linear over the entire tested concentration range from 10 to 2000 pg/mL with R² ≥ 0.9988. The LLOQ was 10 pg/mL for each analyte of interest. To improve accuracy at low concentrations, a weighted (1/amount) linear regression model was employed, which resulted in inaccuracies of -2.48 to 9.78% and 0.22 to 7.81% following peak-area and peak-height integration, respectively. The imprecision ranged from 1.07 to 15.45% and from 0.70 to 11.65% for peak-area and peak-height measurements, respectively. The optimal approach to calibration was the one utilizing an individual calibration curve for each batch of samples and peak-height measurements. It was characterized by inaccuracies ranging from -3.39 to +3.27% and imprecision from 2.17 to 13.57%. The established HPLC-CD method enables accurate and precise measurement of plasma free NMN, MN and MTY with reasonable selectivity. Preparing a calibration curve based on peak-height measurements for each batch of samples yields optimal accuracy and precision. Copyright © 2015. Published by Elsevier B.V.
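The weighted (1/amount) calibration described above can be sketched as follows. Concentrations and peak heights are invented; note that NumPy's `polyfit` weights multiply the residuals before squaring, so 1/x variance weighting is passed as sqrt(1/x):

```python
import numpy as np

# Hypothetical calibration standards: concentration (pg/mL) vs. peak height.
conc = np.array([10.0, 50.0, 100.0, 500.0, 1000.0, 2000.0])
peak = np.array([0.9, 5.2, 10.1, 49.0, 101.5, 198.0])

# Weighted (1/x) linear regression: larger weights at low concentration
# keep back-calculated values accurate near the LLOQ.
w = 1.0 / conc
slope, intercept = np.polyfit(conc, peak, 1, w=np.sqrt(w))

def back_calc(signal):
    """Concentration back-calculated from a measured peak height."""
    return (signal - intercept) / slope
```

With unweighted regression, the high-concentration standards would dominate the fit and the low end of the curve would drift, which is exactly the inaccuracy the 1/x weighting is meant to suppress.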
Can hydraulic-modelled rating curves reduce uncertainty in high flow data?
NASA Astrophysics Data System (ADS)
Westerberg, Ida; Lam, Norris; Lyon, Steve W.
2017-04-01
Flood risk assessments rely on accurate discharge data records. Establishing a reliable rating curve for calculating discharge from stage at a gauging station normally takes years of data collection. Estimation of high flows is particularly difficult, as high flows occur rarely and are often practically difficult to gauge. Hydraulically-modelled rating curves can be derived from as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions. This means that a reliable rating curve can potentially be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. In this study we compared the uncertainty in discharge data that resulted from these two rating curve modelling approaches. We applied both methods to a Swedish catchment, accounting for uncertainties in the stage-discharge gauging and water-surface slope data for the hydraulic model, and in the stage-discharge gauging data and rating-curve parameters for the traditional method. We focused our analyses on high-flow uncertainty and the factors that could reduce this uncertainty. In particular, we investigated which data uncertainties were most important, and at what flow conditions the gaugings should preferably be taken. First results show that the hydraulically-modelled rating curves were more sensitive to uncertainties in the calibration measurements of discharge than to those in water-surface slope. The uncertainty of the hydraulically-modelled rating curves was lowest within the range of the three calibration stage-discharge gaugings (i.e. between median and two-times-median flow), whereas uncertainties were higher outside this range. For instance, at the highest observed stage of the 24-year stage record, the 90% uncertainty band was -15% to +40% of the official rating curve. Additional gaugings at high flows (i.e. four to five times median flow) would likely substantially reduce those uncertainties.
These first results show the potential of the hydraulically-modelled curves, particularly where the calibration gaugings are of high quality and cover a wide range of flow conditions.
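For contrast with the hydraulic approach, the traditional power-law rating curve Q = a(h − h0)^b fitted to a handful of gaugings can be sketched as below. The stages, discharges, and cease-to-flow stage h0 are invented; with h0 fixed, the fit is linear in log space:

```python
import numpy as np

h0 = 0.20  # cease-to-flow stage (m), assumed known here for simplicity
stage = np.array([0.55, 0.90, 1.40, 2.10])      # gauged stages (m)
discharge = np.array([0.8, 3.5, 11.0, 32.0])    # gauged discharges (m3/s)

# Fit ln(Q) = b * ln(h - h0) + ln(a), i.e. a straight line in log space.
b, log_a = np.polyfit(np.log(stage - h0), np.log(discharge), 1)
a = np.exp(log_a)

def rating_curve(h):
    """Discharge (m3/s) estimated from stage h (m)."""
    return a * (h - h0) ** b
```

Extrapolating such a curve beyond the highest gauging is exactly where the large high-flow uncertainties discussed above arise; in practice h0 is usually estimated too, which adds a further parameter uncertainty.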
Grünhut, Marcos; Garrido, Mariano; Centurión, Maria E; Fernández Band, Beatriz S
2010-07-12
A combination of kinetic spectroscopic monitoring and multivariate curve resolution-alternating least squares (MCR-ALS) was proposed for the enzymatic determination of levodopa (LVD) and carbidopa (CBD) in pharmaceuticals. The enzymatic reaction was carried out in a reverse stopped-flow injection system and monitored by UV-vis spectroscopy. The spectra (292-600 nm) were recorded throughout the reaction and analyzed by MCR-ALS. A small calibration matrix containing nine mixtures was used in the model construction. Additionally, to evaluate the prediction ability of the model, a set of six validation mixtures was used. The lack of fit obtained was 4.3%, the explained variance 99.8%, and the overall prediction error 5.5%. Tablets of commercial samples were analyzed, and the results were validated against the pharmacopoeia method (high-performance liquid chromatography). No significant differences were found (α = 0.05) between the reference values and those obtained with the proposed method. Notably, a single chemometric model made it possible to determine both analytes simultaneously. Copyright 2010 Elsevier B.V. All rights reserved.
Cscibox: A Software System for Age-Model Construction and Evaluation
NASA Astrophysics Data System (ADS)
Bradley, E.; Anderson, K. A.; Marchitto, T. M., Jr.; de Vesine, L. R.; White, J. W. C.; Anderson, D. M.
2014-12-01
CSciBox is an integrated software system for the construction and evaluation of age models of paleo-environmental archives, both directly dated and cross-dated. The time has come to encourage cross-pollination between earth science and computer science in dating paleorecords, and this project addresses that need. The CSciBox code, which is being developed by a team of computer scientists and geoscientists, is open source and freely available on github. The system employs modern database technology to store paleoclimate proxy data and analysis results in an easily accessible and searchable form. This makes it possible to analyze the whole core at once, in an interactive fashion, or to tailor the analysis to a subset of the core without loading the entire data file. CSciBox provides a number of 'components' that perform the common steps in age-model construction and evaluation: calibrations, reservoir-age correction, interpolations, statistics, and so on. The user employs these components via a graphical user interface (GUI) to go from raw data to finished age model in a single tool: e.g., an IntCal09 calibration of 14C data from a marine sediment core, followed by a piecewise-linear interpolation. CSciBox's GUI supports plotting of any measurement in the core against any other measurement, or against any of the variables in the calculation of the age model, with or without explicit error representations. Using the GUI, CSciBox's user can import a new calibration curve or other background data set and define a new module that employs that information. Users can also incorporate other software (e.g., Calib, BACON) as 'plug-ins.' In the case of truly large data or significant computational effort, CSciBox is parallelizable across modern multicore processors, clusters, or even the cloud.
The next generation of the CSciBox code, currently in the testing stages, includes an automated reasoning engine that supports a more-thorough exploration of plausible age models and cross-dating scenarios.
Troncoso, N; Sierra, H; Carvajal, L; Delpiano, P; Günther, G
2005-12-23
An improved HPLC method is reported for the determination of rosemary's principal phenolic antioxidants, rosmarinic and carnosic acids, providing a fast and simultaneous determination for both of them by using a solid phase column. The analysis was performed with fresh methanolic extractions of Rosmarinus officinalis. To quantify the amount of antioxidants in a fast and reproducible way by means of UV-vis absorption measurements, a spectrophotometric multi-wavelength calibration curve was constructed based on the antioxidant contents obtained with the recently developed HPLC method. This UV-vis methodology can be extended to the determination of other compounds and herbs if the restrictions mentioned in the text are respected.
Calibrant-Free Analyte Quantitation via a Variable Velocity Flow Cell.
Beck, Jason G; Skuratovsky, Aleksander; Granger, Michael C; Porter, Marc D
2017-01-17
In this paper, we describe a novel method for analyte quantitation that does not rely on calibrants, internal standards, or calibration curves but, rather, leverages the relationship between disparate and predictable surface-directed analyte flux to an array of sensing addresses and a measured resultant signal. To reduce this concept to practice, we fabricated two flow cells such that the mean linear fluid velocity, U, was varied systematically over an array of electrodes positioned along the flow axis. This resulted in a predictable variation of the address-directed flux of a redox analyte, ferrocenedimethanol (FDM). The resultant limiting currents measured at a series of these electrodes, and accurately described by a convective-diffusive transport model, provided a means to calculate an "unknown" concentration without the use of calibrants, internal standards, or a calibration curve. Furthermore, the experiment and concentration calculation only takes minutes to perform. Deviation in calculated FDM concentrations from true values was minimized to less than 0.5% when empirically derived values of U were employed.
Taverniers, Isabel; Van Bockstaele, Erik; De Loose, Marc
2004-03-01
Analytical real-time PCR technology is a powerful tool for implementation of the GMO labeling regulations enforced in the EU. The quality of analytical measurement data obtained by quantitative real-time PCR depends on the correct use of calibrator and reference materials (RMs). For GMO methods of analysis, the choice of appropriate RMs is currently under debate. So far, genomic DNA solutions from certified reference materials (CRMs) are most often used as calibrators for GMO quantification by means of real-time PCR. However, due to some intrinsic features of these CRMs, errors may be expected in the estimation of DNA sequence quantities. In this paper, two new real-time PCR methods are presented for Roundup Ready soybean, in which two types of plasmid DNA fragments are used as calibrators. Single-target plasmids (STPs) diluted in a background of genomic DNA were used in the first method. Multiple-target plasmids (MTPs) containing both sequences in one molecule were used as calibrators for the second method. Both methods simultaneously detect a promoter 35S sequence as the GMO-specific target and a lectin gene sequence as the endogenous reference target in a duplex PCR. For the estimation of relative GMO percentages, both "delta CT" and "standard curve" approaches were tested. Delta CT methods are based on direct comparison of the measured CT values of the GMO-specific target and the endogenous target. Standard curve methods measure absolute amounts of target copies or haploid genome equivalents. A duplex delta CT method with STP calibrators performed at least as well as a similar method with genomic DNA calibrators from commercial CRMs. In addition, high-quality results were obtained with a standard curve method using MTP calibrators. This paper demonstrates that plasmid DNA molecules containing either one or multiple target sequences are excellent alternative calibrators for GMO quantification and are especially suitable for duplex PCR reactions.
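The delta-CT estimate of relative GMO content can be sketched in a few lines. The CT values and the assumption of perfect doubling per cycle (E = 2) are illustrative:

```python
# Sketch of a duplex delta-CT GMO estimate: the GMO-specific target (P35S)
# and the endogenous reference (lectin) are measured in the same reaction,
# and the CT difference is converted to a relative amount assuming an
# amplification efficiency E per cycle.
def gmo_percent(ct_gmo, ct_ref, efficiency=2.0):
    """Relative GMO content (%) from a delta-CT measurement."""
    delta_ct = ct_gmo - ct_ref
    return 100.0 * efficiency ** (-delta_ct)

# Example: the GMO target crosses its threshold ~6.6 cycles after the
# reference, consistent with roughly 1% GMO at perfect doubling.
pct = gmo_percent(ct_gmo=30.6, ct_ref=24.0)
```

A standard-curve method would instead interpolate each target's CT on a dilution-series calibration curve and take the ratio of the resulting copy numbers.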
Monitoring of toxic elements present in sludge of industrial waste using CF-LIBS.
Kumar, Rohit; Rai, Awadhesh K; Alamelu, Devanathan; Aggarwal, Suresh K
2013-01-01
Industrial waste is one of the main causes of environmental pollution. Laser-induced breakdown spectroscopy (LIBS) was applied to detect toxic metals in the sludge of industrial waste water. Sludge on filter paper was obtained after filtering waste water samples collected from different sections of a water treatment plant situated in an industrial area of Kanpur City. The LIBS spectra of the sludge samples were recorded in the spectral range of 200 to 500 nm by focusing the laser light on the sludge. The calibration-free laser-induced breakdown spectroscopy (CF-LIBS) technique was used for the quantitative measurement of toxic elements such as Cr and Pb present in the samples. We also used the traditional calibration curve approach to quantify these elements. The results obtained from CF-LIBS are in good agreement with those from the calibration curve approach. Thus, our results demonstrate that CF-LIBS is an appropriate technique for quantitative analysis where reference/standard samples are not available for constructing a calibration curve. The results of the present experiment are alarming for people living in areas near these industrial activities, as the concentrations of toxic elements are well above the admissible limits.
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn; Steinsland, Ingelin
2016-04-01
The aim of this study is to investigate how the inclusion of uncertainties in inputs and observed streamflow influences parameter estimation, streamflow predictions and model evaluation. In particular, we wanted to answer the following research questions: • What is the effect of including a random error in the precipitation and temperature inputs? • What is the effect of decreased information about precipitation by excluding the nearest precipitation station? • What is the effect of the uncertainty in streamflow observations? • What is the effect of reduced information about the true streamflow by using a rating curve estimated with the highest and lowest streamflow measurements excluded? To answer these questions, we designed a set of calibration experiments and evaluation strategies. We used the elevation-distributed HBV model operating on daily time steps combined with a Bayesian formulation and the MCMC routine Dream for parameter inference. The uncertainties in inputs were represented by creating ensembles of precipitation and temperature. The precipitation ensembles were created using a meta-Gaussian random field approach. The temperature ensembles were created using 3D Bayesian kriging with random sampling of the temperature lapse rate. The streamflow ensembles were generated by a Bayesian multi-segment rating curve model. Precipitation and temperatures were randomly sampled for every day, whereas the streamflow ensembles were generated from rating curve ensembles, and the same rating curve was always used for the whole time series in a calibration or evaluation run. We chose a catchment with a meteorological station measuring precipitation and temperature, and a rating curve of relatively high quality. This allowed us to investigate and further test the effect of having less information on precipitation and streamflow during model calibration, predictions and evaluation.
The results showed that including uncertainty in the precipitation and temperature inputs has a negligible effect on the posterior distribution of parameters and on the Nash-Sutcliffe (NS) efficiency of the predicted flows, while the reliability and the continuous rank probability score (CRPS) improve. Reduced information in the precipitation input resulted in a shift in the water balance parameter Pcorr and a model producing smoother streamflow predictions, giving poorer NS and CRPS but higher reliability. The effect of calibrating the hydrological model using wrong rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions obtained using a wrong rating curve, the evaluation scores vary depending on the true rating curve. Generally, the best evaluation scores were achieved not for the rating curve used for calibration, but for rating curves giving low variance in the streamflow observations. Reduced information in streamflow influenced the water balance parameter Pcorr and increased the spread in evaluation scores, giving both better and worse scores. This case study shows that estimating the water balance is challenging, since both precipitation inputs and streamflow observations have a pronounced systematic component in their uncertainties.
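The Nash-Sutcliffe efficiency used to score the streamflow predictions above is a standard skill measure: 1.0 is a perfect fit, and 0.0 means the model is no better than predicting the observed mean. A minimal implementation, with illustrative series:

```python
# Nash-Sutcliffe (NS) efficiency of a simulated streamflow series against
# observations. Both series below are synthetic examples.

def nash_sutcliffe(obs, sim):
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))   # model error
    var = sum((o - mean_obs) ** 2 for o in obs)          # benchmark error
    return 1.0 - sse / var

obs = [1.0, 3.0, 5.0, 4.0, 2.0]   # observed discharge
sim = [1.2, 2.7, 4.8, 4.1, 2.2]   # simulated discharge
ns = nash_sutcliffe(obs, sim)
```

The CRPS and reliability scores mentioned in the abstract additionally reward well-calibrated predictive spread, which is why they can improve even when NS does not.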
An extended CFD model to predict the pumping curve in low pressure plasma etch chamber
NASA Astrophysics Data System (ADS)
Zhou, Ning; Wu, Yuanhao; Han, Wenbin; Pan, Shaowu
2014-12-01
A continuum-based CFD model is extended with a slip-wall approximation and a rarefaction correction to viscosity, in an attempt to predict the pumping flow characteristics in low pressure plasma etch chambers. The flow regime inside the chamber ranges from slip flow (Kn ≈ 0.01) up to free molecular flow (Kn ≈ 10). The momentum accommodation coefficient and the parameters of the Kn-modified viscosity are first calibrated against one set of measured pumping curves. The validity of the calibrated CFD model is then demonstrated by comparison with additional pumping curves measured in chambers of different geometry configurations. A more detailed comparison against a DSMC model for flow conductance over slits with contraction and expansion sections is also discussed.
NASA Astrophysics Data System (ADS)
Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.
2017-11-01
Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and a better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly-correlated stochastic noise are more insidious and less attention is drawn to them, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors thanks to some averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and the range continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
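The core mechanism described above, that zero-mean noise produces a nonzero systematic RSP error at an angular point of the calibration curve, follows from Jensen-type reasoning: for a kinked curve f, E[f(HU + n)] differs from f(HU) even when E[n] = 0. The toy two-segment curve and noise level below are simplified stand-ins, not the stoichiometric curve from the paper.

```python
# Numerical illustration: at a slope change ("angular point") of a
# piecewise-linear HU-to-RSP curve, zero-mean Gaussian CT noise yields a
# systematic bias in the effective RSP. Curve and sigma are toy values.
import random

def rsp(hu):
    # Toy two-segment calibration curve with a kink at HU = 0
    return 1.0 + 0.001 * hu if hu < 0 else 1.0 + 0.0015 * hu

random.seed(1)
sigma = 30.0                              # zero-mean CT noise std dev (HU)
noisy = [rsp(random.gauss(0.0, sigma)) for _ in range(200000)]
effective = sum(noisy) / len(noisy)       # noise-dependent "effective" RSP
bias = effective - rsp(0.0)               # systematic error, despite E[n]=0
```

Analytically the bias here is (0.0015 - 0.001) * sigma / sqrt(2*pi), about 0.006, illustrating why the paper's noise-dependent effective calibration curve is needed.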
Forzley, Brian; Er, Lee; Chiu, Helen Hl; Djurdjev, Ognjenka; Martinusen, Dan; Carson, Rachel C; Hargrove, Gaylene; Levin, Adeera; Karim, Mohamud
2018-02-01
End-stage kidney disease is associated with poor prognosis. Health care professionals must be prepared to address end-of-life issues and identify those at high risk for dying. A 6-month mortality prediction model for patients on dialysis derived in the United States is used but has not been externally validated. We aimed to assess the external validity and clinical utility in an independent cohort in Canada. We examined the performance of the published 6-month mortality prediction model, using discrimination, calibration, and decision curve analyses. Data were derived from a cohort of 374 prevalent dialysis patients in two regions of British Columbia, Canada, which included serum albumin, age, peripheral vascular disease, dementia, and answers to the "the surprise question" ("Would I be surprised if this patient died within the next year?"). The observed mortality in the validation cohort was 11.5% at 6 months. The prediction model had reasonable discrimination (c-stat = 0.70) but poor calibration (calibration-in-the-large = -0.53 (95% confidence interval: -0.88, -0.18); calibration slope = 0.57 (95% confidence interval: 0.31, 0.83)) in our data. Decision curve analysis showed the model only has added value in guiding clinical decision in a small range of threshold probabilities: 8%-20%. Despite reasonable discrimination, the prediction model has poor calibration in this external study cohort; thus, it may have limited clinical utility in settings outside of where it was derived. Decision curve analysis clarifies limitations in clinical utility not apparent by receiver operating characteristic curve analysis. This study highlights the importance of external validation of prediction models prior to routine use in clinical practice.
Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Joshi, H; Saunderson, J R; Beavis, A W
2016-11-07
The use of three physical image quality metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQ m ) have recently been examined by our group for their appropriateness in the calibration of an automatic exposure control (AEC) device for chest radiography with an Agfa computed radiography (CR) imaging system. This study uses the same methodology but investigates AEC calibration for abdomen, pelvis and spine CR imaging. AEC calibration curves were derived using a simple uniform phantom (equivalent to 20 cm water) to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated abdomen, pelvis and spine images (created from real patient CT datasets) with appropriate detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated images contained clinically realistic projected anatomy and were scored by experienced image evaluators. Constant DDI and CNR curves did not provide optimized performance but constant eNEQ m and SNR did, with the latter being the preferred calibration metric given that it is easier to measure in practice. This result was consistent with the previous investigation for chest imaging with AEC devices. Medical physicists may therefore use a simple and easily accessible uniform water equivalent phantom to measure the SNR image quality metric described here when calibrating AEC devices for abdomen, pelvis and spine imaging with Agfa CR systems, in the confidence that clinical image quality will be sufficient for the required clinical task. However, to ensure appropriate levels of detector air kerma the advice of expert image evaluators must be sought.
X-ray Diffraction Crystal Calibration and Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael J. Haugh; Richard Stewart; Nathan Kugland
2009-06-05
National Security Technologies’ X-ray Laboratory is comprised of a multi-anode Manson type source and a Henke type source that incorporates a dual goniometer and XYZ translation stage. The first goniometer is used to isolate a particular spectral band. The Manson operates up to 10 kV and the Henke up to 20 kV. The Henke rotation stages and translation stages are automated. Procedures have been developed to characterize and calibrate various NIF diagnostics and their components. The diagnostics include X-ray cameras, gated imagers, streak cameras, and other X-ray imaging systems. Components that have been analyzed include filters, filter arrays, grazing incidence mirrors, and various crystals, both flat and curved. Recent efforts on the Henke system are aimed at characterizing and calibrating imaging crystals and curved crystals used as the major component of an X-ray spectrometer. The presentation will concentrate on these results. The work has been done at energies ranging from 3 keV to 16 keV. The major goal was to evaluate the performance quality of the crystal for its intended application. For the imaging crystals we measured the laser beam reflection offset from the X-ray beam and the reflectivity curves. For the curved spectrometer crystal, which was a natural crystal, resolving power was critical. It was first necessary to find sources of crystals that had sufficiently narrow reflectivity curves. It was then necessary to determine which crystals retained their resolving power after being thinned and glued to a curved substrate.
Nuclear Gauge Calibration and Testing Guidelines for Hawaii
DOT National Transportation Integrated Search
2006-12-15
Project proposal brief: AASHTO and ASTM nuclear gauge testing procedures can lead to misleading density and moisture readings for certain Hawaiian soils. Calibration curves need to be established for these unique materials, along with clear standard ...
A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object
NASA Astrophysics Data System (ADS)
Winkler, A. W.; Zagar, B. G.
2013-08-01
An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. To this end, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquired image. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.
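The least-squares curve-fitting idea above can be illustrated with the simplest analogous primitive: an algebraic (Kasa) circle fit to outline points. The actual method fits parametrized curves of the projected cylinder outline; the circle here is a stand-in chosen only because its normal equations are compact.

```python
# Algebraic circle fit: minimize sum (x^2 + y^2 + D*x + E*y + F)^2 over
# (D, E, F), then recover centre and radius. A stand-in for the paper's
# parametrized-curve fit; points below are a synthetic outline.
import math

def fit_circle(pts):
    A = [[0.0] * 3 for _ in range(3)]   # normal-equation matrix
    b = [0.0] * 3
    for x, y in pts:
        row = (x, y, 1.0)
        z = -(x * x + y * y)
        for i in range(3):
            b[i] += row[i] * z
            for j in range(3):
                A[i][j] += row[i] * row[j]
    # Solve the 3x3 system by Gaussian elimination
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            for k in range(3):
                A[j][k] -= f * A[i][k]
            b[j] -= f * b[i]
    sol = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        sol[i] = (b[i] - sum(A[i][j] * sol[j] for j in range(i + 1, 3))) / A[i][i]
    cx, cy = -sol[0] / 2.0, -sol[1] / 2.0
    r = math.sqrt(cx * cx + cy * cy - sol[2])
    return cx, cy, r

# Synthetic outline: 16 points on a circle of radius 1.5 centred at (2, -1)
pts = [(2.0 + 1.5 * math.cos(k * math.pi / 8),
        -1.0 + 1.5 * math.sin(k * math.pi / 8)) for k in range(16)]
cx, cy, r = fit_circle(pts)
```

A geometric (orthogonal-distance) fit, closer to what an adaptive least-squares scheme would iterate on, would take this algebraic solution as its starting point.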
NASA Astrophysics Data System (ADS)
Hulsman, P.; Bogaard, T.; Savenije, H. H. G.
2016-12-01
In hydrology and water resources management, discharge is the main time series for model calibration. Rating curves are needed to derive discharge from continuously measured water levels. However, assuring their quality is demanding due to dynamic changes and problems in accurately deriving discharge at high flows. This is valid everywhere, but even more so in the African socio-economic context. To cope with these uncertainties, this study proposes to use water levels instead of discharge data for calibration. Uncertainties in rainfall measurements, especially their spatial heterogeneity, also need to be considered. In this study, the semi-distributed rainfall runoff model FLEX-Topo was applied to the Mara River Basin. In this model seven sub-basins were distinguished, along with four hydrological response units, each with a unique model structure based on the expected dominant flow processes. Parameter and process constraints were applied to exclude unrealistic results. To calibrate the model, water levels were back-calculated from modelled discharges using cross-section data and the Strickler formula, with the lumped term k·S^(1/2) as the calibration parameter, and compared to measured water levels. The model simulated the water depths well for the entire basin and for the Nyangores sub-basin in the north. However, the calibrated and observed rating curves differed significantly at the basin outlet, probably due to uncertainties in the measured discharge, whereas at Nyangores they were almost identical. To assess the effect of rainfall uncertainties on the hydrological model, the representative rainfall in each sub-basin was estimated with three different methods: 1) single station, 2) average precipitation, 3) areal sub-division using Thiessen polygons. All three methods gave on average similar results, but method 1 resulted in more flashy responses, method 2 dampened the water levels due to averaging of the rainfall, and method 3 was a combination of both.
In conclusion, in the case of unreliable rating curves, water level data can be used instead and a new rating curve can be calibrated. The effect of rainfall uncertainties on the hydrological model was insignificant.
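The back-calculation of water level from modelled discharge described above rests on the Strickler (Manning) relation Q = (k·sqrt(S)) · A · R^(2/3), with the product k·sqrt(S) as a single calibration parameter. Below is a sketch for an assumed wide rectangular cross-section; the channel width and parameter values are invented, and the real study used surveyed cross-section data.

```python
# Water level from discharge via the Strickler formula, inverted by
# bisection. Rectangular cross-section, width, and k*sqrt(S) are assumptions.

def discharge(h, width, ks_sqrt_s):
    area = width * h
    radius = area / (width + 2.0 * h)        # hydraulic radius A / P
    return ks_sqrt_s * area * radius ** (2.0 / 3.0)

def water_level(q, width=20.0, ks_sqrt_s=5.0, hi=20.0):
    lo = 0.0
    for _ in range(60):                      # discharge is monotone in h
        mid = 0.5 * (lo + hi)
        if discharge(mid, width, ks_sqrt_s) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

h = water_level(q=50.0)                      # depth reproducing Q = 50 m3/s
```

Calibrating against observed water levels then means tuning ks_sqrt_s (together with the hydrological model parameters) so that these back-calculated depths match the gauge record.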
Zastrow, Stefan; Brookman-May, Sabine; Cong, Thi Anh Phuong; Jurk, Stanislaw; von Bar, Immanuel; Novotny, Vladimir; Wirth, Manfred
2015-03-01
To predict outcome of patients with renal cell carcinoma (RCC) who undergo surgical therapy, risk models and nomograms are valuable tools. External validation on independent datasets is crucial for evaluating accuracy and generalizability of these models. The objective of the present study was to externally validate the postoperative nomogram developed by Karakiewicz et al. for prediction of cancer-specific survival. A total of 1,480 consecutive patients with a median follow-up of 82 months (IQR 46-128) were included into this analysis with 268 RCC-specific deaths. Nomogram-estimated survival probabilities were compared with survival probabilities of the actual cohort, and concordance indices were calculated. Calibration plots and decision curve analyses were used for evaluating calibration and clinical net benefit of the nomogram. Concordance between predictions of the nomogram and survival rates of the cohort was 0.911 after 12, 0.909 after 24 months and 0.896 after 60 months. Comparison of predicted probabilities and actual survival estimates with calibration plots showed an overestimation of tumor-specific survival based on nomogram predictions of high-risk patients, although calibration plots showed a reasonable calibration for probability ranges of interest. Decision curve analysis showed a positive net benefit of nomogram predictions for our patient cohort. The postoperative Karakiewicz nomogram provides a good concordance in this external cohort and is reasonably calibrated. It may overestimate tumor-specific survival in high-risk patients, which should be kept in mind when counseling patients. A positive net benefit of nomogram predictions was proven.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrison, H; Menon, G; Sloboda, R
The purpose of this study was to investigate the accuracy of radiochromic film calibration procedures used in external beam radiotherapy when applied to I-125 brachytherapy sources delivering higher doses, and to determine any necessary modifications to achieve similar accuracy in absolute dose measurements. GafChromic EBT3 film was used to measure radiation doses upwards of 35 Gy from 6 MV, 75 kVp and I-125 (∼28 keV) photon sources. A custom phantom was used for the I-125 irradiations to obtain a larger film area with nearly constant dose, reducing the effects of film heterogeneities on the optical density (OD) measurements. RGB transmission images were obtained with an Epson 10000XL flatbed scanner, and calibration curves relating OD and dose using a rational function were determined for each colour channel and at each energy using a non-linear least squares minimization method. Differences found between the 6 MV calibration curve and those for the lower energy sources are large enough that 6 MV beams should not be used to calibrate film for low-energy sources. However, differences between the 75 kVp and I-125 calibration curves were quite small, indicating that 75 kVp is a good choice. Compared with I-125 irradiation, this gives the advantages of lower type B uncertainties and markedly reduced irradiation time. To obtain a high accuracy calibration for the dose range up to 35 Gy, two-segment piece-wise fitting was required. This yielded absolute dose measurement accuracy above 1 Gy of ∼2% for 75 kVp and ∼5% for I-125 seed exposures.
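The rational calibration function referred to above can be sketched in one form commonly used for radiochromic film, netOD(D) = a·D / (D + b), which inverts in closed form to read dose from a measured optical density. The abstract only says "a rational function", so this particular form and the coefficients below are assumptions for illustration, not the study's fitted values.

```python
# One common rational film-calibration form and its exact inverse.
# Coefficients a, b are illustrative placeholders for one colour channel.

def net_od(dose, a, b):
    return a * dose / (dose + b)

def dose_from_od(od, a, b):
    return od * b / (a - od)       # closed-form inverse of net_od

a, b = 0.8, 12.0                   # assumed fit coefficients
d0 = 5.0                           # Gy
od = net_od(d0, a, b)
recovered = dose_from_od(od, a, b)
```

The two-segment piece-wise fitting reported in the abstract amounts to fitting one such curve per dose sub-range and switching at a breakpoint, which keeps residuals small over the full 35 Gy span.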
NASA Astrophysics Data System (ADS)
Bilardi, S.; Barjatya, A.; Gasdia, F.
OSCOM, Optical tracking and Spectral characterization of CubeSats for Operational Missions, is a system capable of providing time-resolved satellite photometry using commercial-off-the-shelf (COTS) hardware and custom tracking and analysis software. This system has acquired photometry of objects as small as CubeSats using a Celestron 11” RASA and an inexpensive CMOS machine vision camera. For satellites with known shapes, these light curves can be used to verify a satellite’s attitude and the state of its deployed solar panels or antennae. While the OSCOM system can successfully track satellites and produce light curves, there is ongoing improvement towards increasing its automation while supporting additional mounts and telescopes. A newly acquired Celestron 14” Edge HD can be used with a Starizona Hyperstar to increase the SNR for small objects as well as extend beyond the limiting magnitude of the 11” RASA. OSCOM currently corrects instrumental brightness measurements for satellite range and observatory site average atmospheric extinction, but calibrated absolute brightness is required to determine information about satellites other than their spin rate, such as surface albedo. A calibration method that automatically detects and identifies background stars can use their catalog magnitudes to calibrate the brightness of the satellite in the image. We present a photometric light curve from both the 14” Edge HD and 11” RASA optical systems as well as plans for a calibration method that will perform background star photometry to efficiently determine calibrated satellite brightness in each frame.
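The planned calibration step described above, deriving absolute satellite brightness from background stars, reduces to computing a per-frame photometric zero point from catalog magnitudes and applying it to the satellite's instrumental flux. The fluxes and magnitudes below are invented for illustration; atmospheric extinction terms are omitted for brevity.

```python
# Per-frame photometric zero point from detected background stars, then
# a calibrated satellite magnitude. All numbers are illustrative.
import math

def zero_point(star_fluxes, catalog_mags):
    zps = [m + 2.5 * math.log10(f) for f, m in zip(star_fluxes, catalog_mags)]
    return sum(zps) / len(zps)     # a robust median would also be reasonable

star_fluxes = [12000.0, 4800.0, 30000.0]   # instrumental counts/s
catalog_mags = [10.2, 11.2, 9.2]           # matched catalog magnitudes
zp = zero_point(star_fluxes, catalog_mags)
sat_mag = zp - 2.5 * math.log10(1500.0)    # satellite at 1500 counts/s
```

Doing this per frame, as proposed, absorbs frame-to-frame transparency changes that a single site-average extinction correction cannot.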
The direct determination of dose-to-water using a water calorimeter.
Schulz, R J; Wuu, C S; Weinhous, M S
1987-01-01
A flexible, temperature-regulated water calorimeter has been constructed which consists of three nested cylinders. The innermost "core" is a 10 X 10 cm right cylinder made of glass, the contents of which are isolated from the environment. It has two Teflon-washered glass valves for filling, and two thermistors are supported at the center by glass capillary tubes. Surrounding the core is a "jacket" that provides approximately 2 cm of air insulation between the core and the "shield." The shield surrounds the jacket with a 2.5-cm layer of temperature-regulated water flowing at 5 l/min. The core is filled with highly purified water, the gas content of which is established prior to filling. Convection currents, which may be induced by dose gradients or thermistor power dissipation, are eliminated by operating the calorimeter at 4 degrees C. Depending upon the power level of the thermistors, 15-200 microW, and the insulation provided by the glass capillary tubing, the temperature of the thermistors is higher than that of the surrounding water. To minimize potential errors caused by differences between calibration curves obtained at finite power levels, the zero-power-level calibration curve obtained by extrapolation is employed. Also, the calorimeter response is corrected for the change in power level, and therefore thermistor temperature, that follows the resistance change caused by irradiation. The response of the calorimeter to 4-MV x rays has been compared to that of an ionization chamber irradiated in an identical geometry.(ABSTRACT TRUNCATED AT 250 WORDS)
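The zero-power extrapolation mentioned above corrects for thermistor self-heating: calibrations taken at several finite power levels are extrapolated linearly back to zero dissipated power. The power levels and readings below are invented and exactly linear, purely to illustrate the procedure.

```python
# Extrapolate thermistor calibrations to zero power by a linear
# least-squares fit reading = m * power + c; the intercept c is the
# self-heating-free value. Data are illustrative, not from the study.

def extrapolate_to_zero(powers_uw, readings):
    n = len(powers_uw)
    mp = sum(powers_uw) / n
    mr = sum(readings) / n
    m = sum((p - mp) * (r - mr) for p, r in zip(powers_uw, readings)) / \
        sum((p - mp) ** 2 for p in powers_uw)
    return mr - m * mp                  # intercept = zero-power reading

powers = [15.0, 50.0, 100.0, 200.0]     # thermistor power (microwatts)
readings = [4.0030, 4.0100, 4.0200, 4.0400]   # apparent temperature (deg C)
true_temp = extrapolate_to_zero(powers, readings)
```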
Yao, Shun-chun; Chen, Jian-chao; Lu, Ji-dong; Shen, Yue-liang; Pan, Gang
2015-06-01
In coal-fired plants, unburned carbon (UC) in fly ash is the major determinant of combustion efficiency in the boiler. The balance between unburned carbon and NO(x) emissions stresses the need for rapid and accurate methods for the measurement of unburned carbon. Laser-induced breakdown spectroscopy (LIBS) is employed to measure the unburned carbon content in fly ash. In this case, it is found that the C line interferes with an Fe line at about 248 nm, so that C cannot be quantified independently of Fe. A correction approach for extracting the C integrated intensity from the overlapping peak is proposed. The Fe 248.33 nm, Fe 254.60 nm and Fe 272.36 nm lines are each used to correct the Fe 247.98 nm line, which interferes with C 247.86 nm. The corrected C integrated intensity is then compared with the uncorrected C integrated intensity for constructing calibration curves of unburned carbon, and for the precision and accuracy of repeat measurements. The analysis results show that the regression coefficients of the calibration curves and the precision and accuracy of repeat measurements are improved by correcting the C-Fe interference, especially for fly ash samples with low unburned carbon content. However, the choice of the Fe line needs to avoid over-correction of the C line. Of the three, Fe 254.60 nm is the best choice.
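The interference correction described above can be sketched as follows: the measured intensity near 248 nm is the sum of C 247.86 nm and Fe 247.98 nm, and the Fe part is estimated from an interference-free Fe line through a ratio determined once on carbon-free reference material. The intensities and the ratio below are illustrative assumptions.

```python
# Subtract the estimated Fe 247.98 nm contribution from the overlapping
# ~248 nm peak, using an interference-free Fe line (here Fe 254.60 nm).
# All intensity values and the line ratio are illustrative.

def corrected_carbon(i_overlap_248, i_fe_254, ratio_fe248_to_fe254):
    fe_contribution = ratio_fe248_to_fe254 * i_fe_254
    return i_overlap_248 - fe_contribution

# Ratio Fe247.98/Fe254.60 assumed measured on a carbon-free sample
i_c = corrected_carbon(i_overlap_248=1500.0, i_fe_254=800.0,
                       ratio_fe248_to_fe254=0.62)
```

Over-correction, the failure mode the abstract warns about, corresponds to choosing an Fe reference line (or ratio) that attributes too much of the overlap to iron, driving the inferred C intensity low.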
Ramanah, Rajeev; Omar, Sikiyah; Guillien, Alicia; Pugin, Aurore; Martin, Alain; Riethmuller, Didier; Mottet, Nicolas
2018-06-01
Nomograms are statistical models that combine variables to obtain the most accurate and reliable prediction for a particular risk. Fetal heart rate (FHR) interpretation alone has been found to be poorly predictive for fetal acidosis, while other clinical risk factors exist. The aim of this study was to create and validate a nomogram based on FHR patterns and relevant clinical parameters to provide a non-invasive individualized prediction of umbilical artery pH during labour. A retrospective observational study was conducted on 4071 patients in labour presenting singleton pregnancies at >34 gestational weeks and delivering vaginally. Clinical characteristics, FHR patterns and umbilical cord gas of 1913 patients were used to construct a nomogram predicting an umbilical artery (Ua) pH <7.18 (10th centile of the study population) after univariate and multivariate stepwise logistic regression analysis. External validation was obtained from an independent cohort of 2158 patients. Area under the receiver operating characteristic (ROC) curve, sensitivity, specificity, positive and negative predictive values of the nomogram were determined. Upon multivariate analysis, parity (p < 0.01), induction of labour (p = 0.01), a prior uterine scar (p = 0.02), maternal fever (p = 0.02) and the type of FHR (p < 0.01) were significantly associated with an Ua pH <7.18 (p < 0.05). Apgar scores at 1, 5 and 10 min were significantly lower in the group with an Ua pH <7.18 (p < 0.01). The nomogram constructed had a Concordance Index of 0.75 (area under the curve) with a sensitivity of 57%, a specificity of 91%, a negative predictive value of 5% and a positive predictive value of 99%. Calibration found no difference between the predicted probabilities and the observed rate of Ua pH <7.18 (p = 0.63). The validation set had a Concordance Index of 0.72 and calibration with a p < 0.77.
We successfully developed and validated a nomogram to predict Ua pH by combining easily available clinical variables and FHR. Discrimination and calibration of the model were statistically good. This mathematical tool can help clinicians in the management of labour by predicting umbilical artery pH based on FHR tracings. Copyright © 2018 Elsevier B.V. All rights reserved.
A simple diagnostic model for ruling out pneumoconiosis among construction workers.
Suarthana, Eva; Moons, Karel G M; Heederik, Dick; Meijer, Evert
2007-09-01
Construction workers exposed to silica-containing dust are at risk of developing silicosis even at low exposure levels. Health surveillance among these workers is commonly advised but the exact diagnostic work-up is not specified and therefore may result in unnecessary chest x ray investigations. To develop a simple diagnostic model to estimate the probability of an individual worker having pneumoconiosis from questionnaire and spirometry results, in order to accurately rule out workers without pneumoconiosis. The study was performed using cross-sectional data of 1291 Dutch natural stone and construction workers with potentially high quartz dust exposure. A multivariable logistic regression model was developed using chest x ray with ILO profusion category > or =1/1 as the reference standard. The model's calibration was evaluated with the Hosmer-Lemeshow test; the discriminative ability was determined by calculating the area under the receiver operating characteristic curve (ROC area). Internal validity of the final model was assessed by a bootstrapping procedure. For clinical application, the diagnostic model was transformed into an easy-to-use score chart. Age 40 years or older, current smoker, high-exposure job, working 15 years or longer in the construction industry, "feeling unhealthy" and FEV1 were independent predictors in the diagnostic model. The model showed good calibration (a non-significant Hosmer-Lemeshow test) and discriminative ability (ROC area 0.81, 95% CI 0.74 to 0.85). Internal validity was reasonable; the optimism corrected ROC area was 0.76. By using a cut-off point with a high negative predictive value the occupational physician can efficiently detect a large proportion of workers with a low probability of having pneumoconiosis and exclude them from unnecessary x ray investigations. This diagnostic model is an efficient and effective instrument to rule out pneumoconiosis among construction workers. 
Its use in health surveillance among these workers can reduce the number of redundant x ray investigations.
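The score-chart logic of the diagnostic model above can be sketched as a logistic model that turns predictor values into a pneumoconiosis probability, with a low cut-off used to rule workers out of chest x ray investigation. The coefficients and the worker profile below are invented for illustration; the published model's coefficients and score chart are not reproduced here.

```python
# Sketch of a logistic rule-out model in the style described above.
# Intercept and coefficients are hypothetical, not the published values.
import math

def probability(intercept, coef_value_pairs):
    lp = intercept + sum(c * v for c, v in coef_value_pairs)
    return 1.0 / (1.0 + math.exp(-lp))

# Hypothetical worker: age >= 40, current smoker, high-exposure job,
# >= 15 years in construction, "feeling unhealthy", FEV1 85% of predicted
p = probability(-6.0, [(0.9, 1), (0.7, 1), (0.8, 1), (0.6, 1), (0.5, 1),
                       (-0.02, 85 - 100)])   # FEV1 coded as % below predicted
rule_out = p < 0.05      # below the cut-off -> no chest x ray needed
```

Choosing the cut-off for a high negative predictive value, as the abstract describes, trades a few missed cases for a large reduction in redundant x ray investigations.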
NASA Technical Reports Server (NTRS)
Robertson, G.
1982-01-01
Calibration was performed on the shuttle upper atmosphere mass spectrometer (SUMS). The results of the calibration and the as-run test procedures are presented. The output data are described, and engineering data conversion factors, tables and curves, and calibration of the instrument gauges are included. Static calibration results are given, which include: instrument sensitivity versus external pressure for N2 and O2; data from each scan of the calibration; data plots for N2 and O2; sensitivity of SUMS at the inlet for N2 and O2; and ratios of 14/28 for nitrogen and 16/32 for oxygen.
Bianchi, Lorenzo; Schiavina, Riccardo; Borghesi, Marco; Bianchi, Federico Mineo; Briganti, Alberto; Carini, Marco; Terrone, Carlo; Mottrie, Alex; Gacci, Mauro; Gontero, Paolo; Imbimbo, Ciro; Marchioro, Giansilvio; Milanese, Giulio; Mirone, Vincenzo; Montorsi, Francesco; Morgia, Giuseppe; Novara, Giacomo; Porreca, Angelo; Volpe, Alessandro; Brunocilla, Eugenio
2018-04-06
To assess the predictive accuracy and the clinical value of a recent nomogram predicting cancer-specific mortality-free survival after surgery in pN1 prostate cancer patients through an external validation. We evaluated 518 prostate cancer patients treated with radical prostatectomy and pelvic lymph node dissection with evidence of nodal metastases at final pathology, at 10 tertiary centers. External validation was carried out using regression coefficients of the previously published nomogram. The performance characteristics of the model were assessed by quantifying predictive accuracy, according to the area under the curve in the receiver operating characteristic curve, and model calibration. Furthermore, we systematically analyzed the specificity, sensitivity, positive predictive value and negative predictive value for each nomogram-derived probability cut-off. Finally, we implemented decision curve analysis in order to quantify the nomogram's clinical value in routine practice. External validation showed inferior predictive accuracy relative to the internal validation (65.8% vs 83.3%). The discrimination (area under the curve) of the multivariable model was 66.7% (95% CI 60.1-73.0%) by receiver operating characteristic curve analysis. The calibration plot showed an overestimation throughout the range of predicted cancer-specific mortality-free survival probabilities. However, in decision curve analysis, the nomogram's use showed a net benefit when compared with the scenarios of treating all patients or none. In an external setting, the nomogram showed inferior predictive accuracy and suboptimal calibration characteristics compared with those reported in the original population. However, decision curve analysis showed a clinical net benefit, suggesting a clinical implication to correctly manage pN1 prostate cancer patients after surgery. © 2018 The Japanese Urological Association.
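The discrimination measure reported above, area under the ROC curve, has a simple rank interpretation: the probability that a patient with the event receives a higher predicted risk than a patient without it. A minimal implementation on synthetic data:

```python
# AUC as the Mann-Whitney rank statistic: fraction of (event, non-event)
# pairs where the event case has the higher predicted risk (ties count 0.5).
# Predictions and outcomes below are synthetic.

def auc(pred, outcome):
    pos = [p for p, y in zip(pred, outcome) if y == 1]
    neg = [p for p, y in zip(pred, outcome) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

pred    = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3]
outcome = [1,   1,   0,   1,   0,    0,   1,   0]
a = auc(pred, outcome)
```

An AUC of 0.667, as found in this external cohort, means the nomogram orders only about two thirds of such patient pairs correctly, which is why the calibration and decision curve analyses carry most of the interpretive weight here.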
GIADA: extended calibration activities before the comet encounter
NASA Astrophysics Data System (ADS)
Accolla, Mario; Sordini, Roberto; Della Corte, Vincenzo; Ferrari, Marco; Rotundi, Alessandra
2014-05-01
The Grain Impact Analyzer and Dust Accumulator (GIADA) is one of the payloads on board the Rosetta Orbiter. Its three detection sub-systems are able to measure the speed, momentum, mass and optical cross section of single cometary grains, and the dust flux ejected by the periodic comet 67P/Churyumov-Gerasimenko. During the hibernation phase of the Rosetta mission, we performed a dedicated extended calibration activity on the GIADA Proto Flight Model (PFM, accommodated in a clean room in our laboratory) involving two of the three sub-systems constituting GIADA, i.e. the Grain Detection System (GDS) and the Impact Sensor (IS). Our aim is to obtain a new set of response curves for these two sub-systems and to correlate them with the calibration curves obtained in 2002 for the GIADA payload on board the Rosetta spacecraft, in order to improve the interpretation of the forthcoming scientific data. For the extended calibration we dropped or shot into the GIADA PFM a statistically relevant number of grains (about one hundred) acting as cometary dust analogues. We studied the response of the GDS and IS as a function of grain composition, size and velocity. Different terrestrial materials were selected as cometary analogues according to the more recent knowledge gained through the analyses of Interplanetary Dust Particles and of cometary samples returned from comet 81P/Wild 2 (Stardust mission). For each material, we produced grains with sizes ranging from 20 to 500 μm in diameter, which were characterized by FESEM and micro-IR spectroscopy. The grains were then shot into the GIADA PFM with speeds ranging between 1 and 100 m s-1. According to the estimates reported in Fink & Rubin (2012), this range is representative of the dust particle velocities expected at the comet and lies within the GIADA velocity sensitivity (1-100 m s-1 for the GDS and 1-300 m s-1 for GDS+IS).
The response curves obtained using the data collected during the GIADA PFM extended calibration will be linked to the on-ground calibration data collected during the instrument qualification campaign (performed both on Flight and Spare Models, in 2002). The final aim is to rescale the Extended Calibration data obtained with the GIADA PFM to GIADA presently onboard the Rosetta spacecraft. In this work we present the experimental procedures and the setup used for the calibration activities, particularly focusing on the new response curves of GDS and IS sub-systems obtained for the different cometary dust analogues. These curves will be critical for the future interpretation of scientific data. Fink, U. & Rubin, M. (2012), The calculation of Afρ and mass loss rate for comets, Icarus, Volume 221, issue 2, p. 721-734
Cho, Jae Heon; Lee, Jong Ho
2015-11-01
Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to manually calibrate the parameters individually and repeatedly. Automatic calibration has relative merit regarding time efficiency and objectivity, but shortcomings regarding understanding of indigenous processes in the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and genetic algorithm (WMCIG) to automatically calibrate distributed models. The optimization problem, minimizing the sum of squares of the normalized residuals of the observed and predicted values, was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals of all iterations. The WMCIG was applied to the Gomakwoncheon watershed, located in an area subject to a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall that occurs during the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality during the frequent heavy rainfall of the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin.
Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard (WQS) was calculated for each hydrologic condition class, and the percent reduction required to achieve the WQS was estimated. The R² value for the calibrated BOD5 was 0.60, which is a moderate result, and the R² value for the TP was 0.86, which is a good result. The percent differences obtained for the calibrated BOD5 and TP were very good; therefore, the calibration results using WMCIG were satisfactory. From the load duration curve analysis, the WQS exceedance frequencies of the BOD5 under dry and low-flow conditions were 75.7% and 65%, respectively, and the exceedance frequencies under moist and mid-range conditions were higher than under other conditions. The exceedance frequencies of the TP for the high-flow, moist and mid-range conditions were high, and the exceedance rate for the high-flow condition was particularly high. Most of the data from the high-flow conditions exceeded the WQSs. Thus, nonpoint-source pollutants from storm-water runoff substantially affected the TP concentration in the Gomakwoncheon. Copyright © 2015 Elsevier Ltd. All rights reserved.
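The optimization step described above (a genetic algorithm minimizing the sum of squares of the normalized residuals between observed and predicted values) can be sketched as follows. This is a minimal real-coded GA for illustration only; the full WMCIG framework, its HSPF coupling and the influence-coefficient algorithm are not reproduced, and the names and operator choices here are our assumptions.

```python
import random

def objective(params, observed, model):
    """Sum of squares of normalized residuals, the fitness used by WMCIG-style calibration."""
    predicted = model(params)
    return sum(((o - p) / o) ** 2 for o, p in zip(observed, predicted))

def calibrate_ga(observed, model, bounds, pop_size=30, generations=60, seed=1):
    """Truncation selection, arithmetic crossover and bounded Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: objective(ind, observed, model))
        elite = pop[:pop_size // 2]          # keep the best half unchanged (elitism)
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]          # arithmetic crossover
            child = [min(hi, max(lo, c + rng.gauss(0, 0.1 * (hi - lo))))
                     for c, (lo, hi) in zip(child, bounds)]       # bounded mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda ind: objective(ind, observed, model))
```

Here `model` maps a candidate parameter vector to predicted flows or loads; in the paper's setting that role is played by an HSPF run, which is why reducing the number of objective evaluations matters.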
Transducer Workshop (17th) Held in San Diego, California on June 22-24, 1993
1993-06-01
weight in a drop tower, such as the primer tester shown in figure 1. The calibration procedure must be repeated for each lot of copper inserts, and small...force vs. time curve (i.e., impulse = area under the curve). The FPyF can be used in the primer tester (shown in figure 1) as well as in a weapon...microphones. Pistonphone Output 124 dB, 250 Hz DEAD WEIGHT TESTER USED AS A PRESSURE RELEASE CALIBRATOR The dead weight tester is designed and most
NASA Astrophysics Data System (ADS)
Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.
2015-11-01
An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS at f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central-chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.
Bayesian calibration of coarse-grained forces: Efficiently addressing transferability
NASA Astrophysics Data System (ADS)
Patrone, Paul N.; Rosch, Thomas W.; Phelan, Frederick R.
2016-04-01
Generating and calibrating forces that are transferable across a range of state-points remains a challenging task in coarse-grained (CG) molecular dynamics. In this work, we present a coarse-graining workflow, inspired by ideas from uncertainty quantification and numerical analysis, to address this problem. The key idea behind our approach is to introduce a Bayesian correction algorithm that uses functional derivatives of CG simulations to rapidly and inexpensively recalibrate initial estimates f0 of forces anchored by standard methods such as force-matching. Taking density-temperature relationships as a running example, we demonstrate that this algorithm, in concert with various interpolation schemes, can be used to efficiently compute physically reasonable force curves on a fine grid of state-points. Importantly, we show that our workflow is robust to several choices available to the modeler, including the interpolation schemes and tools used to construct f0. In a related vein, we also demonstrate that our approach can speed up coarse-graining by reducing the number of atomistic simulations needed as inputs to standard methods for generating CG forces.
Developing A New Sampling and Analysis Method for Hydrazine and Monomethyl Hydrazine
NASA Technical Reports Server (NTRS)
Allen, John R.
2002-01-01
Solid phase microextraction (SPME) will be used to develop a method for detecting monomethyl hydrazine (MMH) and hydrazine (Hz). A derivatizing agent, pentafluorobenzoyl chloride (PFBCl), is known to react readily with MMH and Hz. The SPME fiber can either be coated with PFBCl and introduced into a gaseous stream containing MMH, or PFBCl and MMH can react first in a syringe barrel and after a short equilibration period a SPME is used to sample the resulting solution. These methods were optimized and compared. Because Hz and MMH can degrade the SPME, letting the reaction occur first gave better results. Only MMH could be detected using either of these methods. Future research will concentrate on constructing calibration curves and determining the detection limit.
Glucose Sensing by Time-Resolved Fluorescence of Sol-Gel Immobilized Glucose Oxidase
Esposito, Rosario; Ventura, Bartolomeo Della; De Nicola, Sergio; Altucci, Carlo; Velotta, Raffaele; Mita, Damiano Gustavo; Lepore, Maria
2011-01-01
A monolithic silica gel matrix with entrapped glucose oxidase (GOD) was constructed as the bioactive element of an optical biosensor for glucose determination. The intrinsic fluorescence of free and immobilised GOD was investigated in the visible range, in the presence of different glucose concentrations, by time-resolved spectroscopy with a time-correlated single-photon counting detector. A three-exponential model was used to analyse the fluorescence transients. Fractional intensities and mean lifetime were shown to be sensitive to the enzymatic reaction and were used to obtain a calibration curve for glucose concentration determination. The proposed sensing system achieved high-resolution (0.17 mM) glucose determination over a detection range from 0.4 mM to 5 mM. PMID:22163807
ERIC Educational Resources Information Center
Anderson Koenig, Judith; Roberts, James S.
2007-01-01
Methods for linking item response theory (IRT) parameters are developed for attitude questionnaire responses calibrated with the generalized graded unfolding model (GGUM). One class of IRT linking methods derives the linking coefficients by comparing characteristic curves, and three of these methods---test characteristic curve (TCC), item…
Fabrication of a sensing module using micromachined biosensors.
Suzuki, H; Arakawa, H; Karube, I
2001-12-01
Micromachining is a powerful tool for constructing micro-biosensors and the micro-systems that incorporate them. A sensing module for blood components was fabricated using this technology. The analytes include glucose, urea, uric acid, creatine, and creatinine. The transducers used to construct the corresponding sensors were a Severinghaus-type carbon dioxide electrode for the urea sensor and a Clark-type oxygen electrode for the other analytes. In these electrodes, detecting electrode patterns were formed on a glass substrate by photolithography, and the micro-container for the internal electrolyte solution was formed on a silicon substrate by anisotropic etching. A through-hole was formed in the sensitive area, where a silicone gas-permeable membrane was formed and an enzyme was immobilized. The sensors were characterized in terms of pH and temperature dependence and calibration curves along with detection limits. Furthermore, the sensors were incorporated into an acrylate flow cell. Simultaneous operation of these sensors was successfully conducted, and distinct and stable responses were observed for the respective sensors.
On the long-term stability of calibration standards in different matrices.
Kandić, A; Vukanac, I; Djurašević, M; Novković, D; Šešlak, B; Milošević, Z
2012-09-01
In order to assure Quality Control in accordance with ISO/IEC 17025, it was important, from a metrological point of view, to examine the long-term stability of previously prepared calibration standards. A comprehensive re-examination of efficiency curves with respect to the ageing of calibration standards is presented in this paper. The calibration standards were re-used after a period of 5 years, and analysis of the results showed discrepancies in efficiency values. Copyright © 2012 Elsevier Ltd. All rights reserved.
Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T
2018-03-01
Gafchromic EBT3 film is widely used for patient-specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between the dose and the red-channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated to identify whether these methods produce better results than the commonly used non-linear netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy, and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: the Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results at standard treatment doses (< 400 cGy); however, none could be recommended as an efficient or accurate replacement for a common film analysis technique which uses transmission scanning, red-colour-channel analysis, netOD and a non-linear calibration curve for measuring doses up to 4000 cGy when using EBT3 film.
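The conventional analysis the authors favour (red-channel netOD with a non-linear calibration) is often written in the literature as D = a·netOD + b·netOD^n. A minimal sketch follows, assuming a fixed exponent n so the fit is linear in a and b; the pixel values and parameters are illustrative, not taken from the paper.

```python
import math

def net_od(pv_unexposed, pv_exposed, pv_background=0.0):
    """Net optical density from red-channel pixel values of a transmission scan."""
    return math.log10((pv_unexposed - pv_background) / (pv_exposed - pv_background))

def fit_dose_curve(net_ods, doses, n=2.5):
    """Least-squares fit of D = a*netOD + b*netOD**n with n fixed,
    solved directly via the 2x2 normal equations."""
    s11 = sum(x * x for x in net_ods)
    s12 = sum(x * x ** n for x in net_ods)
    s22 = sum(x ** n * x ** n for x in net_ods)
    t1 = sum(x * d for x, d in zip(net_ods, doses))
    t2 = sum(x ** n * d for x, d in zip(net_ods, doses))
    det = s11 * s22 - s12 * s12
    a = (t1 * s22 - t2 * s12) / det
    b = (s11 * t2 - s12 * t1) / det
    return a, b
```

Inverting the fitted curve (numerically, since it is non-linear in netOD) then converts a measured netOD on a patient-specific film into a dose.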
Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Beavis, A W; Saunderson, J R
2014-05-07
The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer-simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given that it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that resulting clinical image quality will be adequate for the required clinical task. However, this must be done with close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.
Zhang, Li; Wu, Yuhua; Wu, Gang; Cao, Yinglong; Lu, Changming
2014-10-01
Plasmid calibrators are increasingly applied for polymerase chain reaction (PCR) analysis of genetically modified organisms (GMOs). To evaluate the commutability between plasmid DNA (pDNA) and genomic DNA (gDNA) as calibrators, a plasmid molecule, pBSTopas, was constructed, harboring a Topas 19/2 event-specific sequence and a partial sequence of the rapeseed reference gene CruA. Assays of the pDNA showed similar limits of detection (five copies for Topas 19/2 and CruA) and quantification (40 copies for Topas 19/2 and 20 for CruA) to those for the gDNA. Comparisons of plasmid and genomic standard curves indicated that the slopes, intercepts, and PCR efficiency for pBSTopas were significantly different from those of CRM Topas 19/2 gDNA for quantitative analysis of GMOs. Three correction methods were used to calibrate the quantitative analysis of control samples using pDNA as the calibrator: model a, or coefficient value a (Cva); model b, or coefficient value b (Cvb); and the novel model c, or coefficient formula (Cf). Cva and Cvb gave similar estimated values for the control samples, and the quantitative bias of the low-concentration sample exceeded the acceptable range of ±25% in two of the four repeats. Using Cfs to normalize the Ct values of the test samples, the estimated values were very close to the reference values (bias -13.27 to 13.05%). In the validation of control samples, model c was more appropriate than Cva or Cvb. The application of the Cf allowed pBSTopas to substitute for Topas 19/2 gDNA as a calibrator to accurately quantify the GMO.
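The comparison of plasmid and genomic standard curves above rests on the usual qPCR relationships: a linear fit of Ct against log10(copy number), with amplification efficiency derived from the slope as E = 10^(-1/slope) - 1. A generic sketch of those calculations follows; it illustrates the standard-curve arithmetic only, not the authors' correction models Cva, Cvb or Cf.

```python
def standard_curve(log10_copies, ct_values):
    """Least-squares line Ct = slope*log10(copies) + intercept,
    plus PCR efficiency (1.0 corresponds to 100%, i.e. perfect doubling)."""
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(ct_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct_values))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    slope = sxy / sxx
    intercept = my - slope * mx
    efficiency = 10 ** (-1.0 / slope) - 1.0
    return slope, intercept, efficiency

def copies_from_ct(ct, slope, intercept):
    """Back-calculate the copy number of an unknown from its Ct."""
    return 10 ** ((ct - intercept) / slope)
```

A significant difference in slope between the pDNA and gDNA curves, as reported here, means the same Ct maps to different copy numbers depending on the calibrator, which is exactly what the correction models compensate for.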
Finite element analysis of the Wolf Creek multispan curved girder bridge.
DOT National Transportation Integrated Search
2008-01-01
The use of curved girder bridges in highway construction has grown steadily during the last 40 years. Today, roughly 25% of newly constructed bridges have a curved alignment. Curved girder bridges have numerous complicating geometric features that di...
Calibration of the Concorde radiation detection instrument and measurements at SST altitude.
DOT National Transportation Integrated Search
1971-06-01
Performance tests were carried out on a solar cosmic radiation detection instrument developed for the Concorde SST. The instrument calibration curve (log dose-rate vs instrument reading) was reasonably linear from 0.004 to 1 rem/hr for both gamma rad...
Construction and calibration of a low cost and fully automated vibrating sample magnetometer
NASA Astrophysics Data System (ADS)
El-Alaily, T. M.; El-Nimr, M. K.; Saafan, S. A.; Kamel, M. M.; Meaz, T. M.; Assar, S. T.
2015-07-01
A low-cost vibrating sample magnetometer (VSM) has been constructed using an electromagnet and an audio loudspeaker, both controlled by a data acquisition device. The constructed VSM records the magnetic hysteresis loop up to 8.3 kG at room temperature. The apparatus has been calibrated and tested using magnetic hysteresis data of some ferrite samples measured by two scientifically calibrated magnetometers, model Lake Shore 7410 and model LDJ Electronics Inc. (Troy, MI). Our lab-built VSM design proved successful and reliable.
NASA Astrophysics Data System (ADS)
Zou, Yuan; Shen, Tianxing
2013-03-01
To provide a wider variety of photometric data beyond the illuminance calculations used in architectural and luminous environment design, this paper presents the combination of luminous environment design with the SM light environment measuring system, a set of experimental devices including light-information collecting and processing modules that can supply various types of photometric data. During the research, we introduced a simulation method for calibration, which mainly involves rebuilding the experimental scenes in 3ds Max Design, calibrating this computer-aided design software in the simulated environment under various typical light sources, and fitting the exposure curves of the rendered images. As the analysis proceeded, the operation sequence and points of attention for the simulated calibration were established, and connections between the Mental Ray renderer and the SM light environment measuring system were made.
Yohannes, Indra; Kolditz, Daniel; Langner, Oliver; Kalender, Willi A
2012-03-07
Tissue- and water-equivalent materials (TEMs) are widely used in quality assurance and calibration procedures, both in radiodiagnostics and radiotherapy. In radiotherapy in particular, TEMs are often used for computed tomography (CT) number calibration in treatment planning systems. However, currently available TEMs may not be very accurate for determining calibration curves, owing to their limitations in mimicking the radiation characteristics of the corresponding real tissues in both the low- and high-energy ranges. Therefore, we are proposing a new formulation of TEMs, using a stoichiometric analysis method, to obtain TEMs for calibration purposes. We combined the stoichiometric calibration and the basic data method to compose base materials for developing TEMs matching standard real tissues from ICRU Reports 44 and 46. First, the CT numbers of six materials with known elemental compositions were measured to obtain the constants for the stoichiometric calibration. The results of the stoichiometric calibration were used together with the basic data method to formulate the new TEMs, which were then scanned to validate their CT numbers. The electron density and stopping power calibration curves were also generated. The absolute differences of the measured CT numbers of the new TEMs were less than 4 HU for the soft tissues and less than 22 HU for the bone compared with the ICRU real tissues. Furthermore, the calculated relative electron density and the electron and proton stopping powers of the new TEMs differed by less than 2% from those of the corresponding ICRU real tissues. The new TEMs formulated using the proposed technique increase the simplicity of the calibration process while preserving the accuracy of the stoichiometric calibration.
Kintzel, Polly E; Zhao, Ting; Wen, Bo; Sun, Duxin
2014-12-01
The chemical stability of a sterile admixture containing metoclopramide 1.6 mg/mL, diphenhydramine hydrochloride 2 mg/mL, and dexamethasone sodium phosphate 0.16 mg/mL in 0.9% sodium chloride injection was evaluated. Triplicate samples were prepared and stored at room temperature without light protection for a total of 48 hours. Aliquots from each sample were tested for chemical stability immediately after preparation and at 1, 4, 8, 24, and 48 hours using liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis. Metoclopramide, diphenhydramine hydrochloride, and dexamethasone sodium phosphate were selectively monitored using multiple-reaction monitoring. Samples were diluted differently for quantitation using three individual LC-MS/MS methods. To determine the drug concentration of the three compounds in the samples, three calibration curves were constructed by plotting the peak area or the peak area ratio versus the concentration of the calibration standards of each tested compound. Apixaban was used as an internal standard. Linearity of the calibration curve was evaluated by the correlation coefficient r(2). Constituents of the admixture of metoclopramide 1.6 mg/mL, diphenhydramine hydrochloride 2 mg/mL, and dexamethasone sodium phosphate 0.16 mg/mL in 0.9% sodium chloride injection retained more than 90% of their initial concentrations over 48 hours of storage at room temperature without protection from light. The observed variability in concentrations of these three compounds was within the limits of assay variability. An i.v. admixture containing metoclopramide 1.6 mg/mL, diphenhydramine hydrochloride 2 mg/mL, and dexamethasone sodium phosphate 0.16 mg/mL in 0.9% sodium chloride injection was chemically stable for 48 hours when stored at room temperature without light protection. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
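The calibration step described here (peak area, or peak-area ratio to the internal standard, plotted against standard concentration, with linearity judged by r²) is an ordinary least-squares line. A generic sketch with illustrative names, not tied to the authors' LC-MS/MS software:

```python
def calibration_curve(concentrations, responses):
    """Fit response = m*conc + b and report the coefficient of determination r²,
    the linearity measure quoted in the abstract."""
    n = len(concentrations)
    mx = sum(concentrations) / n
    my = sum(responses) / n
    sxx = sum((x - mx) ** 2 for x in concentrations)
    syy = sum((y - my) ** 2 for y in responses)
    sxy = sum((x - mx) * (y - my) for x, y in zip(concentrations, responses))
    m = sxy / sxx
    b = my - m * mx
    r2 = sxy * sxy / (sxx * syy)
    return m, b, r2

def quantitate(response, m, b):
    """Back-calculate the concentration of an unknown from its (ratio) response."""
    return (response - b) / m
```

With an internal standard such as the apixaban used here, `responses` would be the analyte/internal-standard peak-area ratios, which cancels much of the injection-to-injection variability.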
Radiochromic film calibration for the RQT9 quality beam
NASA Astrophysics Data System (ADS)
Costa, K. C.; Gomez, A. M. L.; Alonso, T. C.; Mourao, A. P.
2017-11-01
When ionizing radiation interacts with matter, it deposits energy. Radiation dosimetry is important for medical applications of ionizing radiation because of the increasing demand for diagnostic radiology and radiotherapy. Different dosimetry methods are used, each with its advantages and disadvantages. Film is a dose measurement method that records energy deposition through the darkening of its emulsion. Radiochromic films have little sensitivity to visible light and respond well to ionizing radiation exposure. The aim of this study is to obtain a calibration curve by irradiating radiochromic film strips, making it possible to relate the darkening of the film to the absorbed dose, in order to measure doses in experiments with a 120 kV X-ray beam in computed tomography (CT). Film strips of GAFCHROMIC XR-QA2 were exposed according to the RQT9 reference radiation, which defines an X-ray beam generated from a voltage of 120 kV. The strips were irradiated at the "Laboratório de Calibração de Dosímetros do Centro de Desenvolvimento da Tecnologia Nuclear" (LCD/CDTN) over a dose range of 5-30 mGy, corresponding to the values commonly used in CT scans. Digital images of the irradiated films were analyzed using the ImageJ software. The darkening responses of the film strips were observed as a function of dose, yielding a numerical darkening value for each specific dose value. From these numerical values, a calibration curve was obtained, which correlates the darkening of the film strip with dose values in mGy. The calibration curve equation is a simplified method for obtaining absorbed dose values from digital images of irradiated radiochromic films. With the calibration curve, radiochromic films may be applied to dosimetry in CT experiments using a 120 kV X-ray beam, in order to improve CT image acquisition processes.
SU-E-T-137: The Response of TLD-100 in Mixed Fields of Photons and Electrons.
Lawless, M; Junell, S; Hammer, C; DeWerd, L
2012-06-01
Thermoluminescent dosimeters are used routinely for dosimetric measurements of photon and electron fields. However, no work has been published characterizing TLDs for use in combined photon and electron fields. This work investigates the response of TLD-100 (LiF:Mg,Ti) in mixed fields of photon and electron beam qualities. TLDs were irradiated in a 6 MV photon beam, a 6 MeV electron beam, and a NIST-traceable cobalt-60 beam. TLDs were also irradiated in a mixed field of the electron and photon beams. All irradiations were normalized to absorbed dose to water as defined in the AAPM TG-51 report. The average response per dose (nC/Gy) for each linac beam quality was normalized to the average response per dose of the TLDs irradiated by the cobalt-60 standard. Irradiations were performed in a water tank and a Virtual Water™ phantom. Two TLD dose calibration curves for determining absorbed dose to water were generated using photon and electron field TLD response data. These individual beam quality dose calibration curves were applied to the TLDs irradiated in the mixed field. The TLD response in the mixed field was less sensitive than the response in the photon field and more sensitive than the response in the electron field. TLD determination of dose in the mixed field using the dose calibration curve generated by TLDs irradiated by photons resulted in an underestimation of the delivered dose, while the use of a dose calibration curve generated using electrons resulted in an overestimation of the delivered dose. The relative response of TLD-100 in mixed fields fell consistently between the photon and electron relative responses. When using TLD-100 in mixed fields, the user must account for this intermediate response to avoid an over- or underestimation of the dose due to calibration in a single photon or electron field. © 2012 American Association of Physicists in Medicine.
Switzer, P.; Harden, J.W.; Mark, R.K.
1988-01-01
A statistical method for estimating rates of soil development in a given region based on calibration from a series of dated soils is used to estimate ages of soils in the same region that are not dated directly. The method is designed specifically to account for sampling procedures and uncertainties that are inherent in soil studies. Soil variation and measurement error, uncertainties in calibration dates and their relation to the age of the soil, and the limited number of dated soils are all considered. Maximum likelihood (ML) is employed to estimate a parametric linear calibration curve, relating soil development to time or age on suitably transformed scales. Soil variation on a geomorphic surface of a certain age is characterized by replicate sampling of soils on each surface; such variation is assumed to have a Gaussian distribution. The age of a geomorphic surface is described by older and younger bounds. This technique allows age uncertainty to be characterized by either a Gaussian distribution or by a triangular distribution using minimum, best-estimate, and maximum ages. The calibration curve is taken to be linear after suitable (in certain cases logarithmic) transformations, if required, of the soil parameter and age variables. Soil variability, measurement error, and departures from linearity are described in a combined fashion using Gaussian distributions with variances particular to each sampled geomorphic surface and the number of sample replicates. Uncertainty in age of a geomorphic surface used for calibration is described using three parameters by one of two methods. In the first method, upper and lower ages are specified together with a coverage probability; this specification is converted to a Gaussian distribution with the appropriate mean and variance. 
In the second method, "absolute" older and younger ages are specified together with a most probable age; this specification is converted to an asymmetric triangular distribution with its mode at the most probable age. The statistical variability of the ML-estimated calibration curve is assessed by a Monte Carlo method in which simulated data sets are repeatedly drawn from the distributional specification; calibration parameters are re-estimated for each such simulation in order to assess their statistical variability. Several examples are used for illustration. The age of undated soils in a related setting may be estimated from the soil data using the fitted calibration curve. A second simulation to assess age-estimate variability is described and applied to the examples. © 1988 International Association for Mathematical Geology.
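The procedure described above, a linear calibration fit on transformed scales with Monte Carlo resampling of uncertain surface ages, can be sketched as follows. The ages, uncertainties, and soil-index values below are invented for illustration, not data from the study, and only the Gaussian age-uncertainty case is shown:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: one mean soil-development index per dated
# geomorphic surface, with Gaussian age uncertainty (mean, sd) for each.
ages_mean = np.array([1e3, 5e3, 2e4, 1e5])    # years (assumed values)
ages_sd = np.array([2e2, 1e3, 4e3, 2e4])
soil_idx = np.array([0.9, 1.6, 2.2, 2.9])     # soil parameter, transformed scale

def fit_line(log_age, y):
    """Least-squares fit of y = a + b*log10(age); the ML estimate under Gaussian errors."""
    b, a = np.polyfit(log_age, y, 1)
    return a, b

# Monte Carlo over age uncertainty: redraw ages, refit, collect the parameters.
params = np.array([
    fit_line(np.log10(rng.normal(ages_mean, ages_sd).clip(min=1.0)), soil_idx)
    for _ in range(2000)
])
a_sd, b_sd = params.std(axis=0)
print(f"intercept spread (1 sigma): {a_sd:.3f}, slope spread: {b_sd:.3f}")
```

The spread of the refitted parameters plays the role of the paper's assessment of calibration-curve variability; inverting the fitted line at a measured soil index would then yield an age estimate for an undated soil.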
NASA Astrophysics Data System (ADS)
Blaauw, Maarten; Heuvelink, Gerard B. M.; Mauquoy, Dmitri; van der Plicht, Johannes; van Geel, Bas
2003-06-01
14C wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a sequence of closely spaced peat 14C dates with the 14C calibration curve. A numerical approach to WMD enables the quantitative assessment of various possible wiggle-match solutions and of calendar year confidence intervals for sequences of 14C dates. We assess the assumptions, advantages, and limitations of the method. Several case-studies show that WMD results in more precise chronologies than when individual 14C dates are calibrated. WMD is most successful during periods with major excursions in the 14C calibration curve (e.g., in one case WMD could narrow down confidence intervals from 230 to 36 yr).
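A minimal numerical WMD sketch, using an invented smooth-plus-wiggle calibration curve rather than real IntCal data: the sequence of closely spaced dates is slid along the calendar axis, and the position minimizing the squared mismatch with the curve is taken as the wiggle-match solution.

```python
import numpy as np

# Synthetic "wiggly" calibration curve (invented, standing in for IntCal).
cal_years = np.arange(0, 1000)
calib = 0.95 * cal_years + 40.0 * np.sin(cal_years / 30.0)

# A sequence of 14C dates with fixed stratigraphic spacing, generated from a
# known calendar position plus small measurement noise.
true_start = 400
offsets = np.array([0, 20, 40, 60, 80])          # depth spacing converted to years
noise = np.array([2.0, -1.0, 1.0, -2.0, 2.0])
c14_dates = calib[true_start + offsets] + noise

def misfit(start):
    """Sum of squared differences between the dated sequence and the curve."""
    return np.sum((calib[start + offsets] - c14_dates) ** 2)

starts = np.arange(0, len(cal_years) - offsets.max())
best = starts[np.argmin([misfit(s) for s in starts])]
print("best-fit calendar start:", best)          # lands near 400
```

In practice the misfit would be weighted by the dating errors and the accumulation rate would itself be a free parameter; this sketch only shows why major excursions in the curve pin the match down so effectively.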
NASA Astrophysics Data System (ADS)
de Moraes, Alex Silva; Tech, Lohane; Melquíades, Fábio Luiz; Bastos, Rodrigo Oliveira
2014-11-01
Considering the importance of understanding the behavior of the elements in different natural and/or anthropic processes, this study aimed to verify the accuracy of a multielement analysis method for rock characterization using soil standards as the calibration reference. An EDXRF spectrometer was used. The analyses were made on samples doped with known concentrations of Mn, Zn, Rb, Sr and Zr, to obtain the calibration curves, and on a certified rock sample to check the accuracy of the analytical curves. Then, a set of rock samples from Rio Bonito, located in Figueira city, Paraná State, Brazil, was analyzed. The concentration values obtained, in ppm, for Mn, Rb, Sr and Zr varied, respectively, from 175 to 1084, 7.4 to 268, 28 to 2247 and 15 to 761.
NASA Astrophysics Data System (ADS)
Łazarek, Łukasz; Antończak, Arkadiusz J.; Wójcik, Michał R.; Kozioł, Paweł E.; Stepak, Bogusz; Abramski, Krzysztof M.
2014-08-01
Laser-induced breakdown spectroscopy (LIBS) is a fast, fully optical method that requires little or no sample preparation. In this technique, qualitative and quantitative analysis is based on comparison. The determination of composition is generally based on the construction of a calibration curve, namely the LIBS signal versus the concentration of the analyte. Typically, certified reference materials with known elemental composition are used to calibrate the system. Nevertheless, differences in overall composition between such samples and the complex inorganic materials under study can significantly affect the accuracy. There are also intermediate factors that can cause imprecision in measurements, such as optical absorption, surface structure, and thermal conductivity. This paper presents a calibration procedure performed with specially prepared pellets of the tested materials, whose composition was previously determined. We also propose post-processing methods that mitigate the matrix effects and allow a reliable and accurate analysis. This technique was implemented for the determination of trace elements in industrial copper concentrates standardized by conventional flame atomic absorption spectroscopy. A series of copper flotation concentrate samples was analyzed for the contents of three elements: silver, cobalt, and vanadium. It has been shown that the described technique can be used for qualitative and quantitative analyses of complex inorganic materials, such as copper flotation concentrates.
National-Scale Hydrologic Classification & Agricultural Decision Support: A Multi-Scale Approach
NASA Astrophysics Data System (ADS)
Coopersmith, E. J.; Minsker, B.; Sivapalan, M.
2012-12-01
Classification frameworks can help organize catchments exhibiting similarity in hydrologic and climatic terms. Focusing this assessment of "similarity" upon specific hydrologic signatures, in this case the annual regime curve, can facilitate the prediction of hydrologic responses. Agricultural decision-support over a diverse set of catchments throughout the United States depends upon successful modeling of the wetting/drying process without necessitating separate model calibration at every site where such insights are required. To this end, a holistic classification framework is developed to describe both climatic variability (humid vs. arid, winter rainfall vs. summer rainfall) and the draining, storing, and filtering behavior of any catchment, including ungauged or minimally gauged basins. At the national scale, over 400 catchments from the MOPEX database are analyzed to construct the classification system, with over 77% of these catchments ultimately falling into only six clusters. At individual locations, soil moisture models, receiving only rainfall as input, produce correlation values in excess of 0.9 with respect to observed soil moisture measurements. By deploying physical models for predicting soil moisture exclusively from precipitation that are calibrated at gauged locations, overlaying machine learning techniques to improve these estimates, then generalizing the calibration parameters for catchments in a given class, agronomic decision-support becomes available where it is needed rather than only where sensing data are located. (Classifications of 428 U.S. catchments on the basis of hydrologic regime data: Coopersmith et al., 2012.)
NASA Astrophysics Data System (ADS)
Saat, Ahmad; Hamzah, Zaini; Yusop, Mohammad Fariz; Zainal, Muhd Amiruddin
2010-07-01
Detection efficiency of a gamma-ray spectrometry system depends upon, among other factors, the energy, the sample and detector geometry, and the volume and density of the samples. In the present study the efficiency calibration curves of a newly acquired (August 2008) HPGe gamma-ray spectrometry system were determined for four sample container geometries, namely Marinelli beaker, disc, cylindrical beaker and vial, normally used for activity determination of gamma rays from environmental samples. Calibration standards were prepared using a known amount of analytical-grade uranium trioxide ore, homogenized in plain flour, in the respective containers. The ore produces gamma rays with energies ranging from 53 keV to 1001 keV. Analytical-grade potassium chloride standards were prepared to determine the detection efficiency of the 1460 keV gamma ray emitted by the potassium isotope K-40. Plots of detection efficiency against gamma-ray energy for the four sample geometries were found to fit smoothly to the general form ε = A·E^a + B·E^b, where ε is the efficiency; E is the energy in keV; and A, B, a and b are constants that depend on the sample geometry. All calibration curves showed the presence of a "knee" at about 180 keV. Comparison between the four geometries showed that the efficiency of the Marinelli beaker is higher than those of the cylindrical beaker and vial, while the disc geometry showed the lowest efficiency.
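A fit of the two-term power-law efficiency model above can be sketched with scipy. The energies are the ones quoted in the abstract, but the efficiency values and constants are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def eff_model(E, A, a, B, b):
    """Two-term power-law efficiency model: eps = A*E**a + B*E**b."""
    return A * E**a + B * E**b

# Energies from the abstract; efficiencies synthesized from invented constants
# chosen so the two terms cross over near the "knee" region, plus 1% noise.
E = np.array([53.0, 92.0, 186.0, 352.0, 609.0, 1001.0, 1460.0])   # keV
eps_true = eff_model(E, 0.5, -0.6, 200.0, -1.8)
eps = eps_true * (1 + 0.01 * np.random.default_rng(1).standard_normal(E.size))

popt, _ = curve_fit(eff_model, E, eps, p0=(0.4, -0.5, 150.0, -1.5), maxfev=20000)
print("fitted exponents a, b:", popt[1], popt[3])
```

A reasonable starting guess (`p0`) matters here: the two power-law terms are partially degenerate, so a poor initial point can send the optimizer to an equivalent solution with the terms swapped or to a local minimum.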
Improved Strategies and Optimization of Calibration Models for Real-time PCR Absolute Quantification
Real-time PCR absolute quantification applications rely on the use of standard curves to make estimates of DNA target concentrations in unknown samples. Traditional absolute quantification approaches dictate that a standard curve must accompany each experimental run. However, t...
Dilution Refrigerator for Nuclear Refrigeration and Cryogenic Thermometry Studies
NASA Astrophysics Data System (ADS)
Nakagawa, Hisashi; Hata, Tohru
2014-07-01
This study explores the design and construction of an ultra-low-temperature facility in order to realize the Provisional Low Temperature Scale from 0.9 mK to 1 K (PLTS-2000) in Japan, to disseminate its use through calibration services, and to study thermometry at temperatures below 1 K. To this end, a dilution refrigerator was constructed in-house that has four sintered-silver discrete heat exchangers for use as a precooling stage of a copper nuclear demagnetization stage. A melting-curve thermometer attached to the mixing-chamber flange could be cooled continuously to 4.0 mK using the refrigerator. The dependence of minimum temperatures on circulation rates can be explained by calculation with Frossati's formula based on a perfect continuous counterflow heat-exchanger model, assuming that the Kapitza resistance has a temperature dependence. Residual heat leakage to the mixing chamber was estimated to be around 86 nW. A nuclear demagnetization cryostat with a nuclear stage containing an effective amount of copper (51 mol in a 9 T magnetic field) is under construction, and we will presently start to work toward the realization of the PLTS-2000. In this article, the design and performance of the dilution refrigerator are reported.
Lin, Jie; Carter, Corey A; McGlynn, Katherine A; Zahm, Shelia H; Nations, Joel A; Anderson, William F; Shriver, Craig D; Zhu, Kangmin
2015-12-01
Accurate prognosis assessment after non-small-cell lung cancer (NSCLC) diagnosis is an essential step for making effective clinical decisions. This study aimed to develop a prediction model with routinely available variables to assess prognosis in patients with NSCLC in the U.S. Military Health System. We used the linked database from the Department of Defense's Central Cancer Registry and the Military Health System Data Repository. The data set was randomly and equally split into a training set to guide model development and a testing set to validate the model prediction. Stepwise Cox regression was used to identify predictors of survival. Model performance was assessed by calculating the area under the receiver operating characteristic curve and constructing calibration plots. A simple risk scoring system was developed to aid quick risk-score calculation and risk estimation for NSCLC clinical management. The study subjects were 5054 patients diagnosed with NSCLC between 1998 and 2007. Age, sex, tobacco use, tumor stage, histology, surgery, chemotherapy, peripheral vascular disease, cerebrovascular disease, and diabetes mellitus were identified as significant predictors of survival. Calibration showed high agreement between predicted and observed event rates. The area under the receiver operating characteristic curve reached 0.841, 0.849, 0.848, and 0.838 at 1, 2, 3, and 5 years, respectively. This is the first NSCLC prognosis model for quick risk assessment within the Military Health System. After external validation, the model can be translated into clinical use both as a web-based tool and through mobile applications easily accessible to physicians, patients, and researchers.
Use of multiple competitors for quantification of human immunodeficiency virus type 1 RNA in plasma.
Vener, T; Nygren, M; Andersson, A; Uhlén, M; Albert, J; Lundeberg, J
1998-07-01
Quantification of human immunodeficiency virus type 1 (HIV-1) RNA in plasma has rapidly become an important tool in basic HIV research and in the clinical care of infected individuals. Here, a quantitative HIV assay based on competitive reverse transcription-PCR with multiple competitors was developed. Four RNA competitors containing the same PCR primer binding sequences as the viral HIV-1 RNA target were constructed. One of the PCR primers was fluorescently labeled, which facilitated discrimination between the viral RNA and competitor amplicons by fragment analysis with conventional automated sequencers. The coamplification of known amounts of the RNA competitors provided the means to establish internal calibration curves for the individual reactions, eliminating tube-to-tube variation. Calibration curves were created from the peak areas, which were proportional to the starting amount of each competitor. The fluorescence detection format was expanded to provide a dynamic range of more than 5 log units. This quantitative assay allowed for reproducible analysis of samples containing as few as 40 viral copies of HIV-1 RNA per reaction. The within- and between-run coefficients of variation were <24% (range, 10 to 24) and <36% (range, 27 to 36), respectively. The high reproducibility (standard deviation, <0.13 log) of the overall procedure for quantification of HIV-1 RNA in plasma, including sample preparation, amplification, and detection variations, allowed reliable detection of a 0.5-log change in RNA viral load. The assay could be a useful tool for monitoring HIV-1 disease progression and antiviral treatment and can easily be adapted to the quantification of other pathogens.
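The internal-calibration idea, in which known competitor inputs versus measured peak areas define a per-reaction standard curve that is then inverted for the viral peak, can be sketched as follows. The copy numbers and peak areas are invented:

```python
import numpy as np

# Known competitor inputs per reaction and their measured peak areas (invented).
competitor_copies = np.array([1e2, 1e3, 1e4, 1e5])
peak_areas = np.array([210.0, 2050.0, 19800.0, 205000.0])   # ~proportional to input

# Per-reaction calibration curve: straight line in log-log space.
slope, intercept = np.polyfit(np.log10(competitor_copies), np.log10(peak_areas), 1)

# Invert the curve at the viral peak area to estimate viral copies.
viral_area = 8300.0
viral_copies = 10 ** ((np.log10(viral_area) - intercept) / slope)
print(f"estimated viral copies/reaction: {viral_copies:.0f}")
```

Because the curve is rebuilt from the competitors coamplified in the same tube, tube-to-tube differences in amplification efficiency cancel out of the estimate, which is the point the abstract makes.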
Thermal Imaging Performance of TIR Onboard the Hayabusa2 Spacecraft
NASA Astrophysics Data System (ADS)
Arai, Takehiko; Nakamura, Tomoki; Tanaka, Satoshi; Demura, Hirohide; Ogawa, Yoshiko; Sakatani, Naoya; Horikawa, Yamato; Senshu, Hiroki; Fukuhara, Tetsuya; Okada, Tatsuaki
2017-07-01
The thermal infrared imager (TIR) is a thermal infrared camera onboard the Hayabusa2 spacecraft. TIR will perform thermography of a C-type asteroid, 162173 Ryugu (1999 JU3), and estimate its surface physical properties, such as surface thermal emissivity ε, surface roughness, and thermal inertia Γ, through remote in-situ observations in 2018 and 2019. In prelaunch tests of TIR, detector calibrations and evaluations, along with imaging demonstrations, were performed. The present paper introduces the experimental results of a prelaunch test conducted using a large-aperture collimator in conjunction with TIR under atmospheric conditions. A blackbody source, controlled at constant temperature, was measured using TIR in order to construct a calibration curve for obtaining temperatures from the observed digital data. As a target of known thermal emissivity, a sandblasted black alumite plate warmed from the back by a flexible heater was measured by TIR in order to evaluate the accuracy of the calibration curve. As an analog target of a C-type asteroid, carbonaceous chondrites (50 mm × 2 mm in thickness) were also warmed from the back and measured using TIR in order to clarify the imaging performance of TIR. The calibration curve, which was fitted by a specific model of the Planck function, allowed for conversion to the target temperature within an error of 1°C (3σ standard deviation) for the temperature range of 30 to 100°C. The observed temperature of the black alumite plate was consistent with the temperature measured using K-type thermocouples, within the accuracy of temperature conversion using the calibration curve, when the temperature variation exhibited a random error of 0.3°C (1σ) for each pixel at a target temperature of 50°C. TIR can resolve the fine surface structure of meteorites, including cracks and pits, with the specified field of view of 0.051° (328 × 248 pixels).
There were spatial distributions with a temperature variation of 3°C at the setting temperature of 50°C in the thermal images obtained by TIR. If the spatial distribution of the temperature is caused by the variation of the thermal emissivity, including the effects of the surface roughness, the difference of the thermal emissivity Δε is estimated to be approximately 0.08, as calculated by the Stefan-Boltzmann law. Otherwise, if the distribution of temperature is caused by the variation of the thermal inertia, the difference of the thermal inertia ΔΓ is calculated to be approximately 150 J m^{-2} s^{0.5} K^{-1}, based on a simulation using a 20-layer model of the heat balance equation. The imaging performance of TIR based on the results of the meteorite experiments indicates that TIR can resolve the spatial distribution of thermal emissivity and thermal inertia of the asteroid surface within accuracies of Δε ≅ 0.02 and ΔΓ ≅ 20 J m^{-2} s^{0.5} K^{-1}, respectively. However, the effects of the thermal emissivity and thermal inertia will be degenerate in the thermal images of TIR. Therefore, TIR will observe the same areas of the asteroid surface numerous times (>10 times, in order to ensure statistical significance), which allows us to determine both the surface thermal emissivity and the thermal inertia by least-squares fitting to a thermal model of Ryugu.
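A toy version of the temperature calibration described above: blackbody measurements at known temperatures define a counts-versus-temperature curve through band-integrated Planck radiance, which is then inverted to recover temperature from counts. The detector gain, offset, and 8-12 µm band are invented; the actual TIR calibration model is more specific than this.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann

def band_radiance(T):
    """Planck spectral radiance integrated over an assumed 8-12 um band."""
    wl = np.linspace(8e-6, 12e-6, 200)
    B = 2 * H * C**2 / wl**5 / (np.exp(H * C / (wl * KB * T)) - 1)
    return float(np.sum(B) * (wl[1] - wl[0]))   # simple rectangle-rule integral

# Calibration: blackbody temperatures spanning 30-100 C vs. digital counts,
# with an invented linear detector response (gain and offset).
T_cal = np.arange(303.0, 374.0, 1.0)                       # kelvin
counts = 50.0 * np.array([band_radiance(T) for T in T_cal]) + 120.0

def counts_to_T(c):
    """Invert the calibration curve; counts grow monotonically with T."""
    return np.interp(c, counts, T_cal)

print(counts_to_T(counts[20]) - 273.15)   # recovers the ~50 C calibration point
```

Because band radiance is strictly increasing in temperature, plain interpolation suffices for the inversion; a fitted Planck-function model, as used for TIR, additionally smooths measurement noise across the calibration points.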
Holographic Entanglement Entropy, SUSY & Calibrations
NASA Astrophysics Data System (ADS)
Colgáin, Eoin Ó.
2018-01-01
Holographic calculations of entanglement entropy boil down to identifying minimal surfaces in curved spacetimes. This generically entails solving second-order equations. For higher-dimensional AdS geometries, we demonstrate that supersymmetry and calibrations reduce the problem to first-order equations. We note that minimal surfaces corresponding to disks preserve supersymmetry, whereas strips do not.
NASA Astrophysics Data System (ADS)
Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Chen, Junchao; Hu, Weigang
2015-10-01
GafChromic RTQA2 film is a type of radiochromic film designed for light-field and radiation-field alignment. The aim of this study is to extend the application of RTQA2 film to the measurement of patient-specific quality assurance (QA) fields as a 2D relative dosimeter. Pre-irradiated and post-irradiated RTQA2 films were scanned in reflection mode using a flatbed scanner. A plan-based calibration (PBC) method utilized the mapping between the calculated dose image and the film grayscale image to create a dose-versus-pixel-value calibration model. This model was used to calibrate the film grayscale image to a film relative dose image. The agreement between calculated and film dose images was analyzed by gamma analysis. To evaluate the feasibility of this method, eight clinically approved RapidArc cases (one abdominal cancer and seven head-and-neck cancer patients) were tested. Moreover, three MLC gap errors and two MLC transmission errors were introduced to the eight RapidArc cases to test the robustness of the method. The PBC method could overcome film-lot and post-exposure-time variations of RTQA2 film to yield a good 2D relative dose calibration. The mean gamma passing rate of the eight patients was 97.90% ± 1.7%, which showed good dose consistency between calculated and film dose images. In the error test, the PBC method could over-calibrate the film, meaning that some dose error in the film would be falsely corrected to keep the film dose consistent with the calculated dose image, leading to a false-negative result in the gamma analysis. In these cases, however, the derivative of the dose calibration curve becomes non-monotonic, which exposes the dose abnormality. By using the PBC method, we extended the application of the more economical RTQA2 film to patient-specific QA. The robustness of the PBC method was improved by analyzing the monotonicity of the derivative of the calibration curve.
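The robustness check proposed above, flagging a calibration curve whose derivative is non-monotonic, can be sketched as follows; the two illustrative curves stand in for real film calibration data:

```python
import numpy as np

def derivative_is_monotonic(pixel, dose):
    """True if the derivative of the dose-vs-pixel calibration curve never changes trend."""
    d = np.gradient(dose, pixel)     # numerical derivative of the calibration curve
    dd = np.diff(d)
    return bool(np.all(dd >= 0) or np.all(dd <= 0))

pixel = np.linspace(0.0, 1.0, 50)            # normalized pixel values
smooth = pixel**2                            # well-behaved curve: derivative rises steadily
kinked = pixel**2 + 0.05 * np.sin(12 * pixel)  # abnormal curve with local kinks

print(derivative_is_monotonic(pixel, smooth))   # True
print(derivative_is_monotonic(pixel, kinked))   # False
```

In the paper's workflow a `False` result would be the warning sign: the plan-based calibration has likely absorbed a delivery error into the curve itself, so the high gamma passing rate cannot be trusted.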
NASA Astrophysics Data System (ADS)
Zheng, Lijuan; Cao, Fan; Xiu, Junshan; Bai, Xueshi; Motto-Ros, Vincent; Gilon, Nicole; Zeng, Heping; Yu, Jin
2014-09-01
Laser-induced breakdown spectroscopy (LIBS) provides a technique to directly determine metals in viscous liquids, and especially in lubricating oils. A specific laser-ablation configuration, a thin layer of oil applied on the surface of a pure aluminum target, was used to evaluate the analytical figures of merit of LIBS for elemental analysis of lubricating oils. The analyzed oils comprised a certified 75 cSt blank mineral oil, 8 virgin lubricating oils (synthetic, semi-synthetic, or mineral, from 2 different manufacturers), 5 used oils (corresponding to 5 of the 8 virgin oils), and a cooking oil. The certified blank oil and 4 virgin lubricating oils were spiked with metallo-organic standards to obtain laboratory reference samples with different oil matrices. We first established calibration curves for 3 elements, Fe, Cr, and Ni, with the 5 sets of laboratory reference samples, in order to evaluate the matrix effect by comparison among the different oils. Our results show that generalized calibration curves can be built for the 3 analyzed elements by merging the measured line intensities of the 5 sets of spiked oil samples. Such merged calibration curves with good correlation of the merged data are only possible if no significant matrix effect affects the measurements of the different oils. In the second step, we spiked the remaining 4 virgin oils and the cooking oil with Fe, Cr, and Ni. The accuracy and precision of the concentration determination in these prepared oils were then evaluated using the generalized calibration curves. Finally, the concentrations of metallic elements in the 5 used lubricating oils were determined.
A comparison of solute-transport solution techniques based on inverse modelling results
Mehl, S.; Hill, M.C.
2000-01-01
Five common numerical techniques (finite difference, predictor-corrector, total-variation-diminishing, method-of-characteristics, and modified-method-of-characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using randomly distributed homogeneous blocks of five sand types. This experimental model provides an outstanding opportunity to compare the solution techniques because of the heterogeneous hydraulic conductivity distribution of known structure, and the availability of detailed measurements with which to compare simulated concentrations. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation, given the different methods of simulating solute transport. The results show that simulated peak concentrations, even at very fine grid spacings, varied because of different amounts of numerical dispersion. Sensitivity analysis results were robust in that they were independent of the solution technique. They revealed extreme correlation between hydraulic conductivity and porosity, and that the breakthrough curve data did not provide enough information about the dispersivities to estimate individual values for the five sands. However, estimated hydraulic conductivity values are significantly influenced by both the large possible variations in model dispersion and the amount of numerical dispersion present in the solution technique.
An Investigation of Acoustic Cavitation Produced by Pulsed Ultrasound
1987-12-01
Thesis by Robert L. Bruce, Jr., Naval Postgraduate School, Monterey, California. The report includes PVDF hydrophone sensitivity calibration curves; for the test and calibration technique, the reciprocity technique was chosen for calibration.
NASA Technical Reports Server (NTRS)
Allen, John
2001-01-01
Solid phase microextraction (SPME) will be used to develop a method for detecting monomethyl hydrazine (MMH) and hydrazine (Hz). A derivatizing agent, pentafluorobenzoyl chloride (PFBCl), is known to react readily with MMH and Hz. The SPME fiber can either be coated with PFBCl and introduced into a gaseous stream containing MMH, or PFBCl and MMH can react first in a syringe barrel and, after a short equilibration period, an SPME fiber is used to sample the resulting solution. These methods were optimized and compared. Because Hz and MMH can degrade the SPME fiber, letting the reaction occur first gave better results. Only MMH could be detected using either of these methods. Future research will concentrate on constructing calibration curves and determining the detection limit.
Radiance calibration of the High Altitude Observatory white-light coronagraph on Skylab
NASA Technical Reports Server (NTRS)
Poland, A. I.; Macqueen, R. M.; Munro, R. H.; Gosling, J. T.
1977-01-01
The processing of over 35,000 photographs of the solar corona obtained by the white-light coronagraph on Skylab is described. Calibration of the vast amount of data was complicated by temporal effects of radiation fog and latent-image loss. These effects were compensated by imaging a calibration step wedge on each data frame. Absolute calibration of the wedge was accomplished through comparison with a set of previously calibrated glass opal filters. Analysis employed average characteristic curves derived from measurements of step wedges from many frames within a given camera half-load. The net absolute accuracy of a given radiance measurement is estimated to be 20%.
ERIC Educational Resources Information Center
Kynigos, Chronis; Psycharis, Georgos
2003-01-01
We explore how 13-year-olds construct meanings around the notion of curvature in their classroom while working with software that combines symbolic notation for constructing geometrical figures with dynamic manipulation of variables. The ideas of curve as intrinsic dynamic construction, and curve as object with properties related to its positioning on…
The S-curve for forecasting waste generation in construction projects.
Lu, Weisheng; Peng, Yi; Chen, Xi; Skitmore, Martin; Zhang, Xiaoling
2016-10-01
Forecasting construction waste generation is the yardstick of any effort by policy-makers, researchers, practitioners and the like to manage construction and demolition (C&D) waste. This paper develops and tests an S-curve model to indicate accumulative waste generation as a project progresses. Using 37,148 disposal records generated from 138 building projects in Hong Kong in four consecutive years from January 2011 to June 2015, a wide range of potential S-curve models are examined, and as a result, the formula that best fits the historical data set is found. The S-curve model is then further linked to project characteristics using artificial neural networks (ANNs) so that it can be used to forecast waste generation in future construction projects. It was found that, among the S-curve models, cumulative logistic distribution is the best formula to fit the historical data. Meanwhile, contract sum, location, public-private nature, and duration can be used to forecast construction waste generation. The study provides contractors with not only an S-curve model to forecast overall waste generation before a project commences, but also with a detailed baseline to benchmark and manage waste during the course of construction. The major contribution of this paper is to the body of knowledge in the field of construction waste generation forecasting. By examining it with an S-curve model, the study elevates construction waste management to a level equivalent to project cost management where the model has already been readily accepted as a standard tool. Copyright © 2016 Elsevier Ltd. All rights reserved.
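The best-fitting form reported above, a cumulative logistic distribution, can be fitted to cumulative-waste data in a few lines. The progress and waste numbers here are synthetic, not drawn from the Hong Kong dataset:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_cdf(t, W, k, t0):
    """Cumulative waste at project progress t (0-1): total W, steepness k, midpoint t0."""
    return W / (1 + np.exp(-k * (t - t0)))

# Synthetic disposal records: cumulative waste (tonnes) at 20 progress points,
# generated from invented parameters plus 2% multiplicative noise.
progress = np.linspace(0.05, 1.0, 20)
waste = logistic_cdf(progress, 1200.0, 9.0, 0.55)
waste = waste * (1 + 0.02 * np.random.default_rng(7).standard_normal(20))

popt, _ = curve_fit(logistic_cdf, progress, waste, p0=(1000.0, 5.0, 0.5))
W_hat, k_hat, t0_hat = popt
print(f"estimated total waste: {W_hat:.0f} t, midpoint at {t0_hat:.2f} progress")
```

In the paper's framework, the fitted parameters would then be predicted from project characteristics (contract sum, location, public/private nature, duration) via the ANN, rather than refitted for each new project.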
Investigating quantitation of phosphorylation using MALDI-TOF mass spectrometry.
Parker, Laurie; Engel-Hall, Aaron; Drew, Kevin; Steinhardt, George; Helseth, Donald L; Jabon, David; McMurry, Timothy; Angulo, David S; Kron, Stephen J
2008-04-01
Despite advances in methods and instrumentation for analysis of phosphopeptides using mass spectrometry, it is still difficult to quantify the extent of phosphorylation of a substrate because of physicochemical differences between unphosphorylated and phosphorylated peptides. Here we report experiments to investigate those differences using MALDI-TOF mass spectrometry for a set of synthetic peptides by creating calibration curves of known input ratios of peptides/phosphopeptides and analyzing their resulting signal intensity ratios. These calibration curves reveal subtleties in sequence-dependent differences for relative desorption/ionization efficiencies that cannot be seen from single-point calibrations. We found that the behaviors were reproducible, with a variability of 5-10% in observed phosphopeptide signal. Although these data allow us to begin addressing the issues related to modeling these properties and predicting relative signal strengths for other peptide sequences, it is clear that this behavior is highly complex and needs to be further explored. © John Wiley & Sons, Ltd.
Investigating quantitation of phosphorylation using MALDI-TOF mass spectrometry
Parker, Laurie; Engel-Hall, Aaron; Drew, Kevin; Steinhardt, George; Helseth, Donald L.; Jabon, David; McMurry, Timothy; Angulo, David S.; Kron, Stephen J.
2010-01-01
Despite advances in methods and instrumentation for analysis of phosphopeptides using mass spectrometry, it is still difficult to quantify the extent of phosphorylation of a substrate due to physiochemical differences between unphosphorylated and phosphorylated peptides. Here we report experiments to investigate those differences using MALDI-TOF mass spectrometry for a set of synthetic peptides by creating calibration curves of known input ratios of peptides/phosphopeptides and analyzing their resulting signal intensity ratios. These calibration curves reveal subtleties in sequence-dependent differences for relative desorption/ionization efficiencies that cannot be seen from single-point calibrations. We found that the behaviors were reproducible with a variability of 5–10% for observed phosphopeptide signal. Although these data allow us to begin addressing the issues related to modeling these properties and predicting relative signal strengths for other peptide sequences, it is clear this behavior is highly complex and needs to be further explored. PMID:18064576
Luczak, Susan E; Hawkins, Ashley L; Dai, Zheng; Wichmann, Raphael; Wang, Chunming; Rosen, I Gary
2018-08-01
Biosensors have been developed to measure transdermal alcohol concentration (TAC), but converting TAC into interpretable indices of blood/breath alcohol concentration (BAC/BrAC) is difficult because of variations that occur in TAC across individuals, drinking episodes, and devices. We have developed mathematical models and the BrAC Estimator software for calibrating and inverting TAC into quantifiable BrAC estimates (eBrAC). The calibration protocol to determine the individualized parameters for a specific individual wearing a specific device requires a drinking session in which BrAC and TAC measurements are obtained simultaneously. This calibration protocol was originally conducted in the laboratory, with breath analyzers used to produce the BrAC data. Here we develop and test an alternative calibration protocol using drinking diary data collected in the field with the smartphone app Intellidrink to produce the BrAC calibration data. We compared BrAC Estimator software results for 11 drinking episodes collected by an expert user when using Intellidrink versus breath analyzer measurements as BrAC calibration data. Inversion-phase results indicated that the Intellidrink calibration protocol produced eBrAC curves similar to those of the breath analyzer protocol, capturing peak eBrAC to within 0.0003%, time of peak eBrAC to within 18 min, and area under the eBrAC curve to within 0.025% alcohol-hours. This study provides evidence that drinking diary data can be used in place of breath analyzer data in the BrAC Estimator software calibration procedure, which can reduce participant and researcher burden and expand the potential software user pool beyond researchers studying participants who can drink in the laboratory. Copyright © 2017. Published by Elsevier Ltd.
Crash prediction modeling for curved segments of rural two-lane two-way highways in Utah.
DOT National Transportation Integrated Search
2015-10-01
This report contains the results of the development of crash prediction models for curved segments of rural two-lane two-way highways in the state of Utah. The modeling effort included the calibration of the predictive model found in the Highway ...
Fang, Cheng; Butler, David Lee
2013-05-01
In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. Mathematically, the number of samples can be minimized by eliminating the redundant equations among those configured by the experimental data array. The section lengths of the artefact are measured at arranged positions, with which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the uncertainty of the measurement can be reduced to 50% of its original value.
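The error-compensation step described in this abstract can be sketched as follows. For simplicity, this illustration interpolates linearly between calibrated positions (the paper uses spline interpolation), and the positions and error values are invented for illustration only:

```python
from bisect import bisect_right

def compensate(position, cal_positions, cal_errors):
    """Correct a raw CMM reading by subtracting the interpolated
    systematic error at that position (linear interpolation between
    calibrated sample points)."""
    if not cal_positions[0] <= position <= cal_positions[-1]:
        raise ValueError("position outside calibrated range")
    i = bisect_right(cal_positions, position)
    if i == len(cal_positions):       # position at the last sample point
        i -= 1
    x0, x1 = cal_positions[i - 1], cal_positions[i]
    e0, e1 = cal_errors[i - 1], cal_errors[i]
    t = (position - x0) / (x1 - x0) if x1 != x0 else 0.0
    error = e0 + t * (e1 - e0)        # interpolated compensation value
    return position - error

# hypothetical calibration data: positions (mm) and measured errors (mm)
positions = [0.0, 50.0, 100.0, 150.0, 200.0]
errors = [0.0, 0.002, 0.0035, 0.003, 0.001]
print(compensate(75.0, positions, errors))
```

A spline would replace only the interpolation formula; the table-lookup-and-subtract structure of the compensation stays the same.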
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, David R.; Brown, Richard S.; Lepla, Ken
One of the practical problems with quantifying the amount of energy used by fish implanted with electromyogram (EMG) radio transmitters is that the signals emitted by the transmitter provide only a relative index of activity unless they are calibrated to the swimming speed of the fish. Ideally, calibration would be conducted for each fish before it is released, but this is often not possible, and calibration curves derived from more than one fish are used to interpret EMG signals from individuals that have not been calibrated. We tested the validity of this approach by comparing EMG data within three groups of three wild juvenile white sturgeon Acipenser transmontanus implanted with the same EMG radio transmitter. We also tested an additional six fish which were implanted with separate EMG transmitters. Within each group, a single EMG radio transmitter usually did not produce similar results in different fish. Grouping EMG signals among fish produced less accurate results than having individual EMG-swim speed relationships for each fish. It is unknown whether these differences were a result of different swimming performances among individual fish or inconsistencies in the placement or function of the EMG transmitters. In either case, our results suggest that caution should be used when applying calibration curves from one group of fish to another group of uncalibrated fish.
An update on 'dose calibrator' settings for nuclides used in nuclear medicine.
Bergeron, Denis E; Cessna, Jeffrey T
2018-06-01
Most clinical measurements of radioactivity, whether for therapeutic or imaging nuclides, rely on commercial re-entrant ionization chambers ('dose calibrators'). The National Institute of Standards and Technology (NIST) maintains a battery of representative calibrators and works to link calibration settings ('dial settings') to primary radioactivity standards. Here, we provide a summary of NIST-determined dial settings for 22 radionuclides. We collected previously published dial settings and determined some new ones using either the calibration curve method or the dialing-in approach. The dial settings with their uncertainties are collected in a comprehensive table. In general, current manufacturer-provided calibration settings give activities that agree with NIST standards to within a few percent.
The use of megavoltage CT (MVCT) images for dose recomputations
NASA Astrophysics Data System (ADS)
Langen, K. M.; Meeks, S. L.; Poole, D. O.; Wagner, T. H.; Willoughby, T. R.; Kupelian, P. A.; Ruchala, K. J.; Haimerl, J.; Olivera, G. H.
2005-09-01
Megavoltage CT (MVCT) images of patients are acquired daily on a helical tomotherapy unit (TomoTherapy, Inc., Madison, WI). While these images are used primarily for patient alignment, they can also be used to recalculate the treatment plan for the patient anatomy of the day. The use of MVCT images for dose computations requires a reliable CT number to electron density calibration curve. In this work, we tested the stability of the MVCT numbers by determining the variation of this calibration with spatial arrangement of the phantom, time and MVCT acquisition parameters. The two calibration curves that represent the largest variations were applied to six clinical MVCT images for recalculations to test for dosimetric uncertainties. Among the six cases tested, the largest difference in any of the dosimetric endpoints was 3.1% but more typically the dosimetric endpoints varied by less than 2%. Using an average CT to electron density calibration and a thorax phantom, a series of end-to-end tests were run. Using a rigid phantom, recalculated dose volume histograms (DVHs) were compared with plan DVHs. Using a deformed phantom, recalculated point dose variations were compared with measurements. The MVCT field of view is limited and the image space outside this field of view can be filled in with information from the planning kVCT. This merging technique was tested for a rigid phantom. Finally, the influence of the MVCT slice thickness on the dose recalculation was investigated. The dosimetric differences observed in all phantom tests were within the range of dosimetric uncertainties observed due to variations in the calibration curve. The use of MVCT images allows the assessment of daily dose distributions with an accuracy that is similar to that of the initial kVCT dose calculation.
Pedicle screw versus hybrid posterior instrumentation for dystrophic neurofibromatosis scoliosis.
Wang, Jr-Yi; Lai, Po-Liang; Chen, Wen-Jer; Niu, Chi-Chien; Tsai, Tsung-Ting; Chen, Lih-Huei
2017-06-01
Surgical management of severe rigid dystrophic neurofibromatosis (NF) scoliosis is technically demanding and produces varying results. In the current study, we reviewed 9 patients who were treated with combined anterior and posterior fusion using different types of instrumentation (i.e., pedicle screw, hybrid, and all-hook constructs) at our institute.Between September 2001 and July 2010 at our institute, 9 patients received anterior release/fusion and posterior fusion with different types of instrumentation, including a pedicle screw construct (n = 5), a hybrid construct (n = 3), and an all-hook construct (n = 1). We compared the pedicle screw group with the hybrid group to analyze differences in preoperative curve angle, immediate postoperative curve reduction, and latest follow-up curve angle.The mean follow-up period was 9.5 ± 2.9 years. The average age at surgery was 10.3 ± 3.9 years. The average preoperative scoliosis curve was 61.3 ± 13.8°, and the average preoperative kyphosis curve was 39.8 ± 19.7°. The average postoperative scoliosis and kyphosis curves were 29.7 ± 10.7° and 21.0 ± 13.5°, respectively. The most recent follow-up scoliosis and kyphosis curves were 43.4 ± 17.3° and 29.4 ± 18.9°, respectively. There was no significant difference in the correction angle (either coronal or sagittal), and there was no significant difference in the loss of sagittal correction between the pedicle screw construct group and the hybrid construct group. However, the patients who received pedicle screw constructs had significantly less loss of coronal correction (P < .05). Two patients with posterior instrumentation, one with an all-hook construct and the other with a hybrid construct, required surgical revision because of progression of deformity.It is difficult to intraoperatively correct dystrophic deformity and to maintain this correction after surgery. 
Combined anterior release/fusion and posterior fusion using either a pedicle screw construct or a hybrid construct provide similar curve corrections both sagittally and coronally. After long-term follow-up, sagittal correction was maintained with both constructs. However, patients treated with posterior instrumentation using pedicle screw constructs had significantly less loss of coronal correction.
Jover-Esplá, Ana Gabriela; Palazón-Bru, Antonio; Folgado-de la Rosa, David Manuel; Severá-Ferrándiz, Guillermo; Sancho-Mestre, Manuela; de Juan-Herrero, Joaquín; Gil-Guillén, Vicente Francisco
2018-05-01
The existing predictive models of laryngeal cancer recurrence present limitations for clinical practice. Therefore, we constructed, internally validated, and implemented in a mobile (Android) application a new model based on a points system, following internationally recommended statistical methodology. This longitudinal prospective study included 189 patients with glottic cancer in 2004-2016 in a Spanish region. The main variable was time-to-recurrence, and its potential predictors were: age, gender, TNM classification, stage, smoking, alcohol consumption, and histology. A points system was developed to predict five-year risk of recurrence based on a Cox model. This was validated internally by bootstrapping, determining discrimination (C-statistics) and calibration (smooth curves). A total of 77 patients (40.7%) experienced recurrence over a mean follow-up period of 3.4 ± 3.0 years. The factors in the model were: age, lymph node stage, alcohol consumption and stage. Discrimination and calibration were satisfactory. A points system was developed to obtain the probability of recurrence of laryngeal glottic cancer within five years, using five clinical variables. Our system should be validated externally in other geographical areas. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gustafsson, Johan; Brolin, Gustav; Cox, Maurice; Ljungberg, Michael; Johansson, Lena; Sjögreen Gleisner, Katarina
2015-11-01
A computer model of a patient-specific clinical 177Lu-DOTATATE therapy dosimetry system is constructed and used for investigating the variability of renal absorbed dose and biologically effective dose (BED) estimates. As patient models, three anthropomorphic computer phantoms coupled to a pharmacokinetic model of 177Lu-DOTATATE are used. Aspects included in the dosimetry-process model are the gamma-camera calibration via measurement of the system sensitivity, selection of imaging time points, generation of mass-density maps from CT, SPECT imaging, volume-of-interest delineation, calculation of absorbed-dose rate via a combination of local energy deposition for electrons and Monte Carlo simulations of photons, curve fitting and integration to absorbed dose and BED. By introducing variabilities in these steps the combined uncertainty in the output quantity is determined. The importance of different sources of uncertainty is assessed by observing the decrease in standard deviation when removing a particular source. The obtained absorbed dose and BED standard deviations are approximately 6% and slightly higher if considering the root mean square error. The most important sources of variability are the compensation for partial volume effects via a recovery coefficient and the gamma-camera calibration via the system sensitivity.
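The "remove one source and observe the decrease in standard deviation" procedure used in this dosimetry study can be sketched with a toy Monte Carlo model. The per-step relative uncertainties below are invented placeholders, not the paper's values, and the multiplicative-error model is a simplification of the full imaging chain:

```python
import random
import statistics

def combined_sd(sources, n=20000, seed=1):
    """Monte Carlo estimate of the relative standard deviation of a
    dose estimate whose processing steps each contribute an independent
    multiplicative error (normal, with the given relative SDs)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        dose = 1.0
        for sd in sources.values():
            dose *= rng.gauss(1.0, sd)   # perturb this processing step
        samples.append(dose)
    return statistics.stdev(samples)

# hypothetical per-step relative uncertainties (illustrative only)
sources = {"sensitivity": 0.03, "recovery": 0.045, "curve_fit": 0.02}
full = combined_sd(sources)
for name in sources:
    reduced = combined_sd({k: v for k, v in sources.items() if k != name})
    print(f"removing {name}: SD drops from {full:.3f} to {reduced:.3f}")
```

With this ranking approach, the step whose removal produces the largest drop (here the recovery-coefficient term) is identified as the dominant uncertainty source, mirroring the paper's conclusion about partial-volume compensation.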
Deviation rectification for dynamic measurement of rail wear based on coordinate sets projection
NASA Astrophysics Data System (ADS)
Wang, Chao; Ma, Ziji; Li, Yanfu; Zeng, Jiuzhen; Jin, Tan; Liu, Hongli
2017-10-01
Dynamic measurement of rail wear using a laser imaging system suffers from random vibrations in the laser-based imaging sensor which cause distorted rail profiles. In this paper, a simple and effective method for rectifying profile deviation is presented to address this issue. There are two main steps: profile recognition and distortion calibration. According to the constant camera and projector parameters, efficient recognition of measured profiles is achieved by analyzing the geometric difference between normal profiles and distorted ones. For a distorted profile, by constructing coordinate sets projecting from it to the standard one on triple projecting primitives, including the rail head inner line, rail waist curve and rail jaw, iterative extrinsic camera parameter self-compensation is implemented. The distortion is calibrated by projecting the distorted profile onto the x-y plane of a measuring coordinate frame, which is parallel to the rail cross section, to eliminate the influence of random vibrations in the laser-based imaging sensor. As well as evaluating the implementation with comprehensive experiments, we also compare our method with other published works. The results exhibit the effectiveness and superiority of our method for the dynamic measurement of rail wear.
A scattering methodology for droplet sizing of e-cigarette aerosols.
Pratte, Pascal; Cosandey, Stéphane; Goujon-Ginglinger, Catherine
2016-10-01
Knowledge of the droplet size distribution of inhalable aerosols is important to predict aerosol deposition yield at various respiratory tract locations in humans. Optical methodologies are usually preferred over the multi-stage cascade impactor for high-throughput measurements of aerosol particle/droplet size distributions. The objective was to evaluate Laser Aerosol Spectrometer technology, based on a polystyrene latex sphere (PSL) calibration curve, for the experimental determination of droplet size distributions in the diameter range typical of commercial e-cigarette aerosols (147-1361 nm). This calibration procedure was tested for a TSI Laser Aerosol Spectrometer (LAS) operating at a wavelength of 633 nm and assessed against model di-ethyl-hexyl-sebacat (DEHS) droplets and e-cigarette aerosols. The PSL size response was measured, and intra- and between-day standard deviations calculated. DEHS droplet sizes were underestimated by 15-20% by the LAS when the PSL calibration curve was used; however, the intra- and between-day relative standard deviations were < 3%. This bias is attributed to the fact that the index of refraction of the PSL calibration particles differs from that of the test aerosols. This 15-20% does not include the droplet evaporation component, which may reduce droplet size before a measurement is performed. Aerosol concentration was measured accurately, with a maximum uncertainty of 20%. Count median diameters and mass median aerodynamic diameters of selected e-cigarette aerosols ranged from 130 to 191 nm and from 225 to 293 nm, respectively, similar to published values. The LAS instrument can be used to measure e-cigarette aerosol droplet size distributions with a bias underestimating the expected value by 15-20% when using a precise PSL calibration curve.
Controlled variability of DEHS size measurements can be achieved with the LAS system; however, this method can only be applied to test aerosols having a refractive index close to that of PSL particles used for calibration.
Measurement and models of bent KAP(001) crystal integrated reflectivity and resolution (invited)
NASA Astrophysics Data System (ADS)
Loisel, G. P.; Wu, M.; Stolte, W.; Kruschwitz, C.; Lake, P.; Dunham, G. S.; Bailey, J. E.; Rochau, G. A.
2016-11-01
The Advanced Light Source beamline-9.3.1 x-rays are used to calibrate the rocking curve of bent potassium acid phthalate (KAP) crystals in the 2.3-4.5 keV photon-energy range. Crystals are bent on a cylindrically convex substrate with a radius of curvature ranging from 2 to 9 in. and also including the flat case to observe the effect of bending on the KAP spectrometric properties. As the bending radius increases, the crystal reflectivity converges to the mosaic crystal response. The X-ray Oriented Programs (xop) multi-lamellar model of bent crystals is used to model the rocking curve of these crystals and the calibration data confirm that a single model is adequate to reproduce simultaneously all measured integrated reflectivities and rocking-curve FWHM for multiple radii of curvature in both 1st and 2nd order of diffraction.
Hondrogiannis, Ellen M; Ehrlinger, Erin; Poplaski, Alyssa; Lisle, Meredith
2013-11-27
A total of 11 elements found in 25 vanilla samples from Uganda, Madagascar, Indonesia, and Papua New Guinea were measured by laser ablation-inductively coupled plasma-time-of-flight-mass spectrometry (LA-ICP-TOF-MS) for the purpose of collecting data that could be used to discriminate among the origins. Pellets were prepared from the samples, and elemental concentrations were obtained on the basis of external calibration curves created using five National Institute of Standards and Technology (NIST) standards and one Chinese standard with ¹³C internal standardization. These curves were validated using NIST 1573a (tomato leaves) as a check standard. Discriminant analysis was used to successfully classify the vanilla samples by their origin. Our method illustrates the feasibility of using LA-ICP-TOF-MS with an external calibration curve for high-throughput screening analysis of spices.
Effect of nonideal square-law detection on static calibration in noise-injection radiometers
NASA Technical Reports Server (NTRS)
Hearn, C. P.
1984-01-01
The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.
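The effect described in this abstract can be illustrated numerically: model the detector output with a small quadratic-in-power term (standing in for the fourth-order curvature of the detection characteristic), fit the optimum least-squares straight line over the calibration range, and take the largest residual as the minimum calibration error. The coefficients below are hypothetical, not taken from the paper:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares straight line y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# hypothetical detector: slightly non-square-law response
c1, c2 = 1.0, 0.05                     # c2 stands in for the curvature term
powers = [i / 20 for i in range(21)]   # normalized input power, 0..1
volts = [c1 * p + c2 * p * p for p in powers]

# optimum straight-line calibration and its worst-case residual
a, b = linear_fit(powers, volts)
max_err = max(abs(v - (a + b * p)) for p, v in zip(powers, volts))
print(f"max deviation from best straight line: {max_err:.4f}")
```

As in the paper's analysis, the residual is fixed by the power-series coefficient of the nonlinear term (here c2), so measuring that coefficient quantifies the radiometric calibration error directly.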
Assessing and calibrating the ATR-FTIR approach as a carbonate rock characterization tool
NASA Astrophysics Data System (ADS)
Henry, Delano G.; Watson, Jonathan S.; John, Cédric M.
2017-01-01
ATR-FTIR (attenuated total reflectance Fourier transform infrared) spectroscopy can be used as a rapid and economical tool for qualitative identification of carbonates, calcium sulphates, oxides and silicates, as well as for quantitatively estimating the concentration of minerals. Over 200 powdered samples with known concentrations of two-, three-, four- and five-phase mixtures were made, and a suite of calibration curves was derived that can be used to quantify the minerals. The calibration curves in this study have an R2 that ranges from 0.93-0.99, a RMSE (root mean square error) of 1-5 wt.% and a maximum error of 3-10 wt.%. The calibration curves were used on 35 geological samples that had previously been studied using XRD (X-ray diffraction). The identification of the minerals using ATR-FTIR is comparable with XRD, and the quantitative results have a RMSD (root mean square deviation) of 14% and 12% for calcite and dolomite, respectively, when compared to XRD results. ATR-FTIR is a rapid technique (identification and quantification take < 5 min) that involves virtually no cost if the machine is available. It is a common tool in most analytical laboratories, but it also has the potential to be deployed on a rig for real-time acquisition of the mineralogy of cores and rock chips at the surface, since it requires no special sample preparation and offers rapid data collection and easy analysis.
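Calibration-curve statistics like the R² and RMSE quoted in this abstract are computed in the standard way from known versus predicted concentrations. A minimal sketch, with invented calcite data rather than the study's own:

```python
def calibration_metrics(known, predicted):
    """R^2 and RMSE of predicted concentrations against known values."""
    n = len(known)
    mean = sum(known) / n
    ss_res = sum((k - p) ** 2 for k, p in zip(known, predicted))
    ss_tot = sum((k - mean) ** 2 for k in known)
    r2 = 1 - ss_res / ss_tot          # coefficient of determination
    rmse = (ss_res / n) ** 0.5        # root mean square error, same units
    return r2, rmse

# hypothetical known vs. FTIR-predicted calcite contents (wt.%)
known = [0, 10, 25, 50, 75, 90, 100]
pred = [1, 9, 27, 48, 77, 88, 99]
r2, rmse = calibration_metrics(known, pred)
print(f"R^2 = {r2:.3f}, RMSE = {rmse:.1f} wt.%")
```

RMSE carries the units of the measured quantity (here wt.%), which is why the paper reports it alongside the dimensionless R².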
Risk scores for outcome in bacterial meningitis: Systematic review and external validation study.
Bijlsma, Merijn W; Brouwer, Matthijs C; Bossuyt, Patrick M; Heymans, Martijn W; van der Ende, Arie; Tanck, Michael W T; van de Beek, Diederik
2016-11-01
To perform an external validation study of risk scores, identified through a systematic review, predicting outcome in community-acquired bacterial meningitis. MEDLINE and EMBASE were searched for articles published between January 1960 and August 2014. Performance was evaluated in 2108 episodes of adult community-acquired bacterial meningitis from two nationwide prospective cohort studies by the area under the receiver operating characteristic curve (AUC), the calibration curve, calibration slope or Hosmer-Lemeshow test, and the distribution of calculated risks. Nine risk scores were identified predicting death, neurological deficit or death, or unfavorable outcome at discharge in bacterial meningitis, pneumococcal meningitis and invasive meningococcal disease. Most studies had shortcomings in design, analyses, and reporting. Evaluation showed AUCs of 0.59 (0.57-0.61) and 0.74 (0.71-0.76) in bacterial meningitis, 0.67 (0.64-0.70) in pneumococcal meningitis, and 0.81 (0.73-0.90), 0.82 (0.74-0.91), 0.84 (0.75-0.93), 0.84 (0.76-0.93), 0.85 (0.75-0.95), and 0.90 (0.83-0.98) in meningococcal meningitis. Calibration curves showed adequate agreement between predicted and observed outcomes for four scores, but statistical tests indicated poor calibration of all risk scores. One score could be recommended for the interpretation and design of bacterial meningitis studies. None of the existing scores performed well enough to recommend routine use in individual patient management. Copyright © 2016 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
Tobin, M F; Pratt, R B; Jacobsen, A L; De Guzman, M E
2013-05-01
Vulnerability to cavitation curves describe the decrease in xylem hydraulic conductivity as xylem pressure declines. Several techniques for constructing vulnerability curves use centrifugal force to induce negative xylem pressure in stem or root segments. Centrifuge vulnerability curves constructed for long-vesselled species have been hypothesised to overestimate xylem vulnerability to cavitation due to increased vulnerability of vessels cut open at stem ends that extend to the middle or entirely through segments. We tested two key predictions of this hypothesis: (i) centrifugation induces greater embolism than dehydration in long-vesselled species, and (ii) the proportion of open vessels changes centrifuge vulnerability curves. Centrifuge and dehydration vulnerability curves were compared for a long- and short-vesselled species. The effect of open vessels was tested in four species by comparing centrifuge vulnerability curves for stems of two lengths. Centrifuge and dehydration vulnerability curves agreed well for the long- and short-vesselled species. Centrifuge vulnerability curves constructed using two stem lengths were similar. Also, the distribution of embolism along the length of centrifuged stems matched the theoretical pressure profile induced by centrifugation. We conclude that vulnerability to cavitation can be accurately characterised with vulnerability curves constructed using a centrifuge technique, even in long-vesselled species. © 2012 German Botanical Society and The Royal Botanical Society of the Netherlands.
Construction of an electrode modified with gallium(III) for voltammetric detection of ovalbumin.
Sugawara, Kazuharu; Okusawa, Makoto; Takano, Yusaku; Kadoya, Toshihiko
2014-01-01
Electrodes modified with gallium(III) complexes were constructed to detect ovalbumin (OVA). For immobilization of a gallium(III)-nitrilotriacetate (NTA) complex, the electrode was first covered with collagen film. After the amino groups of the film had reacted with isothiocyanobenzyl-NTA, the gallium(III) was then able to combine with the NTA moieties. Another design featured an electrode cast with a gallium(III)-acetylacetonate (AA) complex. The amount of gallium(III) in the NTA complex was equivalent to one-quarter of the gallium(III) that could be utilized from an AA complex. However, the calibration curves of OVA using the gallium(III)-NTA and gallium(III)-AA complexes were linear in the ranges of 7.0 × 10⁻¹¹-3.0 × 10⁻⁹ M and 5.0 × 10⁻¹⁰-8.0 × 10⁻⁹ M, respectively. The gallium(III) on the electrode with the NTA complex had high flexibility due to the existence of a spacer between the NTA and the collagen film, and, therefore, the reactivity of the gallium(III) to OVA was superior to that of the gallium(III)-AA complex with no spacer.
Ibrahim, Fawzia; El-Enany, Nahed; El-Shaheny, Rania N; Mikhail, Ibraam E
2015-01-01
The first HPLC method was developed for the simultaneous determination of paracetamol (PC), ascorbic acid (AA), and pseudoephedrine HCl (PE) in their co-formulated tablets. Separation was achieved on a C18 column in 5 min using a mobile phase composed of methanol-0.05 M phosphate buffer (35:65, v/v) at pH 2.5 with UV detection at 220 nm. Linear calibration curves were constructed over concentration ranges of 1.0-50.0, 3.0-60.0 and 3.0-80.0 μg mL⁻¹ for PC, AA, and PE, respectively. The method was validated and applied for the simultaneous determination of these drugs in their tablets with average % recoveries of 101.17 ± 0.67, 98.34 ± 0.77, and 98.95 ± 1.11%, for PC, AA, and PE, respectively. The proposed method was also used to construct in vitro dissolution profiles of the co-formulated tablets containing the three drugs.
NASA Astrophysics Data System (ADS)
Díaz, Daniel; Molina, Alejandro; Hahn, David
2018-07-01
The influence of laser irradiance and wavelength on the analysis of gold and silver in ore and surrogate samples with laser-induced breakdown spectroscopy (LIBS) was evaluated. Gold-doped mineral samples (surrogates) and ore samples containing naturally-occurring gold and silver were analyzed with LIBS using 1064 and 355 nm laser wavelengths at irradiances from 0.36 × 10⁹ to 19.9 × 10⁹ W/cm² and 0.97 × 10⁹ to 4.3 × 10⁹ W/cm², respectively. The LIBS net, background and signal-to-background signals were analyzed. For all irradiances, wavelengths, samples and analytes, the calibration curves behaved linearly for concentrations from 1 to 9 μg/g gold (surrogate samples) and 0.7 to 47.0 μg/g silver (ore samples). However, it was not possible to prepare calibration curves for gold-bearing ore samples (at any concentration) or for gold-doped surrogate samples with gold concentrations below 1 μg/g. Calibration curve parameters for gold-doped surrogate samples were statistically invariant at 1064 and 355 nm. In contrast, the silver in the ore samples showed higher emission intensity at 1064 nm, but signal-to-background normalization reduced the effect of laser wavelength on the silver calibration plots. The gold-doped calibration curve metrics improved at higher laser irradiance, but this did not translate into lower limits of detection. While coefficients of determination (R²) and limits of detection did not vary significantly with laser wavelength, the LIBS repeatability at 355 nm improved by up to 50% with respect to that at 1064 nm. Plasma diagnostics by the Boltzmann and Stark broadening methods showed that the plasma temperature and electron density did not follow a specific trend as the wavelength changed for the delay and gate times used. 
This research presents supporting evidence that the LIBS discrete sampling features combined with the discrete and random distribution of gold in minerals hinder gold analysis by LIBS in ore samples; however, the use of higher laser irradiances at 1064 nm increased the probability of sampling and detecting naturally-occurring gold.
NASA Astrophysics Data System (ADS)
Engeland, Kolbjørn; Steinsland, Ingelin; Johansen, Stian Solvang; Petersen-Øverleir, Asgeir; Kolberg, Sjur
2016-05-01
In this study, we explore the effect of uncertainty and poor observation quality on hydrological model calibration and predictions. The Osali catchment in Western Norway was selected as the case study, and an elevation-distributed HBV model was used. We systematically evaluated the effect of accounting for uncertainty in parameters, precipitation input, temperature input and streamflow observations. For precipitation and temperature we accounted for the interpolation uncertainty, and for streamflow we accounted for rating curve uncertainty. Further, the effects of poorer-quality precipitation input and streamflow observations were explored. Less information about precipitation was obtained by excluding the nearest precipitation station from the analysis, while reduced information about the streamflow was obtained by omitting the highest and lowest streamflow observations when estimating the rating curve. The results showed that including uncertainty in the precipitation and temperature inputs has a negligible effect on the posterior distribution of parameters and on the Nash-Sutcliffe (NS) efficiency for the predicted flows, while the reliability and the continuous rank probability score (CRPS) improve. Less information in the precipitation input resulted in a shift in the water balance parameter Pcorr and a model producing smoother streamflow predictions, giving poorer NS and CRPS but higher reliability. The effect of calibrating the hydrological model using streamflow observations based on different rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions, the best evaluation scores were achieved not for the rating curve used for calibration, but for rating curves giving smoother streamflow observations. Less information in streamflow influenced the water balance parameter Pcorr, and increased the spread in evaluation scores by giving both better and worse scores.
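The Nash-Sutcliffe (NS) efficiency used above to score the predicted flows is straightforward to compute. A minimal sketch with hypothetical streamflow data (not from the Osali catchment):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 is a perfect fit, 0 means the
    model is no better than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    ss_err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_obs = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_err / ss_obs

# hypothetical daily streamflows (m^3/s)
obs = [5.0, 7.0, 12.0, 30.0, 18.0, 9.0, 6.0]
sim = [5.5, 6.5, 13.0, 27.0, 19.5, 9.5, 6.5]
print(f"NS = {nash_sutcliffe(obs, sim):.3f}")
```

Because the squared-error numerator is dominated by large flows, NS rewards peak-flow fit, which is why the study pairs it with reliability and CRPS when judging smoother predictions.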
Code of Federal Regulations, 2014 CFR
2014-07-01
... native AOI concentration (ppm) of the effluent during stable conditions. (14) Post-test calibration. At... or removal efficiencies must be determined while etching a substrate (product, dummy, or test). For... curves for the subsequent destruction or removal efficiency tests. (8) Mass location calibration. A...
Vosough, Maryam; Mohamedian, Hadi; Salemi, Amir; Baheri, Tahmineh
2015-02-01
In the present study, a simple strategy based on solid-phase extraction (SPE) with a cation-exchange sorbent (Finisterre SCX) followed by fast high-performance liquid chromatography (HPLC) with diode array detection, coupled with chemometric tools, has been proposed for the determination of methamphetamine and pseudoephedrine in ground water and river water. At first, the HPLC and SPE conditions were optimized and the analytical performance of the method was determined. In the case of ground water, determination of the analytes was successfully performed through univariate calibration curves. For the river water sample, multivariate curve resolution-alternating least squares was implemented, and the second-order advantage was achieved in samples containing uncalibrated interferences and uncorrected background signals. The calibration curves showed good linearity (r² > 0.994). The limits of detection for pseudoephedrine and methamphetamine were 0.06 and 0.08 μg/L, and the average recovery values were 104.7% and 102.3% in river water, respectively.
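The univariate calibration step described above can be illustrated with a short sketch; the concentrations and peak areas below are hypothetical, and the LOD convention (3 × residual standard deviation / slope) is one common choice, not necessarily the one the study used:

```python
import numpy as np

# Hypothetical spiked-standard data (conc in ug/L, peak area in arbitrary units)
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
area = np.array([102.0, 205.0, 398.0, 1010.0, 1985.0])

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

# A common LOD convention: 3 * (standard deviation of residuals) / slope
sd_resid = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))
lod = 3 * sd_resid / slope
print(f"r^2 = {r2:.4f}, LOD ~ {lod:.3f} ug/L")
```

An unknown's concentration is then read off as `(measured_area - intercept) / slope`.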
Dependence of magnetic permeability on residual stresses in alloyed steels
NASA Astrophysics Data System (ADS)
Hristoforou, E.; Ktena, A.; Vourna, P.; Argiris, K.
2018-04-01
A method for monitoring the residual stress distribution in steels has been developed based on non-destructive surface magnetic permeability measurements. In order to investigate the potential utilization of the magnetic method for evaluating residual stresses, the magnetic calibration curves of various grades of ferromagnetic alloyed steels (AISI 4140, TRIP, and Duplex) were examined. The X-ray diffraction technique was used to determine surface residual stress values. The overall measurement results have shown that the residual stress determined by the magnetic method was in good agreement with the diffraction results. Further experimental investigations are required to validate these preliminary results and to verify the presence of a unique normalized magnetic stress calibration curve.
Antenna Calibration and Measurement Equipment
NASA Technical Reports Server (NTRS)
Rochblatt, David J.; Cortes, Manuel Vazquez
2012-01-01
A document describes the Antenna Calibration & Measurement Equipment (ACME) system that will provide the Deep Space Network (DSN) with instrumentation enabling a trained RF engineer at each complex to perform antenna calibration measurements and to generate antenna calibration data. These data include continuous-scan auto-bore-based data acquisition with all-sky data gathering in support of fourth-order pointing-model generation requirements. Other data include antenna subreflector focus, system noise temperature and tipping curves, antenna efficiency, system linearity reports, and instrument calibration. The ACME system design is based on the on-the-fly (OTF) mapping technique and architecture. ACME has contributed to improving the RF performance of the DSN by approximately a factor of two. It improved the pointing performance of the DSN antennas and the productivity of its personnel and calibration engineers.
Matching between the light spots and lenslets of an artificial compound eye system
NASA Astrophysics Data System (ADS)
He, Jianzheng; Jian, Huijie; Zhu, Qitao; Ma, Mengchao; Wang, Keyi
2017-10-01
As the visual organ of many arthropods, the compound eye has attracted much attention for its wide field of view, multi-channel imaging ability, and high agility. Extending this concept, a new kind of artificial compound eye device has been developed. It comprises 141 lenslets that share one image sensor and are distributed evenly on a curved surface, which makes it difficult to determine which lenslet a given light spot belongs to during the calibration and positioning processes. A matching algorithm is therefore proposed based on the device structure and the principles of calibration and positioning. Region partition of the lenslet array is performed first: each lenslet and its adjacent lenslets are defined as a cluster eye and constructed into an index table. In the calibration process, a polar coordinate system is established, and matching is accomplished by comparing the rotary-table position in the polar coordinate system with the central light-spot angle in the image. In the positioning process, each spot is first paired to a candidate region according to the spot distribution, and the final result is determined by the dispersion of the distances from the target point to the incident rays during region-traversal matching. Experimental results show that the presented algorithms provide a feasible and efficient way to match spots to lenslets and meet the needs of practical applications of the compound eye system.
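The positioning criterion above, the dispersion of point-to-incident-ray distances, rests on a point-to-ray distance computation that might be sketched as follows (a generic geometric helper, not the authors' code):

```python
import numpy as np

def point_ray_distance(p, origin, direction):
    """Perpendicular distance from point p to the ray origin + t * direction."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)                      # unit direction vector
    v = np.asarray(p, dtype=float) - np.asarray(origin, dtype=float)
    return np.linalg.norm(v - np.dot(v, d) * d)    # reject component along d

# Point (1, 0, 0) is one unit away from a ray along the z-axis
print(point_ray_distance([1, 0, 0], [0, 0, 0], [0, 0, 1]))  # -> 1.0
```

A candidate region whose rays all pass close to one common target point (low spread of these distances) would then win the match.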
NASA Astrophysics Data System (ADS)
Burk, D. R.; Mackey, K. G.; Hartse, H. E.
2016-12-01
We have developed a simplified field calibration method for use in seismic networks that still employ the classical electro-mechanical seismometer. Smaller networks may not always have the financial capability to purchase and operate modern, state-of-the-art equipment, and instead generally operate a modern, low-cost digitizer paired with an existing electro-mechanical seismometer. These systems are typically poorly calibrated. The station calibration is difficult to estimate because coil loading, digitizer input impedance, and amplifier gain differences vary by station and digitizer model. It is therefore necessary to calibrate the station channel as a complete system, taking into account all components from instrument to amplifier to digitizer. Routine calibrations at the smaller networks are not always consistent, because existing calibration techniques require either specialized equipment or significant technical expertise. To improve station data quality at the small network, we developed a calibration method that utilizes open-source software and a commonly available laser position sensor. Using a signal generator and a small excitation coil, we force the mass of the instrument to oscillate at various frequencies across its operating range. We then compare the channel voltage output to the laser-measured mass displacement to determine the instrument voltage sensitivity at each frequency point. Using the standard equations of forced motion, a representation of the calibration curve as a function of voltage per unit of ground velocity is calculated. A computer algorithm optimizes the curve and then translates the instrument response into Seismic Analysis Code (SAC) poles-and-zeros format. Results have been demonstrated to fall within a few percent of a standard laboratory calibration.
This method is an effective and affordable option for networks that employ electro-mechanical seismometers, and it is currently being deployed in regional networks throughout Russia and in Central Asia.
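For sinusoidal forcing, the conversion from laser-measured displacement amplitude to velocity sensitivity follows from peak velocity = 2πf × peak displacement; a minimal sketch with hypothetical numbers (not values from the paper):

```python
import math

def velocity_sensitivity(v_peak, x_peak, freq_hz):
    """Channel sensitivity in V/(m/s): for sinusoidal motion of displacement
    amplitude x_peak at freq_hz, the peak velocity is 2*pi*freq_hz*x_peak."""
    return v_peak / (2 * math.pi * freq_hz * x_peak)

# Hypothetical point: 0.5 V channel output for 10 um mass displacement at 1 Hz
s = velocity_sensitivity(0.5, 10e-6, 1.0)
print(f"{s:.1f} V/(m/s)")  # -> 7957.7 V/(m/s)
```

Repeating this at each drive frequency yields the discrete response curve that the fitting algorithm then converts to poles and zeros.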
González, Oskar; van Vliet, Michael; Damen, Carola W N; van der Kloet, Frans M; Vreeken, Rob J; Hankemeier, Thomas
2015-06-16
The possible presence of a matrix effect is one of the main concerns in liquid chromatography-mass spectrometry (LC-MS)-driven bioanalysis because of its impact on the reliability of the quantitative results. Here we propose an approach to correct for the matrix effect in LC-MS with electrospray ionization using postcolumn infusion of eight internal standards (PCI-IS). We applied this approach to a generic ultrahigh-performance liquid chromatography-time-of-flight (UHPLC-TOF) platform developed for small-molecule profiling with a main focus on drugs. Different urine samples were spiked with 19 drugs with different physicochemical properties and analyzed in order to study the matrix effect (in absolute and relative terms). Furthermore, calibration curves for each analyte were constructed, and quality control samples at different concentration levels were analyzed to check the applicability of this approach in quantitative analysis. The matrix-effect profiles of the PCI-ISs differed, confirming that the matrix effect is compound-dependent and that the most suitable PCI-IS must therefore be chosen for each analyte. Chromatograms were reconstructed using analyte and PCI-IS responses and used to develop an optimized method that compensates for variation in ionization efficiency. The approach presented here dramatically improved the results in terms of matrix effect. Furthermore, calibration curves of higher quality are obtained, the dynamic range is enhanced, and the accuracy and precision of QC samples are increased. The use of PCI-ISs is a very promising step toward an analytical platform free of matrix effect, which can make LC-MS analysis even more successful, adding higher reliability in quantification to its intrinsic high sensitivity and selectivity.
Use of Multiple Competitors for Quantification of Human Immunodeficiency Virus Type 1 RNA in Plasma
Vener, Tanya; Nygren, Malin; Andersson, AnnaLena; Uhlén, Mathias; Albert, Jan; Lundeberg, Joakim
1998-01-01
Quantification of human immunodeficiency virus type 1 (HIV-1) RNA in plasma has rapidly become an important tool in basic HIV research and in the clinical care of infected individuals. Here, a quantitative HIV assay based on competitive reverse transcription-PCR with multiple competitors was developed. Four RNA competitors containing the same PCR primer-binding sequences as the viral HIV-1 RNA target were constructed. One of the PCR primers was fluorescently labeled, which facilitated discrimination between the viral RNA and competitor amplicons by fragment analysis with conventional automated sequencers. The coamplification of known amounts of the RNA competitors provided the means to establish internal calibration curves for the individual reactions, thereby excluding tube-to-tube variation. Calibration curves were created from the peak areas, which were proportional to the starting amount of each competitor. The fluorescence detection format was expanded to provide a dynamic range of more than 5 log units. This quantitative assay allowed reproducible analysis of samples containing as few as 40 viral copies of HIV-1 RNA per reaction. The within- and between-run coefficients of variation were <24% (range, 10 to 24) and <36% (range, 27 to 36), respectively. The high reproducibility (standard deviation, <0.13 log) of the overall procedure for quantification of HIV-1 RNA in plasma, including sample preparation, amplification, and detection variations, allowed reliable detection of a 0.5-log change in RNA viral load. The assay could be a useful tool for monitoring HIV-1 disease progression and antiviral treatment and can easily be adapted to the quantification of other pathogens. PMID:9650926
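The internal calibration idea, fitting a line through the competitor peak areas within a single reaction and inverting it for the viral peak, might look like this in outline (all copy numbers and peak areas below are invented for illustration):

```python
import numpy as np

# Hypothetical: known competitor inputs (copies/reaction) and their peak areas
comp_copies = np.array([1e2, 1e3, 1e4, 1e5])
comp_area = np.array([180.0, 1750.0, 18300.0, 176000.0])

# Peak area is proportional to input copies, so fit log(area) vs log(copies)
# using the competitors coamplified in the same tube
slope, intercept = np.polyfit(np.log10(comp_copies), np.log10(comp_area), 1)

viral_area = 5400.0  # measured HIV-1 amplicon peak area (hypothetical)
viral_copies = 10 ** ((np.log10(viral_area) - intercept) / slope)
print(f"estimated viral load: {viral_copies:.0f} copies/reaction")
```

Because the curve is rebuilt inside every tube, amplification-efficiency differences between tubes cancel out of the estimate.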
Sukwattanasinit, Tasamaporn; Burana-Osot, Jankana; Sotanaphun, Uthai
2007-11-01
A simple, rapid, and cost-saving method for the determination of total anthocyanins in roselle has been developed. The method is based on pH-differential spectrophotometry. The calibration curve of the major anthocyanin in roselle, delphinidin 3-sambubioside (Dp-3-sam), was constructed using methyl orange and their correlation factor. The reliability of this method was comparable to that of the direct method using standard Dp-3-sam and of the HPLC method. Quality characteristics of roselle produced in Thailand were also reported. Its physical quality met the required specifications. The overall chemical quality, surveyed here for the first time, was found to be an important parameter corresponding to the commercial grading of roselle. Total contents of anthocyanins and phenolics were proportional to the antiradical capacity.
NASA Astrophysics Data System (ADS)
Pratsenka, S. V.; Voropai, E. S.; Belkin, V. G.
2018-01-01
Rapid measurement of the moisture content of dehydrated residues is a critical problem, the solution of which will increase the efficiency of treatment facilities and optimize the process of applying flocculants. The ability to determine the moisture content of dehydrated residues using a meter operating on the IR reflectance principle was confirmed experimentally. The most suitable interference filters were selected based on an analysis of the obtained diffuse reflectance spectrum of the dehydrated residue in the range 1.0-2.7 μm. Calibration curves were constructed and compared for each filter set. A measuring filter with a transmittance maximum at 1.19 μm and a reference filter with a maximum at 1.3 μm gave the best agreement with the laboratory measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hyun Jin; Han, Seungbong; Kim, Young Seok, E-mail: ysk@amc.seoul.kr
Purpose: A nomogram is a predictive statistical model that generates the continuous probability of a clinical event such as death or recurrence. The aim of the study was to construct a nomogram to predict 5-year overall survival after postoperative radiation therapy for stage IB to IIA cervical cancer. Methods and Materials: The clinical data from 1702 patients with early-stage cervical cancer, treated at 10 participating hospitals from 1990 to 2011, were reviewed to develop a prediction nomogram based on the Cox proportional hazards model. Demographic, clinical, and pathologic variables were included and analyzed to formulate the nomogram. The discrimination and calibration power of the model was measured using a concordance index (c-index) and calibration curve. Results: The median follow-up period for surviving patients was 75.6 months, and the 5-year overall survival probability was 87.1%. The final model was constructed using the following variables: age, number of positive pelvic lymph nodes, parametrial invasion, lymphovascular invasion, and the use of concurrent chemotherapy. The nomogram predicted the 5-year overall survival with a c-index of 0.69, which was superior to the predictive power of the International Federation of Gynecology and Obstetrics (FIGO) staging system (c-index of 0.54). Conclusions: A survival-predicting nomogram that offers an accurate level of prediction and discrimination was developed based on a large multi-center study. The model may be more useful than the FIGO staging system for counseling individual patients regarding prognosis.
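Harrell's concordance index reported above can be sketched for a toy survival dataset (values invented; production implementations also handle ties in time and further censoring subtleties):

```python
def c_index(times, events, risks):
    """Harrell's concordance index: the fraction of usable pairs in which the
    subject with the shorter observed survival has the higher predicted risk."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is usable if subject i died before subject j was observed
            if events[i] == 1 and times[i] < times[j]:
                usable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5   # tied risk counts half
    return concordant / usable

times = [2, 4, 5, 7]            # follow-up (years)
events = [1, 1, 0, 1]           # 1 = death observed, 0 = censored
risks = [0.9, 0.3, 0.7, 0.4]    # model-predicted risk scores
print(c_index(times, events, risks))  # -> 0.6
```

A c-index of 0.5 is chance-level discrimination, which is why the nomogram's 0.69 versus FIGO's 0.54 is a meaningful gap.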
Al Alfy, Ibrahim Mohammad
2013-12-01
A set of ten radioactive well-logging calibration pads was constructed at one of the premises of the Nuclear Materials Authority (NMA), Egypt, in 6th of October City. These pads were built for calibrating geophysical well-logging instruments. This calibration facility was established with the technical assistance and practical support of the International Atomic Energy Agency (IAEA) and (ARCN). There are five uranium pads with three different uranium concentrations and borehole diameters. The other five calibration pads include one each of the following: blank, potassium, thorium, multi-layer, and mixed. More than 22 t of various selected Egyptian raw materials were gathered for pad construction from different locations in Egypt. The pad site and the surrounding area were spectrometrically surveyed before excavation for the construction of the pad-basin floor; they yielded negligible radiation values, very near the detected general background. After pad construction, spectrometric measurements were carried out again at the same locations with the exposed boreholes of the pads closed. No radioactivity leakage was noticed from the pads. Meanwhile, dose-rate values, measured while the boreholes of the pads were open, were found to range from 0.12 to 1.26 mSv/y. These values depend mainly upon the type and concentration of the pads as well as their borehole diameters. The results of the radiospectrometric survey illustrate that the top layers of the pads were constructed according to international standards.
On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction.
Crop, F; Van Rompaye, B; Paelinck, L; Vakaet, L; Thierens, H; De Wagter, C
2008-07-21
The purpose of this study was both to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes, an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, with the dose as a function of OD (inverse regression) or OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because the heteroscedasticity of the data is not taken into account. This can lead to erroneous results originating from the calibration process itself and thus to lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method for creating calibration curves. We found that the OLS inverse regression method can lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% over a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to higher accuracy for film dosimetry.
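The heteroscedasticity argument can be illustrated with a small weighted-least-squares sketch; the dose/OD points and the noise model below are assumptions for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical OD-vs-dose calibration points with dose-dependent scatter
dose = np.array([0.0, 50.0, 100.0, 200.0, 300.0, 400.0])   # cGy
od = np.array([0.05, 0.21, 0.38, 0.70, 0.99, 1.27])
sd = 0.005 + 0.01 * od        # assumed noise model: scatter grows with OD

# WLS fit of OD as a function of dose (the inverse-prediction direction);
# numpy's polyfit expects weights of 1/sigma for Gaussian uncertainties
slope, intercept = np.polyfit(dose, od, 1, w=1.0 / sd)

# Invert the fitted curve to predict the dose for a measured OD
dose_pred = (0.55 - intercept) / slope
print(f"predicted dose for OD = 0.55: {dose_pred:.1f} cGy")
```

Down-weighting the noisier high-OD points is what removes the bias that an unweighted (OLS) fit would carry into the inverted prediction.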
Wei, Qiuning; Wei, Yuan; Liu, Fangfang; Ding, Yalei
2015-10-01
To investigate the method for uncertainty evaluation of the determination of tin and its compounds in workplace air by flame atomic absorption spectrometry, the national occupational health standards GBZ/T 160.28-2004 and JJF 1059-1999 were used to build a mathematical model of the determination and to calculate the components of uncertainty. In the determination of tin and its compounds in workplace air using flame atomic absorption spectrometry, the uncertainty for the concentration of the standard solution, the atomic absorption spectrophotometer, sample digestion, parallel determination, least-squares fitting of the calibration curve, and sample collection was 0.436%, 0.13%, 1.07%, 1.65%, 3.05%, and 2.89%, respectively. The combined uncertainty was 9.3%. The concentration of tin in the test sample was 0.132 mg/m³, and the expanded uncertainty for the measurement was 0.012 mg/m³ (k=2). The dominant uncertainty in the determination of tin and its compounds in workplace air comes from least-squares fitting of the calibration curve and from sample collection. Quality control should be improved in the processes of calibration-curve fitting and sample collection.
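The reported components reconcile with the quoted 9.3% if that figure is read as the expanded (k=2) relative uncertainty obtained by root-sum-square combination, the standard GUM-style calculation:

```python
import math

# Relative standard uncertainty components (%) quoted in the abstract
components = {
    "standard solution": 0.436,
    "spectrophotometer": 0.13,
    "sample digestion": 1.07,
    "parallel determination": 1.65,
    "calibration-curve fit": 3.05,
    "sample collection": 2.89,
}

u_c = math.sqrt(sum(u ** 2 for u in components.values()))  # root-sum-square
U = 2 * u_c   # expanded uncertainty with coverage factor k = 2
print(f"combined: {u_c:.2f}%, expanded (k=2): {U:.1f}%")
```

The expanded value (~9.3%) also matches the absolute figures quoted: 0.012/0.132 ≈ 9.1%.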
Spectro-photometric determinations of Mn, Fe and Cu in aluminum master alloys
NASA Astrophysics Data System (ADS)
Rehan; Naveed, A.; Shan, A.; Afzal, M.; Saleem, J.; Noshad, M. A.
2016-08-01
Highly reliable, fast, and cost-effective spectrophotometric methods have been developed for the determination of Mn, Fe, and Cu in aluminum master alloys, based on calibration curves prepared from laboratory standards. The calibration curves are designed to give maximum sensitivity and minimum instrumental error (Mn 1 mg/100 ml-2 mg/100 ml, Fe 0.01 mg/100 ml-0.2 mg/100 ml, and Cu 2 mg/100 ml-10 mg/100 ml). The developed spectrophotometric methods produce accurate results when analyzing Mn, Fe, and Cu in certified reference materials. In particular, these methods are suitable for all types of Al-Mn, Al-Fe, and Al-Cu master alloys (5%, 10%, 50%, etc.). Moreover, the sampling practices suggested herein include a reasonable amount of analytical sample, which truly represents the whole lot of a particular master alloy. A successive-dilution technique was used to bring samples into the calibration-curve range. Furthermore, the worked-out methods were also found suitable for the analysis of these elements in ordinary aluminum alloys. However, it was observed that Cu showed considerable interference with Fe; the latter may not be accurately measured in the presence of Cu greater than 0.01%.
Realization of the Gallium Triple Point at NMIJ/AIST
NASA Astrophysics Data System (ADS)
Nakano, T.; Tamura, O.; Sakurai, H.
2008-02-01
The triple point of gallium has been realized by a calorimetric method using capsule-type standard platinum resistance thermometers (CSPRTs) and a small glass cell containing about 97 mmol (6.8 g) of gallium with a nominal purity of 99.99999%. The melting curve shows a very flat and relatively linear dependence on 1/F in the region from 1/F = 1 to 1/F = 20, with a narrow melting-curve width within 0.1 mK. Also, a large gallium triple-point cell was fabricated for the calibration of client-owned CSPRTs. The gallium triple-point cell consists of a PTFE crucible and a PTFE cap with a re-entrant well and a small vent. The PTFE cell contains 780 g of gallium from the same source as used for the small glass cell. The PTFE cell is completely covered by a stainless-steel jacket with a valve to enable evacuation of the cell. The melting curve of the large cell shows a flat plateau that remains within 0.03 mK over 10 days and that is reproducible within 0.05 mK over 8 months. The calibrated value of a CSPRT obtained using the large cell agrees with that obtained using the small glass cell within the uncertainties of the calibrations.
The purpose of this SOP is to describe procedures for preparing calibration curve solutions used for gas chromatography/mass spectrometry (GC/MS) analysis of chlorpyrifos, diazinon, malathion, DDT, DDE, DDD, a-chlordane, and g-chlordane in dust, soil, air, and handwipe sample ext...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, J; Labarbe, R; Sterpin, E
2016-06-15
Purpose: To understand the extent to which prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added 5% distal Gaussian noise to each beamlet in order to introduce discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s and θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0 mm and an underrange of 0.6 mm on average. Compared to the situations where s, θ, and u were ignored, these values were larger: 2.1 mm and 4.3 mm. In order to quantify the need for setup-error corrections, we also performed computations in which u was corrected for, but s and θ were not. This yielded 3.2 mm and 3.2 mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity of correcting for setup errors and errors in the calibration curve. The simplicity and speed of our method make it a good candidate for implementation as a tool for in-room adaptive therapy. This work also demonstrates that prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Q; Herrick, A; Hoke, S
Purpose: A new readout technology based on pulsed optically stimulated luminescence is introduced (microSTARii, Landauer, Inc., Glenwood, IL 60425). This investigation searches for approaches that maximize dosimetry accuracy in clinical applications. Methods: The sensitivity of each optically stimulated luminescence dosimeter (OSLD) was initially characterized by exposing it to a given radiation beam. After readout, the luminescence signal stored in the OSLD was erased by exposing its sensing area to a 21 W white LED light for 24 hours. A set of OSLDs with consistent sensitivities was selected to calibrate the dose reader. Higher-order nonlinear curves were also derived from the calibration readings. OSLDs with cumulative doses below 15 Gy were reused. Before in-vivo dosimetry, the OSLD luminescence signal was erased with the white LED light. Results: For a set of 68 manufacturer-screened OSLDs, the measured sensitivities vary over a range of 17.3%. A subset of the OSLDs with sensitivities within ±1% was selected for the reader calibration. Three OSLDs in a group were exposed to a given radiation dose. Nine groups were exposed to radiation doses ranging from 0 to 13 Gy. Additional verifications demonstrated that the reader uncertainty is about 3%. With an external calibration function derived by fitting the OSLD readings to a 3rd-order polynomial, the dosimetry uncertainty dropped to 0.5%. The dose-luminescence response curves of individual OSLDs were characterized. All curves converge within 1% after the sensitivity correction. With all uncertainties considered, the systematic uncertainty is about 2%. Additional tests emulating in-vivo dosimetry by exposing the OSLDs to different radiation sources confirmed this. Conclusion: The sensitivity of each individual OSLD should be characterized initially. A 3rd-order polynomial function is a more accurate representation of the dose-luminescence response curve. The dosimetry uncertainty specified by the manufacturer is 4%; following the proposed approach, it can be controlled to 2%.
A comparison of the Injury Severity Score and the Trauma Mortality Prediction Model.
Cook, Alan; Weddle, Jo; Baker, Susan; Hosmer, David; Glance, Laurent; Friedman, Lee; Osler, Turner
2014-01-01
Performance benchmarking requires accurate measurement of injury severity. Despite its shortcomings, the Injury Severity Score (ISS) remains the industry standard 40 years after its creation. A newer severity measure, the Trauma Mortality Prediction Model (TMPM), uses either the Abbreviated Injury Scale (AIS) or the International Classification of Diseases-9th Rev. (ICD-9) lexicon and may better quantify injury severity than ISS. We compared the performance of TMPM with ISS and other measures of injury severity in a single cohort of patients. We included 337,359 patient records with injuries reliably described in both the AIS and the ICD-9 lexicons from the National Trauma Data Bank. Five injury severity measures (ISS, maximum AIS score, New Injury Severity Score [NISS], ICD-9-Based Injury Severity Score [ICISS], and TMPM) were computed using either the AIS or ICD-9 codes. These measures were compared for discrimination (area under the receiver operating characteristic curve), proximity to a model that perfectly predicts the outcome (Akaike information criterion), and model calibration curves. TMPM demonstrated a superior receiver operating characteristic curve, Akaike information criterion, and calibration using either the AIS or ICD-9 lexicon. Calibration plots demonstrate the monotonic character of the TMPM models, contrasted with the nonmonotonic features of the other prediction models. Severity measures were more accurate with the AIS lexicon than with ICD-9. NISS proved superior to ISS in either lexicon. Since NISS is simpler to compute, it should replace ISS when a quick estimate of injury severity is required for AIS-coded injuries. Calibration curves suggest that the nonmonotonic nature of ISS may undermine its performance. TMPM demonstrated superior overall mortality prediction compared with all other models, including ISS, whether the AIS or ICD-9 lexicon was used.
Because TMPM provides an absolute probability of death, it may allow clinicians to communicate more precisely with one another and with patients and families. Diagnostic study, level I; prognostic study, level II.
McJimpsey, Erica L
2016-02-25
The prostate-specific antigen (PSA) assays currently employed for the detection of prostate cancer (PCa) lack the specificity needed to differentiate PCa from benign prostatic hyperplasia and have high false positive rates. The PSA calibrants used to create calibration curves in these assays are typically purified from seminal plasma and contain many molecular forms (intact PSA and cleaved subforms). The purpose of this study was to determine if the composition of the PSA molecular forms found in these PSA standards contribute to the lack of PSA test reliability. To this end, seminal plasma purified PSA standards from different commercial sources were investigated by western blot (WB) and in multiple research grade PSA ELISAs. The WB results revealed that all of the PSA standards contained different mass concentrations of intact and cleaved molecular forms. Increased mass concentrations of intact PSA yielded higher immunoassay absorbance values, even between lots from the same manufacturer. Standardization of seminal plasma derived PSA calibrant molecular form mass concentrations and purification methods will assist in closing the gaps in PCa testing measurements that require the use of PSA values, such as the % free PSA and Prostate Health Index by increasing the accuracy of the calibration curves.
A Robust Bayesian Random Effects Model for Nonlinear Calibration Problems
Fong, Y.; Wakefield, J.; De Rosa, S.; Frahm, N.
2013-01-01
Summary In the context of a bioassay or an immunoassay, calibration means fitting a curve, usually nonlinear, through the observations collected on a set of samples containing known concentrations of a target substance, and then using the fitted curve and observations collected on samples of interest to predict the concentrations of the target substance in these samples. Recent technological advances have greatly improved our ability to quantify minute amounts of substance from a tiny volume of biological sample. This has in turn led to a need to improve statistical methods for calibration. In this paper, we focus on developing calibration methods robust to dependent outliers. We introduce a novel normal mixture model with dependent error terms to model the experimental noise. In addition, we propose a re-parameterization of the five parameter logistic nonlinear regression model that allows us to better incorporate prior information. We examine the performance of our methods with simulation studies and show that they lead to a substantial increase in performance measured in terms of mean squared error of estimation and a measure of the average prediction accuracy. A real data example from the HIV Vaccine Trials Network Laboratory is used to illustrate the methods. PMID:22551415
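The five-parameter logistic model that the authors re-parameterize has the standard form below (shown as a plain sketch; the paper's actual re-parameterization is not reproduced here):

```python
def fpl(x, a, b, c, d, g):
    """Five-parameter logistic: d + (a - d) / (1 + (x / c)**b)**g.
    a ~ zero-concentration asymptote, d ~ infinite-concentration asymptote,
    c ~ mid-range concentration, b ~ slope, g ~ asymmetry (g = 1 gives the
    symmetric four-parameter logistic)."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

# With g = 1, the response at x = c lies halfway between the two asymptotes
y = fpl(10.0, a=0.1, b=1.5, c=10.0, d=2.1, g=1.0)
print(y)  # halfway between a = 0.1 and d = 2.1
```

Calibration inverts this curve: observed responses from unknown samples are mapped back through the fitted `fpl` to concentration estimates, which is why outliers in the calibration data propagate directly into predicted concentrations.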
A Comparison of Radiometric Calibration Techniques for Lunar Impact Flashes
NASA Technical Reports Server (NTRS)
Suggs, R.
2016-01-01
Video observations of lunar impact flashes have been made by a number of researchers since the late 1990s, and the problem of determining the impact energies has been approached in different ways (Bellot Rubio, et al., 2000 [1], Bouley, et al., 2012 [2], Suggs, et al. 2014 [3], Rembold and Ryan 2015 [4], Ortiz, et al. 2015 [5]). The wide spectral response of the unfiltered video cameras in use for all published measurements necessitates color correction for the standard filter magnitudes available for the comparison stars. An estimate of the color of the impact flash is also needed to correct it to the chosen passband. Magnitudes corrected to standard filters are then used to determine the luminous energy in the filter passband according to the stellar atmosphere calibrations of Bessell et al., 1998 [6]. Figure 1 illustrates the problem. The camera passband is the wide black curve, and the blue, green, red, and magenta curves show the bandpasses of the Johnson-Cousins B, V, R, and I filters for which we have calibration star magnitudes. The blackbody curve of an impact flash of temperature 2800 K (Nemtchinov, et al., 1998 [7]) is the dashed line. This paper compares the various photometric calibration techniques and how they address the color corrections necessary for the calculation of luminous energy (radiometry) of impact flashes. This issue has significant implications for the determination of luminous efficiency, predictions of impact crater sizes for observed flashes, and the flux of meteoroids in the tens-of-grams to kilogram size range.
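The color correction hinges on how a ~2800 K blackbody distributes its flux across the filter passbands. A small sketch of that band-flux comparison, assuming idealized flat (top-hat) passbands rather than the real Johnson-Cousins response curves; the band edges below are rough placeholders:

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann (SI)


def planck(lam_m, temp):
    """Planck spectral radiance B_lambda(T) in W sr^-1 m^-3."""
    return (2.0 * H * C ** 2 / lam_m ** 5) / (math.exp(H * C / (lam_m * KB * temp)) - 1.0)


def band_flux(lo_nm, hi_nm, temp, n=200):
    """Trapezoid-rule integral of the Planck curve over a flat passband."""
    step_nm = (hi_nm - lo_nm) / n
    vals = [planck((lo_nm + i * step_nm) * 1e-9, temp) for i in range(n + 1)]
    return step_nm * 1e-9 * (sum(vals) - 0.5 * (vals[0] + vals[-1]))


# a 2800 K flash is several times brighter in an R-like band than a B-like band,
# which is why unfiltered camera magnitudes need a color correction
r_over_b = band_flux(570.0, 720.0, 2800.0) / band_flux(390.0, 490.0, 2800.0)
```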
Contaminant concentration in environmental samples using LIBS and CF-LIBS
NASA Astrophysics Data System (ADS)
Pandhija, S.; Rai, N. K.; Rai, A. K.; Thakur, S. N.
2010-01-01
The present paper deals with the detection and quantification of toxic heavy metals like Cd, Co, Pb, Zn, Cr, etc. in environmental samples by using the technique of laser-induced breakdown spectroscopy (LIBS) and calibration-free LIBS (CF-LIBS). A MATLAB™ program has been developed based on the CF-LIBS algorithm given by earlier workers, and concentrations of pollutants present in industrial area soil have been determined. LIBS spectra of a number of certified reference soil samples with varying concentrations of toxic elements (Cd, Zn) have been recorded to obtain calibration curves. The concentrations of Cd and Zn in soil samples from the Jajmau area, Kanpur (India) have been determined by using these calibration curves and also by the CF-LIBS approach. Our results clearly demonstrate that the combination of LIBS and CF-LIBS is very useful for the study of pollutants in the environment. Some of the results have also been found to be in good agreement with those of ICP-OES.
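The calibration-curve step can be sketched as an ordinary least-squares line through line-intensity-versus-concentration points, inverted to predict an unknown. The data points below are invented for illustration and are not from the paper:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return my - slope * mx, slope


def predict_conc(signal, intercept, slope):
    """Invert the calibration line: concentration for an observed intensity."""
    return (signal - intercept) / slope


# hypothetical Cd calibration points: concentration (ppm) vs. normalized intensity
conc = [0.0, 5.0, 10.0, 20.0, 40.0]
sig = [0.02, 0.51, 1.01, 2.03, 3.98]
intercept, slope = fit_line(conc, sig)
```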
Caspar Creek ecology project: annual report, 1967-68
John W. DeWitt
1968-01-01
Two summers of calibration of the north and south fork Caspar Creek stream ecology study areas were completed in 1967. Clearing for logging road construction in the south fork watershed began in May, 1967. Bulldozer operations first reached the stream itself in July. Some calibration determinations were made during the period of road construction and stream clearance...
Calibrating Images from the MINERVA Cameras
NASA Astrophysics Data System (ADS)
Mercedes Colón, Ana
2016-01-01
The MINiature Exoplanet Radial Velocity Array (MINERVA) consists of an array of robotic telescopes located on Mount Hopkins, Arizona with the purpose of performing transit photometry and spectroscopy to find Earth-like planets around Sun-like stars. In order to make photometric observations, it is necessary to perform calibrations on the CCD cameras of the telescopes to take into account possible instrument error on the data. In this project, we developed a pipeline that takes optical images, calibrates them using sky flats, darks, and biases to generate a transit light curve.
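The bias/dark/flat calibration such a pipeline performs reduces, in its simplest form, to subtracting a dark frame and dividing by a normalized flat. A toy sketch on plain 2-D lists (bias subtraction is folded into the dark here for brevity; a real pipeline such as MINERVA's handles it separately and works on FITS arrays):

```python
def calibrate_frame(raw, dark, flat):
    """Basic CCD reduction: (raw - dark) / normalized flat, element-wise.
    Frames are equal-shape 2-D lists; the flat is normalized by its mean."""
    flat_mean = sum(sum(row) for row in flat) / (len(flat) * len(flat[0]))
    return [[(r - d) / (f / flat_mean) for r, d, f in zip(raw_r, dark_r, flat_r)]
            for raw_r, dark_r, flat_r in zip(raw, dark, flat)]
```

Aperture photometry on a sequence of frames calibrated this way yields the transit light curve.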
NASA Astrophysics Data System (ADS)
Rahn, Helene; Alexiou, Christoph; Trahms, Lutz; Odenbach, Stefan
2014-06-01
X-ray computed tomography is nowadays used for a wide range of applications in medicine, science and technology. X-ray microcomputed tomography (XμCT) follows the same principles used for conventional medical CT scanners, but improves the spatial resolution to a few micrometers. We present an example of an application of X-ray microtomography, a study of the 3-dimensional biodistribution, along with the quantification of nanoparticle content, in tumoral tissue after minimally invasive cancer therapy. One of these minimally invasive cancer treatments is magnetic drug targeting, in which magnetic nanoparticles are used as controllable drug carriers. The quantification is based on a calibration of the XμCT equipment. The developed calibration procedure of the XμCT equipment is based on a phantom system which allows the discrimination between the various gray values of the data set. These phantoms consist of a biological tissue substitute and magnetic nanoparticles. The phantoms have been studied with XμCT and have been examined magnetically. The obtained gray values and nanoparticle concentrations lead to a calibration curve. This curve can be applied to tomographic data sets. Accordingly, this calibration enables a voxel-wise assignment of gray values in the digital tomographic data set to nanoparticle content. Thus, the calibration procedure enables a 3-dimensional study of nanoparticle distribution as well as concentration.
Bayesian inference of Calibration curves: application to archaeomagnetism
NASA Astrophysics Data System (ADS)
Lanos, P.
2003-04-01
The range of errors that occur at different stages of the archaeomagnetic calibration process are modelled using a Bayesian hierarchical model. The archaeomagnetic data obtained from archaeological structures such as hearths, kilns or sets of bricks and tiles, exhibit considerable experimental errors and are typically more or less well dated by archaeological context, history or chronometric methods (14C, TL, dendrochronology, etc.). They can also be associated with stratigraphic observations which provide prior relative chronological information. The modelling we describe in this paper allows all these observations, on materials from a given period, to be linked together, and the use of penalized maximum likelihood for smoothing univariate, spherical or three-dimensional time series data allows representation of the secular variation of the geomagnetic field over time. The smooth curve we obtain (which takes the form of a penalized natural cubic spline) provides an adaptation to the effects of variability in the density of reference points over time. Since our model takes account of all the known errors in the archaeomagnetic calibration process, we are able to obtain a functional highest-posterior-density envelope on the new curve. With this new posterior estimate of the curve available to us, the Bayesian statistical framework then allows us to estimate the calendar dates of undated archaeological features (such as kilns) based on one, two or three geomagnetic parameters (inclination, declination and/or intensity). Date estimates are presented in much the same way as those that arise from radiocarbon dating. In order to illustrate the model and inference methods used, we will present results based on German archaeomagnetic data recently published by a German team.
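A penalized curve of this kind balances fidelity to the reference points against curvature. A discrete analogue, a Whittaker-style smoother rather than the paper's penalized spherical spline, shows the idea; `lam` plays the role of the smoothing penalty:

```python
def smooth_series(y, lam):
    """Whittaker-style penalized smoother, a discrete analogue of penalized
    cubic-spline smoothing: minimize
    sum_i (y_i - z_i)^2 + lam * sum_i (z_{i-1} - 2*z_i + z_{i+1})^2
    by solving (I + lam * D'D) z = y densely; fine for short series."""
    n = len(y)
    A = [[float(i == j) for j in range(n)] for i in range(n)]
    for i in range(n - 2):              # accumulate lam * D'D row by row
        row = [0.0] * n
        row[i], row[i + 1], row[i + 2] = 1.0, -2.0, 1.0
        for j in range(n):
            if row[j]:
                for k in range(n):
                    if row[k]:
                        A[j][k] += lam * row[j] * row[k]
    b = [float(v) for v in y]
    for col in range(n):                # Gaussian elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    z = [0.0] * n
    for i in range(n - 1, -1, -1):      # back substitution
        z[i] = (b[i] - sum(A[i][k] * z[k] for k in range(i + 1, n))) / A[i][i]
    return z
```

A straight-line series has zero second differences, so the smoother returns it unchanged; noisy series are pulled toward a smooth trend as `lam` grows.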
Fach, S; Sitzenfrei, R; Rauch, W
2009-01-01
It is state of the art to evaluate and optimise sewer systems with urban drainage models. Since spill flow data is essential in the calibration process of conceptual models, it is important to enhance the quality of such data. A widespread approach is to calculate the spill flow volume by using standard weir equations together with measured water levels. However, these equations are only applicable to combined sewer overflow (CSO) structures whose weir constructions correspond with the standard weir layout. The objective of this work is to outline an alternative approach to obtaining spill flow discharge data based on measurements with a sonic depth finder. The idea is to determine the relation between water level and rate of spill flow by running a detailed 3D computational fluid dynamics (CFD) model. Two real-world CSO structures were chosen because of their complex structure, especially with respect to the weir construction. In a first step, the simulation results were analysed to identify flow conditions for discrete steady states. It is shown that the flow conditions in the CSO structure change once the spill flow pipe acts as a controlled outflow, and therefore the spill flow discharge cannot be described with a standard weir equation. In a second step, the CFD results were used to derive rating curves which can be easily applied in everyday practice. The rating curves are developed on the basis of the standard weir equation and the equation for orifice-type outlets. Because the intersection of both equations is not known, the coefficients of discharge are regressed from the CFD simulation results. Furthermore, the rating curves regressed from the CFD simulations are compared with the standard weir equation by using historic water levels and hydrographs generated with a hydrodynamic model. The uncertainties resulting from the widespread use of the standard weir equation are demonstrated.
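The two building blocks of the proposed rating curves are the standard sharp-crested weir equation and the orifice-outlet equation. A sketch with typical textbook discharge coefficients (the coefficients in the paper are regressed from the CFD results instead):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2


def weir_discharge(head_m, width_m, cd=0.62):
    """Standard rectangular sharp-crested weir, free overfall:
    Q = (2/3) * Cd * b * sqrt(2 g) * h^(3/2), with h the head above the crest."""
    return (2.0 / 3.0) * cd * width_m * math.sqrt(2.0 * G) * head_m ** 1.5


def orifice_discharge(head_m, area_m2, cd=0.60):
    """Orifice-type outlet: Q = Cd * A * sqrt(2 g h)."""
    return cd * area_m2 * math.sqrt(2.0 * G * head_m)
```

A combined rating curve would switch from the weir branch to the orifice branch once the outflow becomes pressurized, which is exactly the transition the CFD model is used to locate.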
NASA Astrophysics Data System (ADS)
Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng
2016-06-01
The low frequency error is a key factor which has affected uncontrolled geometry processing accuracy of the high-resolution optical image. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on geometric calibration field. Firstly, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection of star sensor, relative calibration among star sensors, multi-star sensor information fusion, low frequency error model construction and verification. Secondly, we use optical axis angle change detection method to analyze the law of low frequency error variation. Thirdly, we respectively use the method of relative calibration and information fusion among star sensors to realize the datum unity and high precision attitude output. Finally, we realize the low frequency error model construction and optimal estimation of model parameters based on DEM/DOM of geometric calibration field. To evaluate the performance of the proposed calibration method, a certain type satellite's real data is used. Test results demonstrate that the calibration model in this paper can well describe the law of the low frequency error variation. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 Coordinate Systems is obviously improved after the step-wise calibration.
Odegård, M; Mansfeld, J; Dundas, S H
2001-08-01
Calibration materials for microanalysis of Ti minerals have been prepared by direct fusion of synthetic and natural materials by resistance heating in high-purity graphite electrodes. Synthetic materials were FeTiO3 and TiO2 reagents doped with minor and trace elements; CRMs for ilmenite, rutile, and a Ti-rich magnetite were used as natural materials. Problems occurred during fusion of Fe2O3-rich materials, because at atmospheric pressure Fe2O3 decomposes into Fe3O4 and O2 at 1462 degrees C. An alternative fusion technique under pressure was tested, but the resulting materials were characterized by extensive segregation and development of separate phases. Fe2O3-rich materials were therefore fused below this temperature, resulting in a form of sintering, without conversion of the materials into amorphous glasses. The fused materials were studied by optical microscopy and EPMA, and tested as calibration materials by inductively coupled plasma mass spectrometry, equipped with laser ablation for sample introduction (LA-ICP-MS). It was demonstrated that calibration curves based on materials of rutile composition, within normal analytical uncertainty, generally coincide with calibration curves based on materials of ilmenite composition. It is, therefore, concluded that LA-ICP-MS analysis of Ti minerals can with advantage be based exclusively on calibration materials prepared for rutile, thereby avoiding the special fusion problems related to oxide mixtures of ilmenite composition. It is documented that sintered materials were in good overall agreement with homogeneous glass materials, an observation that indicates that in other situations also sintered mineral concentrates might be a useful alternative for instrument calibration, e.g. as alternative to pressed powders.
On the absolute calibration of SO2 cameras
Lübcke, Peter; Bobrowski, Nicole; Illing, Sebastian; Kern, Christoph; Alvarez Nieves, Jose Manuel; Vogel, Leif; Zielcke, Johannes; Delgado Granados, Hugo; Platt, Ulrich
2013-01-01
This work investigates the uncertainty of results gained through two commonly used, but quite different, calibration methods (DOAS and calibration cells). Measurements with three different instruments, an SO2 camera, an NFOV-DOAS system and an Imaging DOAS (I-DOAS), are presented. We compare the calibration-cell approach with the calibration from the NFOV-DOAS system. The respective results are compared with measurements from an I-DOAS to verify the calibration curve over the spatial extent of the image. The results show that calibration cells, while working fine in some cases, can lead to an overestimation of the SO2 CD by up to 60% compared with CDs from the DOAS measurements. Besides these calibration errors, radiative transfer effects (e.g. light dilution, multiple scattering) can significantly influence the results of both instrument types. The measurements presented in this work were taken at Popocatépetl, Mexico, between 1 March 2011 and 4 March 2011. Average SO2 emission rates between 4.00 and 14.34 kg s⁻¹ were observed.
Data user's notes of the radio astronomy experiment aboard the OGO-V spacecraft
NASA Technical Reports Server (NTRS)
Haddock, F. T.; Breckenridge, S. L.
1970-01-01
General information concerning the low-frequency radiometer, instrument package launching and operation, and scientific objectives of the flight is provided. Calibration curves and correction factors, with general and detailed information on the preflight calibration procedure, are included. The data acquisition methods and the format of the data reduction, both on 35 mm film and on incremental computer plots, are described.
VOC identification and inter-comparison from laboratory biomass burning using PTR-MS and PIT-MS
C. Warneke; J. M. Roberts; P. Veres; J. Gilman; W. C. Kuster; I. Burling; R. Yokelson; J. A. de Gouw
2011-01-01
Volatile organic compounds (VOCs) emitted from fires of biomass commonly found in the southeast and southwest U.S. were investigated with PTR-MS and PIT-MS, which are capable of fast measurements of a large number of VOCs. Both instruments were calibrated with gas standards, and mass-dependent calibration curves were determined. The sensitivity of the PIT-MS linearly...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnell, E; Ferreira, C; Ahmad, S
Purpose: Accuracy of a RSP-HU calibration curve produced for proton treatment planning is tested by comparing the treatment planning system dose grid to physical doses delivered on film by a Mevion S250 double-scattering proton unit. Methods: A single batch of EBT3 Gafchromic film was used for calibration and measurements. The film calibration curve was obtained using Mevion proton beam reference option 20 (15cm range, 10cm modulation). Paired films were positioned at the center of the spread out Bragg peak (SOBP) in solid water. The calibration doses were verified with an ion chamber, including background and doses from 20cGy to 350cGy. Films were scanned in a flatbed Epson-Expression 10000-XL scanner, and analyzed using the red channel. A Rando phantom was scanned with a GE LightSpeed CT Simulator. A single-field proton plan (Eclipse, Varian) was calculated to deliver 171cGy to the pelvis section (heterogeneous region), using a standard 4×4cm aperture without compensator, 7.89cm beam range, and 5.36cm SOBP. Varied depths of the calculated distal 90% isodose-line were recorded and compared. The dose distribution from film irradiated between Rando slices was compared with the calculated plans using RIT v.6.2. Results: Distal 90% isodose-line depth variation between CT scans was 2mm on average, and 4mm at maximum. Fine calculation of this variation was restricted by the dose calculation grid, as well as the slice thickness. Dose differences between calibrated film measurements and calculated doses were on average 5.93cGy (3.5%), with the large majority of differences forming a normal distribution around 3.5cGy (2%). Calculated doses were almost entirely greater than those measured. Conclusion: RSP to HU calibration curve is shown to produce distal depth variation within the margin of tolerance (±4.3mm) across all potential scan energies and protocols.
Dose distribution calculation is accurate to 2–4% within the SOBP, including areas of high tissue heterogeneity.
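An RSP-HU calibration curve is typically applied by piecewise-linear interpolation in an (HU, RSP) table. A sketch of that lookup; the table values below are illustrative only, not a clinical calibration:

```python
def rsp_from_hu(hu, table):
    """Piecewise-linear lookup of relative stopping power (RSP) from a CT
    Hounsfield unit, clamped at the table ends."""
    pts = sorted(table)
    if hu <= pts[0][0]:
        return pts[0][1]
    for (h0, r0), (h1, r1) in zip(pts, pts[1:]):
        if hu <= h1:
            return r0 + (r1 - r0) * (hu - h0) / (h1 - h0)
    return pts[-1][1]


# illustrative anchor points only: air, water, bone-like, metal-like
HU_RSP_TABLE = [(-1000, 0.001), (0, 1.0), (1000, 1.55), (3000, 2.3)]
```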
2. TYPICAL OVERHEAD WIRE CONSTRUCTION CURVE GUY WIRE ARRANGEMENT ...
2. TYPICAL OVERHEAD WIRE CONSTRUCTION - CURVE GUY WIRE ARRANGEMENT (ABANDONED WEST LEG OF WYE AT SIXTH AVENUE AND PINE STREET) - Yakima Valley Transportation Company Interurban Railroad, Trackage, Yakima, Yakima County, WA
Mello, Vinicius M; Oliveira, Flavia C C; Fraga, William G; do Nascimento, Claudia J; Suarez, Paulo A Z
2008-11-01
Three different calibration curves based on ¹H NMR spectroscopy (300 MHz) were used for quantifying the reaction yield during biodiesel synthesis by esterification of fatty acid mixtures and methanol. For this purpose, the integrated intensities of the hydrogens of the ester methoxy group (3.67 ppm) were correlated with the areas related to the various protons of the alkyl chain (olefinic hydrogens: 5.30-5.46 ppm; aliphatic: 2.67-2.78 ppm, 2.30 ppm, 1.96-2.12 ppm, 1.56-1.68 ppm, 1.22-1.42 ppm, 0.98 ppm, and 0.84-0.92 ppm). The first curve was obtained using the peaks relating to the olefinic hydrogens, a second with the paraffinic protons, and the third using the integrated intensities of all the hydrogens. A total of 35 samples were examined: 25 samples to build the three different calibration curves and ten samples to serve as external validation samples. The results showed no statistical differences among the three methods, and all presented prediction errors less than 2.45% with a coefficient of variation (CV) of 4.66%. 2008 John Wiley & Sons, Ltd.
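The yield calculation reduces to comparing the methoxy integral (3 H per ester molecule formed) with an alkyl-chain integral common to reacted and unreacted chains. A sketch of that ratio; the proton count per chain is an input the analyst supplies for the chosen signal region, and the numbers below are invented:

```python
def esterification_yield(methoxy_area, chain_area, chain_protons):
    """Percent conversion from 1H NMR integrals: the methoxy singlet near
    3.67 ppm counts 3 H per methyl ester formed, while the chosen alkyl-chain
    signal counts chain_protons H per fatty chain, esterified or not."""
    moles_ester = methoxy_area / 3.0
    moles_chain = chain_area / chain_protons
    return 100.0 * moles_ester / moles_chain
```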
A calibration method for patient specific IMRT QA using a single therapy verification film
Shukla, Arvind Kumar; Oinam, Arun S.; Kumar, Sanjeev; Sandhu, I.S.; Sharma, S.C.
2013-01-01
Aim: The aim of the present study is to develop and verify a single-film calibration procedure for use in intensity-modulated radiation therapy (IMRT) quality assurance. Background: Radiographic films have been regularly used in routine commissioning of treatment modalities and verification of treatment planning systems (TPS). Radiation dosimetry based on radiographic film can give absolute two-dimensional dose distributions and is preferred for IMRT quality assurance; a single therapy verification film gives a quick and reliable method for IMRT verification. Materials and methods: A single extended dose range (EDR 2) film was used to generate the sensitometric curve of film optical density versus radiation dose. The EDR 2 film was exposed with nine 6 cm × 6 cm fields of a 6 MV photon beam obtained from a medical linear accelerator at 5 cm depth in a solid water phantom. The nine regions of the single film were exposed with radiation doses ranging from 10 to 362 cGy. The actual dose measurements inside the field regions were performed using a 0.6 cm³ ionization chamber. The exposed film was processed after irradiation, scanned using a VIDAR film scanner, and the optical density was noted for each region. Ten IMRT plans of head and neck carcinoma were used for verification using a dynamic IMRT technique, and evaluated against the TPS-calculated dose distribution using the gamma index method. Results: A sensitometric curve was generated using a single film exposed at nine field regions to enable quantitative dose verification of IMRT treatments. The radiation scatter factor was observed to decrease exponentially with increasing distance from the centre of each field region. The IMRT plans based on the calibration curve were verified using the gamma index method and found to be within acceptance criteria.
Conclusion: The single-film method proved superior to the traditional calibration method and produces a fast daily film calibration for highly accurate IMRT verification. PMID:24416558
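The gamma-index evaluation mentioned above combines a dose tolerance and a distance-to-agreement tolerance. A simplified 1-D sketch of the calculation (real IMRT QA uses 2-D distributions, interpolation, and often global normalization; the defaults below are just the common 3%/3 mm choice):

```python
import math


def gamma_index(ref, meas, spacing, dose_tol=0.03, dist_tol=3.0):
    """1-D gamma analysis: for each reference point take the minimum, over all
    measured points, of sqrt((dose diff / dose_tol)^2 + (distance / dist_tol)^2);
    gamma <= 1 passes.  dose_tol is a fraction of the local reference dose;
    dist_tol and spacing are in mm.  Zero-dose reference points are treated
    as having zero dose difference in this sketch."""
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, dm in enumerate(meas):
            dd = (dm - dr) / (dose_tol * dr) if dr else 0.0
            dx = (j - i) * spacing / dist_tol
            best = min(best, math.hypot(dd, dx))
        gammas.append(best)
    return gammas
```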
Avella, Joseph; Lehrer, Michael; Zito, S William
2008-10-01
1,1-Difluoroethane (DFE), also known as Freon 152A, is a member of a class of compounds known as halogenated hydrocarbons. A number of these compounds have gained notoriety because of their ability to induce rapid onset of intoxication after inhalation exposure. Abuse of DFE has necessitated development of methods for its detection and quantitation in postmortem and human performance specimens. Furthermore, methodologies applicable to research studies are required as there have been limited toxicokinetic and toxicodynamic reports published on DFE. This paper describes a method for the quantitation of DFE using a gas chromatography-flame-ionization headspace technique that employs solventless standards for calibration. Two calibration curves using 0.5 mL whole blood calibrators which ranged from A: 0.225-1.350 to B: 9.0-180.0 mg/L were developed. These were evaluated for linearity (0.9992 and 0.9995), limit of detection of 0.018 mg/L, limit of quantitation of 0.099 mg/L (recovery 111.9%, CV 9.92%), and upper limit of linearity of 27,000.0 mg/L. Combined curve recovery results of a 98.0 mg/L DFE control that was prepared using an alternate technique was 102.2% with CV of 3.09%. No matrix interference was observed in DFE enriched blood, urine or brain specimens nor did analysis of variance detect any significant differences (alpha = 0.01) in the area under the curve of blood, urine or brain specimens at three identical DFE concentrations. The method is suitable for use in forensic laboratories because validation was performed on instrumentation routinely used in forensic labs and due to the ease with which the calibration range can be adjusted. Perhaps more importantly it is also useful for research oriented studies because the removal of solvent from standard preparation eliminates the possibility for solvent induced changes to the gas/liquid partitioning of DFE or chromatographic interference due to the presence of solvent in specimens.
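Limits of detection and quantitation of the kind reported above are often derived from the calibration fit itself, for example via the ICH-style 3.3·s/slope and 10·s/slope rules with s the residual standard deviation. This is a generic recipe for illustration, not necessarily the authors' exact method:

```python
def lod_loq(x, y):
    """Detection and quantitation limits from a linear calibration,
    ICH-style: LOD = 3.3 s / slope, LOQ = 10 s / slope, where s is the
    residual standard deviation of the least-squares fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    resid_ss = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    s = (resid_ss / (n - 2)) ** 0.5
    return 3.3 * s / slope, 10.0 * s / slope
```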
Honeybul, Stephen; Ho, Kwok M
2016-09-01
Predicting long-term neurological outcomes after severe traumatic brain injury (TBI) is important, but which prognostic model in the context of decompressive craniectomy has the best performance remains uncertain. This prospective observational cohort study included all patients who had severe TBI requiring decompressive craniectomy between 2004 and 2014, in the two neurosurgical centres in Perth, Western Australia. Severe disability, vegetative state, or death were defined as unfavourable neurological outcomes. Area under the receiver-operating-characteristic curve (AUROC) and the slope and intercept of the calibration curve were used to assess discrimination and calibration of the CRASH (Corticosteroid-Randomisation-After-Significant-Head injury) and IMPACT (International-Mission-For-Prognosis-And-Clinical-Trial) models, respectively. Of the 319 patients included in the study, 119 (37%) had unfavourable neurological outcomes at 18 months after decompressive craniectomy for severe TBI. Both the CRASH (AUROC 0.86, 95% confidence interval 0.81-0.90) and IMPACT full model (AUROC 0.85, 95% CI 0.80-0.89) were similar in discriminating between favourable and unfavourable neurological outcome at 18 months after surgery (p=0.690 for the difference in AUROC derived from the two models). Although both models tended to over-predict the risks of long-term unfavourable outcome, the IMPACT model had a slightly better calibration than the CRASH model (intercept of the calibration curve = -4.1 vs. -5.7, and log likelihoods -159 vs. -360, respectively), especially when the predicted risks of unfavourable outcome were <80%. Both the CRASH and IMPACT prognostic models were good at discriminating between favourable and unfavourable long-term neurological outcome for patients with severe TBI requiring decompressive craniectomy, but the calibration of the IMPACT full model was better than that of the CRASH model. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
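The AUROC used to compare CRASH and IMPACT has a simple rank interpretation: the probability that a randomly chosen patient with the outcome is scored higher than one without it. A minimal sketch of that computation (labels and scores below are invented):

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney formulation: the fraction of
    (positive, negative) pairs in which the positive is scored higher,
    counting ties as half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

Calibration, by contrast, compares predicted risks with observed frequencies; a model can discriminate well yet be poorly calibrated, which is exactly the pattern the study reports.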
SU-E-T-223: Computed Radiography Dose Measurements of External Radiotherapy Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aberle, C; Kapsch, R
2015-06-15
Purpose: To obtain quantitative, two-dimensional dose measurements of external radiotherapy beams with a computed radiography (CR) system and to derive volume correction factors for ionization chambers in small fields. Methods: A commercial Kodak ACR2000i CR system with Kodak Flexible Phosphor Screen HR storage foils was used. Suitable measurement conditions and procedures were established. Several corrections were derived, including image fading, length-scale corrections and long-term stability corrections. Dose calibration curves were obtained for cobalt, 4 MV, 8 MV and 25 MV photons, and for 10 MeV, 15 MeV and 18 MeV electrons in a water phantom. Inherent measurement inhomogeneities were studied as well as directional dependence of the response. Finally, 2D scans with ionization chambers were directly compared to CR measurements, and volume correction factors were derived. Results: Dose calibration curves (0.01 Gy to 7 Gy) were obtained for multiple photon and electron beam qualities. For each beam quality, the calibration curves can be described by a single fit equation over the whole dose range. The energy dependence of the dose response was determined. The length scale on the images was adjusted scan-by-scan, typically by 2 percent horizontally and by 3 percent vertically. The remaining inhomogeneities after the system’s standard calibration procedure were corrected for. After correction, the homogeneity is on the order of a few percent. The storage foils can be rotated by up to 30 degrees without a significant effect on the measured signal. First results on the determination of volume correction factors were obtained. Conclusion: With CR, quantitative, two-dimensional dose measurements with a high spatial resolution (sub-mm) can be obtained over a large dose range. In order to make use of these advantages, several calibrations, corrections and supporting measurements are needed.
This work was funded by the European Metrology Research Programme (EMRP) project HLT09 MetrExtRT Metrology for Radiotherapy using Complex Radiation Fields.
He, Y J; Li, X T; Fan, Z Q; Li, Y L; Cao, K; Sun, Y S; Ouyang, T
2018-01-23
Objective: To construct a dynamic enhanced MR based predictive model for early assessment of pathological complete response (pCR) to neoadjuvant therapy in breast cancer, and to evaluate the clinical benefit of the model by using decision curves. Methods: From December 2005 to December 2007, 170 patients with breast cancer treated with neoadjuvant therapy were identified, and their MR images before neoadjuvant therapy and at the end of the first cycle of neoadjuvant therapy were collected. A logistic regression model was used to detect independent factors for predicting pCR and to construct the predictive model accordingly; receiver operating characteristic (ROC) curves and decision curves were then used to evaluate the predictive model. Results: ΔArea(max) and Δslope(max) were independent predictive factors for pCR, OR = 0.942 (95% CI: 0.918-0.967) and 0.961 (95% CI: 0.940-0.987), respectively. The area under the ROC curve (AUC) for the constructed model was 0.886 (95% CI: 0.820-0.951). The decision curve showed that for threshold probabilities above 0.4, the predictive model presented increased net benefit as the threshold probability increased. Conclusions: The constructed predictive model for pCR is of potential clinical value, with an AUC >0.85. Meanwhile, decision curve analysis indicates the constructed predictive model has a net benefit of 3 to 8 percent in the likely threshold probability range of 80% to 90%.
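Decision-curve analysis evaluates a model by its net benefit at each threshold probability. A sketch of the standard net-benefit formula (the labels and predicted risks below are invented for illustration):

```python
def net_benefit(labels, risks, pt):
    """Decision-curve net benefit at threshold probability pt: treat everyone
    whose predicted risk is >= pt, then NB = TP/N - (FP/N) * pt / (1 - pt)."""
    n = len(labels)
    treated = [(l, r) for l, r in zip(labels, risks) if r >= pt]
    tp = sum(1 for l, _ in treated if l == 1)
    fp = len(treated) - tp
    return tp / n - (fp / n) * pt / (1.0 - pt)
```

Sweeping `pt` over the clinically plausible range and comparing against "treat all" and "treat none" baselines traces out the decision curve reported in the paper.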
NASA Technical Reports Server (NTRS)
Green, Robert O.; Conel, James E.; Vandenbosch, Jeannette; Shimada, Masanobu
1993-01-01
We describe an experiment to calibrate the optical sensor (OPS) on board the Japanese Earth Resources Satellite-1 with data acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). On 27 Aug. 1992 both the OPS and AVIRIS acquired data concurrently over a calibration target on the surface of Rogers Dry Lake, California. The high spectral resolution measurements of AVIRIS have been convolved to the spectral response curves of the OPS. These data in conjunction with the corresponding OPS digitized numbers have been used to generate the radiometric calibration coefficients for the eight OPS bands. This experiment establishes the suitability of AVIRIS for the calibration of spaceborne sensors in the 400 to 2500 nm spectral region.
Mehl, S.; Hill, M.C.
2001-01-01
Five common numerical techniques for solving the advection-dispersion equation (finite difference, predictor corrector, total variation diminishing, method of characteristics, and modified method of characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using discrete, randomly distributed, homogeneous blocks of five sand types. This experimental model provides an opportunity to compare the solution techniques: the heterogeneous hydraulic-conductivity distribution of known structure can be accurately represented by a numerical model, and detailed measurements can be compared with simulated concentrations and total flow through the tank. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation given the different methods of simulating solute transport. The breakthrough curves show that simulated peak concentrations, even at very fine grid spacings, varied between the techniques because of different amounts of numerical dispersion. Sensitivity-analysis results revealed: (1) a high correlation between hydraulic conductivity and porosity given the concentration and flow observations used, so that both could not be estimated; and (2) that the breakthrough curve data did not provide enough information to estimate individual values of dispersivity for the five sands. This study demonstrates that the choice of assigned dispersivity and the amount of numerical dispersion present in the solution technique influence estimated hydraulic conductivity values to a surprising degree.
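The numerical dispersion that differentiates the schemes can be seen in miniature with a first-order upwind discretization of pure advection, whose truncation error smears a sharp front much like physical dispersion does. A toy sketch, not any of the five schemes from the study:

```python
def upwind_advect(c, courant, steps):
    """First-order upwind scheme for pure advection on a periodic 1-D grid:
    c_i^{n+1} = c_i^n - Cr * (c_i^n - c_{i-1}^n).  Stable for 0 <= Cr <= 1,
    but its truncation error acts like added (numerical) dispersion."""
    c = list(c)
    for _ in range(steps):
        c = [c[i] - courant * (c[i] - c[i - 1]) for i in range(len(c))]
    return c


# a sharp concentration spike smears out even though the PDE has no dispersion;
# total mass is conserved
initial = [0.0] * 10
initial[3] = 1.0
final = upwind_advect(initial, 0.5, 4)
```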
NASA Astrophysics Data System (ADS)
Pool, Sandra; Viviroli, Daniel; Seibert, Jan
2017-11-01
Applications of runoff models usually rely on long and continuous runoff time series for model calibration. However, many catchments around the world are ungauged and estimating runoff for these catchments is challenging. One approach is to perform a few runoff measurements in a previously fully ungauged catchment and to constrain a runoff model by these measurements. In this study we investigated the value of such individual runoff measurements when taken at strategic points in time for applying a bucket-type runoff model (HBV) in ungauged catchments. Based on the assumption that a limited number of runoff measurements can be taken, we sought the optimal sampling strategy (i.e. when to measure the streamflow) to obtain the most informative data for constraining the runoff model. We used twenty gauged catchments across the eastern US, made the assumption that these catchments were ungauged, and applied different runoff sampling strategies. All tested strategies consisted of twelve runoff measurements within one year and ranged from simply using monthly flow maxima to a more complex selection of observation times. In each case the twelve runoff measurements were used to select 100 best parameter sets using a Monte Carlo calibration approach. Runoff simulations using these 'informed' parameter sets were then evaluated for an independent validation period in terms of the Nash-Sutcliffe efficiency of the hydrograph and the mean absolute relative error of the flow-duration curve. Model performance measures were normalized by relating them to an upper and a lower benchmark representing a well-informed and an uninformed model calibration. The hydrographs were best simulated with strategies including high runoff magnitudes as opposed to the flow-duration curves that were generally better estimated with strategies that captured low and mean flows. 
The choice of a sampling strategy covering the full range of runoff magnitudes enabled hydrograph and flow-duration curve simulations close to those of a well-informed model calibration. The differences among strategies covering the full range of runoff magnitudes were small, indicating that the exact choice of strategy may be less crucial. Our study corroborates the information value of a small number of strategically selected runoff measurements for simulating runoff with a bucket-type runoff model in almost ungauged catchments.
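As a rough illustration of the Monte Carlo calibration step described above, the sketch below replaces HBV with a toy linear-reservoir model and synthetic data; the model, parameter range, and precipitation series are all assumptions, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def simple_bucket(precip, k):
    """Toy linear-reservoir model standing in for HBV (illustration only)."""
    storage, flow = 0.0, []
    for p in precip:
        storage += p
        q = k * storage          # outflow proportional to storage
        storage -= q
        flow.append(q)
    return np.array(flow)

# Synthetic "truth": one year of daily precipitation and runoff
precip = rng.gamma(0.8, 4.0, 365)
q_true = simple_bucket(precip, k=0.3)

# Twelve strategic "measurements", here the monthly flow maxima
idx = np.array([np.argmax(q_true[m*30:(m+1)*30]) + m*30 for m in range(12)])

# Monte Carlo: sample parameters, score against the 12 measurements,
# keep the 100 best parameter sets
k_samples = rng.uniform(0.01, 0.9, 2000)
errors = [np.mean((simple_bucket(precip, k)[idx] - q_true[idx])**2)
          for k in k_samples]
best100 = k_samples[np.argsort(errors)[:100]]
```

With noise-free synthetic data the 100 'informed' parameter sets cluster tightly around the true value, which is the behavior the sampling strategies in the study try to approach.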
Calibration-free optical chemical sensors
DeGrandpre, Michael D.
2006-04-11
An apparatus and method for taking absorbance-based chemical measurements are described. In a specific embodiment, an indicator-based pCO2 (partial pressure of CO2) sensor displays sensor-to-sensor reproducibility and measurement stability. These qualities are achieved by: 1) renewing the sensing solution, 2) allowing the sensing solution to reach equilibrium with the analyte, and 3) calculating the response from a ratio of the indicator solution absorbances which are determined relative to a blank solution. Careful solution preparation, wavelength calibration, and stray light rejection also contribute to this calibration-free system. Three pCO2 sensors were calibrated and each had response curves which were essentially identical within the uncertainty of the calibration. Long-term laboratory and field studies showed the response had no drift over extended periods (months). The theoretical response, determined from thermodynamic characterization of the indicator solution, also predicted the observed calibration-free performance.
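The blank-referenced, ratiometric absorbance calculation described above can be sketched as follows; the intensities and the two wavelengths are hypothetical, and only the Beer-Lambert arithmetic is taken from the abstract.

```python
import math

def absorbance(intensity, blank_intensity):
    """Absorbance relative to a blank solution (Beer-Lambert)."""
    return -math.log10(intensity / blank_intensity)

# Hypothetical detector intensities at two indicator wavelengths
blank  = {"l1": 1000.0, "l2": 1000.0}
sample = {"l1": 400.0,  "l2": 800.0}

a1 = absorbance(sample["l1"], blank["l1"])
a2 = absorbance(sample["l2"], blank["l2"])
ratio = a1 / a2   # ratiometric response, insensitive to path length and lamp drift
```

Taking the ratio of two absorbances cancels multiplicative instrument factors, which is one reason the sensor response is reproducible without per-unit calibration.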
Flow-injection assay of catalase activity.
Ukeda, Hiroyuki; Adachi, Yukiko; Sawamura, Masayoshi
2004-03-01
A novel flow-injection assay (FIA) system with a double line for catalase activity was constructed in which an oxidase is immobilized and the substrate is continuously pumped to reduce the dissolved oxygen and to generate a given level of hydrogen peroxide. The catalase in a sample decomposed the hydrogen peroxide, and thus the increase in dissolved oxygen dependent on the activity was amperometrically monitored using a Clark-type oxygen electrode. Among the examined several oxidases, uricase was most suitable for the continuous formation of hydrogen peroxide from a consideration of the stability and the conversion efficiency. Under the optimum conditions, a linear calibration curve was obtained in the range from 21 to 210 units/mg and the reproducibility (CV) was better than 2% by 35 successive determinations of 210 units/ml catalase preparation. The sampling frequency was about 15 samples/h. The present FIA system was applicable to monitor the inactivation of catalase by glycation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batista, Adriana S.M.; Gual, Maritza R.; Faria, Luiz O.
Poly(vinylidene fluoride) [PVDF] homopolymers were irradiated with gamma doses ranging from 0.5 to 2.75 MGy. Differential scanning calorimetry (DSC) and FTIR spectrometry were used to study the effects of gamma radiation on the amorphous and crystalline polymer structures. The FTIR data revealed absorption bands at 1730 and 1853 cm^-1, attributed to the stretch of C=O bonds; at 1715 and 1754 cm^-1, attributed to C=C stretching; and at 3518, 3585 and 3673 cm^-1, associated with the NH stretch of NH2 and with OH. The melting latent heat (LM) measured by DSC was used to construct an unambiguous relationship with the delivered dose. Regression analyses revealed that the best mathematical function fitting the experimental calibration curve is a 4-degree polynomial, with an adjusted R-square of 0.99817. (authors)
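A minimal sketch of the dose-calibration fit described above, assuming illustrative latent-heat values (the real DSC data are not given in the abstract):

```python
import numpy as np

# Hypothetical dose (MGy) vs. melting latent heat data; the actual values
# come from the DSC measurements in the study.
dose = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 2.75])
lm   = np.array([48.0, 45.5, 41.0, 38.2, 36.9, 36.5])   # J/g, illustrative

coeffs = np.polyfit(dose, lm, 4)          # 4-degree polynomial fit
fit = np.polyval(coeffs, dose)

# Adjusted R-square for n points and p fitted coefficients
n, p = len(dose), len(coeffs)
ss_res = np.sum((lm - fit)**2)
ss_tot = np.sum((lm - lm.mean())**2)
r2 = 1 - ss_res / ss_tot
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p)
```

Inverting such a curve (dose as a function of LM) then gives a dosimeter reading from a single DSC measurement.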
Ahn, Sung Hee; Bae, Yong Jin; Moon, Jeong Hee; Kim, Myung Soo
2013-09-17
We propose to divide matrix suppression in matrix-assisted laser desorption ionization into two parts, normal and anomalous. In quantification of peptides, the normal effect can be accounted for by constructing the calibration curve in the form of peptide-to-matrix ion abundance ratio versus concentration. The anomalous effect forbids reliable quantification and is noticeable when matrix suppression is larger than 70%. With this 70% rule, matrix suppression becomes a guideline for reliable quantification, rather than a nuisance. A peptide in a complex mixture can be quantified even in the presence of large amounts of contaminants, as long as matrix suppression is below 70%. The theoretical basis for the quantification method using a peptide as an internal standard is presented together with its weaknesses. A systematic method to improve quantification of high concentration analytes has also been developed.
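The normal-suppression correction and the 70% rule might be applied as sketched below; all ion abundances and concentrations are hypothetical.

```python
import numpy as np

def matrix_suppression(matrix_signal_with_analyte, matrix_signal_blank):
    """Fraction of the matrix ion signal suppressed by the analyte."""
    return 1.0 - matrix_signal_with_analyte / matrix_signal_blank

# Hypothetical spectra: peptide and matrix ion abundances at known concentrations
conc     = np.array([0.5, 1.0, 2.0, 4.0])        # arbitrary units
peptide  = np.array([120., 230., 470., 900.])
matrix_i = np.array([800., 750., 640., 500.])

# Normal suppression is absorbed into the calibration by plotting the
# peptide-to-matrix ion abundance ratio versus concentration
ratio = peptide / matrix_i
slope, intercept = np.polyfit(conc, ratio, 1)

# 70% rule: quantification is considered reliable only below 70% suppression
suppression = matrix_suppression(matrix_i, matrix_signal_blank=850.0)
reliable = suppression < 0.70
```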
Wan, Haibao; Umstot, Edward S; Szeto, Hazel H; Schiller, Peter W; Desiderio, Dominic M
2004-04-15
The synthetic opioid peptide analog Dmt-D-Arg-Phe-Lys-NH(2) ([Dmt(1)]DALDA; Dmt = 2',6'-dimethyltyrosine) is a highly potent and selective mu opioid-receptor agonist. A very sensitive and robust capillary liquid chromatography/nanospray ion-trap (IT) mass spectrometry method has been developed to quantify [Dmt(1)]DALDA in ovine plasma, using deuterated [Dmt(1)]DALDA as the internal standard. The standard MS/MS spectra of d(0)- and d(5)-[Dmt(1)]DALDA were obtained, and the collision energy was experimentally optimized to 25%. The product ion [M + 2H - NH(3)](2+) (m/z 312.2) was used to identify and quantify the synthetic opioid peptide analog in ovine plasma samples. The MS/MS detection sensitivity for [Dmt(1)]DALDA was 625 amol. A calibration curve was constructed, and quantitative analysis was performed on a series of ovine plasma samples.
NASA Astrophysics Data System (ADS)
Munoz, Joshua
The primary focus of this research is evaluation of the feasibility, applicability, and accuracy of Doppler Light Detection And Ranging (LIDAR) sensors as non-contact means for measuring track speed, distance traveled, and curvature. Speed histories, currently measured with a rotary, wheel-mounted encoder, serve a number of useful purposes, one significant use involving derailment investigations. Distance calculation provides a spatial reference system for operators to locate track sections of interest. Railroad curves, with curvature measured by an IMU, are monitored to maintain track infrastructure within regulations. Speed measured with high accuracy leads to high-fidelity distance and curvature data through utilization of the processor clock rate and of left- and right-rail speed differentials during curve navigation, respectively. Wheel-mounted encoders, or tachometers, provide a relatively low-resolution speed profile, exhibit increased noise with increasing speed, and are subject to the inertial behavior of the rail car, which affects output data. The IMU used to measure curvature is dependent on acceleration and yaw-rate sensitivity and experiences difficulty in low-speed conditions. Preliminary system tests onboard a "Hy-Rail" utility vehicle capable of traveling on rail show that speed capture is possible using the rails as the reference moving target; furthermore, obtaining speed profiles from both rails allows for the calculation of speed differentials in curves to estimate degrees of curvature. Ground-truth distance calibration and curve measurement were also carried out. Distance calibration involved placement of spatial landmarks detected by a sensor to synchronize distance measurements as a pre-processing procedure. Curvature ground-truth measurements provided a reference system to confirm measurement results and observe alignment variation throughout a curve.
Primary testing occurred onboard a track geometry rail car, measuring rail speed over substantial mileage in various weather conditions and providing high-accuracy data from which to further calculate distance and curvature along the test routes. Test results indicate the LIDAR system measures speed at higher accuracy than the encoder, free of the noise that grows with increasing speed. Distance calculation is also highly accurate, with results showing high correlation with encoder and ground-truth data. Finally, curvature calculated from speed data shows good correlation with IMU measurements and a resolution capable of revealing localized track alignments. Further investigations involve a curve measurement algorithm and a speed calibration method independent of external reference systems, namely the encoder and ground-truth data. The speed calibration results show a high correlation with speed data from the track geometry vehicle. It is recommended that the study be extended to assess the LIDAR's sensitivity to car body motion in order to better isolate that embedded behavior in the speed and curvature profiles. Furthermore, in the interest of progressing the system toward a commercially viable unit, methods for self-calibration and pre-processing that allow for fully independent operation are highly encouraged.
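The speed-differential curvature estimate mentioned above can be sketched as follows, assuming standard gauge and the US arc definition of degree of curvature; the effective rail spacing and the speed values are illustrative assumptions, not the study's parameters.

```python
GAUGE_FT = 4.71  # effective rail spacing in feet (~1.435 m standard gauge; assumption)

def curve_radius_ft(v_outer, v_inner):
    """Radius from outer/inner rail speeds: each rail speed scales with its radius,
    so v_outer / v_inner = (R + g/2) / (R - g/2), solved for R."""
    return 0.5 * GAUGE_FT * (v_outer + v_inner) / (v_outer - v_inner)

def degree_of_curvature(radius_ft):
    """Arc definition used by US railroads: D = 5729.58 / R (R in feet)."""
    return 5729.58 / radius_ft

# Hypothetical LIDAR speeds (mph, units cancel in the ratio) on the two rails
r = curve_radius_ft(40.33, 40.0)
d = degree_of_curvature(r)
```

The small speed differential (here under 1%) illustrates why high-accuracy, low-noise speed measurement is needed before curvature can be resolved this way.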
NASA Astrophysics Data System (ADS)
Smith, V.; Mark, D.; Blockley, S.; Weh, A.
2010-12-01
Evolved melts that fuel large explosive eruptions encounter, and are often generated through melting of, crystal-rich parts of the magmatic system that fed previous eruptions. This results in many antecrysts being incorporated into the magma prior to eruption. In addition, many xenocrysts are entrained during eruption through conduit excavation. Combining all these crystal populations produces 40Ar/39Ar dates with wide ranges, such as those often reported in the literature. To obtain very precise dates of volcanic events it is thus necessary to assess whether antecrysts and xenocrysts affect the precision of the dates, and to establish ways to reduce these components. Here we use the deposits of the ~11 ka Ulleung-Oki eruption from the alkaline volcanic island of Ulleung, situated 130 km east of the Korean peninsula. The eruption deposits are widely dispersed and are found in the Suigetsu lake sequence of central Japan. A precise date for the tephra would help with construction of the terrestrial radiocarbon calibration curve, which spans back to the limit of radiocarbon dating (~50 ka). The new calibration model is currently being constructed using varve chronology (annual layer counting) and >600 14C determinations of terrestrial macrofossils*. However, the annual layers stop shortly after the 2 cm-thick Ulleung-Oki tephra. Precise dates for this volcanic event, obtained by a method independent of radiocarbon dating, would help validate the chronology of the core and test the validity of the radiocarbon calibration curve. The tephra in the core has been correlated to proximal deposits using the major and trace element compositions (determined using an electron microprobe and LA-ICPMS) of the glass shards that comprise the distal ash. The proximal Ulleung-Oki eruption deposits are sanidine-rich, with crystals ranging from ~80 microns to a few millimetres in size. These are likely to be a mixture of phenocrysts, antecrysts and xenocrysts.
To obtain a very precise age for a relatively young eruption (~11 ka) we carried out >70 40Ar/39Ar age determinations on crystals. The sanidines were extracted from individual large pumices that were fragmented using selFrag so that the crystals remained intact. The crystals were then split into different size ranges prior to analysis on a high-sensitivity multicollector noble gas mass spectrometer (ARGUS). This approach allows us to assess how the incorporation of antecrysts and xenocrysts affects 40Ar/39Ar dates. Here we present the age ranges and discuss the results. *Research being carried out by members of the NERC-funded Suigetsu 2006 Project led by Takeshi Nakagawa, Newcastle University, UK (http://www.suigetsu.org/)
Kumar, Abhinav; Gangadharan, Bevin; Cobbold, Jeremy; Thursz, Mark; Zitzmann, Nicole
2017-09-21
LC-MS and immunoassays can both detect protein biomarkers. Immunoassays are more commonly used but can potentially be outperformed by LC-MS. Both techniques have limitations, including the necessity to generate separate calibration curves for each biomarker. We present a rapid mass spectrometry-based assay utilising a universal calibration curve. For the first time we analyse clinical samples using the HeavyPeptide IGNIS kit, which establishes a 6-point calibration curve and determines the biomarker concentration in a single LC-MS acquisition. IGNIS was tested using apolipoprotein F (APO-F), a potential biomarker for non-alcoholic fatty liver disease (NAFLD). Human serum and IGNIS prime peptides were digested, and the IGNIS assay was used to quantify APO-F in clinical samples. Digestion of IGNIS prime peptides was optimised using trypsin and SMART Digest™. IGNIS was 9 times faster than the conventional LC-MS method for determining the concentration of APO-F in serum. APO-F decreased across NAFLD stages. Inter/intra-day variation and stability after sample preparation for one of the peptides were ≤13% coefficient of variation (CV). SMART Digest™ enabled complete digestion in 30 minutes, compared to 24 hours for in-solution trypsin digestion. We have optimised the IGNIS kit to quantify APO-F as a NAFLD biomarker in serum using a single LC-MS acquisition.
Mocho, Pierre; Desauziers, Valérie
2011-05-01
Solid-phase microextraction (SPME) is a powerful technique that is easy to implement for on-site static sampling of indoor VOCs emitted by building materials. However, a major constraint lies in the establishment of calibration curves, which requires the complex generation of standard atmospheres. The purpose of this paper is thus to propose a model to predict the adsorption kinetics (i.e., calibration curves) of four model VOCs. The model is based on Fick's laws for the gas phase and on the equilibrium or the solid diffusion model for the adsorptive phase. Two samplers (the FLEC® and a home-made cylindrical emission cell), coupled to SPME for static sampling of material emissions, were studied. Good agreement between modeling and experimental data is observed, and the results show the influence of sampling rate on the mass-transfer mode as a function of sample volume. The equilibrium model is suited to the rather large-volume sampler (cylindrical cell), while the solid diffusion model applies to the small-volume sampler (FLEC®). The limiting steps of mass transfer are diffusion in the gas phase for the cylindrical cell and pore-surface diffusion for the FLEC®. In the future, this modeling approach could be a useful tool for the time-saving development of SPME for studying building material emissions in static-mode sampling.
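As a heavily simplified stand-in for the Fick's-law/solid-diffusion model, the sketch below generates an adsorption-kinetics (calibration) curve from a first-order approach to equilibrium; the equilibrium mass, rate constant, and time grid are all assumptions, and the full coupled gas/solid transport model of the paper is not reproduced here.

```python
import numpy as np

def uptake_curve(times, m_eq, k):
    """First-order approach to equilibrium: dm/dt = k * (m_eq - m).
    A strong simplification of the Fick's-law / solid-diffusion model,
    integrated with an explicit Euler step."""
    m, out, t_prev = 0.0, [], 0.0
    for t in times:
        dt = t - t_prev
        m += k * (m_eq - m) * dt
        out.append(m)
        t_prev = t
    return np.array(out)

times = np.linspace(0, 60, 601)                 # minutes, hypothetical
mass = uptake_curve(times, m_eq=10.0, k=0.1)    # adsorbed mass on the fiber
```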
[Spectrometric assessment of thyroid depth within the radioiodine test].
Rink, T; Bormuth, F-J; Schroth, H-J; Braun, S; Zimny, M
2005-01-01
The aim of this study is the validation of a simple method for evaluating the depth of the target volume within the radioiodine test by analyzing the emitted iodine-131 energy spectrum. In a total of 250 patients (102 with a solitary autonomous nodule, 66 with multifocal autonomy, 29 with disseminated autonomy, 46 with Graves' disease, 6 treated to reduce goiter volume, and 1 with an only partly resectable papillary thyroid carcinoma), simultaneous uptake measurements in the Compton scatter (210 +/- 110 keV) and photopeak (364 -45/+55 keV) windows were performed over one minute, 24 hours after application of the 3 MBq test dose, with subsequent calculation of the respective count ratios. Measurements with a water-filled plastic neck phantom were carried out to determine the relationship between these quotients and the average source depth and to obtain a calibration curve for calculating the depth of the target volume in the 250 patients for comparison with the sonographic reference data. Another calibration curve was obtained by evaluating the results of 125 randomly selected patient measurements in order to calculate the source depth in the other half of the group. The phantom measurements revealed a highly significant correlation (r = 0.99) between the count ratios and the source depth. Using these calibration data, a good relationship (r = 0.81, average deviation 6 mm, corresponding to 22%) between the spectrometric and the sonographic depths was obtained. When using the calibration curve resulting from the 125 patient measurements, the average deviation in the other half of the group was only 3 mm (12%). There was no difference between the disease groups. The described method allows an easy-to-use depth correction of the uptake measurements and provides good results.
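A sketch of the depth calibration described above, assuming hypothetical phantom data (the study's actual count ratios are not given in the abstract):

```python
import numpy as np

# Hypothetical phantom data: Compton/photopeak count ratio vs. source depth (mm)
depth_mm = np.array([5., 10., 15., 20., 25., 30.])
ratio    = np.array([0.42, 0.55, 0.68, 0.80, 0.93, 1.05])  # illustrative

# Linear calibration curve (the study reports r = 0.99 for the phantom data)
slope, intercept = np.polyfit(ratio, depth_mm, 1)

def thyroid_depth(count_ratio):
    """Invert the calibration curve: count ratio -> source depth in mm."""
    return slope * count_ratio + intercept

est = thyroid_depth(0.74)   # depth estimate for a hypothetical patient ratio
```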
NASA Astrophysics Data System (ADS)
Uno, Yuko; Ogawa, Emiyu; Aiyoshi, Eitaro; Arai, Tsunenori
2018-02-01
We constructed a 3-compartment talaporfin sodium pharmacokinetic model for the canine by optimization against fluorescence measurement data from canine skin, in order to estimate the concentration in the interstitial space. It is difficult to construct a 3-compartment model consisting of plasma, interstitial space, and cells because dynamic information is lacking. We therefore proposed a methodology to construct the 3-compartment model using the measured change in talaporfin sodium skin fluorescence, taking into account the tissue of origin determined by histological observation. In a canine animal experiment, the time history of the talaporfin sodium concentration in plasma was measured by a spectrophotometer with a prepared calibration curve. The time history of talaporfin sodium Q-band fluorescence on the left femoral skin of a beagle dog, excited at the talaporfin sodium Soret band of 409 nm, was measured in vivo by our previously constructed measurement system. The measured skin fluorescence was apportioned to its sources, that is, specific ratios of plasma, interstitial space, and cells. We formulated differential rate equations for the talaporfin sodium concentration in plasma, interstitial space, and cells, and arranged the specific ratios and a conversion constant to obtain the absolute skin concentration. Minimizing the squared error between the measured fluorescence data and the calculated concentration with the conjugate gradient method in MATLAB, we determined the rate constants of the 3-compartment model. The accuracy of the fitting operation was confirmed by a coefficient of determination of 0.98. We could thus construct the 3-compartment pharmacokinetic model for the canine using the measured talaporfin sodium fluorescence change from canine skin.
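The forward part of such a 3-compartment model can be sketched with simple Euler integration as below; the rate constants, initial plasma concentration, and time grid are illustrative assumptions, and the conjugate-gradient fitting step of the study is omitted.

```python
import numpy as np

def simulate_3cpt(k12, k21, k23, k32, kel, c0, t_end=24.0, dt=0.01):
    """Euler integration of a 3-compartment model (plasma, interstitial
    space, cell) with elimination from plasma; rate constants are
    illustrative, not the study's fitted values."""
    n = int(t_end / dt)
    c = np.zeros((n, 3))
    c[0] = [c0, 0.0, 0.0]
    for i in range(1, n):
        p, s, cell = c[i - 1]
        dp    = -(k12 + kel) * p + k21 * s       # plasma
        ds    = k12 * p - (k21 + k23) * s + k32 * cell  # interstitial space
        dcell = k23 * s - k32 * cell             # cell
        c[i] = c[i - 1] + dt * np.array([dp, ds, dcell])
    return c

conc = simulate_3cpt(k12=0.5, k21=0.3, k23=0.2, k32=0.1, kel=0.2, c0=10.0)
```

Fitting would then minimize the squared error between a weighted combination of these compartment curves and the measured skin fluorescence.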
Fracture resistance of a TiB2 particle/SiC matrix composite at elevated temperature
NASA Technical Reports Server (NTRS)
Jenkins, Michael G.; Salem, Jonathan A.; Seshadri, Srinivasa G.
1988-01-01
The fracture resistance of a commercial TiB2 particle/SiC matrix composite was evaluated at temperatures ranging from 20 to 1400 C. A laser interferometric strain gauge (LISG) was used to continuously monitor the crack mouth opening displacement (CMOD) of the chevron-notched and straight-notched, three-point bend specimens used. Crack growth resistance curves (R-curves) were determined from the load versus displacement curves and displacement calibrations. Fracture toughness, work-of-fracture, and R-curve levels were found to decrease with increasing temperature. Microstructure, fracture surface, and oxidation coat were examined to explain the fracture behavior.
Construction of a Cr3C2-C Peritectic Point Cell for Thermocouple Calibration
NASA Astrophysics Data System (ADS)
Ogura, Hideki; Deuze, Thierry; Morice, Ronan; Ridoux, Pascal; Filtz, Jean-Remy
The melting points of the Cr3C2-C peritectic (1826°C) and Cr7C3-Cr3C2 eutectic (1742°C) alloys are investigated as materials for high-temperature fixed-point cells for use in thermocouple calibration. Pretests are performed to establish a suitable procedure for constructing contact-thermometry cells based on such chromium-carbon mixtures. Two cells are constructed following two different possible procedures. The above two melting points are successfully observed for one of these cells using tungsten-rhenium alloy thermocouples.
NASA Astrophysics Data System (ADS)
Engeland, K.; Steinsland, I.; Petersen-Øverleir, A.; Johansen, S.
2012-04-01
The aim of this study is to assess the uncertainties in streamflow simulations when uncertainties in both the observed inputs (precipitation and temperature) and the streamflow observations used in calibrating the hydrological model are explicitly accounted for. To achieve this goal we applied the elevation-distributed HBV model, operating on daily time steps, to a small catchment at high elevation in southern Norway where the seasonal snow cover is important. The uncertainties in precipitation inputs were quantified using conditional simulation. This procedure accounts for the uncertainty related to the density of the precipitation network, but neglects uncertainties related to measurement bias/errors and possible elevation gradients in precipitation. The uncertainties in temperature inputs were quantified using a Bayesian temperature interpolation procedure in which the temperature lapse rate is re-estimated every day. The uncertainty in the lapse rate was accounted for, whereas the sampling uncertainty related to network density was neglected. For every day, a random sample of precipitation and temperature inputs was drawn to be applied as input to the hydrologic model. The uncertainties in observed streamflow were assessed based on the uncertainties in the rating curve model. A Bayesian procedure was applied to estimate the probability of rating curve models with 1 to 3 segments and the uncertainties in their parameters. This method neglects uncertainties related to errors in observed water levels. Note that one rating curve was drawn to make one realisation of a whole streamflow time series; thus the rating curve errors lead to a systematic bias in the streamflow observations. All these uncertainty sources were linked together in both calibration and evaluation of the hydrologic model using a DREAM-based MCMC routine. The effects of having less information (e.g. missing one streamflow measurement for defining the rating curve, or missing one precipitation station) were also investigated.
Measurement of large steel plates based on linear scan structured light scanning
NASA Astrophysics Data System (ADS)
Xiao, Zhitao; Li, Yaru; Lei, Geng; Xi, Jiangtao
2018-01-01
A measuring method based on linear structured light scanning is proposed to achieve accurate measurement of the complex internal shapes of large steel plates. First, using a calibration plate with round marks, an improved line-scan calibration method is designed, through which the internal and external parameters of the camera are determined. Second, images of the steel plates are acquired by a line-scan camera; the Canny edge detection method is used to extract approximate contours of the steel plate images, and a Gauss fitting algorithm is used to extract the sub-pixel edges of the steel plate contours. Third, to address the problem of inaccurate restoration of contour size, the horizontal and vertical error curves of the images are obtained by measuring the distances between adjacent points in a grid of known dimensions. Finally, these horizontal and vertical error curves are used to correct the contours of the steel plates, and the contour dimensions are then calculated in combination with the internal and external calibration parameters. The experimental results demonstrate that the proposed method achieves an error of 1 mm/m in a 1.2 m × 2.6 m field of view, which satisfies the demands of industrial measurement.
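The error-curve correction in the third step might look like the following sketch, with a hypothetical horizontal error curve measured from a grid of known dimensions:

```python
import numpy as np

# Hypothetical horizontal error curve: measured position (mm) vs. the
# systematic error (mm) observed at that position on the reference grid
x_pos = np.array([0., 500., 1000., 1500., 2000., 2600.])
x_err = np.array([0.0, 0.8, 1.3, 1.1, 0.6, 0.2])   # illustrative

def correct_x(measured_mm):
    """Subtract the interpolated systematic error at the measured position."""
    return measured_mm - np.interp(measured_mm, x_pos, x_err)

corrected = correct_x(1200.0)   # contour point at 1200 mm, corrected
```

The same correction, applied with a separate vertical error curve, would be run over every contour point before the final dimensions are computed.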
McJimpsey, Erica L.
2016-01-01
The prostate-specific antigen (PSA) assays currently employed for the detection of prostate cancer (PCa) lack the specificity needed to differentiate PCa from benign prostatic hyperplasia and have high false positive rates. The PSA calibrants used to create calibration curves in these assays are typically purified from seminal plasma and contain many molecular forms (intact PSA and cleaved subforms). The purpose of this study was to determine if the composition of the PSA molecular forms found in these PSA standards contributes to the lack of PSA test reliability. To this end, seminal plasma purified PSA standards from different commercial sources were investigated by western blot (WB) and in multiple research-grade PSA ELISAs. The WB results revealed that all of the PSA standards contained different mass concentrations of intact and cleaved molecular forms. Increased mass concentrations of intact PSA yielded higher immunoassay absorbance values, even between lots from the same manufacturer. Standardization of the molecular-form mass concentrations and purification methods of seminal plasma derived PSA calibrants will, by increasing the accuracy of the calibration curves, assist in closing the gaps in PCa testing measurements that require the use of PSA values, such as the % free PSA and the Prostate Health Index. PMID:26911983
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, S; Kim, K; Jung, H
Purpose: Small animal irradiators are used with small animals to optimize new radiation therapies in preclinical studies, with the animals irradiated by whole- or partial-body exposure. In this study, the dosimetric characterization of a small animal irradiator was carried out in small fields using radiochromic films. Material & Methods: The study was performed on a commercial animal irradiator (XRAD-320, Precision X-Ray Inc., North Brantford) with radiochromic films (EBT2, Ashland Inc., Covington). A calibration curve was generated between delivered dose and optical density (red channel), and the films were scanned with an Epson 1000XL scanner (Epson America Inc., Long Beach, CA). We evaluated the dosimetric characterization of the irradiator at 260 kV using the various filters supplied by the manufacturer: F1 (2.0 mm aluminum; HVL = about 1.0 mm Cu) and F2 (0.75 mm tin + 0.25 mm copper + 1.5 mm aluminum; HVL = about 3.7 mm Cu). We calculated the percentage depth dose (PDD) for each collimator size (3, 5, 7, and 10 mm); the source-surface distance (SSD) was 17.3 cm, considering the dose rate. Results: The films were irradiated at 260 kV, 10 mA, with the exposure time increased in 5 s intervals from 5 s to 120 s. The calibration curve of the films was fitted with a cubic function; the correlation between optical density and dose was Y = 0.1405X^3 - 2.916X^2 + 25.566X + 2.238 (R^2 = 0.994). Based on the calibration curve, we calculated the PDD for the various filters depending on collimator size. When the PDD at a specific depth (3 mm, considering animal size) was compared, the difference due to collimator size was 4.50% with no filter, 1.53% with F1, and within 2.17% with F2. Conclusion: We calculated PDD curves for a small animal irradiator depending on the collimator size and the kind of filter using radiochromic films. Various PDD curves were acquired, making it possible to deliver various doses using these curves.
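The reported cubic calibration function can be evaluated directly; a sketch, assuming X is the measured (red-channel) optical density and Y the delivered dose:

```python
def dose_from_od(x):
    """Cubic film calibration reported in the study (R^2 = 0.994);
    the meaning of X as net optical density is an assumption here."""
    return 0.1405 * x**3 - 2.916 * x**2 + 25.566 * x + 2.238

d = dose_from_od(1.0)
```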
WE-D-17A-06: Optically Stimulated Luminescence Detectors as ‘LET-Meters’ in Proton Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granville, D; Sahoo, N; Sawakuchi, GO
Purpose: To demonstrate and evaluate the potential of optically stimulated luminescence (OSL) detectors (OSLDs) for measurements of linear energy transfer (LET) in therapeutic proton beams. Methods: Batches of Al2O3:C OSLDs were irradiated with an absorbed dose of 0.2 Gy in un-modulated proton beams of varying LET (0.67 keV/μm to 2.58 keV/μm). The OSLDs were read using continuous wave (CW-OSL) and pulsed (P-OSL) stimulation modes. We parameterized and calibrated three characteristics of the OSL signals as functions of LET: CW-OSL curve shape, P-OSL curve shape, and the ratio of the two OSL emission band intensities (ultraviolet/blue ratio). Calibration curves were created for each of these characteristics to describe their behavior as functions of LET. The true LET values were determined using a validated Monte Carlo model of the proton therapy nozzle used to irradiate the OSLDs. We then irradiated batches of OSLDs with an absorbed dose of 0.2 Gy at various depths in two modulated proton beams (140 MeV, 4 cm wide spread-out Bragg peak (SOBP), and 250 MeV, 10 cm wide SOBP). The LET values were calculated using the OSL response and the calibration curves. Finally, the measured LET values were compared to the true values determined using Monte Carlo simulations. Results: The CW-OSL curve shape, P-OSL curve shape, and ultraviolet/blue ratio provided proton LET estimates within 12.4%, 5.7% and 30.9% of the true values, respectively. Conclusion: We have demonstrated that LET can be measured within 5.7% using Al2O3:C OSLDs in the therapeutic proton beams used in this investigation. From a single OSLD readout it is possible to measure both the absorbed dose and the LET. This has potential future applications in proton therapy quality assurance, particularly for treatment plans based on optimization of LET distributions. This research was partially supported by the Natural Sciences and Engineering Research Council of Canada.
Determination of Flavonoids in Wine by High Performance Liquid Chromatography
NASA Astrophysics Data System (ADS)
da Queija, Celeste; Queirós, M. A.; Rodrigues, Ligia M.
2001-02-01
The experiment presented is an application of HPLC to the analysis of flavonoids in wines, designed for students of instrumental methods. It is done in two successive 4-hour laboratory sessions. While the hydrolysis of the wines is in progress, the students prepare the calibration curves with standard solutions of flavonoids and calculate the regression lines and correlation coefficients. During the second session they analyze the hydrolyzed wine samples and calculate the concentrations of the flavonoids using the calibration curves obtained earlier. This laboratory work is very attractive to students because they deal with a common daily product whose components are reported to have preventive and therapeutic effects. Furthermore, students can execute preparative work and apply a more elaborate technique that is nowadays an indispensable tool in instrumental analysis.
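The students' calibration step (regression line, correlation coefficient, and back-calculation of sample concentration) can be sketched as follows, with hypothetical standard-solution data:

```python
import numpy as np

# Hypothetical standard solutions of one flavonoid: concentration vs. peak area
conc = np.array([5., 10., 20., 40., 80.])          # mg/L
area = np.array([12.1, 24.5, 49.0, 97.8, 196.0])   # arbitrary units

# Regression line and correlation coefficient for the calibration curve
slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]

def quantify(peak_area):
    """Concentration of the wine sample from its measured peak area."""
    return (peak_area - intercept) / slope

c = quantify(60.0)
```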
Light curves of flat-spectrum radio sources (Jenness+, 2010)
NASA Astrophysics Data System (ADS)
Jenness, T.; Robson, E. I.; Stevens, J. A.
2010-05-01
Calibrated data for 143 flat-spectrum extragalactic radio sources are presented at a wavelength of 850um covering a 5-yr period from 2000 April. The data, obtained at the James Clerk Maxwell Telescope using the Submillimetre Common-User Bolometer Array (SCUBA) camera in pointing mode, were analysed using an automated pipeline process based on the Observatory Reduction and Acquisition Control - Data Reduction (ORAC-DR) system. This paper describes the techniques used to analyse and calibrate the data, and presents the data base of results along with a representative sample of the better-sampled light curves. A re-analysis of previously published data from 1997 to 2000 is also presented. The combined catalogue, comprising 10493 flux density measurements, provides a unique and valuable resource for studies of extragalactic radio sources. (2 data files).
Calibration method for a large-scale structured light measurement system.
Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken
2017-05-10
The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.
NASA Technical Reports Server (NTRS)
Chambers, J. R.; Grafton, S. B.; Lutze, F. H.
1981-01-01
The test capabilities of the Stability Wind Tunnel of the Virginia Polytechnic Institute and State University are described, and calibrations for curved and rolling flow techniques are given. Oscillatory snaking tests to determine pure yawing derivatives are considered. Representative aerodynamic data obtained for a current fighter configuration using the curved and rolling flow techniques are presented. The application of dynamic derivatives obtained in such tests to the analysis of airplane motions in general, and to high angle of attack flight conditions in particular, is discussed.
Historical Cost Curves for Hydrogen Masers and Cesium Beam Frequency and Timing Standards
NASA Technical Reports Server (NTRS)
Remer, D. S.; Moore, R. C.
1985-01-01
Historical cost curves were developed for hydrogen masers and cesium beam standards used for frequency and timing calibration in the Deep Space Network. These curves may be used to calculate the cost of future hydrogen masers or cesium beam standards in either future or current dollars. Relative to the National Aeronautics and Space Administration inflation index, the cost of cesium beam standards has been decreasing by about 2.3% per year since 1966, and that of hydrogen masers by about 0.8% per year since 1978.
Xu, Xiuqing; Yang, Xiuhan; Martin, Steven J; Mes, Edwin; Chen, Junlan; Meunier, David M
2018-08-17
Accurate measurement of molecular weight averages (M̄n, M̄w, M̄z) and molecular weight distributions (MWD) of polyether polyols by conventional SEC (size exclusion chromatography) is not as straightforward as it would appear. Conventional calibration with polystyrene (PS) standards can only provide PS-apparent molecular weights, which do not provide accurate estimates of polyol molecular weights. Using polyethylene oxide/polyethylene glycol (PEO/PEG) for molecular weight calibration could improve the accuracy, but the retention behavior of PEO/PEG is not stable in tetrahydrofuran (THF)-based SEC systems. In this work, two approaches for calibration curve conversion with narrow PS and polyol molecular weight standards were developed. Equations to convert PS-apparent molecular weight to polyol-apparent molecular weight were developed using both a rigorous mathematical analysis and a graphical plot regression method. The conversion equations obtained by the two approaches were in good agreement. Factors influencing the conversion equation were investigated. It was concluded that separation conditions such as column batch and operating temperature did not have a significant impact on the conversion coefficients, and a universal conversion equation could be obtained. With this conversion equation, more accurate estimates of molecular weight averages and MWDs for polyether polyols can be achieved from conventional PS-THF SEC calibration. Moreover, no additional experimentation is required to convert historical PS-equivalent data to reasonably accurate molecular weight results.
Utility-based designs for randomized comparative trials with categorical outcomes
Murray, Thomas A.; Thall, Peter F.; Yuan, Ying
2016-01-01
A general utility-based testing methodology for design and conduct of randomized comparative clinical trials with categorical outcomes is presented. Numerical utilities of all elementary events are elicited to quantify their desirabilities. These numerical values are used to map the categorical outcome probability vector of each treatment to a mean utility, which is used as a one-dimensional criterion for constructing comparative tests. Bayesian tests are presented, including fixed sample and group sequential procedures, assuming Dirichlet-multinomial models for the priors and likelihoods. Guidelines are provided for establishing priors, eliciting utilities, and specifying hypotheses. Efficient posterior computation is discussed, and algorithms are provided for jointly calibrating test cutoffs and sample size to control overall type I error and achieve specified power. Asymptotic approximations for the power curve are used to initialize the algorithms. The methodology is applied to re-design a completed trial that compared two chemotherapy regimens for chronic lymphocytic leukemia, in which an ordinal efficacy outcome was dichotomized and toxicity was ignored to construct the trial’s design. The Bayesian tests also are illustrated by several types of categorical outcomes arising in common clinical settings. Freely available computer software for implementation is provided. PMID:27189672
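The core mapping in the utility-based design above, collapsing each treatment's categorical outcome probability vector to a scalar mean utility, can be sketched directly. The elicited utilities and outcome probabilities below are hypothetical illustrations, not values from the trial.

```python
# Sketch of the utility mapping: elicited utilities for elementary outcomes
# collapse a categorical outcome probability vector to a mean utility.
# Utilities and probabilities are hypothetical.
def mean_utility(probs, utilities):
    """Expected utility of a treatment arm given its outcome probabilities."""
    assert abs(sum(probs) - 1.0) < 1e-9  # probabilities must sum to 1
    return sum(p * u for p, u in zip(probs, utilities))

# Elementary outcomes: (response, no toxicity), (response, toxicity),
#                      (no response, no toxicity), (no response, toxicity)
utilities = [100.0, 60.0, 40.0, 0.0]
arm_A = [0.40, 0.20, 0.25, 0.15]
arm_B = [0.30, 0.30, 0.25, 0.15]

# One-dimensional comparison criterion for the two arms
delta = mean_utility(arm_A, utilities) - mean_utility(arm_B, utilities)
```

The comparison between arms then reduces to a test on the difference of mean utilities, which is what the Bayesian procedures in the paper operate on.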
Hatten, James R.; Batt, Thomas R.
2010-01-01
We used a two-dimensional (2D) hydrodynamic model to simulate and compare the hydraulic characteristics in a 74-km reach of the Columbia River (the Bonneville Reach) before and after construction of Bonneville Dam. For hydrodynamic modeling, we created a bathymetric layer of the Bonneville Reach from single-beam and multi-beam echo-sounder surveys, digital elevation models, and navigation surveys. We calibrated the hydrodynamic model at 100 and 300 kcfs with a user-defined roughness layer, a variable-sized mesh, and a U.S. Army Corps of Engineers backwater curve. We verified the 2D model with acoustic Doppler current profiler (ADCP) data at 14 transects and three flows. The 2D model was 88% accurate for water depths, and 77% accurate for velocities. We verified a pre-dam 2D model run at 126 kcfs using pre-dam aerial photos from September 1935. Hydraulic simulations indicated that mean water depths in the Bonneville Reach increased by 34% following dam construction, while mean velocities decreased by 58%. There are numerous activities that would benefit from data output from the 2D model, including biological sampling, bioenergetics, and spatially explicit habitat modeling.
Using Peano Curves to Construct Laplacians on Fractals
NASA Astrophysics Data System (ADS)
Molitor, Denali; Ott, Nadia; Strichartz, Robert
2015-12-01
We describe a new method to construct Laplacians on fractals using a Peano curve from the circle onto the fractal, extending an idea that has been used in the case of certain Julia sets. The Peano curve allows us to visualize eigenfunctions of the Laplacian by graphing the pullback to the circle. We study in detail three fractals: the pentagasket, the octagasket and the magic carpet. We also use the method for two nonfractal self-similar sets, the torus and the equilateral triangle, obtaining appealing new visualizations of eigenfunctions on the triangle. In contrast to the many familiar pictures of approximations to standard Peano curves, which do not show self-intersections, our descriptions of approximations to the Peano curves have self-intersections that play a vital role in constructing graph approximations to the fractal with explicit graph Laplacians that give the fractal Laplacian in the limit.
NASA Astrophysics Data System (ADS)
Liu, Boshi; Huang, Renliang; Yu, Yanjun; Su, Rongxin; Qi, Wei; He, Zhimin
2018-04-01
Ochratoxin A (OTA) is a mycotoxin generated by the metabolism of Aspergillus and Penicillium, and is extremely toxic to humans, livestock, and poultry. However, traditional assays for the detection of OTA are expensive and complicated. In addition to the OTA aptamer, OTA itself at high concentrations can adsorb on the surface of gold nanoparticles (AuNPs) and thereby inhibit their salt-induced aggregation. We herein report a new OTA assay that applies the localized surface plasmon resonance effect of AuNPs and their aggregates. Because the result obtained from a single linear calibration curve is not reliable, we developed a "double calibration curve" method to address this issue and widen the OTA detection range. A number of other analytes were also examined, and the structural properties of analytes that bind to the AuNPs were further discussed. We found that various considerations must be taken into account in the detection of these analytes when applying AuNP aggregation-based methods, owing to their different binding strengths.
The Importance of Calibration in Clinical Psychology.
Lindhiem, Oliver; Petersen, Isaac T; Mentch, Lucas K; Youngstrom, Eric A
2018-02-01
Accuracy has several elements, not all of which have received equal attention in the field of clinical psychology. Calibration, the degree to which a probabilistic estimate of an event reflects the true underlying probability of the event, has largely been neglected in the field of clinical psychology in favor of other components of accuracy such as discrimination (e.g., sensitivity, specificity, area under the receiver operating characteristic curve). Although it is frequently overlooked, calibration is a critical component of accuracy with particular relevance for prognostic models and risk-assessment tools. With advances in personalized medicine and the increasing use of probabilistic (0% to 100%) estimates and predictions in mental health research, the need for careful attention to calibration has become increasingly important.
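A standard way to examine the calibration component described above is a reliability (binning) analysis: group probabilistic predictions into bins and compare each bin's mean predicted probability with its observed event rate. A minimal sketch with hypothetical predictions and outcomes:

```python
# Minimal sketch of a reliability (calibration) analysis. Predictions and
# outcomes are hypothetical; a well-calibrated model has mean predicted
# probability close to the observed event rate in every bin.
def calibration_bins(probs, outcomes, n_bins=5):
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # bin index for p in [0, 1]
        bins[idx].append((p, y))
    summary = []
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)  # mean predicted probability
            obs = sum(y for _, y in b) / len(b)     # observed event rate
            summary.append((mean_p, obs))
    return summary

probs = [0.10, 0.15, 0.50, 0.55, 0.90, 0.95]
outcomes = [0, 0, 1, 0, 1, 1]
summary = calibration_bins(probs, outcomes)
```

Note that a model can discriminate well (high AUC) while still being poorly calibrated, which is why the two components of accuracy need separate assessment.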
NASA Astrophysics Data System (ADS)
Grindlay, J.; Tang, S.; Simcoe, R.; Laycock, S.; Los, E.; Mink, D.; Doane, A.; Champine, G.
2009-08-01
With the planned full digitization of the Harvard plate collection, the temporal Universe can now be studied on previously inaccessible timescales of days to decades, over a full century. The Digital Access to a Sky Century @ Harvard (DASCH) project has developed the world's highest-speed precision plate scanner and the required software to digitize the ˜500,000 glass photographic plates (mostly 20 x 25~cm) that record images of the full sky taken by some 20 telescopes in both hemispheres over the period 1880 - 1985. These provide ˜500-1000 measures of any object brighter than the plate limit (typically B ˜14 - 17) with photometric accuracy from the digital image typically Δm ˜0.10 - 0.15 mag, with the presently developed photometry pipeline and spatially-dependent calibration (using the Hubble Guide Star Catalog) for each plate. We provide an overview of DASCH, the processing, and example light curves that illustrate the power of this unique dataset and resource. Production scanning and serving on-line the entire ˜1 PB database (both images and derived light curves) on spinning disk could be completed within ˜3 - 5 y after funding (for scanner operations and database construction) is obtained.
An Accurate Temperature Correction Model for Thermocouple Hygrometers
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
NASA Astrophysics Data System (ADS)
Chen, Chun-Chi; Lin, Shih-Hao; Lin, Yi
2014-06-01
This paper proposes a time-domain CMOS smart temperature sensor featuring on-chip curvature correction and one-point calibration support for thermal management systems. Time-domain inverter-based temperature sensors, which exhibit the advantages of low power and low cost, have been proposed for on-chip thermal monitoring. However, the curvature of the thermal transfer curve is large, which substantially degrades accuracy as the temperature range increases. Another problem is that the inverter is sensitive to process variations, making it difficult for such sensors to achieve acceptable accuracy with one-point calibration. To overcome these two problems, a temperature-dependent oscillator with curvature correction is proposed to increase the linearity of the oscillatory width, thereby avoiding a costly off-chip second-order master curve fitting. For one-point calibration support, an adjustable-gain time amplifier was adopted to eliminate the effect of process variations, with the assistance of a calibration circuit. The proposed circuit occupied a small area of 0.073 mm2 and was fabricated in a TSMC CMOS 0.35-μm 2P4M digital process. The linearization of the oscillator and the cancellation of process variations enabled the sensor, which featured a fixed resolution of 0.049 °C/LSB, to achieve an optimal inaccuracy of -0.8 °C to 1.2 °C after one-point calibration of 12 test chips from -40 °C to 120 °C. The power consumption was 35 μW at a sample rate of 10 samples/s.
Wafa Chouaib; Peter V. Caldwell; Younes Alila
2018-01-01
This paper advances the physical understanding of the flow duration curve (FDC) regional variation. It provides a process-based analysis of the interaction between climate and landscape properties to explain disparities in FDC shapes. We used (i) long term measured flow and precipitation data over 73 catchments from the eastern US. (ii) We calibrated the...
NASA Astrophysics Data System (ADS)
Wright, David; Thyer, Mark; Westra, Seth
2015-04-01
Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostics tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically orientated diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression, with inherent assumptions on the data, and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points.
The findings of this study establish the feasibility and importance of including influential point detection diagnostics as a standard tool in hydrological model calibration. They provide the hydrologist with important information on whether model calibration is susceptible to a small number of highly influential data points. This enables the hydrologist to make a more informed decision of whether to (1) remove/retain the calibration data; (2) adjust the calibration strategy and/or hydrological model to reduce the susceptibility of model predictions to a small number of influential observations.
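The analytical class of diagnostics mentioned above can be illustrated with Cook's distance for an ordinary least-squares fit. This is a toy linear model, not the rating-curve or GR4J models from the study; the data are synthetic with one deliberately perturbed point.

```python
import numpy as np

# Sketch of Cook's distance, the analytical influence diagnostic: for each
# observation i, D_i combines its squared residual with its leverage h_ii.
def cooks_distance(X, y):
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T           # hat (projection) matrix
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # OLS coefficients
    resid = y - X @ beta                           # residuals
    s2 = resid @ resid / (n - p)                   # residual variance estimate
    h = np.diag(H)                                 # leverages h_ii
    return (resid ** 2 / (p * s2)) * h / (1 - h) ** 2

# Toy data on a straight line, with the last point perturbed upward
x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y[-1] += 10.0                                      # the influential point
X = np.column_stack([np.ones_like(x), x])          # design matrix [1, x]
D = cooks_distance(X, y)
```

Because the perturbed point sits at the end of the x-range it has both a large residual and high leverage, so its Cook's distance dominates, which is exactly the situation case-deletion diagnostics would also flag, at much higher computational cost.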
AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves
NASA Astrophysics Data System (ADS)
Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.
2017-02-01
ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
Implementation of straight and curved steel girder erection design tools construction : summary.
DOT National Transportation Integrated Search
2010-11-05
Project 0-5574, Curved Plate Girder Design for Safe and Economical Construction, resulted in the development of two design tools, UT Lift and UT Bridge. UT Lift is a spreadsheet-based program for analyzing steel girders during lifting while ...
The precision of a special purpose analog computer in clinical cardiac output determination.
Sullivan, F J; Mroz, E A; Miller, R E
1975-01-01
Three hundred dye-dilution curves taken during our first year of clinical experience with the Waters CO-4 cardiac output computer were analyzed to estimate the errors involved in its use. Provided that calibration is accurate and 5.0 mg of dye are injected for each curve, the percentage standard deviation of measurement using this computer is about 8.7%. Included in this are the errors inherent in the computer, errors due to baseline drift, errors in the injection of dye and actual variation of cardiac output over a series of successive determinations. The size of this error is comparable to that involved in manual calculation. The mean value of five successive curves will be within 10% of the real value in 99 cases out of 100. Advances in methodology and equipment are discussed which make calibration simpler and more accurate, and which should also improve the quality of computer determination. A list of suggestions is given to minimize the errors involved in the clinical use of this equipment. PMID:1089394
High Performance Liquid Chromatography of Vitamin A: A Quantitative Determination.
ERIC Educational Resources Information Center
Bohman, Ove; And Others
1982-01-01
Experimental procedures are provided for the quantitative determination of Vitamin A (retinol) in food products by analytical liquid chromatography. Standard addition and calibration curve extraction methods are outlined. (SK)
NASA Astrophysics Data System (ADS)
Williams, Ammon Ned
The primary objective of this research is to develop an applied technology and provide an assessment for remotely measuring and analyzing the real time or near real time concentrations of used nuclear fuel (UNF) elements in electrorefiners (ER). Here, Laser-Induced Breakdown Spectroscopy (LIBS) in UNF pyroprocessing facilities was investigated. LIBS is an elemental analysis method, which is based on the emission from plasma generated by focusing a laser beam into the medium. This technology has been reported to be applicable in solids, liquids (including molten metals), and gases for detecting elements of special nuclear materials. The advantages of applying the technology for pyroprocessing facilities are: (i) Rapid real-time elemental analysis; (ii) Direct detection of elements and impurities in the system with low limits of detection (LOD); and (iii) Little to no sample preparation is required. One important challenge to overcome is achieving reproducible spectral data over time while being able to accurately quantify fission products, rare earth elements, and actinides in the molten salt. Another important challenge is related to the accessibility of molten salt, which is heated in a heavily insulated, remotely operated furnace in a high radiation environment within an argon gas atmosphere. This dissertation aims to address these challenges and approaches in the following phases with their highlighted outcomes: 1. Aerosol-LIBS system design and aqueous testing: An aerosol-LIBS system was designed around a Collison nebulizer and tested using deionized water with Ce, Gd, and Nd concentrations from 100 ppm to 10,000 ppm. The average %RSD values between the sample repetitions were 4.4% and 3.8% for the Ce and Gd lines, respectively. The univariate calibration curve for Ce using the peak intensities of the Ce 418.660 nm line was recommended and had an R 2 value, LOD, and RMSECV of 0.994, 189 ppm, and 390 ppm, respectively.
The recommended Gd calibration curve was generated using the peak areas of the Gd 409.861 nm line and had an R2, LOD, and RMSECV of 0.992, 316 ppm, and 421 ppm, respectively. The partial least squares (PLS) calibration curves yielded similar results, with RMSECV of 406 ppm and 417 ppm for the Ce and Gd curves, respectively. 2. High temperature aerosol-LIBS system design and CeCl3 testing: The aerosol-LIBS system was transitioned to high temperature and used to measure Ce in molten LiCl-KCl salt within a glovebox environment. The concentration range studied was from 0.1 wt% to 5 wt% Ce. Normalization was necessary due to signal degradation over time; however, with normalization the %RSD values averaged 5% for the mid and upper concentrations studied. The best univariate calibration curve was generated using the peak areas of the Ce 418.660 nm line. The LOD for this line was 148 ppm with an RMSECV of 647 ppm. The PLS calibration curve was made using 7 latent variables (LV), resulting in an RMSECV of 622 ppm. The LOD value was below the expected rare earth concentration within the ER. 3. Aerosol-LIBS testing using UCl3: Samples containing UCl3 with concentrations ranging from 0.3 wt% to 5 wt% were measured. The spectral response in this range was linear. The best univariate calibration curves were generated using the peak areas of the U 367.01 nm line and had an R2 value of 0.9917. Here, the LOD was 647 ppm and the RMSECV was 2,290 ppm. The PLS model was substantially better, with an RMSECV of 1,110 ppm. The LOD found here is below the expected U concentrations in the ER. The successful completion of this study has demonstrated the feasibility of using an aerosol-LIBS analytical technique to measure rare earth elements and actinides in the pyroprocessing salt.
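The univariate figures of merit quoted above (calibration slope, LOD, RMSECV) follow standard definitions; a sketch using the common convention LOD = 3σ_blank/slope, with hypothetical concentrations, line intensities, and blank noise (not the dissertation's data):

```python
# Sketch of univariate calibration figures of merit for an emission line.
# Concentrations (ppm), intensities, and blank noise are hypothetical.
def fit_line(x, y):
    """Least-squares slope m and intercept b for y = m*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    m = sxy / sxx
    return m, my - m * mx

conc = [100, 500, 1000, 5000, 10000]        # ppm
intensity = [0.95, 5.1, 10.2, 49.5, 101.0]  # arbitrary units
m, b = fit_line(conc, intensity)

sigma_blank = 0.5                           # std. dev. of the blank signal
lod = 3 * sigma_blank / m                   # limit of detection, ppm

# Root-mean-square error of back-predicted concentrations (calibration RMSE;
# RMSECV would use cross-validated predictions instead)
pred = [(yi - b) / m for yi in intensity]
rmse = (sum((p - c) ** 2 for p, c in zip(pred, conc)) / len(conc)) ** 0.5
```

A steeper calibration slope or quieter blank directly lowers the LOD, which is why line selection (peak area vs. intensity, choice of emission line) matters so much in the results above.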
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roach, Dennis Patrick; Rackow, Kirk A.
The FAA's Airworthiness Assurance NDI Validation Center, in conjunction with the Commercial Aircraft Composite Repair Committee, developed a set of composite reference standards to be used in NDT equipment calibration for accomplishment of damage assessment and post-repair inspection of all commercial aircraft composites. In this program, a series of NDI tests on a matrix of composite aircraft structures and prototype reference standards were completed in order to minimize the number of standards needed to carry out composite inspections on aircraft. Two tasks, related to composite laminates and non-metallic composite honeycomb configurations, were addressed. A suite of 64 honeycomb panels, representing the bounding conditions of honeycomb construction on aircraft, was inspected using a wide array of NDI techniques. An analysis of the resulting data determined the variables that play a key role in setting up NDT equipment. This has resulted in a set of minimum honeycomb NDI reference standards that include these key variables. A sequence of subsequent tests determined that this minimum honeycomb reference standard set is able to fully support inspections over the full range of honeycomb construction scenarios found on commercial aircraft. In the solid composite laminate arena, G11 Phenolic was identified as a good generic solid laminate reference standard material. Testing determined matches in key velocity and acoustic impedance properties, as well as low attenuation relative to carbon laminates. Furthermore, resonance testing response curves from the G11 Phenolic NDI reference standard were very similar to the resonance response curves measured on the existing carbon and fiberglass laminates. NDI data shows that this material should work for both pulse-echo (velocity-based) and resonance (acoustic impedance-based) inspections.
Abend, M; Badie, C; Quintens, R; Kriehuber, R; Manning, G; Macaeva, E; Njima, M; Oskamp, D; Strunz, S; Moertl, S; Doucha-Senf, S; Dahlke, S; Menzel, J; Port, M
2016-02-01
The risk of a large-scale event leading to acute radiation exposure necessitates the development of high-throughput methods for providing rapid individual dose estimates. Our work addresses three goals, which align with the directive of the European Union's Realizing the European Network of Biodosimetry project (EU-RENB): 1. To examine the suitability of different gene expression platforms for biodosimetry purposes; 2. To perform this examination using blood samples collected from prostate cancer patients (in vivo) and from healthy donors (in vitro); and 3. To compare radiation-induced gene expression changes of the in vivo with in vitro blood samples. For the in vitro part of this study, EDTA-treated whole blood was irradiated immediately after venipuncture using single X-ray doses (1 Gy/min dose rate, 100 keV). Blood samples used to generate calibration curves as well as 10 coded (blinded) samples (0-4 Gy dose range) were incubated for 24 h in vitro, lysed and shipped on wet ice. For the in vivo part of the study PAXgene tubes were used and peripheral blood (2.5 ml) was collected from prostate cancer patients before and 24 h after the first fractionated 2 Gy dose of localized radiotherapy to the pelvis [linear accelerator (LINAC), 580 MU/min, exposure 1-1.5 min]. Assays were run in each laboratory according to locally established protocols using either microarray platforms (2 laboratories) or qRT-PCR (2 laboratories). Report times on dose estimates were documented. The mean absolute differences of estimated doses relative to the true doses (Gy) were calculated. Doses were also merged into binary categories reflecting aspects of clinical/diagnostic relevance. For the in vitro part of the study, the earliest report time on dose estimates was 7 h for qRT-PCR and 35 h for microarrays. Methodological variance of gene expression measurements (CV ≤10% for technical replicates) and interindividual variance (≤twofold for all genes) were low.
Dose estimates based on one gene, ferredoxin reductase (FDXR), using qRT-PCR were as precise as dose estimates based on multiple genes using microarrays, but the precision decreased at doses ≥2 Gy. Binary dose categories comprising, for example, unexposed compared with exposed samples, could be completely discriminated with most of our methods. Exposed prostate cancer blood samples (n = 4) could be completely discriminated from unexposed blood samples (n = 4, P < 0.03, two-sided Fisher's exact test) without individual controls. This could be performed by introducing an in vitro-to-in vivo correction factor of FDXR, which varied among the laboratories. After that the in vitro-constructed calibration curves could be used for dose estimation of the in vivo exposed prostate cancer blood samples within an accuracy window of ±0.5 Gy in both contributing qRT-PCR laboratories. In conclusion, early and precise dose estimates can be performed, in particular at doses ≤2 Gy in vitro. Blood samples of prostate cancer patients exposed to 0.09-0.017 Gy could be completely discriminated from pre-exposure blood samples with the doses successfully estimated using adjusted in vitro-constructed calibration curves.
Nuclear moisture-density evaluation.
DOT National Transportation Integrated Search
1964-11-01
This report constitutes the results of a series of calibration curves prepared by comparing the Troxler Nuclear Density - Moisture Gauge count ratios with conventional densities as obtained by the Soiltest Volumeter and the sand displacement methods....
Towards Robust Self-Calibration for Handheld 3d Line Laser Scanning
NASA Astrophysics Data System (ADS)
Bleier, M.; Nüchter, A.
2017-11-01
This paper studies self-calibration of a structured light system, which reconstructs 3D information using video from a static consumer camera and a handheld cross line laser projector. Intersections between the individual laser curves and geometric constraints on the relative position of the laser planes are exploited to achieve dense 3D reconstruction. This is possible without any prior knowledge of the movement of the projector. However, inaccurately extracted laser lines introduce noise in the detected intersection positions and therefore distort the reconstruction result. Furthermore, when scanning objects with specular reflections, such as glossy painted or metallic surfaces, the reflections are often extracted from the camera image as erroneous laser curves. In this paper we investigate how robust estimates of the parameters of the laser planes can be obtained despite noisy detections.
Calibration and validation of a general infiltration model
NASA Astrophysics Data System (ADS)
Mishra, Surendra Kumar; Ranjan Kumar, Shashi; Singh, Vijay P.
1999-08-01
A general infiltration model proposed by Singh and Yu (1990) was calibrated and validated using a split sampling approach for 191 sets of infiltration data observed in the states of Minnesota and Georgia in the USA. Of the five model parameters, fc (the final infiltration rate), So (the available storage space) and exponent n were found to be more predictable than the other two parameters: m (exponent) and a (proportionality factor). A critical examination of the general model revealed that it is related to the Soil Conservation Service (1956) curve number (SCS-CN) method and its parameter So is equivalent to the potential maximum retention of the SCS-CN method and is, in turn, found to be a function of soil sorptivity and hydraulic conductivity. The general model was found to describe infiltration rate with time varying curve number.
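The SCS-CN relations that the abstract links the general model to can be written down directly; a minimal sketch in SI units (mm), with the usual initial abstraction Ia = 0.2S (the curve number and storm depth below are hypothetical):

```python
# Sketch of the SCS-CN method referenced above: potential maximum retention S
# from the curve number CN, and direct runoff Q from rainfall P (all in mm).
def retention_from_cn(cn):
    """Potential maximum retention S (mm) for curve number CN (0 < CN <= 100)."""
    return 25400.0 / cn - 254.0

def scs_runoff(p, s):
    """Direct runoff Q (mm) for rainfall P (mm): Q = (P - Ia)^2 / (P - Ia + S),
    with initial abstraction Ia = 0.2*S and Q = 0 when P <= Ia."""
    ia = 0.2 * s
    if p <= ia:
        return 0.0
    return (p - ia) ** 2 / (p - ia + s)

s = retention_from_cn(75.0)   # a hypothetical curve number
q = scs_runoff(100.0, s)      # runoff from a 100 mm storm
```

In the abstract's terms, the parameter So of the general infiltration model plays the role of S here, so a calibrated So can be translated into an equivalent curve number via CN = 25400/(S + 254).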
Prototype Stilbene Neutron Collar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, M. K.; Shumaker, D.; Snyderman, N.
2016-10-26
A neutron collar using stilbene organic scintillator cells for fast neutron counting is described for the assay of fresh low enriched uranium (LEU) fuel assemblies. The prototype stilbene collar has a form factor similar to standard He-3 based collars and uses an AmLi interrogation neutron source. This report describes the simulation of list mode neutron correlation data on various fuel assemblies including some with neutron absorbers (burnable Gd poisons). Calibration curves (doubles vs 235U linear mass density) are presented for both thermal and fast (with Cd lining) modes of operation. It is shown that the stilbene collar meets or exceeds the current capabilities of He-3 based neutron collars. A self-consistent assay methodology, uniquely suited to the stilbene collar, using triples is described which complements traditional assay based on doubles calibration curves.
Measuring Breath Alcohol Concentrations with an FTIR Spectrometer
NASA Astrophysics Data System (ADS)
Kneisel, Adam; Bellamy, Michael K.
2003-12-01
An FTIR spectrometer equipped with a long-path gas cell can be used to measure breath alcohol concentrations in an instrumental analysis laboratory course. Students use aqueous ethanol solutions to make a calibration curve that relates absorbance signals of breath samples with blood alcohol concentrations. Students use their calibration curve to determine the time needed for their calculated blood alcohol levels to drop below the legal limit following use of a commercial mouthwash. They also calculate their blood alcohol levels immediately after chewing bread. The main goal of the experiment is to provide the students with an interesting laboratory exercise that teaches them about infrared spectrometers. While the results are meant to be only semiquantitative, they have compared well with results from other published studies. A reference is included that describes how to fabricate a long-path gas cell.
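The calibration-curve step the students perform reduces to an ordinary least-squares line through the standards, which is then inverted to read unknowns. A minimal sketch (the concentrations and absorbances below are hypothetical, not data from the article):

```python
import numpy as np

# Hypothetical ethanol standards: concentration vs. measured IR absorbance.
conc = np.array([0.00, 0.02, 0.04, 0.08, 0.16])      # blood-alcohol equivalent units
absorb = np.array([0.001, 0.021, 0.039, 0.082, 0.158])

# Beer-Lambert behavior: absorbance is linear in concentration, A = m*c + b.
slope, intercept = np.polyfit(conc, absorb, 1)

def conc_from_absorbance(A):
    """Invert the calibration line to estimate concentration from a sample's absorbance."""
    return (A - intercept) / slope
```

An unknown breath sample's absorbance is then mapped back through `conc_from_absorbance` to track the decay of apparent alcohol level over time.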
Improvement of immunoassay detection system by using alternating current magnetic susceptibility
NASA Astrophysics Data System (ADS)
Kawabata, R.; Mizoguchi, T.; Kandori, A.
2016-03-01
A major goal with this research was to develop a low-cost and highly sensitive immunoassay detection system by using alternating current (AC) magnetic susceptibility. We fabricated an improved prototype of our previously developed immunoassay detection system and evaluated its performance. The prototype continuously moved sample containers by using a magnetically shielded brushless motor, which passes between two anisotropic magneto resistance (AMR) sensors. These sensors detected the magnetic signal in the direction where each sample container passed them. We used the differential signal obtained from each AMR sensor's output to improve the signal-to-noise ratio (SNR) of the magnetic signal measurement. Biotin-conjugated polymer beads with avidin-coated magnetic particles were prepared to examine the calibration curve, which represents the relation between AC magnetic susceptibility change and polymer-bead concentration. For the calibration curve measurement, we, respectively, measured the magnetic signal caused by the magnetic particles by using each AMR sensor installed near the upper or lower part in the lateral position of the passing sample containers. As a result, the SNR of the prototype was 4.5 times better than that of our previous system. Moreover, the data obtained from each AMR sensor installed near the upper part in the lateral position of the passing sample containers exhibited an accurate calibration curve that represented good correlation between AC magnetic susceptibility change and polymer-bead concentration. The conclusion drawn from these findings is that our improved immunoassay detection system will enable a low-cost and highly sensitive immunoassay.
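The differential-signal idea, subtracting the two AMR sensor outputs so that common-mode interference cancels while the bead signal seen by only one sensor survives, can be sketched with synthetic data (all waveforms and amplitudes below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
interference = 0.5 * np.sin(2 * np.pi * 50 * t)      # common-mode pickup seen by both sensors
bead = np.exp(-((t - 0.5) / 0.02) ** 2)              # magnetic particles passing sensor 1 only
s1 = bead + interference + 0.01 * rng.standard_normal(t.size)
s2 = interference + 0.01 * rng.standard_normal(t.size)

# Differencing cancels the shared interference; only the bead pulse and
# uncorrelated sensor noise (grown by sqrt(2)) remain.
diff = s1 - s2
```

The SNR gain reported for the prototype comes from exactly this cancellation of signals common to both sensors.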
Annually resolved atmospheric radiocarbon records reconstructed from tree-rings
NASA Astrophysics Data System (ADS)
Wacker, Lukas; Bleicher, Niels; Büntgen, Ulf; Friedrich, Michael; Friedrich, Ronny; Diego Galván, Juan; Hajdas, Irka; Jull, Anthony John; Kromer, Bernd; Miyake, Fusa; Nievergelt, Daniel; Reinig, Frederick; Sookdeo, Adam; Synal, Hans-Arno; Tegel, Willy; Wesphal, Torsten
2017-04-01
The IntCal13 calibration curve is mainly based on data measured by decay counting with a resolution of 10 years. Thus high frequency changes like the 11-year solar cycles or cosmic ray events [1] are not visible, or at least not to their full extent. New accelerator mass spectrometry (AMS) systems today are capable of measuring at least as precisely as decay counters [2], with the advantage of using 1000 times less material. The low amount of material required enables more efficient sample preparation. Thus, an annually resolved re-measurement of the tree-ring based calibration curve can now be envisioned. We will demonstrate with several examples the multitude of benefits resulting from annually resolved radiocarbon records from tree-rings. They will not only allow for more precise radiocarbon dating but also contain valuable new astrophysical information. The examples shown will additionally indicate that it can be critical to compare AMS measurements with a calibration curve that is mainly based on decay counting. We often see small offsets between the two measurement techniques, while the reason is yet unknown. [1] Miyake F, Nagaya K, Masuda K, Nakamura T. 2012. A signature of cosmic-ray increase in AD 774-775 from tree rings in Japan. Nature 486(7402):240-2. [2] Wacker L, Bonani G, Friedrich M, Hajdas I, Kromer B, Nemec M, Ruff M, Suter M, Synal H-A, Vockenhuber C. 2010. MICADAS: Routine and high-precision radiocarbon dating. Radiocarbon 52(2):252-62.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, Y; Lin, Y; Tsai, C
Purpose: The objective of this study is to develop a quantitative calibration between image quality indexes and iodine concentration for dual-energy (DE) contrast-enhanced digital mammography (CEDM) techniques, to further serve as an aid to diagnosis. Methods: A custom-made acrylic phantom with dimensions of 24×30 cm², simulating breast thicknesses from 2 to 6 cm, was used in the calibration. The phantom contained a 4×4 matrix of holes, 3 mm deep and 15 mm in diameter, for filling with contrast agent at area densities ranging from 0.1 to 10 mg/cm². All image acquisitions were performed on a full-field digital mammography system (Senographe Essential, GE) with dual-energy acquisition. Mean pixel value (MPV) and contrast-to-noise ratio (CNR) were used to evaluate the relationship between image quality indexes and iodine concentration. Iodine maps and CNR maps could further be constructed by applying these calibration curves pixel by pixel in MATLAB. The minimum detectable iodine concentration could also be calculated from the visibility threshold of CNR=5 according to the Rose model. Results: In the DE subtraction images, MPV increased linearly with iodine concentration for all phantom thicknesses surveyed (R² between 0.989 and 0.992). Lesions with increased iodine uptake could thus be enhanced in the color-encoded iodine maps, and the mean iodine concentration could be obtained through ROI measurements. CNR likewise varied linearly with iodine concentration (R² between 0.983 and 0.990). Minimum iodine area densities of 1.45, 1.73, 1.80, 1.73, and 1.72 mg/cm² for phantom thicknesses of 2, 3, 4, 5, and 6 cm were calculated based on the Rose visibility criterion.
Conclusion: The quantitative calibration between image quality indexes and iodine concentration may further serve to assist in analyzing contrast enhancement for patients undergoing dual-energy CEDM procedures.
Luthi, François; Deriaz, Olivier; Vuistiner, Philippe; Burrus, Cyrille; Hilfiker, Roger
2014-01-01
Workers with persistent disabilities after orthopaedic trauma may need occupational rehabilitation. Despite various risk profiles for non-return-to-work (non-RTW), there is no available predictive model. Moreover, injured workers may have various origins (immigrant workers), which may either affect their return to work or their eligibility for research purposes. The aim of this study was to develop and validate a predictive model that estimates the likelihood of non-RTW after occupational rehabilitation using predictors which do not rely on the worker's background. Prospective cohort study (3177 participants, native (51%) and immigrant workers (49%)) with two samples: a) Development sample with patients from 2004 to 2007 with Full and Reduced Models, b) External validation of the Reduced Model with patients from 2008 to March 2010. We collected patients' data and biopsychosocial complexity with an observer rated interview (INTERMED). Non-RTW was assessed two years after discharge from the rehabilitation. Discrimination was assessed by the area under the receiver operating curve (AUC) and calibration was evaluated with a calibration plot. The model was reduced with random forests. At 2 years, the non-RTW status was known for 2462 patients (77.5% of the total sample). The prevalence of non-RTW was 50%. The full model (36 items) and the reduced model (19 items) had acceptable discrimination performance (AUC 0.75, 95% CI 0.72 to 0.78 and 0.74, 95% CI 0.71 to 0.76, respectively) and good calibration. For the validation model, the discrimination performance was acceptable (AUC 0.73; 95% CI 0.70 to 0.77) and calibration was also adequate. Non-RTW may be predicted with a simple model constructed with variables independent of the patient's education and language fluency. This model is useful for all kinds of trauma in order to adjust for case mix and it is applicable to vulnerable populations like immigrant workers.
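Discrimination here is summarized by the AUC. As a reminder of what that statistic means, a minimal sketch computing AUC as the Mann-Whitney probability that a random positive case outranks a random negative one (an illustrative implementation, not the authors' code):

```python
def auc(pos_scores, neg_scores):
    """AUC = P(random positive scores above random negative); ties count half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 0.75, as reported for the full model, means a randomly chosen non-RTW patient receives a higher predicted risk than a randomly chosen RTW patient 75% of the time.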
NASA Astrophysics Data System (ADS)
Qi, Pan; Shao, Wenbin; Liao, Shusheng
2016-02-01
To support quantitative defect detection on heat transfer tubes in nuclear power plants (NPPs), two lines of work are carried out, with cracks as the main research object. (1) Optimization of calibration tube production. First, ASME, RSEM, and homemade crack calibration tubes are applied to quantify the defect depths on other designed crack test tubes, and the calibration tube that yields the more accurate quantitative results is identified. Based on that, a weight analysis of the factors influencing crack depth quantification, such as crack orientation, length, and volume, can be undertaken to optimize the manufacturing technology of calibration tubes. (2) Optimization of crack depth quantification. A neural network model with multiple calibration curves, adopted to refine the measured depths of natural cracks in in-service tubes, shows a preliminary ability to improve quantitative accuracy.
gPhoton: Time-tagged GALEX photon events analysis tools
NASA Astrophysics Data System (ADS)
Million, Chase C.; Fleming, S. W.; Shiao, B.; Loyd, P.; Seibert, M.; Smith, M.
2016-03-01
Written in Python, gPhoton calibrates and sky-projects the ~1.1 trillion ultraviolet photon events detected by the microchannel plates on the Galaxy Evolution Explorer Spacecraft (GALEX), archives these events in a publicly accessible database at the Mikulski Archive for Space Telescopes (MAST), and provides tools for working with the database to extract scientific results, particularly over short time domains. The software includes a re-implementation of core functionality of the GALEX mission calibration pipeline to produce photon list files from raw spacecraft data as well as a suite of command line tools to generate calibrated light curves, images, and movies from the MAST database.
An Accurate Projector Calibration Method Based on Polynomial Distortion Representation
Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua
2015-01-01
In structured light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by curve fitting. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method avoids most of the disadvantages of traditional methods and achieves a higher accuracy. The proposed method is also practically applicable to evaluating the geometric optical performance of other optical projection systems. PMID:26492247
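For a radial distortion model, a polynomial representation of this kind is linear in its coefficients, so fitting it is ordinary least squares. A sketch with synthetic data (the model order and coefficient values are made up for illustration and are not the paper's representation):

```python
import numpy as np

# Synthetic ideal radii and distorted radii from an assumed radial model
# r_d = r * (1 + k1*r^2 + k2*r^4); the "true" coefficients are hypothetical.
r = np.linspace(0.1, 1.0, 20)
true_k1, true_k2 = -0.12, 0.03
r_d = r * (1 + true_k1 * r**2 + true_k2 * r**4)

# The model is linear in (k1, k2): (r_d/r - 1) = k1*r^2 + k2*r^4,
# so the coefficients come from a single least-squares solve.
A = np.column_stack([r**2, r**4])
k1, k2 = np.linalg.lstsq(A, r_d / r - 1.0, rcond=None)[0]
```

With noiseless synthetic data the solve recovers the generating coefficients exactly; with real photodiode measurements the residual of this fit is what quantifies the remaining distortion.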
Wiele, Stephen M.; Torizzo, Margaret
2003-01-01
A method was developed to construct stage-discharge rating curves for the Colorado River in Grand Canyon, Arizona, using two stage-discharge pairs and a stage-normalized rating curve. Stage-discharge rating curves formulated with the stage-normalized curve method are compared to (1) stage-discharge rating curves for six temporary stage gages and two streamflow-gaging stations developed by combining stage records with modeled unsteady flow; (2) stage-discharge rating curves developed from stage records and discharge measurements at three streamflow-gaging stations; and (3) stages surveyed at known discharges at the Northern Arizona Sand Bar Studies sites. The stage-normalized curve method shows good agreement with field data when the discharges used in the construction of the rating curves are at least 200 cubic meters per second apart. Predictions of stage using the stage-normalized curve method are also compared to predictions of stage from a steady-flow model.
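Two stage-discharge pairs determine a power-law rating curve once a cease-to-flow stage h0 is assumed, which is the arithmetic underlying the two-pair construction described above. A minimal sketch (the function name, the h0-known assumption, and the example numbers are illustrative, not the report's method):

```python
import math

def rating_from_two_points(h1, Q1, h2, Q2, h0=0.0):
    """Fit Q = a*(h - h0)**b through two stage-discharge pairs, h0 assumed known."""
    b = math.log(Q2 / Q1) / math.log((h2 - h0) / (h1 - h0))
    a = Q1 / (h1 - h0) ** b
    return a, b
```

The stage-normalized curve in the report plays the role of pinning down the curve's shape so that the two pairs only have to fix its scale.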
Chao, Shih-Wei; Li, Arvin Huang-Te; Chao, Sheng D
2009-09-01
Intermolecular interaction energy data for the methane dimer have been calculated at a spectroscopic accuracy and employed to construct an ab initio potential energy surface (PES) for molecular dynamics (MD) simulations of fluid methane properties. The full potential curves of the methane dimer at 12 symmetric conformations were calculated by the supermolecule counterpoise-corrected second-order Møller-Plesset (MP2) perturbation theory. Single-point coupled cluster with single, double, and perturbative triple excitations [CCSD(T)] calculations were also carried out to calibrate the MP2 potentials. We employed Pople's medium size basis sets [up to 6-311++G(3df, 3pd)] and Dunning's correlation consistent basis sets (cc-pVXZ and aug-cc-pVXZ, X = D, T, Q). For each conformer, the intermolecular carbon-carbon separation was sampled in steps of 0.1 Å over a range of 3-9 Å, resulting in a total of 732 configurations calculated. The MP2 binding curves display significant anisotropy with respect to the relative orientations of the dimer. The potential curves at the complete basis set (CBS) limit were estimated using well-established analytical extrapolation schemes. A 4-site potential model with sites located at the hydrogen atoms was used to fit the ab initio potential data. This model stems from a hydrogen-hydrogen repulsion mechanism to explain the stability of the dimer structure. MD simulations using the ab initio PES show quantitative agreements on both the atom-wise radial distribution functions and the self-diffusion coefficients over a wide range of experimental conditions. Copyright 2008 Wiley Periodicals, Inc.
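The "well-established analytical extrapolation schemes" for the CBS limit include the standard two-point 1/X³ formula for correlation energies (Helgaker-style), where X is the cardinal number of the correlation-consistent basis. A minimal sketch (the abstract does not state which scheme the authors used, so this is one common choice):

```python
def cbs_two_point(E_X, X, E_Y, Y):
    """Two-point 1/X^3 extrapolation of correlation energies from basis-set
    cardinal numbers X < Y (e.g. X=3 for aug-cc-pVTZ, Y=4 for aug-cc-pVQZ).
    Assumes E(X) = E_CBS + A / X**3 and eliminates A."""
    return (Y**3 * E_Y - X**3 * E_X) / (Y**3 - X**3)
```

If the assumed E(X) = E_CBS + A/X³ form holds exactly, the formula returns E_CBS regardless of A, which is what the test below checks.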
Development of a sensitive monitor for hydrazine
NASA Technical Reports Server (NTRS)
Eiceman, Gary A.; Limero, Thomas; James, John T.
1991-01-01
The development of hand-held, ambient-temperature instruments that utilize ion mobility spectrometry (IMS) in the detection of hydrazine and monomethylhydrazine is reviewed. A development effort to eliminate ammonia interference through altering the ionization chemistry, based on adding 5-nonanone as dopant in the ionization region of the IMS, is presented. Calibration of this instrument conducted before and after STS-37 revealed no more than a 5 percent difference between calibration curves, without any appreciable loss of equipment function.
Automated Blazar Light Curves Using Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Spencer James
2017-07-27
This presentation describes a problem and methodology pertaining to automated blazar light curves. Namely, studying optical variability patterns of blazars requires the construction of light curves, and to generate those light curves the data must be filtered before processing to ensure quality.
Baena-Díez, José Miguel; Subirana, Isaac; Ramos, Rafael; Gómez de la Cámara, Agustín; Elosua, Roberto; Vila, Joan; Marín-Ibáñez, Alejandro; Guembe, María Jesús; Rigo, Fernando; Tormo-Díaz, María José; Moreno-Iribas, Conchi; Cabré, Joan Josep; Segura, Antonio; Lapetra, José; Quesada, Miquel; Medrano, María José; González-Diego, Paulino; Frontera, Guillem; Gavrila, Diana; Ardanaz, Eva; Basora, Josep; García, José María; García-Lareo, Manel; Gutiérrez-Fuentes, José Antonio; Mayoral, Eduardo; Sala, Joan; Dégano, Irene R; Francès, Albert; Castell, Conxa; Grau, María; Marrugat, Jaume
2018-04-01
To assess the validity of the original low-risk SCORE function, without and with high-density lipoprotein cholesterol, and of SCORE calibrated to the Spanish population. Pooled analysis with individual data from 12 Spanish population-based cohort studies. We included 30 919 individuals aged 40 to 64 years with no history of cardiovascular disease at baseline, who were followed up for 10 years for the causes of death included in the SCORE project. The validity of the risk functions was analyzed with the area under the ROC curve (discrimination) and the Hosmer-Lemeshow test (calibration). Follow-up comprised 286 105 person-years. Ten-year cardiovascular mortality was 0.6%. The ratio of estimated to observed cases was 9.1, 6.5, and 9.1 in men and 3.3, 1.3, and 1.9 in women for the original low-risk SCORE function without and with high-density lipoprotein cholesterol and the calibrated SCORE, respectively; differences between predicted and observed mortality were statistically significant by the Hosmer-Lemeshow test (P < .001 in both sexes and with all functions). The area under the ROC curve with the original SCORE was 0.68 in men and 0.69 in women. All versions of the SCORE function available in Spain significantly overestimate the cardiovascular mortality observed in the Spanish population. Despite the acceptable discrimination capacity, prediction of the number of fatal cardiovascular events (calibration) was significantly inaccurate. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
Calibration of a Fusion Experiment to Investigate the Nuclear Caloric Curve
NASA Astrophysics Data System (ADS)
Keeler, Ashleigh
2017-09-01
In order to investigate the nuclear equation of state (EoS), the relation between two thermodynamic quantities can be examined. The correlation between the temperature and excitation energy of a nucleus, also known as the caloric curve, has been previously observed in peripheral heavy-ion collisions to exhibit a dependence on the neutron-proton asymmetry. To further investigate this result, fusion reactions (78Kr + 12C and 86Kr + 12C) were measured; the beam energy was varied in the range 15-35 MeV/u in order to vary the excitation energy. The light charged particles (LCPs) evaporated from the compound nucleus were measured in the Si-CsI(Tl)/PD detector array FAUST (Forward Array Using Silicon Technology). The LCPs carry information about the temperature. The calibration of FAUST will be described in this presentation. The silicon detectors have resistive surfaces in perpendicular directions to allow position measurement of the LCPs to better than 200 μm. The resistive nature requires a position-dependent correction to the energy calibration to take full advantage of the energy resolution. The momentum is calculated from the energy of these particles and their position on the detectors. A parameterized formula based on the Bethe-Bloch equation was used to straighten the particle identification (PID) lines measured with the dE-E technique. The energy calibration of the CsI detectors is based on the silicon detector energy calibration and the PID. A precision slotted mask enables the relative positions of the detectors to be determined. DOE Grant: DE-FG02-93ER40773 and REU Grant: PHY - 1659847.
The effect of tropospheric fluctuations on the accuracy of water vapor radiometry
NASA Technical Reports Server (NTRS)
Wilcox, J. Z.
1992-01-01
Line-of-sight path delay calibration accuracies of 1 mm are needed to improve both angular and Doppler tracking capabilities. Fluctuations in the refractivity of tropospheric water vapor limit the present accuracies to about 1 nrad for the angular position and to a delay rate of 3×10⁻¹³ sec/sec over a 100-sec time interval for Doppler tracking. This article describes progress in evaluating the limitations of the technique of water vapor radiometry at the 1-mm level. The two effects evaluated here are: (1) errors arising from tip-curve calibration of WVRs in the presence of tropospheric fluctuations and (2) errors due to the use of nonzero beamwidths for water vapor radiometer (WVR) horns. The error caused by tropospheric water vapor fluctuations during instrument calibration from a single tip curve is 0.26 percent in the estimated gain for a tip-curve duration of several minutes or less. This gain error causes a 3-mm bias and a 1-mm scale factor error in the estimated path delay at a 10-deg elevation per 1 g/cm² of zenith water vapor column density present in the troposphere during the astrometric observation. The error caused by WVR beam averaging of tropospheric fluctuations is 3 mm at a 10-deg elevation per 1 g/cm² of zenith water vapor (and is proportionally higher for higher water vapor content) for current WVR beamwidths (full width at half maximum of approximately 6 deg). This is a stochastic error (which cannot be calibrated) and can be reduced to about half of its instantaneous value by time averaging the radio signal over several minutes. The results presented here suggest two improvements to WVR design: first, the gain of the instruments should be stabilized to 4 parts in 10⁴ over a calibration period lasting 5 hours, and second, the WVR antenna beamwidth should be reduced to about 0.2 deg. This will reduce the error induced by water vapor fluctuations in the estimated path delays to less than 1 mm for the elevation range from zenith to 6 deg for most observation weather conditions.
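A tip-curve calibration of the kind analyzed above fits measured sky brightness against airmass; in the optically thin limit the relation is a straight line whose slope carries the atmospheric contribution. A sketch with hypothetical numbers (the temperatures below are made up; real tip curves involve opacity corrections omitted here):

```python
import numpy as np

# Hypothetical tip-curve samples: antenna temperature vs. airmass A = 1/sin(elevation).
elev_deg = np.array([90.0, 60.0, 42.0, 30.0, 20.0])
airmass = 1.0 / np.sin(np.radians(elev_deg))
T_meas = np.array([12.0, 13.4, 15.2, 17.8, 23.0])   # K, illustrative values

# In the optically thin limit sky brightness grows roughly linearly with airmass,
# so a straight-line fit separates the atmospheric slope from the receiver offset;
# the fitted parameters are what pin down the instrument gain.
slope, offset = np.polyfit(airmass, T_meas, 1)
```

Tropospheric fluctuations during the few minutes of the tip scatter these points off the line, which is the origin of the 0.26 percent gain error quoted above.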
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Y. John
2016-06-15
Purpose: To obtain an improved, precise gamma efficiency calibration curve for an HPGe (high-purity germanium) detector with a new comprehensive approach. Methods: Both radioactive sources and Monte Carlo simulation (CYLTRAN) are used to determine the HPGe gamma efficiency for the energy range 0-8 MeV. The HPGe is a GMX coaxial 280 cm³ N-type 70% gamma detector. Using the Momentum Achromat Recoil Spectrometer (MARS) at the K500 superconducting cyclotron of Texas A&M University, the radioactive nucleus ²⁴Al was produced and separated. This nucleus undergoes positron decay followed by gamma transitions up to 8 MeV from ²⁴Mg excited states, which were used for the HPGe efficiency calibration. Results: With the ²⁴Al gamma energy spectrum up to 8 MeV, the efficiency for the 7.07 MeV γ ray at 4.9 cm from the radioactive source ²⁴Al was obtained at a value of 0.194(4)%, after carefully considering factors such as positron annihilation, peak summing, beta detector efficiency, and internal conversion. The Monte Carlo simulation (CYLTRAN) gave a value of 0.189%, in agreement with the experimental measurement. Applying the procedure at different energy points, a precise efficiency calibration curve of the HPGe detector up to 7.07 MeV at 4.9 cm from the source ²⁴Al was obtained. Using the same data analysis procedure, the efficiency for the 7.07 MeV gamma ray at 15.1 cm from the source was 0.0387(6)%, while the MC simulation gave a similar value of 0.0395%. This discrepancy led us to assign an uncertainty of 3% to the efficiency at 15.1 cm up to 7.07 MeV. The MC calculations also reproduced the intensity of the observed single- and double-escape peaks, provided that the effects of positron annihilation in flight were incorporated.
Conclusion: The precision-improved gamma efficiency calibration curve provides more accurate radiation detection and dose calculation for cancer radiotherapy treatment.
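A common way to turn discrete efficiency points like the 0.194% value at 7.07 MeV into a smooth curve is a polynomial fit in log-log space, a standard practice for HPGe detectors. The sketch below is not the authors' fit; every point except the 7.07 MeV one is hypothetical:

```python
import numpy as np

# Efficiency points: energy (MeV) vs. full-energy-peak efficiency.
# Only the last point (0.194% at 7.07 MeV) comes from the abstract; the rest are made up.
E = np.array([0.5, 1.0, 2.0, 4.0, 7.07])
eff = np.array([0.60, 0.42, 0.30, 0.22, 0.194]) / 100.0   # percent -> fraction

# Standard HPGe practice: fit a low-order polynomial to log(efficiency) vs. log(energy).
coeffs = np.polyfit(np.log(E), np.log(eff), 2)

def efficiency(E_MeV):
    """Interpolated full-energy-peak efficiency from the log-log polynomial fit."""
    return np.exp(np.polyval(coeffs, np.log(E_MeV)))
```

Evaluating the fitted curve at arbitrary gamma energies is what lets a handful of calibration points serve an entire spectrum.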
Mathematics of quantitative kinetic PCR and the application of standard curves.
Rutledge, R G; Côté, C
2003-08-15
Fluorescent monitoring of DNA amplification is the basis of real-time PCR, from which target DNA concentration can be determined from the fractional cycle at which a threshold amount of amplicon DNA is produced. Absolute quantification can be achieved using a standard curve constructed by amplifying known amounts of target DNA. In this study, the mathematics of quantitative PCR are examined in detail, from which several fundamental aspects of the threshold method and the application of standard curves are illustrated. The construction of five replicate standard curves for two pairs of nested primers was used to examine the reproducibility and degree of quantitative variation using SYBR Green I fluorescence. Based upon this analysis, the application of a single, well-constructed standard curve could provide an estimated precision of ±6-21%, depending on the number of cycles required to reach threshold. A simplified method for absolute quantification is also proposed, in which the quantitative scale is determined by the DNA mass at threshold.
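The standard-curve arithmetic underlying this kind of analysis is a straight-line fit of threshold cycle against the log of starting quantity, whose slope also yields the amplification efficiency. A minimal sketch (the dilution-series numbers are hypothetical, not the study's data):

```python
import numpy as np

# Hypothetical dilution series: starting copy numbers N0 and measured threshold cycles Ct.
N0 = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
Ct = np.array([14.1, 17.4, 20.8, 24.1, 27.5])

# Standard curve: Ct is linear in log10(N0); slope near -3.32 means perfect doubling.
slope, intercept = np.polyfit(np.log10(N0), Ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0        # 1.0 would be one full doubling per cycle

def quantify(ct_sample):
    """Read an unknown's starting copy number off the standard curve."""
    return 10 ** ((ct_sample - intercept) / slope)
```

Because quantification inverts this line, a small uncertainty in Ct translates to a multiplicative error in N0, which is why the precision quoted above worsens as more cycles are needed to reach threshold.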
VizieR Online Data Catalog: SNe II light curves & spectra from the CfA (Hicken+, 2017)
NASA Astrophysics Data System (ADS)
Hicken, M.; Friedman, A. S.; Blondin, S.; Challis, P.; Berlind, P.; Calkins, M.; Esquerdo, G.; Matheson, T.; Modjaz, M.; Rest, A.; Kirshner, R. P.
2018-01-01
Since all of the optical photometry reported here was produced as part of the CfA3 and CfA4 processing campaigns, see Hicken+ (2009, J/ApJ/700/331) and Hicken+ (2012, J/ApJS/200/12) for greater details on the instruments, observations, photometry pipeline, calibration, and host-galaxy subtraction used to create the CfA SN II light curves. (8 data files).
Revised landsat-5 thematic mapper radiometric calibration
Chander, G.; Markham, B.L.; Barsi, J.A.
2007-01-01
Effective April 2, 2007, the radiometric calibration of Landsat-5 (L5) Thematic Mapper (TM) data that are processed and distributed by the U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS) will be updated. The lifetime gain model that was implemented on May 5, 2003, for the reflective bands (1-5, 7) will be replaced by a new lifetime radiometric-calibration curve that is derived from the instrument's response to pseudoinvariant desert sites and from cross calibration with the Landsat-7 (L7) Enhanced TM Plus (ETM+). Although this calibration update applies to all archived and future L5 TM data, the principal improvements in the calibration are for the data acquired during the first eight years of the mission (1984-1991), where the changes in the instrument-gain values are as much as 15%. The radiometric scaling coefficients for bands 1 and 2 for approximately the first eight years of the mission have also been changed. Users will need to apply these new coefficients to convert the calibrated data product digital numbers to radiance. The scaling coefficients for the other bands have not changed.
Quantitative X-ray diffraction and fluorescence analysis of paint pigment systems : final report.
DOT National Transportation Integrated Search
1978-01-01
This study attempted to correlate measured X-ray intensities with concentrations of each member of paint pigment systems, thereby establishing calibration curves for the quantitative analyses of such systems.
SU-D-213-06: Dosimetry of Modulated Electron Radiation Therapy Using Fricke Gel Dosimeter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gawad, M Abdel; Elgohary, M; Hassaan, M
Purpose: Modulated electron radiation therapy (MERT) has been proposed as an effective modality for treatment of superficial targets. MERT utilizes multiple beams of different energies which are intensity modulated to deliver an optimized dose distribution. Energy-independent dosimeters are thus needed for quantitative evaluations of MERT dose distributions and measurements of absolute doses delivered to patients. Thus, in the current work we study the feasibility of Fricke gel dosimeters in MERT dosimetry. Methods: Batches of radiation-sensitive Fricke gel were fabricated and poured into polymethyl methacrylate cuvettes. The samples were irradiated in a solid water phantom and a thick layer of bolus was used as a buildup. A spectrophotometer system was used for measuring the color changes (the absorbance) before and after irradiation, from which the net absorbance was calculated. We constructed calibration curves to relate the measured absorbance to absorbed dose for all available electron energies. Dosimetric measurements were performed for mixed electron beam delivery, and we also performed measurements for segmented field delivery with the dosimeter placed at the junction of two adjacent electron beams of different energies. Dose measured by our gel dosimetry is compared to that calculated by our treatment planning system. We also initiated a Monte Carlo study to evaluate the water equivalence of our dosimeters. MCBEAM and MCSIM codes were used for treatment head simulation and phantom dose calculation. PDDs and profiles were calculated for electron beams incident on a phantom designed with a 1 cm slab of Fricke gel. Results: The calibration curves showed no observed energy dependence for all studied electron beam energies. Good agreement was obtained between the calculated dose and that obtained by gel dosimetry. Monte Carlo results illustrated the tissue equivalency of our gel dosimeters.
Conclusion: Fricke Gel dosimeters represent a good option for the dosimetric quality assurance prior to MERT application.
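The net-absorbance-to-dose step described in the Methods reduces to a linear calibration and its inversion. All numbers below are invented for illustration; they are not the study's measured values:

```python
import numpy as np

def fit_dose_response(doses_gy, net_absorbance):
    """Linear fit of net absorbance (post- minus pre-irradiation reading)
    against delivered dose; returns (slope, intercept)."""
    return np.polyfit(doses_gy, net_absorbance, 1)

def dose_from_absorbance(delta_a, slope, intercept):
    """Invert the calibration line to report dose for a measured sample."""
    return (delta_a - intercept) / slope

# Invented calibration points: dose in Gy vs. net absorbance.
doses = np.array([0.0, 2.0, 4.0, 8.0, 12.0])
net_abs = 0.021 * doses + 0.002
slope, intercept = fit_dose_response(doses, net_abs)
dose = dose_from_absorbance(0.086, slope, intercept)
```

The abstract's claim of energy independence would show up here as all energies sharing one (slope, intercept) pair within uncertainty.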
Thermal-depth matching in dynamic scene based on affine projection and feature registration
NASA Astrophysics Data System (ADS)
Wang, Hongyu; Jia, Tong; Wu, Chengdong; Li, Yongqiang
2018-03-01
This paper studies the construction of a 3D temperature distribution reconstruction system based on depth and thermal infrared information. A traditional calibration method cannot be used directly, because depth and thermal infrared cameras are not sensitive to a color calibration board. Therefore, we design a calibration board suited to both modalities to complete the calibration of the depth and thermal infrared cameras. A local feature descriptor for thermal and depth images is also proposed. A belief propagation matching algorithm is investigated, based on spatial affine transformation matching and local feature matching. The 3D temperature distribution model is built by matching the 3D point cloud with the 2D thermal infrared information. Experimental results show that the method can accurately construct the 3D temperature distribution model and has strong robustness.
Calibration and analysis of genome-based models for microbial ecology.
Louca, Stilianos; Doebeli, Michael
2015-10-16
Microbial ecosystem modeling is complicated by the large number of unknown parameters and the lack of appropriate calibration tools. Here we present a novel computational framework for modeling microbial ecosystems, which combines genome-based model construction with statistical analysis and calibration to experimental data. Using this framework, we examined the dynamics of a community of Escherichia coli strains that emerged in laboratory evolution experiments, during which an ancestral strain diversified into two coexisting ecotypes. We constructed a microbial community model comprising the ancestral and the evolved strains, which we calibrated using separate monoculture experiments. Simulations reproduced the successional dynamics in the evolution experiments, and pathway activation patterns observed in microarray transcript profiles. Our approach yielded detailed insights into the metabolic processes that drove bacterial diversification, involving acetate cross-feeding and competition for organic carbon and oxygen. Our framework provides a missing link towards a data-driven mechanistic microbial ecology.
Airado-Rodríguez, Diego; Høy, Martin; Skaret, Josefine; Wold, Jens Petter
2014-05-01
The potential of multispectral imaging of autofluorescence to map sensory flavour properties and fluorophore concentrations in cod caviar paste has been investigated. Cod caviar paste was used as a case product and was stored over time under different headspace gas compositions and light exposure conditions to obtain a relevant span in lipid oxidation and sensory properties. Samples were divided into two sets, calibration and test sets, with 16 and 7 samples, respectively. A third set of samples was prepared with induced gradients in lipid oxidation and sensory properties by light exposure of certain parts of the sample surface. Front-face fluorescence emission images were obtained for an excitation wavelength of 382 nm at 11 different channels ranging from 400 to 700 nm. The analysis of the obtained sets of images was divided into two parts. First, in an effort to compress and extract relevant information, multivariate curve resolution was applied on the calibration set, and three spectral components and their relative concentrations in each sample were obtained. The obtained profiles were employed to estimate the concentrations of each component in the images of the heterogeneous samples, giving chemical images of the distribution of fluorescent oxidation products, protoporphyrin IX and photoprotoporphyrin. Second, regression models for sensory attributes related to lipid oxidation were constructed based on the spectra of homogeneous samples from the calibration set. These models were successfully validated with the test set. The models were then applied for pixel-wise estimation of sensory flavours in the heterogeneous images, giving rise to sensory images. As far as we know, this is the first time that sensory images of odour and flavour have been obtained based on multispectral imaging. Copyright © 2014 Elsevier B.V. All rights reserved.
Surface family with a common involute asymptotic curve
NASA Astrophysics Data System (ADS)
Bayram, Ergin; Bilici, Mustafa
2016-03-01
We construct a surface family possessing an involute of a given curve as an asymptotic curve. We express necessary and sufficient conditions for that curve with the above property. We also present natural results for such ruled surfaces. Finally, we illustrate the method with some examples, e.g. circles and helices as given curves.
The VEPP-2000 electron-positron collider: First experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berkaev, D. E., E-mail: D.E.Berkaev@inp.nsk.su; Shwartz, D. B.; Shatunov, P. Yu.
2011-08-15
In 2007, at the Institute of Nuclear Physics (Novosibirsk), the construction of the VEPP-2000 electron-positron collider was completed. The first electron beam was injected into the accelerator structure with turned-off solenoids of the final focus. This mode was used to tune all subsystems of the facility and to train the vacuum chamber using synchrotron radiation at electron currents of up to 150 mA. The VEPP-2000 structure with small beta functions and partially turned-on solenoids was used for the first testing of the 'round beams' scheme at an energy of 508 MeV. Beam-beam effects were studied in strong-weak and strong-strong modes. Measurements of the beam sizes in both cases showed a dependence corresponding to model predictions for round colliding beams. Using a modernized SND (spherical neutral detector), the first energy calibration of the VEPP-2000 collider was performed by measuring the excitation curve of the phi-meson resonance; the phi-meson mass is known with high accuracy from previous experiments at VEPP-2M. In October 2009, a KMD-3 (cryogenic magnetic detector) was installed at the VEPP-2000 facility, and the physics program with both the SND and KMD-3 particle detectors was started in the energy range of 1-1.9 GeV. This first experimental season was completed in summer 2010 with precision energy calibration by resonant depolarization.
Yoon, Donhee; Lee, Dongkun; Lee, Jong-Hyeon; Cha, Sangwon; Oh, Han Bin
2015-01-30
Quantifying polymers by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS) with a conventional crystalline matrix generally suffers from poor sample-to-sample or shot-to-shot reproducibility. An ionic-liquid matrix has been demonstrated to mitigate these reproducibility issues by providing a homogeneous sample surface, which is useful for quantifying polymers. In the present study, we evaluated the use of an ionic-liquid matrix, i.e., 1-methylimidazolium α-cyano-4-hydroxycinnamate (1-MeIm-CHCA), to quantify polyhexamethylene guanidine (PHMG) samples, which pose a critical health hazard when inhaled in the form of droplets. MALDI-TOF mass spectra were acquired for PHMG oligomers using a variety of ionic-liquid matrices including 1-MeIm-CHCA. Calibration curves were constructed by plotting the sum of the PHMG oligomer peak areas versus PHMG sample concentration with a variety of peptide internal standards. Compared with the conventional crystalline matrix, the 1-MeIm-CHCA ionic-liquid matrix had much better reproducibility (lower standard deviations). Furthermore, by using an internal peptide standard, good linear calibration plots could be obtained over a range of PHMG concentrations of at least 4 orders of magnitude. This study successfully demonstrated that PHMG samples can be quantitatively characterized by MALDI-TOFMS with an ionic-liquid matrix and an internal standard. Copyright © 2014 John Wiley & Sons, Ltd.
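The internal-standard normalization described here, i.e. summed analyte peak areas divided by the internal-standard area and fitted against concentration on log-log axes, can be sketched as follows. The data are synthetic stand-ins, not the study's measurements:

```python
import numpy as np

def calibration_loglog(concs, analyte_areas, istd_areas):
    """Fit log10(area ratio) vs. log10(concentration).

    Dividing by an internal-standard peak area suppresses shot-to-shot
    intensity variation before the fit; a slope near 1 on log-log axes
    indicates a linear response over the whole range.
    """
    ratio = np.asarray(analyte_areas) / np.asarray(istd_areas)
    slope, intercept = np.polyfit(np.log10(concs), np.log10(ratio), 1)
    return slope, intercept

# Synthetic data spanning 4 orders of magnitude (arbitrary units).
concs = np.array([1e0, 1e1, 1e2, 1e3, 1e4])
istd = np.array([5000.0] * 5)
analyte = 0.002 * concs * istd  # perfectly linear response
slope, intercept = calibration_loglog(concs, analyte, istd)
```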
Quantification of febuxostat polymorphs using powder X-ray diffraction technique.
Qiu, Jing-bo; Li, Gang; Sheng, Yue; Zhu, Mu-rong
2015-03-25
Febuxostat is a pharmaceutical compound with more than 20 polymorphs, of which form A is most widely used and usually exists in a mixed polymorphic form with form G. In the present study, a quantification method for polymorphic forms A and G of febuxostat (FEB) has been developed using powder X-ray diffraction (PXRD). Prior to development of the quantification method, pure polymorphic forms A and G were characterized. A continuous scan with a scan rate of 3°/min over an angular range of 3-40° 2θ is applied for the construction of the calibration curve using the characteristic peaks of form A at 12.78° 2θ (I/I₀ = 100%) and form G at 11.72° 2θ (I/I₀ = 100%). The linear regression analysis data for the calibration plots show a good linear relationship, with R² = 0.9985 with respect to peak area in the concentration range 10-60 wt.%. The method is validated for precision, recovery and ruggedness. The limits of detection and quantitation are 1.5% and 4.6%, respectively. The obtained results prove that the method is repeatable, sensitive and accurate. The proposed PXRD method can be applied for the quantitative analysis of mixtures of febuxostat polymorphs (forms A and G). Copyright © 2015 Elsevier B.V. All rights reserved.
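The calibration-and-limits workflow above (linear fit of peak area vs. weight fraction, then ICH-style LOD = 3.3·s/m and LOQ = 10·s/m from the residual standard deviation s and slope m) can be sketched with invented data points:

```python
import numpy as np

def pxrd_calibration(wt_pct, peak_areas):
    """Linear calibration of characteristic-peak area vs. weight fraction,
    with ICH-style limits: LOD = 3.3*s/m and LOQ = 10*s/m, where s is the
    residual standard deviation and m the fitted slope."""
    m, b = np.polyfit(wt_pct, peak_areas, 1)
    resid = peak_areas - (m * np.asarray(wt_pct) + b)
    s = np.std(resid, ddof=2)  # two fitted parameters consumed
    return m, b, 3.3 * s / m, 10.0 * s / m

# Invented calibration points (wt.% vs. integrated peak area).
wt = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
areas = np.array([102.0, 198.0, 305.0, 401.0, 497.0, 603.0])
m, b, lod, loq = pxrd_calibration(wt, areas)
```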
Suhr, Anna Catharina; Vogeser, Michael; Grimm, Stefanie H
2016-05-30
For reliable quantitative analysis of endogenous analytes in complex biological samples by isotope dilution LC-MS/MS, the creation of appropriate calibrators is a challenge, since analyte-free authentic material is in general not available. Thus, surrogate matrices are often used to prepare calibrators and controls. However, currently employed validation protocols do not include specific experiments to verify the suitability of a surrogate matrix calibration for quantification of authentic matrix samples. The aim of the study was the development of a novel validation experiment to test whether surrogate matrix based calibrators enable correct quantification of authentic matrix samples. The key element of the novel validation experiment is the inversion of nonlabelled analytes and their stable isotope labelled (SIL) counterparts with respect to their functions, i.e. the SIL compound is the analyte and the nonlabelled substance is employed as internal standard. As a consequence, both surrogate and authentic matrix are analyte-free regarding SIL analytes, which allows a comparison of both matrices. We called this approach the Isotope Inversion Experiment. As a figure of merit we defined the accuracy of inverse quality controls in authentic matrix quantified by means of a surrogate matrix calibration curve. As a proof-of-concept application, a LC-MS/MS assay addressing six corticosteroids (cortisol, cortisone, corticosterone, 11-deoxycortisol, 11-deoxycorticosterone, and 17-OH-progesterone) was chosen. The integration of the Isotope Inversion Experiment in the validation protocol for the steroid assay was successfully realized. The accuracy results of the inverse quality controls were very satisfactory overall. As a consequence, the suitability of a surrogate matrix calibration for quantification of the targeted steroids in human serum as authentic matrix could be successfully demonstrated.
The Isotope Inversion Experiment fills a gap in the validation process for LC-MS/MS assays quantifying endogenous analytes. We consider it a valuable and convenient tool to evaluate the correct quantification of authentic matrix samples based on a calibration curve in surrogate matrix. Copyright © 2016 Elsevier B.V. All rights reserved.
On the relationship between NMR-derived amide order parameters and protein backbone entropy changes
Sharp, Kim A.; O’Brien, Evan; Kasinath, Vignesh; Wand, A. Joshua
2015-01-01
Molecular dynamics simulations are used to analyze the relationship between NMR-derived squared generalized order parameters of amide NH groups and backbone entropy. Amide order parameters (O2NH) are largely determined by the secondary structure and average values appear unrelated to the overall flexibility of the protein. However, analysis of the more flexible subset (O2NH < 0.8) shows that these report both on the local flexibility of the protein and on a different component of the conformational entropy than that reported by the side chain methyl axis order parameters, O2axis. A calibration curve for backbone entropy vs. O2NH is developed which accounts for both correlations between amide group motions of different residues, and correlations between backbone and side chain motions. This calibration curve can be used with experimental values of O2NH changes obtained by NMR relaxation measurements to extract backbone entropy changes, e.g. upon ligand binding. In conjunction with our previous calibration for side chain entropy derived from measured O2axis values this provides a prescription for determination of the total protein conformational entropy changes from NMR relaxation measurements. PMID:25739366
On the relationship between NMR-derived amide order parameters and protein backbone entropy changes.
Sharp, Kim A; O'Brien, Evan; Kasinath, Vignesh; Wand, A Joshua
2015-05-01
Molecular dynamics simulations are used to analyze the relationship between NMR-derived squared generalized order parameters of amide NH groups and backbone entropy. Amide order parameters (O(2)NH) are largely determined by the secondary structure and average values appear unrelated to the overall flexibility of the protein. However, analysis of the more flexible subset (O(2)NH < 0.8) shows that these report both on the local flexibility of the protein and on a different component of the conformational entropy than that reported by the side chain methyl axis order parameters, O(2)axis. A calibration curve for backbone entropy vs. O(2)NH is developed, which accounts for both correlations between amide group motions of different residues, and correlations between backbone and side chain motions. This calibration curve can be used with experimental values of O(2)NH changes obtained by NMR relaxation measurements to extract backbone entropy changes, for example, upon ligand binding. In conjunction with our previous calibration for side chain entropy derived from measured O(2)axis values this provides a prescription for determination of the total protein conformational entropy changes from NMR relaxation measurements. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Zafar, Sufi; Lu, Minhua; Jagtiani, Ashish
2017-01-01
Field effect transistors (FET) have been widely used as transducers in electrochemical sensors for over 40 years. In this report, a FET transducer is compared with the recently proposed bipolar junction transistor (BJT) transducer. Measurements are performed on two chloride electrochemical sensors that are identical in all details except for the transducer device type. Comparative measurements show that the transducer choice significantly impacts the electrochemical sensor characteristics. Signal to noise ratio is 20 to 2 times greater for the BJT sensor. Sensitivity is also enhanced: BJT sensing signal changes by 10 times per pCl, whereas the FET signal changes by 8 or less times. Also, sensor calibration curves are impacted by the transducer choice. Unlike a FET sensor, the calibration curve of the BJT sensor is independent of applied voltages. Hence, a BJT sensor can make quantitative sensing measurements with minimal calibration requirements, an important characteristic for mobile sensing applications. As a demonstration for mobile applications, these BJT sensors are further investigated by measuring chloride levels in artificial human sweat for potential cystic fibrosis diagnostic use. In summary, the BJT device is demonstrated to be a superior transducer in comparison to a FET in an electrochemical sensor.
The role of a microDiamond detector in the dosimetry of proton pencil beams.
Gomà, Carles; Marinelli, Marco; Safai, Sairos; Verona-Rinati, Gianluca; Würfel, Jan
2016-03-01
In this work, the performance of a microDiamond detector in a scanned proton beam is studied and its potential role in the dosimetric characterization of proton pencil beams is assessed. The linearity of the detector response with absorbed dose and the dependence on dose-rate were tested. The depth-dose curve and the lateral dose profiles of a proton pencil beam were measured and compared to reference data. The feasibility of calibrating the beam monitor chamber with a microDiamond detector was also studied. It was found that the detector reading is linear with the absorbed dose to water (down to a few cGy) and the detector response is independent of both the dose-rate (up to a few Gy/s) and the proton beam energy (within the whole clinically relevant energy range). The detector showed good performance in depth-dose curve and lateral dose profile measurements, and it might even be used to calibrate the beam monitor chambers, provided it is cross-calibrated against a reference ionization chamber. In conclusion, the microDiamond detector proved capable of performing an accurate dosimetric characterization of proton pencil beams. Copyright © 2015. Published by Elsevier GmbH.
Lanfear, David E; Levy, Wayne C; Stehlik, Josef; Estep, Jerry D; Rogers, Joseph G; Shah, Keyur B; Boyle, Andrew J; Chuang, Joyce; Farrar, David J; Starling, Randall C
2017-05-01
Timing of left ventricular assist device (LVAD) implantation in advanced heart failure patients not on inotropes is unclear. Relevant prediction models exist (SHFM [Seattle Heart Failure Model] and HMRS [HeartMate II Risk Score]), but use in this group is not established. ROADMAP (Risk Assessment and Comparative Effectiveness of Left Ventricular Assist Device and Medical Management in Ambulatory Heart Failure Patients) is a prospective, multicenter, nonrandomized study of 200 advanced heart failure patients not on inotropes who met indications for LVAD implantation, comparing the effectiveness of HeartMate II support versus optimal medical management. We compared SHFM-predicted versus observed survival (overall survival and LVAD-free survival) in the optimal medical management arm (n=103) and HMRS-predicted versus observed survival in all LVAD patients (n=111) using Cox modeling, receiver-operator characteristic (ROC) curves, and calibration plots. In the optimal medical management cohort, the SHFM was a significant predictor of survival (hazard ratio=2.98; P <0.001; ROC area under the curve=0.71; P <0.001) but not LVAD-free survival (hazard ratio=1.41; P =0.097; ROC area under the curve=0.56; P =0.314). SHFM showed adequate calibration for survival but overestimated LVAD-free survival. In the LVAD cohort, the HMRS had marginal discrimination at 3 (Cox P =0.23; ROC area under the curve=0.71; P =0.026) and 12 months (Cox P =0.036; ROC area under the curve=0.62; P =0.122), but calibration was poor, underestimating survival across time and risk subgroups. In non-inotrope-dependent advanced heart failure patients receiving optimal medical management, the SHFM was predictive of overall survival but underestimated the risk of clinical worsening and LVAD implantation. Among LVAD patients, the HMRS had marginal discrimination and underestimated survival post-LVAD implantation. URL: http://www.clinicaltrials.gov. Unique identifier: NCT01452802. 
© 2017 American Heart Association, Inc.
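Discrimination statistics like the ROC areas quoted in this abstract reduce to the rank-sum identity AUC = P(score of an event case > score of a non-event case), ties counted as one half. A self-contained sketch with toy risk scores, not study data:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank-sum identity:
    the fraction of (positive, negative) pairs ranked correctly,
    counting ties as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy risk scores: higher score should track the event (label 1).
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,    0,   1,   0]
auc = roc_auc(scores, labels)
```

An AUC of 0.5 is chance-level ranking; values around 0.7, as reported for the SHFM and HMRS here, indicate moderate discrimination.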
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granville, DA; Sahoo, N; Sawakuchi, GO
Purpose: To investigate the use of optically stimulated luminescence (OSL) detectors (OSLDs) for measurements of dose-averaged linear energy transfer (LET) in patient-specific proton therapy treatment fields. Methods: We used Al2O3:C OSLDs made from the same material as commercially available nanoDot OSLDs from Landauer, Inc. We calibrated two parameters of the OSL signal as functions of LET in therapeutic proton beams: the ratio of the ultraviolet and blue emission intensities (UV/blue ratio) and the OSL curve shape. These calibration curves were created by irradiating OSLDs in passively scattered beams of known LET (0.96 to 3.91 keV/µm). The LET values were determined using a validated Monte Carlo model of the beamline. We then irradiated new OSLDs with the prescription dose (16 to 74 cGy absorbed dose to water) at the center of the spread-out Bragg peak (SOBP) of four patient-specific treatment fields. From readouts of these OSLDs, we determined both the UV/blue ratio and OSL curve shape parameters. Combining these parameters with the calibration curves, we were able to measure LET using the OSLDs. The measurements were compared to the theoretical LET values obtained from Monte Carlo simulations of the patient-specific treatment fields. Results: Using the UV/blue ratio parameter, we were able to measure LET within 3.8%, 6.2%, 5.6% and 8.6% of the Monte Carlo value for each of the patient fields. Similarly, using the OSL curve shape parameter, LET measurements agreed within 0.5%, 11.0%, 2.5% and 7.6% for each of the four fields. Conclusion: We have demonstrated a method to verify LET in patient-specific proton therapy treatment fields using OSLDs. The possibility of enhancing the biological effectiveness of proton therapy treatment plans by including LET in the optimization has been previously shown. The LET verification method we have demonstrated will be useful in the quality assurance of such LET-optimized treatment plans.
DA Granville received financial support from the Natural Sciences and Engineering Research Council of Canada.
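Once a monotonic calibration curve (measured OSL parameter vs. known LET) is established, reading off the LET of an unknown field is an inverse interpolation. The calibration values below are invented for illustration, not the study's data:

```python
import numpy as np

# Invented calibration: known LET values (keV/um) vs. a monotonic measured
# OSL parameter such as the UV/blue ratio.
let_cal = np.array([0.96, 1.50, 2.20, 3.10, 3.91])
uv_blue_cal = np.array([0.110, 0.135, 0.168, 0.205, 0.240])

def let_from_parameter(measured, param_cal, let_cal):
    """Invert a monotonically increasing calibration curve by piecewise
    linear interpolation of the measured parameter."""
    return np.interp(measured, param_cal, let_cal)

let_estimate = let_from_parameter(0.168, uv_blue_cal, let_cal)
```

With two independent parameters (UV/blue ratio and curve shape), two such inversions give cross-checking LET estimates, as in the abstract's per-field comparisons.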
A distance-independent calibration of the luminosity of type Ia supernovae and the Hubble constant
NASA Technical Reports Server (NTRS)
Leibundgut, Bruno; Pinto, Philip A.
1992-01-01
The absolute magnitude of SNe Ia at maximum is calibrated here using radioactive decay models for the light curve and a minimum of assumptions. The absolute magnitude parameter space is studied using explosion models and a range of rise times, and absolute B magnitudes at maximum are used to derive a range of H0 and the distance to the Virgo Cluster from SNe Ia. Rigorous limits for H0 of 45 and 105 km/s/Mpc are derived.
The Calibration of the Slotted Section for Precision Microwave Measurements
1952-03-01
A. Calibration Curve for Lossless Structures; B. The Correction Relations for Dissipative Structures; C. The Effect of an Error in the Variable Short. A discussion of probe effects and a method of correction for large insertion depths are given in the literature. This report is concerned solely with error source (c). The presence of the slot in the slotted section introduces effects: (a) the slot loads the waveguide
NASA Astrophysics Data System (ADS)
Sarabandi, Kamal; Oh, Yisok; Ulaby, Fawwaz T.
1992-10-01
Three aspects of a polarimetric active radar calibrator (PARC) are treated: (1) experimental measurements of the magnitudes and phases of the scattering-matrix elements of a pair of PARCs operating at 1.25 and 5.3 GHz; (2) the design, construction, and performance evaluation of a PARC; and (3) the extension of the single-target-calibration technique (STCT) to a PARC. STCT has heretofore been limited to the use of reciprocal passive calibration devices, such as spheres and trihedral corner reflectors.
NASA Astrophysics Data System (ADS)
Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai
2016-05-01
The commonly employed calibration methods for laboratory-made spectrometers have several disadvantages, including poor calibration when the number of characteristic spectral peaks is low. Therefore, we present a wavelength calibration method using the relative k-space distribution obtained with a low-coherence interferometer. The proposed method utilizes an interferogram with a perfect sinusoidal pattern in k-space for calibration. Zero-crossing detection extracts the k-space distribution of a spectrometer from the interferogram in the wavelength domain, and a calibration lamp provides information about absolute wavenumbers. To assign wavenumbers, wavelength-to-k-space conversion is required for the characteristic spectrum of the calibration lamp with the extracted k-space distribution. Then, the wavelength calibration is completed by inverse conversion of the k-space into the wavelength domain. The calibration performance of the proposed method was demonstrated under two experimental conditions, with four and eight characteristic spectral peaks. The proposed method yielded reliable calibration results in both cases, whereas the conventional method of third-order polynomial curve fitting failed to determine wavelengths in the case of four characteristic peaks. Moreover, for optical coherence tomography imaging, the proposed method could improve axial resolution due to stronger suppression of sidelobes in the point spread function than the conventional method. We believe that our findings can improve not only wavelength calibration accuracy but also resolution for optical coherence tomography.
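The zero-crossing step can be sketched as follows: for an interferogram that is sinusoidal in k, successive zero crossings are equally spaced in wavenumber, so their (sub-sample) pixel positions trace out the spectrometer's pixel-to-k mapping. The pixel-to-k mapping below is an invented example, not a measured one:

```python
import numpy as np

def zero_crossings(signal):
    """Sub-sample positions where the signal crosses zero, found by
    locating sign changes and linearly interpolating between the two
    bracketing samples."""
    s = np.asarray(signal, dtype=float)
    idx = np.where(np.sign(s[:-1]) * np.sign(s[1:]) < 0)[0]
    frac = s[idx] / (s[idx] - s[idx + 1])  # fraction of the way to s[idx+1]
    return idx + frac

# Interferogram sampled non-uniformly in pixel space but sinusoidal in k.
n = np.arange(1000)
k = 1.0 + 0.0005 * n + 1e-7 * n**2   # invented pixel-to-k mapping
fringe = np.cos(2 * np.pi * 40.0 * (k - 1.0))
crossings = zero_crossings(fringe)
```

Because the crossings are uniformly spaced in k, fitting k against crossing position recovers the relative k-space distribution that the calibration lamp then anchors to absolute wavenumbers.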
NASA Technical Reports Server (NTRS)
Usry, J. W.; Whitlock, C. H.
1981-01-01
Management of water resources such as a reservoir requires using analytical models which describe such parameters as the suspended sediment field. To select or develop an appropriate model requires making many measurements to describe the distribution of this parameter in the water column. One potential method for making those measurements expeditiously is to measure light transmission or turbidity and relate that parameter to total suspended solids concentrations. An instrument which may be used for this purpose was calibrated by generating curves of transmission measurements plotted against measured values of total suspended solids concentrations and beam attenuation coefficients. Results of these experiments indicate that field measurements made with this instrument using curves generated in this study should correlate with total suspended solids concentrations and beam attenuation coefficients in the water column within 20 percent.
Magnetic nanoparticle temperature estimation.
Weaver, John B; Rauwerdink, Adam M; Hansen, Eric W
2009-05-01
The authors present a method of measuring the temperature of magnetic nanoparticles that can be adapted to provide in vivo temperature maps. Many of the minimally invasive therapies that promise to reduce health care costs and improve patient outcomes heat tissue to very specific temperatures to be effective. Measurements are required because physiological cooling, primarily blood flow, makes the temperature difficult to predict a priori. The ratio of the fifth and third harmonics of the magnetization generated by magnetic nanoparticles in a sinusoidal field is used to generate a calibration curve and to subsequently estimate the temperature. The calibration curve is obtained by varying the amplitude of the sinusoidal field. The temperature can then be estimated from any subsequent measurement of the ratio. The accuracy was 0.3 K between 20 and 50 degrees C using the current apparatus and half-second measurements. The method is independent of nanoparticle concentration and nanoparticle size distribution.
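Extracting the fifth-to-third harmonic ratio from a measured magnetization signal is a small FFT exercise. Here a saturating tanh response to a sinusoidal drive stands in for the temperature-dependent Langevin response of real nanoparticles (an assumption for illustration only):

```python
import numpy as np

def harmonic_ratio(signal, n_periods):
    """Amplitude ratio of the 5th to the 3rd harmonic of a periodic signal,
    from an FFT taken over an integer number of drive periods (so the
    fundamental falls exactly on bin n_periods)."""
    spec = np.abs(np.fft.rfft(signal))
    return spec[5 * n_periods] / spec[3 * n_periods]

# Toy magnetization: tanh saturation of a sinusoidal drive produces only
# odd harmonics, mimicking nanoparticle magnetization in a sinusoidal field.
n_periods, samples_per_period = 16, 256
t = np.arange(n_periods * samples_per_period)
drive = np.sin(2 * np.pi * t / samples_per_period)
magnetization = np.tanh(2.0 * drive)
ratio = harmonic_ratio(magnetization, n_periods)
```

In the actual method, this ratio is mapped to temperature through the amplitude-swept calibration curve described in the abstract.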
Chandra Observations of SN 1987A: The Soft X-Ray Light Curve Revisited
NASA Technical Reports Server (NTRS)
Helder, E. A.; Broos, P. S.; Dewey, D.; Dwek, E.; McCray, R.; Park, S.; Racusin, J. L.; Zhekov, S. A.; Burrows, D. N.
2013-01-01
We report on the present stage of SN 1987A as observed by the Chandra X-Ray Observatory. We reanalyze published Chandra observations and add three more epochs of Chandra data to get a consistent picture of the evolution of the X-ray fluxes in several energy bands. We discuss the implications of several calibration issues for Chandra data. Using the most recent Chandra calibration files, we find that the 0.5-2.0 keV band fluxes of SN 1987A have increased by approximately 6 × 10^-13 erg s^-1 cm^-2 per year since 2009. This is in contrast with our previous result that the 0.5-2.0 keV light curve showed a sudden flattening in 2009. Based on our new analysis, we conclude that the forward shock is still in full interaction with the equatorial ring.
NASA Technical Reports Server (NTRS)
Moses, J. Daniel
1989-01-01
Three improvements in photographic x-ray imaging techniques for solar astronomy are presented. The testing and calibration of a new film processor were conducted; the resulting product will allow photometric development of sounding rocket flight film immediately upon recovery at the missile range. Two fine grained photographic films were calibrated and flight tested to provide alternative detector choices when the need for high resolution is greater than the need for high sensitivity. An analysis technique used to obtain the characteristic curve directly from photographs of UV solar spectra was applied to the analysis of soft x-ray photographic images. The resulting procedure provides a more complete and straightforward determination of the parameters describing the x-ray characteristic curve than previous techniques. These improvements fall into the category of refinements rather than revolutions, indicating the fundamental suitability of the photographic process for x-ray imaging in solar astronomy.
Generalized Bezout's Theorem and its applications in coding theory
NASA Technical Reports Server (NTRS)
Berg, Gene A.; Feng, Gui-Liang; Rao, T. R. N.
1996-01-01
This paper presents a generalized Bezout theorem which can be used to determine a tighter lower bound of the number of distinct points of intersection of two or more curves for a large class of plane curves. A new approach to determine a lower bound on the minimum distance (and also the generalized Hamming weights) for algebraic-geometric codes defined from a class of plane curves is introduced, based on the generalized Bezout theorem. Examples of more efficient linear codes are constructed using the generalized Bezout theorem and the new approach. For d = 4, the linear codes constructed by the new construction are better than or equal to the known linear codes. For d greater than 5, these new codes are better than the known codes. The Klein code over GF(2^3) is also constructed.
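The classical Bezout bound the paper tightens, that two plane curves of degrees d1 and d2 with no common component meet in at most d1·d2 points, can be checked by brute force over a small finite field. The curves and field below are illustrative choices, not examples from the paper:

```python
# Brute-force intersection count over GF(7) for two affine plane curves.
P = 7  # field size (prime)

def f(x, y):
    # Degree-3 curve: y^2 - x^3 - x = 0
    return (y * y - x**3 - x) % P

def g(x, y):
    # Degree-1 curve (a line): y - x = 0
    return (y - x) % P

# Enumerate all affine points of GF(7)^2 lying on both curves
common = [(x, y) for x in range(P) for y in range(P)
          if f(x, y) == 0 and g(x, y) == 0]
# Bezout bound: at most deg(f) * deg(g) = 3 * 1 = 3 intersection points
```

Substituting y = x into f gives x(x² - x + 1) ≡ 0 (mod 7), which has exactly three roots here, so this pair attains the bound.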
Activities and Updates at the State Time and Frequency Standard of Russia
2009-11-01
…differentially calibrated relative to a portable receiver as part of a calibration campaign arranged by the BIPM. A TWSTFT station is under construction in… with GNSS techniques, a TWSTFT station is under construction right now in Mendeleevo. The closest main goal is to arrange a time link to PTB and NICT… via the IS-4 satellite at a 1-ns accuracy level and improve considerably our time link to UTC. The next possible place for a TWSTFT station is…
Response of TLD-100 in mixed fields of photons and electrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawless, Michael J.; Junell, Stephanie; Hammer, Cliff
Purpose: Thermoluminescent dosimeters (TLDs) are routinely used for dosimetric measurements of high energy photon and electron fields. However, TLD response in combined fields of photon and electron beam qualities has not been characterized. This work investigates the response of TLD-100 (LiF:Mg,Ti) to sequential irradiation by high-energy photon and electron beam qualities. Methods: TLDs were irradiated to a known dose by a linear accelerator with a 6 MV photon beam, a 6 MeV electron beam, and a NIST-traceable {sup 60}Co beam. TLDs were also irradiated in a mixed field of the 6 MeV electron beam and the 6 MV photon beam. The average TLD response per unit dose of the TLDs for each linac beam quality was normalized to the average response per unit dose of the TLDs irradiated by the {sup 60}Co beam. Irradiations were performed in water and in a Virtual Water™ phantom. The 6 MV photon beam and 6 MeV electron beam were used to create dose calibration curves relating TLD response to absorbed dose to water, which were applied to the TLDs irradiated in the mixed field. Results: TLD relative response per unit dose in the mixed field was less sensitive than the relative response in the photon field and more sensitive than the relative response in the electron field. Application of the photon dose calibration curve to the TLDs irradiated in a mixed field resulted in an underestimation of the delivered dose, while application of the electron dose calibration curve resulted in an overestimation of the dose. Conclusions: The relative response of TLD-100 in mixed fields fell between the relative response in the photon-only and electron-only fields. TLD-100 dosimetry of mixed fields must account for this intermediate response to minimize the estimation errors associated with calibration factors obtained from a single beam quality.
Response of TLD-100 in mixed fields of photons and electrons.
Lawless, Michael J; Junell, Stephanie; Hammer, Cliff; DeWerd, Larry A
2013-01-01
Thermoluminescent dosimeters (TLDs) are routinely used for dosimetric measurements of high energy photon and electron fields. However, TLD response in combined fields of photon and electron beam qualities has not been characterized. This work investigates the response of TLD-100 (LiF:Mg,Ti) to sequential irradiation by high-energy photon and electron beam qualities. TLDs were irradiated to a known dose by a linear accelerator with a 6 MV photon beam, a 6 MeV electron beam, and a NIST-traceable (60)Co beam. TLDs were also irradiated in a mixed field of the 6 MeV electron beam and the 6 MV photon beam. The average TLD response per unit dose of the TLDs for each linac beam quality was normalized to the average response per unit dose of the TLDs irradiated by the (60)Co beam. Irradiations were performed in water and in a Virtual Water™ phantom. The 6 MV photon beam and 6 MeV electron beam were used to create dose calibration curves relating TLD response to absorbed dose to water, which were applied to the TLDs irradiated in the mixed field. TLD relative response per unit dose in the mixed field was less sensitive than the relative response in the photon field and more sensitive than the relative response in the electron field. Application of the photon dose calibration curve to the TLDs irradiated in a mixed field resulted in an underestimation of the delivered dose, while application of the electron dose calibration curve resulted in an overestimation of the dose. The relative response of TLD-100 in mixed fields fell between the relative response in the photon-only and electron-only fields. TLD-100 dosimetry of mixed fields must account for this intermediate response to minimize the estimation errors associated with calibration factors obtained from a single beam quality.
LAMOST Spectrograph Response Curves: Stability and Application to Flux Calibration
NASA Astrophysics Data System (ADS)
Du, Bing; Luo, A.-Li; Kong, Xiao; Zhang, Jian-Nan; Guo, Yan-Xin; Cook, Neil James; Hou, Wen; Yang, Hai-Feng; Li, Yin-Bi; Song, Yi-Han; Chen, Jian-Jun; Zuo, Fang; Wu, Ke-Fei; Wang, Meng-Xin; Wu, Yue; Wang, You-Fen; Zhao, Yong-Heng
2016-12-01
The task of flux calibration for Large sky Area Multi-Object Spectroscopic Telescope (LAMOST) spectra is difficult due to many factors, such as the lack of standard stars, flat-fielding for large field of view, and variation of reddening between different stars, especially at low Galactic latitudes. Poor selection, bad spectral quality, or extinction uncertainty of standard stars not only might induce errors to the calculated spectral response curve (SRC) but also might lead to failures in producing final 1D spectra. In this paper, we inspected spectra with Galactic latitude |b| ≥ 60° and reliable stellar parameters, determined through the LAMOST Stellar Parameter Pipeline (LASP), to study the stability of the spectrograph. To guarantee that the selected stars had been observed by each fiber, we selected 37,931 high-quality exposures of 29,000 stars from LAMOST DR2, with more than seven exposures for each fiber. We calculated the SRCs for each fiber for each exposure and calculated the statistics of SRCs for spectrographs with both the fiber variations and time variations. The result shows that the average response curve of each spectrograph (henceforth ASPSRC) is relatively stable, with statistical errors ≤10%. From the comparison between each ASPSRC and the SRCs for the same spectrograph obtained by the 2D pipeline, we find that the ASPSRCs are good enough to use for the calibration. The ASPSRCs have been applied to spectra that were abandoned by the LAMOST 2D pipeline due to the lack of standard stars, increasing the number of LAMOST spectra by 52,181 in DR2. Comparing those same targets with the Sloan Digital Sky Survey (SDSS), the relative flux differences between SDSS spectra and LAMOST spectra with the ASPSRC method are less than 10%, which underlines that the ASPSRC method is feasible for LAMOST flux calibration.
Liao, C; Peng, Z Y; Li, J B; Cui, X W; Zhang, Z H; Malakar, P K; Zhang, W J; Pan, Y J; Zhao, Y
2015-03-01
The aim of this study was to simultaneously construct PCR-DGGE-based predictive models of Listeria monocytogenes and Vibrio parahaemolyticus on cooked shrimps at 4 and 10°C. Calibration curves were established to correlate peak density of DGGE bands with microbial counts. Microbial counts derived from PCR-DGGE and plate methods were fitted by the Baranyi model to obtain molecular and traditional predictive models. For L. monocytogenes, growing at 4 and 10°C, molecular predictive models were constructed. They showed good correlation coefficients (R(2) > 0.92), bias factors (Bf), and accuracy factors (Af) (1.0 ≤ Bf ≤ Af ≤ 1.1). Moreover, no significant difference was found between molecular and traditional predictive models when analysed on lag phase (λ), maximum growth rate (μmax), and growth data (P > 0.05). But for V. parahaemolyticus, inactivated at 4 and 10°C, molecular models showed a significant difference compared with traditional models. Taken together, these results suggest that PCR-DGGE based on DNA can be used to construct growth models, but is not yet appropriate for inactivation models. This is the first report of developing PCR-DGGE to simultaneously construct multiple molecular models. It has been known for a long time that microbial predictive models based on traditional plate methods are time-consuming and labour-intensive. Denaturing gradient gel electrophoresis (DGGE) has been widely used as a semiquantitative method to describe complex microbial community. In our study, we developed DGGE to quantify bacterial counts and simultaneously established two molecular predictive models to describe the growth and survival of two bacteria (Listeria monocytogenes and Vibrio parahaemolyticus) at 4 and 10°C. We demonstrated that PCR-DGGE could be used to construct growth models.
This work provides a new approach to construct molecular predictive models and thereby facilitates predictive microbiology and QMRA (Quantitative Microbial Risk Assessment). © 2014 The Society for Applied Microbiology.
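The calibration step described above, correlating DGGE band peak density with plate counts, amounts to a simple regression. A minimal sketch with invented numbers (not the study's data), regressing log10 plate counts on peak density and reporting the R² that such curves are judged by:

```python
import numpy as np

# Hypothetical calibration data: DGGE band peak densities paired with
# plate counts (log10 CFU/g). Values are illustrative, not from the study.
peak_density = np.array([12.0, 25.0, 41.0, 58.0, 73.0, 90.0])
log_counts = np.array([3.1, 4.0, 4.9, 5.8, 6.6, 7.5])

# Ordinary least-squares straight line: log_counts ~ slope * density + b
slope, intercept = np.polyfit(peak_density, log_counts, 1)
predicted = slope * peak_density + intercept

# Coefficient of determination (the R^2 reported for such curves)
ss_res = np.sum((log_counts - predicted) ** 2)
ss_tot = np.sum((log_counts - log_counts.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

The fitted line would then convert densitometry readings from new gels into estimated counts, which feed the growth-model fitting stage.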
Construction of Joule Thomson inversion curves for mixtures using equation of state
NASA Astrophysics Data System (ADS)
Patankar, A. S.; Atrey, M. D.
2017-02-01
The Joule-Thomson effect is at the heart of Joule-Thomson cryocoolers and gas liquefaction cycles. The effective harnessing of this phenomenon necessitates the knowledge of the Joule-Thomson coefficient and the inversion curve. When the working fluid is a mixture (as in a mixed-refrigerant Joule-Thomson cryocooler, MRJT), the phase diagrams, equations of state and inversion curves of multi-component systems become important. The lowest temperature attainable by such a cryocooler depends on the inversion characteristics of the mixture used. In this work the construction of differential Joule-Thomson inversion curves of mixtures using the Redlich-Kwong, Soave-Redlich-Kwong and Peng-Robinson equations of state is investigated, assuming a single phase. It is demonstrated that inversion curves constructed for pure fluids can be improved by choosing an appropriate value of the acentric factor. Inversion curves are used to predict maximum inversion temperatures of multicomponent systems. An application where this information is critical is a two-stage J-T cryocooler using a mixture as the working fluid, especially for the second stage. The pre-cooling temperature that the first stage is required to generate depends on the maximum inversion temperature of the second stage working fluid.
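A differential inversion curve can be traced numerically from an equation of state by finding, at each temperature, the molar volume where the Joule-Thomson coefficient changes sign, i.e. where T(∂P/∂T)_V + V(∂P/∂V)_T = 0. The sketch below uses the Redlich-Kwong EOS (one of the three the paper considers, here for a pure fluid rather than a mixture) with finite-difference derivatives; the nitrogen critical constants are textbook values, and the bracketing interval for the root search is an assumption:

```python
import numpy as np
from scipy.optimize import brentq

R = 8.314  # universal gas constant, J/(mol K)

def rk_pressure(T, V, a, b):
    # Redlich-Kwong equation of state, SI units (Pa, m^3/mol, K)
    return R * T / (V - b) - a / (np.sqrt(T) * V * (V + b))

def jt_residual(V, T, a, b):
    # Inversion condition T*(dP/dT)_V + V*(dP/dV)_T = 0,
    # evaluated with central finite differences
    hT, hV = 1e-6, 1e-6 * V
    dPdT = (rk_pressure(T + hT, V, a, b) - rk_pressure(T - hT, V, a, b)) / (2 * hT)
    dPdV = (rk_pressure(T, V + hV, a, b) - rk_pressure(T, V - hV, a, b)) / (2 * hV)
    return T * dPdT + V * dPdV

def inversion_curve(Tc, Pc, temperatures):
    # RK constants from the critical temperature and pressure
    a = 0.42748 * R**2 * Tc**2.5 / Pc
    b = 0.08664 * R * Tc / Pc
    points = []
    for T in temperatures:
        try:
            V = brentq(jt_residual, 1.01 * b, 1.0, args=(T, a, b))
            points.append((T, rk_pressure(T, V, a, b)))
        except ValueError:
            pass  # endpoints share a sign: no inversion point at this T
    return points

# Nitrogen (Tc = 126.2 K, Pc = 3.39 MPa), supercritical temperatures only
curve = inversion_curve(126.2, 3.39e6, np.linspace(150.0, 600.0, 10))
```

Above the maximum inversion temperature (around 670 K for nitrogen with this EOS) the residual has no sign change and the curve terminates, which is exactly the quantity the paper uses to size the pre-cooling stage.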
Connections between survey calibration estimators and semiparametric models for incomplete data
Lumley, Thomas; Shaw, Pamela A.; Dai, James Y.
2012-01-01
Survey calibration (or generalized raking) estimators are a standard approach to the use of auxiliary information in survey sampling, improving on the simple Horvitz–Thompson estimator. In this paper we relate the survey calibration estimators to the semiparametric incomplete-data estimators of Robins and coworkers, and to adjustment for baseline variables in a randomized trial. The development based on calibration estimators explains the ‘estimated weights’ paradox and provides useful heuristics for constructing practical estimators. We present some examples of using calibration to gain precision without making additional modelling assumptions in a variety of regression models. PMID:23833390
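The linear calibration (GREG) estimator discussed here has a closed form: adjust the design weights d_i so that the weighted totals of the auxiliary variables x_i match known population totals. A minimal sketch with invented survey data (sample size, design weights, and population totals are all placeholders):

```python
import numpy as np

# Invented sample: n = 200 units, equal design weights implying N = 1000
rng = np.random.default_rng(0)
n = 200
d = np.full(n, 5.0)                          # design weights
x = np.column_stack([np.ones(n),             # intercept -> population size
                     rng.uniform(20, 60, n)])  # auxiliary variable, e.g. age
totals = np.array([1000.0, 41000.0])         # known population totals of x

# Linear calibration: w_i = d_i * (1 + x_i' lam), with lam solving
#   (sum_i d_i x_i x_i') lam = totals - sum_i d_i x_i
M = (d[:, None] * x).T @ x
lam = np.linalg.solve(M, totals - d @ x)
w = d * (1.0 + x @ lam)

# By construction the calibrated weights reproduce the auxiliary totals
calibrated_totals = w @ x
```

Outcome totals estimated with w then gain precision to the extent the outcome is correlated with x, without any additional modelling assumptions, which is the point the paper develops.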
Spectral Irradiance Calibration in the Infrared. 4; 1.2-35um Spectra of Six Standard Stars
NASA Technical Reports Server (NTRS)
Cohen, Martin; Witteborn, Fred C.; Walker, Russell G.; Bregman, Jesse D.; Wooden, Diane H.
1995-01-01
We present five new absolutely calibrated continuous stellar spectra from 1.2 to 35 microns, constructed as far as possible from actual observed spectral fragments taken from the ground, the Kuiper Airborne Observatory (KAO), and the IRAS Low Resolution Spectrometer (LRS). These stars, beta Peg, alpha Boo, beta And, beta Gem, and alpha Hya, augment our already created complete absolutely calibrated spectrum for alpha Tau. All these spectra have a common calibration pedigree. The wavelength coverage is ideal for calibration of many existing and proposed ground-based, airborne, and satellite sensors.
NASA Technical Reports Server (NTRS)
Cohen, Martin; Witteborn, Fred C.; Walker, Russell G.; Bregman, Jesse D.; Wooden, Diane H.
1995-01-01
We present five new absolutely calibrated continuous stellar spectra from 1.2 to 35 microns, constructed as far as possible from actual observed spectral fragments taken from the ground, the Kuiper Airborne Observatory (KAO), and the IRAS Low Resolution Spectrometer (LRS). These stars- beta Peg, alpha Boo, beta And, beta Gem, and alpha Hya-augment our already created complete absolutely calibrated spectrum for alpha Tau. All these spectra have a common calibration pedigree. The wavelength coverage is ideal for calibration of many existing and proposed ground-based, airborne, and satellite sensors.
Yarita, Takashi; Aoyagi, Yoshie; Otake, Takamitsu
2015-05-29
The impact of the matrix effect in GC-MS quantification of pesticides in food using the corresponding isotope-labeled internal standards was evaluated. A spike-and-recovery study of nine target pesticides was first conducted using paste samples of corn, green soybean, carrot, and pumpkin. The observed analytical values using isotope-labeled internal standards were more accurate for most target pesticides than those obtained using the external calibration method, but were still biased from the spiked concentrations when a matrix-free calibration solution was used for calibration. The respective calibration curves for each target pesticide were also prepared using matrix-free calibration solutions and matrix-matched calibration solutions with blank soybean extract. The intensity ratio of the peaks of most target pesticides to that of the corresponding isotope-labeled internal standards was influenced by the presence of the matrix in the calibration solution; therefore, the observed slope varied. The ratio was also influenced by the type of injection method (splitless or on-column). These results indicated that matrix-matching of the calibration solution is required for very accurate quantification, even if isotope-labeled internal standards are used for calibration. Copyright © 2015 Elsevier B.V. All rights reserved.
The Fermilab Muon g-2 experiment: laser calibration system
Karuza, M.; Anastasi, A.; Basti, A.; ...
2017-08-17
The anomalous muon dipole magnetic moment can be measured (and calculated) with great precision, thus providing insight on the Standard Model and new physics. Currently an experiment is under construction at Fermilab (U.S.A.) which is expected to measure the anomalous muon dipole magnetic moment with unprecedented precision. One of the improvements with respect to the previous experiments is expected to come from the laser calibration system, which has been designed and constructed by the Italian part of the collaboration (INFN). The emphasis of this paper is on the calibration system, which is in the final stages of construction, as well as on the experiment, which is expected to start data taking this year.
Satellite Calibration With LED Detectors at Mud Lake
NASA Technical Reports Server (NTRS)
Hiller, Jonathan D.
2005-01-01
Earth-monitoring instruments in orbit must be routinely calibrated in order to accurately analyze the data obtained. By comparing radiometric measurements taken on the ground in conjunction with a satellite overpass, calibration curves are derived for an orbiting instrument. A permanent, automated facility is planned for Mud Lake, Nevada (a large, homogeneous, dry lakebed) for this purpose. Because some orbiting instruments have low resolution (250 meters per pixel), inexpensive radiometers using LEDs as sensors are being developed to array widely over the lakebed. LEDs are ideal because they are inexpensive, reliable, and sense over a narrow bandwidth. By obtaining and averaging widespread data, errors are reduced and long-term surface changes can be more accurately observed.
Results of the 1996 JPL Balloon Flight Solar Cell Calibration Program
NASA Technical Reports Server (NTRS)
Anspaugh, B. E.; Weiss, R. S.
1996-01-01
The 1996 solar cell calibration balloon flight campaign was completed with the first flight on June 30, 1996 and a second flight on August 8, 1996. All objectives of the flight program were met. Sixty-four modules were carried to an altitude of 120,000 ft (36.6 km). Full I-V curves were measured on 22 of these modules, and output at a fixed load was measured on 42 modules. These data were corrected to 28 C and to 1 AU (1.496 x 10(exp 8) km). The calibrated cells have been returned to the participants and can now be used as reference standards in simulator testing of cells and arrays.
Lee, K R; Dipaolo, B; Ji, X
2000-06-01
Calibration is the process of fitting a model based on reference data points (x, y), then using the model to estimate an unknown x based on a new measured response, y. In DNA assay, x is the concentration, and y is the measured signal volume. A four-parameter logistic model is frequently used for calibration of immunoassays when the response is optical density for enzyme-linked immunosorbent assay (ELISA) or adjusted radioactivity count for radioimmunoassay (RIA). Here, it is shown that the same model, or a linearized version of the curve, is equally useful for the calibration of a chemiluminescent hybridization assay for residual DNA in recombinant protein drugs and for calculation of performance measures of the assay.
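The four-parameter logistic calibration workflow, fit the curve to reference points, then back-calculate an unknown x from a new measured y, can be sketched as below. The data are synthetic and noiseless, and the parameter values and bounds are illustrative assumptions, not the paper's assay values:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # Four-parameter logistic: a = response at x -> 0, d = response at
    # x -> infinity, c = inflection concentration (EC50), b = slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_four_pl(y, a, b, c, d):
    # Back-calculate concentration from a measured response
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Hypothetical noiseless reference points (concentration, signal)
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
signal = four_pl(conc, 0.05, 1.2, 5.0, 2.0)

# Fit the calibration model; positive bounds keep the power term
# well defined during the search
popt, _ = curve_fit(four_pl, conc, signal,
                    p0=[0.1, 1.0, 10.0, 1.5], bounds=(1e-6, 1e3))

# Estimate the unknown concentration behind a new measured response
y_new = 1.025  # in this synthetic setup, the true signal at x = 5.0
x_unknown = inverse_four_pl(y_new, *popt)
```

The linearized alternative the abstract mentions (e.g. a logit-log transform of the same relationship) would replace curve_fit with a straight-line fit on the transformed data.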
Concave Surround Optics for Rapid Multi-View Imaging
2006-11-01
…thus is amenable to capturing dynamic events, avoiding the need to construct and calibrate an array of cameras. We demonstrate the system with a high… hard to assemble and calibrate. In this paper we present an optical system capable of rapidly moving the viewpoint around a scene. Our system… flexibility, large camera arrays are typically expensive and require significant effort to calibrate temporally, geometrically and chromatically…
Müller, Christoph; Vetter, Florian; Richter, Elmar; Bracher, Franz
2014-02-01
The occurrence of the bioactive components caffeine (xanthine alkaloid), myosmine and nicotine (pyridine alkaloids) in different edibles and plants is well known, but the content of myosmine and nicotine is still ambiguous in milk/dark chocolate. Therefore, a sensitive method for determination of these components was established: a simple separation of the dissolved analytes from the matrix, followed by headspace solid-phase microextraction coupled with gas chromatography-tandem mass spectrometry (HS-SPME-GC-MS/MS). This is the first approach for simultaneous determination of caffeine, myosmine, and nicotine with a convenient SPME technique. Calibration curves were linear for the xanthine alkaloid (250 to 3000 mg/kg) and the pyridine alkaloids (0.000125 to 0.003000 mg/kg). Residuals of the calibration curves were lower than 15%, hence the limits of detection were set as the lowest points of the calibration curves. The limits of detection calculated from linearity data were 216 mg/kg for caffeine, 0.000110 mg/kg for myosmine, and 0.000120 mg/kg for nicotine. Thirty samples of 5 chocolate brands with varying cocoa contents (30% to 99%) were analyzed in triplicate. Caffeine and nicotine were detected in all samples of chocolate, whereas myosmine was not present in any sample. The caffeine content ranged from 420 to 2780 mg/kg (relative standard deviation 0.1 to 11.5%) and nicotine from 0.000230 to 0.001590 mg/kg (RSD 2.0 to 22.1%). © 2014 Institute of Food Technologists®
THE USE OF QUENCHING IN A LIQUID SCINTILLATION COUNTER FOR QUANTITATIVE ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, G.V.
1963-01-01
Quenching was used to quantitatively determine the amount of quenching agent present. A sealed promethium-147 source was prepared to be used for the count rate determinations. Two methods to determine the amount of quenching agent present in a sample were developed. One method related the count rate of a sample containing a quenching agent to the amount of quenching agent present. Calibration curves were plotted using both color and chemical quenchers. The quenching agents used were: F.D.C. Orange No. 2, F.D.C. Yellow No. 3, F.D.C. Yellow No. 4, Scarlet Red, acetone, benzaldehyde, and carbon tetrachloride. The color quenchers gave a linear relationship, while the chemical quenchers gave a non-linear relationship. Quantities of the color quenchers between about 0.008 mg and 0.100 mg can be determined with an error less than 5%. The calibration curves were found to be usable over a long period of time. The other method related the change in the ratio of the count rates in two voltage windows to the amount of quenching agent present. The quenchers mentioned above were used. Calibration curves were plotted for both the color and chemical quenchers. The relationships of ratio versus amount of quencher were non-linear in each case. It was shown that the reproducibility of the count rate and the ratio was independent of the amount of quencher present but was dependent on the count rate. At count rates above 10,000 counts per minute the reproducibility was better than 1%. (TCO)
Spectral Measurement of Watershed Coefficients in the Southern Great Plains
NASA Technical Reports Server (NTRS)
Blanchard, B. J. (Principal Investigator); Bausch, W.
1978-01-01
The author has identified the following significant results. It was apparent that the spectral calibration of runoff curve numbers cannot be achieved on watersheds where significant areas of timber were within the drainage area. The absorption of light by wet soil conditions restricts differentiation of watersheds with regard to watershed runoff curve numbers. It appeared that the predominant factor influencing the classification of watershed runoff curve numbers was the difference in soil color and its associated reflectance when dry. In regions where vegetation grows throughout the year, where wet surface conditions prevail, or where watersheds are timbered, there is little hope of classifying runoff potential with visible light alone.
NASA Astrophysics Data System (ADS)
Pan, Feifei; Wang, Cheng; Xi, Xiaohuan
2016-09-01
Remote sensing from satellites and airborne platforms provides valuable data for monitoring and gauging river discharge. One effective approach first estimates river stage from satellite-measured inundation area based on the inundation area-river stage relationship (IARSR), and then the estimated river stage is used to compute river discharge based on the stage-discharge rating (SDR) curve. However, this approach is difficult to implement because of a lack of data for constructing the SDR curves. This study proposes a new method to construct the SDR curves using remotely sensed river cross-sectional inundation areas and river bathymetry. The proposed method was tested over a river reach between two USGS gauging stations, i.e., Kingston Mines (KM) and Copperas Creek (CC) along the Illinois River. First a polygon over each of two cross sections was defined. A complete IARSR curve was constructed inside each polygon using digital elevation model (DEM) and river bathymetric data. The constructed IARSR curves were then used to estimate 47 river water surface elevations at each cross section based on 47 river inundation areas estimated from Landsat TM images collected during 1994-2002. The estimated water surface elevations were substituted into an objective function formed by the Bernoulli equation of gradually varied open channel flow. A nonlinear global optimization scheme was applied to solve the Manning's coefficient through minimizing the objective function value. Finally the SDR curve was constructed at the KM site using the solved Manning's coefficient, channel cross sectional geometry and the Manning's equation, and employed to estimate river discharges. The root mean square error (RMSE) in the estimated river discharges against the USGS measured river discharges is 112.4 m3/s. 
To consider the variation of the Manning's coefficient in the vertical direction, this study also suggested a power-law function to describe the vertical decline of the Manning's coefficient with the water level from the channel bed lowest elevation to the bank-full level. The constructed SDR curve with the vertical variation of the Manning's coefficient reduced the RMSE in the estimated river discharges to 83.9 m3/s. These results indicate that the method developed and tested in this study is effective and robust, and has the potential to improve our ability to measure river discharge remotely and to provide data for water resources management, global water cycle studies, and flood forecasting and prevention.
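The final step of the method, turning a solved Manning's coefficient and the channel geometry into a stage-discharge rating, can be sketched for an idealized rectangular cross section. The width, roughness n, and slope S below are invented placeholders, not the Illinois River values:

```python
import numpy as np

def manning_discharge(stage_m, width_m=50.0, n=0.035, slope=2e-4):
    """Manning equation for a rectangular channel, SI units:
    Q = (1/n) * A * R^(2/3) * sqrt(S), with R = A / P."""
    area = width_m * stage_m                    # flow cross-section A
    wetted_perimeter = width_m + 2.0 * stage_m  # bed plus two banks
    hydraulic_radius = area / wetted_perimeter
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * np.sqrt(slope)

# Tabulate the rating curve over a range of stages
stages = np.linspace(0.5, 6.0, 12)
rating_curve = [(h, manning_discharge(h)) for h in stages]
```

In the paper the cross section comes from DEM and bathymetric data and n additionally varies with water level, but the stage-to-discharge mapping is this same Manning relation evaluated along the surveyed geometry.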
Comet P/Halley 1910, 1986: An objective-prism study
NASA Technical Reports Server (NTRS)
Carsenty, U.; Bus, E. S.; Wyckoff, S.; Lutz, B.
1986-01-01
V. M. Slipher of the Lowell Obs. collected a large amount of spectroscopic data during the 1910 apparition of Halley's comet. Three of his post-perihelion objective-prism plates were selected, digitized, and subjected to modern digital data reduction procedures. Some of the important steps in the analysis were: (1) density-to-intensity conversion, using 1910 slit spectra of an Fe-arc lamp on similar plates (Sigma) to derive an average characteristic curve; (2) flux calibration, using the fact that during the period June 2 to 7, 1910, P/Halley was very close (in angular distance) to the bright star Alpha Sex (A0III, V = 4.49), and the spectra of both star and comet were recorded on the same plates; the flux distribution of Alpha Sex was assumed to be similar to that of the standard star 58 Aql, and a sensitivity curve for the system was derived; (3) atmospheric extinction correction, using the standard curve for the Lowell Obs.; (4) solar continuum subtraction, using the standard solar spectrum binned to the spectral resolution. An example of a flux-calibrated spectrum of the coma (integrated over 87,000 km) before the subtraction of solar continuum is presented.
Xiu, Junshan; Liu, Shiming; Sun, Meiling; Dong, Lili
2018-01-20
The photoelectric performance of metal ion-doped TiO2 films varies with the composition and concentration of the additive elements. In this work, TiO2 films doped with different Sn concentrations were obtained with the hydrothermal method. Qualitative and quantitative analysis of the Sn element in the TiO2 films was achieved with laser induced breakdown spectroscopy (LIBS), with calibration curves plotted accordingly. The photoelectric characteristics of TiO2 films doped with different Sn content were observed with UV visible absorption spectra and J-V curves. All results showed that Sn doping red-shifts the optical absorption and improves the photoelectric properties of the TiO2 films. The film with a Sn doping concentration of 11.89 mmol/L, as calculated from the LIBS calibration curves, had the largest current density, indicating the best photoelectric performance. This suggests that LIBS is a feasible method for qualitative and quantitative analysis of additive elements in metal oxide nanometer films.
Biodosimetry of heavy ions by interphase chromosome painting
NASA Astrophysics Data System (ADS)
Durante, M.; Kawata, T.; Nakano, T.; Yamada, S.; Tsujii, H.
1998-11-01
We report measurements of chromosomal aberrations in peripheral blood lymphocytes from cancer patients undergoing radiotherapy treatment. Patients with cervix or esophageal cancer were treated with 10 MV X-rays produced at a LINAC accelerator, or high-energy carbon ions produced at the HIMAC accelerator at the National Institute for Radiological Sciences (NIRS) in Chiba. Blood samples were obtained before, during, and after the radiation treatment. Chromosomes were prematurely condensed by incubation in calyculin A. Aberrations in chromosomes 2 and 4 were scored after fluorescence in situ hybridization with whole-chromosome probes. Pre-treatment samples were exposed in vitro to X-rays, individual dose-response curves for the induction of chromosomal aberrations were determined, and used as calibration curves to calculate the effective whole-body dose absorbed during the treatment. This calculated dose, based on the calibration curve relative to the induction of reciprocal exchanges, has a sharp increase after the first few fractions of the treatment, then saturates at high doses. Although carbon ions are 2-3 times more effective than X-rays in tumor sterilization, the effective dose was similar to that of X-ray treatment. However, the frequency of complex-type chromosomal exchanges was much higher for patients treated with carbon ions than with X-rays.
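The biodosimetry workflow in this abstract, fit an in vitro dose-response calibration curve and then invert it to estimate an absorbed dose from an observed aberration yield, is commonly done with a linear-quadratic model Y = c + αD + βD². The yields below are invented placeholders, not patient data:

```python
import numpy as np

# Hypothetical in vitro calibration data: aberration yields (exchanges
# per cell) at known X-ray doses. Values are invented for illustration.
doses_Gy = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])
yields = np.array([0.001, 0.035, 0.090, 0.260, 0.510, 0.840])

# Least-squares quadratic fit; np.polyfit returns the highest power first,
# so the coefficients map to Y = beta*D^2 + alpha*D + c
beta, alpha, c = np.polyfit(doses_Gy, yields, 2)

def estimate_dose(observed_yield):
    """Solve beta*D^2 + alpha*D + (c - Y) = 0 for the positive root."""
    disc = alpha ** 2 - 4.0 * beta * (c - observed_yield)
    return (-alpha + np.sqrt(disc)) / (2.0 * beta)
```

A yield scored in a patient sample during treatment would then be passed to estimate_dose to give the effective whole-body dose, which is the quantity the study tracks across treatment fractions.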
Jankowski, Clémentine; Guiu, S; Cortet, M; Charon-Barra, C; Desmoulins, I; Lorgis, V; Arnould, L; Fumoleau, P; Coudert, B; Rouzier, R; Coutant, C; Reyal, F
2017-01-01
The aim of this study was to assess the Institut Gustave Roussy/M.D. Anderson Cancer Center (IGR/MDACC) nomogram in predicting pathologic complete response (pCR) to preoperative chemotherapy in a cohort of human epidermal growth factor receptor 2 (HER2)-positive tumors treated with preoperative chemotherapy with trastuzumab. We then combine clinical and pathological variables associated with pCR into a new nomogram specific to HER2-positive tumors treated by preoperative chemotherapy with trastuzumab. Data from 270 patients with HER2-positive tumors treated with preoperative chemotherapy with trastuzumab at the Institut Curie and at the Georges François Leclerc Cancer Center were used to assess the IGR/MDACC nomogram and to subsequently develop a new nomogram for pCR based on multivariate logistic regression. Model performance was quantified in terms of calibration and discrimination. We studied the utility of the new nomogram using decision curve analysis. The IGR/MDACC nomogram was not accurate for the prediction of pCR in HER2-positive tumors treated by preoperative chemotherapy with trastuzumab, with poor discrimination (AUC = 0.54, 95% CI 0.51-0.58) and poor calibration (p = 0.01). After uni- and multivariate analysis, a new pCR nomogram was built based on T stage (TNM), hormone receptor status, and Ki67 (%). The model had good discrimination with an area under the curve (AUC) at 0.74 (95% CI 0.70-0.79) and adequate calibration (p = 0.93). By decision curve analysis, the model was shown to be relevant between thresholds of 0.3 and 0.7. To the best of our knowledge, ours is the first nomogram to predict pCR in HER2-positive tumors treated by preoperative chemotherapy with trastuzumab. To ensure generalizability, this model needs to be externally validated.
Van Hooff, Robbert-Jan; Nieboer, Koenraad; De Smedt, Ann; Moens, Maarten; De Deyn, Peter Paul; De Keyser, Jacques; Brouns, Raf
2014-10-01
We evaluated the reliability of eight clinical prediction models for symptomatic intracerebral hemorrhage (sICH) and long-term functional outcome in stroke patients treated with thrombolytics according to clinical practice. In a cohort of 169 patients, 60 patients (35.5%) received IV rtPA according to the European license criteria. The remaining patients received off-label IV rtPA and/or were treated with intra-arterial thrombolysis. We used receiver operating characteristic curves to analyze the discriminative capacity of the MSS score, the HAT score, the SITS SICH score, the SEDAN score and the GRASPS score for sICH according to the NINDS and the ECASS II criteria. Similarly, the discriminative capacity of the s-TPI, the iScore and the DRAGON score was assessed for the modified Rankin Scale (mRS) score at 3 months poststroke. An area under the curve (c-statistic) >0.8 was considered to reflect good discriminative capacity. The reliability of the best-performing prediction model was further examined with calibration curves. Separate analyses were performed for patients meeting the European license criteria for IV rtPA and patients outside these criteria. For prediction of sICH, c-statistics ranged from 0.66 to 0.86, and the MSS yielded the best results. For functional outcome, c-statistics ranged from 0.72 to 0.86, with the s-TPI as the best performer. The s-TPI had the lowest absolute error on the calibration curve for predicting excellent outcome (mRS 0-1) and catastrophic outcome (mRS 5-6). All eight clinical models for outcome prediction after thrombolysis for acute ischemic stroke showed fair predictive value in patients treated according to daily practice. The s-TPI had the best discriminatory ability and was well calibrated in our study population. Copyright © 2014 Elsevier B.V. All rights reserved.
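The c-statistic used throughout this evaluation is the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case. A minimal self-contained implementation (not the authors' code) makes the definition concrete:

```python
def c_statistic(scores, labels):
    """Concordance index (ROC AUC): probability that a randomly chosen
    positive case scores higher than a randomly chosen negative case.
    Ties count as 0.5. labels are 1 (event) or 0 (no event)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly separating model returns 1.0; a model no better than chance returns about 0.5, which is why the >0.8 threshold above marks good discrimination.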
SU-G-BRB-14: Uncertainty of Radiochromic Film Based Relative Dose Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devic, S; Tomic, N; DeBlois, F
2016-06-15
Purpose: Due to its inherently non-linear dose response, measurement of a relative dose distribution with radiochromic film requires measurement of absolute dose using a calibration curve, following a previously established reference dosimetry protocol. On the other hand, a functional form that converts the inherently non-linear dose response curve of the radiochromic film dosimetry system into a linear one has been proposed recently [Devic et al., Med. Phys. 39, 4850-4857 (2012)]. However, the question remains what the uncertainty of a relative dose measured this way would be. Methods: If the relative dose distribution is determined by going through the reference dosimetry system (conversion of the response into absolute dose using the calibration curve), the total uncertainty of the relative dose is calculated by summing in quadrature the total uncertainties of the doses measured at a given point and at the reference point. On the other hand, if the relative dose is determined using the linearization method, the new response variable is calculated as ζ = a(netOD)^n/ln(netOD). In this case, the total uncertainty in relative dose is calculated by summing in quadrature the uncertainties of the new response function (σζ) at a given point and at the reference point. Results: Except at very low doses, where the measurement uncertainty dominates, the total relative dose uncertainty is less than 1% for the linear response method, compared to an almost 2% uncertainty level for the reference dosimetry method. The result is not surprising, bearing in mind that the total uncertainty of the reference dose method is dominated by the fitting uncertainty, which is mitigated in the linearization method. Conclusion: Linearization of the radiochromic film dose response provides a convenient and more precise method for relative dose measurements, as it requires neither reference dosimetry nor creation of a calibration curve. However, the linearity of the newly introduced function must be verified.
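The quadrature rule described in the Methods section can be sketched in a few lines. For a relative dose d = D/D_ref, standard propagation of independent uncertainties gives a relative uncertainty equal to the quadrature sum of the relative uncertainties at the two points; this is a generic propagation sketch, not the authors' code.

```python
import math

# Quadrature propagation for a relative dose d_rel = D / D_ref:
# (u_rel / d_rel)^2 = (u_D / D)^2 + (u_Dref / D_ref)^2
def relative_dose_uncertainty(dose, u_dose, dose_ref, u_dose_ref):
    """Return (relative dose, its absolute uncertainty) assuming the two
    dose measurements are independent."""
    d_rel = dose / dose_ref
    u_rel = d_rel * math.sqrt((u_dose / dose) ** 2 + (u_dose_ref / dose_ref) ** 2)
    return d_rel, u_rel
```

For example, two measurements each carrying 1% uncertainty combine to roughly 1.4% (sqrt(2) x 1%) on the ratio, which illustrates why reducing the per-point fitting uncertainty, as the linearization method does, lowers the total.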
Dave Lewis is an inventor and runs a consulting company for radiochromic films.
A method for determination of [Fe3+]/[Fe2+] ratio in superparamagnetic iron oxide
NASA Astrophysics Data System (ADS)
Jiang, Changzhao; Yang, Siyu; Gan, Neng; Pan, Hongchun; Liu, Hong
2017-10-01
Superparamagnetic iron oxide nanoparticles (SPION), a class of nanophase materials, are widely used in biomedical applications such as magnetic resonance imaging (MRI), drug delivery, and magnetic-field-assisted therapy. The magnetic properties of SPION are closely connected with the crystal structure, namely the ratio of the Fe3+ and Fe2+ ions that form the SPION. A simple way to determine the Fe3+ and Fe2+ content is therefore important for studying the properties of SPION. This work covers a method for determining the [Fe3+]/[Fe2+] ratio in SPION by UV-vis spectrophotometry, based on the reaction of Fe2+ with 1,10-phenanthroline. A standard curve for Fe with R² = 0.9999 is used to determine the Fe2+ and total iron content: 2.5 mL of 0.01% (w/v) SPION digested by HCl is combined with 10 mL of pH 4.30 HOAc-NaAc buffer and 5 mL of 0.01% (w/v) 1,10-phenanthroline, with 1 mL of 10% (w/v) ascorbic acid added for the independent determination of total iron. However, the presence of Fe3+ interferes with obtaining the actual value of Fe2+ (an error close to 9%). We designed a calibration curve to eliminate this error by preparing a series of solutions with different [Fe3+]/[Fe2+] ratios. Through this calibration curve, the error between the measured value and the actual value can be reduced to 0.4%. The R² of the linearity of the method is 0.99441 and 0.99929 for Fe2+ and total iron, respectively. The errors in recovery accuracy and in inter-day and intra-day precision are both lower than 2%, which demonstrates the reliability of the determination method.
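The arithmetic behind the determination is simple: Fe2+ is read off the standard curve directly, total iron is read off after reduction with ascorbic acid, and Fe3+ follows by difference. The sketch below is illustrative only (hypothetical slope and intercept, not the paper's interference-corrected calibration):

```python
# Illustrative sketch, not the authors' procedure. Assumes a linear standard
# curve A = slope * c + intercept for the Fe(II)-phenanthroline complex.
def concentration(absorbance, slope, intercept):
    """Invert the standard curve to get a concentration."""
    return (absorbance - intercept) / slope

def fe_ratio(a_fe2, a_total, slope, intercept):
    """[Fe3+]/[Fe2+] from the direct (Fe2+) and reduced (total Fe) readings."""
    c_fe2 = concentration(a_fe2, slope, intercept)
    c_total = concentration(a_total, slope, intercept)
    c_fe3 = c_total - c_fe2          # Fe3+ by difference
    return c_fe3 / c_fe2
```

For stoichiometric magnetite (Fe3O4) the expected ratio is 2, which gives a quick sanity check on any measured pair of readings.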
An implantable transducer for measuring tension in an anterior cruciate ligament graft.
Ventura, C P; Wolchok, J; Hull, M L; Howell, S M
1998-06-01
The goal of this study was to develop a new implantable transducer for measuring anterior cruciate ligament (ACL) graft tension postoperatively in patients who have undergone ACL reconstructive surgery. A unique approach was taken of integrating the transducer into a femoral fixation device. To devise a practical in vivo calibration protocol for the fixation device transducer (FDT), several hypotheses were investigated: (1) the use of a cable versus the actual graft as the means for applying load to the FDT during calibration has no significant effect on the accuracy of the FDT tension measurements; (2) the number of flexion angles at which the device is calibrated has no significant effect on the accuracy of the FDT measurements; (3) the friction between the graft and femoral tunnel has no significant effect on measurement accuracy. To provide data for testing these hypotheses, the FDT was first calibrated with both a cable and a graft over the full range of flexion. Then graft tension was measured simultaneously with both the FDT on the femoral side and load cells, which were connected to the graft on the tibial side, as five cadaver knees were loaded externally. Measurements were made with both standard and overdrilled tunnels. The error in the FDT tension measurements was the difference between the graft tension measured by the FDT and by the load cells. Results of the statistical analyses showed that neither the means of applying the calibration load, the number of flexion angles used for calibration, nor the tunnel size had a significant effect on the accuracy of the FDT. Thus a cable may be used instead of the graft to transmit loads to the FDT during calibration, simplifying the procedure. Accurate calibration requires data from just three flexion angles (0, 45, and 90 deg) and a curve fit to obtain a calibration curve over a continuous range of flexion within the limits of these angles.
Since friction did not adversely affect the measurement accuracy of the FDT, the femoral tunnel can be drilled to match the diameter of the graft and does not need to be overdrilled. Following these procedures, the error in measuring graft tension with the FDT averages less than 10 percent relative to a full-scale load of 257 N.
Gokduman, Kurtulus; Avsaroglu, M Dilek; Cakiris, Aris; Ustek, Duran; Gurakan, G Candan
2016-03-01
The aim of the current study was to develop a new, rapid, sensitive and quantitative Salmonella detection method using a real-time PCR technique based on an inexpensive, easy-to-produce, convenient and standardized recombinant plasmid positive control. To achieve this, two recombinant plasmids were constructed as reference molecules by cloning the two most commonly used Salmonella-specific target gene regions, invA and ttrRSBC. The more rapid detection enabled by the developed method (21 h) compared to the traditional culture method (90 h) allows the quantitative evaluation of Salmonella (quantification limits of 10^1 CFU/mL and 10^0 CFU/mL for the invA target and the ttrRSBC target, respectively), as illustrated using milk samples. Three advantages illustrated by the current study demonstrate the potential of the newly developed method for routine analyses in the medical, veterinary, food and water/environmental sectors: (i) the method provides fast analyses, including simultaneous detection and determination of correct pathogen counts; (ii) the method is applicable to challenging samples, such as milk; (iii) the method's positive controls (recombinant plasmids) are reproducible in large quantities without the need to construct new calibration curves. Copyright © 2016 Elsevier B.V. All rights reserved.
An Empirical Formula From Ion Exchange Chromatography and Colorimetry.
ERIC Educational Resources Information Center
Johnson, Steven D.
1996-01-01
Presents a detailed procedure for finding an empirical formula from ion exchange chromatography and colorimetry. Introduces students to more varied techniques including volumetric manipulation, titration, ion-exchange, preparation of a calibration curve, and the use of colorimetry. (JRH)
Williams, Ammon; Bryce, Keith; Phongikaroon, Supathorn
2017-10-01
Pyroprocessing of used nuclear fuel (UNF) has many advantages, including proliferation resistance. However, as part of the process, special nuclear materials accumulate in the electrolyte salt and present material accountability and safeguards concerns. The main motivation of this work was to explore a laser-induced breakdown spectroscopy (LIBS) approach as an online monitoring technique to enhance the material accountability of special nuclear materials in pyroprocessing. In this work, a vacuum extraction method was used to draw the molten salt (CeCl3-GdCl3-LiCl-KCl) up into 4 mm diameter Pyrex tubes, where it froze. The salt was then removed and the solid salt was measured using LIBS and inductively coupled plasma mass spectroscopy (ICP-MS). A total of 36 samples were made, varying the CeCl3 and GdCl3 (surrogates for uranium and plutonium, respectively) concentrations from 0.5 wt% to 5 wt%. From these samples, univariate calibration curves for Ce and Gd were generated using peak area and peak intensity methods. For Ce, the Ce 551.1 nm line using the peak area provided the best calibration curve, with a limit of detection (LOD) of 0.099 wt% and a root mean squared error of cross-validation (RMSECV) of 0.197 wt%. For Gd, the best curve was generated using the peak intensities of the Gd 564.2 nm line, resulting in a LOD of 0.027 wt% and a RMSECV of 0.295 wt%. The RMSECV for the univariate cases was determined using leave-one-out cross-validation. In addition to the univariate calibration curves, partial least squares (PLS) regression was used to develop a calibration model. The PLS models yielded similar results, with RMSECV values (determined using Venetian blind cross-validation with 17% left out per split) of 0.30 wt% and 0.29 wt% for Ce and Gd, respectively. This work has shown that solid pyroprocessing salt can be qualitatively and quantitatively monitored using LIBS.
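The univariate workflow described here (linear calibration, LOD from the 3σ/slope criterion, RMSECV by leave-one-out cross-validation) can be sketched compactly. The numbers below are made-up illustrative data, not the paper's measurements:

```python
import numpy as np

# Illustrative univariate LIBS calibration: line intensity vs. concentration.
conc = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])              # wt%
intensity = np.array([120., 235., 470., 710., 940., 1180.])  # a.u. (synthetic)

slope, intercept = np.polyfit(conc, intensity, 1)

sigma_blank = 4.0                       # assumed blank standard deviation
lod = 3 * sigma_blank / slope           # 3-sigma/slope detection limit

# Leave-one-out cross-validation: refit without sample i, predict its
# concentration from its intensity, accumulate the prediction error.
residuals = []
for i in range(len(conc)):
    mask = np.arange(len(conc)) != i
    s, b = np.polyfit(conc[mask], intensity[mask], 1)
    residuals.append((intensity[i] - b) / s - conc[i])
rmsecv = float(np.sqrt(np.mean(np.square(residuals))))
```

The same loop structure generalizes to the peak-area variant; only the response vector changes.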
This work has the potential of significantly enhancing the material monitoring and safeguards of special nuclear materials in pyroprocessing.
Microcontroller-based system for estimate of calcium in serum samples.
Neelamegam, Periyaswmy; Jamaludeen, Abdul Sheriff; Ragendran, Annamalai; Murugrananthan, Krishanamoorthy
2010-01-01
In this study, a microcontroller-based control unit was designed and constructed for the estimation of serum calcium in blood samples. The proposed optoelectronic instrument used a red light-emitting diode (LED) as a light source and a photodiode as a sensor. The performance of the system was compared with that of a commercial instrument in measuring calcium ions. Quantitative analysis of calcium with arsenazo III as the colorimetric reagent was used to test the device. The calibration curve for calcium binding with arsenazo III was drawn to check the range of linearity, which was from 0.1 to 4.5 mmol L⁻¹. The limit of detection (LOD) was 0.05 mmol L⁻¹. Absorbance changes over the pH range of 2-12 were determined to optimize the assay, with maximum absorption at pH 9.0. Interferences in absorbance from monovalent (K+ and Na+) and divalent (Mg²+) cations were also studied. The results show that the system works successfully.
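The core conversion in an LED/photodiode colorimeter of this kind follows the Beer-Lambert law: absorbance from the ratio of blank and sample photodiode readings, then concentration from the linear calibration. The firmware itself is not published; this is a generic sketch with hypothetical calibration constants.

```python
import math

# Generic colorimeter arithmetic (assumed design, not the instrument firmware).
def absorbance(i_sample, i_blank):
    """Beer-Lambert absorbance from photodiode readings (blank = 100% T)."""
    return math.log10(i_blank / i_sample)

def calcium_mmol_per_l(i_sample, i_blank, slope, intercept):
    """Invert a linear calibration A = slope * c + intercept."""
    return (absorbance(i_sample, i_blank) - intercept) / slope
```

In practice the slope and intercept would come from the arsenazo III calibration curve measured over the 0.1-4.5 mmol L⁻¹ linear range reported above.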
BOREAS TE-4 Gas Exchange Data from Boreal Tree Species
NASA Technical Reports Server (NTRS)
Hall, Forrest G. (Editor); Curd, Shelaine (Editor); Collatz, G. James; Berry, Joseph A.; Gamon, John; Fredeen, Art; Fu, Wei
2000-01-01
The BOREAS TE-4 team collected steady-state gas exchange and reflectance data from several species in the BOREAS SSA during 1994 and in the NSA during 1996. Measurements of light, CO2, temperature, and humidity response curves were made by the BOREAS TE-4 team during the summers of 1994 and 1996 using intact attached leaves of boreal forest species located in the BOREAS SSA and NSA. These measurements were conducted to calibrate models used to predict photosynthesis, stomatal conductance, and leaf respiration. The 1994 and 1996 data can be used to construct plots of response functions or for parameterizing models. Parameter values are suitable for application in SiB2 (Sellers et al., 1996) or the leaf model of Collatz et al. (1991), and programs can be obtained from the investigators. The data are stored in tabular ASCII files. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
Theodoridis, Georgios
2006-01-18
Protein-drug interactions of seven common pharmaceuticals were studied using solid-phase microextraction (SPME). SPME can be used in such investigations on the condition that no analyte depletion occurs. In multi-compartment systems (e.g., a proteinaceous matrix), only the free portion of the analyte is able to partition into the SPME fiber. In addition, if no sample depletion occurs, the bound-drug/free-drug equilibria are not disturbed. In the present study, seven pharmaceuticals (quinine, quinidine, naproxen, ciprofloxacin, haloperidol, paclitaxel and nortriptyline) were assayed by SPME. For quantitative purposes, SPME was first validated in the absence of proteins. Calibration curves were constructed for each drug by HPLC-fluorescence and HPLC-UV analysis. SPME was combined off-line with HPLC, with desorption occurring in HPLC inserts filled with 200 microL of methanol. The binding of each drug to human serum albumin was studied independently. Experimental results were in agreement with literature data and ultrafiltration experiments, indicating the feasibility of the method for such bioanalytical purposes.
FTIR-ATR infrared spectroscopy for the detection of ochratoxin A in dried vine fruit.
Galvis-Sánchez, Andrea C; Barros, Antonio; Delgadillo, Ivonne
2007-11-01
A method of screening sultanas for ochratoxin A (OTA) contamination, using mid-infrared spectroscopy/Golden Gate single-reflection ATR (attenuated total reflection), is described. The main spectral characteristics of sultanas from different sources were identified in a preliminary acquisition and spectral analysis study. Principal component analysis (PCA) showed that samples of various origins had different spectral characteristics, especially in water content and the fingerprint region. A lack of reproducibility was observed in the spectra acquired on different days. However, spectral repeatability was greatly improved when water activity of the sample was set at 0.62. A calibration curve of OTA was constructed in the range 10-40 microg OTA kg(-1). Samples with OTA levels higher than 20 microg kg(-1) were separated from samples contaminated with a lower concentration (10 microg OTA kg(-1)) and from uncontaminated samples. The reported methodology is a reliable and simple technique for screening dried vine fruit for OTA.
Non-invasive method for quantitative evaluation of exogenous compound deposition on skin.
Stamatas, Georgios N; Wu, Jeff; Kollias, Nikiforos
2002-02-01
Topical application of active compounds on skin is common to both pharmaceutical and cosmetic industries. Quantification of the concentration of a compound deposited on the skin is important in determining the optimum formulation to deliver the pharmaceutical or cosmetic benefit. The most commonly used techniques to date are either invasive or not easily reproducible. In this study, we have developed a noninvasive alternative to these techniques based on spectrofluorimetry. A mathematical model based on diffusion approximation theory is utilized to correct fluorescence measurements for the attenuation caused by endogenous skin chromophore absorption. The limitation is that the compound of interest has to be either fluorescent itself or fluorescently labeled. We used the method to detect topically applied salicylic acid. Based on the mathematical model a calibration curve was constructed that is independent of endogenous chromophore concentration. We utilized the method to localize salicylic acid in epidermis and to follow its dynamics over a period of 3 d.
Homogeneity of GAFCHROMIC EBT2 film among different lot numbers
Takahashi, Yutaka; Tanaka, Atsushi; Hirayama, Takamitsu; Yamaguchi, Tsuyoshi; Katou, Hiroaki; Takahara, Keiko; Okamoto, Yoshiaki; Teshima, Teruki
2012-01-01
EBT2 film is widely used for quality assurance in radiation therapy. The purpose of this study was to investigate the homogeneity of EBT2 film among various lots and the dose dependence of heterogeneity. EBT2 film was positioned in the center of a flatbed scanner and scanned in transmission mode at 75 dpi. Homogeneity was investigated by evaluating the gray value and net optical density (netOD) with the red color channel. The dose dependence of heterogeneity in a single sheet from five lots was investigated at 0.5, 2, and 3 Gy. The maximum coefficient of variation, as evaluated by netOD in a single film, was 3.0% in one lot but no higher than 0.5% in the other lots. Dose dependence of heterogeneity was observed on evaluation by gray value but not on evaluation by netOD. These results suggest that each lot of EBT2 film should be examined before clinical use and that the dose calibration curve should be constructed using netOD. PACS number: 87 PMID:22766947
Simultaneous determination of all polyphenols in vegetables, fruits, and teas.
Sakakibara, Hiroyuki; Honda, Yoshinori; Nakagawa, Satoshi; Ashida, Hitoshi; Kanazawa, Kazuki
2003-01-29
Polyphenols, which have beneficial effects on health and occur ubiquitously in plant foods, are extremely diverse. We developed a method for simultaneously determining all the polyphenols in foodstuffs, using HPLC and a photodiode array to construct a library comprising retention times, spectra of aglycons, and respective calibration curves for 100 standard chemicals. The food was homogenized in liquid nitrogen, lyophilized, extracted with 90% methanol, and subjected to HPLC without hydrolysis. The recovery was 68-92%, and the variation in reproducibility ranged between 1 and 9%. The HPLC eluted polyphenols with good resolution within 95 min in the following order: simple polyphenols, catechins, anthocyanins, glycosides of flavones, flavonols, isoflavones and flavanones, their aglycons, anthraquinones, chalcones, and theaflavins. All the polyphenols in 63 vegetables, fruits, and teas were then examined in terms of content and class. The present method offers accuracy by avoiding the decomposition of polyphenols during hydrolysis, the ability to determine aglycons separately from glycosides, and information on simple polyphenol levels simultaneously.
Application of in operando UV/Vis spectroscopy in lithium-sulfur batteries.
Patel, Manu U M; Dominko, Robert
2014-08-01
The application of UV/Vis spectroscopy for the qualitative and quantitative determination of differences in the mechanism of lithium-sulfur battery behavior is presented. With the help of catholytes prepared from chemically synthesized stoichiometric mixtures of lithium and sulfur, calibration curves for two different types of electrolyte can be constructed. First-order derivatives of the UV/Vis spectra show five typical derivative peak positions in both electrolytes. In operando measurements show a smooth change in the UV/Vis spectra in the wavelength region between λ = 650 and 400 nm. The derivatives are in agreement with the derivative peak positions observed with the catholytes. Recalculation of the normalized reflections of UV/Vis spectra obtained in operando mode enables the formation of polysulfides and their concentrations to be followed. In this way, it is possible to distinguish differences in the mechanism of polysulfide shuttling between the two electrolytes and to correlate them with differences in capacity fading. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Preparation and characterization of monoclonal antibody against melatonin.
Soukhtanloo, Mohammad; Ansari, Mohammad; Paknejad, Maliheh; Parizadeh, Mohammad Reza; Rasaee, Mohammad Javad
2008-06-01
Anti-melatonin monoclonal antibodies (MAbs) were prepared after coupling melatonin to bovine serum albumin (BSA) by the Mannich reaction. Balb/c mice were immunized by intraperitoneal injection of the melatonin-BSA conjugate. Spleen cells producing a high titer of antibody were fused with myeloma cells of SP2/0 origin. After two limiting dilutions, the two stable clones exhibiting the best properties (AS-H10 and AS-D26) were selected for further studies. The class and subclass of the two MAbs were found to be IgG1 and IgG2a, with lambda and kappa light chains, respectively. Antibodies secreted by these two clones showed high affinity, about 10⁹ M⁻¹. Study of the specificity criteria showed that these clones had no cross-reactivity with indolic, aromatic, and imidazole-ring-containing compounds and had high specificity towards melatonin. The calibration curve was constructed with a sensitivity range of 10 ng/mL to 10 microg/mL. In conclusion, these MAbs may be useful for immunoassay of melatonin.
Use of relativistic rise in ionization chambers for measurement of high energy heavy nuclei
NASA Technical Reports Server (NTRS)
Barthelmy, S. D.; Israel, M. H.; Klarmann, J.; Vogel, J. S.
1983-01-01
A balloon-borne instrument has been constructed to measure the energy spectra of cosmic-ray heavy nuclei in the range of about 0.3 to about 100 GeV/amu. It makes use of the relativistic rise portion of the Bethe-Bloch curve in ionization chambers for energy determination in the 10- to 100-GeV/amu interval. The instrument consists of six layers of dual-gap ionization chambers for energy determination above 10 GeV/amu. Charge is determined with an NE114 scintillator and a Pilot 425 plastic Cerenkov counter. A CO2 gas Cerenkov detector (1 atm; threshold of 30 GeV/amu) calibrates the ion chambers in the relativistic rise region. The main emphasis of the instrument is the determination of the change of the ratio of iron (Z = 26) to the iron secondaries (Z = 21-25) in the energy range of 10 to 100 GeV/amu. Preliminary data from a balloon flight in the fall of 1982 from Palestine, TX, are presented.
Electromagnetic sensing for deterministic finishing gridded domes
NASA Astrophysics Data System (ADS)
Galbraith, Stephen L.
2013-06-01
Electromagnetic sensing is a promising technology for precisely locating conductive grid structures that are buried in optical ceramic domes. Burying grid structures directly in the ceramic makes gridded dome construction easier, but a practical sensing technology is required to locate the grid relative to the dome surfaces. This paper presents a novel approach being developed for locating mesh grids that are physically thin, on the order of a mil, curved, and 75% to 90% open space. Non-contact location sensing takes place over a distance of 1/2 inch. A non-contact approach was required because the presence of the ceramic material precludes touching the grid with a measurement tool. Furthermore, the ceramic which may be opaque or transparent is invisible to the sensing technology which is advantageous for calibration. The paper first details the physical principles being exploited. Next, sensor impedance response is discussed for thin, open mesh, grids versus thick, solid, metal conductors. Finally, the technology approach is incorporated into a practical field tool for use in inspecting gridded domes.
Lunar Occultations, Setting the Stage for VLTI: The Case Study of CW-Leo (aka IRC+10216)
NASA Astrophysics Data System (ADS)
Käufl, Hans Ulrich; Stecklum, Bringfried; Richter, Steffen; Richichi, Andrea
Lunar occultation allows for a sneak preview of what the VLTI will observe, both with comparable angular resolution and sensitivity. In the thermal infrared (λ ≈ 10 μm, angular resolution ≤ 0.03″) the technique has been pioneered with TIMMI on La Silla. Using this technique several dust shells around Asymptotic Giant Branch stars have been resolved. For the Carbon star CW-Leo (IRC+10 216) high S/N scans will allow for `11/2-dimensional' imaging of the source. At the present state of data reduction the light curves already provide for a very convincing proof of theories on the milli-arcsec scale. In combination with VLTI the technique allows for checks of the visibility calibration and related issues. Moreover, in the (u,v)-plane both techniques are extremely complementary, so that a merging of the data sets appears highly desirable. At La Silla and Paranal ESO a suite of instruments which can be (ab)used for this project is under construction.
Wang, Maocai; Dai, Guangming; Choo, Kim-Kwang Raymond; Jayaraman, Prem Prakash; Ranjan, Rajiv
2016-01-01
Information confidentiality is an essential requirement for cyber security in critical infrastructure. Identity-based cryptography, an increasingly popular branch of cryptography, is widely used to protect information confidentiality in the critical infrastructure sector due to its ability to compute a user's public key directly from the user's identity. However, computational requirements complicate the practical application of identity-based cryptography. In order to improve its efficiency, this paper presents an effective method to construct pairing-friendly elliptic curves with low Hamming weight 4 under embedding degree 1. Based on an analysis of the Complex Multiplication (CM) method, the soundness of our method to calculate the characteristic of the finite field is proved. Three related algorithms to construct pairing-friendly elliptic curves are then put forward. Ten elliptic curves with low Hamming weight 4 and fewer than 160 bits are presented to demonstrate the utility of our approach. Finally, the evaluation also indicates that it is more efficient to compute the Tate pairing with our curves than with those of Bertoni et al.
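"Hamming weight" here is simply the number of 1-bits in the binary expansion of the field characteristic p; a low weight makes the modular reductions inside pairing computation cheaper. A tiny illustration (the example prime form is hypothetical, not one of the paper's ten curves):

```python
# The Hamming weight of the field characteristic p is the count of 1-bits
# in its binary expansion. Low-weight primes such as 2^a + 2^b + 2^c + 1
# (weight 4) speed up modular reduction in pairing arithmetic.
def hamming_weight(n: int) -> int:
    return bin(n).count("1")

# Hypothetical example of a weight-4, sub-160-bit characteristic shape:
p = (1 << 159) + (1 << 17) + (1 << 3) + 1
```

Any characteristic produced by the paper's construction could be checked the same way before use.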
Two imaging techniques for 3D quantification of pre-cementation space for CAD/CAM crowns.
Rungruanganunt, Patchanee; Kelly, J Robert; Adams, Douglas J
2010-12-01
Internal three-dimensional (3D) "fit" of prostheses to prepared teeth is likely more important clinically than "fit" judged only at the level of the margin (i.e. marginal "opening"). This work evaluates two techniques for quantitatively defining 3D "fit", both using pre-cementation space impressions: X-ray microcomputed tomography (micro-CT) and quantitative optical analysis. Both techniques are of interest for comparison of CAD/CAM system capabilities and for documenting "fit" as part of clinical studies. Pre-cementation space impressions were taken of a single zirconia coping on its die using a low viscosity poly(vinyl siloxane) impression material. Calibration specimens of this material were fabricated between the measuring platens of a micrometre. Both calibration curves and pre-cementation space impression data sets were obtained by examination using micro-CT and quantitative optical analysis. Regression analysis was used to compare calibration curves with calibration sets. Micro-CT calibration data showed tighter 95% confidence intervals and was able to measure over a wider thickness range than for the optical technique. Regions of interest (e.g., lingual, cervical) were more easily analysed with optical image analysis and this technique was more suitable for extremely thin impression walls (<10-15μm). Specimen preparation is easier for micro-CT and segmentation parameters appeared to capture dimensions accurately. Both micro-CT and the optical method can be used to quantify the thickness of pre-cementation space impressions. Each has advantages and limitations but either technique has the potential for use as part of clinical studies or CAD/CAM protocol optimization. Copyright © 2010 Elsevier Ltd. All rights reserved.
Billard, Hélène; Simon, Laure; Desnots, Emmanuelle; Sochard, Agnès; Boscher, Cécile; Riaublanc, Alain; Alexandre-Gouabau, Marie-Cécile; Boquien, Clair-Yves
2016-08-01
Human milk composition analysis seems essential to adapt human milk fortification for preterm neonates. The Miris human milk analyzer (HMA), based on mid-infrared methodology, is convenient for a single determination of macronutrients. However, HMA measurements are not totally comparable with reference methods (RMs). The primary aim of this study was to compare HMA results with results from biochemical RMs for a large range of protein, fat, and carbohydrate contents and to establish a calibration adjustment. Human milk was fractionated into protein, fat, and skim milk, covering large ranges of protein (0-3 g/100 mL), fat (0-8 g/100 mL), and carbohydrate (5-8 g/100 mL). For each macronutrient, a calibration curve was plotted by linear regression using measurements obtained using the HMA and RMs. For fat, 53 measurements were performed, and the linear regression equation was HMA = 0.79RM + 0.28 (R² = 0.92). For true protein (29 measurements), the linear regression equation was HMA = 0.9RM + 0.23 (R² = 0.98). For carbohydrate (15 measurements), the linear regression equation was HMA = 0.59RM + 1.86 (R² = 0.95). A homogenization step with a disruptor coupled to a sonication step was necessary to obtain better accuracy of the measurements. Good repeatability (coefficient of variation < 7%) and reproducibility (coefficient of variation < 17%) were obtained after calibration adjustment. New calibration curves were developed for the Miris HMA, allowing accurate measurements over large ranges of macronutrient content. This is necessary for reliable use of this device in individualizing nutrition for preterm newborns. © The Author(s) 2015.
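Because each regression has the form HMA = slope x RM + intercept, the calibration adjustment amounts to inverting the line to map a raw analyzer reading back to the reference scale. The sketch below uses the published fat coefficients (0.79, 0.28); whether the device applies exactly this inversion internally is an assumption.

```python
# Inverting the published fat regression HMA = 0.79*RM + 0.28 (R^2 = 0.92)
# maps a raw Miris HMA fat reading (g/100 mL) back to the reference scale.
# The protein and carbohydrate adjustments follow the same pattern with
# their own slope/intercept pairs.
FAT_SLOPE, FAT_INTERCEPT = 0.79, 0.28

def adjusted_fat(hma_reading):
    """Reference-scale fat estimate from a raw HMA fat reading."""
    return (hma_reading - FAT_INTERCEPT) / FAT_SLOPE
```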
NASA Technical Reports Server (NTRS)
Dill, Loren H.; Choo, Yung K. (Technical Monitor)
2004-01-01
Software was developed to construct approximating NURBS curves for iced airfoil geometries. Users specify a tolerance that determines the extent to which the approximating curve follows the rough ice. The user can therefore smooth the ice geometry in a controlled manner, thereby enabling the generation of grids suitable for numerical aerodynamic simulations. Ultimately, this ability to smooth the ice geometry will permit studies of the effects of smoothing upon the aerodynamics of iced airfoils. The software was applied to several different types of iced airfoil data collected in the Icing Research Tunnel at NASA Glenn Research Center, and in all cases was found to efficiently generate suitable approximating NURBS curves. This method is an improvement over the current "control point formulation" of SmaggIce (v. 1.2). In this report, we present the relevant theory of approximating NURBS curves and discuss typical results of the software.
Jiang, Wenwen; Larson, Peder E Z; Lustig, Michael
2018-03-09
To correct gradient timing delays in non-Cartesian MRI while simultaneously recovering corruption-free auto-calibration data for parallel imaging, without additional calibration scans. The calibration matrix constructed from multi-channel k-space data should be inherently low-rank; this property is used to construct reconstruction kernels or sensitivity maps. Delays between the gradient hardware on different axes and the RF receive chain, which are relatively benign in Cartesian MRI (excluding EPI), lead to trajectory deviations and hence data inconsistencies for non-Cartesian trajectories. These in turn lead to a higher-rank, corrupted calibration matrix, which hampers reconstruction. Here, a method named Simultaneous Auto-calibration and Gradient delays Estimation (SAGE) is proposed that estimates the actual k-space trajectory while simultaneously recovering the uncorrupted auto-calibration data. This is done by estimating the gradient delays that result in the lowest rank of the calibration matrix. The Gauss-Newton method is used to solve the non-linear problem. The method is validated in simulations using center-out radial, projection reconstruction, and spiral trajectories. Feasibility is demonstrated on phantom and in vivo scans with center-out radial and projection reconstruction trajectories. SAGE is able to estimate gradient timing delays with high accuracy at signal-to-noise ratios as low as 5. The method is able to effectively remove artifacts resulting from gradient timing delays and restore image quality in center-out radial, projection reconstruction, and spiral trajectories. The low-rank based method introduced simultaneously estimates gradient timing delays and provides accurate auto-calibration data for improved image quality, without any additional calibration scans. © 2018 International Society for Magnetic Resonance in Medicine.
Microprocessor-controlled anodic stripping voltammeter for trace metal analysis in tap water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clem, R.G.; Park, F.W.; Kirsten, F.A.
1981-04-01
The construction and use of a portable, microprocessor-controlled anodic stripping voltammeter for on-site simultaneous analysis of copper, lead, and cadmium in tap water is discussed. The instrumental system comprises a programmable controller which permits keying in analytical parameters such as sparge time and plating time; a rotating cell for efficient oxygen removal and amalgam formation; and a magnetic tape which can be used for data storage. Analysis time can be as short as 10 to 15 minutes. The stripping analysis is based on a pre-measurement step during which the metals from a water sample are concentrated into a thin mercury film by deposition from an acetate solution of pH 4.5. The concentrated metals are then electrochemically dissolved from the film by application of a linearly increasing anodic potential. Typical peak-shaped curves are obtained. The heights of these curves are related to the concentration of metals in the water by calibration data. Results of tap water analysis showed 3 ± 1 μg/L lead, 22 ± 0.3 μg/L copper, and less than 0.2 μg/L cadmium for a Berkeley, California tap water, and 1 to 1000 μg/L Cu and 1 to 2 μg/L Pb for ten samples of Seattle, Washington tap water. Recommendations are given for a next-generation instrument system.
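The final step, relating stripping peak heights to metal concentrations through calibration data, is in its simplest form a linear calibration. A hedged sketch with invented standards (concentrations and peak heights below are illustrative only, not from the report):

```python
import numpy as np

# Hypothetical calibration standards: known Pb concentrations (ug/L) and the
# stripping peak heights (nA) they produce, assuming a linear response.
conc = np.array([0.0, 5.0, 10.0, 20.0])   # ug/L
peak = np.array([0.4, 10.4, 20.4, 40.4])  # nA

slope, intercept = np.polyfit(conc, peak, 1)

def to_concentration(peak_height):
    """Map a measured stripping peak height back to metal concentration."""
    return (peak_height - intercept) / slope

print(round(float(to_concentration(6.4)), 1))  # -> 3.0 (ug/L)
```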
Probability of identification: adulteration of American Ginseng with Asian Ginseng.
Harnly, James; Chen, Pei; Harrington, Peter De B
2013-01-01
The AOAC INTERNATIONAL guidelines for validation of botanical identification methods were applied to the detection of Asian Ginseng [Panax ginseng (PG)] as an adulterant for American Ginseng [P. quinquefolius (PQ)] using spectral fingerprints obtained by flow injection mass spectrometry (FIMS). Samples of 100% PQ and 100% PG were physically mixed to provide 90, 80, and 50% PQ. The multivariate FIMS fingerprint data were analyzed using soft independent modeling of class analogy (SIMCA) based on 100% PQ. The Q statistic, a measure of the degree of non-fit of the test samples with the calibration model, was used as the analytical parameter. FIMS was able to discriminate between 100% PQ and 100% PG, and between 100% PQ and 90, 80, and 50% PQ. The probability of identification (POI) curve was estimated based on the SD of 90% PQ. A digital model of adulteration, obtained by mathematically summing the experimentally acquired spectra of 100% PQ and 100% PG in the desired ratios, agreed well with the physical data and provided an easy and more accurate method for constructing the POI curve. Two chemometric modeling methods, SIMCA and fuzzy optimal associative memories, and two classification methods, partial least squares-discriminant analysis and fuzzy rule-building expert systems, were applied to the data. The modeling methods correctly identified the adulterated samples; the classification methods did not.
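The digital adulteration model and the SIMCA Q statistic can both be sketched with synthetic spectra: a mixture is formed by summing the pure spectra in the desired ratio, and Q is the squared residual left after projecting a test spectrum onto a principal-component model of the 100% PQ calibration set. Everything below (spectra, noise level, number of components) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_mz = 50
pq = rng.random(n_mz)   # stand-in spectrum, 100% American ginseng (PQ)
pg = rng.random(n_mz)   # stand-in spectrum, 100% Asian ginseng (PG)

def digital_mix(pq_spec, pg_spec, pq_fraction):
    """Digital adulteration model: sum the pure spectra in the desired ratio."""
    return pq_fraction * pq_spec + (1.0 - pq_fraction) * pg_spec

# Calibration set: replicate 100% PQ spectra with small measurement noise
cal = pq + 0.01 * rng.standard_normal((20, n_mz))
mean = cal.mean(axis=0)
_, _, vt = np.linalg.svd(cal - mean, full_matrices=False)
loadings = vt[:2]  # 2-component SIMCA-style model of the PQ class

def q_statistic(spectrum):
    """Squared residual after projection onto the calibration model."""
    r = spectrum - mean
    resid = r - loadings.T @ (loadings @ r)
    return float(resid @ resid)

q90 = q_statistic(digital_mix(pq, pg, 0.9))   # 90% PQ, 10% PG adulterant
q100 = q_statistic(pq)                        # pure PQ
print(q90 > q100)  # adulterated sample fits the PQ model worse -> True
```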
Predicting the disinfection efficiency range in chlorine contact tanks through a CFD-based approach.
Angeloudis, Athanasios; Stoesser, Thorsten; Falconer, Roger A
2014-09-01
In this study, three-dimensional computational fluid dynamics (CFD) models, incorporating appropriately selected kinetic models, were developed to simulate the processes of chlorine decay, pathogen inactivation and the formation of potentially carcinogenic by-products in disinfection contact tanks (CTs). Currently, the performance of CT facilities largely relies on Hydraulic Efficiency Indicators (HEIs), extracted from experimentally derived Residence Time Distribution (RTD) curves. This approach has more recently been aided with the application of CFD models, which can be calibrated to accurately predict RTDs, enabling the assessment of disinfection facilities prior to their construction. However, as long as it depends on HEIs, the CT design process does not directly take into consideration the disinfection biochemistry which needs to be optimized. The main objective of this study is to address this issue by refining the modelling practices to simulate some reactive processes of interest, while acknowledging the uneven contact time stemming from the RTD curves. Initially, the hydraulic performances of seven CT design variations were reviewed through available experimental and computational data. In turn, the same design configurations were tested using numerical modelling techniques, featuring kinetic models that enable the quantification of disinfection operational parameters. Results highlight that the optimization of the hydrodynamic conditions facilitates a more uniform disinfectant contact time, which corresponds to greater levels of pathogen inactivation and a more controlled by-product accumulation. Copyright © 2014 Elsevier Ltd. All rights reserved.
The magnetodynamic filters in monitoring the contaminants from polluted water systems (abstract)
NASA Astrophysics Data System (ADS)
Swarup, R.; Singh, Bharat
1994-05-01
The magnetic interaction seems to influence the ``structural memory'' of water systems, which is quenched in ideally pure water. The sedentary lifetime of each water molecule is extremely short (10⁻¹⁰ s), and its molecular structures may be influenced by physical effects such as magnetic field treatment, its space-time gradients, water velocity, pressure drop, etc. in the interpolar space, so as to yield a noticeable temporal magnetopotential development characterizing the properties of homogeneous and heterogeneous water systems. This principle is also extended to prevailing water systems, which always contain various impurities, gases, molecules, ions, and microscopic particles in random order. Still, the existence of structural memory may be verified by reliable experimental data. The magnetopotential curves of different water systems inform the design of, and a software package for constructing, magnetodynamic filters superior to existing pollution-study techniques such as remote sensing, muon spin resonance, laser spectroscopy, nuclear techniques, the gamma-ray peak efficiency method, trace elemental characterization due to NBS, neutron activation analysis, and graphite furnace atomic absorption spectrometry. The physicochemical characteristics of water, calibrated in terms of magnetopotential curves, change with the removal of dissolved gases, impurities, thermal activation, etc., and algae, bacteria, phosphates, etc. have been removed at a rapid rate. The magnetodynamic study of Ganga water proves it to be an extremely pure and highly resourced fluid.
Visible and near-infrared imaging spectrometer (VNIS) for in-situ lunar surface measurements
NASA Astrophysics Data System (ADS)
He, Zhiping; Xu, Rui; Li, Chunlai; Lv, Gang; Yuan, Liyin; Wang, Binyong; Shu, Rong; Wang, Jianyu
2015-10-01
The Visible and Near-Infrared Imaging Spectrometer (VNIS) onboard China's Chang'E 3 lunar rover is capable of simultaneously acquiring in situ full reflectance spectra of objects on the lunar surface and performing calibrations. VNIS uses non-collinear acousto-optic tunable filters and consists of a VIS/NIR imaging spectrometer (0.45-0.95 μm), a shortwave IR spectrometer (0.9-2.4 μm), and a calibration unit with dust-proofing functionality. After undergoing a full program of pre-flight ground tests, calibrations, and environmental simulation tests, VNIS entered orbit around the Moon aboard Chang'E 3 on 6 December 2013 and landed on 14 December 2013. The first operations of VNIS were conducted on 23 December 2013 and included several explorations and calibrations to obtain spectral images and spectral reflectance curves of the lunar soil in the Imbrium region. These measurements include the first in situ spectral imaging detections on the lunar surface. This paper describes the VNIS characteristics, lab calibration, and in situ measurements and calibration on the lunar surface.
Calibration of areal surface topography measuring instruments
NASA Astrophysics Data System (ADS)
Seewig, J.; Eifler, M.
2017-06-01
The ISO standards related to the calibration of areal surface topography measuring instruments are the ISO 25178-6xx series, which defines the relevant metrological characteristics for the calibration of different measuring principles, and the ISO 25178-7xx series, which defines the actual calibration procedures. As the field of areal measurement is, however, not yet fully standardized, there are still open questions to be addressed which are subject to current research. Based on this, selected research results of the authors in this area are presented. This includes the design and fabrication of areal material measures. For this topic, two examples are presented: the direct laser writing of a stepless material measure for the calibration of the height axis, which is based on the Abbott curve, and the manufacturing of a Siemens star for the determination of the lateral resolution limit. Based on these results, a new definition for the resolution criterion, the small-scale fidelity, which is still under discussion, is presented as well. Additionally, a software solution for automated calibration procedures is outlined.
DOT National Transportation Integrated Search
2006-02-01
Constructing a pavement that will perform well throughout its expected design life is the main goal of any highway agency. The relationship between construction parameters and pavement life, defined by structural models, can be described using materi...
Sediment calibration strategies of Phase 5 Chesapeake Bay watershed model
Wu, J.; Shenk, G.W.; Raffensperger, Jeff P.; Moyer, D.; Linker, L.C.; ,
2005-01-01
Sediment is a primary constituent of concern for Chesapeake Bay due to its effect on water clarity. Accurate representation of sediment processes and behavior in the Chesapeake Bay watershed model is critical for developing sound load reduction strategies. Sediment calibration remains one of the most difficult components of watershed-scale assessment. This is especially true for the Chesapeake Bay watershed model, given the size of the watershed being modeled and the complexity involved in land and stream simulation processes. To obtain the best calibration, the Chesapeake Bay Program has developed four different strategies for sediment calibration of the Phase 5 watershed model: 1) comparing observed and simulated sediment rating curves for different parts of the hydrograph; 2) analyzing change of bed depth over time; 3) relating deposition/scour to total annual sediment loads; and 4) calculating "goodness-of-fit" statistics. These strategies allow a more accurate sediment calibration, and also provide insightful information on sediment processes and behavior in the Chesapeake Bay watershed.
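Strategy 1 above compares observed and simulated sediment rating curves; a rating curve is commonly fitted as a power law, which is linear in log-log space. A minimal sketch with invented discharge/load pairs (the power-law form is a standard convention, not a detail given in the abstract):

```python
import numpy as np

# A sediment rating curve is commonly a power law L = a * Q**b, fitted as a
# straight line in log-log space. Values below are illustrative only.
q_obs = np.array([5.0, 20.0, 80.0, 300.0])  # discharge
l_obs = np.array([2.0, 14.0, 95.0, 600.0])  # sediment load

b, log_a = np.polyfit(np.log(q_obs), np.log(l_obs), 1)
a = np.exp(log_a)

def rating_curve(q):
    """Predicted sediment load at discharge q from the fitted curve."""
    return a * q ** b

print(round(float(b), 2))  # exponent of the fitted rating curve
```

Observed and simulated curves fitted this way can then be compared parameter by parameter for different parts of the hydrograph.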
NASA Astrophysics Data System (ADS)
Sun, Li-wei; Ye, Xin; Fang, Wei; He, Zhen-lei; Yi, Xiao-long; Wang, Yu-peng
2017-11-01
A hyper-spectral imaging spectrometer has high spatial and spectral resolution. Its radiometric calibration requires knowledge of the sources used at high spectral resolution. In order to satisfy this requirement, an on-orbit radiometric calibration method is designed in this paper. The calibration chain is based on high-accuracy spectral inversion of the calibration light source. We wrote a genetic algorithm program to optimize the channel design of the transfer radiometer while considering the degradation of the halogen lamp, thus realizing high-accuracy inversion of the spectral curve over the whole working time. The experimental results show that the average root mean squared error is 0.396%, the maximum root mean squared error is 0.448%, and the relative errors at all wavelengths are within 1% in the spectral range from 500 nm to 900 nm during 100 h of operating time. The design lays a foundation for the high-accuracy calibration of imaging spectrometers.
Importance of Calibration Method in Central Blood Pressure for Cardiac Structural Abnormalities.
Negishi, Kazuaki; Yang, Hong; Wang, Ying; Nolan, Mark T; Negishi, Tomoko; Pathan, Faraz; Marwick, Thomas H; Sharman, James E
2016-09-01
Central blood pressure (CBP) independently predicts cardiovascular risk, but calibration methods may affect the accuracy of central systolic blood pressure (CSBP). Standard central systolic blood pressure (Stan-CSBP) from peripheral waveforms is usually derived with calibration using brachial SBP and diastolic BP (DBP). However, calibration using oscillometric mean arterial pressure (MAP) and DBP (MAP-CSBP) is purported to provide a more accurate representation of true invasive CSBP. This study sought to determine which derived CSBP could more accurately discriminate cardiac structural abnormalities. A total of 349 community-based patients with risk factors (71 ± 5 years, 161 males) had CSBP measured by brachial oscillometry (Mobil-O-Graph, IEM GmbH, Stolberg, Germany) using 2 calibration methods: MAP-CSBP and Stan-CSBP. Left ventricular hypertrophy (LVH) and left atrial dilatation (LAD) were measured based on standard guidelines. MAP-CSBP was higher than Stan-CSBP (149 ± 20 vs. 128 ± 15 mm Hg, P < 0.0001). Although they were modestly correlated (rho = 0.74, P < 0.001), the Bland-Altman plot demonstrated a large bias (21 mm Hg) and limits of agreement (24 mm Hg). In receiver operating characteristic (ROC) curve analyses, MAP-CSBP significantly better discriminated LVH compared with Stan-CSBP (area under the curve (AUC) 0.66 vs. 0.59, P = 0.0063) and brachial SBP (0.62, P = 0.027). Continuous net reclassification improvement (NRI) (P < 0.001) and integrated discrimination improvement (IDI) (P < 0.001) corroborated superior discrimination of LVH by MAP-CSBP. Similarly, MAP-CSBP better distinguished LAD than Stan-CSBP (AUC 0.63 vs. 0.56, P = 0.005) and conventional brachial SBP (0.58, P = 0.006), whereas Stan-CSBP provided no better discrimination than conventional brachial BP (P = 0.09). CSBP is calibration dependent, and when oscillometric MAP and DBP are used, the derived CSBP is a better discriminator of cardiac structural abnormalities.
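The ROC comparison underlying these results reduces to computing the area under the curve for each calibration method against the same outcome. AUC equals the Mann-Whitney probability that a randomly chosen case outscores a randomly chosen non-case; a self-contained sketch with toy numbers (not the study's data):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive outscores a randomly
    chosen negative, with ties counted as half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: one pressure measure separates LVH cases better than another
lvh = np.array([0, 0, 0, 1, 1, 1], dtype=bool)
map_csbp = np.array([130.0, 140.0, 150.0, 155.0, 165.0, 175.0])
stan_csbp = np.array([120.0, 150.0, 125.0, 130.0, 145.0, 140.0])

a1 = auc(map_csbp, lvh)
a2 = auc(stan_csbp, lvh)
print(a1 > a2)  # the better discriminator has the larger AUC -> True
```

Significance testing of the AUC difference (as in the study) needs a paired method such as DeLong's test, which is beyond this sketch.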
© American Journal of Hypertension, Ltd 2016. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
A simple topography-driven, calibration-free runoff generation model
NASA Astrophysics Data System (ADS)
Gao, H.; Birkel, C.; Hrachowitz, M.; Tetzlaff, D.; Soulsby, C.; Savenije, H. H. G.
2017-12-01
Determining the amount of runoff generated from rainfall occupies a central place in rainfall-runoff modelling. Moreover, reading landscapes and developing calibration-free runoff generation models that adequately reflect land surface heterogeneities remain the focus of much hydrological research. In this study, we created a new method to estimate runoff generation, the HAND-based Storage Capacity curve (HSC), which uses a topographic index (HAND, Height Above the Nearest Drainage) to identify hydrological similarity and the saturated areas of catchments. We then coupled the HSC model with the Mass Curve Technique (MCT) method to estimate root zone storage capacity (SuMax), and obtained the calibration-free runoff generation model HSC-MCT. Both models (HSC and HSC-MCT) allow us to estimate runoff generation and simultaneously visualize the spatial dynamics of the saturated area. We tested the two models in the data-rich Bruntland Burn (BB) experimental catchment in Scotland, which has an unusual time series of the field-mapped saturated area extent. The models were subsequently tested in 323 MOPEX (Model Parameter Estimation Experiment) catchments in the United States. HBV and TOPMODEL were used as benchmarks. We found that the HSC performed better than TOPMODEL, which is based on the topographic wetness index (TWI), in reproducing the spatio-temporal pattern of the observed saturated areas in the BB catchment. The HSC also outperformed HBV and TOPMODEL in the MOPEX catchments for both calibration and validation. Despite having no calibrated parameters, the HSC-MCT model also performed comparably well with the calibrated HBV and TOPMODEL, highlighting both the robustness of the HSC model in describing the spatial distribution of the root zone storage capacity and the efficiency of the MCT method in estimating SuMax.
Moreover, the HSC-MCT model facilitated effective visualization of the saturated area, which has the potential to be used for broader geoscience studies beyond hydrology.
Chen, Yizheng; Qiu, Rui; Li, Chunyan; Wu, Zhen; Li, Junli
2016-03-07
In vivo measurement is a main method of internal contamination evaluation, particularly for large numbers of people after a nuclear accident. Before practical application, it is necessary to obtain the counting efficiency of the detector by calibration. Virtual calibration based on Monte Carlo simulation usually uses a reference human computational phantom, and the morphological difference between the monitored personnel and the calibrated phantom may lead to deviations in the counting efficiency. Therefore, a phantom library containing a wide range of heights and total body masses is needed. In this study, a Chinese reference adult male polygon surface (CRAM_S) phantom was constructed based on the CRAM voxel phantom, with the organ models adjusted to match the Chinese reference data. The CRAM_S phantom was then transformed to a sitting posture for convenience in practical monitoring. Referring to the mass and height distribution of the Chinese adult male, a phantom library containing 84 phantoms was constructed by deforming the reference surface phantom. Phantoms in the library have 7 different heights ranging from 155 cm to 185 cm, with 12 phantoms of different total body masses at each height. As an example of application, organ-specific and total counting efficiencies of Ba-133 were calculated using the MCNPX code, with two series of phantoms selected from the library. The influence of morphological variation on the counting efficiency was analyzed. The results show that using only the reference phantom in virtual calibration may lead to an error of 68.9% in total counting efficiency. Thus the influence of morphological difference on virtual calibration can be greatly reduced by using a phantom library with a wide range of masses and heights instead of a single reference phantom.
NASA Astrophysics Data System (ADS)
Liu, Hai-Zheng; Shi, Ze-Lin; Feng, Bin; Hui, Bin; Zhao, Yao-Hong
2016-03-01
Integrating microgrid polarimeters on the focal plane array (FPA) of an infrared detector causes non-uniformity of the polarization response. In order to reduce the effect of polarization non-uniformity, this paper constructs an experimental setup for capturing raw flat-field images and proposes a procedure for acquiring a non-uniformity calibration (NUC) matrix and calibrating raw polarization images. The proposed procedure treats the incident radiation as a polarization vector and yields a calibration matrix for each pixel. Both our matrix calibration and a two-point calibration are applied to our mid-wavelength infrared (MWIR) polarization imaging system with integrated microgrid polarimeters. Compared with two-point calibration, our matrix calibration reduces non-uniformity by 30-40% under flat-field test conditions with polarization. The outdoor scene observation experiment indicates that our calibration can effectively reduce polarization non-uniformity and improve the image quality of our MWIR polarization imaging system.
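The two-point calibration used as the baseline above assigns each pixel its own gain and offset from flat-field frames at two radiance levels; the paper's matrix method generalizes this to a per-pixel matrix acting on the polarization vector. A sketch of the simpler two-point correction on a simulated detector (all shapes and values are assumed, not from the paper):

```python
import numpy as np

# Simulated non-uniform detector: each pixel applies its own gain and offset.
rng = np.random.default_rng(1)
gain_true = 1.0 + 0.1 * rng.standard_normal((4, 4))
offset_true = 5.0 * rng.random((4, 4))

def capture(radiance):
    """Raw frame for a uniform (flat-field) input radiance."""
    return gain_true * radiance + offset_true

# Two-point NUC: flat-field frames at two known radiance levels give a
# per-pixel correction gain and offset.
low, high = capture(10.0), capture(100.0)
gain = (100.0 - 10.0) / (high - low)
offset = 10.0 - gain * low

raw = capture(55.0)
corrected = gain * raw + offset
print(np.allclose(corrected, 55.0))  # exact for a perfectly linear pixel -> True
```

For real detectors the response is only approximately linear, which is one reason a per-pixel matrix calibration can outperform this baseline.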
Measuring the radon concentration in air
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aten, J.B.T.; Bierhuizen, H.W.J.; Vanhoek, L.P.
1975-01-01
A simple transportable apparatus for measurement of the radon concentration in the air of a workshop was developed. An air sample is drawn through a filter and the decay curve of the alpha activity is measured. The counting rate 40 min after sampling gives an indication of the radon activity. The apparatus was calibrated by analyzing an analogous decay curve obtained with a large filter and a large air sample, the activity being measured with an anti-coincidence counter. (GRA)
NASA Technical Reports Server (NTRS)
Baird, James K.
1987-01-01
For the purpose of determining diffusion coefficients as required for electrodeposition studies and other applications, a diaphragm cell and an isothermal water bath were constructed. The calibration of the system is discussed. On the basis of three calibration runs on the diaphragm cell, researchers concluded that the cell constant β equals 0.12 cm⁻². Other calibration runs in progress should permit the cell constant to be determined with an accuracy of one percent.
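The cell constant β enters the standard diaphragm-cell working equation, which is not quoted in the summary but is the usual route from calibration to diffusion coefficients: the concentration difference across the diaphragm decays exponentially, giving

```latex
D = \frac{1}{\beta t}\,\ln\frac{\Delta c(0)}{\Delta c(t)},
\qquad
\Delta c(t) = c_{\text{bottom}}(t) - c_{\text{top}}(t),
```

so that β carries units of cm⁻², consistent with the reported value of 0.12 cm⁻². Calibration runs with a species of known D determine β; subsequent runs invert the same equation for unknown diffusion coefficients.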
The photomultiplier tube calibration system of the MicroBooNE experiment
Conrad, J.; Jones, B. J. P.; Moss, Z.; ...
2015-06-03
Here, we report on the design and construction of an LED-based fiber calibration system for large liquid argon time projection detectors. This system was developed to calibrate the optical systems of the MicroBooNE experiment. As well as detailing the materials and installation procedure, we provide technical drawings and specifications so that the system may be easily replicated in future LArTPC detectors.
NASA Astrophysics Data System (ADS)
Barr, D.; Gilpatrick, J. D.; Martinez, D.; Shurter, R. B.
2004-11-01
The Los Alamos Neutron Science Center (LANSCE) facility at Los Alamos National Laboratory has constructed both an Isotope Production Facility (IPF) and a Switchyard Kicker (XDK) as additions to the H+ and H- accelerator. These additions contain eleven Beam Position Monitors (BPMs) that measure the beam's position throughout the transport. The analog electronics within each processing module determines the beam position using the log-ratio technique. For system reliability, calibrations compensate for various temperature drifts and other imperfections in the processing electronics components. Additionally, verifications are periodically implemented by a PC running a National Instruments LabVIEW virtual instrument (VI) to verify continued system and cable integrity. The VI communicates with the processor cards via a PCI/MXI-3 VXI-crate communication module. Previously, accelerator operators performed BPM system calibrations typically once per day while beam was explicitly turned off. One of this new measurement system's unique achievements is its automated calibration and verification capability. Taking advantage of the pulsed nature of the LANSCE-facility beams, the integrated electronics hardware and VI perform calibration and verification operations between beam pulses without interrupting production beam delivery. The design, construction, and performance results of the automated calibration and verification portion of this position measurement system will be the topic of this paper.
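The log-ratio technique mentioned above derives beam position from the signals on opposing pickup electrodes: the logarithm of their amplitude ratio is nearly linear in displacement near the pipe axis. A minimal sketch (the sensitivity constant k is a placeholder standing in for the value fixed by the calibrations the paper describes):

```python
import numpy as np

def log_ratio_position(top, bottom, k=1.0):
    """Log-ratio position estimate from opposing electrode signals:
    x = k * log(A/B), approximately linear in beam displacement near the
    pipe centre. k is an assumed sensitivity constant set by calibration."""
    return k * np.log(np.asarray(top) / np.asarray(bottom))

# Equal signals -> beam centred; a larger top signal -> positive displacement
print(log_ratio_position(1.0, 1.0))        # -> 0.0
print(log_ratio_position(2.0, 1.0) > 0.0)  # -> True
```

Calibration sweeps map the log-ratio output to physical displacement and compensate electronics drifts, which is what the automated between-pulse procedure maintains.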
Wang, Hai-Feng; Lu, Hai; Li, Jia; Sun, Guo-Hua; Wang, Jun; Dai, Xin-Hua
2014-02-01
The present paper reports differential scanning calorimetry-thermogravimetry curves and infrared (IR) absorption spectrometry under a temperature program, analyzed with a combined simultaneous thermal analysis-IR spectrometer. The gaseous products of coal were identified by IR spectrometry. The paper emphasizes the combustion at high temperature-IR absorption method, a convenient and accurate method which measures the sulfur content of coal indirectly through determination of the sulfur dioxide content in the mixed gaseous products by IR absorption. It was demonstrated that, when the instrument was calibrated with various pure sulfur-containing compounds and certified reference materials (CRMs) for coal, there was a large deviation in the measured sulfur contents, indicating that the difference in chemical speciation of sulfur between the CRMs and the analyte results in a systematic error. The time-IR absorption curve was used to analyze the composition of sulfur at low and high temperatures, and the sulfur content of the coal sample was then determined using a coal CRM with a close composition of sulfur, eliminating the systematic error due to the difference in chemical speciation of sulfur between the CRM and the analyte. In addition, in this combustion at high temperature-IR absorption method, the masses of the CRM and the analyte were adjusted to equalize the sulfur masses, and the CRM and the analyte were then measured alternately. This single-point calibration method reduced the effect of IR detector drift and improved the repeatability of the results, compared with the conventional multi-point calibration method using calibration curves of signal intensity vs. sulfur mass. The sulfur contents and standard deviations of an anthracite coal and a bituminous coal with low sulfur content determined by this modified method were 0.345% (0.004%) and 0.372% (0.008%), respectively.
The uncertainty (U, k = 2) of the sulfur contents of the two coal samples was evaluated to be 0.019% and 0.021%, respectively. Two main modifications, namely the calibration using a coal CRM with a similar composition of low-temperature and high-temperature sulfur, and the single-point calibration alternating CRM and analyte, endow the combustion at high temperature-IR absorption method with an accuracy obviously better than that of the ASTM method. Therefore, this modified method has good potential for the analysis of sulfur content.
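The single-point calibration alternating CRM and analyte reduces, for matched sulfur masses, to a simple ratio of IR signals scaled by the certified content. A sketch with invented numbers (not values from the paper):

```python
# Single-point calibration against a mass-matched CRM: the analyte sulfur
# content follows from the ratio of IR signals and the ratio of sample masses.

def sulfur_content(signal_sample, mass_sample, signal_crm, mass_crm, crm_content):
    """Content = certified content x (signal ratio) x (mass ratio)."""
    return crm_content * (signal_sample / signal_crm) * (mass_crm / mass_sample)

# Illustrative run: CRM certified at 0.350 % S, equal sample masses
result = sulfur_content(signal_sample=0.986, mass_sample=1.000,
                        signal_crm=1.000, mass_crm=1.000, crm_content=0.350)
print(round(result, 3))  # -> 0.345
```

Alternating CRM and analyte measurements, as the paper describes, means any slow detector drift affects both signals similarly and largely cancels in the ratio.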
1985-09-01
Calibration 44; 3.1.3 The SPIDER Calibration 45; 3.1.4 Thermistor Temperature Detector Calibration 45; 3.2 Amplifier Calibration 45; 3.2.1... of a material with high conductivity and preferably high permeability. For the bunker construction, welded one-inch soft-steel plates were chosen for... Kovar flanges (metal-to-ceramic seal). The external plates are heliarc-welded to the flanges. The external plate facing away from the incoming
Refinement of moisture calibration curves for nuclear gage.
DOT National Transportation Integrated Search
1973-01-01
Over the last three years the Virginia Highway Research Council has directed a research effort toward improving the method of determining the moisture content of soils with a nuclear gage. The first task in this research was the determination of the ...
Validation of pavement performance curves for the mechanistic-empirical pavement design guide.
DOT National Transportation Integrated Search
2009-02-01
The objective of this research is to determine whether the nationally calibrated performance models used in the Mechanistic-Empirical : Pavement Design Guide (MEPDG) provide a reasonable prediction of actual field performance, and if the desired accu...
SENSITIVITY ANALYSIS OF THE USEPA WINS PM 2.5 SEPARATOR
Factors affecting the performance of the US EPA WINS PM2.5 separator have been systematically evaluated. In conjunction with the separator's laboratory-calibrated penetration curve, analysis of the governing equation that describes conventional impactor performance was used to ...
A Simple Experiment Demonstrating the Relationship between Response Curves and Absorption Spectra.
ERIC Educational Resources Information Center
Li, Chia-yu
1984-01-01
Describes an experiment for recording two individual spectrophotometer response curves. The two curves are directly related to the power of transmitted beams that pass through a solvent and solution. An absorption spectrum of the solution can be constructed from the calculated ratios of the curves as a function of wavelength. (JN)
Towards a global network of gamma-ray detector calibration facilities
NASA Astrophysics Data System (ADS)
Tijs, Marco; Koomans, Ronald; Limburg, Han
2016-09-01
Gamma-ray logging tools are applied worldwide. At various locations, calibration facilities are used to calibrate these gamma-ray logging systems. Several attempts have been made to cross-correlate well-known calibration pits, but this cross-correlation does not include calibration facilities in Europe or private company calibration facilities. Our aim is to set up a framework that makes it possible to interlink all calibration facilities worldwide by using `tools of opportunity' - tools that have been calibrated in different calibration facilities, whether this usage was on a coordinated basis or by coincidence. To compare the measurements of different tools, it is important to understand the behaviour of the tools in the different calibration pits. Borehole properties, such as diameter, fluid, casing and probe diameter, strongly influence the outcome of gamma-ray borehole logging. Logs need to be properly calibrated and compensated for these borehole properties in order to obtain in-situ grades or to do cross-hole correlation. Some tool providers provide tool-specific correction curves for this purpose. Others rely on reference measurements against sources of known radionuclide concentration and geometry. In this article, we present an attempt to set up a framework for transferring `local' calibrations to be applied `globally'. This framework includes corrections for any geometry and detector size to give absolute concentrations of radionuclides from borehole measurements. This model is used to compare measurements in the calibration pits of Grand Junction, located in the USA; Adelaide (previously known as AMDEL), located in Adelaide, Australia; and Stonehenge, located at Medusa Explorations BV in the Netherlands.
NASA Technical Reports Server (NTRS)
Cohen, Martin; Witteborn, Fred C.; Walker, Russell G.; Bregman, Jesse D.; Wooden, Diane H.
1995-01-01
Five new absolutely calibrated continuous stellar spectra from 1.2 to 35 microns are presented. The spectra were constructed as far as possible from actual observed spectral fragments taken from the ground, the Kuiper Airborne Observatory (KAO), and the IRAS Low Resolution Spectrometer (LRS). These stars (beta Peg, alpha Boo, beta And, beta Gem, and alpha Hya) augment the authors' previously constructed complete absolutely calibrated spectrum for alpha Tau. All these spectra have a common calibration pedigree. The wavelength coverage is ideal for calibration of many existing and proposed ground-based, airborne, and satellite sensors.
NASA Technical Reports Server (NTRS)
Woodfill, J. R.; Thomson, F. J.
1979-01-01
The paper deals with the design, construction, and applications of an active/passive multispectral scanner combining lasers with conventional passive remote sensors. An application investigation was first undertaken to identify remote sensing applications where active/passive scanners (APS) would provide improvement over current means. Calibration techniques and instrument sensitivity are evaluated to provide predictions of the APS's capability to meet user needs. A preliminary instrument design was developed from the initial conceptual scheme. A design review settled the issues of worthwhile applications, calibration approach, hardware design, and laser complement. Next, a detailed mechanical design was drafted and construction of the APS commenced. The completed APS was tested and calibrated in the laboratory, then installed in a C-47 aircraft and ground tested. Several flight tests completed the test program.
Effects of the Venusian atmosphere on incoming meteoroids and the impact crater population
NASA Technical Reports Server (NTRS)
Herrick, Robert R.; Phillips, Roger J.
1994-01-01
The dense atmosphere on Venus prevents craters smaller than about 2 km in diameter from forming and also causes the formation of several crater fields and multiple-floored craters (collectively referred to as multiple impacts). A model has been constructed that simulates the behavior of a meteoroid in a dense planetary atmosphere. This model was then combined with an assumed flux of incoming meteoroids in an effort to reproduce the size-frequency distribution of impact craters and several aspects of the population of the crater fields and multiple-floored craters on Venus. The modeling indicates that it is plausible that the observed rollover in the size-frequency curve for Venus is due entirely to atmospheric effects on incoming meteoroids. However, there must be substantial variation in the density and behavior of incoming meteoroids in the atmosphere. Lower-density meteoroids must be less likely to survive atmospheric passage than simple density differences can account for. Consequently, it is likely that the percentage of craters formed by high-density meteoroids is very high at small crater diameters, and this percentage decreases substantially with increasing crater diameter. Overall, high-density meteoroids created a disproportionately large percentage of the impact craters on Venus. Also, our results indicate that a process such as meteoroid flattening or atmospheric explosion of meteoroids must be invoked to prevent craters smaller than the observed minimum diameter (2 km) from forming. In terms of using the size-frequency distribution to age-date the surface, the model indicates that the observed population has at least 75% of the craters over 32 km in diameter that would be expected on an atmosphereless Venus; thus, this part of the curve is most suitable for comparison with calibrated curves for the Moon.
Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space
Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred
2016-01-01
Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low-quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUP, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method.
For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
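The idea of uniform sampling of the genetic space can be sketched as follows; the 2-D coordinates (e.g. the first two principal components of a marker matrix), the grid resolution and the data are invented, and ties are broken by keeping the first genotype seen in each cell.

```python
def uniform_sample(coords, n_cells=4):
    """Divide the 2-D genetic space into an n_cells x n_cells grid and keep
    at most one genotype per occupied cell, so the training set covers the
    space evenly instead of following the population-structure density.
    Assumes the coordinates are not all identical on either axis."""
    xmin, xmax = min(c[0] for c in coords), max(c[0] for c in coords)
    ymin, ymax = min(c[1] for c in coords), max(c[1] for c in coords)
    chosen = {}
    for idx, (x, y) in enumerate(coords):
        cx = min(int((x - xmin) / (xmax - xmin) * n_cells), n_cells - 1)
        cy = min(int((y - ymin) / (ymax - ymin) * n_cells), n_cells - 1)
        chosen.setdefault((cx, cy), idx)   # first genotype in the cell wins
    return sorted(chosen.values())

# Invented coordinates: two tight clusters plus one isolated genotype
coords = [(0.0, 0.0), (0.1, 0.1), (0.05, 0.0),
          (1.0, 1.0), (0.9, 1.0), (0.0, 1.0)]
picked = uniform_sample(coords, n_cells=2)
```

Under random sampling the dense cluster would dominate the training set; the grid keeps one representative per region, which is the intuition behind the uniform-coverage result reported above.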
A new calibration code for the JET polarimeter.
Gelfusa, M; Murari, A; Gaudio, P; Boboc, A; Brombin, M; Orsitto, F P; Giovannozzi, E
2010-05-01
An equivalent model of the JET polarimeter is presented, which overcomes the drawbacks of previous versions of the fitting procedures used to provide calibrated results. First, the signal-processing electronics has been simulated, to confirm that it is still working within the original specifications. Then the effective optical path of both the vertical and lateral chords has been implemented to produce the calibration curves. This principled approach to the model has yielded a single procedure that can be applied after any manual calibration and remains valid until the following one. The optical model of the chords is then applied to derive the plasma measurements. The results are in good agreement with the estimates of the most advanced full-wave propagation code available and have been benchmarked against other diagnostics. The devised procedure has proved to work properly also for the most recent campaigns and high-current experiments.
On the Analysis and Construction of the Butterfly Curve Using "Mathematica"[R]
ERIC Educational Resources Information Center
Geum, Y. H.; Kim, Y. I.
2008-01-01
The butterfly curve was introduced by Temple H. Fay in 1989 and is defined by the polar equation r = e^(cos θ) - 2 cos(4θ) + sin^5(θ/12). In this article, we develop the mathematical model of the butterfly curve and analyse its geometric properties. In addition, we draw the butterfly curve and…
Results of the 1999 JPL Balloon Flight Solar Cell Calibration Program
NASA Technical Reports Server (NTRS)
Anspaugh, B. E.; Mueller, R. L.; Weiss, R. S.
2000-01-01
The 1999 solar cell calibration balloon flight campaign consisted of two flights, on June 14, 1999, and July 6, 1999. All objectives of the flight program were met. Fifty-seven modules were carried to an altitude of approximately 120,000 ft (36.6 km). Full I-V curves were measured on five of these modules, and output at a fixed load was measured on forty-three modules (forty-five cells), with some modules repeated on the second flight. These data were corrected to 28 C and to 1 AU (1.496 × 10^8 km). The calibrated cells have been returned to their owners and can now be used as reference standards in simulator testing of cells and arrays.
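The two corrections mentioned above (to 1 AU and to 28 C) can be sketched as follows; the inverse-square scaling to 1 AU is standard practice, while the linear temperature coefficient and its value are invented placeholders, not the actual JPL procedure or cell-specific coefficients.

```python
AU_KM = 1.496e8  # 1 AU in km, as quoted in the abstract

def correct_to_standard(isc_measured, sun_distance_km, cell_temp_c,
                        alpha_per_c=0.0005):
    """Scale a measured short-circuit current to 1 AU via the inverse-square
    law, then to 28 C with a linear temperature coefficient (alpha is an
    invented illustrative value, per degree C)."""
    i_1au = isc_measured * (sun_distance_km / AU_KM) ** 2
    return i_1au * (1.0 + alpha_per_c * (28.0 - cell_temp_c))

# At exactly 1 AU and 28 C the correction is the identity
ref = correct_to_standard(100.0, AU_KM, 28.0)
# At twice the distance the same cell output maps to four times the current
far = correct_to_standard(100.0, 2 * AU_KM, 28.0)
```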
Attaining the Photometric Precision Required by Future Dark Energy Projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stubbs, Christopher
2013-01-21
This report outlines our progress towards achieving the high-precision astronomical measurements needed to derive improved constraints on the nature of the Dark Energy. Our approach to obtaining higher precision flux measurements has three basic components: 1) determination of the optical transmission of the atmosphere, 2) mapping out the instrumental photon sensitivity function vs. wavelength, calibrated by referencing the measurements to the known sensitivity curve of a high-precision silicon photodiode, and 3) using the self-consistency of the spectrum of stars to achieve precise color calibrations.
Demystifying liver iron concentration measurements with MRI.
Henninger, B
2018-06-01
This Editorial comment refers to the article: Non-invasive measurement of liver iron concentration using 3-Tesla magnetic resonance imaging: validation against biopsy. D'Assignies G, et al. Eur Radiol Nov 2017. • MRI is a widely accepted reliable tool to determine liver iron concentration. • MRI cannot measure iron directly; it needs calibration. • Calibration curves for 3.0T are rare in the literature. • The study by d'Assignies et al. provides valuable information on this topic. • Evaluation of liver iron overload should no longer be restricted to experts.
NASA Technical Reports Server (NTRS)
Haynie, C. C.
1980-01-01
Simple gage, used with template, can help inspectors determine whether three-dimensional curved surface has correct contour. Gage was developed as aid in explosive forming of Space Shuttle emergency-escape hatch. For even greater accuracy, wedge can be made of metal and calibrated by indexing machine.
Information Management Systems in the Undergraduate Instrumental Analysis Laboratory.
ERIC Educational Resources Information Center
Merrer, Robert J.
1985-01-01
Discusses two applications of Laboratory Information Management Systems (LIMS) in the undergraduate laboratory. They are the coulometric titration of thiosulfate with electrogenerated triiodide ion and the atomic absorption determination of calcium using both analytical calibration curve and standard addition methods. (JN)
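The standard addition method mentioned in the abstract can be sketched as follows; the signal data and the assumption of a strictly linear, zero-interference response are invented for illustration.

```python
def standard_addition(added, signals):
    """Fit a least-squares line through (added concentration, signal); the
    unknown concentration is the magnitude of the x-intercept, i.e.
    intercept/slope, because the line is extrapolated back to zero signal."""
    n = len(added)
    mx, my = sum(added) / n, sum(signals) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(added, signals)) / \
            sum((x - mx) ** 2 for x in added)
    intercept = my - slope * mx
    return intercept / slope

# Invented data: signal = 2.0 * (c + 5.0), so the unknown concentration is 5.0
conc = standard_addition([0.0, 2.0, 4.0, 6.0], [10.0, 14.0, 18.0, 22.0])
```

Unlike an external calibration curve, standard addition corrects for matrix effects because every point is measured in the sample's own matrix.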
The Measurement of Magnetic Fields
ERIC Educational Resources Information Center
Berridge, H. J. J.
1973-01-01
Discusses five experimental methods used by senior high school students to provide an accurate calibration curve of magnet current against the magnetic flux density produced by an electromagnet. Compares the relative merits of the five methods, both as measurements and from an educational viewpoint. (JR)
Diagnosing Prion Diseases: Mass Spectrometry-Based Approaches
USDA-ARS?s Scientific Manuscript database
Mass spectrometry is an established means of quantitating the prions present in infected hamsters. Calibration curves relating the area ratios of the selected analyte peptides and their oxidized analogs to stable isotope labeled internal standards were prepared. The limit of detection (LOD) and limi...
NASA Astrophysics Data System (ADS)
Gupta, A.; Singh, P. J.; Gaikwad, D. Y.; Udupa, D. V.; Topkar, A.; Sahoo, N. K.
2018-02-01
An experimental setup is developed for the trace level detection of heavy water (HDO) using the off axis-integrated cavity output spectroscopy technique. The absorption spectrum of water samples is recorded in the spectral range of 7190.7 cm-1 to 7191.5 cm-1 with a diode laser as the light source. From the recorded water vapor absorption spectrum, the heavy water concentration is determined from the HDO and water line. The effect of cavity gain nonlinearity with per pass absorption is studied. The signal processing and data fitting procedure is devised to obtain linear calibration curves by including nonlinear cavity gain effects into the calculation. Initial calibration of mirror reflectivity is performed by measurements on the natural water sample. The signal processing and data fitting method has been validated by the measurement of the HDO concentration in water samples over a wide range from 20 ppm to 2280 ppm showing a linear calibration curve. The average measurement time is about 30 s. The experimental technique presented in this paper could be applied for the development of a portable instrument for the fast measurement of water isotopic composition in heavy water plants and for the detection of heavy water leak in pressurized heavy water reactors.
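The generic shape of the procedure (linearise the raw signal for cavity-gain effects, then fit a linear calibration curve) might be sketched as follows; the gain model, its parameter and the data are invented stand-ins, not the authors' actual correction.

```python
def linearise(raw, g0=100.0):
    """Invented stand-in for the cavity-gain correction: boosts a raw signal
    that would otherwise saturate as per-pass absorption grows (g0 is an
    arbitrary gain-scale parameter)."""
    return raw * (1.0 + raw / g0)

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# Invented HDO calibration points spanning the 20-2280 ppm range above
conc = [20.0, 500.0, 1000.0, 2280.0]
corrected = [0.002 * c + 0.1 for c in conc]   # corrected signal, linear in c
slope, intercept = fit_line(conc, corrected)
```

Once the corrected signal is linear in concentration, an unknown sample is quantified by inverting the fitted line: c = (signal - intercept) / slope.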
NASA Astrophysics Data System (ADS)
Li, Weidong; Shan, Xinjian; Qu, Chunyan
2010-11-01
Compared with polar-orbiting satellites, geostationary satellites have a higher temporal resolution and a wider field of view, covering eleven time zones (a single image covers about one third of the Earth's surface). In a geostationary satellite image acquired at a single instant, the brightness temperatures of different zones cannot represent the surface thermal radiation at the same local time, because the zones receive different solar radiation. It is therefore necessary to calibrate the brightness temperatures of the different zones to a common reference time. This study proposes a model for correcting the brightness-temperature differences that time-zone offsets introduce into geostationary satellite data. A total of 16 curves, for four positions in four stages, are derived from sample statistics of brightness temperature in 5-day composite data from four time zones (time zones 4, 6, 8, and 9). The four stages span January-March (winter), April-June (spring), July-September (summer), and October-December (autumn). Three correction cases, with correction formulas based on the curve changes, can largely eliminate the brightness-temperature rise or drop caused by time-zone differences.
Magnetic nanoparticle thermometry independent of Brownian relaxation
NASA Astrophysics Data System (ADS)
Zhong, Jing; Schilling, Meinhard; Ludwig, Frank
2018-01-01
An improved method of magnetic nanoparticle (MNP) thermometry is proposed. The phase lag ϕ of the fundamental harmonic at f0 is measured to eliminate the influence of Brownian relaxation on the ratio of the 3f0 to f0 harmonic amplitudes, applying a phenomenological model, thus allowing measurements in high-frequency ac magnetic fields. The model is verified by simulations of the Fokker-Planck equation. An MNP spectrometer is calibrated for the measurements of the phase lag ϕ and the amplitudes of the 3f0 and f0 harmonics. Calibration curves of the harmonic ratio and tan ϕ are measured by varying the frequency (from 10 Hz to 1840 Hz) of ac magnetic fields with different amplitudes (from 3.60 mT to 4.00 mT) at a known temperature. A phenomenological model is employed to fit the calibration curves. Afterwards, the improved method iteratively compensates the measured harmonic ratio with tan ϕ, and then calculates temperature by applying the static Langevin function. Experimental results on SHP-25 MNPs show that the proposed method reduces the maximum systematic error to 2 K, with a relative accuracy of about 0.63%. This demonstrates the feasibility of the proposed method for MNP thermometry with SHP-25 MNPs even if the MNP signal is affected by Brownian relaxation.
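The final step, calculating temperature from a compensated signal via the static Langevin function, can be sketched as a numerical inversion; the lumped parameter mu·B/kB (in kelvin) and the bisection bounds are invented, and the compensated harmonic measurement is stood in for by a single Langevin moment.

```python
import math

def langevin(x):
    """Static Langevin function L(x) = coth(x) - 1/x, with the series
    fallback L(x) ~ x/3 near zero for numerical safety."""
    if abs(x) < 1e-4:
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

def temperature_from_moment(m, mu_b_over_kb=1.0e3):
    """Invert m = L(mu*B/(kB*T)) for T by bisection. L decreases with T,
    so a model value above the measurement means the guess is too cold."""
    lo, hi = 1.0, 1.0e6
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if langevin(mu_b_over_kb / mid) > m:
            lo = mid    # model moment too high -> temperature guess too low
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check: generate a moment at 300 K, then recover the temperature
m300 = langevin(1.0e3 / 300.0)
t_recovered = temperature_from_moment(m300)
```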
Calibration Method of an Ultrasonic System for Temperature Measurement
Zhou, Chao; Wang, Yueke; Qiao, Chunjie; Dai, Weihua
2016-01-01
System calibration is fundamental to the overall accuracy of ultrasonic temperature measurement, and it essentially involves accurately measuring the path length and the system latency of the ultrasonic system. This paper proposes a method of high-accuracy system calibration. By estimating the time delay between the transmitted signal and the received signal at several different temperatures, the calibration equations are constructed, and the calibrated results are determined using the least squares algorithm. Formulas are deduced for calculating the calibration uncertainties, and the possible influential factors are analyzed. The experimental results in distilled water show that the calibrated path length and system latency can achieve uncertainties of 0.058 mm and 0.038 μs, respectively, and the temperature accuracy is significantly improved by using the calibrated results. The temperature error remains within ±0.04°C consistently, and the percentage error is less than 0.15%. PMID:27788252
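The calibration idea, measuring time delays at several known temperatures and solving t_i = L/c(T_i) + tau for path length L and system latency tau by least squares, can be sketched as follows; the sound speeds are approximate literature values for water and the delay data are synthetic.

```python
def calibrate(delays, speeds):
    """Least-squares fit of t_i = L / c_i + tau: regressing the measured
    delays on 1/c(T_i) gives slope L (path length) and intercept tau
    (system latency)."""
    x = [1.0 / c for c in speeds]           # regressor is 1/c(T_i)
    n = len(x)
    mx, mt = sum(x) / n, sum(delays) / n
    L = sum((xi - mx) * (ti - mt) for xi, ti in zip(x, delays)) / \
        sum((xi - mx) ** 2 for xi in x)
    tau = mt - L * mx
    return L, tau

# Synthetic data: true L = 0.10 m, tau = 5e-6 s; sound speeds in water at
# three temperatures (approximate values, m/s)
speeds = [1482.0, 1497.0, 1509.0]
delays = [0.10 / c + 5e-6 for c in speeds]
L, tau = calibrate(delays, speeds)
```

Because the model is linear in (1/c, 1), two temperatures would determine L and tau exactly; using more temperatures over-determines the system and averages out timing noise.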
p-Curve and p-Hacking in Observational Research.
Bruns, Stephan B; Ioannidis, John P A
2016-01-01
The p-curve, the distribution of statistically significant p-values of published studies, has been used to make inferences on the proportion of true effects and on the presence of p-hacking in the published literature. We analyze the p-curve for observational research in the presence of p-hacking. We show by means of simulations that even with minimal omitted-variable bias (e.g., unaccounted confounding) p-curves based on true effects and p-curves based on null effects with p-hacking cannot be reliably distinguished. We also demonstrate this problem using as a practical example the evaluation of the effect of malaria prevalence on economic growth between 1960 and 1996. These findings call into question recent studies that use the p-curve to infer that most published research findings are based on true effects in the medical literature and in a wide range of disciplines. p-values in observational research may need to be empirically calibrated to be interpretable with respect to the commonly used significance threshold of 0.05. Violations of randomization in experimental studies may also result in situations where the use of p-curves is similarly unreliable.
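A minimal simulation of a p-curve, binning the statistically significant p-values of repeated z-tests, might look like this; the effect size, simulation count and 0.01-wide binning are invented choices, and the sketch omits the omitted-variable-bias mechanism studied in the paper.

```python
import math
import random

def two_sided_p(z):
    """Two-sided p-value of a z-statistic: 2 * (1 - Phi(|z|))."""
    return math.erfc(abs(z) / math.sqrt(2.0))

def p_curve(delta, n_sim=20000, seed=1):
    """Simulate z-tests with true standardized effect `delta`, keep the
    significant p-values (p < 0.05) and bin them into five 0.01-wide bins.
    A right-skewed result (more very small p's) suggests true effects."""
    rng = random.Random(seed)
    ps = [p for _ in range(n_sim)
          for p in [two_sided_p(rng.gauss(delta, 1.0))] if p < 0.05]
    bins = [0] * 5
    for p in ps:
        bins[min(int(p / 0.01), 4)] += 1
    return bins

bins = p_curve(2.8)   # a well-powered true effect: expect right skew
```

The paper's point is precisely that this diagnostic breaks down in observational settings: confounding can make a null effect with p-hacking produce a shape indistinguishable from a true-effect p-curve.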
Calibration of hydrological models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.
2011-07-01
The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. 
While the method appears less sensitive to epistemic input/output errors than previous use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow and where peak-flow timing at sub-daily time scales is of high importance. The results suggest that the calibration method can be useful when observation time periods for discharge and model input data do not overlap. The method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.
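The FDC construction and the 'volume method' of selecting evaluation points can be sketched as follows; the Weibull plotting position and the discharge data are invented illustrative choices, and the GLUE limits-of-acceptability step is not shown.

```python
def fdc(discharge):
    """Flow-duration curve: flows sorted in descending order, paired with
    exceedance probabilities (Weibull plotting position i/(n+1))."""
    q = sorted(discharge, reverse=True)
    n = len(q)
    prob = [(i + 1) / (n + 1) for i in range(n)]
    return q, prob

def volume_eps(q_sorted, n_ep=3):
    """'Volume method' of picking evaluation points: walk down the sorted
    flows and mark an EP index each time another equal share of the total
    flow volume has been accumulated."""
    total = sum(q_sorted)
    targets = [total * (k + 1) / (n_ep + 1) for k in range(n_ep)]
    eps, cum, t = [], 0.0, 0
    for i, qi in enumerate(q_sorted):
        cum += qi
        while t < n_ep and cum >= targets[t]:
            eps.append(i)
            t += 1
    return eps

# Invented daily discharges
q, prob = fdc([1, 2, 3, 4, 5, 1])
eps = volume_eps(q, n_ep=3)
```

Because high flows carry most of the volume, the volume method concentrates evaluation points in the high-flow limb of the FDC, which matches the better-calibrated recession behaviour reported above.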
Calibration of hydrological models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.
2010-12-01
The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable importance of the performance with flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application e.g. using more/less EPs at high/low flows. 
While the new method is less sensitive to epistemic input/output errors than the normal use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow. The results suggest that the new calibration method can be useful when observation time periods for discharge and model input data do not overlap. The new method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.
Effect of Cooking Process on the Residues of Three Carbamate Pesticides in Rice
Shoeibi, Shahram; Amirahmadi, Maryam; Yazdanpanah, Hassan; Pirali-Hamedani, Morteza; Pakzad, Saied Reza; Kobarfard, Farzad
2011-01-01
A gas chromatography-mass spectrometry method with a spiked calibration curve was used to quantify the residues of three carbamate pesticides in cooked white rice and to estimate the percentage reduction over the cooking process. The selected pesticides are carbaryl and pirimicarb, whose MRLs are issued by "The Institute of Standards of Iran", and propoxur, which is widely used as a pesticide in rice. The analytical method entailed the following steps: 1- blending 15 g of cooked sample with 120 mL acetonitrile for 1 min in a solvent-proof blender; 2- adding 6 g NaCl and blending for 1 min; 3- filtering the upper layer through 25 g anhydrous Na2SO4; 4- cleaning up with PSA and MgSO4; 5- centrifuging for 7 min; 6- evaporating to about 0.3 mL and reconstituting in toluene to 1 mL; 7- injecting 2 μL of extract into the GC/MS and analyzing by single-quadrupole selected ion monitoring (GC/MS-SQ-SIM). The pesticide concentrations, and the percentages remaining after cooking, were determined by gas chromatography-mass spectrometry (GC/MS) by interpolating the relative peak area of each pesticide to the internal standard peak area on the spiked calibration curve. The calibration curve was linear over the range of 25 to 1000 ng/g, and the LOQ was 25 ng/g for all three pesticides. The percentages of loss for the three pesticides were 78%, 55% and 35% for carbaryl, propoxur and pirimicarb, respectively. Different parameters, such as vapor pressure, boiling point, and the compound's susceptibility to hydrolysis, could be responsible for the loss of pesticides during the cooking process. PMID:24363690
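Quantification on the spiked calibration curve and the percent-loss calculation can be sketched as follows; the area ratios and the calibration line are invented, chosen so the example reproduces the 78% carbaryl figure reported above.

```python
def conc_from_ratio(ratio, slope, intercept):
    """Invert the calibration line ratio = slope * conc + intercept, where
    `ratio` is the analyte/internal-standard peak-area ratio."""
    return (ratio - intercept) / slope

def percent_loss(conc_raw, conc_cooked):
    """Percentage of the pesticide lost during cooking."""
    return 100.0 * (conc_raw - conc_cooked) / conc_raw

# Invented calibration: ratio = 0.002 * conc (ng/g), zero intercept
raw = conc_from_ratio(0.80, 0.002, 0.0)      # 400 ng/g before cooking
cooked = conc_from_ratio(0.176, 0.002, 0.0)  # 88 ng/g after cooking
loss = percent_loss(raw, cooked)             # matches the 78% carbaryl loss
```

Spiking the calibration standards into the (blank) matrix, rather than into pure solvent, is what lets the curve absorb matrix effects of the rice extract.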
Nakamura, Hideaki; Tohyama, Kana; Tanaka, Masanori; Shinohara, Shouji; Tokunaga, Yuichi; Kurusu, Fumiyo; Koide, Satoshi; Gotoh, Masao; Karube, Isao
2007-12-15
A package-free transparent disposable biosensor chip was developed by a screen-printing technique. The biosensor chip was fabricated by stacking a substrate with two carbon electrodes on its surface, a spacer consisting of a resist layer and an adhesive layer, and a cover. The structure of the chip keeps the interior of the reaction-detecting section airtight until use. The chip is equipped with double electrochemical measuring elements for the simultaneous measurement of multiple items, and a reagent layer was deposited in the sample-feeding path. The sample-inlet port and air-discharge port are opened simultaneously by folding the two biosensor units lengthwise along a notch, changing the chip into a V-shape. The reaction-detecting section has a 1.0 microl sample volume per biosensor unit. Excellent results were obtained with the chip in initial simultaneous chronoamperometric measurements of both glucose (r=1.00) and lactate (r=0.998) in the same samples. The stability of the enzyme sensor signals was evaluated under ambient atmosphere on 8 testing days over a 6-month period, and the results were compared with those obtained for an unpackaged chip used as a control. The package-free chip proved to be twice as good as the control chip in terms of the reproducibility of slopes from 16 calibration curves (one calibration curve: 0, 100, 300, 500 mg dl(-1) glucose; n=3) and 4.6 times better in terms of the reproducibility of correlation coefficients from the 16 calibration curves.
Testing a dual-fluorescence assay to monitor the viability of filamentous cyanobacteria.
Johnson, Tylor J; Hildreth, Michael B; Gu, Liping; Zhou, Ruanbao; Gibbons, William R
2015-06-01
Filamentous cyanobacteria are currently being engineered to produce long-chain organic compounds, including 3rd generation biofuels. Because of their filamentous morphology, standard methods to quantify viability (e.g., plate counts) are not possible. This study investigated a dual-fluorescence assay based upon the LIVE/DEAD® BacLight™ Bacterial Viability Kit to quantify the percent viability of filamentous cyanobacteria using a microplate reader in a high-throughput 96-well plate format. The manufacturer's protocol calls for an optical density normalization step to equalize the numbers of viable and non-viable cells used to generate calibration curves. Unfortunately, the isopropanol treatment used to generate non-viable cells released a blue pigment that altered absorbance readings of the non-viable cell solution, resulting in an inaccurate calibration curve. Thus we omitted this optical density normalization step, and carefully divided cell cultures into two equal fractions before the isopropanol treatment. While the resulting calibration curves had relatively high correlation coefficients, their use in various experiments resulted in viability estimates ranging from below 0% to far above 100%. We traced this to the apparent inaccuracy of the propidium iodide (PI) dye that was supposed to stain only non-viable cells. Through further analysis via microplate reader, as well as confocal and wide-field epi-fluorescence microscopy, we observed non-specific binding of PI in viable filamentous cyanobacteria. While PI will not work for filamentous cyanobacteria, it is possible that other fluorochrome dyes could be used to selectively stain non-viable cells. This will be essential in future studies for screening mutants and optimizing photobioreactor system performance for filamentous cyanobacteria. Copyright © 2015 Elsevier B.V. All rights reserved.
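The viability calculation from a two-point calibration can be sketched as follows; the fluorescence ratios are invented, and the second assertion illustrates how a sample reading outside the calibration interval yields the below-0% estimates reported above.

```python
def viability_percent(ratio, ratio_0, ratio_100):
    """Linear interpolation between the 0%-viability and 100%-viability
    calibration readings (e.g. green/red fluorescence ratios; all values
    here are invented). Readings outside [ratio_0, ratio_100] produce
    estimates outside 0-100%, flagging a calibration problem."""
    return 100.0 * (ratio - ratio_0) / (ratio_100 - ratio_0)

# A sample halfway between the calibration points reads 50% viable
mid = viability_percent(0.55, 0.1, 1.0)

# A reading below the 0% calibration point comes out negative, mirroring
# the artifacts caused by non-specific PI staining of viable filaments
bad = viability_percent(0.05, 0.1, 1.0)
```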
Gil, Jeovanis; Cabrales, Ania; Reyes, Osvaldo; Morera, Vivian; Betancourt, Lázaro; Sánchez, Aniel; García, Gerardo; Moya, Galina; Padrón, Gabriel; Besada, Vladimir; González, Luis Javier
2012-02-23
Growth hormone-releasing peptide 6 (GHRP-6, His-(DTrp)-Ala-Trp-(DPhe)-Lys-NH₂, MW=872.44 Da) is a potent growth hormone secretagogue that exhibits a cytoprotective effect, maintaining tissue viability during acute ischemia/reperfusion episodes in different organs such as the small bowel, liver and kidneys. In the present work, a quantitative method to analyze GHRP-6 in human plasma was developed and fully validated following FDA guidelines. The method uses an internal standard (IS) of GHRP-6 with ¹³C-labeled alanine for quantification. Sample processing includes a precipitation step with cold acetone to remove the most abundant plasma proteins, recovering the GHRP-6 peptide in high yield. Quantification was achieved by LC-MS in positive full-scan mode on a Q-TOF mass spectrometer. The sensitivity of the method was evaluated, establishing the lower limit of quantification at 5 ng/mL and a calibration range from 5 ng/mL to 50 ng/mL. A dilution integrity test was performed to allow analysis of samples at higher concentrations of GHRP-6. The validation process involved five calibration curves and the analysis of quality control samples to determine accuracy and precision. The calibration curves showed R² higher than 0.988. The stability of the analyte and its internal standard was demonstrated under all conditions the samples would experience during routine analysis. This method was applied to the quantification of GHRP-6 in plasma from nine healthy volunteers participating in a phase I clinical trial. Copyright © 2011 Elsevier B.V. All rights reserved.
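The internal-standard quantification step described above can be sketched numerically. This is an illustrative reconstruction, not the authors' code: `is_ratio_calibration`, `quantify`, and the synthetic peak areas below are assumptions; only the idea of regressing the analyte/IS peak-area ratio against concentration comes from the abstract.

```python
import numpy as np

def is_ratio_calibration(conc, analyte_area, is_area):
    """Fit a linear calibration of analyte/IS peak-area ratio vs concentration.

    Returns (slope, intercept, r_squared)."""
    ratio = np.asarray(analyte_area, float) / np.asarray(is_area, float)
    conc = np.asarray(conc, float)
    slope, intercept = np.polyfit(conc, ratio, 1)
    pred = slope * conc + intercept
    ss_res = float(np.sum((ratio - pred) ** 2))
    ss_tot = float(np.sum((ratio - ratio.mean()) ** 2))
    return slope, intercept, 1.0 - ss_res / ss_tot

def quantify(area_ratio, slope, intercept):
    """Invert the calibration line to recover a concentration."""
    return (area_ratio - intercept) / slope
```

In practice the isotope-labeled IS co-elutes with the analyte, so the area ratio cancels much of the run-to-run ionization variability before the regression is fitted.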
Gerbig, Stefanie; Stern, Gerold; Brunn, Hubertus E; Düring, Rolf-Alexander; Spengler, Bernhard; Schulz, Sabine
2017-03-01
Direct analysis of fruit and vegetable surfaces is an important tool for in situ detection of food contaminants such as pesticides. We tested three different ways to prepare samples for the qualitative desorption electrospray ionization mass spectrometry (DESI-MS) analysis of 32 pesticides found on nine authentic fruits collected from food control. The best recovery rates for topically applied pesticides (88%) were found by analyzing the surface of a glass slide which had been rubbed against the surface of the food. Pesticide concentrations in all samples were at or below the maximum residue level allowed. In addition to the high sensitivity of the method for qualitative analysis, quantitative or, at least, semi-quantitative information is needed in food control. We developed a DESI-MS method for the simultaneous determination of linear calibration curves of multiple pesticides of the same chemical class using normalization to one internal standard (ISTD). The method was first optimized for food extracts and subsequently evaluated for the quantification of pesticides in three authentic food extracts. Next, pesticides and the ISTD were applied directly onto food surfaces, and the corresponding calibration curves were obtained. The determination of linear calibration curves was still feasible, as demonstrated for three different food surfaces. This proof-of-principle method was used to simultaneously quantify two pesticides on an authentic sample, showing that the method developed could serve as a fast and simple preselective tool for disclosure of pesticide regulation violations. Graphical abstract: Multiple pesticide residues were detected and quantified in situ from an authentic set of food items and extracts in a proof-of-principle study.
Saito, Rena; Park, Ju-Hyeong; LeBouf, Ryan; Green, Brett J.; Park, Yeonmi
2017-01-01
Gas chromatography-tandem mass spectrometry (GC-MS/MS) was used to detect fungal secondary metabolites. Detection of verrucarol, the hydrolysis product of Stachybotrys chartarum macrocyclic trichothecene (MCT), was confounded by matrix effects associated with heterogeneous indoor environmental samples. In this study, we examined the role of dust matrix effects associated with GC-MS/MS to better quantify verrucarol in dust as a measure of total MCT. The efficiency of the internal standard (ISTD, 1,12-dodecanediol) and the application of a matrix-matched standard correction method in measuring MCT in floor dust of water-damaged buildings were additionally examined. Compared to verrucarol, the ISTD had substantially higher matrix effects in the dust extracts. The results of the ISTD evaluation showed that without ISTD adjustment, there was a 280% ion enhancement in the dust extracts compared to neat solvent. The recovery of verrucarol was 94% when the matrix-matched standard curve without the ISTD was used. Using traditional calibration curves with ISTD adjustment, none of the 21 dust samples collected from water-damaged buildings were detectable. In contrast, when the matrix-matched calibration curves without ISTD adjustment were used, 38% of samples were detectable. The study results suggest that floor dust of water-damaged buildings may contain MCT. However, the measured levels of MCT in dust using the GC-MS/MS method could be significantly under- or overestimated, depending on the matrix effects, an inappropriate ISTD, or a combination of the two. Our study further shows that the routine application of matrix-matched calibration may prove useful in obtaining accurate measurements of MCT in dust derived from damp indoor environments, as long as no isotopically labeled verrucarol standard is available. PMID:26853932
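The bias mechanism the study describes can be shown with a small numerical sketch. All numbers here are hypothetical (a 1.3× ion enhancement is used for readability, not the 280% figure): standards spiked into blank matrix yield a calibration whose slope already carries the enhancement, so samples measured in the same matrix are quantified correctly, while a neat-solvent curve overestimates them.

```python
import numpy as np

# Illustrative numbers only; not from the study.
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])   # spike levels, ng per sample
neat_response = 100.0 * conc                    # standards in neat solvent
matrix_response = 1.30 * neat_response          # same standards in dust extract

# Matrix-matched calibration: standards prepared in blank dust extract.
slope_mm, icpt_mm = np.polyfit(conc, matrix_response, 1)

# A sample measured in the same matrix is quantified correctly...
sample_resp = 1.30 * 100.0 * 7.5                # true content: 7.5 ng
c_matrix_matched = (sample_resp - icpt_mm) / slope_mm

# ...while the neat-solvent curve inherits the enhancement as bias.
slope_neat, icpt_neat = np.polyfit(conc, neat_response, 1)
c_neat = (sample_resp - icpt_neat) / slope_neat
```

Here `c_matrix_matched` recovers 7.5 ng while `c_neat` overestimates by exactly the enhancement factor, which is the study's argument for matrix-matched calibration when no isotopically labeled ISTD exists.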
de Godoy, Luiz Antonio Fonseca; Hantao, Leandro Wang; Pedroso, Marcio Pozzobon; Poppi, Ronei Jesus; Augusto, Fabio
2011-08-05
The use of multivariate curve resolution (MCR) to build multivariate quantitative models using data obtained from comprehensive two-dimensional gas chromatography with flame ionization detection (GC×GC-FID) is presented and evaluated. The MCR algorithm presents some important features, such as the second-order advantage and the recovery of the instrumental response for each pure component after optimization by an alternating least squares (ALS) procedure. A model to quantify the essential oil of rosemary was built using a calibration set containing only known concentrations of the essential oil and cereal alcohol as solvent. A calibration curve correlating the concentration of the essential oil of rosemary with the instrumental response obtained from the MCR-ALS algorithm was obtained, and this calibration model was applied to predict the concentration of the oil in complex samples (mixtures of the essential oil, pineapple essence and commercial perfume). The values of the root mean square error of prediction (RMSEP) and of the root mean square error of the percentage deviation (RMSPD) obtained were 0.4% (v/v) and 7.2%, respectively. Additionally, a second model was built and used to evaluate the accuracy of the method. A model to quantify the essential oil of lemon grass was built and its concentration was predicted in the validation set and in real perfume samples. The RMSEP and RMSPD obtained were 0.5% (v/v) and 6.9%, respectively, and the concentration of the essential oil of lemon grass in perfume agreed with the value reported by the manufacturer. These results indicate that the MCR algorithm is adequate to resolve the target chromatogram from the complex sample and to build multivariate models from GC×GC-FID data. Copyright © 2011 Elsevier B.V. All rights reserved.
Mo, Shaobo; Dai, Weixing; Xiang, Wenqiang; Li, Qingguo; Wang, Renjie; Cai, Guoxiang
2018-05-03
The objective of this study was to summarize the clinicopathological and molecular features of synchronous colorectal peritoneal metastases (CPM). We then combined clinical and pathological variables associated with synchronous CPM into a nomogram and confirmed its utility using decision curve analysis. Synchronous metastatic colorectal cancer (mCRC) patients who received primary tumor resection and underwent KRAS, NRAS, and BRAF gene mutation detection at our center from January 2014 to September 2015 were included in this retrospective study. An analysis was performed to identify independent risk factors of synchronous CPM among the clinicopathological and molecular features and to subsequently develop a nomogram for synchronous CPM based on multivariate logistic regression. Model performance was quantified in terms of calibration and discrimination. We studied the utility of the nomogram using decision curve analysis. In total, 226 patients were diagnosed with synchronous mCRC, of whom 50 patients (22.1%) presented with CPM. After uni- and multivariate analysis, a nomogram was built based on tumor site, histological type, age, and T4 status. The model had good discrimination, with an area under the curve (AUC) of 0.777 (95% CI 0.703-0.850), and adequate calibration. By decision curve analysis, the model was shown to be relevant between thresholds of 0.10 and 0.66. Synchronous CPM is more likely in patients aged ≤60 years or with right-sided primary lesions, signet ring cell cancer or T4 stage. This is the first nomogram to predict synchronous CPM. To ensure generalizability, this model needs to be externally validated. Copyright © 2018 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
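The two evaluation tools named in this abstract, discrimination (AUC) and decision curve analysis (net benefit across thresholds), can both be computed in a few lines. This is a generic sketch, not the authors' analysis code; `auc` and `net_benefit` are illustrative names.

```python
import numpy as np

def auc(y_true, score):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    y = np.asarray(y_true)
    s = np.asarray(score, float)
    pos, neg = s[y == 1], s[y == 0]
    # fraction of (positive, negative) pairs ranked correctly; ties count half
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def net_benefit(y_true, risk, threshold):
    """Decision-curve net benefit of treating patients with risk >= threshold."""
    y = np.asarray(y_true)
    r = np.asarray(risk, float)
    n = len(y)
    treat = r >= threshold
    tp = np.sum(treat & (y == 1)) / n
    fp = np.sum(treat & (y == 0)) / n
    return tp - fp * threshold / (1.0 - threshold)
```

Sweeping `net_benefit` over thresholds and comparing against the "treat all" and "treat none" strategies reproduces the kind of decision curve used to establish the 0.10-0.66 relevance window.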
Classification of UXO by Principal Dipole Polarizability
NASA Astrophysics Data System (ADS)
Kappler, K. N.
2010-12-01
Data acquired by multiple-transmitter, multiple-receiver time-domain electromagnetic devices show great potential for determining geometric and compositional information relating to near-surface conductive targets. Here we present an analysis of data from one such system: the Berkeley Unexploded-ordnance Discriminator (BUD). BUD data are succinctly reduced by processing the multi-static data matrices to obtain magnetic dipole polarizability matrices for each time gate. When viewed over all time gates, the projections of the data onto the principal polar axes yield so-called polarizability curves. These curves are especially well suited to discriminating between subsurface conductivity anomalies corresponding to objects of rotational symmetry and those corresponding to irregularly shaped objects. The curves have previously been successfully employed as library elements in a pattern recognition scheme aimed at discriminating harmless scrap metal from dangerous intact unexploded ordnance. However, previous polarizability-curve matching methods have only been applied at field sites known a priori to be contaminated by a single type of ordnance; furthermore, the particular ordnance present in the subsurface was known to be large, so signal amplitude was a key element in the discrimination process. The work presented here applies feature-based pattern classification techniques to BUD field data where more than 20 categories of object are present. Data soundings from a calibration grid at the Yuma, AZ proving ground are used in a cross-validation study to calibrate the pattern recognition method. The resultant method is then applied to a Blind Test Grid. Results indicate that when lone UXO are present and SNR is reasonably high, polarizability curve matching successfully discriminates UXO from scrap metal even when a broad range of objects is present.
Mortality prediction using TRISS methodology in the Spanish ICU Trauma Registry (RETRAUCI).
Chico-Fernández, M; Llompart-Pou, J A; Sánchez-Casado, M; Alberdi-Odriozola, F; Guerrero-López, F; Mayor-García, M D; Egea-Guerrero, J J; Fernández-Ortega, J F; Bueno-González, A; González-Robledo, J; Servià-Goixart, L; Roldán-Ramírez, J; Ballesteros-Sanz, M Á; Tejerina-Alvarez, E; Pino-Sánchez, F I; Homar-Ramírez, J
2016-10-01
To validate Trauma and Injury Severity Score (TRISS) methodology as an auditing tool in the Spanish ICU Trauma Registry (RETRAUCI). A prospective, multicenter registry evaluation was carried out. Thirteen Spanish Intensive Care Units (ICUs). Individuals with traumatic disease and available data admitted to the participating ICUs. Predicted mortality using TRISS methodology was compared with that observed in the pilot phase of the RETRAUCI from November 2012 to January 2015. Discrimination was evaluated using receiver operating characteristic (ROC) curves and the corresponding areas under the curves (AUCs) (95% CI), with calibration assessed using the Hosmer-Lemeshow (HL) goodness-of-fit test. A value of p<0.05 was considered significant. Predicted and observed mortality. A total of 1405 patients were analyzed. The observed mortality rate was 18% (253 patients), while the predicted mortality rate was 16.9%. The area under the ROC curve was 0.889 (95% CI: 0.867-0.911). Patients with blunt trauma (n=1305) had an area under the ROC curve of 0.887 (95% CI: 0.864-0.910), and those with penetrating trauma (n=100) presented an area under the curve of 0.919 (95% CI: 0.859-0.979). In the global sample, the HL test yielded a value of 25.38 (p=0.001): 27.35 (p<0.0001) in blunt trauma and 5.91 (p=0.658) in penetrating trauma. TRISS methodology underestimated mortality in patients with low predicted mortality and overestimated mortality in patients with high predicted mortality. TRISS methodology in the evaluation of severe trauma in Spanish ICUs showed good discrimination, with inadequate calibration, particularly in blunt trauma. Copyright © 2015 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.
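The Hosmer-Lemeshow statistic used here for calibration can be sketched as follows. This is a generic implementation, not the registry's code; the resulting statistic is compared against a chi-square distribution with g-2 degrees of freedom (so the HL values 25.38 and 27.35 reported above, with 10 groups, correspond to the small p-values given).

```python
import numpy as np

def hosmer_lemeshow(y, p, g=10):
    """Hosmer-Lemeshow goodness-of-fit statistic over g groups ordered by risk.

    y : observed 0/1 outcomes; p : predicted probabilities.
    Compare the returned value to chi-square with g-2 degrees of freedom."""
    y = np.asarray(y, float)
    p = np.asarray(p, float)
    order = np.argsort(p, kind="stable")
    y, p = y[order], p[order]
    h = 0.0
    for idx in np.array_split(np.arange(len(y)), g):
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        # small constant guards against division by zero in degenerate groups
        h += (obs - exp) ** 2 / (exp * (1.0 - exp / n) + 1e-12)
    return h
```

A perfectly calibrated model yields group-wise observed counts close to expected counts and hence a small statistic; large values signal the kind of miscalibration reported for blunt trauma.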
Data on fossil fuel availability for Shared Socioeconomic Pathways.
Bauer, Nico; Hilaire, Jérôme; Brecha, Robert J; Edmonds, Jae; Jiang, Kejun; Kriegler, Elmar; Rogner, Hans-Holger; Sferra, Fabio
2017-02-01
The data files contain the assumptions and results for the construction of cumulative availability curves for coal, oil and gas for the five Shared Socioeconomic Pathways. The files include the maximum availability (also known as cumulative extraction cost curves) and the assumptions that are applied to construct the SSPs. The data is differentiated into twenty regions. The resulting cumulative availability curves are plotted and the aggregate data as well as cumulative availability curves are compared across SSPs. The methodology, the data sources and the assumptions are documented in a related article (N. Bauer, J. Hilaire, R.J. Brecha, J. Edmonds, K. Jiang, E. Kriegler, H.-H. Rogner, F. Sferra, 2016) [1] under DOI: http://dx.doi.org/10.1016/j.energy.2016.05.088.
Measurement and Calibration of PSD with Phase-shifting Interferometers
NASA Technical Reports Server (NTRS)
Lehan, J. P.
2008-01-01
We discuss the instrumental aspects affecting the measurement accuracy when determining PSD with phase shifting interferometers. These include the source coherence, optical train effects, and detector effects. The use of a carefully constructed calibration standard will also be discussed. We will end with a recommended measurement and data handling procedure.
Strzelak, Kamil; Rybkowska, Natalia; Wiśniewska, Agnieszka; Koncki, Robert
2017-12-01
A Multicommutated Flow Analysis (MCFA) system for the estimation of the clinical iron parameters serum iron (SI), unsaturated iron binding capacity (UIBC) and total iron binding capacity (TIBC) is proposed. The developed MCFA system, based on simple photometric detection of iron with the chromogenic agent ferrozine, enables speciation of transferrin (determination of free and Fe-bound protein) in human serum. The manifold construction was adapted to the requirements of measurements under changing conditions. In the course of the study, a different effect of proteins on SI and UIBC determination was demonstrated, which in turn required two different calibration methods. For measurements in acidic medium for SI/holotransferrin determination, the calibration curve method was applied, characterized by a limit of detection of 3.4 μmol L⁻¹ and a limit of quantitation of 9.1 μmol L⁻¹. The determination of the UIBC parameter (related to apotransferrin level) in physiological medium of pH 7.4 forced the use of the standard addition method, due to the strong influence of proteins on the analytical signals. These two different methodologies, performed in the presented system, enabled the estimation of all three clinical iron/transferrin parameters in human serum samples. TIBC, corresponding to the total transferrin level, was calculated as the sum of SI and UIBC. Copyright © 2017 Elsevier B.V. All rights reserved.
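The standard addition method named above extrapolates a line fitted through (spike level, signal) points back to zero signal; the magnitude of the x-intercept estimates the analyte already in the sample. A minimal sketch, with hypothetical numbers (the function name and spike levels are not from the paper):

```python
import numpy as np

def standard_addition(added, signal):
    """Estimate the sample concentration from a standard-addition series.

    Fits signal = slope*added + intercept; the analyte concentration equals
    the x-intercept magnitude, intercept/slope."""
    slope, intercept = np.polyfit(np.asarray(added, float),
                                  np.asarray(signal, float), 1)
    return intercept / slope

# Hypothetical UIBC-style run: equal sample aliquots spiked with 0-15 umol/L iron
added = np.array([0.0, 5.0, 10.0, 15.0])
signal = 0.8 * (added + 12.0)          # synthetic data: true sample level 12 umol/L
c_sample = standard_addition(added, signal)

# TIBC then follows as the sum of the two transferrin fractions: TIBC = SI + UIBC
```

Standard addition is the natural choice here because the serum proteins suppress or enhance the signal identically for the native analyte and the spikes, so the matrix effect cancels in the extrapolation.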
Advances in iterative non-uniformity correction techniques for infrared scene projection
NASA Astrophysics Data System (ADS)
Danielson, Tom; Franks, Greg; LaVeigne, Joe; Prewarski, Marcus; Nehring, Brian
2015-05-01
Santa Barbara Infrared (SBIR) is continually developing improved methods for non-uniformity correction (NUC) of its Infrared Scene Projectors (IRSPs) as part of its comprehensive efforts to achieve the best possible projector performance. The most recent step forward, Advanced Iterative NUC (AI-NUC), improves upon previous NUC approaches in several ways. The key to NUC performance is achieving the most accurate possible input drive-to-radiance output mapping for each emitter pixel. This requires many highly accurate radiance measurements of emitter output, as well as sophisticated manipulation of the resulting data set. AI-NUC expands the available radiance data set to include all measurements made of emitter output at any point. In addition, it allows the user to efficiently manage that data for use in the construction of a new NUC table that is generated from an improved fit of the emitter response curve. Not only does this improve the overall NUC by offering more statistics for interpolation than previous approaches, it also simplifies the removal of erroneous data from the set so that it does not propagate into the correction tables. AI-NUC is implemented by SBIR's IRWindows4 automated test software as part of its advanced turnkey IRSP product (the Calibration Radiometry System or CRS), which incorporates all necessary measurement, calibration and NUC table generation capabilities. By employing AI-NUC on the CRS, SBIR has demonstrated the best uniformity results on resistive emitter arrays to date.
Chen, Ze-yong; Peng, Rong-fei; Zhang, Zhan-xia
2002-06-01
An atomic emission spectrometer based on an acousto-optic tunable filter (AOTF) was self-constructed and evaluated for practical use in atomic emission analysis. The AOTF used was of model TEAF5-0.36-0.52-S (Brimrose, USA), and the frequency of the direct digital RF synthesizer ranges from 100 MHz to 200 MHz. An ICP and a PMT were used as light source and detector, respectively. The software, written in Visual C++ and running on the Windows 98 platform, is a utility program with dual databases and multiple windows. The wavelength calibration was performed with 14 emission lines of Ca, Y, Li, Eu, Sr and Ba using a tenth-order polynomial fitting method. The absolute error of the peak position was less than 0.1 nm, and the peak deviation was only 0.04 nm as the PMT voltage varied from 337.5 V to 412.5 V. The scanning emission spectra and the calibration curves of Ba, Y, Eu, Sc and Sr are presented. Their average correlation coefficient was 0.9991, and their detection limits were in the range of 0.051 to 0.97 μg mL⁻¹. The detection limits can be improved under optimized operating conditions. However, the spectral resolution is only 2.1 nm at a wavelength of 488 nm. Evidently, this poor spectral resolution would restrict the application of AOTFs in atomic emission spectral analysis unless an enhancement technique is integrated.
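The wavelength-calibration step, fitting a polynomial through known (RF frequency, emission wavelength) pairs and checking the residuals against a tolerance such as the 0.1 nm figure, can be sketched as follows. The data here are synthetic (a quadratic truth standing in for the real AOTF tuning curve), and a low-order fit is used; note that a tenth-order polynomial over only 14 points, as the paper describes, leaves few degrees of freedom and risks overfitting between calibration lines.

```python
import numpy as np

# Hypothetical (RF frequency, wavelength) pairs standing in for the 14 emission
# lines of Ca, Y, Li, Eu, Sr and Ba; the synthetic tuning curve is quadratic.
freq_mhz = np.linspace(100.0, 200.0, 14)
wavelength_nm = 900.0 - 2.5 * freq_mhz + 0.002 * freq_mhz ** 2

# Fit the calibration polynomial and inspect the worst residual.
coeffs = np.polyfit(freq_mhz, wavelength_nm, 2)
residuals = wavelength_nm - np.polyval(coeffs, freq_mhz)
max_abs_error_nm = float(np.abs(residuals).max())   # analogue of the <0.1 nm check
```

Once `coeffs` is fixed, `np.polyval(coeffs, f)` maps any synthesizer frequency to the selected wavelength during a scan.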
Development and Characterization of a Low-Pressure Calibration System for Hypersonic Wind Tunnels
NASA Technical Reports Server (NTRS)
Green, Del L.; Everhart, Joel L.; Rhode, Matthew N.
2004-01-01
Minimization of uncertainty is essential for accurate ESP measurements at the very low free-stream static pressures found in hypersonic wind tunnels. Statistical characterization of environmental error sources requires a well-defined and controlled calibration method. A calibration system has been constructed, and environmental control software was developed to control experimentation and eliminate human-induced error sources. The initial stability study of the calibration system shows a high degree of measurement accuracy and precision in temperature and pressure control. Control manometer drift and reference-pressure instabilities introduce uncertainty into the repeatability of voltage responses measured from the PSI System 8400 between calibrations. Methods of improving repeatability are possible through software programming and further experimentation.
Studying Reliability Using Identical Handheld Lactate Analyzers
ERIC Educational Resources Information Center
Stewart, Mark T.; Stavrianeas, Stasinos
2008-01-01
Accusport analyzers were used to generate lactate performance curves in an investigative laboratory activity emphasizing the importance of reliable instrumentation. Both the calibration and testing phases of the exercise provided students with a hands-on opportunity to use laboratory-grade instrumentation while allowing for meaningful connections…
DOT National Transportation Integrated Search
1997-06-01
This report presents: (1) calculation of flood frequency for the Ward Creek watershed using eight flood prediction models, (2) establishment of the rating curve (stage-discharge relation) for the Ward Creek watershed, (3) evaluation of these flood pr...
NASA Astrophysics Data System (ADS)
Yang, Fanlin; Zhao, Chunxia; Zhang, Kai; Feng, Chengkai; Ma, Yue
2017-07-01
Acoustic seafloor classification with multibeam backscatter measurements is an attractive approach for mapping seafloor properties over a large area. However, artifacts in the multibeam backscatter measurements prevent accurate characterization of the seafloor. In particular, the backscatter level is extremely strong and highly variable in the near-nadir region due to the specular echo phenomenon. Consequently, striped artifacts emerge in the backscatter image, which can degrade the classification accuracy. This study focuses on the striped artifacts in multibeam backscatter images. To this end, a calibration algorithm based on equal mean-variance fitting is developed. By fitting the local shape of the angular response curve, the striped artifacts are compressed and shifted according to the relations between the mean and variance in the near-nadir and off-nadir regions. The algorithm uses the measured data of the near-nadir region and retains the basic shape of the response curve. The experimental results verify the high performance of the proposed method.
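The core of equal mean-variance fitting, as far as the abstract describes it, is rescaling the near-nadir levels so their first two moments match an off-nadir reference. The sketch below is a simplified reading of that idea only; the published algorithm additionally fits the local shape of the angular response curve, which is not reproduced here.

```python
import numpy as np

def match_mean_variance(near_nadir_db, off_nadir_db):
    """Rescale near-nadir backscatter so its mean and standard deviation
    match an off-nadir reference region (simplified moment matching)."""
    x = np.asarray(near_nadir_db, float)
    ref = np.asarray(off_nadir_db, float)
    # standardize, then map onto the reference distribution's moments
    return (x - x.mean()) * (ref.std() / x.std()) + ref.mean()
```

Applied per ping across the near-nadir swath, such a transform compresses the specular peak toward the off-nadir level while preserving the relative variation within the near-nadir data.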
Minimizing thermal degradation in gas chromatographic quantitation of pentaerythritol tetranitrate.
Lubrano, Adam L; Field, Christopher R; Newsome, G Asher; Rogers, Duane A; Giordano, Braden C; Johnson, Kevin J
2015-05-15
An analytical method for establishing calibration curves for the quantitation of pentaerythritol tetranitrate (PETN) from sorbent-filled thermal desorption tubes by gas chromatography with electron capture detection (TDS-GC-ECD) was developed. As PETN has been demonstrated to thermally degrade under typical GC instrument conditions, peaks corresponding to both PETN degradants and molecular PETN are observed. The retention time corresponding to intact PETN was verified by high-resolution mass spectrometry with a flowing atmospheric pressure afterglow (FAPA) ionization source, which enabled soft ionization of intact PETN eluting from the GC and subsequent accurate-mass identification. The GC separation parameters were transferred to a conventional GC-ECD instrument, where analytical method-induced PETN degradation was further characterized and minimized. A method calibration curve was established by direct liquid deposition of PETN standard solutions onto the glass frit at the head of sorbent-filled thermal desorption tubes. Two local, linear relationships between detector response and PETN concentration were observed, with a total dynamic range of 0.25-25 ng. Published by Elsevier B.V.
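A calibration with "two local, linear relationships" over one dynamic range can be handled by fitting separate lines below and above a breakpoint and inverting whichever segment a response falls in. This is a generic sketch, not the authors' procedure; the breakpoint, concentrations, and responses below are hypothetical.

```python
import numpy as np

def piecewise_linear_cal(conc, response, breakpoint):
    """Fit separate linear calibrations below and above a concentration
    breakpoint; returns ((slope_lo, icpt_lo), (slope_hi, icpt_hi))."""
    conc = np.asarray(conc, float)
    resp = np.asarray(response, float)
    lo = conc <= breakpoint
    fit_lo = np.polyfit(conc[lo], resp[lo], 1)
    fit_hi = np.polyfit(conc[~lo], resp[~lo], 1)
    return fit_lo, fit_hi

def invert(resp_value, fit):
    """Map a detector response back to concentration on one segment."""
    slope, intercept = fit
    return (resp_value - intercept) / slope
```

In use, the measured response is first compared to the response at the breakpoint to decide which segment's inverse applies.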
NASA Astrophysics Data System (ADS)
Chen, Gang; Chen, Yanping; Zheng, Xiongwei; He, Cheng; Lu, Jianping; Feng, Shangyuan; Chen, Rong; Zeng, Haisan
2013-12-01
In this work, we developed a SERS platform for quantitative detection of carcinoembryonic antigen (CEA) in the serum of patients with colorectal cancer. Anti-CEA-functionalized, 4-mercaptobenzoic acid-labeled Au/Ag core-shell bimetallic nanoparticles were prepared first and then used to analyze CEA antigen solutions of different concentrations. A calibration curve was established over the range from 5 × 10⁻³ to 5 × 10⁵ ng/mL. Finally, this new SERS probe was applied for quantitative detection of CEA in serum obtained from 26 colorectal cancer patients according to the calibration curve. The results were in good agreement with those obtained by the electrochemical luminescence method, suggesting that the SERS immunoassay has high sensitivity and specificity for CEA detection in serum. A detection limit of 5 pg/mL was achieved. This study demonstrated the feasibility and great potential of developing this new technology into a clinical tool for the analysis of tumor markers in blood.
Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà
2010-03-01
Correlation-constrained multivariate curve resolution-alternating least squares (MCR-ALS) is shown to be a feasible method for processing first-order instrumental data and achieving analyte quantitation in the presence of unexpected interferences. For both simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in the calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates achieving the so-called second-order advantage for the analyte of interest, which is known to be present for richer, higher-order instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.
Liu, Yongliang; Thibodeaux, Devron; Gamble, Gary; Bauer, Philip; VanDerveer, Don
2012-08-01
Despite considerable efforts in developing curve-fitting protocols to evaluate the crystallinity index (CI) from X-ray diffraction (XRD) measurements, in its present state XRD can only provide a qualitative or semi-quantitative assessment of the amounts of crystalline or amorphous fraction in a sample. The greatest barrier to establishing quantitative XRD is the lack of appropriate cellulose standards, which are needed to calibrate the XRD measurements. In practice, samples with known CI are very difficult to prepare or determine. In a previous study, we reported the development of a simple algorithm for determining fiber crystallinity information from Fourier transform infrared (FT-IR) spectroscopy. Hence, in this study we not only compared the fiber crystallinity information between FT-IR and XRD measurements, by developing a simple XRD algorithm in place of a time-consuming and subjective curve-fitting process, but we also suggested a direct way of determining cotton cellulose CI by calibrating XRD with the use of CI(IR) as references.
Stringano, Elisabetta; Gea, An; Salminen, Juha-Pekka; Mueller-Harvey, Irene
2011-10-28
This study was undertaken to explore gel permeation chromatography (GPC) for estimating molecular weights of proanthocyanidin fractions isolated from sainfoin (Onobrychis viciifolia). The results were compared with data obtained by thiolytic degradation of the same fractions. Polystyrene, polyethylene glycol and polymethyl methacrylate standards were not suitable for estimating the molecular weights of underivatized proanthocyanidins. Therefore, a novel HPLC-GPC method was developed based on two serially connected PolarGel-L columns using DMF that contained 5% water, 1% acetic acid and 0.15 M LiBr at 0.7 mL/min and 50 °C. This yielded a single calibration curve for galloyl glucoses (trigalloyl glucose, pentagalloyl glucose), ellagitannins (pedunculagin, vescalagin, punicalagin, oenothein B, gemin A), proanthocyanidins (procyanidin B2, cinnamtannin B1), and several other polyphenols (catechin, epicatechin gallate, epigallocatechin gallate, amentoflavone). These GPC-predicted molecular weights represent a considerable advance over previously reported HPLC-GPC methods for underivatized proanthocyanidins. Copyright © 2011 Elsevier B.V. All rights reserved.
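A GPC calibration of the kind described is conventionally a line fitted to log10(molecular weight) against retention time for the standards, which is then inverted to predict the molecular weight of an unknown. A minimal sketch with hypothetical, exactly log-linear standards (the real calibrants were the galloyl glucoses, ellagitannins and proanthocyanidins listed above):

```python
import numpy as np

# Hypothetical (retention time, molecular weight) pairs for the standards.
rt_min = np.array([8.0, 10.0, 12.0, 14.0])
mw = 10.0 ** (6.0 - 0.25 * rt_min)       # synthetic truth: log-linear in rt

# GPC calibration: linear fit of log10(MW) against retention time.
slope, intercept = np.polyfit(rt_min, np.log10(mw), 1)

def mw_from_rt(rt):
    """Predict molecular weight of an unknown from its retention time."""
    return 10.0 ** (slope * rt + intercept)
```

For polydisperse fractions, the same fitted line is applied point-by-point across the eluting peak to build a molecular-weight distribution rather than a single value.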
Sochor, Jiri; Ryvolova, Marketa; Krystofova, Olga; Salas, Petr; Hubalek, Jaromir; Adam, Vojtech; Trnkova, Libuse; Havel, Ladislav; Beklova, Miroslava; Zehnalek, Josef; Provaznik, Ivo; Kizek, Rene
2010-11-29
The aim of this study was to describe the behaviour, kinetics, time courses and limitations of six different fully automated spectrometric methods: DPPH, TEAC, FRAP, DMPD, Free Radicals and Blue CrO5. Absorption curves were measured and absorbance maxima were found. All methods were calibrated using the standard compounds Trolox® and/or gallic acid. Calibration curves were determined (relative standard deviation was within the range from 1.5 to 2.5%). The obtained characteristics were compared and discussed. Moreover, the data obtained were used to optimize and automate all the mentioned protocols. The automatic analyzer allowed us to analyse a larger set of samples simultaneously, to decrease the measurement time, to eliminate errors and to provide data of higher quality in comparison with manual analysis. The total time of analysis for one sample was decreased to 10 min for all six methods. By contrast, the total time of manual spectrometric determination was approximately 120 min. The obtained data showed good correlations between the studied methods (R=0.97-0.99).
Limits of detection and decision. Part 3
NASA Astrophysics Data System (ADS)
Voigtman, E.
2008-02-01
It has been shown that the MARLAP (Multi-Agency Radiological Laboratory Analytical Protocols) method for estimating the Currie detection limit, which is based on 'critical values of the non-centrality parameter of the non-central t distribution', is intrinsically biased, even if no calibration curve or regression is used. This completed the refutation of the method, begun in Part 2. With the field cleared of obstructions, the true theory underlying Currie's limits of decision, detection and quantification, as they apply in a simple linear chemical measurement system (CMS) having heteroscedastic, Gaussian measurement noise and using weighted least squares (WLS) processing, was then derived. Extensive Monte Carlo simulations were performed, on 900 million independent calibration curves, for linear, "hockey stick" and quadratic noise precision models (NPMs). With errorless NPM parameters, all the simulation results were found to be in excellent agreement with the derived theoretical expressions. Even with as much as 30% noise on all of the relevant NPM parameters, the worst absolute error in the rates of false positives and false negatives was only 0.3%.
Sensitivity curves for searches for gravitational-wave backgrounds
NASA Astrophysics Data System (ADS)
Thrane, Eric; Romano, Joseph D.
2013-12-01
We propose a graphical representation of detector sensitivity curves for stochastic gravitational-wave backgrounds that takes into account the increase in sensitivity that comes from integrating over frequency in addition to integrating over time. This method is valid for backgrounds that have a power-law spectrum in the analysis band. We call these graphs “power-law integrated curves.” For simplicity, we consider cross-correlation searches for unpolarized and isotropic stochastic backgrounds using two or more detectors. We apply our method to construct power-law integrated sensitivity curves for second-generation ground-based detectors such as Advanced LIGO, space-based detectors such as LISA and the Big Bang Observer, and timing residuals from a pulsar timing array. The code used to produce these plots is available at https://dcc.ligo.org/LIGO-P1300115/public for researchers interested in constructing similar sensitivity curves.
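The construction of a power-law integrated curve can be sketched numerically: for each spectral index, solve for the power-law amplitude giving the threshold broadband SNR, then take the envelope over indices. The noise curve `Omega_eff(f)`, band, observation time and SNR threshold below are illustrative assumptions, not any detector's actual figures.

```python
import numpy as np

# Toy energy-density noise curve with a "bucket" shape (illustrative only).
f = np.logspace(1.0, 3.0, 400)                 # 10 Hz to 1 kHz
f_ref = 100.0
Omega_eff = 1e-9 * ((f / f_ref) ** -3 + (f / f_ref) ** 3)

T = 1.0e7            # observation time, s
snr_thresh = 1.0

pi_envelope = np.zeros_like(f)
for beta in range(-8, 9):
    shape = (f / f_ref) ** beta
    # Broadband cross-correlation SNR for Omega_gw(f) = A * shape:
    #   SNR = A * sqrt(2 T * integral (shape / Omega_eff)^2 df)
    g = (shape / Omega_eff) ** 2
    integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(f))  # trapezoid rule
    A = snr_thresh / np.sqrt(2.0 * T * integral)
    # The PI curve is the envelope of the just-detectable power laws.
    pi_envelope = np.maximum(pi_envelope, A * shape)
```

Any power-law background lying above `pi_envelope` anywhere in the band would be detected with SNR above threshold, which is what makes the envelope a fair single-curve summary of broadband sensitivity.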
Chun, Hao-Jung; Poklis, Justin L.; Poklis, Alphonse; Wolf, Carl E.
2016-01-01
Ethanol is the most widely used and abused drug. While blood is the preferred specimen for analysis, tissue specimens such as brain serve as alternative specimens for alcohol analysis in post-mortem cases where blood is unavailable or contaminated. A method was developed using headspace gas chromatography with flame ionization detection (HS-GC-FID) for the detection and quantification of ethanol, acetone, isopropanol, methanol and n-propanol in brain tissue specimens. Unfixed volatile-free brain tissue specimens were obtained from the Department of Pathology at Virginia Commonwealth University. Calibrators and controls were prepared from 4-fold diluted homogenates of these brain tissue specimens, and were analyzed using t-butanol as the internal standard. The chromatographic separation was performed with a Restek BAC2 column. A linear calibration was generated for all analytes (mean r2 > 0.9992) with the limits of detection and quantification of 100–110 mg/kg. Matrix effect from the brain tissue was determined by comparing the slopes of matrix prepared calibration curves with those of aqueous calibration curves; no significant differences were observed for ethanol, acetone, isopropanol, methanol and n-propanol. The bias and the CVs for all volatile controls were ≤10%. The method was also evaluated for carryover, selectivity, interferences, bench-top stability and freeze-thaw stability. The HS-GC-FID method was determined to be reliable and robust for the analysis of ethanol, acetone, isopropanol, methanol and n-propanol concentrations in brain tissue, effectively expanding the specimen options for post-mortem alcohol analysis. PMID:27488829
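The matrix-effect check described above (comparing slopes of matrix-matched and aqueous calibration curves) can be sketched as follows; the concentrations and detector responses are invented to illustrate the comparison, not the published data.

```python
import numpy as np

# Hypothetical ethanol calibrations (concentration vs. detector response);
# values are illustrative only.
conc = np.array([100.0, 500.0, 1000.0, 2000.0, 4000.0])      # mg/kg
resp_matrix = np.array([0.051, 0.253, 0.508, 1.015, 2.031])  # brain homogenate
resp_aqueous = np.array([0.050, 0.250, 0.502, 1.001, 2.004]) # aqueous standard

slope_m = np.polyfit(conc, resp_matrix, 1)[0]
slope_a = np.polyfit(conc, resp_aqueous, 1)[0]

# Percent difference between slopes; a small value indicates a
# negligible matrix effect for that analyte.
pct_diff = 100.0 * abs(slope_m - slope_a) / slope_a
```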
Tsuchida, Satoshi; Thome, Kurtis
2017-01-01
Radiometric cross-calibration between the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Terra-Moderate Resolution Imaging Spectroradiometer (MODIS) has been partially used to derive the ASTER radiometric calibration coefficient (RCC) curve as a function of date on visible to near-infrared bands. However, cross-calibration is not sufficiently accurate, since the effects of the differences in the sensors' spectral and spatial responses are not fully mitigated. The present study attempts to evaluate radiometric consistency across the two sensors using an improved cross-calibration algorithm that addresses the spectral and spatial effects, and to derive cross-calibration-based RCCs, which increases the ASTER calibration accuracy. Overall, radiances measured with ASTER bands 1 and 2 are on average 3.9% and 3.6% greater than those measured on the same scene with their MODIS counterparts, and ASTER band 3N (nadir) is 0.6% smaller than its MODIS counterpart in the current radiance/reflectance products. The percentage root mean squared errors (%RMSEs) between the radiances of the two sensors are 3.7, 4.2, and 2.3 for ASTER bands 1, 2, and 3N, respectively, which are slightly greater or smaller than the required ASTER radiometric calibration accuracy (4%). The uncertainty of the cross-calibration is analyzed by elaborating an error budget table to evaluate the International System of Units (SI)-traceability of the results. The use of the derived RCCs will allow further reduction of errors in ASTER radiometric calibration and subsequently improve interoperability across sensors for synergistic applications. PMID:28777329
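The two headline statistics above, a mean radiance bias and a %RMSE between coincident sensor measurements, can be sketched as follows; the radiance values are invented for illustration and are not ASTER/MODIS products.

```python
import numpy as np

# Hypothetical coincident radiances for one band pair (W m^-2 sr^-1 um^-1);
# numbers are illustrative only.
aster = np.array([102.1, 85.4, 120.7, 95.3, 110.2])
modis = np.array([ 98.6, 82.0, 116.9, 91.8, 106.5])

# Mean ratio: how much higher one sensor's radiances run, on average.
mean_bias_pct = 100.0 * (np.mean(aster / modis) - 1.0)

# Percentage RMSE relative to the mean reference radiance.
rmse_pct = 100.0 * np.sqrt(np.mean((aster - modis) ** 2)) / np.mean(modis)
```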
A New Approach to the Internal Calibration of Reverberation-Mapping Spectra
NASA Astrophysics Data System (ADS)
Fausnaugh, M. M.
2017-02-01
We present a new procedure for the internal (night-to-night) calibration of time-series spectra, with specific applications to optical AGN reverberation mapping data. The traditional calibration technique assumes that the narrow [O iii] λ5007 emission-line profile is constant in time; given a reference [O iii] λ5007 line profile, nightly spectra are aligned by fitting for a wavelength shift, a flux rescaling factor, and a change in the spectroscopic resolution. We propose the following modifications to this procedure: (1) we stipulate a constant spectral resolution for the final calibrated spectra, (2) we employ a more flexible model for changes in the spectral resolution, and (3) we use a Bayesian modeling framework to assess uncertainties in the calibration. In a test case using data for MCG+08-11-011, these modifications result in a calibration precision of ˜1 millimagnitude, which is approximately a factor of five improvement over the traditional technique. At this level, other systematic issues (e.g., the nightly sensitivity functions and Fe II contamination) limit the final precision of the observed light curves. We implement this procedure as a python package (mapspec), which we make available to the community.
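The core of the traditional alignment step, fitting a wavelength shift and a flux rescaling against a reference line profile, can be sketched with synthetic data; the Gaussian "line," grid, and brute-force search below are illustrative assumptions, not the mapspec implementation (which also fits a resolution change).

```python
import numpy as np

# Synthetic reference and nightly spectra around a 5007 A line.
wave = np.linspace(4980.0, 5030.0, 500)

def line(w, shift=0.0, scale=1.0):
    return scale * np.exp(-0.5 * ((w - 5007.0 - shift) / 2.0) ** 2)

reference = line(wave)
night = line(wave, shift=0.4, scale=1.3)   # misaligned, mis-scaled observation

# Brute-force search for the wavelength shift; the optimal flux rescaling
# at each trial shift follows in closed form from least squares.
best = (np.inf, 0.0, 1.0)
for s in np.linspace(-1.0, 1.0, 201):
    shifted = np.interp(wave, wave - s, night)          # undo a shift of s
    scale = np.dot(shifted, reference) / np.dot(shifted, shifted)
    resid = np.sum((scale * shifted - reference) ** 2)
    if resid < best[0]:
        best = (resid, s, scale)
_, shift_fit, scale_fit = best  # shift_fit recovers the injected 0.4 A shift
```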
An arm phantom for in vivo determination of Americium-241 in bone
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kephart, G.S.; Palmer, H.E.
1988-04-01
The focus of this research has been to construct a realistic arm phantom as a calibration tool for estimating ²⁴¹Am in bone. The United States Transuranium Registry (USTR), through its program of whole body donations, continues to provide data on transuranic incorporation in man that would not otherwise be readily available (Norwood 1972; Breitenstein 1981; Swint, et al. 1985). This project uses well-characterized human bones loaned by the USTR for the construction of realistic phantoms for improvement of whole-body counter calibrations. 27 refs., 2 figs., 4 tabs.
Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario
2015-01-01
Bivariate linear and generalized linear random effects models are frequently used to perform a diagnostic meta-analysis. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. Bivariate linear random effects and a bivariate finite mixture model are used. The latter model is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested approach of a bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example. Both classes show high sensitivity but two clearly different levels of specificity. For the procalcitonin example, this approach identifies three latent classes of diagnostic accuracy. Here, sensitivities and specificities differ considerably, such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to model between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present. Copyright © 2015 Elsevier Inc. All rights reserved.
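A componentwise sROC curve for one latent class can be sketched from a bivariate normal model on the logit scale: regress logit(sensitivity) on logit(specificity) within the component and trace the implied curve. The parameter values below are invented for illustration, and this is a simplified sketch of the idea, not the authors' estimation procedure.

```python
import math

# Illustrative parameters of one mixture component on the logit scale:
# means, standard deviations, and cross-correlation of (logit Se, logit Sp).
mu_se, mu_sp = 1.8, 1.2
sd_se, sd_sp, rho = 0.6, 0.8, -0.5

def logit(p):
    return math.log(p / (1.0 - p))

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def sroc(fpr):
    """Componentwise sROC: sensitivity as a function of false positive rate."""
    l_sp = logit(1.0 - fpr)
    # Conditional-mean (regression) line of logit Se on logit Sp.
    l_se = mu_se + rho * (sd_se / sd_sp) * (l_sp - mu_sp)
    return expit(l_se)

points = [(fpr, sroc(fpr)) for fpr in (0.05, 0.1, 0.2, 0.5)]
```

With a negative correlation (sensitivity trading off against specificity, as in the procalcitonin example) the curve rises with the false positive rate, as an ROC curve should.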
Tunable lasers for water vapor measurements and other lidar applications
NASA Technical Reports Server (NTRS)
Gammon, R. W.; Mcilrath, T. J.; Wilkerson, T. D.
1977-01-01
A tunable dye laser suitable for differential absorption (DIAL) measurements of water vapor in the troposphere was constructed. A multi-pass absorption cell for calibration was also constructed for use in atmospheric DIAL measurements of water vapor.
VizieR Online Data Catalog: SNLS and SDSS SN surveys photometric calibration (Betoule+, 2013)
NASA Astrophysics Data System (ADS)
Betoule, M.; Marriner, J.; Regnault, N.; Cuillandre, J.-C.; Astier, P.; Guy, J.; Balland, C.; El Hage, P.; Hardin, D.; Kessler, R.; Le Guillou, L.; Mosher, J.; Pain, R.; Rocci, P.-F.; Sako, M.; Schahmaneche, K.
2012-11-01
We present a joint photometric calibration for the SNLS and the SDSS supernova surveys. Our main deliverables are catalogs of natural AB magnitudes for a large set of selected tertiary standard stars covering the science fields of both surveys. Those catalogs are calibrated to the AB flux scale through observations of 5 primary spectrophotometric standard stars, for which HST-STIS spectra are available in the CALSPEC database. The uncertainties associated with this calibration are delivered as a single covariance matrix. We also provide a model of the transmission efficiency of the SNLS photometric instrument MegaCam. Those transmission functions are required for the interpretation of MegaCam natural magnitudes in terms of physical fluxes. Similar curves for the SDSS photometric instrument have been published in Doi et al. (2010AJ....139.1628D). Lastly, we release the measured magnitudes of the five CALSPEC standard stars in the magnitude system of the tertiary catalogs. This makes it possible to update the calibration of the tertiary catalogs if CALSPEC spectra for the primary standards are revised. (11 data files).
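The AB flux scale underlying these catalogs is defined by the standard relation m_AB = -2.5 log₁₀(f_ν) - 48.60, with f_ν in erg s⁻¹ cm⁻² Hz⁻¹; a short check of the zero point:

```python
import math

def ab_mag(f_nu):
    """AB magnitude from flux density f_nu in erg s^-1 cm^-2 Hz^-1."""
    return -2.5 * math.log10(f_nu) - 48.60

# The AB zero point corresponds to 3631 Jy = 3.631e-20 erg s^-1 cm^-2 Hz^-1,
# so this flux density should map to magnitude ~0.
m0 = ab_mag(3.631e-20)
```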
Calibration of Solar Radio Spectrometer of the Purple Mountain Observatory
NASA Astrophysics Data System (ADS)
Lei, LU; Si-ming, LIU; Qi-wu, SONG; Zong-jun, NING
2015-10-01
Calibration is a basic and important task in solar radio spectral observations: it not only derives the solar radio flux, an important physical quantity for solar observations, but also removes the flat field of the radio spectrometer so that the radio spectrogram is displayed clearly. In this paper, we first introduce the basic calibration method based on the data of the solar radio spectrometer of Purple Mountain Observatory. We then analyze the variation of the calibration coefficients, and give the calibrated results for a few flares. These results are compared with those of the Nobeyama solar radio polarimeter and the hard X-ray observations of the RHESSI (Reuven Ramaty High Energy Solar Spectroscopic Imager) satellite, and the comparison shows that they are consistent with the characteristics of typical solar flare light curves. In particular, the analysis of the correlation between the variation of the radio flux and the variation of the hard X-ray flux in the pulsing phase of a flare indicates that these observations can be used to study the relevant radiation mechanism, as well as the related energy release and particle acceleration processes.
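A minimal sketch of the two calibration roles named above, applying per-channel coefficients and removing the flat field, might look as follows; the gain model and background estimate are assumptions for illustration, not the observatory's published pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy raw dynamic spectrum: frequency channels x time samples (counts).
raw = 500.0 + 20.0 * rng.random((4, 6))

# Flat field / quiet-Sun background estimated per frequency channel.
background = raw.mean(axis=1, keepdims=True)

# Hypothetical per-channel calibration coefficients (flux units per count).
gain = np.array([[1.1], [0.9], [1.0], [1.05]])

# Calibrated flux: background-subtracted counts scaled by the coefficients,
# so channel-to-channel gain structure (the flat field) is removed.
flux = gain * (raw - background)
```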