Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki
2016-01-01
Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and a step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range. PMID:28144120
40 CFR 89.323 - NDIR analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...
40 CFR 89.323 - NDIR analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...
40 CFR 89.323 - NDIR analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...
40 CFR 89.323 - NDIR analyzer calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...
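The CFR excerpts above describe developing a calibration curve for each analyzer range and permitting a linear calibration when the range is nearly linear. The following is a minimal sketch, under one plausible reading of the 2 percent criterion and with made-up span-gas readings, comparing a least-squares polynomial calibration against a linear one; the data values and the degree-3 polynomial are assumptions, not taken from the regulation.

```python
import numpy as np

# Hypothetical span-gas calibration points (percent of full scale).
span_gas_conc = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])   # known concentrations
analyzer_resp = np.array([0.0, 14.6, 29.5, 44.8, 60.3, 75.5, 90.2])   # analyzer readings

# Least-squares polynomial calibration curve (zero included as a data point).
poly = np.polynomial.Polynomial.fit(analyzer_resp, span_gas_conc, deg=3)
# Candidate linear calibration through the same points.
lin = np.polynomial.Polynomial.fit(analyzer_resp, span_gas_conc, deg=1)

# One plausible linearity check: if the linear curve stays within 2% of full
# scale of the polynomial at every calibration point, use the linear calibration.
full_scale = 100.0
max_dev = np.max(np.abs(poly(analyzer_resp) - lin(analyzer_resp))) / full_scale * 100.0
print(f"max deviation {max_dev:.2f}% of full scale -> linear OK: {max_dev <= 2.0}")
```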
NASA Technical Reports Server (NTRS)
Demoss, J. F. (Compiler)
1971-01-01
Calibration curves for the Apollo 16 command service module pulse code modulation downlink and onboard display are presented. Subjects discussed are: (1) measurement calibration curve format, (2) measurement identification, (3) multi-mode calibration data summary, (4) pulse code modulation bilevel events listing, and (5) calibration curves for instrumentation downlink and meter link.
NASA Astrophysics Data System (ADS)
Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia
2017-09-01
The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either from an empirical study using animals as the subject of the experiment or is derived from mathematical equations. However, determination of the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field because of their capability to exhibit real tissue behavior. The mathematical relationship between optical density (OD) and the optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. The optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method for extended calibration curve studies using other wavelength pairs.
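As a concrete illustration of the OD/ODR relationship described above, the short sketch below computes the optical density from detected flux during diastole and systole at two wavelengths and forms their ratio; the flux values and the red/infrared wavelength pair are hypothetical placeholders, not the study's simulation output.

```python
import numpy as np

def optical_density(flux_diastole, flux_systole):
    # OD: log ratio of detected flux between diastole and systole.
    return np.log10(flux_diastole / flux_systole)

# Hypothetical detector fluxes (arbitrary units) at two wavelengths.
od_red = optical_density(flux_diastole=0.82, flux_systole=0.78)
od_ir = optical_density(flux_diastole=0.90, flux_systole=0.87)

odr = od_red / od_ir   # optical density ratio, the abscissa of the calibration curve
print(f"OD_red={od_red:.4f}, OD_IR={od_ir:.4f}, ODR={odr:.3f}")
```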
In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignore...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban
2018-05-01
We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σt max and, to a lesser extent, the maximum tensile strength σn max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear) Et largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
Balss, K M; Llanos, G; Papandreou, G; Maryanoff, C A
2008-04-01
Raman spectroscopy was used to differentiate each component found in the CYPHER Sirolimus-eluting Coronary Stent. The unique spectral features identified for each component were then used to develop three separate calibration curves to describe the solid phase distribution found on drug-polymer coated stents. The calibration curves were obtained by analyzing confocal Raman spectral depth profiles from a set of 16 unique formulations of drug-polymer coatings sprayed onto stents and planar substrates. The sirolimus model was linear from 0 to 100 wt % of drug. The individual polymer calibration curves for poly(ethylene-co-vinyl acetate) [PEVA] and poly(n-butyl methacrylate) [PBMA] were also linear from 0 to 100 wt %. The calibration curves were tested on three independent drug-polymer coated stents. The sirolimus calibration predicted the drug content within 1 wt % of the laboratory assay value. The polymer calibrations predicted the content within 7 wt % of the formulation solution content. Attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectra from five formulations confirmed a linear response to changes in sirolimus and polymer content. Copyright 2007 Wiley Periodicals, Inc.
Calibration and accuracy analysis of a focused plenoptic camera
NASA Astrophysics Data System (ADS)
Zeller, N.; Quint, F.; Stilla, U.
2014-08-01
In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression for the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method which is already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve-fitting approach, which is based on a Taylor-series approximation. Both model-based methods show significant advantages compared to the curve-fitting method. They need fewer reference points for calibration than the curve-fitting method and, moreover, supply a function which is valid beyond the range of calibration. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and is compared to the analytical evaluation.
40 CFR 90.321 - NDIR analyzer calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... the form of the following equation (1) or (2). Include zero as a data point. Compensation for known...
Dried blood spot analysis of creatinine with LC-MS/MS in addition to immunosuppressants analysis.
Koster, Remco A; Greijdanus, Ben; Alffenaar, Jan-Willem C; Touw, Daan J
2015-02-01
In order to monitor creatinine levels or to adjust the dosage of renally excreted or nephrotoxic drugs, the analysis of creatinine in dried blood spots (DBS) could be a useful addition to DBS analysis. We developed a LC-MS/MS method for the analysis of creatinine in the same DBS extract that was used for the analysis of tacrolimus, sirolimus, everolimus, and cyclosporine A in transplant patients with the use of Whatman FTA DMPK-C cards. The method was validated using three different strategies: a seven-point calibration curve using the intercept of the calibration to correct for the natural presence of creatinine in reference samples, a one-point calibration curve at an extremely high concentration in order to diminish the contribution of the natural presence of creatinine, and the use of creatinine-[(2)H3] with an eight-point calibration curve. The validated range for creatinine was 120 to 480 μmol/L (seven-point calibration curve), 116 to 7000 μmol/L (1-point calibration curve), and 1.00 to 400.0 μmol/L for creatinine-[(2)H3] (eight-point calibration curve). The precision and accuracy results for all three validations showed a maximum CV of 14.0% and a maximum bias of -5.9%. Creatinine in DBS was found stable at ambient temperature and 32 °C for 1 week and at -20 °C for 29 weeks. Good correlations were observed between patient DBS samples and routine enzymatic plasma analysis and showed the capability of the DBS method to be used as an alternative for creatinine plasma measurement.
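The first validation strategy above uses the intercept of a multi-point calibration curve to account for the creatinine naturally present in the blank matrix. The sketch below illustrates that idea with synthetic response ratios; the numbers and the analyte/internal-standard response variable are assumptions, not the published data.

```python
import numpy as np

added_creatinine = np.array([0.0, 120.0, 180.0, 240.0, 300.0, 360.0, 480.0])  # µmol/L spiked
response_ratio = np.array([0.45, 1.08, 1.40, 1.71, 2.02, 2.33, 2.96])         # analyte/IS (synthetic)

slope, intercept = np.polyfit(added_creatinine, response_ratio, 1)
endogenous = intercept / slope   # µmol/L equivalent of the natural creatinine in the blank

def total_concentration(response):
    # Added concentration implied by the calibration line plus the endogenous part.
    return (response - intercept) / slope + endogenous

print(f"endogenous ≈ {endogenous:.0f} µmol/L, sample at ratio 1.9 ≈ {total_concentration(1.9):.0f} µmol/L")
```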
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.; Lock, James A.
1993-01-01
Scattering calculations using a more detailed model of the multimode laser beam in the forward-scattering spectrometer probe (FSSP) were carried out using a recently developed extension to Mie scattering theory. From this model, new calibration curves for the FSSP were calculated. The difference between the old calibration curves and the new ones is small for droplet diameters less than 10 micrometers, but the difference increases to approximately 10% at diameters of 50 micrometers. When using glass beads to calibrate the FSSP, calibration errors can be minimized by using glass beads of many different diameters over the entire range of the FSSP. If the FSSP is calibrated using one-diameter glass beads, then the new formalism is necessary to extrapolate the calibration over the entire range.
Numerical simulations of flow fields through conventionally controlled wind turbines & wind farms
NASA Astrophysics Data System (ADS)
Emre Yilmaz, Ali; Meyers, Johan
2014-06-01
In the current study, an Actuator-Line Model (ALM) is implemented in our in-house pseudo-spectral LES solver SP-WIND, including a turbine controller. Below rated wind speed, turbines are controlled by a standard-torque-controller aiming at maximum power extraction from the wind. Above rated wind speed, the extracted power is limited by a blade pitch controller which is based on a proportional-integral type control algorithm. This model is used to perform a series of single turbine and wind farm simulations using the NREL 5MW turbine. First of all, we focus on below-rated wind speed, and investigate the effect of the farm layout on the controller calibration curves. These calibration curves are expressed in terms of nondimensional torque and rotational speed, using the mean turbine-disk velocity as reference. We show that this normalization leads to calibration curves that are independent of wind speed, but the calibration curves do depend on the farm layout, in particular for tightly spaced farms. Compared to turbines in a lone-standing set-up, turbines in a farm experience a different wind distribution over the rotor due to the farm boundary-layer interaction. We demonstrate this for fully developed wind-farm boundary layers with aligned turbine arrangements at different spacings (5D, 7D, 9D). Further we also compare calibration curves obtained from full farm simulations with calibration curves that can be obtained at a much lower cost using a minimal flow unit.
Seon, C R; Hong, J H; Jang, J; Lee, S H; Choe, W; Lee, H H; Cheon, M S; Pak, S; Lee, H G; Biel, W; Barnsley, R
2014-11-01
To optimize the design of the ITER vacuum ultraviolet (VUV) spectrometer, a prototype VUV spectrometer was developed. The sensitivity calibration curve of the spectrometer was calculated from the mirror reflectivity, the grating efficiency, and the detector efficiency. The calibration curve was consistent with the calibration points derived in an experiment using a calibrated hollow cathode lamp. For application of the prototype ITER VUV spectrometer, the prototype was installed at KSTAR, and various impurity emission lines could be measured. By analyzing about 100 shots, a strong positive correlation between the O VI and C IV emission intensities was found.
Self-calibrating multiplexer circuit
Wahl, Chris P.
1997-01-01
A time domain multiplexer system with automatic determination of acceptable multiplexer output limits, error determination, or correction is comprised of a time domain multiplexer, a computer, a constant current source capable of at least three distinct current levels, and two series resistances employed for calibration and testing. A two-point linear calibration curve defining acceptable multiplexer voltage limits may be defined by the computer by determining the voltage output of the multiplexer in response to very accurately known input signals developed from predetermined current levels across the series resistances. Drift in the multiplexer may be detected by the computer when the output voltage limits expected during normal operation are exceeded, or when the relationship defined by the calibration curve is invalidated.
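A minimal sketch of the two-point calibration idea in this record: two accurately known input voltages are developed by driving known currents through a series resistance, and the multiplexer readings at those points define a calibration line that can be inverted or used to set acceptable output limits. The current levels, resistance, and readings below are hypothetical.

```python
# Hypothetical constant-current levels and series resistance used for calibration.
current_a, current_b = 1.0e-3, 4.0e-3    # A
series_resistance = 1.0e3                # ohm

v_in_a = current_a * series_resistance   # 1.0 V known input
v_in_b = current_b * series_resistance   # 4.0 V known input
v_out_a, v_out_b = 1.02, 4.05            # multiplexer readings at those inputs (hypothetical)

gain = (v_out_b - v_out_a) / (v_in_b - v_in_a)
offset = v_out_a - gain * v_in_a

def corrected_input(v_out):
    # Invert the two-point calibration line to recover the true input voltage.
    return (v_out - offset) / gain

print(f"gain={gain:.4f}, offset={offset:.4f} V, 2.50 V reading -> {corrected_input(2.50):.3f} V input")
```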
1989-09-01
List of figures (excerpt): interconnection wiring diagram for the ESA; typical gain versus total count curve for the CEM; calibration curve for energy bin 12 of the ion ESA; flight ESA S/N001 calibration curves; calibration curves for SPM S/N001 and SPM S/N002.
Rondeau, M; Rouleau, M
1981-06-01
Using semen from bull, boar, and stallion, as well as different spectrophotometers, we established the calibration curves relating the optical density of a sperm sample to the sperm count obtained on the hemacytometer. The results show that, for a given spectrophotometer, the calibration curve is not characteristic of the animal species we studied. The differences in size of the spermatozoa are probably too small to account for the anticipated specificity of the calibration curve. Furthermore, the fact that different dilution rates must be used, because of the vastly different concentrations of spermatozoa characteristic of those species, has no effect on the calibration curves, since the dilution rate is shown to be artefactual. On the other hand, for a given semen, the calibration curve varies depending upon the spectrophotometer used. However, if two instruments have the same characteristics in terms of spectral bandwidth, the calibration curves are not statistically different.
GIADA: extended calibration activity: . the Electrostatic Micromanipulator
NASA Astrophysics Data System (ADS)
Sordini, R.; Accolla, M.; Della Corte, V.; Rotundi, A.
GIADA (Grain Impact Analyser and Dust Accumulator), one of the scientific instruments onboard the Rosetta/ESA space mission, is devoted to studying the dynamical properties of dust particles ejected by the short-period comet 67P/Churyumov-Gerasimenko. In preparation for the scientific phase of the mission, we are performing laboratory calibration activities on the GIADA Proto Flight Model (PFM), housed in a clean room in our laboratory. The aim of the calibration activity is to characterize the response curves of the GIADA measurement sub-systems. These curves are then correlated with the calibration curves obtained for the GIADA payload onboard the Rosetta S/C. The calibration activity involves two of the three sub-systems constituting GIADA: the Grain Detection System (GDS) and the Impact Sensor (IS). To get reliable calibration curves, a statistically relevant number of grains has to be dropped or shot into the GIADA instrument. Particle composition, structure, size, optical properties and porosity have been selected in order to obtain realistic cometary dust analogues. For each selected type of grain, we estimated that at least one hundred shots are needed to obtain a calibration curve. In order to manipulate such a large number of particles, we have designed and developed an innovative electrostatic system able to capture, manipulate and shoot particles with sizes in the range 20-500 μm. The Electrostatic Micromanipulator (EM) is installed on a manual handling system composed of X-Y-Z micrometric slides with a 360° rotational stage along Z, mounted on an optical bench. In the present work, we present tests of the EM using ten different materials with dimensions in the range 50-500 μm: the experimental results are in compliance with the requirements.
NASA Astrophysics Data System (ADS)
Jiang, Shyh-Biau; Yeh, Tse-Liang; Chen, Li-Wu; Liu, Jann-Yenq; Yu, Ming-Hsuan; Huang, Yu-Qin; Chiang, Chen-Kiang; Chou, Chung-Jen
2018-05-01
In this study, we construct a photomultiplier calibration system. This calibration system can help scientists measure and establish the characteristic curve of photon count versus light intensity. The system uses an innovative 10-fold optical attenuator to enable an optical power meter to calibrate photomultiplier tubes whose resolution is much greater than that of the optical power meter. A simulation is first conducted to validate the feasibility of the system, and then the system construction, including optical design, circuit design, and software algorithm, is realized. The simulation generally agrees with measurement data from the constructed system, which are further used to establish the characteristic curve of photon count versus light intensity.
Refinement of moisture calibration curves for nuclear gage : interim report no. 1.
DOT National Transportation Integrated Search
1972-01-01
This study was initiated to determine the correct moisture calibration curves for different nuclear gages. It was found that the Troxler Model 227 had a linear response between count ratio and moisture content. Also, the two calibration curves for th...
Influence of Individual Differences on the Calculation Method for FBG-Type Blood Pressure Sensors
Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun; Kobayashi, Yuka
2016-12-28
In this paper, we propose a blood pressure calculation and associated measurement method that uses a fiber Bragg grating (FBG) sensor. There are several points at which the pulse can be measured on the surface of the human body, and when an FBG sensor is located at any of these points, the pulse wave signal can be measured. The measured waveform is similar to the acceleration pulse wave. The pulse wave signal changes depending on several factors, including whether or not the individual is healthy and/or elderly. The measured pulse wave signal can be used to calculate the blood pressure using a calibration curve, which is constructed by a partial least squares (PLS) regression analysis using a reference blood pressure and the pulse wave signal. In this paper, we focus on the influence of individual differences on the blood pressure calculated from each calibration curve. In our study, the blood pressures calculated from the individual and overall calibration curves were compared, and our results show that the blood pressure calculated from the overall calibration curve had a lower measurement accuracy than that based on an individual calibration curve. We also found that the influence of individual differences on the blood pressure calculated using the FBG sensor method was very low. Therefore, the FBG sensor method that we developed for measuring blood pressure was found to be suitable for use by many people. PMID:28036015
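The calibration curve in this record is built by partial least squares (PLS) regression from pulse-wave signals and reference blood pressures. The sketch below shows the generic construction with scikit-learn on synthetic data; the array shapes, the five PLS components, and the pressure values are assumptions, not the authors' dataset or settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_subjects, n_points_per_wave = 40, 250
X = rng.normal(size=(n_subjects, n_points_per_wave))   # pulse-wave signals (synthetic)
y = 110.0 + 15.0 * rng.normal(size=n_subjects)         # reference systolic pressures (synthetic)

pls = PLSRegression(n_components=5)
pls.fit(X, y)                                  # the "calibration curve" relating waveform to pressure

estimated_bp = pls.predict(X[:1]).ravel()[0]   # pressure calculated for a new pulse wave
print(f"estimated blood pressure: {estimated_bp:.1f}")
```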
Preliminary calibration of the ACP safeguards neutron counter
NASA Astrophysics Data System (ADS)
Lee, T. H.; Kim, H. D.; Yoon, J. S.; Lee, S. Y.; Swinhoe, M.; Menlove, H. O.
2007-10-01
The Advanced Spent Fuel Conditioning Process (ACP), a kind of pyroprocess, has been developed at the Korea Atomic Energy Research Institute (KAERI). Since there are no IAEA safeguards criteria for this process, KAERI has developed a neutron coincidence counter to make it possible to perform material control and accounting (MC&A) for its ACP materials, for the purpose of transparency in the peaceful uses of nuclear materials at KAERI. The test results of the ACP Safeguards Neutron Counter (ASNC) show a satisfactory performance for the Doubles count measurement, with a low measurement error for its cylindrical sample cavity. The neutron detection efficiency is about 21%, with an error of ±1.32% along the axial direction of the cavity. Using two 252Cf neutron sources, we obtained various parameters for the Singles and Doubles rates of the ASNC. The Singles, Doubles, and Triples rates for a 252Cf point source were obtained using the MCNPX code; the results for the ft8 cap multiplicity tally option, with the values of ɛ, fd, and ft measured with a strong source, most closely match the measurement results, to within a 1% error. A preliminary calibration curve for the ASNC was generated by using the point-model equation relationship between 244Cm and 252Cf, and the calibration coefficient for a non-multiplying sample is 2.78×10⁵ (Doubles counts/s/g 244Cm). Preliminary calibration curves for the ACP samples were also obtained using an MCNPX simulation. A neutron multiplication influence on an increase of the Doubles rate for a metal ingot and UO2 powder is clearly observed. These calibration curves will be modified and complemented when hot calibration samples become available. To verify the validity of this calibration curve, a measurement of spent fuel standards with a known 244Cm mass will be performed in the near future.
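Using the preliminary calibration coefficient quoted in this abstract (2.78×10⁵ Doubles counts/s per g of 244Cm for a non-multiplying sample), a measured Doubles rate converts directly to a 244Cm mass, as in the sketch below; the measured rate is a hypothetical value.

```python
CAL_COEFF = 2.78e5     # Doubles counts/s per g 244Cm (non-multiplying sample, from the abstract)
doubles_rate = 1.4e3   # measured Doubles rate in counts/s (hypothetical)

cm244_mass_g = doubles_rate / CAL_COEFF
print(f"estimated 244Cm mass ≈ {cm244_mass_g * 1e3:.2f} mg")
```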
ERIC Educational Resources Information Center
Anderson Koenig, Judith; Roberts, James S.
2007-01-01
Methods for linking item response theory (IRT) parameters are developed for attitude questionnaire responses calibrated with the generalized graded unfolding model (GGUM). One class of IRT linking methods derives the linking coefficients by comparing characteristic curves, and three of these methods---test characteristic curve (TCC), item…
Calibration of the Concorde radiation detection instrument and measurements at SST altitude.
DOT National Transportation Integrated Search
1971-06-01
Performance tests were carried out on a solar cosmic radiation detection instrument developed for the Concorde SST. The instrument calibration curve (log dose-rate vs instrument reading) was reasonably linear from 0.004 to 1 rem/hr for both gamma rad...
2017-11-01
...sent from light-emitting diodes (LEDs) of 5 colors (green, red, white, amber, and blue). Experiment 1 involved controlled laboratory measurements of... Appendix figure and table list (excerpt): red and green LED calibration curves with quadratic curve fits and R² values; red and green LED calibration measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Z; Reyhan, M; Huang, Q
Purpose: The calibration of Hounsfield units (HU) to relative proton stopping powers (RSP) is a crucial component in assuring the accurate delivery of proton therapy dose distributions to patients. The purpose of this work is to assess the uncertainty of the CT calibration considering the impact of CT slice thickness, the position of the plug within the phantom, and phantom size. Methods: The stoichiometric calibration method was employed to develop the CT calibration curve. A Gammex 467 tissue characterization phantom was scanned in a Tomotherapy Cheese phantom and a Gammex 451 phantom using a GE CT scanner. Each plug was inserted individually into the same position of the inner and outer rings of the phantoms, respectively. Slice thicknesses of 1.25 mm and 2.5 mm were used; other parameters were kept the same. Results: HU of selected human tissues were calculated based on the fitted coefficients (Kph, Kcoh and KKN), and RSP were calculated according to the Bethe-Bloch equation. The calibration curve was obtained by fitting the Cheese phantom data with 1.25 mm slice thickness. There is no significant difference in soft tissue if the slice thickness, phantom size, or plug position is changed. For bony structures, RSP increases by up to 1% if the phantom size and the plug position change but the slice thickness stays the same. However, if the slice thickness differs from the one used for the calibration curve, a 0.5%–3% deviation would be expected, depending on the plug position. The inner position shows the largest deviation (on average about 2.5%). Conclusion: RSP shows a clinically insignificant deviation in the soft tissue region. Special attention may be required when using a slice thickness different from that of the calibration curve for bony structures. It is clinically practical to address the 3% deviation due to different slice thickness in the definition of clinical margins.
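For illustration of how a fitted HU-RSP calibration curve is applied, the sketch below does a piecewise-linear lookup from CT number to relative stopping power; the node values are generic placeholders, not the curve fitted in this work.

```python
import numpy as np

# Placeholder calibration nodes (HU, RSP); an actual stoichiometric fit would supply these.
hu_nodes = np.array([-1000.0, -200.0, 0.0, 100.0, 1000.0, 2000.0])
rsp_nodes = np.array([0.001, 0.80, 1.00, 1.09, 1.55, 2.10])

def hu_to_rsp(hu):
    # Piecewise-linear interpolation of the calibration curve.
    return np.interp(hu, hu_nodes, rsp_nodes)

print(hu_to_rsp(np.array([-50.0, 40.0, 800.0])))
```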
A new form of the calibration curve in radiochromic dosimetry. Properties and results.
Tamponi, Matteo; Bona, Rossana; Poggiu, Angela; Marini, Piergiorgio
2016-07-01
This work describes a new form of the calibration curve for radiochromic dosimetry that depends on one fit parameter. Some results are reported to show that the new curve performs as well as those previously used and, more importantly, significantly reduces the dependence on the lot of films, the film orientation on the scanner, and the time after exposure. The form of the response curve makes use of the net optical densities ratio against the dose and has been studied by means of the Beer-Lambert law and a simple modeling of the film. The new calibration curve has been applied to EBT3 films exposed at 6 and 15 MV energy beams of linear accelerators and read out in transmission mode by means of a flatbed color scanner. Its performance has been compared to that of two established forms of the calibration curve, which use the optical density and the net optical density against the dose. Four series of measurements with four lots of EBT3 films were used to evaluate the precision, accuracy, and dependence on the time after exposure, orientation on the scanner, and lot of films. The new calibration curve is roughly subject to the same dose uncertainty, about 2% (1 standard deviation), and has the same accuracy, about 1.5% (dose values between 50 and 450 cGy), as the other calibration curves when films of the same lot are used. Moreover, the new calibration curve, albeit obtained from only one lot of film, shows a good agreement with experimental data from all other lots of EBT3 films used, with an accuracy of about 2% and a relative dose precision of 2.4% (1 standard deviation). The agreement also holds for changes of the film orientation and of the time after exposure. The dose accuracy of this new form of the calibration curve is always equal to or better than those obtained from the two types of curves previously used. The use of the net optical densities ratio considerably reduces the dependence on the lot of films, the landscape/portrait orientation, and the time after exposure. This form of the calibration curve could become even more useful with new optical digital devices using monochromatic light.
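The abstract above states that the new calibration curve uses the ratio of net optical densities and a single fit parameter, but it does not give the functional form. The sketch below only illustrates the generic workflow: computing net optical density from scanner pixel values and fitting a one-parameter proportional model as a stand-in; the pixel values and the dose = k·response model are assumptions, not the published curve.

```python
import numpy as np

def net_od(pv_exposed, pv_unexposed):
    # Net optical density from transmission-mode scanner pixel values.
    return np.log10(pv_unexposed / pv_exposed)

doses = np.array([50.0, 100.0, 200.0, 300.0, 450.0])        # delivered doses, cGy
pv_exposed = np.array([30500, 27200, 22300, 18900, 15200])  # exposed film pixel values (synthetic)
pv_unexposed = 38000                                        # unexposed film pixel value (synthetic)

response = net_od(pv_exposed, pv_unexposed)                 # stand-in response variable
k = np.sum(doses * response) / np.sum(response**2)          # single least-squares parameter
print(f"fitted parameter k ≈ {k:.1f} cGy per unit response")
```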
Direct Breakthrough Curve Prediction From Statistics of Heterogeneous Conductivity Fields
NASA Astrophysics Data System (ADS)
Hansen, Scott K.; Haslauer, Claus P.; Cirpka, Olaf A.; Vesselinov, Velimir V.
2018-01-01
This paper presents a methodology to predict the shape of solute breakthrough curves in heterogeneous aquifers at early times and/or under high degrees of heterogeneity, both cases in which the classical macrodispersion theory may not be applicable. The methodology relies on the observation that breakthrough curves in heterogeneous media are generally well described by lognormal distributions, and mean breakthrough times can be predicted analytically. The log-variance of solute arrival is thus sufficient to completely specify the breakthrough curves, and this is calibrated as a function of aquifer heterogeneity and dimensionless distance from a source plane by means of Monte Carlo analysis and statistical regression. Using the ensemble of simulated groundwater flow and solute transport realizations employed to calibrate the predictive regression, reliability estimates for the prediction are also developed. Additional theoretical contributions include heuristics for the time until an effective macrodispersion coefficient becomes applicable, and also an expression for its magnitude that applies in highly heterogeneous systems. It is seen that the results here represent a way to derive continuous time random walk transition distributions from physical considerations rather than from empirical field calibration.
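A minimal sketch of the construction described above: with the mean breakthrough time predicted analytically and the log-variance of arrival supplied by the calibrated regression, the breakthrough curve is written as a lognormal arrival-time distribution. The two moment values here are placeholders, not results from the paper.

```python
import numpy as np
from scipy import stats

mean_arrival = 120.0   # days, analytically predicted mean breakthrough time (assumed)
log_variance = 0.35    # log-variance of arrival times from the calibrated regression (assumed)

sigma = np.sqrt(log_variance)
mu = np.log(mean_arrival) - 0.5 * log_variance   # chosen so the lognormal mean equals mean_arrival

t = np.linspace(1.0, 500.0, 200)
btc = stats.lognorm.pdf(t, s=sigma, scale=np.exp(mu))   # breakthrough curve (arrival-time density)
print(f"breakthrough curve peaks near t ≈ {t[np.argmax(btc)]:.0f} days")
```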
Luczak, Susan E; Hawkins, Ashley L; Dai, Zheng; Wichmann, Raphael; Wang, Chunming; Rosen, I Gary
2018-08-01
Biosensors have been developed to measure transdermal alcohol concentration (TAC), but converting TAC into interpretable indices of blood/breath alcohol concentration (BAC/BrAC) is difficult because of variations that occur in TAC across individuals, drinking episodes, and devices. We have developed mathematical models and the BrAC Estimator software for calibrating and inverting TAC into quantifiable BrAC estimates (eBrAC). The calibration protocol to determine the individualized parameters for a specific individual wearing a specific device requires a drinking session in which BrAC and TAC measurements are obtained simultaneously. This calibration protocol was originally conducted in the laboratory with breath analyzers used to produce the BrAC data. Here we develop and test an alternative calibration protocol using drinking diary data collected in the field with the smartphone app Intellidrink to produce the BrAC calibration data. We compared BrAC Estimator software results for 11 drinking episodes collected by an expert user when using Intellidrink versus breath analyzer measurements as BrAC calibration data. Inversion phase results indicated that the Intellidrink calibration protocol produced similar eBrAC curves and captured peak eBrAC to within 0.0003%, time of peak eBrAC to within 18 min, and area under the eBrAC curve to within 0.025% alcohol-hours of the breath analyzer calibration protocol. This study provides evidence that drinking diary data can be used in place of breath analyzer data in the BrAC Estimator software calibration procedure, which can reduce participant and researcher burden and expand the potential software user pool beyond researchers studying participants who can drink in the laboratory. Copyright © 2017. Published by Elsevier Ltd.
New approach to calibrating bed load samplers
Hubbell, D.W.; Stevens, H.H.; Skinner, J.V.; Beverage, J.P.
1985-01-01
Cyclic variations in bed load discharge at a point, which are an inherent part of the process of bed load movement, complicate calibration of bed load samplers and preclude the use of average rates to define sampling efficiencies. Calibration curves, rather than efficiencies, are derived by two independent methods using data collected with prototype versions of the Helley‐Smith sampler in a large calibration facility capable of continuously measuring transport rates across a 9 ft (2.7 m) width. Results from both methods agree. Composite calibration curves, based on matching probability distribution functions of samples and measured rates from different hydraulic conditions (runs), are obtained for six different versions of the sampler. Sampled rates corrected by the calibration curves agree with measured rates for individual runs.
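The composite calibration curves in this record come from matching probability distribution functions of sampled and continuously measured rates rather than comparing averages. The sketch below shows one simple way to express that idea, matching empirical quantiles of synthetic sampled and measured rates; the data and the interpolation choice are assumptions, not the study's procedure in detail.

```python
import numpy as np

rng = np.random.default_rng(1)
measured_rates = rng.lognormal(mean=0.0, sigma=0.8, size=500)       # trap-measured transport rates
sampled_rates = 0.7 * rng.lognormal(mean=0.0, sigma=0.8, size=120)  # sampler catches (biased low)

# Match the two empirical distributions at common probability levels.
probs = np.linspace(0.05, 0.95, 19)
q_sampled = np.quantile(sampled_rates, probs)
q_measured = np.quantile(measured_rates, probs)

def calibrate(rate):
    # Calibration curve: map a sampled rate to the distribution-matched measured rate.
    return np.interp(rate, q_sampled, q_measured)

print(f"sampled rate 1.0 -> calibrated rate {calibrate(1.0):.2f}")
```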
Crash prediction modeling for curved segments of rural two-lane two-way highways in Utah.
DOT National Transportation Integrated Search
2015-10-01
This report contains the results of the development of crash prediction models for curved segments of rural two-lane two-way highways in the state of Utah. The modeling effort included the calibration of the predictive model found in the Highway ...
Spectro-photometric determinations of Mn, Fe and Cu in aluminum master alloys
NASA Astrophysics Data System (ADS)
Rehan; Naveed, A.; Shan, A.; Afzal, M.; Saleem, J.; Noshad, M. A.
2016-08-01
Highly reliable, fast and cost-effective spectrophotometric methods have been developed for the determination of Mn, Fe and Cu in aluminum master alloys, based on calibration curves prepared from laboratory standards. The calibration curves are designed so as to provide maximum sensitivity and minimum instrumental error (Mn 1 mg/100 ml-2 mg/100 ml, Fe 0.01 mg/100 ml-0.2 mg/100 ml and Cu 2 mg/100 ml-10 mg/100 ml). The developed spectrophotometric methods produce accurate results when analyzing Mn, Fe and Cu in certified reference materials. In particular, these methods are suitable for all types of Al-Mn, Al-Fe and Al-Cu master alloys (5%, 10%, 50% etc. master alloys). Moreover, the sampling practices suggested herein include a reasonable amount of analytical sample, which truly represents the whole lot of a particular master alloy. A successive dilution technique was used to bring samples within the calibration curve range. Furthermore, the worked-out methods were also found suitable for the analysis of the said elements in ordinary aluminum alloys. However, it was observed that Cu showed considerable interference with Fe; the latter may not be accurately measured in the presence of Cu greater than 0.01%.
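A minimal sketch of the workflow described above: a calibration curve of absorbance versus concentration is fitted from laboratory standards within the stated Cu range, and a successively diluted sample solution is read back through that curve. The absorbances and the dilution factor are hypothetical.

```python
import numpy as np

std_conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])      # Cu standards, mg/100 ml (stated range)
std_abs = np.array([0.11, 0.22, 0.34, 0.45, 0.56])   # measured absorbances (synthetic)
slope, intercept = np.polyfit(std_conc, std_abs, 1)  # calibration curve

sample_abs = 0.40        # absorbance of the diluted sample solution (hypothetical)
dilution_factor = 25.0   # successive dilution applied to reach the calibrated range (assumed)

conc_diluted = (sample_abs - intercept) / slope
conc_original = conc_diluted * dilution_factor
print(f"Cu in original sample solution ≈ {conc_original:.1f} mg/100 ml")
```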
Switzer, P.; Harden, J.W.; Mark, R.K.
1988-01-01
A statistical method for estimating rates of soil development in a given region based on calibration from a series of dated soils is used to estimate ages of soils in the same region that are not dated directly. The method is designed specifically to account for sampling procedures and uncertainties that are inherent in soil studies. Soil variation and measurement error, uncertainties in calibration dates and their relation to the age of the soil, and the limited number of dated soils are all considered. Maximum likelihood (ML) is employed to estimate a parametric linear calibration curve, relating soil development to time or age on suitably transformed scales. Soil variation on a geomorphic surface of a certain age is characterized by replicate sampling of soils on each surface; such variation is assumed to have a Gaussian distribution. The age of a geomorphic surface is described by older and younger bounds. This technique allows age uncertainty to be characterized by either a Gaussian distribution or by a triangular distribution using minimum, best-estimate, and maximum ages. The calibration curve is taken to be linear after suitable (in certain cases logarithmic) transformations, if required, of the soil parameter and age variables. Soil variability, measurement error, and departures from linearity are described in a combined fashion using Gaussian distributions with variances particular to each sampled geomorphic surface and the number of sample replicates. Uncertainty in age of a geomorphic surface used for calibration is described using three parameters by one of two methods. In the first method, upper and lower ages are specified together with a coverage probability; this specification is converted to a Gaussian distribution with the appropriate mean and variance. In the second method, "absolute" older and younger ages are specified together with a most probable age; this specification is converted to an asymmetric triangular distribution with mode at the most probable age. The statistical variability of the ML-estimated calibration curve is assessed by a Monte Carlo method in which simulated data sets repeatedly are drawn from the distributional specification; calibration parameters are reestimated for each such simulation in order to assess their statistical variability. Several examples are used for illustration. The age of undated soils in a related setting may be estimated from the soil data using the fitted calibration curve. A second simulation to assess age estimate variability is described and applied to the examples. © 1988 International Association for Mathematical Geology.
Chan, George C. Y. [Bloomington, IN; Hieftje, Gary M [Bloomington, IN
2010-08-03
A method for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry (ICP-AES). ICP-AES analysis is performed across a plurality of selected locations in the plasma on an unknown sample, collecting the light intensity at one or more selected wavelengths of one or more sought-for analytes, creating a first dataset. The first dataset is then calibrated with a calibration dataset, creating a calibrated first dataset curve. If the calibrated first dataset curve varies with location within the plasma for a selected wavelength, errors are present. Plasma-related errors are then corrected by diluting the unknown sample and performing the same ICP-AES analysis on the diluted unknown sample, creating a calibrated second dataset curve (accounting for the dilution) for the one or more sought-for analytes. The cross-over point of the calibrated dataset curves yields the corrected value (free from plasma-related errors) for each sought-for analyte.
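The sketch below illustrates the cross-over idea in this record: calibrated concentration read-backs are plotted against observation position in the plasma for the original sample and for the dilution-corrected diluted sample, and their intersection is taken as the corrected value. The position axis, line shapes, and 1:2 dilution are synthetic illustrations, not data from the patent.

```python
import numpy as np

position_mm = np.linspace(2.0, 14.0, 25)                  # observation position in the plasma (assumed axis)
conc_original = 10.0 + 0.20 * (position_mm - 8.0)         # calibrated read-back, undiluted sample (synthetic)
conc_diluted = (5.0 - 0.05 * (position_mm - 8.0)) * 2.0   # read-back of a 1:2 dilution, scaled back (synthetic)

# Locate the cross-over point (sign change of the difference), then interpolate.
diff = conc_original - conc_diluted
i = np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0][0]
frac = diff[i] / (diff[i] - diff[i + 1])
x_cross = position_mm[i] + frac * (position_mm[i + 1] - position_mm[i])
c_cross = np.interp(x_cross, position_mm, conc_original)
print(f"cross-over at {x_cross:.2f} mm, corrected concentration ≈ {c_cross:.2f}")
```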
Development of a sensitive monitor for hydrazine
NASA Technical Reports Server (NTRS)
Eiceman, Gary A.; Limero, Thomas; James, John T.
1991-01-01
The development of hand-held, ambient-temperature instruments that utilize ion mobility spectrometry (IMS) in the detection of hydrazine and monomethylhydrazine is reviewed. A development effort to eliminate ammonia interference through altering the ionization chemistry, based on adding 5-nonanone as dopant in the ionization region of the IMS, is presented. Calibration of this instrument conducted before and after STS-37 revealed no more than a 5 percent difference between calibration curves, without any appreciable loss of equipment function.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-01-01
A sophisticated non-linear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the Chi-Squared Matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 sec.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-05-01
A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the "Chi-Squared Matrix" or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s.
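Both records above describe a fit that treats the gravimetric standard masses, with their 0.2% uncertainty, as fitted quantities alongside the calibration parameters. The sketch below uses orthogonal distance regression (scipy.odr) as a stand-in for the VA02A-based program to weight errors in both the masses and the counts; the count data and the straight-line calibration model are assumptions.

```python
import numpy as np
from scipy import odr

mass_mg = np.array([0.1, 0.25, 0.4, 0.6, 0.8, 1.0])                 # freeze-dried standards, mg
mass_err = 0.002 * mass_mg                                          # 0.2% gravimetric uncertainty
counts = np.array([2.1e3, 5.0e3, 8.3e3, 12.1e3, 16.4e3, 20.2e3])    # XRF responses (synthetic)
counts_err = np.sqrt(counts)                                        # counting-statistics errors

def response(beta, m):
    # Stand-in calibration model: counts = b0 + b1 * mass (the report's model may differ).
    return beta[0] + beta[1] * m

fit = odr.ODR(odr.RealData(mass_mg, counts, sx=mass_err, sy=counts_err),
              odr.Model(response), beta0=[0.0, 2.0e4]).run()
print("calibration parameters:", fit.beta, "+/-", fit.sd_beta)
```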
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rinaldi, I; Ludwig Maximilian University, Garching, DE; Heidelberg University Hospital, Heidelberg, DE
2015-06-15
Purpose: We present an improved method to calculate patient-specific calibration curves to convert X-ray computed tomography (CT) Hounsfield Units (HU) to relative stopping powers (RSP) for proton therapy treatment planning. Methods: By optimizing the HU-RSP calibration curve, the difference between a proton radiographic image and a digitally reconstructed X-ray radiography (DRR) is minimized. The feasibility of this approach has previously been demonstrated. This scenario assumes that all discrepancies between proton radiography and DRR originate from uncertainties in the HU-RSP curve. In reality, external factors cause imperfections in the proton radiography, such as misalignment compared to the DRR and unfaithful representation of geometric structures (“blurring”). We analyze these effects based on synthetic datasets of anthropomorphic phantoms and suggest an extended optimization scheme which explicitly accounts for these effects. Performance of the method has been tested for various simulated irradiation parameters. The ultimate purpose of the optimization is to minimize uncertainties in the HU-RSP calibration curve. We therefore suggest and perform a thorough statistical treatment to quantify the accuracy of the optimized HU-RSP curve. Results: We demonstrate that without extending the optimization scheme, spatial blurring (equivalent to FWHM=3mm convolution) in the proton radiographies can cause up to 10% deviation between the optimized and the ground truth HU-RSP calibration curve. Instead, results obtained with our extended method reach 1% or better correspondence. We have further calculated gamma index maps for different acceptance levels. With DTA=0.5mm and RD=0.5%, a passing ratio of 100% is obtained with the extended method, while an optimization neglecting effects of spatial blurring only reaches ∼90%. Conclusion: Our contribution underlines the potential of a single proton radiography to generate a patient-specific calibration curve and to improve dose delivery by optimizing the HU-RSP calibration curve, as long as all sources of systematic incongruence are properly modeled.
Calibration of thermocouple psychrometers and moisture measurements in porous materials
NASA Astrophysics Data System (ADS)
Guz, Łukasz; Sobczuk, Henryk; Połednik, Bernard; Guz, Ewa
2016-07-01
The paper presents an in situ method for calibrating Peltier psychrometric sensors, which allows the water potential to be determined. Water potential can be easily recalculated into the moisture content of a porous material. In order to obtain correct results for the water potential, each probe should be calibrated. NaCl salt solutions with molar concentrations of 0.4 M, 0.7 M, 1.0 M and 1.4 M were used for calibration, which enabled osmotic potentials in the range −1791 kPa to −6487 kPa to be obtained. Traditionally, the value of the voltage generated on the thermocouples during wet-bulb temperature depression is calculated in order to determine the calibration function for psychrometric in situ sensors. In the new calibration method, the area under the psychrometric curve, along with the Peltier cooling current and its duration, was taken into consideration. During calibration, different cooling currents were applied for each salt solution, i.e. 3, 5, and 8 mA, as well as different cooling durations for each current (from 2 to 100 s in 2 s steps). Afterwards, the shape of each psychrometric curve was thoroughly examined and the area under the psychrometric curve was computed. The results of the experiment indicate that there is a robust correlation between the area under the psychrometric curve and the water potential. Calibration formulas were derived on the basis of these features.
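A minimal sketch of the calibration step described above: for each NaCl solution of known osmotic potential, the area under the recorded psychrometric curve is computed and a calibration formula relating area to water potential is fitted. The two end potentials come from the abstract; the intermediate potentials and all areas are assumed values.

```python
import numpy as np

# Osmotic potentials of the calibration solutions (kPa); the outer two are from the abstract,
# the middle two are assumed for illustration.
water_potential = np.array([-1791.0, -3100.0, -4500.0, -6487.0])
curve_area = np.array([21.5, 37.2, 54.8, 76.1])   # area under psychrometric curve, µV·s (synthetic)

coeffs = np.polyfit(curve_area, water_potential, 1)   # linear calibration formula
psi = np.polyval(coeffs, 45.0)                        # potential for a newly measured area
print(f"water potential ≈ {psi:.0f} kPa for an area of 45.0 µV·s")
```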
X-ray Diffraction Crystal Calibration and Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael J. Haugh; Richard Stewart; Nathan Kugland
2009-06-05
National Security Technologies' X-ray Laboratory comprises a multi-anode Manson-type source and a Henke-type source that incorporates a dual goniometer and an XYZ translation stage. The first goniometer is used to isolate a particular spectral band. The Manson operates up to 10 kV and the Henke up to 20 kV. The Henke rotation stages and translation stages are automated. Procedures have been developed to characterize and calibrate various NIF diagnostics and their components. The diagnostics include X-ray cameras, gated imagers, streak cameras, and other X-ray imaging systems. Components that have been analyzed include filters, filter arrays, grazing incidence mirrors, and various crystals, both flat and curved. Recent efforts on the Henke system are aimed at characterizing and calibrating imaging crystals and curved crystals used as the major component of an X-ray spectrometer. The presentation will concentrate on these results. The work has been done at energies ranging from 3 keV to 16 keV. The major goal was to evaluate the performance quality of the crystal for its intended application. For the imaging crystals we measured the laser beam reflection offset from the X-ray beam and the reflectivity curves. For the curved spectrometer crystal, which was a natural crystal, resolving power was critical. It was first necessary to find sources of crystals that had sufficiently narrow reflectivity curves. It was then necessary to determine which crystals retained their resolving power after being thinned and glued to a curved substrate.
NASA Astrophysics Data System (ADS)
He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno
2018-03-01
This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas are used to identify the starting and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparably to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where other calibration data sets than discharge are often not available or very costly to obtain.
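One ingredient of the HPC extraction described above is the annual cumulative curve of the difference between daily mean temperature and the melt threshold, whose rise marks when melt can begin. The sketch below uses a positive-degree-day reading of that curve on synthetic temperatures; the 0 °C threshold and the temperature series are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
doy = np.arange(1, 366)
daily_temp = -8.0 + 18.0 * np.sin((doy - 100) / 365.0 * 2.0 * np.pi) + rng.normal(0.0, 2.0, doy.size)
melt_threshold = 0.0   # °C, assumed melt threshold temperature

# Annual cumulative curve of the (positive) temperature excess over the melt threshold.
cumulative = np.cumsum(np.maximum(daily_temp - melt_threshold, 0.0))
melt_onset = doy[np.argmax(cumulative > 0.0)]   # first day the cumulative curve departs from zero
print(f"melt-dominated period starts around day {melt_onset}")
```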
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reimer, P J; Baillie, M L; Bard, E
2005-10-02
Radiocarbon calibration curves are essential for converting radiocarbon dated chronologies to the calendar timescale. Prior to the 1980's numerous differently derived calibration curves based on radiocarbon ages of known age material were in use, resulting in ''apples and oranges'' comparisons between various records (Klein et al., 1982), further complicated by until then unappreciated inter-laboratory variations (International Study Group, 1982). The solution was to produce an internationally-agreed calibration curve based on carefully screened data with updates at 4-6 year intervals (Klein et al., 1982; Stuiver and Reimer, 1986; Stuiver and Reimer, 1993; Stuiver et al., 1998). The IntCal working group has continued this tradition with the active participation of researchers who produced the records that were considered for incorporation into the current, internationally-ratified calibration curves, IntCal04, SHCal04, and Marine04, for Northern Hemisphere terrestrial, Southern Hemisphere terrestrial, and marine samples, respectively (Reimer et al., 2004; Hughen et al., 2004; McCormac et al., 2004). Fairbanks et al. (2005), accompanied by a more technical paper, Chiu et al. (2005), and an introductory comment, Adkins (2005), recently published a ''calibration curve spanning 0-50,000 years''. Fairbanks et al. (2005) and Chiu et al. (2005) have made a significant contribution to the database on which the IntCal04 and Marine04 calibration curves are based. These authors have now taken the further step to derive their own radiocarbon calibration extending to 50,000 cal BP, which they claim is superior to that generated by the IntCal working group. In their papers, these authors are strongly critical of the IntCal calibration efforts for what they claim to be inadequate screening and sample pretreatment methods. While these criticisms may ultimately be helpful in identifying a better set of protocols, we feel that there are also several erroneous and misleading statements made by these authors which require a response by the IntCal working group. Furthermore, we would like to comment on the sample selection criteria, pretreatment methods, and statistical methods utilized by Fairbanks et al. in derivation of their own radiocarbon calibration.
Zastrow, Stefan; Brookman-May, Sabine; Cong, Thi Anh Phuong; Jurk, Stanislaw; von Bar, Immanuel; Novotny, Vladimir; Wirth, Manfred
2015-03-01
To predict outcome of patients with renal cell carcinoma (RCC) who undergo surgical therapy, risk models and nomograms are valuable tools. External validation on independent datasets is crucial for evaluating accuracy and generalizability of these models. The objective of the present study was to externally validate the postoperative nomogram developed by Karakiewicz et al. for prediction of cancer-specific survival. A total of 1,480 consecutive patients with a median follow-up of 82 months (IQR 46-128) were included into this analysis with 268 RCC-specific deaths. Nomogram-estimated survival probabilities were compared with survival probabilities of the actual cohort, and concordance indices were calculated. Calibration plots and decision curve analyses were used for evaluating calibration and clinical net benefit of the nomogram. Concordance between predictions of the nomogram and survival rates of the cohort was 0.911 after 12, 0.909 after 24 months and 0.896 after 60 months. Comparison of predicted probabilities and actual survival estimates with calibration plots showed an overestimation of tumor-specific survival based on nomogram predictions of high-risk patients, although calibration plots showed a reasonable calibration for probability ranges of interest. Decision curve analysis showed a positive net benefit of nomogram predictions for our patient cohort. The postoperative Karakiewicz nomogram provides a good concordance in this external cohort and is reasonably calibrated. It may overestimate tumor-specific survival in high-risk patients, which should be kept in mind when counseling patients. A positive net benefit of nomogram predictions was proven.
Pajic, J; Rakic, B; Jovicic, D; Milovanovic, A
2014-10-01
Biological dosimetry using chromosome damage biomarkers is a valuable dose assessment method in cases of radiation overexposure with or without physical dosimetry data. In order to estimate dose by biodosimetry, any biological dosimetry service has to have its own dose response calibration curve. This paper reveals the results obtained after irradiation of blood samples from fourteen healthy male and female volunteers in order to establish biodosimetry in Serbia and produce dose response calibration curves for dicentrics and micronuclei. Taking into account pooled data from all the donors, the resultant fitted curve for dicentrics is: Ydic=0.0009 (±0.0003)+0.0421 (±0.0042)×D+0.0602 (±0.0022)×D²; and for micronuclei: Ymn=0.0104 (±0.0015)+0.0824 (±0.0050)×D+0.0189 (±0.0017)×D². Following establishment of the dose response curve, a validation experiment was carried out with four blood samples. Applied and estimated doses were in good agreement. On this basis, the results reported here give us confidence to apply both calibration curves for future biological dosimetry requirements in Serbia. Copyright © 2014 Elsevier B.V. All rights reserved.
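Since the fitted quadratic coefficients are quoted in the abstract, they can be evaluated and inverted directly; the sketch below assumes the dose D is expressed in Gy and keeps only the physically meaningful root when solving for dose from an observed aberration yield.

```python
import numpy as np

# Fitted dose-response coefficients reported in the abstract (dose D assumed in Gy).
DIC = (0.0009, 0.0421, 0.0602)   # dicentrics:  Y = c + alpha*D + beta*D**2
MN  = (0.0104, 0.0824, 0.0189)   # micronuclei

def expected_yield(dose_gy, coeffs):
    c, alpha, beta = coeffs
    return c + alpha * dose_gy + beta * dose_gy ** 2

def estimate_dose(observed_yield, coeffs):
    """Invert the quadratic dose-response curve, keeping the positive root."""
    c, alpha, beta = coeffs
    disc = alpha ** 2 + 4.0 * beta * (observed_yield - c)
    return (-alpha + np.sqrt(disc)) / (2.0 * beta)

# Example: dose consistent with 0.25 dicentrics per cell (about 1.7 Gy here).
print(round(estimate_dose(0.25, DIC), 2), "Gy")
```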
Dependence of magnetic permeability on residual stresses in alloyed steels
NASA Astrophysics Data System (ADS)
Hristoforou, E.; Ktena, A.; Vourna, P.; Argiris, K.
2018-04-01
A method for the monitoring of residual stress distribution in steels has been developed based on non-destructive surface magnetic permeability measurements. In order to investigate the potential utilization of the magnetic method in evaluating residual stresses, the magnetic calibration curves of several grades of ferromagnetic alloyed steel (AISI 4140, TRIP and Duplex) were examined. The X-ray diffraction technique was used for determining surface residual stress values. The overall measurement results have shown that the residual stress determined by the magnetic method was in good agreement with the diffraction results. Further experimental investigations are required to validate the preliminary results and to verify the presence of a unique normalized magnetic stress calibration curve.
Sezer, Banu; Velioglu, Hasan Murat; Bilge, Gonca; Berkkan, Aysel; Ozdinc, Nese; Tamer, Ugur; Boyaci, Ismail Hakkı
2018-01-01
The use of Li salts in foods has been prohibited due to their negative effects on the central nervous system; however, they might still be used, especially in meat products, as Na substitutes. Lithium can be toxic and even lethal at higher concentrations and it is not approved in foods. The present study focuses on Li analysis in meatballs using laser-induced breakdown spectroscopy (LIBS). Meatball samples were analyzed using LIBS and flame atomic absorption spectroscopy. Calibration curves were obtained by utilizing the Li emission lines at 610 nm and 670 nm for univariate calibration. The results showed that the Li calibration curve at 670 nm provided successful determination of Li, with an R² of 0.965 and a limit of detection (LOD) of 4.64 ppm. The calibration curve obtained using the 610 nm emission line gave an R² of 0.991 and an LOD of 22.6 ppm, whereas the curve obtained at 670 nm below 1300 ppm gave an R² of 0.965 and an LOD of 4.64 ppm. Copyright © 2017. Published by Elsevier Ltd.
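A minimal sketch of the univariate calibration step described here, with placeholder intensity and concentration values; the LOD convention shown (3.3 times the residual standard deviation over the slope) is a common choice and not necessarily the one used by the authors.

```python
import numpy as np

# Hypothetical calibration set: Li concentrations (ppm) and the peak intensity
# of the Li emission line at 670 nm measured for each standard (placeholders).
conc = np.array([10.0, 50.0, 100.0, 500.0, 1000.0])
intensity = np.array([120.0, 540.0, 1030.0, 5100.0, 9900.0])

slope, intercept = np.polyfit(conc, intensity, 1)
pred = slope * conc + intercept
ss_res = np.sum((intensity - pred) ** 2)
ss_tot = np.sum((intensity - intensity.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

# One common LOD convention: 3.3 * residual standard deviation / slope.
sigma_resid = np.sqrt(ss_res / (len(conc) - 2))
lod = 3.3 * sigma_resid / slope
print(f"R^2 = {r_squared:.3f}, LOD = {lod:.1f} ppm")
```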
Calibrating Images from the MINERVA Cameras
NASA Astrophysics Data System (ADS)
Mercedes Colón, Ana
2016-01-01
The MINiature Exoplanet Radial Velocity Array (MINERVA) consists of an array of robotic telescopes located on Mount Hopkins, Arizona with the purpose of performing transit photometry and spectroscopy to find Earth-like planets around Sun-like stars. In order to make photometric observations, it is necessary to perform calibrations on the CCD cameras of the telescopes to take into account possible instrument error on the data. In this project, we developed a pipeline that takes optical images, calibrates them using sky flats, darks, and biases to generate a transit light curve.
NASA Astrophysics Data System (ADS)
Burk, D. R.; Mackey, K. G.; Hartse, H. E.
2016-12-01
We have developed a simplified field calibration method for use in seismic networks that still employ the classical electro-mechanical seismometer. Smaller networks may not always have the financial capability to purchase and operate modern, state of the art equipment. Therefore these networks generally operate a modern, low-cost digitizer that is paired to an existing electro-mechanical seismometer. These systems are typically poorly calibrated. Calibration of the station is difficult to estimate because coil loading, digitizer input impedance, and amplifier gain differences vary by station and digitizer model. Therefore, it is necessary to calibrate the station channel as a complete system to take into account all components from instrument, to amplifier, to even the digitizer. Routine calibrations at the smaller networks are not always consistent, because existing calibration techniques require either specialized equipment or significant technical expertise. To improve station data quality at the small network, we developed a calibration method that utilizes open source software and a commonly available laser position sensor. Using a signal generator and a small excitation coil, we force the mass of the instrument to oscillate at various frequencies across its operating range. We then compare the channel voltage output to the laser-measured mass displacement to determine the instrument voltage sensitivity at each frequency point. Using the standard equations of forced motion, a representation of the calibration curve as a function of voltage per unit of ground velocity is calculated. A computer algorithm optimizes the curve and then translates the instrument response into a Seismic Analysis Code (SAC) poles & zeros format. Results have been demonstrated to fall within a few percent of a standard laboratory calibration. This method is an effective and affordable option for networks that employ electro-mechanical seismometers, and it is currently being deployed in regional networks throughout Russia and in Central Asia.
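A rough sketch of the sensitivity calculation implied above, assuming sinusoidal excitation so that the velocity amplitude is 2πf times the laser-measured displacement amplitude; the full forced-motion treatment and the conversion to a poles-and-zeros response are not reproduced, and all numbers are placeholders.

```python
import numpy as np

def velocity_sensitivity(freq_hz, v_out_volts, disp_meters):
    """Per-frequency sensitivity in V/(m/s) from a forced-oscillation sweep.

    freq_hz:      excitation frequencies applied with the signal generator
    v_out_volts:  peak voltage amplitude recorded on the station channel
    disp_meters:  peak mass displacement measured with the laser position sensor
    """
    velocity = 2.0 * np.pi * np.asarray(freq_hz) * np.asarray(disp_meters)
    return np.asarray(v_out_volts) / velocity

# Hypothetical sweep of an electro-mechanical sensor (placeholder values).
freqs = np.array([0.5, 1.0, 2.0, 5.0, 10.0])                 # Hz
volts = np.array([0.8, 2.0, 3.9, 4.2, 4.1])                   # V
disps = np.array([2.0e-4, 2.5e-4, 2.4e-4, 1.0e-4, 5.0e-5])    # m

print(velocity_sensitivity(freqs, volts, disps))               # V per (m/s) at each point
```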
McCabe, Bradley P; Speidel, Michael A; Pike, Tina L; Van Lysel, Michael S
2011-04-01
In this study, newly formulated XR-RV3 GafChromic film was calibrated with National Institute of Standards and Technology (NIST) traceability for measurement of patient skin dose during fluoroscopically guided interventional procedures. The film was calibrated free-in-air to air kerma levels between 15 and 1100 cGy using four moderately filtered x-ray beam qualities (60, 80, 100, and 120 kVp). The calibration films were scanned with a commercial flatbed document scanner. Film reflective density-to-air kerma calibration curves were constructed for each beam quality, with both the orange and white sides facing the x-ray source. A method to correct for nonuniformity in scanner response (up to 25% depending on position) was developed to enable dose measurement with large films. The response of XR-RV3 film under patient backscattering conditions was examined using on-phantom film exposures and Monte Carlo simulations. The response of XR-RV3 film to a given air kerma depended on kVp and film orientation. For a 200 cGy air kerma exposure with the orange side of the film facing the source, the film response increased by 20% from 60 to 120 kVp. At 500 cGy, the increase was 12%. When 500 cGy exposures were performed with the white side facing the x-ray source, the film response increased by 4.0% (60 kVp) to 9.9% (120 kVp) compared to the orange-facing orientation. On-phantom film measurements and Monte Carlo simulations show that using a NIST-traceable free-in-air calibration curve to determine air kerma in the presence of backscatter results in an error from 2% up to 8% depending on beam quality. The combined uncertainty in the air kerma measurement from the calibration curves and scanner nonuniformity correction was +/- 7.1% (95% C.I.). The film showed notable stability. Calibrations of film and scanner separated by 1 yr differed by 1.0%. XR-RV3 radiochromic film response to a given air kerma shows dependence on beam quality and film orientation. The presence of backscatter slightly modifies the x-ray energy spectrum; however, the increase in film response can be attributed primarily to the increase in total photon fluence at the sensitive layer. Film calibration curves created under free-in-air conditions may be used to measure dose from fluoroscopic quality x-ray beams, including patient backscatter with an error less than the uncertainty of the calibration in most cases.
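One way to realize the scanner nonuniformity correction mentioned above (up to 25% depending on position) is a multiplicative flat-field map built from a scan of a uniformly exposed film; this is a generic sketch and not necessarily the correction procedure used in the paper, and all names and numbers are illustrative.

```python
import numpy as np

def build_flatfield_map(uniform_scan):
    """Multiplicative correction map from a scan of a uniformly exposed film.

    uniform_scan: 2D array of scanner readings that should ideally be constant.
    """
    return uniform_scan.mean() / uniform_scan

def correct_scan(raw_scan, correction_map):
    """Apply the position-dependent correction before converting density to air kerma."""
    return raw_scan * correction_map

# Hypothetical reference scan with a 25% roll-off toward one edge.
uniform = np.linspace(1.0, 0.75, 300)[None, :] * np.ones((200, 300))
flat = build_flatfield_map(uniform)
corrected = correct_scan(uniform, flat)   # should be flat to within noise
```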
Schühle, U; Curdt, W; Hollandt, J; Feldman, U; Lemaire, P; Wilhelm, K
2000-01-20
The Solar Ultraviolet Measurement of Emitted Radiation (SUMER) vacuum-ultraviolet spectrograph was calibrated in the laboratory before the integration of the instrument on the Solar and Heliospheric Observatory (SOHO) spacecraft in 1995. During the scientific operation of the SOHO it has been possible to track the radiometric calibration of the SUMER spectrograph since March 1996 by a strategy that employs various methods to update the calibration status and improve the coverage of the spectral calibration curve. The results for the A Detector were published previously [Appl. Opt. 36, 6416 (1997)]. During three years of operation in space, the B detector was used for two and one-half years. We describe the characteristics of the B detector and present results of the tracking and refinement of the spectral calibration curves with it. Observations of the spectra of the stars alpha and rho Leonis permit an extrapolation of the calibration curves in the range from 125 to 149.0 nm. Using a solar coronal spectrum observed above the solar disk, we can extrapolate the calibration curves by measuring emission line pairs with well-known intensity ratios. The sensitivity ratio of the two photocathode areas can be obtained by registration of many emission lines in the entire spectral range on both KBr-coated and bare parts of the detector's active surface. The results are found to be consistent with the published calibration performed in the laboratory in the wavelength range from 53 to 124 nm. We can extrapolate the calibration outside this range to 147 nm with a relative uncertainty of ±30% (1σ) for wavelengths longer than 125 nm and to 46.5 nm with 50% uncertainty for the short-wavelength range below 53 nm.
Yohannes, Indra; Kolditz, Daniel; Langner, Oliver; Kalender, Willi A
2012-03-07
Tissue- and water-equivalent materials (TEMs) are widely used in quality assurance and calibration procedures, both in radiodiagnostics and radiotherapy. In radiotherapy, particularly, the TEMs are often used for computed tomography (CT) number calibration in treatment planning systems. However, currently available TEMs may not be very accurate in the determination of the calibration curves due to their limitation in mimicking radiation characteristics of the corresponding real tissues in both low- and high-energy ranges. Therefore, we are proposing a new formulation of TEMs using a stoichiometric analysis method to obtain TEMs for the calibration purposes. We combined the stoichiometric calibration and the basic data method to compose base materials to develop TEMs matching standard real tissues from ICRU Report 44 and 46. First, the CT numbers of six materials with known elemental compositions were measured to get constants for the stoichiometric calibration. The results of the stoichiometric calibration were used together with the basic data method to formulate new TEMs. These new TEMs were scanned to validate their CT numbers. The electron density and the stopping power calibration curves were also generated. The absolute differences of the measured CT numbers of the new TEMs were less than 4 HU for the soft tissues and less than 22 HU for the bone compared to the ICRU real tissues. Furthermore, the calculated relative electron density and electron and proton stopping powers of the new TEMs differed by less than 2% from the corresponding ICRU real tissues. The new TEMs which were formulated using the proposed technique increase the simplicity of the calibration process and preserve the accuracy of the stoichiometric calibration simultaneously.
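The stoichiometric formulation relies on elemental compositions; the snippet below shows only the standard relative-electron-density ingredient of such a calibration (a textbook relation, not the authors' full basic-data method), with an illustrative composition that is not taken from the paper.

```python
# Relative electron density of a tissue-equivalent material from its mass
# density and elemental mass fractions -- a standard ingredient of a
# stoichiometric CT-number calibration (sketch; composition is illustrative).

Z_OVER_A = {"H": 1/1.008, "C": 6/12.011, "N": 7/14.007, "O": 8/15.999,
            "P": 15/30.974, "Ca": 20/40.078}

WATER = {"H": 0.1119, "O": 0.8881}

def electrons_per_gram(mass_fractions):
    return sum(w * Z_OVER_A[el] for el, w in mass_fractions.items())

def relative_electron_density(density_g_cm3, mass_fractions):
    # Electron density relative to water (density of water taken as 1.0 g/cm^3).
    return (density_g_cm3 * electrons_per_gram(mass_fractions)) / \
           (1.0 * electrons_per_gram(WATER))

# Example: a hypothetical soft-tissue substitute (mass fractions sum to 1).
soft = {"H": 0.105, "C": 0.256, "N": 0.027, "O": 0.602, "P": 0.010}
print(round(relative_electron_density(1.06, soft), 3))
```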
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, J; Li, X; Liu, G
Purpose: We compare and investigate the dosimetric impacts on pencil beam scanning (PBS) proton treatment plans generated with CT calibration curves from four different CT scanners and one averaged ‘global’ CT calibration curve. Methods: The four CT scanners are located at three different hospital locations within the same health system. CT density calibration curves were collected from these scanners using the same CT calibration phantom and acquisition parameters. Mass density to HU value tables were then commissioned in a commercial treatment planning system. Five disease sites were chosen for dosimetric comparisons at brain, lung, head and neck, adrenal, and prostate. Three types of PBS plans were generated at each treatment site using SFUD, IMPT, and robustness optimized IMPT techniques. 3D dose differences were investigated using 3D Gamma analysis. Results: The CT calibration curves for all four scanners display very similar shapes. Large HU differences were observed at both the high HU and low HU regions of the curves. Large dose differences were generally observed at the distal edges of the beams and they are beam angle dependent. Out of the five treatment sites, lung plans exhibit the most overall range uncertainties and prostate plans have the greatest dose discrepancy. There are no significant differences between the SFUD, IMPT, and the RO-IMPT methods. 3D gamma analysis with 3%, 3 mm criteria showed all plans with greater than 95% passing rate. Two of the scanners with close HU values have negligible dose difference except for lung. Conclusion: Our study shows that there are more than 5% dosimetric differences between different CT calibration curves. PBS treatment plans generated with SFUD, IMPT, and the robustness optimized IMPT have similar sensitivity to the CT density uncertainty. More patient data and tighter gamma criteria based on structure location and size will be used for further investigation.
Chen, Rui; Xie, Liping; Xue, Wei; Ye, Zhangqun; Ma, Lulin; Gao, Xu; Ren, Shancheng; Wang, Fubo; Zhao, Lin; Xu, Chuanliang; Sun, Yinghao
2016-09-01
Substantial differences exist in the relationship of prostate cancer (PCa) detection rate and prostate-specific antigen (PSA) level between Western and Asian populations. Classic Western risk calculators, European Randomized Study for Screening of Prostate Cancer Risk Calculator, and Prostate Cancer Prevention Trial Risk Calculator, were shown to be not applicable in Asian populations. We aimed to develop and validate a risk calculator for predicting the probability of PCa and high-grade PCa (defined as Gleason Score sum 7 or higher) at initial prostate biopsy in Chinese men. Urology outpatients who underwent initial prostate biopsy according to the inclusion criteria were included. The multivariate logistic regression-based Chinese Prostate Cancer Consortium Risk Calculator (CPCC-RC) was constructed with cases from 2 hospitals in Shanghai. Discriminative ability, calibration and decision curve analysis were externally validated in 3 CPCC member hospitals. Of the 1,835 patients involved, PCa was identified in 338/924 (36.6%) and 294/911 (32.3%) men in the development and validation cohort, respectively. Multivariate logistic regression analyses showed that 5 predictors (age, logPSA, logPV, free PSA ratio, and digital rectal examination) were associated with PCa (Model 1) or high-grade PCa (Model 2), respectively. The area under the curve of Model 1 and Model 2 was 0.801 (95% CI: 0.771-0.831) and 0.826 (95% CI: 0.796-0.857), respectively. Both models illustrated good calibration and substantial improvement in decision curve analyses than any single predictors at all threshold probabilities. Higher predicting accuracy, better calibration, and greater clinical benefit were achieved by CPCC-RC, compared with European Randomized Study for Screening of Prostate Cancer Risk Calculator and Prostate Cancer Prevention Trial Risk Calculator in predicting PCa. CPCC-RC performed well in discrimination and calibration and decision curve analysis in external validation compared with Western risk calculators. CPCC-RC may aid in decision-making of prostate biopsy in Chinese or in other Asian populations with similar genetic and environmental backgrounds. Copyright © 2016 Elsevier Inc. All rights reserved.
Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L
2010-08-05
Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variation of model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1 σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes. It increases with increasing depths above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
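A sketch of fitting the single-target single-hit response, using one common parameterization with the three parameters named in the abstract (background, saturation, slope); the exact functional form and the calibration data are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def st_sh_response(dose, background, saturation, slope):
    """One common parameterization of the single-target single-hit model."""
    return background + saturation * (1.0 - np.exp(-slope * dose))

# Hypothetical calibration points (dose in cGy, net optical density).
dose = np.array([0, 16, 32, 64, 96, 128], dtype=float)
od = np.array([0.05, 0.22, 0.38, 0.64, 0.83, 0.97])   # placeholder values

popt, pcov = curve_fit(st_sh_response, dose, od, p0=[0.05, 2.0, 0.005])
background, saturation, slope = popt
print(f"background={background:.3f}, saturation={saturation:.3f}, slope={slope:.5f}")
```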
Effects of experimental design on calibration curve precision in routine analysis
Pimentel, Maria Fernanda; Neto, Benício de Barros; Saldanha, Teresa Cristina B.
1998-01-01
A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence interval plots for a calibration curve and provides information about the number of standard solutions, concentration levels and suitable concentration ranges to achieve an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data. PMID:18924816
Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E
2014-09-16
A simple procedure for selecting the correct weighting factors for linear and quadratic calibration curves with the least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of instrument responses (σ) and the concentrations (x). The weighting factor of 1, 1/x, or 1/x² should be selected if, over the entire concentration range, σ is a constant, σ² is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x² should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. It was found that the most stable curve could be obtained when the correct weighting factor was used, whereas other curves using incorrect weighting factors were unstable. It was also found that there was a very insignificant impact on the concentrations reported with calibration curves using incorrect weighting factors, as the concentrations were always reported with the passing curves, which actually overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact the assay performance significantly. Finally, the difference between the weighting factors of 1/x² and 1/y² was discussed. All of the findings can be generalized and applied to other quantitative analysis techniques using calibration curves with a weighted least-squares regression algorithm.
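The recommendation can be applied with an explicit weighted least-squares fit; the sketch below uses hypothetical calibration standards and weights of 1/x², as advocated above.

```python
import numpy as np

def weighted_linear_fit(x, y, weights):
    """Weighted least squares for y = a + b*x, minimizing sum(w*(y - a - b*x)^2)."""
    W = np.diag(weights)
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # [intercept, slope]

# Hypothetical LC-MS/MS calibration standards (concentration, peak-area ratio).
x = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0, 1000.0])
y = np.array([0.021, 0.098, 0.21, 1.05, 2.02, 9.8, 20.4])   # placeholder values

intercept, slope = weighted_linear_fit(x, y, weights=1.0 / x**2)
back_calculated = (y - intercept) / slope   # concentrations read back off the curve
print(intercept, slope)
```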
Xu, Xiuqing; Yang, Xiuhan; Martin, Steven J; Mes, Edwin; Chen, Junlan; Meunier, David M
2018-08-17
Accurate measurement of molecular weight averages (M̄n, M̄w, M̄z) and molecular weight distributions (MWD) of polyether polyols by conventional SEC (size exclusion chromatography) is not as straightforward as it would appear. Conventional calibration with polystyrene (PS) standards can only provide PS apparent molecular weights which do not provide accurate estimates of polyol molecular weights. Using polyethylene oxide/polyethylene glycol (PEO/PEG) for molecular weight calibration could improve the accuracy, but the retention behavior of PEO/PEG is not stable in THF-based (tetrahydrofuran) SEC systems. In this work, two approaches for calibration curve conversion with narrow PS and polyol molecular weight standards were developed. Equations to convert PS-apparent molecular weight to polyol-apparent molecular weight were developed using both a rigorous mathematical analysis and graphical plot regression method. The conversion equations obtained by the two approaches were in good agreement. Factors influencing the conversion equation were investigated. It was concluded that the separation conditions such as column batch and operating temperature did not have significant impact on the conversion coefficients and a universal conversion equation could be obtained. With this conversion equation, more accurate estimates of molecular weight averages and MWDs for polyether polyols can be achieved from conventional PS-THF SEC calibration. Moreover, no additional experimentation is required to convert historical PS equivalent data to reasonably accurate molecular weight results. Copyright © 2018. Published by Elsevier B.V.
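The conversion can be pictured as a regression between the two calibration scales; the sketch below assumes a log-linear relation between PS-apparent and polyol-apparent molar masses fitted from paired narrow standards, which mirrors the graphical regression route but is not the authors' exact equation, and all values are placeholders.

```python
import numpy as np

# Hypothetical paired calibrants: a narrow PS and a narrow polyol standard that
# elute at the same retention volume give one (M_PS, M_polyol) pair each.
m_ps     = np.array([1.2e3, 5.0e3, 2.0e4, 9.0e4])   # PS-apparent molar mass
m_polyol = np.array([7.0e2, 2.6e3, 9.5e3, 3.8e4])   # polyol molar mass

# Linear conversion in log-log space: log M_polyol = a + b * log M_PS
b, a = np.polyfit(np.log10(m_ps), np.log10(m_polyol), 1)

def ps_apparent_to_polyol(m_ps_apparent):
    """Convert a PS-apparent molar mass to a polyol-apparent molar mass."""
    return 10 ** (a + b * np.log10(m_ps_apparent))

print(ps_apparent_to_polyol(1.0e4))
```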
INFLUENCE OF IRON CHELATION ON R1 AND R2 CALIBRATION CURVES IN GERBIL LIVER AND HEART
Wood, John C.; Aguilar, Michelle; Otto-Duessel, Maya; Nick, Hanspeter; Nelson, Marvin D.; Moats, Rex
2008-01-01
MRI is gaining increasing importance for the noninvasive quantification of organ iron burden. Since transverse relaxation rates depend on iron distribution as well as iron concentration, physiologic and pharmacologic processes that alter iron distribution could change MRI calibration curves. This paper compares the effect of three iron chelators, deferoxamine, deferiprone, and deferasirox, on R1 and R2 calibration curves according to two iron loading and chelation strategies. 33 Mongolian gerbils underwent iron loading (iron dextran 500 mg/kg/wk) for 3 weeks followed by 4 weeks of chelation. An additional 56 animals received less aggressive loading (200 mg/kg/week) for 10 weeks, followed by 12 weeks of chelation. R1 and R2 calibration curves were compared to results from 23 iron-loaded animals that had not received chelation. Acute iron loading and chelation biased R1 and R2 from the unchelated reference calibration curves, but chelator-specific changes were not observed, suggesting physiologic rather than pharmacologic differences in iron distribution. In the long-term chelation arm, deferiprone treatment increased liver R1 by 50% (p<0.01), while deferasirox lowered liver R2 by 30.9% (p<0.0001). The relationship between R1 and R2 and organ iron concentration may depend upon the acuity of iron loading and unloading as well as the iron chelator administered. PMID:18581418
Quantification of calcium using localized normalization on laser-induced breakdown spectroscopy data
NASA Astrophysics Data System (ADS)
Sabri, Nursalwanie Mohd; Haider, Zuhaib; Tufail, Kashif; Aziz, Safwan; Ali, Jalil; Wahab, Zaidan Abdul; Abbas, Zulkifly
2017-03-01
This paper focuses on localized normalization for improved calibration curves in laser-induced breakdown spectroscopy (LIBS) measurements. The calibration curves were obtained using five samples containing different concentrations of calcium (Ca) in a potassium bromide (KBr) matrix. The work utilized a Q-switched Nd:YAG laser installed in a LIBS2500plus system, operated at the fundamental wavelength with a laser energy of 650 mJ. The gate delay was optimized from the signal-to-background ratio (SBR) of the Ca II 315.9 and 317.9 nm lines, seeking conditions that give both high spectral intensity and high SBR. The ionic and atomic Ca emission lines were most intense at a gate delay of 0.83 µs, while the SBR indicated an optimum gate delay of 5.42 µs for both Ca II lines. Calibration curves were constructed in three ways: from the original LIBS intensities, from normalized intensities, and from locally normalized spectral line intensities. The R² values of the calibration curves plotted using locally normalized intensities of the Ca I 610.3, 612.2 and 616.2 nm spectral lines are 0.96329, 0.97042, and 0.96131, respectively. The improvement in the regression coefficient of the calibration curves allows more accurate LIBS analysis. At the request of all authors of the paper, and with the agreement of the Proceedings Editor, an updated version of this article was published on 24 May 2017.
Historical Cost Curves for Hydrogen Masers and Cesium Beam Frequency and Timing Standards
NASA Technical Reports Server (NTRS)
Remer, D. S.; Moore, R. C.
1985-01-01
Historical cost curves were developed for hydrogen masers and cesium beam standards used for frequency and timing calibration in the Deep Space Network. These curves may be used to calculate the cost of future hydrogen masers or cesium beam standards in either future or current dollars. The cesium beam standards are decreasing in cost by about 2.3% per year since 1966, and hydrogen masers are decreasing by about 0.8% per year since 1978 relative to the National Aeronautics and Space Administration inflation index.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCabe, Bradley P.; Speidel, Michael A.; Pike, Tina L.
Purpose: In this study, newly formulated XR-RV3 GafChromic film was calibrated with National Institute of Standards and Technology (NIST) traceability for measurement of patient skin dose during fluoroscopically guided interventional procedures. Methods: The film was calibrated free-in-air to air kerma levels between 15 and 1100 cGy using four moderately filtered x-ray beam qualities (60, 80, 100, and 120 kVp). The calibration films were scanned with a commercial flatbed document scanner. Film reflective density-to-air kerma calibration curves were constructed for each beam quality, with both the orange and white sides facing the x-ray source. A method to correct for nonuniformity in scanner response (up to 25% depending on position) was developed to enable dose measurement with large films. The response of XR-RV3 film under patient backscattering conditions was examined using on-phantom film exposures and Monte Carlo simulations. Results: The response of XR-RV3 film to a given air kerma depended on kVp and film orientation. For a 200 cGy air kerma exposure with the orange side of the film facing the source, the film response increased by 20% from 60 to 120 kVp. At 500 cGy, the increase was 12%. When 500 cGy exposures were performed with the white side facing the x-ray source, the film response increased by 4.0% (60 kVp) to 9.9% (120 kVp) compared to the orange-facing orientation. On-phantom film measurements and Monte Carlo simulations show that using a NIST-traceable free-in-air calibration curve to determine air kerma in the presence of backscatter results in an error from 2% up to 8% depending on beam quality. The combined uncertainty in the air kerma measurement from the calibration curves and scanner nonuniformity correction was ±7.1% (95% C.I.). The film showed notable stability. Calibrations of film and scanner separated by 1 yr differed by 1.0%. Conclusions: XR-RV3 radiochromic film response to a given air kerma shows dependence on beam quality and film orientation. The presence of backscatter slightly modifies the x-ray energy spectrum; however, the increase in film response can be attributed primarily to the increase in total photon fluence at the sensitive layer. Film calibration curves created under free-in-air conditions may be used to measure dose from fluoroscopic quality x-ray beams, including patient backscatter with an error less than the uncertainty of the calibration in most cases.
Avella, Joseph; Lehrer, Michael; Zito, S William
2008-10-01
1,1-Difluoroethane (DFE), also known as Freon 152A, is a member of a class of compounds known as halogenated hydrocarbons. A number of these compounds have gained notoriety because of their ability to induce rapid onset of intoxication after inhalation exposure. Abuse of DFE has necessitated development of methods for its detection and quantitation in postmortem and human performance specimens. Furthermore, methodologies applicable to research studies are required as there have been limited toxicokinetic and toxicodynamic reports published on DFE. This paper describes a method for the quantitation of DFE using a gas chromatography-flame-ionization headspace technique that employs solventless standards for calibration. Two calibration curves using 0.5 mL whole blood calibrators which ranged from A: 0.225-1.350 to B: 9.0-180.0 mg/L were developed. These were evaluated for linearity (0.9992 and 0.9995), limit of detection of 0.018 mg/L, limit of quantitation of 0.099 mg/L (recovery 111.9%, CV 9.92%), and upper limit of linearity of 27,000.0 mg/L. Combined curve recovery results of a 98.0 mg/L DFE control that was prepared using an alternate technique was 102.2% with CV of 3.09%. No matrix interference was observed in DFE enriched blood, urine or brain specimens nor did analysis of variance detect any significant differences (alpha = 0.01) in the area under the curve of blood, urine or brain specimens at three identical DFE concentrations. The method is suitable for use in forensic laboratories because validation was performed on instrumentation routinely used in forensic labs and due to the ease with which the calibration range can be adjusted. Perhaps more importantly it is also useful for research oriented studies because the removal of solvent from standard preparation eliminates the possibility for solvent induced changes to the gas/liquid partitioning of DFE or chromatographic interference due to the presence of solvent in specimens.
Increasing the sensitivity of the Jaffe reaction for creatinine
NASA Technical Reports Server (NTRS)
Tom, H. Y.
1973-01-01
Study of analytical procedure has revealed that linearity of creatinine calibration curve can be extended by using 0.03 molar picric acid solution made up in 70 percent ethanol instead of water. Three to five times more creatinine concentration can be encompassed within linear portion of calibration curve.
Carbon-14 wiggle-match dating of peat deposits: advantages and limitations
NASA Astrophysics Data System (ADS)
Blaauw, Maarten; van Geel, Bas; Mauquoy, Dmitri; van der Plicht, Johannes
2004-02-01
Carbon-14 wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a series of closely spaced peat 14C dates with the 14C calibration curve. The method of WMD is discussed, and its advantages and limitations are compared with calibration of individual dates. A numerical approach to WMD is introduced that makes it possible to assess the precision of WMD chronologies. During several intervals of the Holocene, the 14C calibration curve shows less pronounced fluctuations. We assess whether wiggle-matching is also a feasible strategy for these parts of the 14C calibration curve. High-precision chronologies, such as obtainable with WMD, are needed for studies of rapid climate changes and their possible causes during the Holocene.
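A numerical wiggle-match can be sketched as a chi-square minimization over candidate calendar positions of the dated series; the version below assumes a fixed calendar spacing between samples (constant accumulation), which is a simplification of the numerical approach referred to above, and all inputs are hypothetical.

```python
import numpy as np

def wiggle_match(cal_age_grid, curve_c14, sample_c14, sample_sigma, offsets_yr,
                 spacing_yr=20.0):
    """Slide a series of equally spaced 14C dates along the calibration curve.

    cal_age_grid / curve_c14 : calibration curve (cal yr BP vs 14C yr BP)
    sample_c14, sample_sigma : measured 14C ages and 1-sigma errors of the series
    offsets_yr               : candidate calendar ages for the first sample; the
                               rest of the series follows at spacing_yr intervals
    """
    offsets_yr = np.asarray(offsets_yr, dtype=float)
    chi2 = []
    for t0 in offsets_yr:
        cal_ages = t0 + spacing_yr * np.arange(len(sample_c14))
        expected = np.interp(cal_ages, cal_age_grid, curve_c14)
        chi2.append(np.sum(((sample_c14 - expected) / sample_sigma) ** 2))
    chi2 = np.asarray(chi2)
    return offsets_yr[np.argmin(chi2)], chi2
```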
Marine04 Marine radiocarbon age calibration, 26–0 ka BP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughen, K; Baille, M; Bard, E
2004-11-01
New radiocarbon calibration curves, IntCal04 and Marine04, have been constructed and internationally ratified to replace the terrestrial and marine components of IntCal98. The new calibration datasets extend an additional 2000 years, from 0-26 ka cal BP (Before Present, 0 cal BP = AD 1950), and provide much higher resolution, greater precision and more detailed structure than IntCal98. For the Marine04 curve, dendrochronologically dated tree-ring samples, converted with a box-diffusion model to marine mixed-layer ages, cover the period from 0-10.5 ka cal BP. Beyond 10.5 ka cal BP, high-resolution marine data become available from foraminifera in varved sediments and U/Th-dated corals. The marine records are corrected with site-specific 14C reservoir age information to provide a single global marine mixed-layer calibration from 10.5-26.0 ka cal BP. A substantial enhancement relative to IntCal98 is the introduction of a random walk model, which takes into account the uncertainty in both the calendar age and the radiocarbon age to calculate the underlying calibration curve. The marine datasets and calibration curve for marine samples from the surface mixed layer (Marine04) are discussed here. The tree-ring datasets, sources of uncertainty, and regional offsets are presented in detail in a companion paper by Reimer et al.
Liu, Yongliang; Thibodeaux, Devron; Gamble, Gary; Bauer, Philip; VanDerveer, Don
2012-08-01
Despite considerable efforts in developing curve-fitting protocols to evaluate the crystallinity index (CI) from X-ray diffraction (XRD) measurements, in its present state XRD can only provide a qualitative or semi-quantitative assessment of the amounts of crystalline or amorphous fraction in a sample. The greatest barrier to establishing quantitative XRD is the lack of appropriate cellulose standards, which are needed to calibrate the XRD measurements. In practice, samples with known CI are very difficult to prepare or determine. In a previous study, we reported the development of a simple algorithm for determining fiber crystallinity information from Fourier transform infrared (FT-IR) spectroscopy. Hence, in this study we not only compared the fiber crystallinity information between FT-IR and XRD measurements, by developing a simple XRD algorithm in place of a time-consuming and subjective curve-fitting process, but we also suggested a direct way of determining cotton cellulose CI by calibrating XRD with the use of CI(IR) as references.
Contaminant concentration in environmental samples using LIBS and CF-LIBS
NASA Astrophysics Data System (ADS)
Pandhija, S.; Rai, N. K.; Rai, A. K.; Thakur, S. N.
2010-01-01
The present paper deals with the detection and quantification of toxic heavy metals like Cd, Co, Pb, Zn, Cr, etc. in environmental samples by using the technique of laser-induced breakdown spectroscopy (LIBS) and calibration-free LIBS (CF-LIBS). A MATLAB™ program has been developed based on the CF-LIBS algorithm given by earlier workers and concentrations of pollutants present in industrial area soil have been determined. LIBS spectra of a number of certified reference soil samples with varying concentrations of toxic elements (Cd, Zn) have been recorded to obtain calibration curves. The concentrations of Cd and Zn in soil samples from the Jajmau area, Kanpur (India) have been determined by using these calibration curves and also by the CF-LIBS approach. Our results clearly demonstrate that the combination of LIBS and CF-LIBS is very useful for the study of pollutants in the environment. Some of the results have also been found to be in good agreement with those of ICP-OES.
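The CF-LIBS workflow is not spelled out in the abstract; the snippet below shows only the standard Boltzmann-plot step that such algorithms build on (excitation temperature from the slope of ln(Iλ/gA) versus upper-level energy), with the spectroscopic constants assumed to come from a line database rather than from this work.

```python
import numpy as np

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_temperature(intensity, wavelength_nm, g_upper, A_ki, E_upper_eV):
    """Excitation temperature from the slope of a Boltzmann plot.

    For optically thin lines of one species,
        ln(I * lambda / (g_k * A_ki)) = -E_k / (k_B * T) + const,
    so a straight-line fit of the left-hand side against E_k gives T from the slope.
    Inputs are arrays of line intensities and their spectroscopic constants.
    """
    y = np.log(intensity * wavelength_nm / (g_upper * A_ki))
    slope, _ = np.polyfit(E_upper_eV, y, 1)
    return -1.0 / (K_B_EV * slope)
```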
Spectral characterization and calibration of AOTF spectrometers and hyper-spectral imaging system
NASA Astrophysics Data System (ADS)
Katrašnik, Jaka; Pernuš, Franjo; Likar, Boštjan
2010-02-01
The goal of this article is to present a novel method for spectral characterization and calibration of spectrometers and hyper-spectral imaging systems based on non-collinear acousto-optical tunable filters. The method characterizes the spectral tuning curve (frequency-wavelength characteristic) of the AOTF (Acousto-Optic Tunable Filter) filter by matching the acquired and modeled spectra of the HgAr calibration lamp, which emits line spectrum that can be well modeled via AOTF transfer function. In this way, not only tuning curve characterization and corresponding spectral calibration but also spectral resolution assessment is performed. The obtained results indicated that the proposed method is efficient, accurate and feasible for routine calibration of AOTF spectrometers and hyper-spectral imaging systems and thereby a highly competitive alternative to the existing calibration methods.
Assessment of opacimeter calibration according to International Standard Organization 10155.
Gomes, J F
2001-01-01
This paper compares the calibration method for opacimeters issued by the International Standard Organization (ISO) 10155 with the manual reference method for determination of dust content in stack gases. ISO 10155 requires at least nine operational measurements, corresponding to three operational measurements for each dust emission range within the stack. The procedure is assessed by comparison with previous calibration methods for opacimeters using only two operational measurements from a set of measurements made at stacks from pulp mills. The results show that even though the international standard for opacimeter calibration requires that the calibration curve be obtained using 3 × 3 points, a calibration curve derived using 3 points could be, at times, acceptable in statistical terms, provided that the amplitude of individual measurements is low.
McCabe, Bradley P.; Speidel, Michael A.; Pike, Tina L.; Van Lysel, Michael S.
2011-01-01
Purpose: In this study, newly formulated XR-RV3 GafChromic® film was calibrated with National Institute of Standards and Technology (NIST) traceability for measurement of patient skin dose during fluoroscopically guided interventional procedures. Methods: The film was calibrated free-in-air to air kerma levels between 15 and 1100 cGy using four moderately filtered x-ray beam qualities (60, 80, 100, and 120 kVp). The calibration films were scanned with a commercial flatbed document scanner. Film reflective density-to-air kerma calibration curves were constructed for each beam quality, with both the orange and white sides facing the x-ray source. A method to correct for nonuniformity in scanner response (up to 25% depending on position) was developed to enable dose measurement with large films. The response of XR-RV3 film under patient backscattering conditions was examined using on-phantom film exposures and Monte Carlo simulations. Results: The response of XR-RV3 film to a given air kerma depended on kVp and film orientation. For a 200 cGy air kerma exposure with the orange side of the film facing the source, the film response increased by 20% from 60 to 120 kVp. At 500 cGy, the increase was 12%. When 500 cGy exposures were performed with the white side facing the x-ray source, the film response increased by 4.0% (60 kVp) to 9.9% (120 kVp) compared to the orange-facing orientation. On-phantom film measurements and Monte Carlo simulations show that using a NIST-traceable free-in-air calibration curve to determine air kerma in the presence of backscatter results in an error from 2% up to 8% depending on beam quality. The combined uncertainty in the air kerma measurement from the calibration curves and scanner nonuniformity correction was ±7.1% (95% C.I.). The film showed notable stability. Calibrations of film and scanner separated by 1 yr differed by 1.0%. Conclusions: XR-RV3 radiochromic film response to a given air kerma shows dependence on beam quality and film orientation. The presence of backscatter slightly modifies the x-ray energy spectrum; however, the increase in film response can be attributed primarily to the increase in total photon fluence at the sensitive layer. Film calibration curves created under free-in-air conditions may be used to measure dose from fluoroscopic quality x-ray beams, including patient backscatter with an error less than the uncertainty of the calibration in most cases. PMID:21626925
Improvement of immunoassay detection system by using alternating current magnetic susceptibility
NASA Astrophysics Data System (ADS)
Kawabata, R.; Mizoguchi, T.; Kandori, A.
2016-03-01
A major goal with this research was to develop a low-cost and highly sensitive immunoassay detection system by using alternating current (AC) magnetic susceptibility. We fabricated an improved prototype of our previously developed immunoassay detection system and evaluated its performance. The prototype continuously moved sample containers by using a magnetically shielded brushless motor, which passes between two anisotropic magneto resistance (AMR) sensors. These sensors detected the magnetic signal in the direction where each sample container passed them. We used the differential signal obtained from each AMR sensor's output to improve the signal-to-noise ratio (SNR) of the magnetic signal measurement. Biotin-conjugated polymer beads with avidin-coated magnetic particles were prepared to examine the calibration curve, which represents the relation between AC magnetic susceptibility change and polymer-bead concentration. For the calibration curve measurement, we, respectively, measured the magnetic signal caused by the magnetic particles by using each AMR sensor installed near the upper or lower part in the lateral position of the passing sample containers. As a result, the SNR of the prototype was 4.5 times better than that of our previous system. Moreover, the data obtained from each AMR sensor installed near the upper part in the lateral position of the passing sample containers exhibited an accurate calibration curve that represented good correlation between AC magnetic susceptibility change and polymer-bead concentration. The conclusion drawn from these findings is that our improved immunoassay detection system will enable a low-cost and highly sensitive immunoassay.
Improvement of immunoassay detection system by using alternating current magnetic susceptibility.
Kawabata, R; Mizoguchi, T; Kandori, A
2016-03-01
A major goal with this research was to develop a low-cost and highly sensitive immunoassay detection system by using alternating current (AC) magnetic susceptibility. We fabricated an improved prototype of our previously developed immunoassay detection system and evaluated its performance. The prototype continuously moved sample containers by using a magnetically shielded brushless motor, which passes between two anisotropic magneto resistance (AMR) sensors. These sensors detected the magnetic signal in the direction where each sample container passed them. We used the differential signal obtained from each AMR sensor's output to improve the signal-to-noise ratio (SNR) of the magnetic signal measurement. Biotin-conjugated polymer beads with avidin-coated magnetic particles were prepared to examine the calibration curve, which represents the relation between AC magnetic susceptibility change and polymer-bead concentration. For the calibration curve measurement, we, respectively, measured the magnetic signal caused by the magnetic particles by using each AMR sensor installed near the upper or lower part in the lateral position of the passing sample containers. As a result, the SNR of the prototype was 4.5 times better than that of our previous system. Moreover, the data obtained from each AMR sensor installed near the upper part in the lateral position of the passing sample containers exhibited an accurate calibration curve that represented good correlation between AC magnetic susceptibility change and polymer-bead concentration. The conclusion drawn from these findings is that our improved immunoassay detection system will enable a low-cost and highly sensitive immunoassay.
NASA Astrophysics Data System (ADS)
Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Chen, Junchao; Hu, Weigang
2015-10-01
GafChromic RTQA2 film is a type of radiochromic film designed for light field and radiation field alignment. The aim of this study is to extend the application of RTQA2 film to the measurement of patient specific quality assurance (QA) fields as a 2D relative dosimeter. Pre-irradiated and post-irradiated RTQA2 films were scanned in reflection mode using a flatbed scanner. A plan-based calibration (PBC) method utilized the mapping information of the calculated dose image and film grayscale image to create a dose versus pixel value calibration model. This model was used to calibrate the film grayscale image to the film relative dose image. The dose agreement between calculated and film dose images were analyzed by gamma analysis. To evaluate the feasibility of this method, eight clinically approved RapidArc cases (one abdomen cancer and seven head-and-neck cancer patients) were tested using this method. Moreover, three MLC gap errors and two MLC transmission errors were introduced to eight Rapidarc cases respectively to test the robustness of this method. The PBC method could overcome the film lot and post-exposure time variations of RTQA2 film to get a good 2D relative dose calibration result. The mean gamma passing rate of eight patients was 97.90% ± 1.7%, which showed good dose consistency between calculated and film dose images. In the error test, the PBC method could over-calibrate the film, which means some dose error in the film would be falsely corrected to keep the dose in film consistent with the dose in the calculated dose image. This would then lead to a false negative result in the gamma analysis. In these cases, the derivative curve of the dose calibration curve would be non-monotonic which would expose the dose abnormality. By using the PBC method, we extended the application of more economical RTQA2 film to patient specific QA. The robustness of the PBC method has been improved by analyzing the monotonicity of the derivative of the calibration curve.
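A schematic version of the plan-based calibration and of the monotonicity check on the derivative of the calibration curve is given below; the polynomial form, its degree, and the use of simple least squares are assumptions, since the abstract does not specify the calibration model.

```python
import numpy as np

def plan_based_calibration(film_gray, calc_dose, degree=3):
    """Fit dose as a function of film grayscale from spatially registered pairs.

    film_gray, calc_dose: 2D arrays of the same shape (registered film scan and
    calculated dose image). Returns polynomial coefficients of the calibration.
    """
    g = film_gray.ravel().astype(float)
    d = calc_dose.ravel().astype(float)
    return np.polyfit(g, d, degree)

def derivative_is_monotonic(coeffs, gray_range, n=512):
    """The paper flags cases where the derivative of the calibration curve is
    non-monotonic; here the derivative is evaluated on a grid and tested."""
    g = np.linspace(gray_range[0], gray_range[1], n)
    deriv = np.polyval(np.polyder(coeffs), g)
    steps = np.diff(deriv)
    return bool(np.all(steps >= 0) or np.all(steps <= 0))
```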
A Robust Bayesian Random Effects Model for Nonlinear Calibration Problems
Fong, Y.; Wakefield, J.; De Rosa, S.; Frahm, N.
2013-01-01
In the context of a bioassay or an immunoassay, calibration means fitting a curve, usually nonlinear, through the observations collected on a set of samples containing known concentrations of a target substance, and then using the fitted curve and observations collected on samples of interest to predict the concentrations of the target substance in these samples. Recent technological advances have greatly improved our ability to quantify minute amounts of substance from a tiny volume of biological sample. This has in turn led to a need to improve statistical methods for calibration. In this paper, we focus on developing calibration methods robust to dependent outliers. We introduce a novel normal mixture model with dependent error terms to model the experimental noise. In addition, we propose a re-parameterization of the five parameter logistic nonlinear regression model that allows us to better incorporate prior information. We examine the performance of our methods with simulation studies and show that they lead to a substantial increase in performance measured in terms of mean squared error of estimation and a measure of the average prediction accuracy. A real data example from the HIV Vaccine Trials Network Laboratory is used to illustrate the methods. PMID:22551415
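For orientation, the snippet below fits a conventional five-parameter logistic curve and inverts it to calibrate an unknown sample; it uses ordinary nonlinear least squares with placeholder data, not the robust Bayesian random-effects model or the re-parameterization proposed in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def five_pl(conc, a, d, c, b, g):
    """Standard five-parameter logistic (the paper's re-parameterization differs)."""
    return d + (a - d) / (1.0 + (conc / c) ** b) ** g

# Hypothetical standard curve: known concentrations and assay readout.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
signal = np.array([35, 60, 140, 420, 980, 1450, 1600.0])   # placeholder values

popt, _ = curve_fit(five_pl, conc, signal,
                    p0=[30.0, 1700.0, 5.0, 1.0, 1.0], maxfev=20000)

def predict_concentration(obs_signal, a, d, c, b, g):
    """Invert the fitted curve to calibrate an unknown sample."""
    return c * (((a - d) / (obs_signal - d)) ** (1.0 / g) - 1.0) ** (1.0 / b)

print(predict_concentration(800.0, *popt))
```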
Sediment calibration strategies of Phase 5 Chesapeake Bay watershed model
Wu, J.; Shenk, G.W.; Raffensperger, Jeff P.; Moyer, D.; Linker, L.C.; ,
2005-01-01
Sediment is a primary constituent of concern for Chesapeake Bay due to its effect on water clarity. Accurate representation of sediment processes and behavior in the Chesapeake Bay watershed model is critical for developing sound load reduction strategies. Sediment calibration remains one of the most difficult components of watershed-scale assessment. This is especially true for the Chesapeake Bay watershed model given the size of the watershed being modeled and the complexity involved in land and stream simulation processes. To obtain the best calibration, the Chesapeake Bay Program has developed four different strategies for sediment calibration of the Phase 5 watershed model: 1) comparing observed and simulated sediment rating curves for different parts of the hydrograph; 2) analyzing change of bed depth over time; 3) relating deposition/scour to total annual sediment loads; and 4) calculating "goodness-of-fit" statistics. These strategies allow a more accurate sediment calibration, and also provide some insightful information on sediment processes and behavior in the Chesapeake Bay watershed.
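Strategy 1) can be illustrated with a simple power-law sediment rating curve fitted in log space, as sketched below with placeholder flow and load values; the actual Phase 5 procedure is more involved than this.

```python
import numpy as np

def fit_rating_curve(flow, sediment_load):
    """Fit a power-law rating curve Qs = a * Q**b in log-log space."""
    b, log_a = np.polyfit(np.log10(flow), np.log10(sediment_load), 1)
    return 10 ** log_a, b

# Hypothetical observed pairs (flow, sediment load); repeat for simulated output
# and compare the fitted (a, b) pairs over different parts of the hydrograph.
flow_obs = np.array([120.0, 450.0, 900.0, 2300.0, 5100.0])
load_obs = np.array([0.8, 6.5, 21.0, 110.0, 430.0])
a_obs, b_obs = fit_rating_curve(flow_obs, load_obs)
print(a_obs, b_obs)
```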
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
Linking Item Parameters to a Base Scale. ACT Research Report Series, 2009-2
ERIC Educational Resources Information Center
Kang, Taehoon; Petersen, Nancy S.
2009-01-01
This paper compares three methods of item calibration--concurrent calibration, separate calibration with linking, and fixed item parameter calibration--that are frequently used for linking item parameters to a base scale. Concurrent and separate calibrations were implemented using BILOG-MG. The Stocking and Lord (1983) characteristic curve method…
Molar mass characterization of sodium carboxymethyl cellulose by SEC-MALLS.
Shakun, Maryia; Maier, Helena; Heinze, Thomas; Kilz, Peter; Radke, Wolfgang
2013-06-05
Two series of sodium carboxymethyl celluloses (NaCMCs) derived from microcrystalline cellulose (Avicel samples) and cotton linters (BWL samples) with average degrees of substitution (DS) ranging from DS=0.45 to DS=1.55 were characterized by size exclusion chromatography with multi-angle laser light scattering detection (SEC-MALLS) in 100 mmol/L aqueous ammonium acetate (NH4OAc) as vaporizable eluent system. The application of vaporizable NH4OAc allows future use of the eluent system in two-dimensional separations employing evaporative light scattering detection (ELSD). The losses of samples during filtration and during the chromatographic experiment were determined. The scaling exponent as of the relation [Formula: see text] was approx. 0.61, showing that NaCMCs exhibit an expanded coil conformation in solution. No systematic dependencies of as on DS were observed. The dependences of molar mass on SEC-elution volume for samples of different DS can be well described by a common calibration curve, which is of advantage, as it allows the determination of molar masses of unknown samples by using the same calibration curve, irrespective of the DS of the NaCMC sample. Since no commercial NaCMC standards are available, correction factors were determined allowing converting a pullulan based calibration curve into a NaCMC calibration using the broad calibration approach. The weight average molar masses derived using the so established calibration curve closely agree with the ones determined by light scattering, proving the accuracy of the correction factors determined. Copyright © 2013 Elsevier Ltd. All rights reserved.
On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction.
Crop, F; Van Rompaye, B; Paelinck, L; Vakaet, L; Thierens, H; De Wagter, C
2008-07-21
The purpose of this study was both to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes, an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve with the dose as a function of OD (inverse regression), or sometimes OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because the heteroscedasticity of the data is not taken into account. This could lead to erroneous results originating from the calibration process itself and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method to create calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
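An illustrative sketch of the two fitting strategies contrasted above, using invented dose/OD data rather than the authors' measurements: an ordinary least-squares fit of dose on net OD (inverse regression) versus a weighted fit of OD on dose that is then inverted (inverse prediction), with an assumed heteroscedastic OD uncertainty supplying the weights.

import numpy as np

dose = np.array([0., 50., 100., 150., 200., 300., 400.])     # cGy
od   = np.array([0.02, 0.11, 0.20, 0.28, 0.37, 0.53, 0.68])  # net optical density
sigma_od = 0.005 + 0.02 * od        # assumed heteroscedastic OD uncertainty

# OLS inverse regression: dose = a + b * OD, every point weighted equally.
b_ols, a_ols = np.polyfit(od, dose, 1)

# WLS inverse prediction: fit OD = c + d * dose with weights 1/sigma^2 ...
d_wls, c_wls = np.polyfit(dose, od, 1, w=1.0 / sigma_od)  # np.polyfit expects 1/sigma, not 1/sigma**2
# ... then invert the fitted curve to predict dose from a measured OD.
def predict_dose(od_measured):
    return (od_measured - c_wls) / d_wls

print(a_ols + b_ols * 0.30, predict_dose(0.30))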
NASA Technical Reports Server (NTRS)
Haynie, C. C.
1980-01-01
Simple gage, used with template, can help inspectors determine whether three-dimensional curved surface has correct contour. Gage was developed as aid in explosive forming of Space Shuttle emergency-escape hatch. For even greater accuracy, wedge can be made of metal and calibrated by indexing machine.
A rapid tool for determination of titanium dioxide content in white chickpea samples.
Sezer, Banu; Bilge, Gonca; Berkkan, Aysel; Tamer, Ugur; Hakki Boyaci, Ismail
2018-02-01
Titanium dioxide (TiO2) is a widely used additive in foods. However, in the scientific community there is an ongoing debate on health concerns about TiO2. The main goal of this study is to determine TiO2 content by using laser-induced breakdown spectroscopy (LIBS). To this end, different amounts of TiO2 were added to white chickpeas and analyzed by using LIBS. A calibration curve was obtained by following Ti emissions at 390.11 nm for univariate calibration, and a partial least squares (PLS) calibration curve was obtained by evaluating the whole spectrum. The results showed that the Ti calibration curve at 390.11 nm provides successful determination of the Ti level with an R2 of 0.985 and a limit of detection (LOD) of 33.9 ppm, while PLS gave an R2 of 0.989 and an LOD of 60.9 ppm. Furthermore, commercial white chickpea samples were used to validate the method, and the validation R2 values for the simple calibration and PLS were calculated as 0.989 and 0.951, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
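A hedged sketch of the two calibration strategies just described, with synthetic spectra standing in for the LIBS measurements; the wavelength grid, intensities, and concentration levels are invented, and only the general workflow (univariate peak calibration at 390.11 nm versus PLS on the whole spectrum) mirrors the abstract.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
wavelengths = np.linspace(380.0, 400.0, 200)
conc_ppm = np.array([0., 250., 500., 1000., 2000., 4000.])   # TiO2 spiking levels

# Synthetic spectra: a Ti emission line near 390.11 nm whose height scales with
# concentration, plus baseline noise.
line = np.exp(-0.5 * ((wavelengths - 390.11) / 0.1) ** 2)
spectra = np.outer(conc_ppm, line) + rng.normal(0, 5.0, (conc_ppm.size, wavelengths.size))

# Univariate calibration: peak intensity at 390.11 nm vs. concentration.
idx = np.argmin(np.abs(wavelengths - 390.11))
peak = spectra[:, idx]
slope, intercept = np.polyfit(conc_ppm, peak, 1)
print("univariate R2:", r2_score(peak, slope * conc_ppm + intercept))

# Multivariate calibration: PLS regression on the whole spectrum.
pls = PLSRegression(n_components=2).fit(spectra, conc_ppm)
print("PLS R2:", r2_score(conc_ppm, pls.predict(spectra).ravel()))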
Nakamura, Hideaki; Tohyama, Kana; Tanaka, Masanori; Shinohara, Shouji; Tokunaga, Yuichi; Kurusu, Fumiyo; Koide, Satoshi; Gotoh, Masao; Karube, Isao
2007-12-15
A package-free transparent disposable biosensor chip was developed by a screen-printing technique. The biosensor chip was fabricated by stacking a substrate with two carbon electrodes on its surface, a spacer consisting of a resist layer and an adhesive layer, and a cover. The structure of the chip keeps the interior of the reaction-detecting section airtight until use. The chip is equipped with double electrochemical measuring elements for the simultaneous measurement of multiple items, and the reagent layer was developed in the sample-feeding path. The sample-inlet port and air-discharge port are simultaneously opened by folding the chip in two longitudinally at a notch that serves as the boundary between the two biosensor units, changing the shape of the chip to a V-shape. The reaction-detecting section of the chip has a 1.0 microl sample volume for one biosensor unit. Excellent results were obtained with the chip in initial simultaneous chronoamperometric measurements of both glucose (r=1.00) and lactate (r=0.998) in the same samples. The stability of the enzyme sensor signals of the chip was estimated at ambient atmosphere on 8 testing days during a 6-month period. The results were compared with those obtained for an unpackaged chip used as a control. The package-free chip proved to be twice as good as the control chip in terms of the reproducibility of slopes from 16 calibration curves (one calibration curve: 0, 100, 300, 500 mg dl(-1) glucose; n=3) and 4.6 times better in terms of the reproducibility of correlation coefficients from the 16 calibration curves.
Gerbig, Stefanie; Stern, Gerold; Brunn, Hubertus E; Düring, Rolf-Alexander; Spengler, Bernhard; Schulz, Sabine
2017-03-01
Direct analysis of fruit and vegetable surfaces is an important tool for in situ detection of food contaminants such as pesticides. We tested three different ways to prepare samples for the qualitative desorption electrospray ionization mass spectrometry (DESI-MS) analysis of 32 pesticides found on nine authentic fruits collected from food control. The best recovery rates for topically applied pesticides (88%) were found by analyzing the surface of a glass slide which had been rubbed against the surface of the food. Pesticide concentration in all samples was at or below the maximum residue level allowed. In addition to the high sensitivity of the method for qualitative analysis, quantitative or, at least, semi-quantitative information is needed in food control. We developed a DESI-MS method for the simultaneous determination of linear calibration curves of multiple pesticides of the same chemical class using normalization to one internal standard (ISTD). The method was first optimized for food extracts and subsequently evaluated for the quantification of pesticides in three authentic food extracts. Next, pesticides and the ISTD were applied directly onto food surfaces, and the corresponding calibration curves were obtained. The determination of linear calibration curves was still feasible, as demonstrated for three different food surfaces. This proof-of-principle method was used to simultaneously quantify two pesticides on an authentic sample, showing that the method developed could serve as a fast and simple preselective tool for disclosure of pesticide regulation violations. Graphical Abstract: Multiple pesticide residues were detected and quantified in situ from an authentic set of food items and extracts in a proof-of-principle study.
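A minimal sketch of the ISTD-normalization step described above: each analyte signal is divided by the internal-standard signal before a linear calibration is fitted. The analyte names, intensities, and spiking levels are invented, not taken from the study.

import numpy as np

conc = np.array([0.05, 0.1, 0.5, 1.0, 5.0])          # mg/kg, spiked levels
signals = {                                           # raw ion intensities
    "pesticide_A": np.array([120., 250., 1180., 2300., 11800.]),
    "pesticide_B": np.array([ 80., 160.,  790., 1600.,  8100.]),
}
istd_signal = np.array([1000., 980., 1020., 990., 1010.])   # internal standard

calibrations = {}
for name, y in signals.items():
    ratio = y / istd_signal                           # ISTD normalization
    slope, intercept = np.polyfit(conc, ratio, 1)
    calibrations[name] = (slope, intercept)

# Quantify an unknown from its measured analyte/ISTD ratio.
slope, intercept = calibrations["pesticide_A"]
print((2.4 - intercept) / slope)                      # estimated mg/kg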
NASA Astrophysics Data System (ADS)
Rahn, Helene; Alexiou, Christoph; Trahms, Lutz; Odenbach, Stefan
2014-06-01
X-ray computed tomography is nowadays used for a wide range of applications in medicine, science and technology. X-ray microcomputed tomography (XμCT) follows the same principles used for conventional medical CT scanners, but improves the spatial resolution to a few micrometers. We present an example of an application of X-ray microtomography, a study of 3-dimensional biodistribution, along with the quantification of nanoparticle content in tumoral tissue after minimally invasive cancer therapy. One of these minimally invasive cancer treatments is magnetic drug targeting, in which magnetic nanoparticles are used as controllable drug carriers. The quantification is based on a calibration of the XμCT equipment. The developed calibration procedure of the X-ray μCT equipment is based on a phantom system which allows the discrimination between the various gray values of the data set. These phantoms consist of a biological tissue substitute and magnetic nanoparticles. The phantoms have been studied with XμCT and have been examined magnetically. The obtained gray values and nanoparticle concentrations lead to a calibration curve. This curve can be applied to tomographic data sets. Accordingly, this calibration enables a voxel-wise assignment of gray values in the digital tomographic data set to nanoparticle content. Thus, the calibration procedure enables a 3-dimensional study of nanoparticle distribution as well as concentration.
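A sketch of the voxel-wise assignment step, assuming a linear calibration between reconstructed gray value and nanoparticle concentration; both the calibration points and the synthetic tomogram below are invented.

import numpy as np

# Calibration phantoms: mean gray value vs. nanoparticle concentration (mg/mL).
gray_cal = np.array([1000., 1150., 1320., 1500.])
conc_cal = np.array([0., 5., 10., 15.])
slope, intercept = np.polyfit(gray_cal, conc_cal, 1)

# Synthetic reconstructed volume standing in for a real XμCT data set.
tomogram = np.random.default_rng(1).normal(1200., 80., size=(64, 64, 64))
conc_map = np.clip(slope * tomogram + intercept, 0.0, None)   # mg/mL per voxel

print("mean concentration (mg/mL):", conc_map.mean())
print("fraction of voxels above 5 mg/mL:", (conc_map > 5.0).mean())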
Calibration of streamflow gauging stations at the Tenderfoot Creek Experimental Forest
Scott W. Woods
2007-01-01
We used tracer based methods to calibrate eleven streamflow gauging stations at the Tenderfoot Creek Experimental Forest in western Montana. At six of the stations the measured flows were consistent with the existing rating curves. At Lower and Upper Stringer Creek, Upper Sun Creek and Upper Tenderfoot Creek the published flows, based on the existing rating curves,...
NASA Astrophysics Data System (ADS)
Chen, Gang; Chen, Yanping; Zheng, Xiongwei; He, Cheng; Lu, Jianping; Feng, Shangyuan; Chen, Rong; Zeng, Haisan
2013-12-01
In this work, we developed a SERS platform for quantitative detection of carcinoembryonic antigen (CEA) in the serum of patients with colorectal cancers. Anti-CEA-functionalized, 4-mercaptobenzoic acid-labeled Au/Ag core-shell bimetallic nanoparticles were prepared first and then used to analyze CEA antigen solutions of different concentrations. A calibration curve was established in the range from 5 × 10⁻³ to 5 × 10⁵ ng/mL. Finally, this new SERS probe was applied for quantitative detection of CEA in serum obtained from 26 colorectal cancer patients according to the calibration curve. The results were in good agreement with those obtained by the electrochemical luminescence method, suggesting that SERS immunoassay has high sensitivity and specificity for CEA detection in serum. A detection limit of 5 pg/ml was achieved. This study demonstrated the feasibility and great potential for developing this new technology into a clinical tool for analysis of tumor markers in the blood.
Measuring the radon concentration in air (Meting van de radonconcentratie in lucht)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aten, J.B.T.; Bierhuizen, H.W.J.; Vanhoek, L.P.
1975-01-01
A simple transportable apparatus for measurement of the radon concentration in the air of a workshop was developed. An air sample is sucked through a filter and the decay curve of the alpha activity is measured. The count rate 40 min after sampling gives an indication of the radon activity. The apparatus was calibrated by analyzing an analogous decay curve obtained with a large filter and a large air sample, the activity being measured with an anti-coincidence counter. (GRA)
Fukuda, Ikuma; Hayashi, Hiroaki; Takegami, Kazuki; Konishi, Yuki
2013-09-01
Diagnostic X-ray equipment was used to develop an experimental apparatus for calibrating a CdTe detector. Powder-type samples were irradiated with collimated X-rays. On excitation of the atoms, characteristic X-rays were emitted. We prepared Nb2O5, SnO2, La2O3, Gd2O3, and WO3 metal oxide samples. Experiments using the diagnostic X-ray equipment were carried out to verify the practicality of our apparatus. First, we verified that the collimators incorporated in the apparatus worked well. Second, the X-ray spectra were measured using the prepared samples. Finally, we analyzed the spectra, which indicated that the energy calibration curve had been obtained with an accuracy of ±0.06 keV. The developed apparatus can be used conveniently, suggesting that it would be useful for the practical training of beginners and researchers.
NASA Astrophysics Data System (ADS)
Liu, Boshi; Huang, Renliang; Yu, Yanjun; Su, Rongxin; Qi, Wei; He, Zhimin
2018-04-01
Ochratoxin A (OTA) is a type of mycotoxin generated from the metabolism of Aspergillus and Penicillium, and is extremely toxic to humans, livestock, and poultry. However, traditional assays for the detection of OTA are expensive and complicated. In addition to the OTA aptamer, OTA itself at high concentrations can also adsorb onto the surface of gold nanoparticles (AuNPs) and further inhibit salt-induced AuNP aggregation. We herein report a new OTA assay applying the localized surface plasmon resonance effect of AuNPs and their aggregates. The result obtained from only a single linear calibration curve is not reliable, so we developed a “double calibration curve” method to address this issue and widen the OTA detection range. A number of other analytes were also examined, and the structural properties of analytes that bind with the AuNPs were further discussed. We found that various considerations must be taken into account in the detection of these analytes when applying AuNP aggregation-based methods, due to their different binding strengths.
NASA Astrophysics Data System (ADS)
Gupta, A.; Singh, P. J.; Gaikwad, D. Y.; Udupa, D. V.; Topkar, A.; Sahoo, N. K.
2018-02-01
An experimental setup is developed for the trace level detection of heavy water (HDO) using the off-axis integrated cavity output spectroscopy technique. The absorption spectrum of water samples is recorded in the spectral range of 7190.7 cm⁻¹ to 7191.5 cm⁻¹ with a diode laser as the light source. From the recorded water vapor absorption spectrum, the heavy water concentration is determined from the HDO and water lines. The effect of cavity gain nonlinearity with per-pass absorption is studied. The signal processing and data fitting procedure is devised to obtain linear calibration curves by including nonlinear cavity gain effects in the calculation. Initial calibration of mirror reflectivity is performed by measurements on a natural water sample. The signal processing and data fitting method has been validated by measurement of the HDO concentration in water samples over a wide range from 20 ppm to 2280 ppm, showing a linear calibration curve. The average measurement time is about 30 s. The experimental technique presented in this paper could be applied to the development of a portable instrument for the fast measurement of water isotopic composition in heavy water plants and for the detection of heavy water leaks in pressurized heavy water reactors.
Utility of mass spectrometry in the diagnosis of prion diseases
USDA-ARS?s Scientific Manuscript database
We developed a sensitive mass spectrometry-based method of quantitating the prions present in a variety of mammalian species. Calibration curves relating the area ratios of the selected analyte peptides and their oxidized analogs to their homologous stable isotope labeled internal standards were pre...
Mocho, Pierre; Desauziers, Valérie
2011-05-01
Solid-phase microextraction (SPME) is a powerful technique, easy to implement for on-site static sampling of indoor VOCs emitted by building materials. However, a major constraint lies in the establishment of calibration curves, which requires complex generation of standard atmospheres. Thus, the purpose of this paper is to propose a model to predict the adsorption kinetics (i.e., calibration curves) of four model VOCs. The model is based on Fick's laws for the gas phase and on the equilibrium or the solid diffusion model for the adsorptive phase. Two samplers (the FLEC® and a home-made cylindrical emission cell), coupled to SPME for static sampling of material emissions, were studied. A good agreement between modeling and experimental data is observed, and the results show the influence of sampling rate on the mass transfer mode as a function of sample volume. The equilibrium model is suited to the larger-volume sampler (the cylindrical cell), while the solid diffusion model is dedicated to the small-volume sampler (the FLEC®). The limiting steps of mass transfer are diffusion in the gas phase for the cylindrical cell and pore surface diffusion for the FLEC®. In the future, this modeling approach could be a useful tool for time-saving development of SPME to study building material emissions in static-mode sampling.
ESR/Alanine gamma-dosimetry in the 10-30 Gy range.
Fainstein, C; Winkler, E; Saravi, M
2000-05-01
We report alanine dosimeter preparation, procedures for using the ESR dosimetry method, and the resulting calibration curve for gamma-irradiation in the range of 10-30 Gy. We used the calibration curve to measure the irradiation dose in gamma-irradiation of human blood, as required in blood transfusion therapy. The ESR/alanine results are compared against those obtained using the thermoluminescent dosimetry (TLD) method.
Errors introduced by dose scaling for relative dosimetry
Watanabe, Yoichi; Hayashi, Naoki
2012-01-01
Some dosimeters require a relationship between detector signal and delivered dose. The relationship (characteristic curve or calibration equation) usually depends on the environment under which the dosimeters are manufactured or stored. To compensate for the difference in radiation response among different batches of dosimeters, the measured dose can be scaled by normalizing the measured dose to a specific dose. Such a procedure, often called “relative dosimetry”, allows us to skip the time‐consuming production of a calibration curve for each irradiation. In this study, the magnitudes of errors due to the dose scaling procedure were evaluated by using the characteristic curves of BANG3 polymer gel dosimeter, radiographic EDR2 films, and GAFCHROMIC EBT2 films. Several sets of calibration data were obtained for each type of dosimeters, and a calibration equation of one set of data was used to estimate doses of the other dosimeters from different batches. The scaled doses were then compared with expected doses, which were obtained by using the true calibration equation specific to each batch. In general, the magnitude of errors increased with increasing deviation of the dose scaling factor from unity. Also, the errors strongly depended on the difference in the shape of the true and reference calibration curves. For example, for the BANG3 polymer gel, of which the characteristic curve can be approximated with a linear equation, the error for a batch requiring a dose scaling factor of 0.87 was larger than the errors for other batches requiring smaller magnitudes of dose scaling, or scaling factors of 0.93 or 1.02. The characteristic curves of EDR2 and EBT2 films required nonlinear equations. With those dosimeters, errors larger than 5% were commonly observed in the dose ranges of below 50% and above 150% of the normalization dose. In conclusion, the dose scaling for relative dosimetry introduces large errors in the measured doses when a large dose scaling is applied, and this procedure should be applied with special care. PACS numbers: 87.56.Da, 06.20.Dk, 06.20.fb PMID:22955658
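A toy illustration, not taken from the study, of how the dose-scaling error analysis works: a reference calibration curve is rescaled so that it is exact at a single normalization dose and then applied to a batch whose true response has a slightly different shape, so the residual error grows away from the normalization point. The response functions and coefficients below are invented.

import numpy as np

def dose_from_signal_ref(s):            # reference-batch calibration (linear)
    return 100.0 * s

def dose_from_signal_true(s):           # new batch: slightly nonlinear response
    return 87.0 * s + 4.0 * s**2

norm_dose = 200.0                       # cGy, normalization dose
# Signal the new batch produces at the normalization dose (root of 4s^2 + 87s - 200 = 0).
s_norm = max(np.roots([4.0, 87.0, -norm_dose]).real)
scale = norm_dose / dose_from_signal_ref(s_norm)       # dose scaling factor

signals = np.linspace(0.5, 4.0, 8)
scaled_dose = scale * dose_from_signal_ref(signals)    # relative dosimetry result
true_dose = dose_from_signal_true(signals)             # batch-specific calibration
print("max relative error: %.1f%%" % (100 * np.max(np.abs(scaled_dose - true_dose) / true_dose)))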
Satellite Calibration With LED Detectors at Mud Lake
NASA Technical Reports Server (NTRS)
Hiller, Jonathan D.
2005-01-01
Earth-monitoring instruments in orbit must be routinely calibrated in order to accurately analyze the data obtained. By comparing radiometric measurements taken on the ground in conjunction with a satellite overpass, calibration curves are derived for an orbiting instrument. A permanent, automated facility is planned for Mud Lake, Nevada (a large, homogeneous, dry lakebed) for this purpose. Because some orbiting instruments have low resolution (250 meters per pixel), inexpensive radiometers using LEDs as sensors are being developed to array widely over the lakebed. LEDs are ideal because they are inexpensive, reliable, and sense over a narrow bandwidth. By obtaining and averaging widespread data, errors are reduced and long-term surface changes can be more accurately observed.
Odegård, M; Mansfeld, J; Dundas, S H
2001-08-01
Calibration materials for microanalysis of Ti minerals have been prepared by direct fusion of synthetic and natural materials by resistance heating in high-purity graphite electrodes. Synthetic materials were FeTiO3 and TiO2 reagents doped with minor and trace elements; CRMs for ilmenite, rutile, and a Ti-rich magnetite were used as natural materials. Problems occurred during fusion of Fe2O3-rich materials, because at atmospheric pressure Fe2O3 decomposes into Fe3O4 and O2 at 1462 degrees C. An alternative fusion technique under pressure was tested, but the resulting materials were characterized by extensive segregation and development of separate phases. Fe2O3-rich materials were therefore fused below this temperature, resulting in a form of sintering, without conversion of the materials into amorphous glasses. The fused materials were studied by optical microscopy and EPMA, and tested as calibration materials by inductively coupled plasma mass spectrometry, equipped with laser ablation for sample introduction (LA-ICP-MS). It was demonstrated that calibration curves based on materials of rutile composition, within normal analytical uncertainty, generally coincide with calibration curves based on materials of ilmenite composition. It is, therefore, concluded that LA-ICP-MS analysis of Ti minerals can with advantage be based exclusively on calibration materials prepared for rutile, thereby avoiding the special fusion problems related to oxide mixtures of ilmenite composition. It is documented that sintered materials were in good overall agreement with homogeneous glass materials, an observation that indicates that in other situations also sintered mineral concentrates might be a useful alternative for instrument calibration, e.g. as alternative to pressed powders.
Jankowski, Clémentine; Guiu, S; Cortet, M; Charon-Barra, C; Desmoulins, I; Lorgis, V; Arnould, L; Fumoleau, P; Coudert, B; Rouzier, R; Coutant, C; Reyal, F
2017-01-01
The aim of this study was to assess the Institut Gustave Roussy/M.D. Anderson Cancer Center (IGR/MDACC) nomogram in predicting pathologic complete response (pCR) to preoperative chemotherapy in a cohort of human epidermal growth factor receptor 2 (HER2)-positive tumors treated with preoperative chemotherapy with trastuzumab. We then combine clinical and pathological variables associated with pCR into a new nomogram specific to HER2-positive tumors treated by preoperative chemotherapy with trastuzumab. Data from 270 patients with HER2-positive tumors treated with preoperative chemotherapy with trastuzumab at the Institut Curie and at the Georges François Leclerc Cancer Center were used to assess the IGR/MDACC nomogram and to subsequently develop a new nomogram for pCR based on multivariate logistic regression. Model performance was quantified in terms of calibration and discrimination. We studied the utility of the new nomogram using decision curve analysis. The IGR/MDACC nomogram was not accurate for the prediction of pCR in HER2-positive tumors treated by preoperative chemotherapy with trastuzumab, with poor discrimination (AUC = 0.54, 95% CI 0.51-0.58) and poor calibration (p = 0.01). After uni- and multivariate analysis, a new pCR nomogram was built based on T stage (TNM), hormone receptor status, and Ki67 (%). The model had good discrimination with an area under the curve (AUC) at 0.74 (95% CI 0.70-0.79) and adequate calibration (p = 0.93). By decision curve analysis, the model was shown to be relevant between thresholds of 0.3 and 0.7. To the best of our knowledge, ours is the first nomogram to predict pCR in HER2-positive tumors treated by preoperative chemotherapy with trastuzumab. To ensure generalizability, this model needs to be externally validated.
Peng, Rong-fei; He, Jia-yao; Zhang, Zhan-xia
2002-02-01
The performance of a self-constructed visible AOTF spectrophotometer is presented. The wavelength calibration of AOTF1 and AOTF2 is performed with a didymium glass using a fourth-order polynomial curve-fitting method. The absolute error of the peak position is usually less than 0.7 nm. Compared with the commercial UV1100 spectrophotometer, the scanning speed of the AOTF spectrophotometer is much faster, but the resolution depends on the quality of the AOTF. The absorption spectra and the calibration curves of copper sulfate and alizarin red obtained with AOTF1 (Institute for Silicate, Shanghai, China) and AOTF2 (Brimrose, USA), respectively, are presented. The corresponding correlation coefficients of the calibration curves are 0.9991 and 0.9990, respectively. Preliminary results show that the self-constructed AOTF spectrophotometer is feasible.
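A hedged sketch of a fourth-order polynomial wavelength calibration of the kind described above; the drive-frequency settings and reference peak positions below are placeholders, not the authors' didymium data.

import numpy as np

drive_freq_mhz = np.array([ 85.,  95., 105., 115., 125., 135.])   # AOTF setting
true_peak_nm   = np.array([585., 573., 561., 549., 538., 528.])   # reference lines

coeffs = np.polyfit(drive_freq_mhz, true_peak_nm, 4)               # 4th-order fit
fitted = np.polyval(coeffs, drive_freq_mhz)
print("max abs calibration error (nm):", np.max(np.abs(fitted - true_peak_nm)))

# Wavelength assigned to an arbitrary drive setting after calibration:
print(np.polyval(coeffs, 100.0))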
2013-02-11
calibration curves was ±5%. Ion chromatography (IC) was used for analysis of perchlorate and other ionic targets. Analysis was carried out on a... The methods utilize liquid or gas chromatography, techniques that do not lend themselves well to portable devices and methods. Portable methods are...
Linearization of Positional Response Curve of a Fiber-optic Displacement Sensor
NASA Astrophysics Data System (ADS)
Babaev, O. G.; Matyunin, S. A.; Paranin, V. D.
2018-01-01
Currently, the creation of optical measuring instruments and sensors for measuring linear displacement is one of the most relevant problems in the area of instrumentation. Fiber-optic contactless sensors based on the magneto-optical effect are of special interest. They are essentially contactless, non-electrical, and have a closed optical channel not subject to contamination. The main problem with this type of sensor is the non-linearity of its positional response curve, due to the hyperbolic nature of the variation in magnetic field intensity induced by moving the magnetic source mounted on the controlled object relative to the sensing element. This paper discusses an algorithmic method of linearizing the positional response curve of fiber-optic displacement sensors in any selected range of the displacements to be measured. The method is divided into two stages: 1 - definition of the calibration function; 2 - measurement and linearization of the positional response curve (including its temperature stabilization). The algorithm under consideration significantly reduces the number of points of the calibration function, which is essential for the calibration of the temperature dependence, through the use of points that deviate randomly from a uniformly spaced grid. Subsequent interpolation of the deviating points and piecewise linear-plane approximation of the calibration function reduce the microcontroller storage capacity needed for storing the calibration function and the time required to process the measurement results. The paper also presents experimental results of testing real samples of fiber-optic displacement sensors.
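A minimal sketch of the linearization idea, under the assumption that the stored calibration function maps true displacement to sensor output and is inverted by piecewise-linear interpolation at measurement time; the hyperbolic response used below is only a stand-in.

import numpy as np

# Calibration stage: known displacements and the (nonlinear) sensor readings.
displacement_mm = np.array([1., 2., 3., 5., 8., 12., 20.])
sensor_reading  = 1.0 / (0.05 * displacement_mm + 0.02)     # assumed hyperbolic response

# Measurement stage: invert the calibration curve for a new reading.
def displacement_from_reading(reading):
    # np.interp needs the x-grid ascending, so sort by sensor reading.
    order = np.argsort(sensor_reading)
    return np.interp(reading, sensor_reading[order], displacement_mm[order])

print(displacement_from_reading(2.5))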
2007-09-01
[Figure captions: Figure 3.4, calibration curves for CT number (Hounsfield units) vs. mineral density (g/cc) and vs. apparent density (g/cc).] The CT number scale is named Hounsfield units (HU) after Sir Godfrey Hounsfield. The CT number is K([μi − μw]/μw), where K = a magnifying constant, which depends on the make of CT
Fu, J; Li, L; Yang, X Q; Zhu, M J
2011-01-01
Leucine carboxypeptidase (EC 3.4.16) activity in Actinomucor elegans bran koji was investigated via absorbance at 507 nm after staining with Cd-ninhydrin solution, using calibration curve A, made from a set of standard leucine solutions of known concentration; calibration curve B, made from three sets of standard leucine solutions of known concentration with the addition of inactive crude enzyme extract at three concentrations; and calibration curve C, made from three sets of standard leucine solutions of known concentration with the addition of active crude enzyme extract at three concentrations. The results indicated that applying a pure amino acid standard curve is not a suitable way to determine carboxypeptidase in a complicated mixture, and it probably leads to overestimated carboxypeptidase activity. It was found that adding crude extract to the pure amino acid standard curve produced results significantly different from the pure amino acid standard curve method (p < 0.05). There was no significant difference in enzyme activity (p > 0.05) between the addition of active crude extract and the addition of inactive crude extract when the proper dilution factor was used. It was concluded that the addition of crude enzyme extract to the calibration is needed to eliminate the interference of free amino acids and related compounds present in the crude enzyme extract.
NASA Technical Reports Server (NTRS)
Usry, J. W.; Whitlock, C. H.
1981-01-01
Management of water resources such as a reservoir requires using analytical models which describe such parameters as the suspended sediment field. To select or develop an appropriate model requires making many measurements to describe the distribution of this parameter in the water column. One potential method for making those measurements expeditiously is to measure light transmission or turbidity and relate that parameter to total suspended solids concentrations. An instrument which may be used for this purpose was calibrated by generating curves of transmission measurements plotted against measured values of total suspended solids concentrations and beam attenuation coefficients. Results of these experiments indicate that field measurements made with this instrument using curves generated in this study should correlate with total suspended solids concentrations and beam attenuation coefficients in the water column within 20 percent.
NASA Technical Reports Server (NTRS)
Moses, J. Daniel
1989-01-01
Three improvements in photographic x-ray imaging techniques for solar astronomy are presented. The testing and calibration of a new film processor were conducted; the resulting product will allow photometric development of sounding rocket flight film immediately upon recovery at the missile range. Two fine-grained photographic films were calibrated and flight tested to provide alternative detector choices when the need for high resolution is greater than the need for high sensitivity. An analysis technique used to obtain the characteristic curve directly from photographs of UV solar spectra was applied to the analysis of soft x-ray photographic images. The resulting procedure provides a more complete and straightforward determination of the parameters describing the x-ray characteristic curve than previous techniques. These improvements fall into the category of refinements rather than revolutions, indicating the fundamental suitability of the photographic process for x-ray imaging in solar astronomy.
A calibration method for patient specific IMRT QA using a single therapy verification film
Shukla, Arvind Kumar; Oinam, Arun S.; Kumar, Sanjeev; Sandhu, I.S.; Sharma, S.C.
2013-01-01
Aim The aim of the present study is to develop and verify a single-film calibration procedure for use in intensity-modulated radiation therapy (IMRT) quality assurance. Background Radiographic films have been regularly used in routine commissioning of treatment modalities and verification of treatment planning systems (TPS). Radiation dosimetry based on radiographic film has the ability to give an absolute two-dimensional dose distribution and is preferred for IMRT quality assurance. However, a single therapy verification film gives a quick and reliable method for IMRT verification. Materials and methods A single extended dose range (EDR 2) film was used to generate the sensitometric curve of film optical density versus radiation dose. The EDR 2 film was exposed with nine 6 cm × 6 cm fields of a 6 MV photon beam obtained from a medical linear accelerator at 5 cm depth in a solid water phantom. The nine regions of the single film were exposed with radiation doses ranging from 10 to 362 cGy. The actual dose measurements inside the field regions were performed using a 0.6 cm3 ionization chamber. The exposed film was processed after irradiation and scanned using a VIDAR film scanner, and the optical density value was noted for each region. Ten IMRT plans of head and neck carcinoma were used for verification using a dynamic IMRT technique, and evaluated using the gamma index method against the TPS-calculated dose distribution. Results A sensitometric curve was generated using a single film exposed at nine field regions to check quantitative dose verification of IMRT treatments. The radiation scatter factor was observed to decrease exponentially with increasing distance from the centre of each field region. The IMRT plans verified on the basis of the calibration curve using the gamma index method were found to be within the acceptance criteria. Conclusion The single-film method proved to be superior to the traditional calibration method and provides fast daily film calibration for highly accurate IMRT verification. PMID:24416558
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, J; Lasio, G; Chen, S
2015-06-15
Purpose: To develop a CBCT HU correction method using a patient-specific HU to mass density conversion curve based on a novel image registration and organ mapping method for head-and-neck radiation therapy. Methods: There are three steps to generate a patient-specific CBCT HU to mass density conversion curve. First, we developed a novel robust image registration method based on sparseness analysis to register the planning CT (PCT) and the CBCT. Second, a novel organ mapping method was developed to transfer the organs at risk (OAR) contours from the PCT to the CBCT, and the corresponding mean HU values of each OAR were measured in both the PCT and CBCT volumes. Third, a set of PCT and CBCT HU to mass density conversion curves were created based on the mean HU values of the OARs and the corresponding mass density of each OAR in the PCT. Then, we compared our proposed conversion curve with the traditional Catphan phantom based CBCT HU to mass density calibration curve. Both curves were input into the treatment planning system (TPS) for dose calculation. Last, the PTV and OAR doses, DVHs and dose distributions of the CBCT plans were compared to the original treatment plan. Results: One head-and-neck case, which contained a pair of PCT and CBCT volumes, was used. The dose differences between the PCT and CBCT plans using the proposed method are −1.33% for the mean PTV, 0.06% for PTV D95%, and −0.56% for the left neck. The dose differences between plans of PCT and CBCT corrected using the CATPhan based method are −4.39% for mean PTV, 4.07% for PTV D95%, and −2.01% for the left neck. Conclusion: The proposed CBCT HU correction method achieves better agreement with the original treatment plan compared to the traditional CATPhan based calibration method.
Gil, Jeovanis; Cabrales, Ania; Reyes, Osvaldo; Morera, Vivian; Betancourt, Lázaro; Sánchez, Aniel; García, Gerardo; Moya, Galina; Padrón, Gabriel; Besada, Vladimir; González, Luis Javier
2012-02-23
Growth hormone-releasing peptide 6 (GHRP-6, His-(DTrp)-Ala-Trp-(DPhe)-Lys-NH₂, MW=872.44 Da) is a potent growth hormone secretagogue that exhibits a cytoprotective effect, maintaining tissue viability during acute ischemia/reperfusion episodes in different organs such as the small bowel, liver and kidneys. In the present work, a quantitative method to analyze GHRP-6 in human plasma was developed and fully validated following FDA guidelines. The method uses an internal standard (IS) of GHRP-6 with ¹³C-labeled alanine for quantification. Sample processing includes a precipitation step with cold acetone to remove the most abundant plasma proteins, recovering the GHRP-6 peptide with a high yield. Quantification was achieved by LC-MS in positive full scan mode on a Q-Tof mass spectrometer. The sensitivity of the method was evaluated, establishing the lower limit of quantification at 5 ng/mL and a range for the calibration curve from 5 ng/mL to 50 ng/mL. A dilution integrity test was performed to analyze samples at higher concentrations of GHRP-6. The validation process involved five calibration curves and the analysis of quality control samples to determine accuracy and precision. The calibration curves showed R² higher than 0.988. The stability of the analyte and its internal standard (IS) was demonstrated under all conditions the samples would experience during real sample analysis. This method was applied to the quantification of GHRP-6 in plasma from nine healthy volunteers participating in a phase I clinical trial. Copyright © 2011 Elsevier B.V. All rights reserved.
Mo, Shaobo; Dai, Weixing; Xiang, Wenqiang; Li, Qingguo; Wang, Renjie; Cai, Guoxiang
2018-05-03
The objective of this study was to summarize the clinicopathological and molecular features of synchronous colorectal peritoneal metastases (CPM). We then combined clinical and pathological variables associated with synchronous CPM into a nomogram and confirmed its utility using decision curve analysis. Synchronous metastatic colorectal cancer (mCRC) patients who received primary tumor resection and underwent KRAS, NRAS, and BRAF gene mutation detection at our center from January 2014 to September 2015 were included in this retrospective study. An analysis was performed to investigate the clinicopathological and molecular features for independent risk factors of synchronous CPM and to subsequently develop a nomogram for synchronous CPM based on multivariate logistic regression. Model performance was quantified in terms of calibration and discrimination. We studied the utility of the nomogram using decision curve analysis. In total, 226 patients were diagnosed with synchronous mCRC, of whom 50 patients (22.1%) presented with CPM. After uni- and multivariate analysis, a nomogram was built based on tumor site, histological type, age, and T4 status. The model had good discrimination, with an area under the curve (AUC) of 0.777 (95% CI 0.703-0.850), and adequate calibration. By decision curve analysis, the model was shown to be relevant between thresholds of 0.10 and 0.66. Synchronous CPM is more likely to occur in patients aged ≤60 years or with right-sided primary lesions, signet ring cell cancer, or T4 stage. This is the first nomogram to predict synchronous CPM. To ensure generalizability, this model needs to be externally validated. Copyright © 2018 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
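A generic sketch of the modelling workflow summarized above (multivariate logistic regression, discrimination by AUC, and a calibration check); the synthetic predictors loosely mimic the reported risk factors, but all data, coefficients, and results below are invented and are not the authors' model.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.integers(30, 85, n),          # age (years)
    rng.integers(0, 2, n),            # right-sided primary lesion (0/1)
    rng.integers(0, 2, n),            # signet ring cell histology (0/1)
    rng.integers(0, 2, n),            # T4 stage (0/1)
])
# Synthetic outcome loosely tied to the predictors.
logit = -2.0 - 0.02 * (X[:, 0] - 60) + 0.8 * X[:, 1] + 1.2 * X[:, 2] + 1.0 * X[:, 3]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(y, prob))
frac_pos, mean_pred = calibration_curve(y, prob, n_bins=5)
print(np.column_stack([mean_pred, frac_pos]))   # coordinates of the calibration plot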
Calibrations between the variables of microbial TTI response and ground pork qualities.
Kim, Eunji; Choi, Dong Yeol; Kim, Hyun Chul; Kim, Keehyuk; Lee, Seung Ju
2013-10-01
A time-temperature indicator (TTI) based on a lactic acid bacterium, Weissella cibaria CIFP009, was applied to ground pork packaging. Calibration curves between TTI response and pork qualities were obtained from storage tests at 2°C, 10°C, and 13°C. The curves of TTI response vs. total cell number at different temperatures coincided to the greatest extent, indicating the highest representativeness of calibration, showing the lowest coefficient of variation (CV=11%) of the quality variables at a given TTI response (titratable acidity) on the curves, followed by pH (23%), volatile basic nitrogen (VBN) (25%), and thiobarbituric acid-reactive substances (TBARS) (47%). Similarity of the Arrhenius activation energy (Ea) could also reflect the representativeness of calibration. The activation energy for total cell number (104.9 kJ/mol) was found to be the most similar to that of the TTI response (106.2 kJ/mol), followed by pH (113.6 kJ/mol), VBN (77.4 kJ/mol), and TBARS (55.0 kJ/mol). Copyright © 2013 Elsevier Ltd. All rights reserved.
Flight calibration tests of a nose-boom-mounted fixed hemispherical flow-direction sensor
NASA Technical Reports Server (NTRS)
Armistead, K. H.; Webb, L. D.
1973-01-01
Flight calibrations of a fixed hemispherical flow angle-of-attack and angle-of-sideslip sensor were made from Mach numbers of 0.5 to 1.8. Maneuvers were performed by an F-104 airplane at selected altitudes to compare the measurement of flow angle of attack from the fixed hemispherical sensor with that from a standard angle-of-attack vane. The hemispherical flow-direction sensor measured differential pressure at two angle-of-attack ports and two angle-of-sideslip ports in diametrically opposed positions. Stagnation pressure was measured at a center port. The results of these tests showed that the calibration curves for the hemispherical flow-direction sensor were linear for angles of attack up to 13 deg. The overall uncertainty in determining angle of attack from these curves was plus or minus 0.35 deg or less. A Mach number position error calibration curve was also obtained for the hemispherical flow-direction sensor. The hemispherical flow-direction sensor exhibited a much larger position error than a standard uncompensated pitot-static probe.
Chun, Hao-Jung; Poklis, Justin L.; Poklis, Alphonse; Wolf, Carl E.
2016-01-01
Ethanol is the most widely used and abused drug. While blood is the preferred specimen for analysis, tissue specimens such as brain serve as alternative specimens for alcohol analysis in post-mortem cases where blood is unavailable or contaminated. A method was developed using headspace gas chromatography with flame ionization detection (HS-GC-FID) for the detection and quantification of ethanol, acetone, isopropanol, methanol and n-propanol in brain tissue specimens. Unfixed volatile-free brain tissue specimens were obtained from the Department of Pathology at Virginia Commonwealth University. Calibrators and controls were prepared from 4-fold diluted homogenates of these brain tissue specimens, and were analyzed using t-butanol as the internal standard. The chromatographic separation was performed with a Restek BAC2 column. A linear calibration was generated for all analytes (mean r2 > 0.9992) with the limits of detection and quantification of 100–110 mg/kg. Matrix effect from the brain tissue was determined by comparing the slopes of matrix prepared calibration curves with those of aqueous calibration curves; no significant differences were observed for ethanol, acetone, isopropanol, methanol and n-propanol. The bias and the CVs for all volatile controls were ≤10%. The method was also evaluated for carryover, selectivity, interferences, bench-top stability and freeze-thaw stability. The HS-GC-FID method was determined to be reliable and robust for the analysis of ethanol, acetone, isopropanol, methanol and n-propanol concentrations in brain tissue, effectively expanding the specimen options for post-mortem alcohol analysis. PMID:27488829
ERIC Educational Resources Information Center
Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill
Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…
A dose-response curve for biodosimetry from a 6 MV electron linear accelerator
Lemos-Pinto, M.M.P.; Cadena, M.; Santos, N.; Fernandes, T.S.; Borges, E.; Amaral, A.
2015-01-01
Biological dosimetry (biodosimetry) is based on the investigation of radiation-induced biological effects (biomarkers), mainly dicentric chromosomes, in order to correlate them with radiation dose. To interpret the dicentric score in terms of absorbed dose, a calibration curve is needed. Each curve should be constructed with respect to basic physical parameters, such as the type of ionizing radiation characterized by low or high linear energy transfer (LET) and dose rate. This study was designed to obtain dose calibration curves by scoring of dicentric chromosomes in peripheral blood lymphocytes irradiated in vitro with a 6 MV electron linear accelerator (Mevatron M, Siemens, USA). Two software programs, CABAS (Chromosomal Aberration Calculation Software) and Dose Estimate, were used to generate the curve. The two software programs are discussed; the results obtained were compared with each other and with other published low LET radiation curves. Both software programs resulted in identical linear and quadratic terms for the curve presented here, which was in good agreement with published curves for similar radiation quality and dose rates. PMID:26445334
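A hedged sketch of how a linear-quadratic dose-response curve, Y = C + αD + βD², can be fitted to dicentric yields and then inverted for dose estimation; the yields below are invented and do not reproduce the curve reported above.

import numpy as np
from scipy.optimize import curve_fit

dose_gy = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0])
dicentrics_per_cell = np.array([0.001, 0.006, 0.014, 0.042, 0.14, 0.30, 0.52])

def lq_model(d, c, alpha, beta):
    return c + alpha * d + beta * d**2

params, cov = curve_fit(lq_model, dose_gy, dicentrics_per_cell,
                        p0=[0.001, 0.02, 0.03])
c, alpha, beta = params
print(f"Y = {c:.4f} + {alpha:.4f}*D + {beta:.4f}*D^2")

# Dose estimation: invert the curve for an observed yield of 0.2 dicentrics/cell.
yield_obs = 0.2
disc = alpha**2 + 4 * beta * (yield_obs - c)
print("estimated dose (Gy):", (-alpha + np.sqrt(disc)) / (2 * beta))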
NASA Technical Reports Server (NTRS)
Everhart, Joel L.
1996-01-01
Orifice-to-orifice inconsistencies in data acquired with an electronically scanned pressure system at the beginning of a wind tunnel experiment forced modifications to the standard instrument calibration procedures. These modifications included a large increase in the number of calibration points, which would allow a critical examination of the calibration curve-fit process, and a subsequent post-test reduction of the pressure data. Evaluation of these data has resulted in an improved functional representation of the pressure-voltage signature for electronically scanned pressure sensors, which can reduce the errors due to calibration curve fit to under 0.10 percent of reading, compared to the manufacturer-specified 0.10 percent of full scale. Application of the improved calibration function allows a more rational selection of the calibration set-point pressures. These pressures should be adjusted to achieve a voltage output which matches the physical shape of the pressure-voltage signature of the sensor. This process is conducted in lieu of the more traditional approach where a calibration pressure is specified and the resulting sensor voltage is recorded. The fifteen calibrations acquired over the two-week duration of the wind tunnel test were further used to perform a preliminary statistical assessment of the variation in the calibration process. The results allowed the estimation of the bias uncertainty for a single instrument calibration, and they form the precursor for more extensive and more controlled studies in the laboratory.
Fast and robust curve skeletonization for real-world elongated objects
USDA-ARS?s Scientific Manuscript database
These datasets were generated for calibrating robot-camera systems. In an extension, we also considered the problem of calibrating robots with more than one camera. These datasets are provided as a companion to the paper, "Solving the Robot-World Hand-Eye(s) Calibration Problem with Iterative Meth...
See the Light! A Nice Application of Calculus to Chemistry
ERIC Educational Resources Information Center
Boersma, Stuart; McGowan, Garrett
2007-01-01
Some simple modeling with Riemann sums can be used to develop Beer's Law, which describes the relationship between the absorbance of light and the concentration of the solution which the light is penetrating. A further application of the usefulness of Beer's Law in creating calibration curves is also presented. (Contains 3 figures.)
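The derivation the article presumably builds with Riemann sums can be summarized as follows (a standard sketch, not a quotation from the article): each thin slab of solution absorbs a fraction of the incident light proportional to its thickness and the concentration, and summing the slabs gives an exponential attenuation, hence an absorbance that is linear in concentration.

\[
  \Delta I \approx -\varepsilon\, c\, I(x)\, \Delta x
  \quad\Longrightarrow\quad
  \frac{dI}{I} = -\varepsilon\, c\, dx
  \quad\Longrightarrow\quad
  I(\ell) = I_0\, e^{-\varepsilon c \ell},
\]
\[
  A = -\log_{10}\frac{I(\ell)}{I_0} = \frac{\varepsilon c \ell}{\ln 10},
\]

so a plot of measured absorbance A against the known concentrations c of standard solutions is a straight line through the origin, which is exactly the calibration curve mentioned above (here ε is the natural-log absorption coefficient; the conventional molar absorptivity absorbs the 1/ln 10 factor).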
Pockels-effect cell for gas-flow simulation
NASA Astrophysics Data System (ADS)
Weimer, D.
1982-05-01
A Pockels effect cell using a 75 cu cm DK*P crystal was developed and used as a gas flow simulator. Index of refraction gradients were produced in the cell by the fringing fields of parallel plate electrodes. Calibration curves for the device were obtained for index of refraction gradients in excess of 0.00025 m⁻¹.
Chiba, Takeshi; Maeda, Tomoji; Tairabune, Tomohiko; Tomita, Takashi; Sanbe, Atsushi; Takeda, Rika; Kikuchi, Akihiko; Kudo, Kenzo
2017-03-25
Serotonin (5-hydroxytryptamine, 5-HT) plays an important role in milk volume homeostasis in the mammary gland during lactation; 5-HT in milk may also affect infant development. However, there are few reports on 5-HT concentrations in human breast milk. To address this issue, we developed a simple method based on high-performance liquid chromatography with fluorescence detection (HPLC-FD) for measuring 5-HT concentrations in human breast milk. Breast milk samples were provided by four healthy Japanese women. Calibration curves for 5-HT in each sample were prepared with the standard addition method between 5 and 1000 ng/ml, and all had correlation coefficients >0.999. The recovery of 5-HT was 96.1%-101.0%, with a coefficient of variation of 3.39%-8.62%. The range of 5-HT concentrations estimated from the calibration curves was 11.1-51.1 ng/ml. Thus, the HPLC-FD method described here can effectively extract 5-HT from human breast milk with high reproducibility. Copyright © 2017 Elsevier Inc. All rights reserved.
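An illustrative sketch of the standard-addition calibration used above: the response of spiked aliquots is fitted linearly and extrapolated to the x-intercept to recover the endogenous 5-HT concentration. The peak areas below are invented, not measured values from the study.

import numpy as np

added_ng_ml = np.array([0., 5., 50., 200., 500., 1000.])
peak_area   = np.array([31., 36., 81., 232., 534., 1040.])   # detector response

slope, intercept = np.polyfit(added_ng_ml, peak_area, 1)
endogenous = intercept / slope          # magnitude of the x-intercept = original conc.
print(f"estimated 5-HT in the sample: {endogenous:.1f} ng/ml")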
Development and validation of an LC-UV method for the determination of sulfonamides in animal feeds.
Kumar, P; Companyó, R
2012-05-01
A simple LC-UV method was developed for the determination of residues of eight sulfonamides (sulfachloropyridazine, sulfadiazine, sulfadimidine, sulfadoxine, sulfamethoxypyridazine, sulfaquinoxaline, sulfamethoxazole, and sulfadimethoxine) in six types of animal feed. C18, Oasis HLB, Plexa and Plexa PCX stationary phases were assessed for the clean-up step and the latter was chosen as it showed greater efficiency in the clean-up of interferences. Feed samples spiked with sulfonamides at 2 mg/kg were used to assess the trueness (recovery %) and precision of the method. Mean recovery values ranged from 47% to 66%, intra-day precision (RSD %) from 4% to 15% and inter-day precision (RSD %) from 7% to 18% in pig feed. Recoveries and intra-day precisions were also evaluated in rabbit, hen, cow, chicken and piglet feed matrices. Calibration curves with standards prepared in mobile phase and matrix-matched calibration curves were compared and the matrix effects were ascertained. The limits of detection and quantification in the feeds ranged from 74 to 265 µg/kg and from 265 to 868 µg/kg, respectively. Copyright © 2011 John Wiley & Sons, Ltd.
She, Yunlang; Zhao, Lilan; Dai, Chenyang; Ren, Yijiu; Jiang, Gening; Xie, Huikang; Zhu, Huiyuan; Sun, Xiwen; Yang, Ping; Chen, Yongbing; Shi, Shunbin; Shi, Weirong; Yu, Bing; Xie, Dong; Chen, Chang
2017-11-01
To develop and validate a nomogram to estimate the pretest probability of malignancy in Chinese patients with a solid solitary pulmonary nodule (SPN). A primary cohort of 1798 patients with pathologically confirmed solid SPNs after surgery was retrospectively studied at five institutions from January 2014 to December 2015. A nomogram based on independent prediction factors of malignant solid SPN was developed. Predictive performance was also evaluated using the calibration curve and the area under the receiver operating characteristic curve (AUC). The mean age of the cohort was 58.9 ± 10.7 years. In univariate and multivariate analysis, age, history of cancer, the log base 10 transformation of the serum carcinoembryonic antigen value, nodule diameter, and the presence of spiculation, pleural indentation, and calcification remained predictive factors of malignancy. A nomogram was developed, and its AUC value (0.85; 95% CI, 0.83-0.88) was significantly higher than those of the other three models. The calibration curve showed optimal agreement between the probability of malignancy predicted by the nomogram and the actual probability. We developed and validated a nomogram that can estimate the pretest probability of malignancy in solid SPNs, which can assist clinical physicians in selecting and interpreting the results of subsequent diagnostic tests. © 2017 Wiley Periodicals, Inc.
On the relationship between NMR-derived amide order parameters and protein backbone entropy changes
Sharp, Kim A.; O’Brien, Evan; Kasinath, Vignesh; Wand, A. Joshua
2015-01-01
Molecular dynamics simulations are used to analyze the relationship between NMR-derived squared generalized order parameters of amide NH groups and backbone entropy. Amide order parameters (O²NH) are largely determined by the secondary structure and average values appear unrelated to the overall flexibility of the protein. However, analysis of the more flexible subset (O²NH < 0.8) shows that these report both on the local flexibility of the protein and on a different component of the conformational entropy than that reported by the side chain methyl axis order parameters, O²axis. A calibration curve for backbone entropy vs. O²NH is developed which accounts for both correlations between amide group motions of different residues, and correlations between backbone and side chain motions. This calibration curve can be used with experimental values of O²NH changes obtained by NMR relaxation measurements to extract backbone entropy changes, e.g. upon ligand binding. In conjunction with our previous calibration for side chain entropy derived from measured O²axis values this provides a prescription for determination of the total protein conformational entropy changes from NMR relaxation measurements. PMID:25739366
On the relationship between NMR-derived amide order parameters and protein backbone entropy changes.
Sharp, Kim A; O'Brien, Evan; Kasinath, Vignesh; Wand, A Joshua
2015-05-01
Molecular dynamics simulations are used to analyze the relationship between NMR-derived squared generalized order parameters of amide NH groups and backbone entropy. Amide order parameters (O²NH) are largely determined by the secondary structure and average values appear unrelated to the overall flexibility of the protein. However, analysis of the more flexible subset (O²NH < 0.8) shows that these report both on the local flexibility of the protein and on a different component of the conformational entropy than that reported by the side chain methyl axis order parameters, O²axis. A calibration curve for backbone entropy vs. O²NH is developed, which accounts for both correlations between amide group motions of different residues, and correlations between backbone and side chain motions. This calibration curve can be used with experimental values of O²NH changes obtained by NMR relaxation measurements to extract backbone entropy changes, for example, upon ligand binding. In conjunction with our previous calibration for side chain entropy derived from measured O²axis values this provides a prescription for determination of the total protein conformational entropy changes from NMR relaxation measurements. © 2015 Wiley Periodicals, Inc.
Takegami, Kazuki; Hayashi, Hiroaki; Okino, Hiroki; Kimoto, Natsumi; Maehata, Itsumi; Kanazawa, Yuki; Okazaki, Tohru; Kobayashi, Ikuo
2015-07-01
For X-ray diagnosis, the proper management of the entrance skin dose (ESD) is important. Recently, a small-type optically stimulated luminescence dosimeter (nanoDot OSL dosimeter) was made commercially available by Landauer, and it is hoped that it will be used for ESD measurements in clinical settings. Our objectives in the present study were to propose a method for calibrating the ESD measured with the nanoDot OSL dosimeter and to evaluate its accuracy. The reference ESD is assumed to be based on an air kerma with consideration of a well-known back scatter factor. We examined the characteristics of the nanoDot OSL dosimeter using two experimental conditions: a free air irradiation to derive the air kerma, and a phantom experiment to determine the ESD. For evaluation of the ability to measure the ESD, a calibration curve for the nanoDot OSL dosimeter was determined in which the air kerma and/or the ESD measured with an ionization chamber were used as references. As a result, we found that the calibration curve for the air kerma was determined with an accuracy of 5 %. Furthermore, the calibration curve was applied to the ESD estimation. The accuracy of the ESD obtained was estimated to be 15 %. The origin of these uncertainties was examined based on published papers and Monte-Carlo simulation. Most of the uncertainties were caused by the systematic uncertainty of the reading system and the differences in efficiency corresponding to different X-ray energies.
Noguchi, Akio; Nakamura, Kosuke; Sakata, Kozue; Sato-Fukuda, Nozomi; Ishigaki, Takumi; Mano, Junichi; Takabatake, Reona; Kitta, Kazumi; Teshima, Reiko; Kondo, Kazunari; Nishimaki-Mogami, Tomoko
2016-04-19
A number of genetically modified (GM) maize events have been developed and approved worldwide for commercial cultivation. A screening method is needed to monitor GM maize approved for commercialization in countries that mandate the labeling of foods containing a specified threshold level of GM crops. In Japan, a screening method has been implemented to monitor approved GM maize since 2001. However, the screening method currently used in Japan is time-consuming and requires generation of a calibration curve and experimental conversion factor (C(f)) value. We developed a simple screening method that avoids the need for a calibration curve and C(f) value. In this method, ΔC(q) values between the target sequences and the endogenous gene are calculated using multiplex real-time PCR, and the ΔΔC(q) value between the analytical and control samples is used as the criterion for determining analytical samples in which the GM organism content is below the threshold level for labeling of GM crops. An interlaboratory study indicated that the method is applicable independently with at least two models of PCR instruments used in this study.
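A hedged sketch of the screening logic described above: ΔCq is computed between the GM target and the endogenous gene for the analytical sample and for a control sample representing the threshold level, and the sign of ΔΔCq is used as the decision criterion. The Cq values are invented and the rule is simplified for illustration.

cq = {
    "sample":  {"gm_target": 31.2, "endogenous": 24.0},
    "control": {"gm_target": 30.1, "endogenous": 23.8},   # control at the threshold level
}

delta_cq_sample  = cq["sample"]["gm_target"]  - cq["sample"]["endogenous"]
delta_cq_control = cq["control"]["gm_target"] - cq["control"]["endogenous"]
delta_delta_cq = delta_cq_sample - delta_cq_control

# A larger delta-Cq means relatively less GM target; delta-delta-Cq > 0 therefore
# suggests the sample's GM content is below that of the threshold-level control.
print(f"ddCq = {delta_delta_cq:.2f} ->",
      "below threshold" if delta_delta_cq > 0 else "at or above threshold")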
Jeong, Chang Wook; Jeong, Seong Jin; Hong, Sung Kyu; Lee, Seung Bae; Ku, Ja Hyeon; Byun, Seok-Soo; Jeong, Hyeon; Kwak, Cheol; Kim, Hyeon Hoe; Lee, Eunsik; Lee, Sang Eun
2012-09-01
To develop and evaluate nomograms to predict the pathological stage of clinically localized prostate cancer after radical prostatectomy in Korean men. We reviewed the medical records of 2041 patients who had clinical stage T1c-T3a prostate cancer and were treated solely with radical prostatectomy at two hospitals. Logistic regressions were carried out to predict organ-confined disease, extraprostatic extension, seminal vesicle invasion, and lymph node metastasis using preoperative variables, and nomograms were constructed from the results. Internal validation was assessed using the area under the receiver operating characteristic curve and calibration plots, and external validation was then carried out on 129 patients from another hospital. Head-to-head comparisons with the 2007 Partin tables and the Cancer of the Prostate Risk Assessment score were carried out using the area under the curve and decision curve analysis. The significant predictors for organ-confined disease and extraprostatic extension were clinical stage, prostate-specific antigen, Gleason score and the percentage of positive cores on biopsy. Significant predictors for seminal vesicle invasion were prostate-specific antigen, Gleason score and percent positive cores, and those for lymph node metastasis were prostate-specific antigen and percent positive cores. The areas under the curve of the established nomograms for organ-confined disease, extraprostatic extension, seminal vesicle invasion and lymph node metastasis were 0.809, 0.804, 0.889 and 0.838, respectively. The nomograms were well calibrated and externally validated. These nomograms showed significantly higher accuracies and net benefits than two Western tools in Korean men. This is the first study to have developed and fully validated nomograms to predict the pathological stage of prostate cancer in an Asian population. These nomograms might be more accurate and useful for Korean men than other predictive models developed using Western populations. © 2012 The Japanese Urological Association.
Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique
2016-01-01
High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems is dependent on the ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology to set up liquid class pipetting parameters for each solution was to split the process in three steps: (1) screening of predefined liquid class, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The run of appropriate pipetting scripts, data acquisition, and reports until the creation of a new liquid class in EVOware was fully automated. The calibration and confirmation of the robotic system was simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. PMID:26905719
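As a rough illustration of step (2) of the procedure above, the sketch below fits a linear calibration curve to gravimetric measurements and inverts it to correct the commanded volume; the volumes and function names are invented for illustration and are not EVOware parameters.

```python
# Minimal sketch of a gravimetric accuracy adjustment (hypothetical data;
# actual TECAN/EVOware liquid-class parameters are set differently).
import numpy as np

commanded = np.array([3.0, 10.0, 50.0, 100.0, 300.0, 900.0])   # target volumes, uL
measured  = np.array([2.6,  9.3, 48.1,  96.9, 292.4, 884.0])   # gravimetric results, uL

# Fit the calibration curve: measured = a * commanded + b
a, b = np.polyfit(commanded, measured, 1)

def corrected_command(target_ul: float) -> float:
    """Invert the calibration so the delivered volume hits the target."""
    return (target_ul - b) / a

for v in (10.0, 100.0, 900.0):
    print(f"target {v:6.1f} uL -> command {corrected_command(v):6.1f} uL")
```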
Bennett, B. N.; Martin, M. Z.; Leonard, D. N.; ...
2018-02-13
Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a SciAps Z-500, commercially available, that contains a class-1 solid-state laser with an output wavelength of 1532 nm, a laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys and data were collected from the samples’ surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument’s ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements related to subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.
NASA Astrophysics Data System (ADS)
Bennett, B. N.; Martin, M. Z.; Leonard, D. N.; Garlea, E.
2018-03-01
Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a SciAps Z-500, commercially available, that contains a class-1 solid-state laser with an output wavelength of 1532 nm, laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys and data were collected from the samples' surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument's ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements related to subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, B. N.; Martin, M. Z.; Leonard, D. N.
Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a SciAps Z-500, commercially available, that contains a class-1 solid-state laser with an output wavelength of 1532 nm, a laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys and data were collected from the samples’ surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument’s ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements related to subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.
Dose Calibration of the ISS-RAD Fast Neutron Detector
NASA Technical Reports Server (NTRS)
Zeitlin, C.
2015-01-01
The ISS-RAD instrument has been fabricated by Southwest Research Institute and delivered to NASA for flight to the ISS in late 2015 or early 2016. ISS-RAD is essentially two instruments that share a common interface to ISS. The two instruments are the Charged Particle Detector (CPD), which is very similar to the MSL-RAD detector on Mars, and the Fast Neutron Detector (FND), which is a boron-loaded plastic scintillator with readout optimized for the 0.5 to 10 MeV energy range. As the FND is completely new, it has been necessary to develop methodology to allow it to be used to measure the neutron dose and dose equivalent. This talk will focus on the methods developed and their implementation using calibration data obtained in quasi-monoenergetic (QMN) neutron fields at the PTB facility in Braunschweig, Germany. The QMN data allow us to determine an approximate response function, from which we estimate dose and dose equivalent contributions per detected neutron as a function of the pulse height. We refer to these as the "pSv per count" curves for dose equivalent and the "pGy per count" curves for dose. The FND is required to provide a dose equivalent measurement with an accuracy of ±10% of the known value in a calibrated AmBe field. Four variants of the analysis method were developed, corresponding to two different approximations of the pSv per count curve, and two different implementations, one for real-time analysis onboard ISS and one for ground analysis. We will show that the preferred method, when applied in either real-time or ground analysis, yields good accuracy for the AmBe field. We find that the real-time algorithm is more susceptible to chance-coincidence background than is the algorithm used in ground analysis, so that the best estimates will come from the latter.
Nieć, Dawid; Kunicki, Paweł K
2015-10-01
Measurements of plasma concentrations of free normetanephrine (NMN), metanephrine (MN) and methoxytyramine (MTY) constitute the most diagnostically accurate screening test for pheochromocytomas and paragangliomas. The aim of this article is to present the results from a validation of an analytical method utilizing high performance liquid chromatography with coulometric detection (HPLC-CD) for quantifying plasma free NMN, MN and MTY. Additionally, peak integration by height and by area, and the use of either one calibration curve for all batches or an individual calibration curve for each batch of samples, were explored to determine the optimal approach with regard to accuracy and precision. The method was validated using charcoal stripped plasma spiked with solutions of NMN, MN, MTY and internal standard (4-hydroxy-3-methoxybenzylamine), with the exception of selectivity, which was evaluated by analysis of real plasma samples. Calibration curve performance, accuracy, precision and recovery were determined following both peak-area and peak-height measurements and the obtained results were compared. The most accurate and precise method of calibration was evaluated by analyzing quality control samples at three concentration levels in 30 analytical runs. The detector response was linear over the entire tested concentration range from 10 to 2000 pg/mL with R(2) ≥ 0.9988. The LLOQ was 10 pg/mL for each analyte of interest. To improve accuracy for measurements at low concentrations, a weighted (1/amount) linear regression model was employed, which resulted in inaccuracies of -2.48 to 9.78% and 0.22 to 7.81% following peak-area and peak-height integration, respectively. The imprecisions ranged from 1.07 to 15.45% and from 0.70 to 11.65% for peak-area and peak-height measurements, respectively. The optimal approach to calibration was the one utilizing an individual calibration curve for each batch of samples and peak-height measurements. It was characterized by inaccuracies ranging from -3.39 to +3.27% and imprecisions from 2.17 to 13.57%. The established HPLC-CD method enables accurate and precise measurements of plasma free NMN, MN and MTY with reasonable selectivity. Preparing a calibration curve based on peak-height measurements for each batch of samples yields optimal accuracy and precision. Copyright © 2015. Published by Elsevier B.V.
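For illustration, a minimal sketch of a weighted (1/amount) linear calibration curve of the kind used above, with invented peak heights; back-calculation of an unknown follows by inverting the fitted line.

```python
# Sketch of a weighted (1/amount) linear calibration curve (illustrative data,
# not the paper's measurements).
import numpy as np

conc = np.array([10, 25, 50, 100, 250, 500, 1000, 2000], dtype=float)  # pg/mL
peak_height = np.array([0.9, 2.3, 4.8, 9.6, 24.5, 49.0, 99.2, 197.5])  # arbitrary units

w = 1.0 / conc                                   # weight each point by 1/amount
W = np.diag(w)
X = np.column_stack([np.ones_like(conc), conc])

# Weighted least squares: beta = (X^T W X)^-1 X^T W y
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ peak_height)
intercept, slope = beta

def back_calculate(height: float) -> float:
    """Concentration (pg/mL) read back from the calibration curve."""
    return (height - intercept) / slope

print(f"slope = {slope:.4f}, intercept = {intercept:.4f}")
print(f"QC sample, height 4.7 -> {back_calculate(4.7):.1f} pg/mL")
```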
Effect of Using Extreme Years in Hydrologic Model Calibration Performance
NASA Astrophysics Data System (ADS)
Goktas, R. K.; Tezel, U.; Kargi, P. G.; Ayvaz, T.; Tezyapar, I.; Mesta, B.; Kentel, E.
2017-12-01
Hydrological models are useful in predicting and developing management strategies for controlling the system behaviour. Specifically, they can be used for evaluating streamflow at ungaged catchments, the effects of climate change or best management practices on water resources, or for identifying pollution sources in a watershed. This study is a part of a TUBITAK project named "Development of a geographical information system based decision-making tool for water quality management of Ergene Watershed using pollutant fingerprints". Within the scope of this project, the water resources in the Ergene Watershed are studied first. Streamgages found in the basin are identified and daily streamflow measurements are obtained from the State Hydraulic Works of Turkey. Streamflow data are analysed using box-whisker plots, hydrographs and flow-duration curves, focusing on the identification of extreme periods, dry or wet. Then a hydrological model is developed for the Ergene Watershed using HEC-HMS in the Watershed Modeling System (WMS) environment. The model is calibrated for various time periods including dry and wet ones, and the performance of the calibration is evaluated using Nash-Sutcliffe Efficiency (NSE), correlation coefficient, percent bias (PBIAS) and root mean square error. It is observed that the calibration period affects the model performance, and the main purpose of the development of the hydrological model should guide the selection of the calibration period. Acknowledgement: This study is funded by The Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 115Y064.
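A minimal sketch of the calibration performance metrics named above (NSE, correlation coefficient, PBIAS and RMSE), computed here for short synthetic observed and simulated flow series rather than the Ergene data.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; positive values indicate overestimation with this sign convention."""
    return 100.0 * np.sum(np.asarray(sim) - np.asarray(obs)) / np.sum(obs)

def rmse(obs, sim):
    return float(np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2)))

obs = np.array([12.0, 15.0, 40.0, 80.0, 30.0, 18.0, 14.0])   # synthetic daily flows
sim = np.array([10.5, 16.0, 35.0, 90.0, 28.0, 19.5, 13.0])

print(f"NSE   = {nse(obs, sim):.3f}")
print(f"r     = {np.corrcoef(obs, sim)[0, 1]:.3f}")
print(f"PBIAS = {pbias(obs, sim):+.1f} %")
print(f"RMSE  = {rmse(obs, sim):.2f}")
```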
Can hydraulic-modelled rating curves reduce uncertainty in high flow data?
NASA Astrophysics Data System (ADS)
Westerberg, Ida; Lam, Norris; Lyon, Steve W.
2017-04-01
Flood risk assessments rely on accurate discharge data records. Establishing a reliable rating curve for calculating discharge from stage at a gauging station normally takes years of data collection efforts. Estimation of high flows is particularly difficult as high flows occur rarely and are often practically difficult to gauge. Hydraulically-modelled rating curves can be derived based on as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions. This means that a reliable rating curve can, potentially, be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. In this study we compared the uncertainty in discharge data that resulted from these two rating curve modelling approaches. We applied both methods to a Swedish catchment, accounting for uncertainties in the stage-discharge gauging and water-surface slope data for the hydraulic model and in the stage-discharge gauging data and rating-curve parameters for the traditional method. We focused our analyses on high-flow uncertainty and the factors that could reduce this uncertainty. In particular, we investigated which data uncertainties were most important, and at what flow conditions the gaugings should preferably be taken. First results show that the hydraulically-modelled rating curves were more sensitive to uncertainties in the calibration measurements of discharge than in water-surface slope. The uncertainty of the hydraulically-modelled rating curves was lowest within the range of the three calibration stage-discharge gaugings (i.e. between median and two-times median flow), whereas uncertainties were higher outside of this range. For instance, at the highest observed stage of the 24-year stage record, the 90% uncertainty band was -15% to +40% of the official rating curve. Additional gaugings at high flows (i.e. four to five times median flow) would likely substantially reduce those uncertainties. These first results show the potential of the hydraulically-modelled curves, particularly where the calibration gaugings are of high quality and cover a wide range of flow conditions.
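For context, a traditional rating curve is commonly fitted as a power law Q = a(h − h0)^b; the sketch below fits such a curve to a few synthetic gaugings with scipy (values are illustrative, not the Swedish catchment data).

```python
# Sketch of fitting a traditional power-law rating curve to stage-discharge gaugings.
import numpy as np
from scipy.optimize import curve_fit

def rating_curve(h, a, b, h0):
    return a * np.clip(h - h0, 1e-6, None) ** b

stage = np.array([0.45, 0.60, 0.85, 1.10, 1.60, 2.10])        # m (synthetic)
discharge = np.array([0.8, 1.9, 5.2, 10.5, 28.0, 55.0])       # m3/s (synthetic)

p0 = [10.0, 2.0, 0.2]                       # initial guess for a, b, h0
params, cov = curve_fit(rating_curve, stage, discharge, p0=p0, maxfev=10000)
a, b, h0 = params
print(f"Q = {a:.2f} * (h - {h0:.2f})^{b:.2f}")

# Extrapolating to a high stage illustrates where rating-curve uncertainty grows fastest.
print(f"Q at h = 3.0 m: {float(rating_curve(3.0, *params)):.1f} m3/s")
```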
Calibrant-Free Analyte Quantitation via a Variable Velocity Flow Cell.
Beck, Jason G; Skuratovsky, Aleksander; Granger, Michael C; Porter, Marc D
2017-01-17
In this paper, we describe a novel method for analyte quantitation that does not rely on calibrants, internal standards, or calibration curves but, rather, leverages the relationship between disparate and predictable surface-directed analyte flux to an array of sensing addresses and a measured resultant signal. To reduce this concept to practice, we fabricated two flow cells such that the mean linear fluid velocity, U, was varied systematically over an array of electrodes positioned along the flow axis. This resulted in a predictable variation of the address-directed flux of a redox analyte, ferrocenedimethanol (FDM). The resultant limiting currents measured at a series of these electrodes, and accurately described by a convective-diffusive transport model, provided a means to calculate an "unknown" concentration without the use of calibrants, internal standards, or a calibration curve. Furthermore, the experiment and concentration calculation only takes minutes to perform. Deviation in calculated FDM concentrations from true values was minimized to less than 0.5% when empirically derived values of U were employed.
Estimation of K sub Ic from slow bend precracked Charpy specimen strength ratios
NASA Technical Reports Server (NTRS)
Succop, G.; Brown, W. F., Jr.
1976-01-01
Strength ratios are reported which were derived from slow bend tests on 0.25 inch thick precracked Charpy specimens of steels, aluminum alloys, and a titanium alloy for which valid K sub Ic values were established. The strength ratios were used to develop calibration curves typical of those that could be useful in estimating K sub Ic for the purposes of alloy development or quality control.
Taverniers, Isabel; Van Bockstaele, Erik; De Loose, Marc
2004-03-01
Analytical real-time PCR technology is a powerful tool for implementation of the GMO labeling regulations enforced in the EU. The quality of analytical measurement data obtained by quantitative real-time PCR depends on the correct use of calibrator and reference materials (RMs). For GMO methods of analysis, the choice of appropriate RMs is currently under debate. So far, genomic DNA solutions from certified reference materials (CRMs) are most often used as calibrators for GMO quantification by means of real-time PCR. However, due to some intrinsic features of these CRMs, errors may be expected in the estimations of DNA sequence quantities. In this paper, two new real-time PCR methods are presented for Roundup Ready soybean, in which two types of plasmid DNA fragments are used as calibrators. Single-target plasmids (STPs) diluted in a background of genomic DNA were used in the first method. Multiple-target plasmids (MTPs) containing both sequences in one molecule were used as calibrators for the second method. Both methods simultaneously detect a promoter 35S sequence as GMO-specific target and a lectin gene sequence as endogenous reference target in a duplex PCR. For the estimation of relative GMO percentages both "delta C(T)" and "standard curve" approaches are tested. Delta C(T) methods are based on direct comparison of measured C(T) values of both the GMO-specific target and the endogenous target. Standard curve methods measure absolute amounts of target copies or haploid genome equivalents. A duplex delta C(T) method with STP calibrators performed at least as well as a similar method with genomic DNA calibrators from commercial CRMs. Besides this, high quality results were obtained with a standard curve method using MTP calibrators. This paper demonstrates that plasmid DNA molecules containing either one or multiple target sequences form perfect alternative calibrators for GMO quantification and are especially suitable for duplex PCR reactions.
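A hedged sketch of the two quantification approaches compared above, for a duplex assay with a 35S target and a lectin reference; all Ct values and the assumed amplification efficiency are invented.

```python
# Sketch of "delta Ct" and "standard curve" real-time PCR quantification.
import numpy as np

# --- delta-Ct approach: compare Ct of the GM target and the endogenous target ----
def delta_ct_percentage(ct_gm_sample, ct_ref_sample, ct_gm_cal, ct_ref_cal,
                        cal_gm_percent, efficiency=2.0):
    """Relative GM %, assuming equal amplification efficiency for both targets."""
    ddct = (ct_gm_sample - ct_ref_sample) - (ct_gm_cal - ct_ref_cal)
    return cal_gm_percent * efficiency ** (-ddct)

print(delta_ct_percentage(ct_gm_sample=31.2, ct_ref_sample=24.0,
                          ct_gm_cal=30.0, ct_ref_cal=24.1, cal_gm_percent=1.0))

# --- standard-curve approach: Ct vs. log10(copies) from calibrator dilutions -----
log_copies = np.array([5, 4, 3, 2], dtype=float)     # log10 target copies
ct_values  = np.array([22.1, 25.5, 28.9, 32.3])      # measured Ct (invented)
slope, intercept = np.polyfit(ct_values, log_copies, 1)

def copies_from_ct(ct):
    return 10 ** (slope * ct + intercept)

print(f"{copies_from_ct(27.0):.0f} copies")
```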
Aero-Thermal Calibration of the NASA Glenn Icing Research Tunnel (2012 Tests)
NASA Technical Reports Server (NTRS)
Pastor-Barsi, Christine; Allen, Arrington E.
2013-01-01
A full aero-thermal calibration of the NASA Glenn Icing Research Tunnel (IRT) was completed in 2012 following the major modifications to the facility that included replacement of the refrigeration plant and heat exchanger. The calibration test provided data used to fully document the aero-thermal flow quality in the IRT test section and to construct calibration curves for the operation of the IRT.
Water content determination of superdisintegrants by means of ATR-FTIR spectroscopy.
Szakonyi, G; Zelkó, R
2012-04-07
Water contents of superdisintegrant pharmaceutical excipients were determined by attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy using simple linear regression. Water contents of the investigated three common superdisintegrants (crospovidone, croscarmellose sodium, sodium starch glycolate) varied over a wide range (0-24%, w/w). In the case of crospovidone three different samples from two manufacturers were examined in order to study the effects of different grades on the calibration curves. Water content determinations were based on strong absorption of water between 3700 and 2800 cm⁻¹, other spectral changes associated with the different compaction of samples on the ATR crystal using the same pressure were followed by the infrared region between 1510 and 1050 cm⁻¹. The calibration curves were constructed using the ratio of absorbance intensities in the two investigated regions. Using appropriate baseline correction the linearity of the calibration curves was maintained over the entire investigated water content regions and the effect of particle size on the calibration was not significant in the case of crospovidones from the same manufacturer. The described method enables the water content determination of powdered hygroscopic materials containing homogeneously distributed water. Copyright © 2012 Elsevier B.V. All rights reserved.
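A minimal sketch of the band-ratio calibration idea described above: a simple intensity measure of the water region (3700-2800 cm⁻¹) is ratioed against the compaction-sensitive region (1510-1050 cm⁻¹) and regressed against known water contents. All numbers are invented and the paper's baseline-correction scheme is not reproduced.

```python
import numpy as np

def band_intensity(wavenumbers, absorbance, lo, hi):
    """Mean absorbance in a wavenumber band (a simple intensity measure)."""
    mask = (wavenumbers >= lo) & (wavenumbers <= hi)
    return float(np.mean(absorbance[mask]))

def water_ratio(wavenumbers, absorbance):
    water = band_intensity(wavenumbers, absorbance, 2800.0, 3700.0)
    ref   = band_intensity(wavenumbers, absorbance, 1050.0, 1510.0)
    return water / ref

# Calibration set: known water contents (% w/w) vs. measured band ratios (invented)
water_content = np.array([0.0, 4.0, 8.0, 12.0, 18.0, 24.0])
ratios        = np.array([0.05, 0.31, 0.58, 0.86, 1.29, 1.74])
slope, intercept = np.polyfit(ratios, water_content, 1)
print(f"water content (% w/w) = {slope:.2f} * ratio + {intercept:.2f}")
```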
Monitoring of toxic elements present in sludge of industrial waste using CF-LIBS.
Kumar, Rohit; Rai, Awadhesh K; Alamelu, Devanathan; Aggarwal, Suresh K
2013-01-01
Industrial waste is one of the main causes of environmental pollution. Laser-induced breakdown spectroscopy (LIBS) was applied to detect the toxic metals in the sludge of industrial waste water. Sludge on filter paper was obtained after filtering the collected waste water samples from different sections of a water treatment plant situated in an industrial area of Kanpur City. The LIBS spectra of the sludge samples were recorded in the spectral range of 200 to 500 nm by focusing the laser light on the sludge. The calibration-free laser-induced breakdown spectroscopy (CF-LIBS) technique was used for the quantitative measurement of toxic elements such as Cr and Pb present in the sample. We also used the traditional calibration curve approach to quantify these elements. The results obtained from CF-LIBS are in good agreement with the results from the calibration curve approach. Thus, our results demonstrate that CF-LIBS is an appropriate technique for quantitative analysis where reference/standard samples are not available to make the calibration curve. The results of the present experiment are alarming for people living in the areas surrounding these industrial activities, as the concentrations of toxic elements are considerably higher than the admissible limits of these substances.
NASA Astrophysics Data System (ADS)
Yehia, Ali M.; Mohamed, Heba M.
2016-01-01
Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be used directly without any preliminary separation step and were successfully applied to pharmaceutical formulation analysis, showing no interference from excipients.
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn; Steinsland, Ingelin
2016-04-01
The aim of this study is to investigate how the inclusion of uncertainties in inputs and observed streamflow influences parameter estimation, streamflow predictions and model evaluation. In particular we wanted to answer the following research questions: • What is the effect of including a random error in the precipitation and temperature inputs? • What is the effect of decreased information about precipitation by excluding the nearest precipitation station? • What is the effect of the uncertainty in streamflow observations? • What is the effect of reduced information about the true streamflow by using a rating curve where the measurement of the highest and lowest streamflow is excluded when estimating the rating curve? To answer these questions, we designed a set of calibration experiments and evaluation strategies. We used the elevation distributed HBV model operating on daily time steps combined with a Bayesian formulation and the MCMC routine Dream for parameter inference. The uncertainties in inputs were represented by creating ensembles of precipitation and temperature. The precipitation ensembles were created using a meta-Gaussian random field approach. The temperature ensembles were created using a 3D Bayesian kriging with random sampling of the temperature lapse rate. The streamflow ensembles were generated by a Bayesian multi-segment rating curve model. Precipitation and temperatures were randomly sampled for every day, whereas the streamflow ensembles were generated from rating curve ensembles, and the same rating curve was always used for the whole time series in a calibration or evaluation run. We chose a catchment with a meteorological station measuring precipitation and temperature, and a rating curve of relatively high quality. This allowed us to investigate and further test the effect of having less information on precipitation and streamflow during model calibration, predictions and evaluation. The results showed that including uncertainty in the precipitation and temperature input has a negligible effect on the posterior distribution of parameters and on the Nash-Sutcliffe (NS) efficiency for the predicted flows, while the reliability and the continuous rank probability score (CRPS) improve. Reduced information in precipitation input resulted in a shift in the water balance parameter Pcorr and a model producing smoother streamflow predictions, giving poorer NS and CRPS but higher reliability. The effect of calibrating the hydrological model using wrong rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions obtained using a wrong rating curve, the evaluation scores vary depending on the true rating curve. Generally, the best evaluation scores were not achieved for the rating curve used for calibration, but for rating curves giving low variance in streamflow observations. Reduced information in streamflow influenced the water balance parameter Pcorr, and increased the spread in evaluation scores, giving both better and worse scores. This case study shows that estimating the water balance is challenging since both precipitation inputs and streamflow observations have a pronounced systematic component in their uncertainties.
An extended CFD model to predict the pumping curve in low pressure plasma etch chamber
NASA Astrophysics Data System (ADS)
Zhou, Ning; Wu, Yuanhao; Han, Wenbin; Pan, Shaowu
2014-12-01
A continuum-based CFD model is extended with a slip wall approximation and a rarefaction effect on viscosity, in an attempt to predict the pumping flow characteristics in low pressure plasma etch chambers. The flow regime inside the chamber ranges from slip flow (Kn ~ 0.01) up to free molecular flow (Kn = 10). The momentum accommodation coefficient and parameters for the Kn-modified viscosity are first calibrated against one set of measured pumping curves. Then the validity of the calibrated CFD model is demonstrated in comparison with additional pumping curves measured in chambers of different geometry configurations. More detailed comparison against a DSMC model for flow conductance over slits with contraction and expansion sections is also discussed.
NASA Astrophysics Data System (ADS)
Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.
2017-11-01
Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and a better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly-correlated stochastic noise are more insidious and less attention is drawn to them, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors thanks to some averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and the range continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
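The effect at angular points can be reproduced with a toy piecewise-linear HU-to-RSP curve: averaging the RSP of HU values perturbed by zero-mean noise does not recover the noiseless RSP at the curve's kinks, which is the systematic error discussed above. The node values and noise level below are illustrative, not the paper's stoichiometric calibration.

```python
import numpy as np

hu_nodes  = np.array([-1000.0, 0.0, 100.0, 1600.0])   # hypothetical calibration nodes
rsp_nodes = np.array([0.001, 1.00, 1.07, 1.85])

def rsp(hu):
    """Piecewise-linear HU-to-RSP lookup (angular points at the nodes)."""
    return np.interp(hu, hu_nodes, rsp_nodes)

rng = np.random.default_rng(0)
sigma = 30.0                                   # HU noise standard deviation
for hu_true in (-500.0, 0.0, 100.0, 800.0):    # 0 and 100 HU sit on angular points
    noisy_mean = rsp(hu_true + rng.normal(0.0, sigma, 100000)).mean()
    bias = 100.0 * (noisy_mean - rsp(hu_true)) / rsp(hu_true)
    print(f"HU {hu_true:7.1f}: systematic RSP error {bias:+.2f} %")
```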
Forzley, Brian; Er, Lee; Chiu, Helen Hl; Djurdjev, Ognjenka; Martinusen, Dan; Carson, Rachel C; Hargrove, Gaylene; Levin, Adeera; Karim, Mohamud
2018-02-01
End-stage kidney disease is associated with poor prognosis. Health care professionals must be prepared to address end-of-life issues and identify those at high risk for dying. A 6-month mortality prediction model for patients on dialysis derived in the United States is used but has not been externally validated. We aimed to assess the external validity and clinical utility in an independent cohort in Canada. We examined the performance of the published 6-month mortality prediction model, using discrimination, calibration, and decision curve analyses. Data were derived from a cohort of 374 prevalent dialysis patients in two regions of British Columbia, Canada, which included serum albumin, age, peripheral vascular disease, dementia, and answers to the "the surprise question" ("Would I be surprised if this patient died within the next year?"). The observed mortality in the validation cohort was 11.5% at 6 months. The prediction model had reasonable discrimination (c-stat = 0.70) but poor calibration (calibration-in-the-large = -0.53 (95% confidence interval: -0.88, -0.18); calibration slope = 0.57 (95% confidence interval: 0.31, 0.83)) in our data. Decision curve analysis showed the model only has added value in guiding clinical decision in a small range of threshold probabilities: 8%-20%. Despite reasonable discrimination, the prediction model has poor calibration in this external study cohort; thus, it may have limited clinical utility in settings outside of where it was derived. Decision curve analysis clarifies limitations in clinical utility not apparent by receiver operating characteristic curve analysis. This study highlights the importance of external validation of prediction models prior to routine use in clinical practice.
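For reference, the net benefit used in decision curve analysis is NB = TP/n − (FP/n)·pt/(1 − pt) at threshold probability pt; the sketch below evaluates it for toy predictions rather than the study cohort.

```python
import numpy as np

def net_benefit(y_true, p_pred, threshold):
    """Net benefit of treating everyone with predicted risk >= threshold."""
    treat = p_pred >= threshold
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1)) / n
    fp = np.sum(treat & (y_true == 0)) / n
    return tp - fp * threshold / (1.0 - threshold)

rng = np.random.default_rng(1)
p_pred = rng.uniform(0.02, 0.60, 400)                  # model-predicted 6-month risk (toy)
y_true = rng.binomial(1, np.clip(0.6 * p_pred, 0, 1))  # simulated outcomes

for pt in (0.05, 0.08, 0.12, 0.20, 0.30):
    nb_model = net_benefit(y_true, p_pred, pt)
    nb_all = y_true.mean() - (1 - y_true.mean()) * pt / (1 - pt)   # treat-all strategy
    print(f"threshold {pt:.2f}: model {nb_model:+.4f}, treat-all {nb_all:+.4f}")
```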
Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Joshi, H; Saunderson, J R; Beavis, A W
2016-11-07
The use of three physical image quality metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm) have recently been examined by our group for their appropriateness in the calibration of an automatic exposure control (AEC) device for chest radiography with an Agfa computed radiography (CR) imaging system. This study uses the same methodology but investigates AEC calibration for abdomen, pelvis and spine CR imaging. AEC calibration curves were derived using a simple uniform phantom (equivalent to 20 cm water) to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated abdomen, pelvis and spine images (created from real patient CT datasets) with appropriate detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated images contained clinically realistic projected anatomy and were scored by experienced image evaluators. Constant DDI and CNR curves did not provide optimized performance but constant eNEQm and SNR did, with the latter being the preferred calibration metric given that it is easier to measure in practice. This result was consistent with the previous investigation for chest imaging with AEC devices. Medical physicists may therefore use a simple and easily accessible uniform water equivalent phantom to measure the SNR image quality metric described here when calibrating AEC devices for abdomen, pelvis and spine imaging with Agfa CR systems, in the confidence that clinical image quality will be sufficient for the required clinical task. However, to ensure appropriate levels of detector air kerma the advice of expert image evaluators must be sought.
NASA Astrophysics Data System (ADS)
Moore, C. S.; Wood, T. J.; Avery, G.; Balcam, S.; Needler, L.; Joshi, H.; Saunderson, J. R.; Beavis, A. W.
2016-11-01
The use of three physical image quality metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm) have recently been examined by our group for their appropriateness in the calibration of an automatic exposure control (AEC) device for chest radiography with an Agfa computed radiography (CR) imaging system. This study uses the same methodology but investigates AEC calibration for abdomen, pelvis and spine CR imaging. AEC calibration curves were derived using a simple uniform phantom (equivalent to 20 cm water) to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated abdomen, pelvis and spine images (created from real patient CT datasets) with appropriate detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated images contained clinically realistic projected anatomy and were scored by experienced image evaluators. Constant DDI and CNR curves did not provide optimized performance but constant eNEQm and SNR did, with the latter being the preferred calibration metric given that it is easier to measure in practice. This result was consistent with the previous investigation for chest imaging with AEC devices. Medical physicists may therefore use a simple and easily accessible uniform water equivalent phantom to measure the SNR image quality metric described here when calibrating AEC devices for abdomen, pelvis and spine imaging with Agfa CR systems, in the confidence that clinical image quality will be sufficient for the required clinical task. However, to ensure appropriate levels of detector air kerma the advice of expert image evaluators must be sought.
Videodensitometric Methods for Cardiac Output Measurements
NASA Astrophysics Data System (ADS)
Mischi, Massimo; Kalker, Ton; Korsten, Erik
2003-12-01
Cardiac output is often measured by indicator dilution techniques, usually based on dye or cold saline injections. Developments of more stable ultrasound contrast agents (UCA) are leading to new noninvasive indicator dilution methods. However, several problems concerning the interpretation of dilution curves as detected by ultrasound transducers have arisen. This paper presents a method for blood flow measurements based on UCA dilution. Dilution curves are determined by real-time densitometric analysis of the video output of an ultrasound scanner and are automatically fitted by the Local Density Random Walk model. A new fitting algorithm based on multiple linear regression is developed. Calibration, that is, the relation between videodensity and UCA concentration, is modelled by in vitro experimentation. The flow measurement system is validated by in vitro perfusion of SonoVue contrast agent. The results show an accurate dilution curve fit and flow estimation with determination coefficient larger than 0.95 and 0.99, respectively.
The TESS Science Processing Operations Center
NASA Technical Reports Server (NTRS)
Jenkins, Jon; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd;
2016-01-01
The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth’s closest cousins starting in late 2017. TESS will discover approx. 1,000 small planets and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NAS Pleiades supercomputer. The SPOC will search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes.
Nuclear Gauge Calibration and Testing Guidelines for Hawaii
DOT National Transportation Integrated Search
2006-12-15
Project proposal brief: AASHTO and ASTM nuclear gauge testing procedures can lead to misleading density and moisture readings for certain Hawaiian soils. Calibration curves need to be established for these unique materials, along with clear standard ...
A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object
NASA Astrophysics Data System (ADS)
Winkler, A. W.; Zagar, B. G.
2013-08-01
An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.
NASA Astrophysics Data System (ADS)
Hulsman, P.; Bogaard, T.; Savenije, H. H. G.
2016-12-01
In hydrology and water resources management, discharge is the main time series for model calibration. Rating curves are needed to derive discharge from continuously measured water levels. However, assuring their quality is demanding due to dynamic changes and problems in accurately deriving discharge at high flows. This is valid everywhere, but even more so in an African socio-economic context. To cope with these uncertainties, this study proposes to use water levels instead of discharge data for calibration. Uncertainties in rainfall measurements, especially their spatial heterogeneity, also need to be considered. In this study, the semi-distributed rainfall runoff model FLEX-Topo was applied to the Mara River Basin. In this model, seven sub-basins were distinguished, along with four hydrological response units, each with a unique model structure based on the expected dominant flow processes. Parameter and process constraints were applied to exclude unrealistic results. To calibrate the model, the water levels were back-calculated from modelled discharges, using cross-section data and the Strickler formula, calibrating the lumped parameter k·S^(1/2), and compared to measured water levels. The model simulated the water depths well for the entire basin and the Nyangores sub-basin in the north. However, the calibrated and observed rating curves differed significantly at the basin outlet, probably due to uncertainties in the measured discharge, but at Nyangores they were almost identical. To assess the effect of rainfall uncertainties on the hydrological model, the representative rainfall in each sub-basin was estimated with three different methods: 1) single station, 2) average precipitation, 3) areal sub-division using Thiessen polygons. All three methods gave on average similar results, but method 1 resulted in more flashy responses, method 2 dampened the water levels due to averaging the rainfall and method 3 was a combination of both. In conclusion, in the case of unreliable rating curves, water level data can be used instead and a new rating curve can be calibrated. The effect of rainfall uncertainties on the hydrological model was insignificant.
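A minimal sketch of the back-calculation step, assuming a rectangular cross-section and the Strickler (Manning) relation with the lumped calibration parameter k·S^(1/2); the channel width, discharge and parameter values are invented.

```python
import numpy as np

def discharge(depth, width, ks_sqrt_slope):
    """Q = k_s * sqrt(S) * A * R^(2/3) for a rectangular channel."""
    area = width * depth
    radius = area / (width + 2.0 * depth)         # hydraulic radius
    return ks_sqrt_slope * area * radius ** (2.0 / 3.0)

def depth_from_discharge(q, width, ks_sqrt_slope, lo=1e-3, hi=20.0):
    """Invert the relation by bisection (Q increases monotonically with depth)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if discharge(mid, width, ks_sqrt_slope) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

q_model = 35.0          # m3/s from the rainfall-runoff model (invented)
width = 25.0            # m, from a surveyed cross-section (invented)
ks_sqrt_slope = 1.2     # calibrated lumped parameter k_s * S^(1/2) (invented)
print(f"simulated water depth: {depth_from_discharge(q_model, width, ks_sqrt_slope):.2f} m")
```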
NASA Astrophysics Data System (ADS)
Yang, Fanlin; Zhao, Chunxia; Zhang, Kai; Feng, Chengkai; Ma, Yue
2017-07-01
Acoustic seafloor classification with multibeam backscatter measurements is an attractive approach for mapping seafloor properties over a large area. However, artifacts in the multibeam backscatter measurements prevent accurate characterization of the seafloor. In particular, the backscatter level is extremely strong and highly variable in the near-nadir region due to the specular echo phenomenon. Consequently, striped artifacts emerge in the backscatter image, which can degrade the classification accuracy. This study focuses on the striped artifacts in multibeam backscatter images. To this end, a calibration algorithm based on equal mean-variance fitting is developed. By fitting the local shape of the angular response curve, the striped artifacts are compressed and moved according to the relations between the mean and variance in the near-nadir and off-nadir regions. The algorithm utilizes the measured data of the near-nadir region and retains the basic shape of the response curve. The experimental results verify the high performance of the proposed method.
Minimizing thermal degradation in gas chromatographic quantitation of pentaerythritol tetranitrate.
Lubrano, Adam L; Field, Christopher R; Newsome, G Asher; Rogers, Duane A; Giordano, Braden C; Johnson, Kevin J
2015-05-15
An analytical method for establishing calibration curves for the quantitation of pentaerythritol tetranitrate (PETN) from sorbent-filled thermal desorption tubes by gas chromatography with electron capture detection (TDS-GC-ECD) was developed. As PETN has been demonstrated to thermally degrade under typical GC instrument conditions, peaks corresponding to both PETN degradants and molecular PETN are observed. The retention time corresponding to intact PETN was verified by high-resolution mass spectrometry with a flowing atmospheric pressure afterglow (FAPA) ionization source, which enabled soft ionization of intact PETN eluting from the GC and subsequent accurate-mass identification. The GC separation parameters were transferred to a conventional GC-ECD instrument where analytical method-induced PETN degradation was further characterized and minimized. A method calibration curve was established by direct liquid deposition of PETN standard solutions onto the glass frit at the head of sorbent-filled thermal desorption tubes. Two local, linear relationships between detector response and PETN concentration were observed, with a total dynamic range of 0.25-25 ng. Published by Elsevier B.V.
Stringano, Elisabetta; Gea, An; Salminen, Juha-Pekka; Mueller-Harvey, Irene
2011-10-28
This study was undertaken to explore gel permeation chromatography (GPC) for estimating molecular weights of proanthocyanidin fractions isolated from sainfoin (Onobrychis viciifolia). The results were compared with data obtained by thiolytic degradation of the same fractions. Polystyrene, polyethylene glycol and polymethyl methacrylate standards were not suitable for estimating the molecular weights of underivatized proanthocyanidins. Therefore, a novel HPLC-GPC method was developed based on two serially connected PolarGel-L columns using DMF that contained 5% water, 1% acetic acid and 0.15 M LiBr at 0.7 ml/min and 50 °C. This yielded a single calibration curve for galloyl glucoses (trigalloyl glucose, pentagalloyl glucose), ellagitannins (pedunculagin, vescalagin, punicalagin, oenothein B, gemin A), proanthocyanidins (procyanidin B2, cinnamtannin B1), and several other polyphenols (catechin, epicatechin gallate, epigallocatechin gallate, amentoflavone). These GPC-predicted molecular weights represented a considerable advance over previously reported HPLC-GPC methods for underivatized proanthocyanidins. Copyright © 2011 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrison, H; Menon, G; Sloboda, R
The purpose of this study was to investigate the accuracy of radiochromic film calibration procedures used in external beam radiotherapy when applied to I-125 brachytherapy sources delivering higher doses, and to determine any necessary modifications to achieve similar accuracy in absolute dose measurements. GafChromic EBT3 film was used to measure radiation doses upwards of 35 Gy from 6 MV, 75 kVp and I-125 (∼28 keV) photon sources. A custom phantom was used for the I-125 irradiations to obtain a larger film area with nearly constant dose to reduce the effects of film heterogeneities on the optical density (OD) measurements. RGB transmission images were obtained with an Epson 10000XL flatbed scanner, and calibration curves relating OD and dose using a rational function were determined for each colour channel and at each energy using a non-linear least-squares minimization method. Differences found between the 6 MV calibration curve and those for the lower energy sources are large enough that 6 MV beams should not be used to calibrate film for low-energy sources. However, differences between the 75 kVp and I-125 calibration curves were quite small, indicating that 75 kVp is a good choice. Compared with I-125 irradiation, this gives the advantages of lower type B uncertainties and markedly reduced irradiation time. To obtain high accuracy calibration for the dose range up to 35 Gy, two-segment piece-wise fitting was required. This yielded absolute dose measurement accuracy above 1 Gy of ∼2% for 75 kVp and ∼5% for I-125 seed exposures.
NASA Astrophysics Data System (ADS)
Bilardi, S.; Barjatya, A.; Gasdia, F.
OSCOM, Optical tracking and Spectral characterization of CubeSats for Operational Missions, is a system capable of providing time-resolved satellite photometry using commercial-off-the-shelf (COTS) hardware and custom tracking and analysis software. This system has acquired photometry of objects as small as CubeSats using a Celestron 11” RASA and an inexpensive CMOS machine vision camera. For satellites with known shapes, these light curves can be used to verify a satellite’s attitude and the state of its deployed solar panels or antennae. While the OSCOM system can successfully track satellites and produce light curves, there is ongoing improvement towards increasing its automation while supporting additional mounts and telescopes. A newly acquired Celestron 14” Edge HD can be used with a Starizona Hyperstar to increase the SNR for small objects as well as extend beyond the limiting magnitude of the 11” RASA. OSCOM currently corrects instrumental brightness measurements for satellite range and observatory site average atmospheric extinction, but calibrated absolute brightness is required to determine information about satellites other than their spin rate, such as surface albedo. A calibration method that automatically detects and identifies background stars can use their catalog magnitudes to calibrate the brightness of the satellite in the image. We present a photometric light curve from both the 14” Edge HD and 11” RASA optical systems as well as plans for a calibration method that will perform background star photometry to efficiently determine calibrated satellite brightness in each frame.
THE USE OF QUENCHING IN A LIQUID SCINTILLATION COUNTER FOR QUANTITATIVE ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, G.V.
1963-01-01
Quenching was used to quantitatively determine the amount of quenching agent present. A sealed promethium-147 source was prepared to be used for the count rate determinations. Two methods to determine the amount of quenching agent present in a sample were developed. One method related the count rate of a sample containing a quenching agent to the amount of quenching agent present. Calibration curves were plotted using both color and chemical quenchers. The quenching agents used were: F.D.C. Orange No. 2, F.D.C. Yellow No. 3, F.D.C. Yellow No. 4, Scarlet Red, acetone, benzaldehyde, and carbon tetrachloride. The color quenchers gave a linear relationship, while the chemical quenchers gave a non-linear relationship. Quantities of the color quenchers between about 0.008 mg and 0.100 mg can be determined with an error less than 5%. The calibration curves were found to be usable over a long period of time. The other method related the change in the ratio of the count rates in two voltage windows to the amount of quenching agent present. The quenchers mentioned above were used. Calibration curves were plotted for both the color and chemical quenchers. The relationships of ratio versus amount of quencher were non-linear in each case. It was shown that the reproducibility of the count rate and the ratio was independent of the amount of quencher present but was dependent on the count rate. At count rates above 10,000 counts per minute the reproducibility was better than 1%.
He, Jia-yao; Peng, Rong-fei; Zhang, Zhan-xia
2002-02-01
A self-constructed visible spectrophotometer using an acousto-optic tunable filter (AOTF) as the dispersing element is described. Two different AOTFs (one from The Institute for Silicate (Shanghai, China) and the other from Brimrose (USA)) are tested. The software, written in Visual C++ and operated on a Windows 98 platform, is an application program with a dual database and multiple windows. Four independent windows, namely scanning, quantitative, calibration and result, are incorporated. The Fourier self-deconvolution algorithm is also incorporated to improve the spectral resolution. The wavelengths are calibrated using the polynomial curve fitting method. The spectra and calibration curves of soluble aniline blue and phenol red are presented to show the feasibility of the constructed spectrophotometer.
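A hedged sketch of a polynomial wavelength-calibration step of the kind mentioned above: the AOTF drive frequency is related to the wavelength of known reference lines by a fitted polynomial. The frequency-wavelength pairs below are hypothetical, not the instrument's values.

```python
import numpy as np

drive_mhz  = np.array([62.0, 74.5, 90.3, 112.8, 140.1])    # hypothetical RF drive frequencies
wavelength = np.array([700.0, 620.0, 546.1, 486.1, 435.8]) # nm, reference line wavelengths

coeffs = np.polyfit(drive_mhz, wavelength, 3)   # cubic calibration polynomial
calibrate = np.poly1d(coeffs)

print(f"wavelength at 100 MHz drive: {calibrate(100.0):.1f} nm")
```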
X-Ray Fluorescence Determination of the Surface Density of Chromium Nanolayers
NASA Astrophysics Data System (ADS)
Mashin, N. I.; Chernjaeva, E. A.; Tumanova, A. N.; Ershov, A. A.
2014-01-01
An auxiliary system consisting of thin-film layers of chromium deposited on a polymer film substrate is used to construct calibration curves for the relative intensities of the K α lines of chromium on bulk substrates of different elements as functions of the chromium surface density in the reference samples. Correction coefficients are calculated to take into account the absorption of primary radiation from an x-ray tube and analytical lines of the constituent elements of the substrate. A method is developed for determining the surface density of thin films of chromium when test and calibration samples are deposited on substrates of different materials.
NASA Technical Reports Server (NTRS)
Robertson, G.
1982-01-01
Calibration was performed on the shuttle upper atmosphere mass spectrometer (SUMS). The results of the calibration and the as-run test procedures are presented. The output data is described, and engineering data conversion factors, tables and curves, and calibration of instrument gauges are included. Static calibration results are given, which include: instrument sensitivity versus external pressure for N2 and O2; data from each scan of calibration; data plots for N2 and O2; sensitivity of SUMS at the inlet for N2 and O2; and ratios of 14/28 for nitrogen and 16/32 for oxygen.
Crispin, Alexander; Strahwald, Brigitte; Cheney, Catherine; Mansmann, Ulrich
2018-06-04
Quality control, benchmarking, and pay for performance (P4P) require valid indicators and statistical models allowing adjustment for differences in risk profiles of the patient populations of the respective institutions. Using hospital remuneration data for measuring quality and modelling patient risks has been criticized by clinicians. Here we explore the potential of prediction models for 30- and 90-day mortality after colorectal cancer surgery based on routine data. Full census of a major statutory health insurer. Surgical departments throughout the Federal Republic of Germany. 4283 and 4124 insurants with major surgery for treatment of colorectal cancer during 2013 and 2014, respectively. Age, sex, primary and secondary diagnoses as well as tumor locations as recorded in the hospital remuneration data according to §301 SGB V. 30- and 90-day mortality. Elixhauser comorbidities, Charlson conditions, and Charlson scores were generated from the ICD-10 diagnoses. Multivariable prediction models were developed using a penalized logistic regression approach (logistic ridge regression) in a derivation set (patients treated in 2013). Calibration and discrimination of the models were assessed in an internal validation sample (patients treated in 2014) using calibration curves, Brier scores, receiver operating characteristic curves (ROC curves) and the areas under the ROC curves (AUC). 30- and 90-day mortality rates in the learning-sample were 5.7 and 8.4%, respectively. The corresponding values in the validation sample were 5.9% and once more 8.4%. Models based on Elixhauser comorbidities exhibited the highest discriminatory power with AUC values of 0.804 (95% CI: 0.776 -0.832) and 0.805 (95% CI: 0.782-0.828) for 30- and 90-day mortality. The Brier scores for these models were 0.050 (95% CI: 0.044-0.056) and 0.067 (95% CI: 0.060-0.074) and similar to the models based on Charlson conditions. Regardless of the model, low predicted probabilities were well calibrated, while higher predicted values tended to be overestimates. The reasonable results regarding discrimination and calibration notwithstanding, models based on hospital remuneration data may not be helpful for P4P. Routine data do not offer information regarding a wide range of quality indicators more useful than mortality. As an alternative, models based on clinical registries may allow a wider, more valid perspective. © Georg Thieme Verlag KG Stuttgart · New York.
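A minimal sketch of the modelling workflow described above (penalized logistic regression on a derivation year, then AUC and Brier score on a validation year), using randomly generated stand-in data, so the toy scores will sit near chance level rather than the reported values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)
X_2013 = rng.normal(size=(4283, 40))          # stand-in features (e.g. age, sex, comorbidity flags)
y_2013 = rng.binomial(1, 0.06, size=4283)     # 30-day mortality, derivation set
X_2014 = rng.normal(size=(4124, 40))
y_2014 = rng.binomial(1, 0.06, size=4124)     # internal validation sample

# Ridge-penalized (L2) logistic regression fitted on the derivation set
model = LogisticRegression(penalty="l2", C=0.1, solver="lbfgs", max_iter=1000)
model.fit(X_2013, y_2013)

# Discrimination and overall accuracy of probabilities on the validation set
p_valid = model.predict_proba(X_2014)[:, 1]
print("AUC  :", round(roc_auc_score(y_2014, p_valid), 3))
print("Brier:", round(brier_score_loss(y_2014, p_valid), 3))
```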
Dietrich, Markus; Hagen, Gunter; Reitmeier, Willibald; Burger, Katharina; Hien, Markus; Grass, Philippe; Kubinski, David; Visser, Jaco; Moos, Ralf
2017-11-28
Current developments in exhaust gas aftertreatment have led to considerable mistrust in diesel-driven passenger cars due to their excessively high NOx emissions. The selective catalytic reduction (SCR) with ammonia (NH₃) as reducing agent is the only approach today with the capability to meet upcoming emission limits. Therefore, the radio-frequency-based (RF) catalyst state determination to monitor the NH₃ loading on SCR catalysts has a huge potential in emission reduction. Recent work on this topic proved the basic capability of this technique under realistic conditions on an engine test bench. In these studies, an RF system calibration for the serial-type SCR catalyst Cu-SSZ-13 was developed and different approaches for a temperature-dependent NH₃ storage were determined. This paper continues this work and uses a fully calibrated RF-SCR system under transient conditions to compare different directly measured and controlled NH₃ storage levels, and NH₃ target curves. It could be clearly demonstrated that the right NH₃ target curve, together with a direct control on the desired level by the RF system, is able to operate the SCR system with the maximum possible NOx conversion efficiency and without NH₃ slip.
Bianchi, Lorenzo; Schiavina, Riccardo; Borghesi, Marco; Bianchi, Federico Mineo; Briganti, Alberto; Carini, Marco; Terrone, Carlo; Mottrie, Alex; Gacci, Mauro; Gontero, Paolo; Imbimbo, Ciro; Marchioro, Giansilvio; Milanese, Giulio; Mirone, Vincenzo; Montorsi, Francesco; Morgia, Giuseppe; Novara, Giacomo; Porreca, Angelo; Volpe, Alessandro; Brunocilla, Eugenio
2018-04-06
To assess the predictive accuracy and the clinical value of a recent nomogram predicting cancer-specific mortality-free survival after surgery in pN1 prostate cancer patients through an external validation. We evaluated 518 prostate cancer patients treated with radical prostatectomy and pelvic lymph node dissection with evidence of nodal metastases at final pathology, at 10 tertiary centers. External validation was carried out using regression coefficients of the previously published nomogram. The performance characteristics of the model were assessed by quantifying predictive accuracy, according to the area under the curve in the receiver operating characteristic curve and model calibration. Furthermore, we systematically analyzed the specificity, sensitivity, positive predictive value and negative predictive value for each nomogram-derived probability cut-off. Finally, we implemented decision curve analysis, in order to quantify the nomogram's clinical value in routine practice. External validation showed inferior predictive accuracy compared with the internal validation (65.8% vs 83.3%). The discrimination (area under the curve) of the multivariable model was 66.7% (95% CI 60.1-73.0%) by testing with receiver operating characteristic curve analysis. The calibration plot showed an overestimation throughout the range of predicted cancer-specific mortality-free survival probabilities. However, in decision curve analysis, the nomogram's use showed a net benefit when compared with the scenarios of treating all patients or none. In an external setting, the nomogram showed inferior predictive accuracy and suboptimal calibration characteristics as compared to those reported in the original population. However, decision curve analysis showed a clinical net benefit, suggesting a clinical implication to correctly manage pN1 prostate cancer patients after surgery. © 2018 The Japanese Urological Association.
Developing an Abaqus *HYPERFOAM Model for M9747 (4003047) Cellular Silicone Foam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siranosian, Antranik A.; Stevens, R. Robert
This report documents work done to develop an Abaqus *HYPERFOAM hyperelastic model for M9747 (4003047) cellular silicone foam for use in quasi-static analyses at ambient temperature. Experimental data from acceptance tests for 'Pad A', conducted at the Kansas City Plant (KCP), were used to calibrate the model. The data include gap (relative displacement) and load measurements from three locations on the pad. Thirteen sets of data, from pads with different serial numbers, were provided. The thirty-nine gap-load curves were extracted from the thirteen supplied Excel spreadsheets and analyzed, and from those thirty-nine, one set of data, representing a qualitative mean, was chosen to calibrate the model. The data were converted from gap and load to nominal (engineering) strain and nominal stress in order to implement them in Abaqus. Strain computations required initial pad thickness estimates. An Abaqus model of a right-circular cylinder was used to evaluate and calibrate the *HYPERFOAM model.
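The gap-load to nominal strain-stress conversion mentioned above is straightforward to sketch. The pad thickness, load-bearing area, and gap-load pairs below are hypothetical placeholders, not values from the report.

```python
import numpy as np

# Hypothetical gap (mm) and load (N) pairs from one acceptance-test curve.
gap_mm = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
load_N = np.array([0.0, 15.0, 42.0, 90.0, 170.0])

pad_thickness_mm = 3.2    # assumed initial pad thickness
loaded_area_mm2 = 645.0   # assumed load-bearing area of the pad

# Nominal (engineering) strain and stress, as required by the Abaqus *HYPERFOAM input.
nominal_strain = gap_mm / pad_thickness_mm   # dimensionless, compressive
nominal_stress = load_N / loaded_area_mm2    # MPa (N/mm^2)

for e, s in zip(nominal_strain, nominal_stress):
    print(f"strain = {e:.3f}   stress = {s:.4f} MPa")
```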
Billard, Hélène; Simon, Laure; Desnots, Emmanuelle; Sochard, Agnès; Boscher, Cécile; Riaublanc, Alain; Alexandre-Gouabau, Marie-Cécile; Boquien, Clair-Yves
2016-08-01
Human milk composition analysis seems essential to adapt human milk fortification for preterm neonates. The Miris human milk analyzer (HMA), based on mid-infrared methodology, is convenient for a single determination of macronutrients. However, HMA measurements are not fully comparable with reference methods (RMs). The primary aim of this study was to compare HMA results with results from biochemical RMs over a large range of protein, fat, and carbohydrate contents and to establish a calibration adjustment. Human milk was fractionated into protein, fat, and skim milk fractions covering large ranges of protein (0-3 g/100 mL), fat (0-8 g/100 mL), and carbohydrate (5-8 g/100 mL). For each macronutrient, a calibration curve was plotted by linear regression using measurements obtained with the HMA and RMs. For fat, 53 measurements were performed, and the linear regression equation was HMA = 0.79RM + 0.28 (R² = 0.92). For true protein (29 measurements), the linear regression equation was HMA = 0.9RM + 0.23 (R² = 0.98). For carbohydrate (15 measurements), the linear regression equation was HMA = 0.59RM + 1.86 (R² = 0.95). A homogenization step with a disruptor coupled to a sonication step was necessary to obtain better accuracy of the measurements. Good repeatability (coefficient of variation < 7%) and reproducibility (coefficient of variation < 17%) were obtained after calibration adjustment. New calibration curves were developed for the Miris HMA, allowing accurate measurements over large ranges of macronutrient content. This is necessary for reliable use of this device in individualizing nutrition for preterm newborns. © The Author(s) 2015.
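A calibration-adjustment step of the kind described can be sketched by inverting the reported regression equations (HMA = slope·RM + intercept) to map a raw HMA reading back onto the reference-method scale. The example readings are invented; only the slopes and intercepts quoted in the abstract are used.

```python
# Calibration-adjustment sketch using the linear regressions quoted above
# (HMA = slope * RM + intercept). Inverting gives RM_est = (HMA - intercept) / slope.
CALIBRATION = {
    "fat":          (0.79, 0.28),   # slope, intercept (g/100 mL)
    "true_protein": (0.90, 0.23),
    "carbohydrate": (0.59, 1.86),
}

def adjust(macronutrient: str, hma_reading: float) -> float:
    """Convert a raw Miris HMA reading to an estimate on the reference-method scale."""
    slope, intercept = CALIBRATION[macronutrient]
    return (hma_reading - intercept) / slope

# Hypothetical raw readings for one milk sample (g/100 mL).
for nutrient, raw in [("fat", 3.1), ("true_protein", 1.4), ("carbohydrate", 6.8)]:
    print(f"{nutrient:13s}: HMA {raw:.2f} -> adjusted {adjust(nutrient, raw):.2f} g/100 mL")
```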
GIADA: extended calibration activities before the comet encounter
NASA Astrophysics Data System (ADS)
Accolla, Mario; Sordini, Roberto; Della Corte, Vincenzo; Ferrari, Marco; Rotundi, Alessandra
2014-05-01
The Grain Impact Analyzer and Dust Accumulator - GIADA - is one of the payloads on board the Rosetta Orbiter. Its three detection sub-systems are able to measure the speed, the momentum, the mass, the optical cross section of single cometary grains and the dust flux ejected by the periodic comet 67P/Churyumov-Gerasimenko. During the hibernation phase of the Rosetta mission, we performed a dedicated extended calibration activity on the GIADA Proto Flight Model (accommodated in a clean room in our laboratory) involving two of the three sub-systems constituting GIADA, i.e. the Grain Detection System (GDS) and the Impact Sensor (IS). Our aim is to obtain a new set of response curves for these two sub-systems and to correlate them with the calibration curves obtained in 2002 for the GIADA payload onboard the Rosetta spacecraft, in order to improve the interpretation of the forthcoming scientific data. For the extended calibration we dropped or shot into the GIADA PFM a statistically relevant number of grains (about one hundred) acting as cometary dust analogues. We studied the response of the GDS and IS as a function of grain composition, size and velocity. Different terrestrial materials were selected as cometary analogues according to the more recent knowledge gained through the analyses of Interplanetary Dust Particles and cometary samples returned from comet 81P/Wild 2 (Stardust mission). For each material, we produced grains with diameters ranging from 20 to 500 μm, which were characterized by FESEM and micro-IR spectroscopy. The grains were then shot into the GIADA PFM with speeds ranging between 1 and 100 m s⁻¹. According to the estimation reported in Fink & Rubin (2012), this range is representative of the dust particle velocity expected in the comet scenario and lies within the GIADA velocity sensitivity (i.e. 1-100 m s⁻¹ for GDS and 1-300 m s⁻¹ for GDS+IS). The response curves obtained using the data collected during the GIADA PFM extended calibration will be linked to the on-ground calibration data collected during the instrument qualification campaign (performed on both the Flight and Spare Models, in 2002). The final aim is to rescale the extended calibration data obtained with the GIADA PFM to the GIADA presently onboard the Rosetta spacecraft. In this work we present the experimental procedures and the setup used for the calibration activities, particularly focusing on the new response curves of the GDS and IS sub-systems obtained for the different cometary dust analogues. These curves will be critical for the future interpretation of scientific data. Fink, U. & Rubin, M. (2012), The calculation of Afρ and mass loss rate for comets, Icarus, Volume 221, issue 2, p. 721-734.
Transducer Workshop (17th) Held in San Diego, California on June 22-24, 1993
1993-06-01
weight in a drop tower, such as the primer tester shown in figure 1. The calibration procedure must be repeated for each lot of copper inserts, and small...force vs. time curve (i.e. impulse = area under the curve). The FPyF can be used in the primer tester (shown in figure 1) as well as in a weapon...microphones. Pistonphone output 124 dB, 250 Hz. DEAD WEIGHT TESTER USED AS A PRESSURE RELEASE CALIBRATOR. The dead weight tester is designed and most
Aero-Thermal Calibration of the NASA Glenn Icing Research Tunnel (2004 and 2005 Tests)
NASA Technical Reports Server (NTRS)
Arrington, E. Allen; Pastor, Christine M.; Gonsalez, Jose C.; Curry, Monroe R., III
2010-01-01
A full aero-thermal calibration of the NASA Glenn Icing Research Tunnel was completed in 2004 following the replacement of the inlet guide vanes upstream of the tunnel drive system and improvement to the facility total temperature instrumentation. This calibration test provided data used to fully document the aero-thermal flow quality in the IRT test section and to construct calibration curves for the operation of the IRT. The 2004 test was also the first to use the 2-D RTD array, an improved total temperature calibration measurement platform.
NASA Astrophysics Data System (ADS)
Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.
2015-11-01
An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS at f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.
Suhr, Anna Catharina; Vogeser, Michael; Grimm, Stefanie H
2016-05-30
For reliable quantitative analysis of endogenous analytes in complex biological samples by isotope dilution LC-MS/MS, the creation of appropriate calibrators is a challenge, since analyte-free authentic material is in general not available. Thus, surrogate matrices are often used to prepare calibrators and controls. However, currently employed validation protocols do not include specific experiments to verify the suitability of a surrogate matrix calibration for quantification of authentic matrix samples. The aim of the study was the development of a novel validation experiment to test whether surrogate matrix based calibrators enable correct quantification of authentic matrix samples. The key element of the novel validation experiment is the inversion of nonlabelled analytes and their stable isotope labelled (SIL) counterparts with respect to their functions, i.e. the SIL compound is the analyte and the nonlabelled substance is employed as internal standard. As a consequence, both surrogate and authentic matrix are analyte-free with regard to the SIL analytes, which allows a comparison of both matrices. We called this approach the Isotope Inversion Experiment. As a figure of merit we defined the accuracy of inverse quality controls in authentic matrix quantified by means of a surrogate matrix calibration curve. As a proof-of-concept application, an LC-MS/MS assay addressing six corticosteroids (cortisol, cortisone, corticosterone, 11-deoxycortisol, 11-deoxycorticosterone, and 17-OH-progesterone) was chosen. The integration of the Isotope Inversion Experiment in the validation protocol for the steroid assay was successfully realized. The accuracy results of the inverse quality controls were satisfactory overall. As a consequence, the suitability of a surrogate matrix calibration for quantification of the targeted steroids in human serum as authentic matrix could be successfully demonstrated. The Isotope Inversion Experiment fills a gap in the validation process for LC-MS/MS assays quantifying endogenous analytes. We consider it a valuable and convenient tool to evaluate the correct quantification of authentic matrix samples based on a calibration curve in surrogate matrix. Copyright © 2016 Elsevier B.V. All rights reserved.
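The inverted calibration idea can be sketched numerically: the SIL compound is treated as the analyte, the nonlabelled compound as the internal standard, the calibration curve is built in surrogate matrix, and inverse QCs prepared in authentic matrix are checked for accuracy. All concentrations and area ratios below are invented for illustration.

```python
import numpy as np

# Calibrators prepared in SURROGATE matrix: known SIL-cortisol concentrations (ng/mL)
# and measured peak-area ratios (SIL "analyte" / nonlabelled internal standard).
cal_conc = np.array([5, 10, 25, 50, 100, 200], dtype=float)
cal_ratio = np.array([0.052, 0.101, 0.248, 0.51, 0.99, 2.02])

slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)   # linear calibration curve

def quantify(ratio):
    return (ratio - intercept) / slope

# Inverse QCs prepared in AUTHENTIC matrix (pooled human serum), nominal SIL levels known.
qc_nominal = np.array([15.0, 75.0, 150.0])
qc_ratio = np.array([0.155, 0.745, 1.52])
qc_measured = quantify(qc_ratio)
accuracy = 100.0 * qc_measured / qc_nominal

for nom, acc in zip(qc_nominal, accuracy):
    print(f"inverse QC {nom:6.1f} ng/mL -> accuracy {acc:5.1f} %")
```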
NASA Astrophysics Data System (ADS)
Liu, Yi Jun; Mandelis, Andreas; Guo, Xinxin
2015-11-01
In this work, laser-based wavelength-modulated differential photothermal radiometry (WM-DPTR) is applied to develop a non-invasive in-vehicle alcohol biosensor. WM-DPTR features unprecedented ethanol-specificity and sensitivity by suppressing baseline variations through a differential measurement near the peak and baseline of the mid-infrared ethanol absorption spectrum. Biosensor signal calibration curves are obtained from WM-DPTR theory and from measurements in human blood serum and ethanol solutions diffused from skin. The results demonstrate that the WM-DPTR-based calibrated alcohol biosensor can achieve high precision and accuracy for the ethanol concentration range of 0-100 mg/dl. The high-performance alcohol biosensor can be incorporated into ignition interlocks that could be fitted as a universal accessory in vehicles in an effort to reduce incidents of drinking and driving.
Yehia, Ali M; Mohamed, Heba M
2016-01-05
Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods can be used directly without any preliminary separation step and were successfully applied to pharmaceutical formulation analysis, showing no excipient interference. Copyright © 2015 Elsevier B.V. All rights reserved.
Liu, Yi Jun; Mandelis, Andreas; Guo, Xinxin
2015-11-01
In this work, laser-based wavelength-modulated differential photothermal radiometry (WM-DPTR) is applied to develop a non-invasive in-vehicle alcohol biosensor. WM-DPTR features unprecedented ethanol-specificity and sensitivity by suppressing baseline variations through a differential measurement near the peak and baseline of the mid-infrared ethanol absorption spectrum. Biosensor signal calibration curves are obtained from WM-DPTR theory and from measurements in human blood serum and ethanol solutions diffused from skin. The results demonstrate that the WM-DPTR-based calibrated alcohol biosensor can achieve high precision and accuracy for the ethanol concentration range of 0-100 mg/dl. The high-performance alcohol biosensor can be incorporated into ignition interlocks that could be fitted as a universal accessory in vehicles in an effort to reduce incidents of drinking and driving.
Energy calibration of the fly's eye detector
NASA Technical Reports Server (NTRS)
Baltrusaitis, R. M.; Cassiday, G. L.; Cooper, R.; Elbert, J. W.; Gerhardy, P. R.; Ko, S.; Loh, E. C.; Mizumoto, Y.; Sokolsky, P.; Steck, D.
1985-01-01
The methods used to calibrate the Fly's Eye detector to evaluate the energy of EAS are discussed. The energies of extensive air showers (EAS) as seen by the Fly's Eye detector are obtained from track length integrals of observed shower development curves. The energy of the parent cosmic ray primary is estimated by applying corrections to account for undetected energy in the muon, neutrino and hadronic channels. Absolute values for E depend upon the measurement of shower sizes N_e(x). The following items are necessary to convert apparent optical brightness into intrinsic optical brightness: (1) an assessment of those factors responsible for light production by the relativistic electrons in an EAS and the transmission of light through the atmosphere, (2) calibration of the optical detection system, and (3) a knowledge of the trajectory of the shower.
Choo, Min Soo; Yoo, Changwon; Cho, Sung Yong; Jeong, Seong Jin; Jeong, Chang Wook; Ku, Ja Hyeon; Oh, Seung-June
2017-04-01
As the elderly population increases, a growing number of patients have lower urinary tract symptoms (LUTS)/benign prostatic hyperplasia (BPH). The aim of this study was to develop decision support formulas and nomograms for the prediction of bladder outlet obstruction (BOO) and for BOO-related surgical decision-making, and to validate them in patients with LUTS/BPH. Patients with LUTS/BPH between October 2004 and May 2014 were enrolled as a development cohort. The available variables included age, International Prostate Symptom Score, free uroflowmetry, postvoid residual volume, total prostate volume, and the results of a pressure-flow study. A causal Bayesian network analysis was used to identify relevant parameters. Using multivariate logistic regression analysis, formulas were developed to calculate the probabilities of having BOO and requiring prostatic surgery. Patients between June 2014 and December 2015 were prospectively enrolled for internal validation. Receiver operating characteristic curve analysis, calibration plots, and decision curve analysis were performed. A total of 1,179 male patients with LUTS/BPH, with a mean age of 66.1 years, were included as the development cohort. Another 253 patients were enrolled as an internal validation cohort. Using multivariate logistic regression analysis, 2 and 4 formulas were established to estimate the probabilities of having BOO and requiring prostatic surgery, respectively. Our analysis of the predictive accuracy of the model revealed area under the curve values of 0.82 for BOO and 0.87 for prostatic surgery. The sensitivity and specificity were 53.6% and 87.0% for BOO, and 91.6% and 50.0% for prostatic surgery, respectively. The calibration plot indicated that these prediction models showed good correspondence. In addition, the decision curve analysis showed a high net benefit across the entire spectrum of probability thresholds. We established nomograms for the prediction of BOO and BOO-related prostatic surgery in patients with LUTS/BPH. Internal validation of the nomograms demonstrated that they predicted both having BOO and requiring prostatic surgery very well.
Yang, Tao; Sezer, Hayri; Celik, Ismail B.; ...
2015-06-02
In the present paper, a physics-based procedure combining experiments and multi-physics numerical simulations is developed for overall analysis of SOFC operational diagnostics and performance predictions. In this procedure, essential information for the fuel cell is first extracted by utilizing empirical polarization analysis in conjunction with experiments and then refined by multi-physics numerical simulations via simultaneous analysis and calibration of the polarization curve and impedance behavior. The performance at different utilization cases and operating currents is also predicted to confirm the accuracy of the proposed model. It is demonstrated that, with the present electrochemical model, three air/fuel flow conditions are needed to produce a complete data set for better understanding of the processes occurring within SOFCs. After calibration against button cell experiments, the methodology can be used to assess the performance of a planar cell without further calibration. The proposed methodology would accelerate the calibration process and improve the efficiency of design and diagnostics.
LeBlanc, André; Michaud, Sarah A; Percy, Andrew J; Hardie, Darryl B; Yang, Juncong; Sinclair, Nicholas J; Proudfoot, Jillaine I; Pistawka, Adam; Smith, Derek S; Borchers, Christoph H
2017-07-07
When quantifying endogenous plasma proteins for fundamental and biomedical research - as well as for clinical applications - precise, reproducible, and robust assays are required. Targeted detection of peptides in a bottom-up strategy is the most common and precise mass spectrometry-based quantitation approach when combined with the use of stable isotope-labeled peptides. However, when measuring protein in plasma, the unknown endogenous levels prevent the implementation of the best calibration strategies, since no blank matrix is available. Consequently, several alternative calibration strategies are employed by different laboratories. In this study, these methods were compared to a new approach using two different stable isotope-labeled standard (SIS) peptide isotopologues for each endogenous peptide to be quantified, enabling an external calibration curve as well as the quality control samples to be prepared in pooled human plasma without interference from endogenous peptides. This strategy improves the analytical performance of the assay and enables the accuracy of the assay to be monitored, which can also facilitate method development and validation.
NASA Astrophysics Data System (ADS)
Terada, Takahide; Yamanaka, Kazuhiro; Suzuki, Atsuro; Tsubota, Yushi; Wu, Wenjing; Kawabata, Ken-ichi
2017-07-01
Ultrasound computed tomography (USCT) is promising as a non-invasive, painless, operator-independent and quantitative system for breast-cancer screening. Assembly error, production tolerance, and aging-degradation variations of the hardware components, particularly of plane-wave-based USCT systems, may hamper cost effectiveness, precise imaging, and robust operation. The plane wave is transmitted from a ring-shaped transducer array to receive the signal at a high signal-to-noise ratio and enable fast aperture synthesis. There are four signal-delay components: response delays in the transmitters and receivers, and propagation delays depending on the positions of the transducer elements and their directivity. We developed a highly precise method for calibrating these delay components and evaluated it with our prototype plane-wave-based USCT system. Our calibration method was found to be effective in reducing delay errors. Gaps and curves were eliminated from the plane wave, and echo images of wires were sharpened over the entire imaging area.
Joyce, Richard; Kuziene, Viktorija; Zou, Xin; Wang, Xueting; Pullen, Frank; Loo, Ruey Leng
2016-01-01
An ultra-performance liquid chromatography quadrupole time-of-flight mass spectrometry (UPLC-qTOF-MS) method using hydrophilic interaction liquid chromatography was developed and validated for the simultaneous quantification of 18 free amino acids in urine, with a total acquisition time, including column re-equilibration, of less than 18 min per sample. This method involves simple sample preparation consisting of a 15-fold dilution with acetonitrile to give a final composition of 25 % aqueous and 75 % acetonitrile, without the need for any derivatization. The dynamic range of our calibration curve is approximately two orders of magnitude (120-fold from the lowest calibration curve point) with good linearity (r² ≥ 0.995 for all amino acids). Good separation of all amino acids as well as good intra- and inter-day accuracy (<15 %) and precision (<15 %) were observed using three quality control samples at concentrations in the low, medium and high range of the calibration curve. The limits of detection (LOD) and lower limits of quantification of our method ranged from approximately 1 to 300 nM and from 0.01 to 0.5 µM, respectively. The amino acids in the prepared urine samples were found to be stable for 72 h at 4 °C, after one freeze-thaw cycle, and for up to 4 weeks at -80 °C. We applied this method to quantify the content of 18 free amino acids in 646 urine samples from a dietary intervention study. We were able to quantify all 18 free amino acids in these urine samples when they were present at a level above the LOD. We found our method to be reproducible (accuracy and precision were typically <10 % for QCL, QCM and QCH) and the relatively high sample-throughput nature of this method potentially makes it a suitable alternative for the analysis of urine samples in a clinical setting.
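One common way to derive LOD and LOQ from a calibration line (not necessarily the exact procedure used by the authors) is LOD ≈ 3.3·s/slope and LOQ ≈ 10·s/slope, where s is the residual standard deviation of the fit. A minimal sketch with invented calibration data:

```python
import numpy as np

# Hypothetical calibration data for one amino acid: concentration (µM) vs peak area.
conc = np.array([0.05, 0.1, 0.5, 1.0, 2.5, 5.0])
area = np.array([1.1e3, 2.0e3, 9.8e3, 2.05e4, 5.0e4, 1.01e5])

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
s_res = np.std(residuals, ddof=2)          # residual standard deviation of the fit

lod = 3.3 * s_res / slope                  # common ICH-style estimates
loq = 10.0 * s_res / slope
r2 = 1 - np.sum(residuals**2) / np.sum((area - area.mean())**2)

print(f"slope={slope:.1f}  r^2={r2:.4f}  LOD≈{lod:.3f} µM  LOQ≈{loq:.3f} µM")
```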
On the long-term stability of calibration standards in different matrices.
Kandić, A; Vukanac, I; Djurašević, M; Novković, D; Šešlak, B; Milošević, Z
2012-09-01
In order to assure quality control in accordance with ISO/IEC 17025, it was important, from a metrological point of view, to examine the long-term stability of previously prepared calibration standards. A comprehensive re-examination of efficiency curves with respect to the ageing of the calibration standards is presented in this paper. The calibration standards were re-used after a period of 5 years, and analysis of the results showed discrepancies in efficiency values. Copyright © 2012 Elsevier Ltd. All rights reserved.
Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T
2018-03-01
Gafchromic EBT3 film is widely used for patient-specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between the dose and the red channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated to identify whether these methods produce better results than the commonly used non-linear netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy, and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: the Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results when film was irradiated to standard treatment doses (< 400 cGy); however, none could be recommended as an efficient or accurate replacement for a common film analysis technique which uses transmission scanning, red colour channel analysis, netOD and a non-linear calibration curve for measuring doses up to 4000 cGy when using EBT3 film.
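The conventional workflow referred to above, red-channel net optical density with a non-linear calibration curve, can be sketched as follows. The pixel values, starting parameters, and the specific functional form dose = a·netOD + b·netOD^n are illustrative assumptions, not the authors' exact protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def net_od(pv_unexposed, pv_exposed):
    """Red-channel net optical density from transmission-scan pixel values."""
    return np.log10(pv_unexposed / pv_exposed)

def dose_model(nod, a, b, n):
    """Common non-linear calibration form: dose = a*netOD + b*netOD**n."""
    return a * nod + b * nod**n

# Hypothetical calibration films: delivered dose (cGy) and mean red-channel pixel values.
dose_cGy = np.array([0, 50, 100, 200, 400, 800, 1600, 3200, 4000], float)
pv_exposed = np.array([42000, 39500, 37500, 34200, 29800, 24500, 19000, 14200, 13000], float)
pv_blank = 42000.0

nod = net_od(pv_blank, pv_exposed)
popt, _ = curve_fit(dose_model, nod[1:], dose_cGy[1:], p0=[2000.0, 8000.0, 2.5], maxfev=10000)
print("fitted a, b, n:", np.round(popt, 2))

# Convert a measured netOD from a QA film back to dose.
print("dose at netOD 0.25 ≈ %.0f cGy" % dose_model(0.25, *popt))
```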
Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Beavis, A W; Saunderson, J R
2014-05-07
The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given as it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that resulting clinical image quality will be adequate for the required clinical task. However, this must be done with close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.
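As an illustration of the image quality metrics named above, a minimal sketch of SNR and CNR computed from regions of interest in a uniform-phantom image is given below; the pixel data are synthetic, and the authors' exact ROI definitions and the eNEQ calculation are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic uniform-phantom image regions: background and a contrast insert.
background = rng.normal(loc=1200.0, scale=25.0, size=(100, 100))   # detector signal (a.u.)
insert = rng.normal(loc=1300.0, scale=25.0, size=(100, 100))

snr = background.mean() / background.std(ddof=1)
cnr = (insert.mean() - background.mean()) / background.std(ddof=1)

print(f"SNR = {snr:.1f}")
print(f"CNR = {cnr:.1f}")
# A constant-SNR calibration curve would adjust the detector air kerma at each tube
# voltage until this SNR matches the reference value chosen at calibration.
```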
NASA Astrophysics Data System (ADS)
Moore, C. S.; Wood, T. J.; Avery, G.; Balcam, S.; Needler, L.; Beavis, A. W.; Saunderson, J. R.
2014-05-01
The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given as it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that resulting clinical image quality will be adequate for the required clinical task. However, this must be done with close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.
Rastkhah, E; Zakeri, F; Ghoranneviss, M; Rajabpour, M R; Farshidpour, M R; Mianji, F; Bayat, M
2016-03-01
An in vitro study of the dose responses of human peripheral blood lymphocytes was conducted with the aim of creating calibrated dose-response curves for biodosimetry measuring up to 4 Gy (0.25-4 Gy) of gamma radiation. The cytokinesis-blocked micronucleus (CBMN) assay was employed to obtain the frequencies of micronuclei (MN) per binucleated cell in blood samples from 16 healthy donors (eight males and eight females) in two age ranges of 20-34 and 35-50 years. The data were used to construct the calibration curves for men and women in two age groups, separately. An increase in micronuclei yield with the dose in a linear-quadratic way was observed in all groups. To verify the applicability of the constructed calibration curve, MN yields were measured in peripheral blood lymphocytes of two real overexposed subjects and three irradiated samples with unknown dose, and the results were compared with dose values obtained from measuring dicentric chromosomes. The comparison of the results obtained by the two techniques indicated a good agreement between dose estimates. The average baseline frequency of MN for the 130 healthy non-exposed donors (77 men and 55 women, 20-60 years old divided into four age groups) ranged from 6 to 21 micronuclei per 1000 binucleated cells. Baseline MN frequencies were higher for women and for the older age group. The results presented in this study point out that the CBMN assay is a reliable, easier and valuable alternative method for biological dosimetry.
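The linear-quadratic dose response described above is conventionally written Y = C + αD + βD², where Y is the micronucleus yield and D the absorbed dose. A minimal fitting and dose-estimation sketch with invented MN frequencies:

```python
import numpy as np

# Hypothetical micronuclei per 1000 binucleated cells at each dose (Gy).
dose = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0])
mn_per_1000 = np.array([12.0, 20.0, 31.0, 58.0, 135.0, 248.0, 395.0])

# Linear-quadratic model Y = C + alpha*D + beta*D^2, fitted by ordinary least squares.
X = np.column_stack([np.ones_like(dose), dose, dose**2])
coeffs, *_ = np.linalg.lstsq(X, mn_per_1000, rcond=None)
C, alpha, beta = coeffs
print(f"Y = {C:.1f} + {alpha:.1f}*D + {beta:.1f}*D^2  (MN per 1000 BN cells)")

# Dose estimation for an unknown sample: solve beta*D^2 + alpha*D + (C - Y_obs) = 0.
y_obs = 180.0
D_est = (-alpha + np.sqrt(alpha**2 - 4 * beta * (C - y_obs))) / (2 * beta)
print(f"estimated dose for {y_obs} MN/1000 BN cells ≈ {D_est:.2f} Gy")
```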
A simple topography-driven, calibration-free runoff generation model
NASA Astrophysics Data System (ADS)
Gao, H.; Birkel, C.; Hrachowitz, M.; Tetzlaff, D.; Soulsby, C.; Savenije, H. H. G.
2017-12-01
Determining the amount of runoff generated from rainfall occupies a central place in rainfall-runoff modelling. Moreover, reading landscapes and developing calibration-free runoff generation models that adequately reflect land surface heterogeneities remains the focus of much hydrological research. In this study, we created a new method to estimate runoff generation - the HAND-based Storage Capacity curve (HSC) - which uses a topographic index (HAND, Height Above the Nearest Drainage) to identify hydrological similarity and, in part, the saturated areas of catchments. We then coupled the HSC model with the Mass Curve Technique (MCT) method to estimate root zone storage capacity (SuMax), and obtained the calibration-free runoff generation model HSC-MCT. Both models (HSC and HSC-MCT) allow us to estimate runoff generation and simultaneously visualize the spatial dynamics of the saturated area. We tested the two models in the data-rich Bruntland Burn (BB) experimental catchment in Scotland, which has an unusual time series of field-mapped saturation area extent. The models were subsequently tested in 323 MOPEX (Model Parameter Estimation Experiment) catchments in the United States. HBV and TOPMODEL were used as benchmarks. We found that the HSC performed better in reproducing the spatio-temporal pattern of the observed saturated areas in the BB catchment compared with TOPMODEL, which is based on the topographic wetness index (TWI). The HSC also outperformed HBV and TOPMODEL in the MOPEX catchments for both calibration and validation. Despite having no calibrated parameters, the HSC-MCT model also performed comparably well with the calibrated HBV and TOPMODEL, highlighting both the robustness of the HSC model in describing the spatial distribution of the root zone storage capacity and the efficiency of the MCT method in estimating SuMax. Moreover, the HSC-MCT model facilitated effective visualization of the saturated area, which has the potential to be used for broader geoscience studies beyond hydrology.
Ignjatović, Aleksandra; Stojanović, Miodrag; Milošević, Zoran; Anđelković Apostolović, Marija
2017-12-02
The interest in developing risk models in medicine is not only appealing but also associated with many obstacles in different aspects of predictive model development. Initially, the association of one or more biomarkers with a specific outcome was established by statistical significance, but novel and more demanding questions required the development of new and more complex statistical techniques. The progress of statistical analysis in biomedical research is best observed through the history of the Framingham study and the development of the Framingham score. Evaluation of predictive models rests on a combination of several metrics. When using logistic regression and Cox proportional hazards regression analysis, calibration testing and ROC curve analysis should be mandatory and eliminatory, and a central place should be taken by some newer statistical techniques. To obtain complete information about a new marker in a model, it is now recommended to use reclassification tables, calculating the net reclassification index and the integrated discrimination improvement. Decision curve analysis is a novel method for evaluating the clinical usefulness of a predictive model. It may be noted that customizing and fine-tuning of the Framingham risk score drove the development of statistical analysis. A clinically applicable predictive model should be a trade-off between all of the abovementioned statistical metrics: between calibration and discrimination, accuracy and decision-making, costs and benefits, and the quality and quantity of the patient's life.
NASA Astrophysics Data System (ADS)
Zou, Yuan; Shen, Tianxing
2013-03-01
Besides illumination calculation during architectural and luminous environment design, to provide more varieties of photometric data, this paper presents the combination of luminous environment design with the SM light environment measuring system, which comprises a set of experimental devices, including light-information collecting and processing modules, and can provide various types of photometric data. During the research, we introduced a simulation method for calibration, which mainly includes rebuilding experimental scenes in 3ds Max Design, calibrating this computer-aided design software in the simulated environment under various typical light sources, and fitting the exposure curves of rendered images. As the analysis proceeded, the operation sequence and points of attention for the simulated calibration were established, and connections between the Mental Ray renderer and the SM light environment measuring system were also established.
Soo, Danielle H E; Pendharkar, Sayali A; Jivanji, Chirag J; Gillies, Nicola A; Windsor, John A; Petrov, Maxim S
2017-10-01
Approximately 40% of patients develop abnormal glucose metabolism after a single episode of acute pancreatitis. This study aimed to develop and validate a prediabetes self-assessment screening score for patients after acute pancreatitis. Data from non-overlapping training (n=82) and validation (n=80) cohorts were analysed. Univariate logistic and linear regression identified variables associated with prediabetes after acute pancreatitis. Multivariate logistic regression developed the score, ranging from 0 to 215. The area under the receiver-operating characteristic curve (AUROC), Hosmer-Lemeshow χ² statistic, and calibration plots were used to assess model discrimination and calibration. The developed score was validated using data from the validation cohort. The score had an AUROC of 0.88 (95% CI, 0.80-0.97) and a Hosmer-Lemeshow χ² statistic of 5.75 (p=0.676). Patients with a score of ≥75 had a 94.1% probability of having prediabetes, and were 29 times more likely to have prediabetes than those with a score of <75. The AUROC in the validation cohort was 0.81 (95% CI, 0.70-0.92) and the Hosmer-Lemeshow χ² statistic was 5.50 (p=0.599). The score showed good calibration in both cohorts. The developed and validated score, called PERSEUS, is the first instrument to identify individuals who are at high risk of developing abnormal glucose metabolism following an episode of acute pancreatitis. Copyright © 2017 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
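A points-based screening score of this kind is typically mapped to a probability through a logistic model and then dichotomized at the decision threshold (here ≥75 points). The intercept and slope in the sketch below are illustrative placeholders, not the published PERSEUS coefficients.

```python
import math

def prediabetes_probability(score, intercept=-4.0, slope=0.055):
    """Map a 0-215 point score to a probability with a logistic link.
    The intercept/slope are illustrative placeholders, not the published model."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * score)))

for score in (40, 75, 120, 180):
    p = prediabetes_probability(score)
    decision = "high risk (refer for testing)" if score >= 75 else "low risk"
    print(f"score {score:3d}: p(prediabetes) ≈ {p:.2f} -> {decision}")
```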
Environmental Health Monitor: Advanced Development of Temperature Sensor Suite.
1995-07-30
systems was implemented using program code existing at Veritay. The software, written in Microsoft® QuickBASIC, facilitated program changes for...currently unforeseen reason re-calibration is needed, this can be readily accommodated by a straightforward change in the software program---without...unit. A linear relationship between these differences was obtained using curve-fitting software. The ½-inch globe to 6-inch globe correlation was
USDA-ARS?s Scientific Manuscript database
We developed a sensitive mass spectrometry-based method of quantitating the prions present in elk and sheep. Calibration curves relating the area ratios of the selected analyte peptides and their homologous stable isotope labeled internal standards were prepared. This method was compared to the ELIS...
An implantable transducer for measuring tension in an anterior cruciate ligament graft.
Ventura, C P; Wolchok, J; Hull, M L; Howell, S M
1998-06-01
The goal of this study was to develop a new implantable transducer for measuring anterior cruciate ligament (ACL) graft tension postoperatively in patients who have undergone ACL reconstructive surgery. A unique approach was taken of integrating the transducer into a femoral fixation device. To devise a practical in vivo calibration protocol for the fixation device transducer (FDT), several hypotheses were investigated: (1) The use of a cable versus the actual graft as the means for applying load to the FDT during calibration has no significant effect on the accuracy of the FDT tension measurements; (2) the number of flexion angles at which the device is calibrated has no significant effect on the accuracy of the FDT measurements; (3) the friction between the graft and femoral tunnel has no significant effect on measurement accuracy. To provide data for testing these hypotheses, the FDT was first calibrated with both a cable and a graft over the full range of flexion. Then graft tension was measured simultaneously with both the FDT on the femoral side and load cells, which were connected to the graft on the tibial side, as five cadaver knees were loaded externally. Measurements were made with both standard and overdrilled tunnels. The error in the FDT tension measurements was the difference between the graft tension measured by the FDT and the load cells. Results of the statistical analyses showed that neither the means of applying the calibration load, the number of flexion angles used for calibration, nor the tunnel size had a significant effect on the accuracy of the FDT. Thus a cable may be used instead of the graft to transmit loads to the FDT during calibration, thus simplifying the procedure. Accurate calibration requires data from just three flexion angles of 0, 45, and 90 deg and a curve fit to obtain a calibration curve over a continuous range of flexion within the limits of this angle group. Since friction did not adversely affect the measurement accuracy of the FDT, the femoral tunnel can be drilled to match the diameter of the graft and does not need to be overdrilled. Following these procedures, the error in measuring graft tension with the FDT averages less than 10 percent relative to a full-scale load of 257 N.
Williams, Ammon; Bryce, Keith; Phongikaroon, Supathorn
2017-10-01
Pyroprocessing of used nuclear fuel (UNF) has many advantages, including that it is proliferation resistant. However, as part of the process, special nuclear materials accumulate in the electrolyte salt and present material accountability and safeguards concerns. The main motivation of this work was to explore a laser-induced breakdown spectroscopy (LIBS) approach as an online monitoring technique to enhance the material accountability of special nuclear materials in pyroprocessing. In this work, a vacuum extraction method was used to draw the molten salt (CeCl₃-GdCl₃-LiCl-KCl) up into 4 mm diameter Pyrex tubes, where it froze. The salt was then removed and the solid salt was measured using LIBS and inductively coupled plasma mass spectrometry (ICP-MS). A total of 36 samples were made that varied the CeCl₃ and GdCl₃ (surrogates for uranium and plutonium, respectively) concentrations from 0.5 wt% to 5 wt%. From these samples, univariate calibration curves for Ce and Gd were generated using peak area and peak intensity methods. For Ce, the Ce 551.1 nm line using the peak area provided the best calibration curve, with a limit of detection (LOD) of 0.099 wt% and a root mean squared error of cross-validation (RMSECV) of 0.197 wt%. For Gd, the best curve was generated using the peak intensities of the Gd 564.2 nm line, resulting in an LOD of 0.027 wt% and an RMSECV of 0.295 wt%. The RMSECV values for the univariate cases were determined using leave-one-out cross-validation. In addition to the univariate calibration curves, partial least squares (PLS) regression was used to develop a calibration model. The PLS models yielded similar results, with RMSECV values (determined using Venetian blind cross-validation with 17% left out per split) of 0.30 wt% and 0.29 wt% for Ce and Gd, respectively. This work has shown that solid pyroprocessing salt can be qualitatively and quantitatively monitored using LIBS, and it has the potential to significantly enhance the material monitoring and safeguards of special nuclear materials in pyroprocessing.
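The univariate workflow summarized above, fitting signal versus concentration, estimating the LOD from the blank scatter and the slope, and reporting a leave-one-out RMSECV, can be sketched as below. All concentrations, peak areas, and the blank standard deviation are invented, and the 3σ/slope LOD definition is one common convention rather than necessarily the one used here.

```python
import numpy as np

# Hypothetical Ce calibration: CeCl3 concentration (wt%) vs integrated peak area (a.u.).
conc = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
area = np.array([1.05e4, 2.1e4, 4.0e4, 6.2e4, 8.1e4, 1.02e5])
sigma_blank = 650.0   # standard deviation of the blank signal (assumed)

slope, intercept = np.polyfit(conc, area, 1)
lod = 3.0 * sigma_blank / slope
print(f"LOD ≈ {lod:.3f} wt%")

# Leave-one-out cross-validation for RMSECV.
errors = []
for i in range(len(conc)):
    mask = np.ones(len(conc), dtype=bool)
    mask[i] = False
    s, b = np.polyfit(conc[mask], area[mask], 1)
    pred = (area[i] - b) / s                 # predict the left-out concentration
    errors.append(pred - conc[i])
rmsecv = np.sqrt(np.mean(np.square(errors)))
print(f"RMSECV ≈ {rmsecv:.3f} wt%")
```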
NASA Astrophysics Data System (ADS)
Williams, Ammon Ned
The primary objective of this research is to develop an applied technology and provide an assessment for remotely measuring and analyzing the real-time or near-real-time concentrations of used nuclear fuel (UNF) elements in electrorefiners (ER). Here, Laser-Induced Breakdown Spectroscopy (LIBS) in UNF pyroprocessing facilities was investigated. LIBS is an elemental analysis method based on the emission from a plasma generated by focusing a laser beam into the medium. This technology has been reported to be applicable to solids, liquids (including molten metals), and gases for detecting elements of special nuclear materials. The advantages of applying the technology in pyroprocessing facilities are: (i) rapid real-time elemental analysis; (ii) direct detection of elements and impurities in the system with low limits of detection (LOD); and (iii) little to no sample preparation is required. One important challenge to overcome is achieving reproducible spectral data over time while being able to accurately quantify fission products, rare earth elements, and actinides in the molten salt. Another important challenge is related to the accessibility of the molten salt, which is heated in a heavily insulated, remotely operated furnace in a high-radiation environment within an argon gas atmosphere. This dissertation addresses these challenges in the following phases, with their highlighted outcomes: 1. Aerosol-LIBS system design and aqueous testing: An aerosol-LIBS system was designed around a Collison nebulizer and tested using deionized water with Ce, Gd, and Nd concentrations from 100 ppm to 10,000 ppm. The average %RSD values between the sample repetitions were 4.4% and 3.8% for the Ce and Gd lines, respectively. The univariate calibration curve for Ce using the peak intensities of the Ce 418.660 nm line was recommended and had an R², LOD, and RMSECV of 0.994, 189 ppm, and 390 ppm, respectively. The recommended Gd calibration curve was generated using the peak areas of the Gd 409.861 nm line and had an R², LOD, and RMSECV of 0.992, 316 ppm, and 421 ppm, respectively. The partial least squares (PLS) calibration curves yielded similar results, with RMSECV values of 406 ppm and 417 ppm for the Ce and Gd curves, respectively. 2. High-temperature aerosol-LIBS system design and CeCl₃ testing: The aerosol-LIBS system was transitioned to high temperature and used to measure Ce in molten LiCl-KCl salt within a glovebox environment. The concentration range studied was from 0.1 wt% to 5 wt% Ce. Normalization was necessary due to signal degradation over time; however, with normalization the %RSD values averaged 5% for the mid and upper concentrations studied. The best univariate calibration curve was generated using the peak areas of the Ce 418.660 nm line. The LOD for this line was 148 ppm with an RMSECV of 647 ppm. The PLS calibration curve was made using 7 latent variables (LVs), resulting in an RMSECV of 622 ppm. The LOD value was below the expected rare earth concentration within the ER. 3. Aerosol-LIBS testing using UCl₃: Samples containing UCl₃ with concentrations ranging from 0.3 wt% to 5 wt% were measured. The spectral response in this range was linear. The best univariate calibration curve was generated using the peak areas of the U 367.01 nm line and had an R² value of 0.9917. Here, the LOD was 647 ppm and the RMSECV was 2,290 ppm. The PLS model was substantially better, with an RMSECV of 1,110 ppm. The LOD found here is below the expected U concentrations in the ER. The successful completion of this study has demonstrated the feasibility of using an aerosol-LIBS analytical technique to measure rare earth elements and actinides in the pyroprocessing salt.
Radiochromic film calibration for the RQT9 quality beam
NASA Astrophysics Data System (ADS)
Costa, K. C.; Gomez, A. M. L.; Alonso, T. C.; Mourao, A. P.
2017-11-01
When ionizing radiation interacts with matter, it deposits energy. Radiation dosimetry is important for medical applications of ionizing radiation due to the increasing demand for diagnostic radiology and radiotherapy. Different dosimetry methods are used, and each has its advantages and disadvantages. Film is a dose measurement method that records the energy deposition through the darkening of its emulsion. Radiochromic films have little sensitivity to visible light and respond better to ionizing radiation exposure. The aim of this study is to obtain a calibration curve by irradiating radiochromic film strips, making it possible to relate the darkening of the film to the absorbed dose, in order to measure doses in experiments with a 120 kV X-ray beam in computed tomography (CT). Film strips of GAFCHROMIC XR-QA2 were exposed according to the RQT9 reference radiation quality, which defines an X-ray beam generated at a voltage of 120 kV. Strips were irradiated at the Laboratório de Calibração de Dosímetros do Centro de Desenvolvimento da Tecnologia Nuclear (LCD/CDTN) over a dose range of 5-30 mGy, corresponding to the range of values commonly used in CT scans. Digital images of the irradiated films were analyzed using the ImageJ software. The darkening response of the film strips as a function of dose was observed, yielding a numeric darkening value for each specific dose. From these numerical darkening values, a calibration curve was obtained, which correlates the darkening of the film strip with dose values in mGy. The calibration curve equation provides a simplified method for obtaining absorbed dose values from digital images of irradiated radiochromic films. With the calibration curve, radiochromic films may be applied to dosimetry in CT experiments using a 120 kV X-ray beam, in order to improve CT image acquisition processes.
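A darkening-dose calibration of the kind described can be sketched from mean ROI pixel values. All numbers below (pixel values, the unirradiated reference, and the linear form of the fit) are illustrative assumptions rather than the measured XR-QA2 data.

```python
import numpy as np

# Hypothetical mean ROI pixel values (e.g. from ImageJ) for film strips, RQT9 beam.
dose_mGy = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
mean_pv = np.array([231.0, 214.0, 199.0, 186.0, 174.0, 164.0])   # 8-bit red channel
pv_unirradiated = 248.0

darkening = pv_unirradiated - mean_pv          # net darkening of each strip

# Linear calibration: dose as a function of darkening (assumed functional form).
slope, intercept = np.polyfit(darkening, dose_mGy, 1)
print(f"dose [mGy] ≈ {slope:.3f} * darkening + {intercept:.3f}")

# Apply the curve to a strip irradiated in a CT experiment.
measured_pv = 192.0
print(f"estimated dose ≈ {slope * (pv_unirradiated - measured_pv) + intercept:.1f} mGy")
```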
SU-E-T-137: The Response of TLD-100 in Mixed Fields of Photons and Electrons.
Lawless, M; Junell, S; Hammer, C; DeWerd, L
2012-06-01
Thermoluminescent dosimeters are used routinely for dosimetric measurements of photon and electron fields. However, no work has been published characterizing TLDs for use in combined photon and electron fields. This work investigates the response of TLD-100 (LiF:Mg,Ti) in mixed fields of photon and electron beam qualities. TLDs were irradiated in a 6 MV photon beam, a 6 MeV electron beam, and a NIST-traceable cobalt-60 beam. TLDs were also irradiated in a mixed field of the electron and photon beams. All irradiations were normalized to absorbed dose to water as defined in the AAPM TG-51 report. The average response per dose (nC/Gy) for each linac beam quality was normalized to the average response per dose of the TLDs irradiated by the cobalt-60 standard. Irradiations were performed in a water tank and a Virtual Water™ phantom. Two TLD dose calibration curves for determining absorbed dose to water were generated using photon and electron field TLD response data. These individual beam quality dose calibration curves were applied to the TLDs irradiated in the mixed field. The TLD response in the mixed field was less sensitive than the response in the photon field and more sensitive than the response in the electron field. TLD determination of dose in the mixed field using the dose calibration curve generated by TLDs irradiated by photons resulted in an underestimation of the delivered dose, while the use of a dose calibration curve generated using electrons resulted in an overestimation of the delivered dose. The relative response of TLD-100 in mixed fields fell consistently between the photon and electron relative responses. When using TLD-100 in mixed fields, the user must account for this intermediate response to avoid an over- or underestimation of the dose due to calibration in a single photon or electron field. © 2012 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Suaniti, Ni Made; Manurung, Manuntun
2016-03-01
Gas chromatography-mass spectrometry is used to separate two or more compounds and to identify fragment ions specific to ethanol biomarkers such as palmitic acid ethyl ester (PAEE), one of the fatty acid ethyl esters formed through conjugation reactions, for early detection of ethanol consumption. This study aims to calibrate ethyl palmitate and to develop the analysis with oleic acid. The method can be used to analyze ethanol and its chemical biomarkers after sub-acute ethanol consumption in forensic analytical toxicology. The results show that the ethanol level in the urine of Wistar rats was 9.21 ppm and decreased to 6.59 ppm 48 hours after consumption. The calibration curve of ethyl palmitate was y = 0.2035x + 1.0465 with R² = 0.9886. The resolution between ethyl palmitate and ethyl oleate was >1.5, indicating good separation, with a specific fragment ion at m/z 88 and a retention time of 18 minutes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yi Jun; Mandelis, Andreas, E-mail: mandelis@mie.utoronto.ca; Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3G9
In this work, laser-based wavelength-modulated differential photothermal radiometry (WM-DPTR) is applied to develop a non-invasive in-vehicle alcohol biosensor. WM-DPTR features unprecedented ethanol-specificity and sensitivity by suppressing baseline variations through a differential measurement near the peak and baseline of the mid-infrared ethanol absorption spectrum. Biosensor signal calibration curves are obtained from WM-DPTR theory and from measurements in human blood serum and ethanol solutions diffused from skin. The results demonstrate that the WM-DPTR-based calibrated alcohol biosensor can achieve high precision and accuracy for the ethanol concentration range of 0-100 mg/dl. The high-performance alcohol biosensor can be incorporated into ignition interlocks that could be fitted as a universal accessory in vehicles in an effort to reduce incidents of drinking and driving.
Niraula, Rewati; Norman, Laura A.; Meixner, Thomas; Callegary, James B.
2012-01-01
In most watershed-modeling studies, flow is calibrated at one monitoring site, usually at the watershed outlet. Like many arid and semi-arid watersheds, the main reach of the Santa Cruz watershed, located on the Arizona-Mexico border, is discontinuous for most of the year except during large flood events, and therefore the flow characteristics at the outlet do not represent the entire watershed. Calibration is required at multiple locations along the Santa Cruz River to improve model reliability. The objective of this study was to best portray surface water flow in this semiarid watershed and evaluate the effect of multi-gage calibration on flow predictions. In this study, the Soil and Water Assessment Tool (SWAT) was calibrated at seven monitoring stations, which improved model performance and increased the reliability of flow, in the Santa Cruz watershed. The most sensitive parameters to affect flow were found to be curve number (CN2), soil evaporation and compensation coefficient (ESCO), threshold water depth in shallow aquifer for return flow to occur (GWQMN), base flow alpha factor (Alpha_Bf), and effective hydraulic conductivity of the soil layer (Ch_K2). In comparison, when the model was established with a single calibration at the watershed outlet, flow predictions at other monitoring gages were inaccurate. This study emphasizes the importance of multi-gage calibration to develop a reliable watershed model in arid and semiarid environments. The developed model, with further calibration of water quality parameters will be an integral part of the Santa Cruz Watershed Ecosystem Portfolio Model (SCWEPM), an online decision support tool, to assess the impacts of climate change and urban growth in the Santa Cruz watershed.
Eslamizad, Samira; Yazdanpanah, Hassan; Javidnia, Katayon; Sadeghi, Ramezan; Bayat, Mitra; Shahabipour, Sara; Khalighian, Najmeh; Kobarfard, Farzad
2016-01-01
A fast and simple modified QuEChERS (quick, easy, cheap, effective, rugged and safe) extraction method based on spiked calibration curves and direct sample introduction was developed for the determination of benzo[a]pyrene (BaP) in bread by gas chromatography-mass spectrometry with a single quadrupole in selected ion monitoring mode (GC/MS-SQ-SIM). Sample preparation included extraction of BaP into acetone followed by cleanup with dispersive solid phase extraction. The use of spiked samples for constructing the calibration curve substantially reduced adverse matrix-related effects. The average recovery of BaP at six concentration levels was in the range of 95-120%. The method proved to be reproducible, with a relative standard deviation of less than 14.5% at all concentration levels. The limit of detection and limit of quantification were 0.3 ng/g and 0.5 ng/g, respectively. A correlation coefficient of 0.997 was obtained for spiked calibration standards over the concentration range of 0.5-20 ng/g. To the best of our knowledge, this is the first time that a QuEChERS method has been used for the analysis of BaP in bread. The developed method was used for the determination of BaP in 29 traditional (Sangak) and industrial (Senan) bread samples collected from Tehran in 2014. The results showed that two Sangak samples were contaminated with BaP. Therefore, a comprehensive survey for monitoring of BaP in Sangak bread samples seems to be needed. This is the first report concerning contamination of bread samples with BaP in Iran. PMID:27642317
Powder X-ray diffraction method for the quantification of cocrystals in the crystallization mixture.
Padrela, Luis; de Azevedo, Edmundo Gomes; Velaga, Sitaram P
2012-08-01
The solid state purity of cocrystals critically affects their performance. Thus, it is important to accurately quantify the purity of cocrystals in the final crystallization product. The aim of this study was to develop a powder X-ray diffraction (PXRD) quantification method for investigating the purity of cocrystals. The method developed was employed to study the formation of indomethacin-saccharin (IND-SAC) cocrystals by mechanochemical methods. Pure IND-SAC cocrystals were geometrically mixed with 1:1 w/w mixture of indomethacin/saccharin in various proportions. An accurately measured amount (550 mg) of the mixture was used for the PXRD measurements. The most intense, non-overlapping, characteristic diffraction peak of IND-SAC was used to construct the calibration curve in the range 0-100% (w/w). This calibration model was validated and used to monitor the formation of IND-SAC cocrystals by liquid-assisted grinding (LAG). The IND-SAC cocrystal calibration curve showed excellent linearity (R(2) = 0.9996) over the entire concentration range, displaying limit of detection (LOD) and limit of quantification (LOQ) values of 1.23% (w/w) and 3.74% (w/w), respectively. Validation results showed excellent correlations between actual and predicted concentrations of IND-SAC cocrystals (R(2) = 0.9981). The accuracy and reliability of the PXRD quantification method depend on the methods of sample preparation and handling. The crystallinity of the IND-SAC cocrystals was higher when larger amounts of methanol were used in the LAG method. The PXRD quantification method is suitable and reliable for verifying the purity of cocrystals in the final crystallization product.
2010-03-01
... is to develop a novel, clinically useful delivered-dose verification protocol for modern prostate VMAT using an Electronic Portal Imaging Device (EPID) ... technique. A number of important milestones have been accomplished, which include (i) a calibrated CBCT HU vs. electron density curve; (ii) ... prostate VMAT using an Electronic Portal Imaging Device (EPID) and onboard Cone Beam Computed Tomography (CBCT). The specific aims of this project ...
Delahanty, Ryan J; Kaufman, David; Jones, Spencer S
2018-06-01
Risk adjustment algorithms for ICU mortality are necessary for measuring and improving ICU performance. Existing risk adjustment algorithms are not widely adopted. Key barriers to adoption include licensing and implementation costs as well as labor costs associated with human-intensive data collection. Widespread adoption of electronic health records makes automated risk adjustment feasible. Using modern machine learning methods and open source tools, we developed and evaluated a retrospective risk adjustment algorithm for in-hospital mortality among ICU patients. The Risk of Inpatient Death score can be fully automated and is reliant upon data elements that are generated in the course of usual hospital processes. One hundred thirty-one ICUs in 53 hospitals operated by Tenet Healthcare. A cohort of 237,173 ICU patients discharged between January 2014 and December 2016. The data were randomly split into training (36 hospitals), and validation (17 hospitals) data sets. Feature selection and model training were carried out using the training set while the discrimination, calibration, and accuracy of the model were assessed in the validation data set. Model discrimination was evaluated based on the area under receiver operating characteristic curve; accuracy and calibration were assessed via adjusted Brier scores and visual analysis of calibration curves. Seventeen features, including a mix of clinical and administrative data elements, were retained in the final model. The Risk of Inpatient Death score demonstrated excellent discrimination (area under receiver operating characteristic curve = 0.94) and calibration (adjusted Brier score = 52.8%) in the validation dataset; these results compare favorably to the published performance statistics for the most commonly used mortality risk adjustment algorithms. Low adoption of ICU mortality risk adjustment algorithms impedes progress toward increasing the value of the healthcare delivered in ICUs. The Risk of Inpatient Death score has many attractive attributes that address the key barriers to adoption of ICU risk adjustment algorithms and performs comparably to existing human-intensive algorithms. Automated risk adjustment algorithms have the potential to obviate known barriers to adoption such as cost-prohibitive licensing fees and significant direct labor costs. Further evaluation is needed to ensure that the level of performance observed in this study could be achieved at independent sites.
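A minimal sketch of how the discrimination and calibration metrics reported above (AUC, a Brier-based score, calibration curve) can be computed on a validation set; the gradient-boosting model, the placeholder data, and the particular "scaled Brier" definition are illustrative assumptions, not the actual Risk of Inpatient Death implementation.

```python
# Illustrative evaluation of a mortality risk model: discrimination and calibration.
# X_train/y_train and X_val/y_val are hypothetical arrays standing in for the real cohorts.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(5000, 17)), rng.integers(0, 2, 5000)  # placeholder data
X_val, y_val = rng.normal(size=(2000, 17)), rng.integers(0, 2, 2000)

model = GradientBoostingClassifier().fit(X_train, y_train)   # stand-in for the actual model
p_val = model.predict_proba(X_val)[:, 1]                      # predicted in-hospital mortality risk

auc = roc_auc_score(y_val, p_val)                             # discrimination
brier = brier_score_loss(y_val, p_val)
# One common "scaled" Brier score: improvement over a no-skill model predicting the event rate.
brier_ref = brier_score_loss(y_val, np.full(y_val.shape, y_val.mean()))
scaled_brier = 1.0 - brier / brier_ref

# Calibration curve: observed event fraction per bin of predicted risk.
obs_frac, mean_pred = calibration_curve(y_val, p_val, n_bins=10, strategy="quantile")
print(f"AUC = {auc:.3f}, scaled Brier = {scaled_brier:.3f}")
for m, o in zip(mean_pred, obs_frac):
    print(f"  mean predicted {m:.2f} -> observed {o:.2f}")
```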
NASA Astrophysics Data System (ADS)
Blaauw, Maarten; Heuvelink, Gerard B. M.; Mauquoy, Dmitri; van der Plicht, Johannes; van Geel, Bas
2003-06-01
14C wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a sequence of closely spaced peat 14C dates with the 14C calibration curve. A numerical approach to WMD enables the quantitative assessment of various possible wiggle-match solutions and of calendar year confidence intervals for sequences of 14C dates. We assess the assumptions, advantages, and limitations of the method. Several case-studies show that WMD results in more precise chronologies than when individual 14C dates are calibrated. WMD is most successful during periods with major excursions in the 14C calibration curve (e.g., in one case WMD could narrow down confidence intervals from 230 to 36 yr).
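A toy illustration of the wiggle-matching idea described above: a sequence of 14C ages with a fixed, known calendar-year spacing is slid along a calibration curve and a weighted least-squares mismatch is computed for each candidate position. The calibration curve, spacing and measurements below are synthetic placeholders, not IntCal data or the authors' numerical implementation.

```python
# Toy wiggle-match: find the calendar-age position that best aligns a sequence of
# 14C ages (with known relative spacing) with a calibration curve. All data are synthetic.
import numpy as np

# Synthetic calibration curve: calendar age (cal BP) -> 14C age (BP), with invented wiggles.
cal_age = np.arange(2000, 3001)
c14_curve = cal_age - 60.0 * np.sin(cal_age / 40.0)

# Measured 14C ages for samples assumed to be 10 calendar years apart (e.g. constant peat growth).
sample_spacing = 10                                    # cal yr between consecutive samples
c14_meas = np.array([2510.0, 2535.0, 2480.0, 2470.0, 2525.0])
c14_err = np.full_like(c14_meas, 30.0)                 # 1-sigma measurement errors

best_start, best_chi2 = None, np.inf
for start in range(int(cal_age.min()), int(cal_age.max()) - sample_spacing * len(c14_meas)):
    cal_positions = start + sample_spacing * np.arange(len(c14_meas))
    curve_vals = np.interp(cal_positions, cal_age, c14_curve)
    chi2 = np.sum(((c14_meas - curve_vals) / c14_err) ** 2)   # weighted least-squares mismatch
    if chi2 < best_chi2:
        best_start, best_chi2 = start, chi2

print(f"Best-fit calendar age of the first sample: {best_start} cal BP (chi2 = {best_chi2:.1f})")
```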
NASA Astrophysics Data System (ADS)
de Moraes, Alex Silva; Tech, Lohane; Melquíades, Fábio Luiz; Bastos, Rodrigo Oliveira
2014-11-01
Considering the importance of understanding the behavior of elements in different natural and/or anthropic processes, this study aimed to verify the accuracy of a multielement analysis method for rock characterization using soil standards as the calibration reference. An EDXRF instrument was used. The analyses were made on samples doped with known concentrations of Mn, Zn, Rb, Sr and Zr, to obtain the calibration curves, and on a certified rock sample to check the accuracy of the analytical curves. A set of rock samples from Rio Bonito, located in Figueira city, Paraná State, Brazil, was then analyzed. The concentration values obtained, in ppm, for Mn, Rb, Sr and Zr varied, respectively, from 175 to 1084, 7.4 to 268, 28 to 2247 and 15 to 761.
Boyle, Rebecca R; McLean, Stuart; Brandon, Sue; Pass, Georgia J; Davies, Noel W
2002-11-25
We have developed two solid-phase microextraction (SPME) methods, coupled with gas chromatography, for quantitatively analysing the major Eucalyptus leaf terpene, 1,8-cineole, in both expired air and blood from the common brushtail possum (Trichosurus vulpecula). In-line SPME sampling (5 min at 20 degrees C room temperature) of excurrent air from an expiratory chamber containing a possum dosed orally with 1,8-cineole (50 mg/kg) allowed real-time semi-quantitative measurements reflecting 1,8-cineole blood concentrations. Headspace SPME using 50 microl whole blood collected from possums dosed orally with 1,8-cineole (30 mg/kg) resulted in excellent sensitivity (quantitation limit 1 ng/ml) and reproducibility. Blood concentrations ranged between 1 and 1380 ng/ml. Calibration curves were prepared for two concentration ranges (0.05-10 and 10-400 ng/50 microl) for the analysis of blood concentrations. Both calibration curves were linear (r(2)=0.999 and 0.994, respectively) and the equations for the two concentration ranges were consistent. Copyright 2002 Elsevier Science B.V.
Compact Instruments Measure Helium-Leak Rates
NASA Technical Reports Server (NTRS)
Stout, Stephen; Immer, Christopher
2003-01-01
Compact, lightweight instruments have been developed for measuring small flows of helium and/or detecting helium leaks in solenoid valves when the valves are nominally closed. These instruments do not impede the flows when the valves are nominally open. They can be integrated into newly fabricated valves or retrofitted to previously fabricated valves. Each instrument includes an upstream and a downstream thermistor separated by a heater, plus associated analog and digital heater-control, signal- conditioning, and data-processing circuits. The thermistors and heater are off-the-shelf surface mount components mounted on a circuit board in the flow path. The operation of the instrument is based on a well-established thermal mass-flow-measurement technique: Convection by the flow that one seeks to measure gives rise to transfer of heat from the heater to the downstream thermistor. The temperature difference measured by the thermistors is directly related to the rate of flow. The calibration curve from temperature gradient to helium flow is closely approximated via fifth-order polynomial. A microprocessor that is part of the electronic circuitry implements the calibration curve to compute the flow rate from the thermistor readings.
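A sketch of the kind of polynomial calibration described above, mapping the measured thermistor temperature difference to helium flow rate with a fifth-order fit; the calibration points below are invented placeholders, not the instrument's actual data.

```python
# Fifth-order polynomial calibration: temperature difference (K) -> helium flow rate (sccm).
# Calibration pairs below are invented placeholders for illustration.
import numpy as np

delta_T = np.array([0.02, 0.05, 0.10, 0.20, 0.35, 0.55, 0.80, 1.10])   # measured gradient, K
flow    = np.array([0.5,  1.2,  2.6,  5.4,  9.8, 15.9, 23.8, 33.5])    # reference flow, sccm

coeffs = np.polyfit(delta_T, flow, deg=5)      # least-squares fit of a 5th-order polynomial
calibration = np.poly1d(coeffs)

# Apply the calibration curve to a new thermistor reading (as the microprocessor would).
reading = 0.42
print(f"Estimated flow at dT = {reading} K: {calibration(reading):.2f} sccm")
```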
Pierini, Gastón Darío; Pinto, Victor Hugo A; Maia, Clarissa G C; Fragoso, Wallace D; Reboucas, Julio S; Centurión, María Eugenia; Pistonesi, Marcelo Fabián; Di Nezio, María Susana
2017-11-01
The quantification of zinc in over-the-counter drugs such as commercial propolis extracts by a molecular fluorescence technique using meso-tetrakis(4-carboxyphenyl)porphyrin (H2TCPP4) was developed for the first time. The calibration curve is linear from 6.60 to 100 nmol L−1 of Zn2+. The detection and quantification limits were 6.22 nmol L−1 and 19.0 nmol L−1, respectively. The reproducibility and repeatability, calculated as the percentage variation of the slopes of seven calibration curves, were 6.75% and 4.61%, respectively. Commercial propolis extract samples from four Brazilian states were analyzed, and the results (0.329-0.797 mg/100 mL) obtained with this method are in good agreement with those obtained with the Atomic Absorption Spectroscopy (AAS) technique. The method is simple, fast and low-cost, and allows the analysis of the samples without pretreatment. Moreover, a major advantage is that the Zn-porphyrin complex is fluorescent, which promotes the selectivity and sensitivity of the method. Copyright © 2017 John Wiley & Sons, Ltd.
Kosaka, Ryo; Fukuda, Kyohei; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi
2013-01-01
In order to monitor the condition of a patient using a left ventricular assist system (LVAS), blood flow should be measured. However, the reliable determination of blood-flow rate has not been established. The purpose of the present study is to develop a noninvasive blood-flow meter using a curved cannula with zero compensation for an axial flow blood pump. The flow meter uses the centrifugal force generated by the flow rate in the curved cannula. Two strain gauges served as sensors. The first gauges were attached to the curved area to measure static pressure and centrifugal force, and the second gauges were attached to straight area to measure static pressure. The flow rate was determined by the differences in output from the two gauges. The zero compensation was constructed based on the consideration that the flow rate could be estimated during the initial driving condition and the ventricular suction condition without using the flow meter. A mock circulation loop was constructed in order to evaluate the measurement performance of the developed flow meter with zero compensation. As a result, the zero compensation worked effectively for the initial calibration and the zero-drift of the measured flow rate. We confirmed that the developed flow meter using a curved cannula with zero compensation was able to accurately measure the flow rate continuously and noninvasively.
NASA Astrophysics Data System (ADS)
Saat, Ahmad; Hamzah, Zaini; Yusop, Mohammad Fariz; Zainal, Muhd Amiruddin
2010-07-01
The detection efficiency of a gamma-ray spectrometry system depends on, among other factors, the energy, the sample and detector geometry, and the volume and density of the samples. In the present study, efficiency calibration of a newly acquired (August 2008) HPGe gamma-ray spectrometry system was carried out for four sample container geometries, namely Marinelli beaker, disc, cylindrical beaker and vial, normally used for the determination of gamma-ray activity in environmental samples. Calibration standards were prepared by homogenizing known amounts of analytical grade uranium trioxide ore in plain flour in the respective containers. The ore produces gamma-rays with energies ranging from 53 keV to 1001 keV. Analytical grade potassium chloride standards were prepared to determine the detection efficiency for the 1460 keV gamma-ray emitted by the potassium isotope K-40. Plots of detection efficiency against gamma-ray energy for the four sample geometries were found to fit smoothly to a general form ε = A·E^a + B·E^b, where ε is the efficiency, E is the energy in keV, and A, B, a and b are constants that depend on the sample geometry. All calibration curves showed the presence of a "knee" at about 180 keV. Comparison between the four geometries showed that the efficiency of the Marinelli beaker is higher than those of the cylindrical beaker and vial, while the disc geometry showed the lowest efficiency.
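A sketch of fitting the reported functional form ε = A·E^a + B·E^b to efficiency-energy data with non-linear least squares; the data points and starting values are assumptions for illustration only and do not reproduce the study's calibration curves.

```python
# Fit the efficiency model eps(E) = A*E**a + B*E**b to (energy, efficiency) pairs.
# The data points below are invented placeholders spanning roughly 53-1460 keV.
import numpy as np
from scipy.optimize import curve_fit

def efficiency(E, A, a, B, b):
    return A * E**a + B * E**b

energy = np.array([53., 93., 186., 352., 609., 1001., 1460.])                 # keV
eps    = np.array([0.0186, 0.0198, 0.0175, 0.0142, 0.0114, 0.0092, 0.0077])   # counts/photon

p0 = (0.5, -0.6, -1.5, -1.1)                    # rough starting guess for A, a, B, b
params, cov = curve_fit(efficiency, energy, eps, p0=p0, maxfev=20000)
A, a, B, b = params
print(f"eps(E) = {A:.3g}*E^{a:.3f} + {B:.3g}*E^{b:.3f}")

# Evaluate the fitted efficiency curve at an arbitrary gamma-ray energy, e.g. 662 keV.
print(f"Efficiency at 662 keV: {efficiency(662.0, *params):.4f}")
```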
Improved Strategies and Optimization of Calibration Models for Real-time PCR Absolute Quantification
Real-time PCR absolute quantification applications rely on the use of standard curves to make estimates of DNA target concentrations in unknown samples. Traditional absolute quantification approaches dictate that a standard curve must accompany each experimental run. However, t...
Holographic Entanglement Entropy, SUSY & Calibrations
NASA Astrophysics Data System (ADS)
Colgáin, Eoin Ó.
2018-01-01
Holographic calculations of entanglement entropy boil down to identifying minimal surfaces in curved spacetimes. This generically entails solving second-order equations. For higher-dimensional AdS geometries, we demonstrate that supersymmetry and calibrations reduce the problem to first-order equations. We note that minimal surfaces corresponding to disks preserve supersymmetry, whereas strips do not.
NASA Astrophysics Data System (ADS)
Zheng, Lijuan; Cao, Fan; Xiu, Junshan; Bai, Xueshi; Motto-Ros, Vincent; Gilon, Nicole; Zeng, Heping; Yu, Jin
2014-09-01
Laser-induced breakdown spectroscopy (LIBS) provides a technique to directly determine metals in viscous liquids and especially in lubricating oils. A specific laser ablation configuration of a thin layer of oil applied on the surface of a pure aluminum target was used to evaluate the analytical figures of merit of LIBS for elemental analysis of lubricating oils. Among the analyzed oils, there were a certified 75cSt blank mineral oil, 8 virgin lubricating oils (synthetic, semi-synthetic, or mineral and of 2 different manufacturers), 5 used oils (corresponding to 5 among the 8 virgin oils), and a cooking oil. The certified blank oil and 4 virgin lubricating oils were spiked with metallo-organic standards to obtain laboratory reference samples with different oil matrix. We first established calibration curves for 3 elements, Fe, Cr, Ni, with the 5 sets of laboratory reference samples in order to evaluate the matrix effect by the comparison among the different oils. Our results show that generalized calibration curves can be built for the 3 analyzed elements by merging the measured line intensities of the 5 sets of spiked oil samples. Such merged calibration curves with good correlation of the merged data are only possible if no significant matrix effect affects the measurements of the different oils. In the second step, we spiked the remaining 4 virgin oils and the cooking oils with Fe, Cr and Ni. The accuracy and the precision of the concentration determination in these prepared oils were then evaluated using the generalized calibration curves. The concentrations of metallic elements in the 5 used lubricating oils were finally determined.
Rahman, Md Musfiqur; Abd El-Aty, A M; Na, Tae-Woong; Park, Joon-Seong; Kabir, Md Humayun; Chung, Hyung Suk; Lee, Han Sol; Shin, Ho-Chul; Shim, Jae-Han
2017-08-15
A simultaneous analytical method was developed for the determination of methiocarb and its metabolites, methiocarb sulfoxide and methiocarb sulfone, in five livestock products (chicken, pork, beef, table egg, and milk) using liquid chromatography-tandem mass spectrometry. Due to the rapid degradation of methiocarb and its metabolites, a quick sample preparation method was developed using acetonitrile and salts followed by purification via dispersive solid-phase extraction (d-SPE). Seven-point calibration curves were constructed separately in each matrix, and good linearity was observed in each matrix-matched calibration curve with a coefficient of determination (R²) ≥ 0.991. The limits of detection and quantification were 0.0016 and 0.005 mg/kg, respectively, for all tested analytes in the various matrices. The method was validated in triplicate at three fortification levels (equivalent to 1, 2, and 10 times the limit of quantification), with recoveries ranging between 76.4 and 118.0% and a relative standard deviation ≤ 10.0%. The developed method was successfully applied to market samples, and no residues of methiocarb and/or its metabolites were observed in the tested samples. In sum, this method can be applied for the routine analysis of methiocarb and its metabolites in foods of animal origin. Copyright © 2017 Elsevier B.V. All rights reserved.
An Investigation of Acoustic Cavitation Produced by Pulsed Ultrasound
1987-12-01
PVDF Hydrophone Sensitivity Calibration Curves ... Description of test and calibration technique: we chose the reciprocity technique for calibration ... Naval Postgraduate School, Monterey, California. Thesis: An Investigation of Acoustic Cavitation Produced by Pulsed Ultrasound, by Robert L. Bruce.
Dietrich, Markus; Hagen, Gunter; Reitmeier, Willibald; Burger, Katharina; Hien, Markus; Grass, Philippe; Kubinski, David; Visser, Jaco; Moos, Ralf
2017-01-01
Current developments in exhaust gas aftertreatment have led to considerable mistrust of diesel-driven passenger cars because of their excessively high NOx emissions. Selective catalytic reduction (SCR) with ammonia (NH3) as the reducing agent is the only approach available today that is capable of meeting upcoming emission limits. Therefore, radio-frequency-based (RF) catalyst state determination to monitor the NH3 loading on SCR catalysts has great potential for emission reduction. Recent work on this topic proved the basic capability of this technique under realistic conditions on an engine test bench. In these studies, an RF system calibration for the serial-type SCR catalyst Cu-SSZ-13 was developed and different approaches for temperature-dependent NH3 storage were determined. This paper continues this work and uses a fully calibrated RF-SCR system under transient conditions to compare different directly measured and controlled NH3 storage levels and NH3 target curves. It was clearly demonstrated that the right NH3 target curve, together with direct control to the desired level by the RF system, allows the SCR system to operate with the maximum possible NOx conversion efficiency and without NH3 slip. PMID:29182589
[Developing a predictive model for the caregiver strain index].
Álvarez-Tello, Margarita; Casado-Mejía, Rosa; Praena-Fernández, Juan Manuel; Ortega-Calvo, Manuel
Home care of patients with multiple morbidities is an increasingly common occurrence. The caregiver strain index is a tool, in the form of a questionnaire, designed to measure the perceived burden of those who care for their family members. The aim of this study was to construct a diagnostic nomogram of informal caregiver burden using data from a predictive model. The model was built using binary logistic regression with the questionnaire items as dichotomous factors. The dependent variable was the final score obtained with the questionnaire, categorised in accordance with the literature. Scores between 0 and 6 were labelled as "no" (no caregiver stress) and scores of 7 or greater as "yes". The R statistical software, version 3.1.1, was used. To construct confidence intervals for the ROC curve, 2000 bootstrap replicates were used. A sample of 67 caregivers was obtained. A diagnostic nomogram was constructed together with its calibration graph (scaled Brier score = 0.686, Nagelkerke R² = 0.791) and the corresponding ROC curve (area under the curve = 0.962). The predictive model generated using binary logistic regression and the nomogram contain four items (1, 4, 5 and 9) of the questionnaire. R plotting functions provide a very good solution for validating a model of this kind. The area under the ROC curve (0.96; 95% CI: 0.941-0.994) indicates a high discriminative value. Calibration also shows high goodness-of-fit values, suggesting that the model may be clinically useful in community nursing and geriatric establishments. Copyright © 2015 SEGG. Published by Elsevier España, S.L.U. All rights reserved.
Radiance calibration of the High Altitude Observatory white-light coronagraph on Skylab
NASA Technical Reports Server (NTRS)
Poland, A. I.; Macqueen, R. M.; Munro, R. H.; Gosling, J. T.
1977-01-01
The processing of over 35,000 photographs of the solar corona obtained by the white-light coronagraph on Skylab is described. Calibration of the vast amount of data was complicated by the temporal effects of radiation fog and latent image loss. These effects were compensated for by imaging a calibration step wedge on each data frame. Absolute calibration of the wedge was accomplished through comparison with a set of previously calibrated glass opal filters. The analysis employed average characteristic curves derived from measurements of step wedges from many frames within a given camera half-load. The net absolute accuracy of a given radiance measurement is estimated to be 20%.
The TESS science processing operations center
NASA Astrophysics Data System (ADS)
Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd; Smith, Jeffrey C.; Caldwell, Douglas A.; Chacon, A. D.; Henze, Christopher; Heiges, Cory; Latham, David W.; Morgan, Edward; Swade, Daryl; Rinehart, Stephen; Vanderspek, Roland
2016-08-01
The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover 1,000 small planets with Rp < 4 R⊕ and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).
A new method for automated dynamic calibration of tipping-bucket rain gauges
Humphrey, M.D.; Istok, J.D.; Lee, J.Y.; Hevesi, J.A.; Flint, A.L.
1997-01-01
Existing methods for dynamic calibration of tipping-bucket rain gauges (TBRs) can be time consuming and labor intensive. A new automated dynamic calibration system has been developed to calibrate TBRs with minimal effort. The system consists of a programmable pump, datalogger, digital balance, and computer. Calibration is performed in two steps: 1) pump calibration and 2) rain gauge calibration. Pump calibration ensures precise control of water flow rates delivered to the rain gauge funnel; rain gauge calibration ensures precise conversion of bucket tip times to actual rainfall rates. Calibration of the pump and one rain gauge for 10 selected pump rates typically requires about 8 h. Data files generated during rain gauge calibration are used to compute rainfall intensities and amounts from a record of bucket tip times collected in the field. The system was tested using 5 types of commercial TBRs (15.2-, 20.3-, and 30.5-cm diameters; 0.1-, 0.2-, and 1.0-mm resolutions) and using 14 TBRs of a single type (20.3-cm diameter; 0.1-mm resolution). Ten pump rates ranging from 3 to 154 mL min-1 were used to calibrate the TBRs and represented rainfall rates between 6 and 254 mm h-1 depending on the rain gauge diameter. All pump calibration results were very linear with R2 values greater than 0.99. All rain gauges exhibited large nonlinear underestimation errors (between 5% and 29%) that decreased with increasing rain gauge resolution and increased with increasing rainfall rate, especially for rates greater than 50 mm h-1. Calibration curves of bucket tip time against the reciprocal of the true pump rate for all rain gauges also were linear with R2 values of 0.99. Calibration data for the 14 rain gauges of the same type were very similar, as indicated by slope values that were within 14% of each other and ranged from about 367 to 417 s mm h-1. The developed system can calibrate TBRs efficiently, accurately, and virtually unattended and could be modified for use with other rain gauge designs. The system is now in routine use to calibrate TBRs in a large rainfall collection network at Yucca Mountain, Nevada.
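A sketch of the calibration relationship described above, fitting bucket tip time against the reciprocal of the true (pump) rate and then converting a field record of inter-tip times into rainfall intensities; the numbers are placeholders, not data from the study.

```python
# TBR dynamic calibration: linear fit of inter-tip time (s) vs. 1/true rate (h/mm),
# then conversion of field tip intervals to rainfall intensity. Placeholder numbers.
import numpy as np
from scipy.stats import linregress

true_rate = np.array([6., 12., 25., 50., 100., 150., 200., 254.])  # mm/h from pump calibration
tip_time  = np.array([60.9, 30.8, 15.1, 7.9, 4.2, 2.9, 2.3, 1.9])  # s between tips (0.1 mm bucket)

fit = linregress(1.0 / true_rate, tip_time)     # tip_time ~ slope * (1/rate) + intercept
print(f"slope = {fit.slope:.1f} s mm/h, intercept = {fit.intercept:.2f} s, R^2 = {fit.rvalue**2:.3f}")

def intensity_from_tip_interval(dt_seconds):
    """Invert the calibration: rainfall intensity (mm/h) for an observed inter-tip time."""
    return fit.slope / (dt_seconds - fit.intercept)

field_intervals = np.array([3.5, 10.0, 28.0])   # s, hypothetical field record
print(intensity_from_tip_interval(field_intervals))
```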
Calibration of hydrological models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.
2010-12-01
The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable importance of the performance with flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application e.g. using more/less EPs at high/low flows. While the new method is less sensitive to epistemic input/output errors than the normal use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow. The results suggest that the new calibration method can be useful when observation time periods for discharge and model input data do not overlap. The new method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.
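A minimal sketch of constructing a flow-duration curve and checking a simulation against limits of acceptability at selected evaluation points, in the spirit of the method described above; the discharge series, evaluation points and acceptability bounds are invented for illustration, not the WASMOD or TOPMODEL setups.

```python
# Build a flow-duration curve (FDC) from a daily discharge series and check a simulation
# against limits of acceptability at a few evaluation points (EPs). All data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
q_obs = np.exp(rng.normal(1.0, 1.0, 3650))          # synthetic "observed" daily discharge (m3/s)
q_sim = q_obs * rng.normal(1.05, 0.15, q_obs.size)  # synthetic "simulated" discharge

def flow_duration_curve(q):
    """Return exceedance probabilities and the corresponding sorted discharges."""
    q_sorted = np.sort(q)[::-1]                      # descending: largest flow first
    exceedance = np.arange(1, q.size + 1) / (q.size + 1.0)
    return exceedance, q_sorted

exc_obs, fdc_obs = flow_duration_curve(q_obs)
exc_sim, fdc_sim = flow_duration_curve(q_sim)

# Evaluation points chosen on the exceedance axis (high, median and low flows).
eps = np.array([0.05, 0.5, 0.95])
obs_at_eps = np.interp(eps, exc_obs, fdc_obs)
sim_at_eps = np.interp(eps, exc_sim, fdc_sim)

# Limits of acceptability: here simply +/-20% of the observed FDC value, a stand-in for
# the discharge-uncertainty-derived limits used in the extended GLUE approach.
lower, upper = 0.8 * obs_at_eps, 1.2 * obs_at_eps
accepted = np.all((sim_at_eps >= lower) & (sim_at_eps <= upper))
print("simulation accepted:", accepted)
```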
Papoutsis, Ioannis I; Athanaselis, Sotirios A; Nikolaou, Panagiota D; Pistos, Constantinos M; Spiliopoulou, Chara A; Maravelias, Constantinos P
2010-08-01
Benzodiazepines are used widely in daily clinical practice, due to their multiple pharmacological actions. The frequent problems associated with the wide use of benzodiazepines, as well as the multiple incidents of poisonings, led to the necessity for the development of a precise, sensitive and rapid method for the simultaneous determination of the 23 most commonly used benzodiazepines (diazepam, nordiazepam, oxazepam, bromazepam, alprazolam, lorazepam, medazepam, flurazepam, fludiazepam, tetrazepam, chlordiazepoxide, clobazam, midazolam, flunitrazepam, 7-amino-flunitrazepam, triazolam, prazepam, nimetazepam, nitrazepam, temazepam, lormetazepam, clonazepam, camazepam) in blood. A gas chromatographic method combined with mass spectrometric detection was developed, optimized and validated for the determination of the above substances. This method includes liquid-liquid extraction with chloroform at pH 9 and two stages of derivatization using tetramethylammonium hydroxide and propyliodide (propylation), as well as a mixture of triethylamine:propionic anhydride (propionylation). The recoveries were higher than 74% for all the benzodiazepines. The calibration curves were linear within the dynamic range of each benzodiazepine with a correlation coefficient higher than 0.9981. The limits of detection and quantification for each analyte were statistically calculated from the relative calibration curves. Accuracy and precision were also calculated and were found to be less than 8.5% and 11.1%, respectively. The developed method was successfully applied for the investigation of both forensic and clinical toxicological cases of accidental and suicidal poisoning. Copyright (c) 2010 Elsevier B.V. All rights reserved.
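The statistical derivation of detection and quantification limits from a calibration curve is not spelled out above; a common convention (and only a guess at the one used here) is LOD = 3.3·s/slope and LOQ = 10·s/slope, with s the residual standard deviation of the regression, sketched below with placeholder data.

```python
# Estimate LOD/LOQ from a linear calibration curve using the residual standard deviation.
# Placeholder calibration data (concentration in ng/mL vs. detector response).
import numpy as np
from scipy.stats import linregress

conc     = np.array([5., 10., 25., 50., 100., 250., 500.])         # ng/mL
response = np.array([0.9, 2.1, 5.2, 10.3, 20.8, 51.5, 103.2])      # peak-area ratio

fit = linregress(conc, response)
residuals = response - (fit.slope * conc + fit.intercept)
s_res = np.sqrt(np.sum(residuals**2) / (conc.size - 2))            # residual std. deviation

lod = 3.3 * s_res / fit.slope        # ICH-style limit of detection
loq = 10.0 * s_res / fit.slope       # ICH-style limit of quantification
print(f"r = {fit.rvalue:.4f}, LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL")
```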
Investigating quantitation of phosphorylation using MALDI-TOF mass spectrometry.
Parker, Laurie; Engel-Hall, Aaron; Drew, Kevin; Steinhardt, George; Helseth, Donald L; Jabon, David; McMurry, Timothy; Angulo, David S; Kron, Stephen J
2008-04-01
Despite advances in methods and instrumentation for analysis of phosphopeptides using mass spectrometry, it is still difficult to quantify the extent of phosphorylation of a substrate because of physiochemical differences between unphosphorylated and phosphorylated peptides. Here we report experiments to investigate those differences using MALDI-TOF mass spectrometry for a set of synthetic peptides by creating calibration curves of known input ratios of peptides/phosphopeptides and analyzing their resulting signal intensity ratios. These calibration curves reveal subtleties in sequence-dependent differences for relative desorption/ionization efficiencies that cannot be seen from single-point calibrations. We found that the behaviors were reproducible with a variability of 5-10% for observed phosphopeptide signal. Although these data allow us to begin addressing the issues related to modeling these properties and predicting relative signal strengths for other peptide sequences, it is clear that this behavior is highly complex and needs to be further explored. John Wiley & Sons, Ltd
Investigating quantitation of phosphorylation using MALDI-TOF mass spectrometry
Parker, Laurie; Engel-Hall, Aaron; Drew, Kevin; Steinhardt, George; Helseth, Donald L.; Jabon, David; McMurry, Timothy; Angulo, David S.; Kron, Stephen J.
2010-01-01
Despite advances in methods and instrumentation for analysis of phosphopeptides using mass spectrometry, it is still difficult to quantify the extent of phosphorylation of a substrate due to physiochemical differences between unphosphorylated and phosphorylated peptides. Here we report experiments to investigate those differences using MALDI-TOF mass spectrometry for a set of synthetic peptides by creating calibration curves of known input ratios of peptides/phosphopeptides and analyzing their resulting signal intensity ratios. These calibration curves reveal subtleties in sequence-dependent differences for relative desorption/ionization efficiencies that cannot be seen from single-point calibrations. We found that the behaviors were reproducible with a variability of 5–10% for observed phosphopeptide signal. Although these data allow us to begin addressing the issues related to modeling these properties and predicting relative signal strengths for other peptide sequences, it is clear this behavior is highly complex and needs to be further explored. PMID:18064576
Fang, Cheng; Butler, David Lee
2013-05-01
In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical terms, the number of samples can be minimized by eliminating the redundant equations among those formed from the experimental data array. The section lengths of the artefact are measured at arranged positions, from which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the uncertainty of the measurement can be reduced to 50%.
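A sketch of the final step described above: once position-dependent errors have been solved at the arranged positions, a spline through those points gives a continuous error-compensation curve. The positions and error values below are invented placeholders.

```python
# Error-compensation curve by spline interpolation of discrete position errors.
# Positions (mm along the CMM axis) and solved errors (micrometres) are placeholders.
import numpy as np
from scipy.interpolate import CubicSpline

positions = np.array([0., 100., 200., 300., 400., 500.])       # mm
errors    = np.array([0.0, 1.8, 2.9, 2.1, 0.7, -0.9])          # um, from the equation set

compensation = CubicSpline(positions, errors)

def corrected_reading(raw_mm):
    """Subtract the interpolated error (converted to mm) from a raw CMM reading."""
    return raw_mm - compensation(raw_mm) * 1e-3

print(corrected_reading(250.0))
```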
Debode, Frédéric; Marien, Aline; Janssen, Eric; Berben, Gilbert
2010-03-01
Five double-target multiplex plasmids to be used as calibrants for GMO quantification were constructed. They were composed of two modified targets associated in tandem in the same plasmid: (1) a part of the soybean lectin gene and (2) a part of the transgenic construction of the GTS40-3-2 event. Modifications were performed in such a way that each target could be amplified with the same primers as those for the original target from which they were derived but such that each was specifically detected with an appropriate probe. Sequence modifications were done to keep the parameters of the new target as similar as possible to those of its original sequence. The plasmids were designed to be used either in separate reactions or in multiplex reactions. Evidence is given that with each of the five different plasmids used in separate wells as a calibrant for a different copy number, a calibration curve can be built. When the targets were amplified together (in multiplex) and at different concentrations inside the same well, the calibration curves showed that there was a competition effect between the targets and this limits the range of copy numbers for calibration over a maximum of 2 orders of magnitude. Another possible application of multiplex plasmids is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, David R.; Brown, Richard S.; Lepla, Ken
One of the practical problems with quantifying the amount of energy used by fish implanted with electromyogram (EMG) radio transmitters is that the signals emitted by the transmitter provide only a relative index of activity unless they are calibrated to the swimming speed of the fish. Ideally calibration would be conducted for each fish before it is released, but this is often not possible and calibration curves derived from more than one fish are used to interpret EMG signals from individuals which have not been calibrated. We tested the validity of this approach by comparing EMG data within three groups of three wild juvenile white sturgeon Acipenser transmontanus implanted with the same EMG radio transmitter. We also tested an additional six fish which were implanted with separate EMG transmitters. Within each group, a single EMG radio transmitter usually did not produce similar results in different fish. Grouping EMG signals among fish produced less accurate results than having individual EMG-swim speed relationships for each fish. It is unknown whether these differences were a result of different swimming performances among individual fish or inconsistencies in the placement or function of the EMG transmitters. In either case, our results suggest that caution should be used when applying calibration curves from one group of fish to another group of uncalibrated fish.
An update on 'dose calibrator' settings for nuclides used in nuclear medicine.
Bergeron, Denis E; Cessna, Jeffrey T
2018-06-01
Most clinical measurements of radioactivity, whether for therapeutic or imaging nuclides, rely on commercial re-entrant ionization chambers ('dose calibrators'). The National Institute of Standards and Technology (NIST) maintains a battery of representative calibrators and works to link calibration settings ('dial settings') to primary radioactivity standards. Here, we provide a summary of NIST-determined dial settings for 22 radionuclides. We collected previously published dial settings and determined some new ones using either the calibration curve method or the dialing-in approach. The dial settings with their uncertainties are collected in a comprehensive table. In general, current manufacturer-provided calibration settings give activities that agree with National Institute of Standards and Technology standards to within a few percent.
Rangeland biomass estimation demonstration. [Texas Experimental Ranch]
NASA Technical Reports Server (NTRS)
Newton, R. W. (Principal Investigator); Boyd, W. E.; Clark, B. V.
1982-01-01
Because of their sensitivity to chlorophyll density, green leaf density, and leaf water density, two hand-held radiometers which have sensor bands coinciding with thematic mapper bands 3, 4, and 5 were used to calibrate green biomass to LANDSAT spectral ratios as a step towards using portable radiometers to speed up ground data acquisition. Two field reflectance panels monitored incoming radiation concurrently with sampling. Software routines were developed and used to extract data from uncorrected tapes of MSS data provided in NASA LANDSAT universal format. A LANDSAT biomass calibration curve estimated the range biomass over a four scene area and displayed this information spatially as a product in a format of use to ranchers. The regional biomass contour map is discussed.
The use of megavoltage CT (MVCT) images for dose recomputations
NASA Astrophysics Data System (ADS)
Langen, K. M.; Meeks, S. L.; Poole, D. O.; Wagner, T. H.; Willoughby, T. R.; Kupelian, P. A.; Ruchala, K. J.; Haimerl, J.; Olivera, G. H.
2005-09-01
Megavoltage CT (MVCT) images of patients are acquired daily on a helical tomotherapy unit (TomoTherapy, Inc., Madison, WI). While these images are used primarily for patient alignment, they can also be used to recalculate the treatment plan for the patient anatomy of the day. The use of MVCT images for dose computations requires a reliable CT number to electron density calibration curve. In this work, we tested the stability of the MVCT numbers by determining the variation of this calibration with spatial arrangement of the phantom, time and MVCT acquisition parameters. The two calibration curves that represent the largest variations were applied to six clinical MVCT images for recalculations to test for dosimetric uncertainties. Among the six cases tested, the largest difference in any of the dosimetric endpoints was 3.1% but more typically the dosimetric endpoints varied by less than 2%. Using an average CT to electron density calibration and a thorax phantom, a series of end-to-end tests were run. Using a rigid phantom, recalculated dose volume histograms (DVHs) were compared with plan DVHs. Using a deformed phantom, recalculated point dose variations were compared with measurements. The MVCT field of view is limited and the image space outside this field of view can be filled in with information from the planning kVCT. This merging technique was tested for a rigid phantom. Finally, the influence of the MVCT slice thickness on the dose recalculation was investigated. The dosimetric differences observed in all phantom tests were within the range of dosimetric uncertainties observed due to variations in the calibration curve. The use of MVCT images allows the assessment of daily dose distributions with an accuracy that is similar to that of the initial kVCT dose calculation.
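A sketch of applying a CT-number-to-relative-electron-density calibration curve to image data by piecewise-linear interpolation, the generic operation underlying the MVCT recalculations discussed above; the calibration points are typical-looking placeholders, not the tomotherapy unit's curve.

```python
# Map CT numbers (HU) to relative electron density via a piecewise-linear calibration curve.
# Calibration points below are invented placeholders for illustration.
import numpy as np

hu_points  = np.array([-1000., -500., 0., 500., 1000., 2000.])    # CT number (HU)
red_points = np.array([0.0,    0.5,  1.0, 1.28, 1.55,  2.10])     # relative electron density

def hu_to_red(hu_image):
    """Apply the calibration curve voxel-wise (linear interpolation, clipped at the ends)."""
    return np.interp(hu_image, hu_points, red_points)

mvct_slice = np.array([[-1000., -20., 40.], [300., 900., 1500.]])  # toy 2x3 "image"
print(hu_to_red(mvct_slice))
```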
Three-dimensional digitizer for the footwear industry
NASA Astrophysics Data System (ADS)
Gonzalez, Francisco; Campoy, Pascual; Aracil, Rafael; Penafiel, Francisco; Sebastian, Jose M.
1993-12-01
This paper presents a system developed for digitizing 3D objects in the footwear industry (e.g. moulds, soles, heels) and introducing them into a CAD system for further manipulation and the production of rapid prototypes. The system is based on acquiring a sequence of images of the projection of a laser line onto the 3D object as it moves in front of the laser beam and the camera. The projected beam illuminates a 3D curve on the surface of the object, whose image is processed to obtain the 3D coordinates of every point on that curve according to a previous calibration of the system. The coordinates of the points on all the curves are analyzed and combined to build a 3D wire-frame model of the object, which is imported into a CAD station for further design and connection to the rapid prototyping machinery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacFarlane, Michael; Battista, Jerry; Chen, Jeff
Purpose: To develop a radiotherapy dose tracking and plan evaluation technique using cone-beam computed tomography (CBCT) images. Methods: We developed a patient-specific method of calibrating CBCT image sets for dose calculation. The planning CT was first registered with the CBCT using deformable image registration (DIR). A scatter plot was generated between the CT numbers of the planning CT and the CBCT for each slice. The CBCT calibration curve was obtained by least-squares fitting of the data, and applied to each CBCT slice. The calibrated CBCT was then merged with the original planning CT to extend the small field of view of the CBCT. Finally, the treatment plan was copied to the merged CT for dose tracking and plan evaluation. The proposed patient-specific calibration method was also compared to two methods proposed in the literature. To evaluate the accuracy of each technique, 15 head-and-neck patients requiring plan adaptation were arbitrarily selected from our institution. The original plan was calculated on each method's data set, including a second planning CT acquired within 48 hours of the CBCT (serving as the gold standard). Clinically relevant dose metrics and 3D gamma analysis of dose distributions were compared between the different techniques. Results: Compared to the gold standard of using planning CTs, the patient-specific CBCT calibration method was shown to provide promising results, with gamma pass rates above 95% and average dose metric agreement within 2.5%. Conclusions: The patient-specific CBCT calibration method could potentially be used for on-line dose tracking and plan evaluation, without requiring a re-planning CT session.
IMRT plan verification with EBT2 and EBT3 films compared to PTW 2D-ARRAY seven29
NASA Astrophysics Data System (ADS)
Hanušová, Tereza; Horáková, Ivana; Koniarová, Irena
2017-11-01
The aim of this study was to compare dosimetry with Gafchromic EBT2 and EBT3 films to the ion chamber array PTW seven29 in terms of their performance in clinical IMRT plan verification. A methodology for film processing and calibration was developed. Calibration curves were obtained in MATLAB and in FilmQA Pro. The best calibration curve was then used to calibrate EBT2 and EBT3 films for IMRT plan verification measurements. Films were placed in several coronal planes in an RW3 slab phantom and irradiated with a clinical IMRT plan for prostate and lymph nodes using 18 MV photon beams. Individual fields were tested and irradiated with the gantry at 0°. Results were evaluated using gamma analysis with 3%/3 mm criteria in OmniPro I'mRT version 1.7. The same measurements were performed with the ion chamber array PTW seven29 in RW3 slabs (different depths) and in the OCTAVIUS II phantom (isocenter depth only; both original and nominal gantry angles). Results were evaluated in PTW VeriSoft version 3.1 using the same criteria. Altogether, 45 IMRT planes were tested with film and 25 planes with the PTW 2D-ARRAY seven29. Film measurements showed different results from ion chamber array measurements. With the PTW 2D-ARRAY seven29, worse results were obtained when the detector was placed in the OCTAVIUS phantom than in the RW3 slab phantom, and the worst pass rates were seen for rotational measurements. EBT2 films showed inconsistent results and could differ significantly for different planes in one field. EBT3 films seemed to give the best results of all the tested configurations.
NASA Astrophysics Data System (ADS)
Lu, Yuzhen; Lu, Renfu
2017-05-01
Three-dimensional (3-D) shape information is valuable for fruit quality evaluation. This study was aimed at developing phase analysis techniques for reconstruction of the 3-D surface of fruit from the pattern images acquired by a structured-illumination reflectance imaging (SIRI) system. Phase-shifted sinusoidal patterns, distorted by the fruit geometry, were acquired and processed through phase demodulation, phase unwrapping and other post-processing procedures to obtain phase difference maps relative to the phase of a reference plane. The phase maps were then transformed into height profiles and 3-D shapes in a world coordinate system based on phase-to-height and in-plane calibrations. A reference plane-based approach, coupled with a curve fitting technique using polynomials of order 3 or higher, was utilized for phase-to-height calibrations, which achieved superior accuracies with root-mean-squared errors (RMSEs) of 0.027-0.033 mm for a height measurement range of 0-91 mm. The 3rd-order polynomial curve fitting technique was further tested on two reference blocks with known heights, resulting in relative errors of 3.75% and 4.16%. In-plane calibrations were performed by solving a linear system formed by a number of control points in a calibration object, which yielded an RMSE of 0.311 mm. Tests of the calibrated system for reconstructing the surface of apple samples showed that surface concavities (i.e., stem/calyx regions) could be easily discriminated from bruises in the phase difference maps, reconstructed height profiles and the 3-D shape of apples. This study has laid a foundation for using SIRI for 3-D shape measurement, and thus expanded the capability of the technique for quality evaluation of horticultural products. Further research is needed to utilize the phase analysis techniques for stem/calyx detection of apples, and to optimize the phase demodulation and unwrapping algorithms for faster and more reliable detection.
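A sketch of a reference-plane-based phase-to-height calibration with a 3rd-order polynomial, as described above, applied pixel-wise to a phase-difference map; the calibration samples and map values are invented and do not come from the SIRI system.

```python
# Phase-to-height calibration: fit height = p(phase_difference) with a cubic polynomial
# using targets of known height, then apply it to an unwrapped phase-difference map.
# All numbers below are invented placeholders.
import numpy as np

phase_diff_cal = np.array([0.00, 0.45, 0.92, 1.41, 1.93, 2.48, 3.06])   # rad, vs. reference plane
height_cal     = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])    # mm, known heights

coeffs = np.polyfit(phase_diff_cal, height_cal, deg=3)
phase_to_height = np.poly1d(coeffs)

# Root-mean-squared calibration error over the calibration samples.
rmse = np.sqrt(np.mean((phase_to_height(phase_diff_cal) - height_cal) ** 2))
print(f"calibration RMSE = {rmse:.3f} mm")

# Apply pixel-wise to an (unwrapped) phase-difference map of the object.
phase_map = np.array([[0.1, 0.8], [1.7, 2.9]])      # toy 2x2 map, rad
print(phase_to_height(phase_map))
```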
A scattering methodology for droplet sizing of e-cigarette aerosols.
Pratte, Pascal; Cosandey, Stéphane; Goujon-Ginglinger, Catherine
2016-10-01
Knowledge of the droplet size distribution of inhalable aerosols is important for predicting the aerosol deposition yield at various locations in the human respiratory tract. Optical methodologies are usually preferred over the multi-stage cascade impactor for high-throughput measurements of aerosol particle/droplet size distributions. The objective was to evaluate Laser Aerosol Spectrometer technology, based on a polystyrene latex (PSL) sphere calibration curve, for the experimental determination of droplet size distributions in the diameter range typical of commercial e-cigarette aerosols (147-1361 nm). This calibration procedure was tested for a TSI Laser Aerosol Spectrometer (LAS) operating at a wavelength of 633 nm and assessed against model di-ethyl-hexyl-sebacate (DEHS) droplets and e-cigarette aerosols. The PSL size response was measured, and intra- and between-day standard deviations were calculated. DEHS droplet sizes were underestimated by 15-20% by the LAS when the PSL calibration curve was used; however, the intra- and between-day relative standard deviations were < 3%. This bias is attributed to the fact that the refractive index of the PSL calibration particles differs from that of the test aerosols. This 15-20% does not include the droplet evaporation component, which may reduce droplet size before a measurement is performed. Aerosol concentration was measured accurately, with a maximum uncertainty of 20%. Count median diameters and mass median aerodynamic diameters of selected e-cigarette aerosols ranged from 130-191 nm to 225-293 nm, respectively, similar to published values. The LAS instrument can be used to measure e-cigarette aerosol droplet size distributions with a bias underestimating the expected value by 15-20% when using a precise PSL calibration curve. Controlled variability of DEHS size measurements can be achieved with the LAS system; however, this method can only be applied to test aerosols having a refractive index close to that of the PSL particles used for calibration.
Janssen, Daniël M C; van Kuijk, Sander M J; d'Aumerie, Boudewijn B; Willems, Paul C
2018-05-16
A prediction model for surgical site infection (SSI) after spine surgery was developed in 2014 by Lee et al. This model was developed to compute an individual estimate of the probability of SSI after spine surgery based on the patient's comorbidity profile and the invasiveness of surgery. Before any prediction model can be validly implemented in daily medical practice, it should be externally validated to assess how the prediction model performs in patients sampled independently from the derivation cohort. We included 898 consecutive patients who underwent instrumented thoracolumbar spine surgery. Overall performance was quantified using Nagelkerke's R² statistic, and discriminative ability was quantified as the area under the receiver operating characteristic curve (AUC). We computed the calibration slope of the calibration plot to judge prediction accuracy. Sixty patients developed an SSI. The overall performance of the prediction model in our population was poor: Nagelkerke's R² was 0.01. The AUC was 0.61 (95% confidence interval (CI) 0.54-0.68). The estimated slope of the calibration plot was 0.52. The previously published prediction model showed poor performance in our academic external validation cohort. To predict SSI after instrumented thoracolumbar spine surgery in the present population, a better-fitting prediction model should be developed.
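A sketch of how the external-validation metrics used above (AUC and calibration slope) can be computed from a previously published model's linear predictor; the coefficients and patient data are invented, not those of the Lee et al. model.

```python
# External validation sketch: discrimination (AUC) and calibration slope of an existing
# risk model on new data. Model coefficients and patient data are invented placeholders.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(898, 3))                         # hypothetical predictors
beta, intercept = np.array([0.4, -0.2, 0.1]), -2.6    # "published" model coefficients

lin_pred = intercept + X @ beta                       # linear predictor from the original model
p_pred = 1.0 / (1.0 + np.exp(-lin_pred))              # predicted SSI probability
y = rng.binomial(1, p_pred)                           # simulated observed outcomes

auc = roc_auc_score(y, p_pred)

# Calibration slope: logistic regression of the observed outcome on the linear predictor.
# A slope near 1 indicates good calibration; a slope well below 1 suggests predictions
# that are too extreme for the new population.
cal_fit = sm.Logit(y, sm.add_constant(lin_pred)).fit(disp=0)
cal_slope = cal_fit.params[1]
print(f"AUC = {auc:.2f}, calibration slope = {cal_slope:.2f}")
```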
Measurement and models of bent KAP(001) crystal integrated reflectivity and resolution (invited)
NASA Astrophysics Data System (ADS)
Loisel, G. P.; Wu, M.; Stolte, W.; Kruschwitz, C.; Lake, P.; Dunham, G. S.; Bailey, J. E.; Rochau, G. A.
2016-11-01
The Advanced Light Source beamline-9.3.1 x-rays are used to calibrate the rocking curve of bent potassium acid phthalate (KAP) crystals in the 2.3-4.5 keV photon-energy range. Crystals are bent on a cylindrically convex substrate with a radius of curvature ranging from 2 to 9 in. and also including the flat case to observe the effect of bending on the KAP spectrometric properties. As the bending radius increases, the crystal reflectivity converges to the mosaic crystal response. The X-ray Oriented Programs (xop) multi-lamellar model of bent crystals is used to model the rocking curve of these crystals and the calibration data confirm that a single model is adequate to reproduce simultaneously all measured integrated reflectivities and rocking-curve FWHM for multiple radii of curvature in both 1st and 2nd order of diffraction.
The Sloan Digital Sky Survey-II: Photometry and Supernova Ia Light Curves from the 2005 Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holtzman, Jon A.; /New Mexico State U.; Marriner, John
2010-08-26
We present ugriz light curves for 146 spectroscopically confirmed or spectroscopically probable Type Ia supernovae from the 2005 season of the SDSS-II Supernova survey. The light curves have been constructed using a photometric technique that we call scene modeling, which is described in detail here; the major feature is that supernova brightnesses are extracted from a stack of images without spatial resampling or convolution of the image data. This procedure produces accurate photometry along with accurate estimates of the statistical uncertainty, and can be used to derive photometry taken with multiple telescopes. We discuss various tests of this technique that demonstrate its capabilities. We also describe the methodology used for the calibration of the photometry, and present calibrated magnitudes and fluxes for all of the spectroscopic SNe Ia from the 2005 season.
Measurement and models of bent KAP(001) crystal integrated reflectivity and resolution (invited)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loisel, G. P., E-mail: gploise@sandia.gov; Wu, M.; Lake, P.
2016-11-15
The Advanced Light Source beamline-9.3.1 x-rays are used to calibrate the rocking curve of bent potassium acid phthalate (KAP) crystals in the 2.3-4.5 keV photon-energy range. Crystals are bent on a cylindrically convex substrate with a radius of curvature ranging from 2 to 9 in. and also including the flat case to observe the effect of bending on the KAP spectrometric properties. As the bending radius increases, the crystal reflectivity converges to the mosaic crystal response. The X-ray Oriented Programs (XOP) multi-lamellar model of bent crystals is used to model the rocking curve of these crystals and the calibration data confirm that a single model is adequate to reproduce simultaneously all measured integrated reflectivities and rocking-curve FWHM for multiple radii of curvature in both 1st and 2nd order of diffraction.
Hondrogiannis, Ellen M; Ehrlinger, Erin; Poplaski, Alyssa; Lisle, Meredith
2013-11-27
A total of 11 elements found in 25 vanilla samples from Uganda, Madagascar, Indonesia, and Papua New Guinea were measured by laser ablation-inductively coupled plasma-time-of-flight-mass spectrometry (LA-ICP-TOF-MS) for the purpose of collecting data that could be used to discriminate among the origins. Pellets were prepared from the samples, and elemental concentrations were obtained on the basis of external calibration curves created using five National Institute of Standards and Technology (NIST) standards and one Chinese standard, with 13C internal standardization. These curves were validated using NIST 1573a (tomato leaves) as a check standard. Discriminant analysis was used to successfully classify the vanilla samples by their origin. Our method illustrates the feasibility of using LA-ICP-TOF-MS with an external calibration curve for high-throughput screening analysis of spices.
Effect of nonideal square-law detection on static calibration in noise-injection radiometers
NASA Technical Reports Server (NTRS)
Hearn, C. P.
1984-01-01
The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.
Assessing the hydrologic response to wildfires in mountainous regions
NASA Astrophysics Data System (ADS)
Havel, Aaron; Tasdighi, Ali; Arabi, Mazdak
2018-04-01
This study aims to understand the hydrologic responses to wildfires in mountainous regions at various spatial scales. The Soil and Water Assessment Tool (SWAT) was used to evaluate the hydrologic responses of the upper Cache la Poudre Watershed in Colorado to the 2012 High Park and Hewlett wildfire events. A baseline SWAT model was established to simulate the hydrology of the study area between the years 2000 and 2014. A procedure involving land use and curve number updating was implemented to assess the effects of wildfires. Application of the proposed procedure provides the ability to simulate the hydrologic response to wildfires seamlessly by mimicking the dynamics of the changes due to wildfires. The wildfire effects on curve numbers were determined by comparing the probability distributions of curve numbers after calibrating the model for pre- and post-wildfire conditions. Daily calibration and testing of the model produced very good results. No-wildfire and wildfire scenarios were created and compared to quantify changes in average annual total runoff volume, water budgets, and full streamflow statistics at different spatial scales. At the watershed scale, wildfire conditions showed little impact on the hydrologic responses. However, a runoff increase of up to 75% was observed between the scenarios in sub-watersheds with high burn intensity. Generally, higher surface runoff and decreased subsurface flow were observed under post-wildfire conditions. Flow duration curves developed for burned sub-watersheds using full streamflow statistics showed that less frequent streamflows become greater in magnitude. A linear regression model was developed to assess the relationship between percent burned area and runoff increase in the Cache la Poudre Watershed. A strong (R2 > 0.8) and significant (p < 0.001) positive correlation was determined between runoff increase and percentage of burned area upstream. This study showed that the effects of wildfires on the hydrology of a watershed are scale-dependent. Also, using full streamflow statistics through application of flow duration curves revealed that the wildfires had a higher effect on peak flows, which may increase the risk of flash floods in post-wildfire conditions.
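The regression step can be illustrated with a short Python sketch; the burned-area and runoff-increase pairs below are invented placeholders, not the study's data, and serve only to show how a slope, R2 and p-value of the kind quoted above would be computed.

```python
# Illustrative sketch of a regression between percent burned area upstream
# and runoff increase; the data points below are hypothetical.
import numpy as np
from scipy import stats

burned_pct = np.array([5, 12, 20, 35, 48, 60, 72], dtype=float)      # % burned area upstream
runoff_increase = np.array([4, 10, 18, 30, 41, 52, 66], dtype=float) # % runoff increase

res = stats.linregress(burned_pct, runoff_increase)
print(f"slope={res.slope:.2f}, R^2={res.rvalue**2:.3f}, p={res.pvalue:.2e}")
```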
Assessing and calibrating the ATR-FTIR approach as a carbonate rock characterization tool
NASA Astrophysics Data System (ADS)
Henry, Delano G.; Watson, Jonathan S.; John, Cédric M.
2017-01-01
ATR-FTIR (attenuated total reflectance Fourier transform infrared) spectroscopy can be used as a rapid and economical tool for qualitative identification of carbonates, calcium sulphates, oxides and silicates, as well as for quantitatively estimating the concentration of minerals. Over 200 powdered samples with known concentrations of two-, three-, four- and five-phase mixtures were made, and a suite of calibration curves was derived that can be used to quantify the minerals. The calibration curves in this study have R2 values ranging from 0.93 to 0.99, an RMSE (root mean square error) of 1-5 wt.% and a maximum error of 3-10 wt.%. The calibration curves were used on 35 geological samples that have previously been studied using XRD (X-ray diffraction). The identification of the minerals using ATR-FTIR is comparable with XRD, and the quantitative results have an RMSD (root mean square deviation) of 14% and 12% for calcite and dolomite, respectively, when compared to XRD results. ATR-FTIR is a rapid technique (identification and quantification take < 5 min) that involves virtually no cost if the machine is available. It is a common tool in most analytical laboratories, but it also has the potential to be deployed on a rig for real-time data acquisition of the mineralogy of cores and rock chips at the surface, as it requires no special sample preparation and offers rapid data collection and easy analysis.
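A minimal Python sketch of how such a mineral calibration curve and its quality metrics (R2, RMSE, maximum error) could be computed is given below; the band areas and concentrations are invented, not the study's measurements.

```python
# Minimal calibration-curve sketch: known wt.% vs. normalized IR band area.
import numpy as np

known_wt = np.array([0, 10, 25, 50, 75, 90, 100], dtype=float)    # wt.% calcite (hypothetical)
band_area = np.array([0.02, 0.11, 0.27, 0.52, 0.74, 0.91, 1.00])  # normalized band area

coef = np.polyfit(band_area, known_wt, 1)        # linear calibration curve
predicted = np.polyval(coef, band_area)

ss_res = np.sum((known_wt - predicted) ** 2)
ss_tot = np.sum((known_wt - known_wt.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
rmse = np.sqrt(np.mean((known_wt - predicted) ** 2))
max_err = np.max(np.abs(known_wt - predicted))
print(f"R^2={r2:.3f}, RMSE={rmse:.1f} wt.%, max error={max_err:.1f} wt.%")
```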
Risk scores for outcome in bacterial meningitis: Systematic review and external validation study.
Bijlsma, Merijn W; Brouwer, Matthijs C; Bossuyt, Patrick M; Heymans, Martijn W; van der Ende, Arie; Tanck, Michael W T; van de Beek, Diederik
2016-11-01
To perform an external validation study of risk scores, identified through a systematic review, predicting outcome in community-acquired bacterial meningitis. MEDLINE and EMBASE were searched for articles published between January 1960 and August 2014. Performance was evaluated in 2108 episodes of adult community-acquired bacterial meningitis from two nationwide prospective cohort studies by the area under the receiver operating characteristic curve (AUC), the calibration curve, calibration slope or Hosmer-Lemeshow test, and the distribution of calculated risks. Nine risk scores were identified predicting death, neurological deficit or death, or unfavorable outcome at discharge in bacterial meningitis, pneumococcal meningitis and invasive meningococcal disease. Most studies had shortcomings in design, analyses, and reporting. Evaluation showed AUCs of 0.59 (0.57-0.61) and 0.74 (0.71-0.76) in bacterial meningitis, 0.67 (0.64-0.70) in pneumococcal meningitis, and 0.81 (0.73-0.90), 0.82 (0.74-0.91), 0.84 (0.75-0.93), 0.84 (0.76-0.93), 0.85 (0.75-0.95), and 0.90 (0.83-0.98) in meningococcal meningitis. Calibration curves showed adequate agreement between predicted and observed outcomes for four scores, but statistical tests indicated poor calibration of all risk scores. One score could be recommended for the interpretation and design of bacterial meningitis studies. None of the existing scores performed well enough to recommend routine use in individual patient management. Copyright © 2016 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
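The two headline performance measures used in this validation, discrimination (AUC) and a binned calibration curve, can be sketched as follows; the risks and outcomes are simulated for illustration, and the scikit-learn helpers are convenient stand-ins for the statistical methods actually used in the paper.

```python
# Sketch of AUC and a binned calibration curve on simulated risk predictions.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
predicted_risk = rng.uniform(0, 1, 2000)          # risk-score output (simulated)
outcome = rng.binomial(1, predicted_risk * 0.8)   # simulated unfavorable outcome

auc = roc_auc_score(outcome, predicted_risk)
obs_frac, pred_mean = calibration_curve(outcome, predicted_risk, n_bins=10)
print(f"AUC = {auc:.2f}")
for p, o in zip(pred_mean, obs_frac):
    print(f"mean predicted {p:.2f} -> observed {o:.2f}")
```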
Veldhuijzen van Zanten, Sophie E M; Lane, Adam; Heymans, Martijn W; Baugh, Joshua; Chaney, Brooklyn; Hoffman, Lindsey M; Doughman, Renee; Jansen, Marc H A; Sanchez, Esther; Vandertop, William P; Kaspers, Gertjan J L; van Vuurden, Dannis G; Fouladi, Maryam; Jones, Blaise V; Leach, James
2017-08-01
We aimed to perform external validation of the recently developed survival prediction model for diffuse intrinsic pontine glioma (DIPG), and discuss its utility. The DIPG survival prediction model was developed in a cohort of patients from the Netherlands, United Kingdom and Germany, registered in the SIOPE DIPG Registry, and includes age <3 years, longer symptom duration and receipt of chemotherapy as favorable predictors, and presence of ring enhancement on MRI as an unfavorable predictor. Model performance was evaluated by analyzing the discrimination and calibration abilities. External validation was performed using an unselected cohort from the International DIPG Registry, including patients from the United States, Canada, Australia and New Zealand. Basic comparison with the results of the original study was performed using descriptive statistics, and univariable and multivariable regression analyses in the validation cohort. External validation was assessed following a variety of analyses described previously. Baseline patient characteristics and results from the regression analyses were largely comparable. Kaplan-Meier curves of the validation cohort reproduced separated groups of standard (n = 39), intermediate (n = 125), and high-risk (n = 78) patients. This discriminative ability was confirmed by similar values for the hazard ratios across these risk groups. The calibration curve in the validation cohort showed a symmetric underestimation of the predicted survival probabilities. In this external validation study, we demonstrate that the DIPG survival prediction model has acceptable cross-cohort calibration and is able to discriminate patients with short, average, and increased survival. We discuss how this clinico-radiological model may serve a useful role in current clinical practice.
NASA Astrophysics Data System (ADS)
Belyashova, N. N.; Shacilov, V. I.; Mikhailova, N. N.; Komarov, I. I.; Sinyova, Z. I.; Belyashov, A. V.; Malakhova, M. N.
Two chemical calibration explosions, conducted at the former Semipalatinsk nuclear test site in 1998 with charges of 25 tons and 100 tons TNT, have been used for developing travel-time curves and generalized one-dimensional velocity models of the crust and upper mantle of the platform region of Kazakhstan. The explosions were recorded by a number of digital seismic stations, located in Kazakhstan at distances ranging from 0 to 720 km. The travel-time tables developed in this paper cover the phases P, Pn, Pg, S, Sn, Lg in a range of 0-740 km, and the velocity models apply to the crust down to 44 km depth and to the mantle down to 120 km. A comparison of the compiled travel-time tables with existing travel-time tables of CSE and IASPEI91 is presented.
NASA Astrophysics Data System (ADS)
Díaz, Daniel; Molina, Alejandro; Hahn, David
2018-07-01
The influence of laser irradiance and wavelength on the analysis of gold and silver in ore and surrogate samples with laser-induced breakdown spectroscopy (LIBS) was evaluated. Gold-doped mineral samples (surrogates) and ore samples containing naturally-occurring gold and silver were analyzed with LIBS using 1064 and 355 nm laser wavelengths at irradiances from 0.36 × 10⁹ to 19.9 × 10⁹ W/cm² and 0.97 × 10⁹ to 4.3 × 10⁹ W/cm², respectively. The LIBS net, background and signal-to-background signals were analyzed. For all irradiances, wavelengths, samples and analytes, the calibration curves behaved linearly for concentrations from 1 to 9 μg/g gold (surrogate samples) and 0.7 to 47.0 μg/g silver (ore samples). However, it was not possible to prepare calibration curves for gold-bearing ore samples (at any concentration) or for gold-doped surrogate samples with gold concentrations below 1 μg/g. Calibration curve parameters for gold-doped surrogate samples were statistically invariant at 1064 and 355 nm. In contrast, the Ag ore analyte showed higher emission intensity at 1064 nm, but the signal-to-background normalization reduced the effect of laser wavelength on the silver calibration plots. The gold-doped calibration curve metrics improved at higher laser irradiance, but that did not translate into lower limits of detection. While coefficients of determination (R2) and limits of detection did not vary significantly with laser wavelength, the LIBS repeatability at 355 nm improved by up to 50% with respect to that at 1064 nm. Plasma diagnostics by the Boltzmann and Stark broadening methods showed that the plasma temperature and electron density did not follow a specific trend as the wavelength changed for the delay and gate times used. This research presents supporting evidence that the LIBS discrete sampling features combined with the discrete and random distribution of gold in minerals hinder gold analysis by LIBS in ore samples; however, the use of higher laser irradiances at 1064 nm increased the probability of sampling and detecting naturally-occurring gold.
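As a hedged illustration of the calibration-curve metrics discussed above, the sketch below fits a linear calibration and derives a 3-sigma limit of detection; the intensities and blank standard deviation are assumed values, not the reported data.

```python
# Hedged sketch: linear LIBS calibration and a 3-sigma limit of detection.
import numpy as np

conc = np.array([1, 3, 5, 7, 9], dtype=float)                  # μg/g Au (surrogates, assumed)
intensity = np.array([120, 345, 590, 810, 1050], dtype=float)  # net line intensity (a.u., assumed)
blank_sd = 25.0                                                # std. dev. of the blank signal (assumed)

slope, intercept = np.polyfit(conc, intensity, 1)
r2 = np.corrcoef(conc, intensity)[0, 1] ** 2
lod = 3.0 * blank_sd / slope                                   # 3-sigma limit of detection
print(f"slope={slope:.1f}, R^2={r2:.4f}, LOD={lod:.2f} μg/g")
```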
NASA Astrophysics Data System (ADS)
Engeland, Kolbjørn; Steinsland, Ingelin; Johansen, Stian Solvang; Petersen-Øverleir, Asgeir; Kolberg, Sjur
2016-05-01
In this study, we explore the effect of uncertainty and poor observation quality on hydrological model calibration and predictions. The Osali catchment in Western Norway was selected as case study and an elevation distributed HBV-model was used. We systematically evaluated the effect of accounting for uncertainty in parameters, precipitation input, temperature input and streamflow observations. For precipitation and temperature we accounted for the interpolation uncertainty, and for streamflow we accounted for rating curve uncertainty. Further, the effects of poorer quality of precipitation input and streamflow observations were explored. Less information about precipitation was obtained by excluding the nearest precipitation station from the analysis, while reduced information about the streamflow was obtained by omitting the highest and lowest streamflow observations when estimating the rating curve. The results showed that including uncertainty in the precipitation and temperature inputs has a negligible effect on the posterior distribution of parameters and for the Nash-Sutcliffe (NS) efficiency for the predicted flows, while the reliability and the continuous rank probability score (CRPS) improves. Less information in precipitation input resulted in a shift in the water balance parameter Pcorr, a model producing smoother streamflow predictions, giving poorer NS and CRPS, but higher reliability. The effect of calibrating the hydrological model using streamflow observations based on different rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions, the best evaluation scores were not achieved for the rating curve used for calibration, but for rating curves giving smoother streamflow observations. Less information in streamflow influenced the water balance parameter Pcorr, and increased the spread in evaluation scores by giving both better and worse scores.
Code of Federal Regulations, 2014 CFR
2014-07-01
... native AOI concentration (ppm) of the effluent during stable conditions. (14) Post-test calibration. At... or removal efficiencies must be determined while etching a substrate (product, dummy, or test). For... curves for the subsequent destruction or removal efficiency tests. (8) Mass location calibration. A...
Vosough, Maryam; Mohamedian, Hadi; Salemi, Amir; Baheri, Tahmineh
2015-02-01
In the present study, a simple strategy based on solid-phase extraction (SPE) with a cation exchange sorbent (Finisterre SCX) followed by fast high-performance liquid chromatography (HPLC) with diode array detection coupled with chemometrics tools has been proposed for the determination of methamphetamine and pseudoephedrine in ground water and river water. At first, the HPLC and SPE conditions were optimized and the analytical performance of the method was determined. In the case of ground water, determination of the analytes was successfully performed through univariate calibration curves. For the river water sample, multivariate curve resolution with alternating least squares was implemented and the second-order advantage was achieved in samples containing uncalibrated interferences and uncorrected background signals. The calibration curves showed good linearity (r2 > 0.994). The limits of detection for pseudoephedrine and methamphetamine were 0.06 and 0.08 μg/L and the average recovery values were 104.7 and 102.3% in river water, respectively. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
A microplate assay to measure classical and alternative complement activity.
Puissant-Lubrano, Bénédicte; Fortenfant, Françoise; Winterton, Peter; Blancher, Antoine
2017-05-01
We developed and validated a kinetic microplate hemolytic assay (HA) to quantify classical and alternative complement activity in a single dilution of human plasma or serum. The assay is based on monitoring hemolysis of sensitized sheep (or uncoated rabbit) red blood cells by means of a 96-well microplate reader. The activity of the calibrator was evaluated by reference to 200 healthy adults. The conversion of 50% hemolysis time into a percentage of activity was obtained using a calibration curve plotted daily. The linearity of the assay as well as interference (by hemolysis, bilirubinemia and lipemia) was assessed for the classical pathway (CP). The within-day and the between-day precision was satisfactory regarding the performance of commercially available liposome immunoassay (LIA) and ELISA. Patients with hereditary or acquired complement deficiencies were detected (measured activity <30%). We also provided a reference range obtained from 200 blood donors. The agreement of CP evaluated on samples from 48 patients was 94% with LIA and 87.5% with ELISA. The sensitivity of our assay was better than that of LIA, and the cost was lower than either LIA or ELISA. In addition, this assay was less time consuming than previously reported HAs. This assay allows the simultaneous measurement of 36 samples in duplicate per run of a 96-well plate. The use of a daily calibration curve allows standardization of the method and leads to good reproducibility. The same technique was also adapted for the quantification of alternative pathway (AP) activity.
External validation of a Cox prognostic model: principles and methods
2013-01-01
Background A prognostic model should not enter clinical practice unless it has been demonstrated that it performs a useful role. External validation denotes evaluation of model performance in a sample independent of that used to develop the model. Unlike for logistic regression models, external validation of Cox models is sparsely treated in the literature. Successful validation of a model means achieving satisfactory discrimination and calibration (prediction accuracy) in the validation sample. Validating Cox models is not straightforward because event probabilities are estimated relative to an unspecified baseline function. Methods We describe statistical approaches to external validation of a published Cox model according to the level of published information, specifically (1) the prognostic index only, (2) the prognostic index together with Kaplan-Meier curves for risk groups, and (3) the first two plus the baseline survival curve (the estimated survival function at the mean prognostic index across the sample). The most challenging task, requiring level 3 information, is assessing calibration, for which we suggest a method of approximating the baseline survival function. Results We apply the methods to two comparable datasets in primary breast cancer, treating one as derivation and the other as validation sample. Results are presented for discrimination and calibration. We demonstrate plots of survival probabilities that can assist model evaluation. Conclusions Our validation methods are applicable to a wide range of prognostic studies and provide researchers with a toolkit for external validation of a published Cox model. PMID:23496923
Cho, Jae Heon; Lee, Jong Ho
2015-11-01
Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to manually calibrate the parameters individually and repeatedly. Automatic calibration has relative merit regarding time efficiency and objectivity but shortcomings regarding understanding indigenous processes in the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and genetic algorithm (WMCIG) to automatically calibrate the distributed models. The optimization problem used to minimize the sum of squares of the normalized residuals of the observed and predicted values was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals of all iterations. The WMCIG was applied to a Gomakwoncheon watershed located in an area that presents a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall that occurs during the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality in this area during the frequent heavy rainfall that occurs during the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard (WQS) was calculated for each hydrologic condition class, and the percent reduction required to achieve the WQS was estimated. The R2 value for the calibrated BOD5 was 0.60, which is a moderate result, and the R2 value for the TP was 0.86, which is a good result. The percent differences obtained for the calibrated BOD5 and TP were very good; therefore, the calibration results using WMCIG were satisfactory. From the load duration curve analysis, the WQS exceedance frequencies of the BOD5 under dry conditions and low-flow conditions were 75.7% and 65%, respectively, and the exceedance frequencies under moist and mid-range conditions were higher than under other conditions. The exceedance frequencies of the TP for the high-flow, moist and mid-range conditions were high, and the exceedance rate for the high-flow condition was particularly high. Most of the data from the high-flow conditions exceeded the WQSs. Thus, nonpoint-source pollutants from storm-water runoff substantially affected the TP concentration in the Gomakwoncheon. Copyright © 2015 Elsevier Ltd. All rights reserved.
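The automatic calibration idea, minimizing the sum of squared normalized residuals with an evolutionary search, can be sketched as below; SciPy's differential evolution is used as a stand-in for the paper's genetic algorithm, and the two-parameter toy runoff model and the data are purely illustrative.

```python
# Sketch of automatic calibration by minimizing the sum of squared normalized
# residuals. Differential evolution stands in for the paper's GA; the toy
# runoff model and observations are invented.
import numpy as np
from scipy.optimize import differential_evolution

rainfall = np.array([0.0, 12.0, 30.0, 5.0, 22.0, 48.0, 8.0])
observed = np.array([0.5, 4.1, 11.8, 2.0, 8.3, 19.6, 2.9])   # observed flow (hypothetical)

def simulate(params):
    runoff_coef, baseflow = params
    return runoff_coef * rainfall + baseflow

def objective(params):
    norm_resid = (observed - simulate(params)) / observed.mean()
    return np.sum(norm_resid ** 2)

result = differential_evolution(objective, bounds=[(0.0, 1.0), (0.0, 5.0)], seed=1)
print("calibrated parameters:", result.x, "objective:", result.fun)
```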
NASA Astrophysics Data System (ADS)
Zafiropoulos, Demetre; Facco, E.; Sarchiapone, Lucia
2016-09-01
In case of a radiation accident, it is well known that in the absence of physical dosimetry, biological dosimetry based on cytogenetic methods is a unique tool to estimate individual absorbed dose. Moreover, even when physical dosimetry indicates an overexposure, scoring chromosome aberrations (dicentrics and rings) in human peripheral blood lymphocytes (PBLs) at metaphase is presently the most widely used method to confirm dose assessment. The analysis of dicentrics and rings in PBLs after Giemsa staining of metaphase cells is considered the most valid assay for radiation injury. This work shows that applying the fluorescence in situ hybridization (FISH) technique, using telomeric/centromeric peptide nucleic acid (PNA) probes in metaphase chromosomes for radiation dosimetry, could become a fast-scoring, reliable and precise method for biological dosimetry after accidental radiation exposures. In both in vitro methods described above, lymphocyte stimulation is needed, and this limits the application in radiation emergency medicine where speed is considered to be a high priority. Using premature chromosome condensation (PCC), irradiated human PBLs (non-stimulated) were fused with mitotic CHO cells, and the yield of excess PCC fragments in Giemsa-stained cells was scored. To score dicentrics and rings under PCC conditions, the necessary centromere and telomere detection of the chromosomes was obtained using FISH and specific PNA probes. Of course, a prerequisite for dose assessment in all cases is a dose-effect calibration curve. This work illustrates the various methods used; dose-response calibration curves, with 95% confidence limits used to estimate dose uncertainties, have been constructed for conventional metaphase analysis and FISH. We also compare the dose-response curve constructed after scoring of dicentrics and rings using PCC combined with FISH and PNA probes. Also reported are dose-response curves showing scored dicentrics and rings per cell, combining PCC of lymphocytes and CHO cells with FISH using PNA probes at 10 h and 24 h after irradiation, and, finally, calibration data of excess PCC fragments (Giemsa) to be used if human blood is available immediately after irradiation or within 24 h.
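A common way to construct such cytogenetic calibration curves is to fit the linear-quadratic model Y = c + αD + βD²; the sketch below shows such a fit on invented yields and is not the calibration data reported here.

```python
# Illustrative linear-quadratic fit of aberration yield vs. dose.
import numpy as np
from scipy.optimize import curve_fit

dose = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])                    # Gy
yield_per_cell = np.array([0.001, 0.02, 0.06, 0.21, 0.44, 0.75])    # dicentrics + rings per cell (invented)

def lq_model(d, c, alpha, beta):
    return c + alpha * d + beta * d ** 2

params, cov = curve_fit(lq_model, dose, yield_per_cell, p0=[0.001, 0.02, 0.05])
errors = np.sqrt(np.diag(cov))
for name, value, err in zip(("c", "alpha", "beta"), params, errors):
    print(f"{name} = {value:.4f} +/- {err:.4f}")
```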
Direct Estimate of Cocoa Powder Content in Cakes by Colorimetry and Photoacoustic Spectroscopy
NASA Astrophysics Data System (ADS)
Dóka, O.; Bicanic, D.; Kulcsár, R.
2014-12-01
Cocoa is a very important ingredient in the food industry and is largely consumed worldwide. In this investigation, colorimetry and photoacoustic spectroscopy were used to directly assess the content of cocoa powder in cakes; both methods provided satisfactory results. The calibration curve was constructed using a series of home-made cakes containing varying amounts of cocoa powder. Then, at a later stage, the same calibration curve was used to quantify the cocoa content of several commercially available cakes. For the home-made cakes, the relationship between the PAS signal and the content of cocoa powder was linear, while a quadratic dependence was obtained for the colorimetric index (brightness) and the total color difference.
Antenna Calibration and Measurement Equipment
NASA Technical Reports Server (NTRS)
Rochblatt, David J.; Cortes, Manuel Vazquez
2012-01-01
A document describes the Antenna Calibration & Measurement Equipment (ACME) system that will provide the Deep Space Network (DSN) with instrumentation enabling a trained RF engineer at each complex to perform antenna calibration measurements and to generate antenna calibration data. This data includes continuous-scan auto-bore-based data acquisition with all-sky data gathering in support of 4th-order pointing model generation requirements. Other data include antenna subreflector focus, system noise temperature and tipping curves, antenna efficiency, system linearity reports, and instrument calibration. The ACME system design is based on the on-the-fly (OTF) mapping technique and architecture. ACME has contributed to improving the RF performance of the DSN by approximately a factor of two. It improved the pointing performance of the DSN antennas and the productivity of its personnel and calibration engineers.
Measurements of the sensitivity of radiochromic film using ion beams
NASA Astrophysics Data System (ADS)
Steidle, J. A.; Shortino, J. P.; Ellison, D. M.; Freeman, C. G.; Sangster, T. C.
2013-10-01
Radiochromic film (RCF) is used in several diagnostics as a dosimeter that chromatically responds to incident particles. This response depends on the fluence, energy, and species of the incident particles. A 1.7 MV tandem Pelletron accelerator is used to create a monoenergetic ion beam which is scattered off a thin gold target onto a strip of RCF. A surface barrier detector is positioned behind a small hole in the film to measure the ion fluence on the nearby film. Once the film develops, it is scanned to examine its optical density. A response curve is acquired by fitting a three parameter formula to optical density and dose. These calibration curves can be used to help determine incident doses in a variety of situations.
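The fitting step might look like the following sketch; the saturating three-parameter form and the data points are assumptions for illustration, since the abstract does not state the actual formula.

```python
# Sketch of fitting a three-parameter response formula to optical density vs. dose.
# The saturating functional form is an assumption, and the data are invented.
import numpy as np
from scipy.optimize import curve_fit

dose = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])                 # arbitrary dose units
optical_density = np.array([0.05, 0.14, 0.23, 0.38, 0.62, 0.81, 0.95])

def response(d, a, b, c):
    return a * d / (d + b) + c      # assumed saturating three-parameter form

params, _ = curve_fit(response, dose, optical_density, p0=[1.0, 5.0, 0.05])
print("fitted a, b, c:", params)
```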
The TESS Science Processing Operations Center
NASA Technical Reports Server (NTRS)
Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd;
2016-01-01
The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover approximately 1,000 small planets with R(sub p) less than 4 Earth radii and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).
Pilkonis, Paul A.; Choi, Seung W.; Reise, Steven P.; Stover, Angela M.; Riley, William T.; Cella, David
2011-01-01
The authors report on the development and calibration of item banks for depression, anxiety, and anger as part of the Patient-Reported Outcomes Measurement Information System (PROMIS®). Comprehensive literature searches yielded an initial bank of 1,404 items from 305 instruments. After qualitative item analysis (including focus groups and cognitive interviewing), 168 items (56 for each construct) were written in a first person, past tense format with a 7-day time frame and five response options reflecting frequency. The calibration sample included nearly 15,000 respondents. Final banks of 28, 29, and 29 items were calibrated for depression, anxiety, and anger, respectively, using item response theory. Test information curves showed that the PROMIS item banks provided more information than conventional measures in a range of severity from approximately −1 to +3 standard deviations (with higher scores indicating greater distress). Short forms consisting of seven to eight items provided information comparable to legacy measures containing more items. PMID:21697139
Pilkonis, Paul A; Choi, Seung W; Reise, Steven P; Stover, Angela M; Riley, William T; Cella, David
2011-09-01
The authors report on the development and calibration of item banks for depression, anxiety, and anger as part of the Patient-Reported Outcomes Measurement Information System (PROMIS®). Comprehensive literature searches yielded an initial bank of 1,404 items from 305 instruments. After qualitative item analysis (including focus groups and cognitive interviewing), 168 items (56 for each construct) were written in a first person, past tense format with a 7-day time frame and five response options reflecting frequency. The calibration sample included nearly 15,000 respondents. Final banks of 28, 29, and 29 items were calibrated for depression, anxiety, and anger, respectively, using item response theory. Test information curves showed that the PROMIS item banks provided more information than conventional measures in a range of severity from approximately -1 to +3 standard deviations (with higher scores indicating greater distress). Short forms consisting of seven to eight items provided information comparable to legacy measures containing more items.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, S. N.; Revet, G.; Fuchs, J.
Radiochromic films (RCF) are commonly used in dosimetry for a wide range of radiation sources (electrons, protons, and photons) for medical, industrial, and scientific applications. They are multi-layered, including plastic substrate layers and sensitive layers that incorporate a radiation-sensitive dye. Quantitative dose can be retrieved by digitizing the film, provided that a prior calibration exists. Here, to calibrate the newly developed EBT3 and HDv2 RCFs from Gafchromic™, we deposited various doses of 10 MeV photons in the films using the Stanford Medical LINAC and scanned the films using three independent EPSON Precision 2450 scanners, three independent EPSON V750 scanners, and two independent EPSON 11000XL scanners. The films were scanned in separate RGB channels, as well as in black and white, and film orientation was varied. We found that the green channel of the RGB scan and the grayscale channel are in fact quite consistent over the different models of the scanner, although this comes at the cost of a reduction in sensitivity (by a factor ∼2.5 compared to the red channel). To allow any user to extend the absolute calibration reported here to any other scanner, we furthermore provide a calibration curve of the EPSON 2450 scanner based on absolutely calibrated, commercially available, optical density filters.
Granholm, Kim; Sokalski, Tomasz; Lewenstam, Andrzej; Ivaska, Ari
2015-08-12
A new method to convert the potential of an ion-selective electrode to concentration or activity in potentiometric titration is proposed. The advantage of this method is that the electrode standard potential and the slope of the calibration curve do not have to be known. Instead, two activities on the titration curve have to be estimated, e.g. the starting activity before the titration begins and the activity at the end of the titration in the presence of a large excess of titrant. This new method is beneficial when the analyte is in a complexed matrix or in a harsh environment which affects the properties of the electrode and the traditional calibration procedure with standard solutions cannot be used. The new method was implemented both in a linearization method based on the Gran plot and in the determination of the stability constant of a complex and the concentration of the complexing ligand in the sample. The new method gave accurate results when using titration data from experiments with samples of known composition and with a real industrial harsh black liquor sample. A complexometric titration model was also developed. Copyright © 2015 Elsevier B.V. All rights reserved.
Ivandini, Tribidasari A; Saepudin, Endang; Wardah, Habibah; Harmesa; Dewangga, Netra; Einaga, Yasuaki
2012-11-20
Gold-modified boron doped diamond (BDD) electrodes were examined for the amperometric detection of oxygen as well as a detector for measuring biochemical oxygen demand (BOD) using Rhodotorula mucilaginosa UICC Y-181. An optimum potential of -0.5 V (vs Ag/AgCl) was applied, and the optimum waiting time was observed to be 20 min. A linear calibration curve for oxygen reduction was achieved with a sensitivity of 1.4 μA mg(-1) L oxygen. Furthermore, a linear calibration curve in the glucose concentration range of 0.1-0.5 mM (equivalent to 10-50 mg L(-1) BOD) was obtained with an estimated detection limit of 4 mg L(-1) BOD. Excellent reproducibility of the BOD sensor was shown with an RSD of 0.9%. Moreover, the BOD sensor showed good tolerance against the presence of copper ions up to a maximum concentration of 0.80 μM (equivalent to 50 ppb). The sensor was applied to BOD measurements of the water from a lake at the University of Indonesia in Jakarta, Indonesia, with results comparable to those made using a standard method for BOD measurement.
Yamane, Naoe; Takami, Tomonori; Tozuka, Zenzaburo; Sugiyama, Yuichi; Yamazaki, Akira; Kumagai, Yuji
2009-01-01
A sample treatment procedure and high-sensitive liquid chromatography/tandem mass spectrometry (LC/MS/MS) method for quantitative determination of nicardipine in human plasma were developed for a microdose clinical trial with nicardipine, a non-radioisotope labeled drug. The calibration curve was linear in the range of 1-500 pg/mL using 1 mL of plasma. Analytical method validation for the clinical dose, for which the calibration curve was linear in the range of 0.2-100 ng/mL using 20 microL of plasma, was also conducted. Each method was successfully applied to making determinations in plasma using LC/MS/MS after administration of a microdose (100 microg) and clinical dose (20 mg) to each of six healthy volunteers. We tested new approaches in the search for metabolites in plasma after microdosing. In vitro metabolites of nicardipine were characterized using linear ion trap-fourier transform ion cyclotron resonance mass spectrometry (LIT-FTICRMS) and the nine metabolites predicted to be in plasma were analyzed using LC/MS/MS. There is a strong possibility that analysis of metabolites by LC/MS/MS may advance to utilization in microdose clinical trials with non-radioisotope labeled drugs.
Calibration of z-axis linearity for arbitrary optical topography measuring instruments
NASA Astrophysics Data System (ADS)
Eifler, Matthias; Seewig, Jörg; Hering, Julian; von Freymann, Georg
2015-05-01
The calibration of the height axis of optical topography measurement instruments is essential for reliable topography measurements. A state-of-the-art technology for the calibration of the linearity and amplification of the z-axis is the use of step height artefacts. However, a proper calibration requires numerous step heights at different positions within the measurement range. The procedure is extensive and uses artificial surface structures that are not related to real measurement tasks. Concerning these limitations, approaches should be developed that work for arbitrary topography measurement devices and require little effort. Hence, we propose calibration artefacts which are based on the 3D-Abbott-Curve and image desired surface characteristics. Further, real geometric structures are used as an initial point of the calibration artefact. Based on these considerations, an algorithm is introduced which transforms an arbitrary measured surface into a measurement artefact for the z-axis linearity. The method works both for profiles and topographies. To account for the effects of manufacturing, measurement, and evaluation, an iterative approach is chosen. The mathematical impact of these processes can be calculated with morphological signal processing. The artefact is manufactured with 3D laser lithography and characterized with different optical measurement devices. An introduced calibration routine can calibrate the entire z-axis range within one measurement and minimizes the required effort. With the results it is possible to locate potential linearity deviations and to adjust the z-axis. Results of different optical measurement principles are compared in order to evaluate the capabilities of the new artefact.
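For reference, the material-ratio (Abbott-Firestone) curve underlying the proposed artefacts can be computed from a measured profile as in the following sketch; the profile here is synthetic.

```python
# Minimal sketch of the material-ratio (Abbott-Firestone) curve of a profile.
import numpy as np

rng = np.random.default_rng(42)
profile = rng.normal(0.0, 0.5, 5000)          # synthetic roughness profile heights (μm)

heights = np.sort(profile)[::-1]              # sort descending
material_ratio = np.arange(1, heights.size + 1) / heights.size * 100.0   # in %

# (material_ratio, heights) is the Abbott curve: for each height level, the
# percentage of the evaluation length lying at or above that level.
for mr in (10, 50, 90):
    idx = int(mr / 100.0 * heights.size) - 1
    print(f"height at {mr:3d}% material ratio: {heights[idx]: .3f} μm")
```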
NASA Astrophysics Data System (ADS)
Sun, Ke; Zhang, Wei; Ding, Huaping; Kim, Robin E.; Spencer, Billie F., Jr.
2016-10-01
The operation of subway trains induces ambient vibrations, which may cause annoyance and other adverse effects on humans, eventually leading to physical, physiological, and psychological problems. In this paper, the human annoyance rate (HAR) models used to assess human comfort under subway train-induced ambient vibrations were deduced, and the calibration curves for 5 typical use circumstances were addressed. An autonomous measurement system, based on the Imote2 wireless smart sensor (WSS) platform plus the SHM-H high-sensitivity accelerometer board, was developed for the HAR assessment. The calibration curves were digitized and embedded in the computational core of the WSS unit. Experimental validation was conducted, using the developed system, on a large underground reinforced concrete frame structure adjoining the subway station. The ambient acceleration of both basement floors was measured, the embedded computation was implemented, and the HAR assessment results were wirelessly transmitted to the central server, all by the WSS unit. The HAR distributions of the testing areas were identified, and the extent to which both basements will be influenced by the operation of the nearby subway trains, in terms of the 5 typical use circumstances, was quantitatively assessed. The potential of the WSS-based autonomous system for fast environmental impact assessment of subway train-induced ambient vibration was well demonstrated.
Wei, Qiuning; Wei, Yuan; Liu, Fangfang; Ding, Yalei
2015-10-01
To investigate the method for uncertainty evaluation of the determination of tin and its compounds in the air of the workplace by flame atomic absorption spectrometry. The national occupational health standards, GBZ/T160.28-2004 and JJF1059-1999, were used to build a mathematical model of the determination of tin and its compounds in the air of the workplace and to calculate the components of uncertainty. In the determination of tin and its compounds in the air of the workplace using flame atomic absorption spectrometry, the uncertainty for the concentration of the standard solution, atomic absorption spectrophotometer, sample digestion, parallel determination, least squares fitting of the calibration curve, and sample collection was 0.436%, 0.13%, 1.07%, 1.65%, 3.05%, and 2.89%, respectively. The combined uncertainty was 9.3%. The concentration of tin in the test sample was 0.132 mg/m³, and the expanded uncertainty for the measurement was 0.012 mg/m³ (k=2). The dominant uncertainty for the determination of tin and its compounds in the air of the workplace comes from least squares fitting of the calibration curve and sample collection. Quality control should be improved in the process of calibration curve fitting and sample collection.
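For context, the sketch below combines the listed components by root-sum-square; doubling the combined standard uncertainty (k = 2) reproduces the reported 9.3%, which suggests that figure is an expanded value, though the abstract does not say so explicitly.

```python
# Root-sum-square combination of the listed relative uncertainty components.
import math

components = [0.436, 0.13, 1.07, 1.65, 3.05, 2.89]    # % (standard uncertainties)
combined = math.sqrt(sum(u ** 2 for u in components))
expanded = 2.0 * combined                              # coverage factor k = 2
print(f"combined = {combined:.2f}%, expanded (k=2) = {expanded:.1f}%")
```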
Realization of the Gallium Triple Point at NMIJ/AIST
NASA Astrophysics Data System (ADS)
Nakano, T.; Tamura, O.; Sakurai, H.
2008-02-01
The triple point of gallium has been realized by a calorimetric method using capsule-type standard platinum resistance thermometers (CSPRTs) and a small glass cell containing about 97 mmol (6.8 g) of gallium with a nominal purity of 99.99999%. The melting curve shows a very flat and relatively linear dependence on 1/F in the region from 1/F = 1 to 1/F = 20 with a narrow width of the melting curve within 0.1 mK. Also, a large gallium triple-point cell was fabricated for the calibration of client-owned CSPRTs. The gallium triple-point cell consists of a PTFE crucible and a PTFE cap with a re-entrant well and a small vent. The PTFE cell contains 780 g of gallium from the same source as used for the small glass cell. The PTFE cell is completely covered by a stainless-steel jacket with a valve to enable evacuation of the cell. The melting curve of the large cell shows a flat plateau that remains within 0.03 mK over 10 days and that is reproducible within 0.05 mK over 8 months. The calibrated value of a CSPRT obtained using the large cell agrees with that obtained using the small glass cell within the uncertainties of the calibrations.
van Werkhoven, C H; van der Tempel, J; Jajou, R; Thijsen, S F T; Diepersloot, R J A; Bonten, M J M; Postma, D F; Oosterheert, J J
2015-08-01
To develop and validate a prediction model for Clostridium difficile infection (CDI) in hospitalized patients treated with systemic antibiotics, we performed a case-cohort study in a tertiary (derivation) and secondary care hospital (validation). Cases had a positive Clostridium test and were treated with systemic antibiotics before suspicion of CDI. Controls were randomly selected from hospitalized patients treated with systemic antibiotics. Potential predictors were selected from the literature. Logistic regression was used to derive the model. Discrimination and calibration of the model were tested in internal and external validation. A total of 180 cases and 330 controls were included for derivation. Age >65 years, recent hospitalization, CDI history, malignancy, chronic renal failure, use of immunosuppressants, receipt of antibiotics before admission, nonsurgical admission, admission to the intensive care unit, gastric tube feeding, treatment with cephalosporins and presence of an underlying infection were independent predictors of CDI. The area under the receiver operating characteristic curve of the model in the derivation cohort was 0.84 (95% confidence interval 0.80-0.87), and was reduced to 0.81 after internal validation. In external validation, consisting of 97 cases and 417 controls, the model area under the curve was 0.81 (95% confidence interval 0.77-0.85) and model calibration was adequate (Brier score 0.004). A simplified risk score was derived. Using a cutoff of 7 points, the positive predictive value, sensitivity and specificity were 1.0%, 72% and 73%, respectively. In conclusion, a risk prediction model was developed and validated, with good discrimination and calibration, that can be used to target preventive interventions in patients with increased risk of CDI. Copyright © 2015 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
Rapid planar chromatographic analysis of 25 water-soluble dyes used as food additives.
Morlock, Gertrud E; Oellig, Claudia
2009-01-01
A rapid planar chromatographic method for identification and quantification of 25 water-soluble dyes in food was developed. In a horizontal developing chamber, the chromatographic separation on silica gel 60F254 high-performance thin-layer chromatography plates took 12 min for 40 runs in parallel, using 8 mL ethyl acetate-methanol-water-acetic acid (65 + 23 + 11 + 1, v/v/v/v) mobile phase up to a migration distance of 50 mm. However, the total analysis time, inclusive of application and evaluation, took 60 min for 40 runs. Thus, the overall time/run can be calculated as 1.5 min with a solvent consumption of 200 μL. A sample throughput of 1000 runs/8 h day can be reached by switching between the working stations (application, development, and evaluation) in a 20 min interval, which triples the analysis throughput. Densitometry was performed by absorption measurement using the multiwavelength scan mode in the UV and visible ranges. Repeatabilities [relative standard deviation (RSD), 4 determinations] at the first or second calibration level showed precisions of mostly ≤2.7%, ranging between 0.2 and 5.2%. Correlation coefficient values (R ≥ 0.9987) and RSD values (≤4.2%) of the calibration curves were highly satisfactory using classical quantification. However, digital evaluation of the plate image was also used for quantification, which resulted in RSD values of the calibration curves of mostly ≤3.0%, except for two ≤6.0%. The method was applied for the analysis of some energy drinks and bakery ink formulations, directly applied after dilution. By recording of absorbance spectra in the visible range, the identities of the dyes found in the samples were ascertained by comparison with the respective standard bands (correlation coefficients ≥0.9996). If necessary for confirmation, online mass spectra were recorded within a minute.
Wang, Guangji; Wang, Qian; Rao, Tai; Shen, Boyu; Kang, Dian; Shao, Yuhao; Xiao, Jingcheng; Chen, Huimin; Liang, Yan
2016-06-15
Pidotimod, (R)-3-[(S)-(5-oxo-2-pyrrolidinyl) carbonyl]-thiazolidine-4-carboxylic acid, is frequently used to treat children with recurrent respiratory infections. The preclinical pharmacokinetics of pidotimod have rarely been reported to date. Herein, a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed and validated to determine pidotimod in rat plasma, tissue homogenate and Caco-2 cells. In this process, phenacetin was chosen as the internal standard due to its similarity in chromatographic and mass spectrographic characteristics with pidotimod. The plasma calibration curves were established within the concentration range of 0.01-10.00 μg/mL, and similar linear curves were built using tissue homogenate and Caco-2 cells. The calibration curves for all biological samples showed good linearity (r>0.99) over the concentration ranges tested. The intra- and inter-day precision (RSD, %) values were below 15% and accuracy (RE, %) ranged from -15% to 15% at all quality control levels. For plasma, tissue homogenate and Caco-2 cells, no obvious matrix effect was found, and the average recoveries were all above 75%. Thus, the method demonstrated excellent accuracy, precision and robustness for high-throughput applications, and was then successfully applied to studies of absorption in rat plasma, distribution in rat tissues and intracellular uptake characteristics in Caco-2 cells for pidotimod. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leinenweber, Kurt, E-mail: kurtl@asu.edu; Gullikson, Amber L.; Stoyanov, Emil
2015-09-15
The accuracy and precision of pressure measurements and the pursuit of reliable and readily available pressure scales at simultaneous high temperatures and pressures are still topics in development in high pressure research despite many years of work. In situ pressure scales based on x-ray diffraction are widely used but require x-ray access, which is lacking outside of x-ray beam lines. Other methods such as fixed points require several experiments to bracket a pressure calibration point. In this study, a recoverable high-temperature pressure gauge for pressures ranging from 3 GPa to 10 GPa is presented. The gauge is based on the pressure-dependent solubility of an SiO{sub 2} component in the rutile-structured phase of GeO{sub 2} (argutite), and is valid when the argutite solid solution coexists with coesite. The solid solution varies strongly in composition, mainly in pressure but also somewhat in temperature, and the compositional variations are easily detected by x-ray diffraction of the recovered products because of significant changes in the lattice parameters. The solid solution is measured here on two isotherms, one at 1200 °C and the other at 1500 °C, and is developed as a pressure gauge by calibrating it against three fixed points for each temperature and against the lattice parameter of MgO measured in situ at a total of three additional points. A somewhat detailed thermodynamic analysis is then presented that allows the pressure gauge to be used at other temperatures. This provides a way to accurately and reproducibly evaluate the pressure in high pressure experiments and applications in this pressure-temperature range, and could potentially be used as a benchmark to compare various other pressure scales under high temperature conditions. - Graphical abstract: The saturation curve of SiO{sub 2} in TiO{sub 2} shows a strong pressure dependence and a strong dependence of unit cell volume on composition. This provides an opportunity to use this saturation curve as a measurement of pressure during a high-pressure experiment. The curve is a sensitive measure of pressure from 3 GPa to 10 GPa at high temperatures. The pressure is derived from lattice parameter measurements on the recovered solid solution, meaning that in-situ measurements are not necessary to evaluate the pressure of the experiment. - Highlights: • The unit cell of a saturated GeO{sub 2}–SiO{sub 2} solid solution is used as a pressure sensor. • We measure nine bracketed pressure points on the GeO{sub 2}–SiO{sub 2} saturation surface. • We provide a pressure calibrant from 3 GPa to 10 GPa at two temperatures. • Four points are measured at 1200 °C and five points at 1500 °C. • A thermodynamic model is developed for use of the calibrant at other temperatures.
Quantification and Qualification of Bacteria Trapped in Chewed Gum
Wessel, Stefan W.; van der Mei, Henny C.; Morando, David; Slomp, Anje M.; van de Belt-Gritter, Betsy; Maitra, Amarnath; Busscher, Henk J.
2015-01-01
Chewing of gum contributes to the maintenance of oral health. Many oral diseases, including caries and periodontal disease, are caused by bacteria. However, it is unknown whether chewing of gum can remove bacteria from the oral cavity. Here, we hypothesize that chewing of gum can trap bacteria and remove them from the oral cavity. To test this hypothesis, we developed two methods to quantify numbers of bacteria trapped in chewed gum. In the first method, known numbers of bacteria were finger-chewed into gum and chewed gums were molded to standard dimensions, sonicated and plated to determine numbers of colony-forming-units incorporated, yielding calibration curves of colony-forming-units retrieved versus finger-chewed in. In a second method, calibration curves were created by finger-chewing known numbers of bacteria into gum and subsequently dissolving the gum in a mixture of chloroform and tris-ethylenediaminetetraacetic-acid (TE)-buffer. The TE-buffer was analyzed using quantitative Polymerase-Chain-Reaction (qPCR), yielding calibration curves of total numbers of bacteria versus finger-chewed in. Next, five volunteers were requested to chew gum up to 10 min, after which numbers of colony-forming-units and total numbers of bacteria trapped in chewed gum were determined using the above methods. The qPCR method, involving both dead and live bacteria, yielded higher numbers of retrieved bacteria than plating, involving only viable bacteria. Numbers of trapped bacteria were maximal during initial chewing, after which a slow decrease over time up to 10 min was observed. Around 10⁸ bacteria were detected per gum piece depending on the method and gum considered. The number of species trapped in chewed gum increased with chewing time. Trapped bacteria were clearly visualized in chewed gum using scanning electron microscopy. Summarizing, using novel methods to quantify and qualify oral bacteria trapped in chewed gum, the hypothesis is confirmed that chewing of gum can trap and remove bacteria from the oral cavity. PMID:25602256
Chen, Ling; Luo, Dan; Yu, Xiajuan; Jin, Mei; Cai, Wenzhi
2018-05-12
The aim of this study was to develop and validate a predictive tool combining pelvic floor ultrasound parameters and clinical factors for stress urinary incontinence during pregnancy. A total of 535 women in the first or second trimester were included for an interview and transperineal ultrasound assessment from two hospitals. Imaging data sets were analyzed offline to assess bladder neck vertical position, urethral angles (α, β, and γ angles), hiatal area and bladder neck funneling. All significant continuous variables at univariable analysis were analyzed by receiver-operating characteristics. Three multivariable logistic models were built on clinical factors alone and combined with ultrasound parameters. The final predictive model with the best performance and fewest variables was selected to establish a nomogram. Internal and external validation of the nomogram were performed by both discrimination, represented by the C-index, and calibration, measured by the Hosmer-Lemeshow test. A decision curve analysis was conducted to determine the clinical utility of the nomogram. After excluding 14 women with invalid data, 521 women were analyzed. The β angle, γ angle and hiatal area had limited predictive value for stress urinary incontinence during pregnancy, with areas under the curve of 0.558-0.648. The final predictive model included body mass index gain since pregnancy, constipation, previous delivery mode, β angle at rest, and bladder neck funneling. The nomogram based on the final model showed good discrimination with a C-index of 0.789 and satisfactory calibration (P=0.828), both of which were supported by external validation. Decision curve analysis showed that the nomogram was clinically useful. The nomogram incorporating both the pelvic floor ultrasound parameters and clinical factors has been validated to show good discrimination and calibration, and could be an important tool for stress urinary incontinence risk prediction at an early stage of pregnancy. This article is protected by copyright. All rights reserved.
Development of a High Strength Isothermally Heat-Treated Nodular Iron Road Wheel Arm
1985-03-31
capacity load cell was calibrated using a Satec Universal Test System and a Hewlett-Packard X,Y Plotter to record the calibrated curve. The load cell...12e 1,3- 0 s0 3 I I r~ I I Il.1660 119 13. 30 1 3,1ý4-514 1 1 o36 1~~ 123 1, d 51 7 4~ 14 ~ 3.• 3I 2k i 7, G 0 se, y 2 es I Q.~ 141/ 14( 13.0130 1B14...LOT BAR QCH YIELD TESS. ELON(. Rc Rc 0HNCARPY LENGTH CaVNT. Nio. No. TIME .000 1O..O % ýMa-crol~licrd IFt Lb INCH~ ES IOU 2-77 •,3~-341.4___ 1 79 7,o
On the Long-Term Stability of Microwave Radiometers Using Noise Diodes for Calibration
NASA Technical Reports Server (NTRS)
Brown, Shannon T.; Desai, Shailen; Lu, Wenwen; Tanner, Alan B.
2007-01-01
Results are presented from the long-term monitoring and calibration of the National Aeronautics and Space Administration Jason Microwave Radiometer (JMR) on the Jason-1 ocean altimetry satellite and the ground-based Advanced Water Vapor Radiometers (AWVRs) developed for the Cassini Gravity Wave Experiment. Both radiometers retrieve the wet tropospheric path delay (PD) of the atmosphere and use internal noise diodes (NDs) for gain calibration. The JMR is the first radiometer to be flown in space that uses NDs for calibration. External calibration techniques are used to derive a time series of ND brightness for both instruments that is greater than four years. For the JMR, an optimal estimator is used to find the set of calibration coefficients that minimize the root-mean-square difference between the JMR brightness temperatures and the on-Earth hot and cold references. For the AWVR, continuous tip curves are used to derive the ND brightness. For the JMR and AWVR, both of which contain three redundant NDs per channel, it was observed that some NDs were very stable, whereas others experienced jumps and drifts in their effective brightness. Over the four-year time period, the ND stability ranged from 0.2% to 3% among the diodes for both instruments. The presented recalibration methodology demonstrates that long-term calibration stability can be achieved with frequent recalibration of the diodes using external calibration techniques. The JMR PD drift compared to ground truth over the four years since the launch was reduced from 3.9 to - 0.01 mm/year with the recalibrated ND time series. The JMR brightness temperature calibration stability is estimated to be 0.25 K over ten days.
NASA Astrophysics Data System (ADS)
Krüger, Magnus; Huang, Mao-Dong; Becker-Roß, Helmut; Florek, Stefan; Ott, Ingo; Gust, Ronald
The development of high-resolution continuum source molecular absorption spectrometry made the quantification of fluorine feasible by measuring the molecular absorption as gallium monofluoride (GaF). Using this new technique, we developed, with 5-fluorouracil (5-FU) as an example, a graphite furnace method to quantify fluorine in organic molecules. The effect of 5-FU on the generation of the diatomic GaF molecule was investigated. The experimental conditions, such as the gallium nitrate amount, temperature program, interfering anions (represented as corresponding acids) and calibration for the determination of 5-FU in standard solution and in cellular matrix samples, were investigated and optimized. The sample matrix showed no effect on the sensitivity of the GaF molecular absorption. A simple calibration curve using an inorganic sodium fluoride solution can conveniently be used for the calibration. The described method is sensitive, and the achievable limit of detection is 0.23 ng of 5-FU. In order to establish the concept of "fluorine as a probe in medicinal chemistry", an exemplary application was selected, in which the developed method was successfully demonstrated by performing cellular uptake studies of 5-FU in human colon carcinoma cells.
The purpose of this SOP is to describe procedures for preparing calibration curve solutions used for gas chromatography/mass spectrometry (GC/MS) analysis of chlorpyrifos, diazinon, malathion, DDT, DDE, DDD, a-chlordane, and g-chlordane in dust, soil, air, and handwipe sample ext...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, J; Labarbe, R; Sterpin, E
2016-06-15
Purpose: To understand the extent to which the prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added a 5% distal Gaussian noise to each beamlet in order to introduce discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s, θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0mm and underrange of 0.6mm on average. Compared to the situations where s, θ, and u were ignored, these values were larger: 2.1mm and 4.3mm. In order to quantify the need for setup error corrections, we also performed computations in which u was corrected for, but s and θ were not. This yielded: 3.2mm and 3.2mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity to correct for setup errors and the errors in the calibration curve. The simplicity and speed of our method make it a good candidate for being implemented as a tool for in-room adaptive therapy. This work also demonstrates that the prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.
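The optimization described above (a shift s, rotation θ, and calibration-curve offset u fitted by minimizing a Euclidean distance between range sets) can be illustrated with a small self-contained sketch. This is not the authors' code: the synthetic range map, beamlet grid, noise magnitude, and the use of a Nelder-Mead minimizer are all assumptions made for the example.

```python
"""Illustrative sketch of registering measured beamlet ranges to a range map by
optimizing a shift, rotation and calibration offset (assumed setup, not the
authors' implementation)."""
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic "default" range map (mm) on a 1 mm grid -- placeholder data.
yy, xx = np.mgrid[0:100, 0:100]
range_map = 150.0 + 0.2 * xx + 0.05 * yy

# Square array of beamlet positions (pixels) in the default frame.
bx, by = np.meshgrid(np.arange(30, 70, 5), np.arange(30, 70, 5))
beamlets = np.column_stack([bx.ravel(), by.ravel()]).astype(float)

def sample_ranges(pts, shift, theta, offset):
    """Ranges read off the map at shifted/rotated beamlet positions, plus a
    global offset standing in for a calibration-curve error."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    p = pts @ rot.T + shift
    return map_coordinates(range_map, [p[:, 1], p[:, 0]], order=1) + offset

# "Measured" ranges: a true transform plus Gaussian noise standing in for the
# 5% distal uncertainty (noise magnitude assumed for the example).
true_shift, true_theta, true_u = np.array([3.0, -2.0]), np.deg2rad(2.0), 1.5
measured = sample_ranges(beamlets, true_shift, true_theta, true_u)
measured += rng.normal(0.0, 1.0, size=measured.size)

def cost(params):
    sx, sy, theta, u = params
    pred = sample_ranges(beamlets, np.array([sx, sy]), theta, u)
    return np.sum((pred - measured) ** 2)   # simple Euclidean distance

fit = minimize(cost, x0=[0.0, 0.0, 0.0, 0.0], method="Nelder-Mead")
print("recovered shift / rotation / CC offset:", fit.x)
```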
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Q; Herrick, A; Hoke, S
Purpose: A new readout technology based on pulsed optically stimulated luminescence is introduced (microSTARii, Landauer, Inc, Glenwood, IL 60425). This investigation searches for approaches that maximize the dosimetry accuracy in clinical applications. Methods: The sensitivity of each optically stimulated luminescence dosimeter (OSLD) was initially characterized by exposing it to a given radiation beam. After readout, the luminescence signal stored in the OSLD was erased by exposing its sensing area to a 21W white LED light for 24 hours. A set of OSLDs with consistent sensitivities was selected to calibrate the dose reader. Higher-order nonlinear curves were also derived from the calibration readings. OSLDs with cumulative doses below 15 Gy were reused. Before an in-vivo dosimetry measurement, the OSLD luminescence signal was erased with the white LED light. Results: For a set of 68 manufacturer-screened OSLDs, the measured sensitivities vary in a range of 17.3%. A sub-set of the OSLDs with sensitivities within ±1% was selected for the reader calibration. Three OSLDs in a group were exposed to a given radiation dose. Nine groups were exposed to radiation doses ranging from 0 to 13 Gy. Additional verifications demonstrated that the reader uncertainty is about 3%. With an external calibration function derived by fitting the OSLD readings to a 3rd-order polynomial, the dosimetry uncertainty dropped to 0.5%. The dose-luminescence response curves of individual OSLDs were characterized. All curves converge within 1% after the sensitivity correction. With all uncertainties considered, the systematic uncertainty is about 2%. Additional tests emulating in-vivo dosimetry by exposing the OSLDs under different radiation sources confirmed the claim. Conclusion: The sensitivity of each individual OSLD should be characterized initially. A 3rd-order polynomial function is a more accurate representation of the dose-luminescence response curve. The dosimetry uncertainty specified by the manufacturer is 4%. Following the proposed approach, it can be controlled to 2%.
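As a rough illustration of the reader-calibration step described above, the sketch below fits dose as a 3rd-order polynomial of the reader signal; the dose/count pairs are invented placeholders, not the study's calibration data.

```python
"""Minimal sketch of a 3rd-order polynomial OSLD dose-response fit
(calibration points are assumed example values)."""
import numpy as np

doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 13.0])        # Gy
counts = np.array([0.0, 1.02e4, 2.10e4, 4.40e4, 9.60e4, 1.52e5, 2.12e5,
                   2.78e5, 3.90e5])                                       # reader signal

# Fit dose as a cubic function of the (sensitivity-corrected) reading.
coeffs = np.polyfit(counts, doses, deg=3)
dose_of = np.poly1d(coeffs)

reading = 1.3e5                       # a new, sensitivity-corrected reading
print(f"estimated dose: {dose_of(reading):.2f} Gy")
```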
A comparison of the Injury Severity Score and the Trauma Mortality Prediction Model.
Cook, Alan; Weddle, Jo; Baker, Susan; Hosmer, David; Glance, Laurent; Friedman, Lee; Osler, Turner
2014-01-01
Performance benchmarking requires accurate measurement of injury severity. Despite its shortcomings, the Injury Severity Score (ISS) remains the industry standard 40 years after its creation. A new severity measure, the Trauma Mortality Prediction Model (TMPM), uses either the Abbreviated Injury Scale (AIS) or DRG International Classification of Diseases-9th Rev. (ICD-9) lexicons and may better quantify injury severity compared with ISS. We compared the performance of TMPM with ISS and other measures of injury severity in a single cohort of patients. We included 337,359 patient records with injuries reliably described in both the AIS and the ICD-9 lexicons from the National Trauma Data Bank. Five injury severity measures (ISS, maximum AIS score, New Injury Severity Score [NISS], ICD-9-Based Injury Severity Score [ICISS], TMPM) were computed using either the AIS or ICD-9 codes. These measures were compared for discrimination (area under the receiver operating characteristic curve), an estimate of proximity to a model that perfectly predicts the outcome (Akaike information criterion), and model calibration curves. TMPM demonstrated superior receiver operating characteristic curve, Akaike information criterion, and calibration using either the AIS or ICD-9 lexicons. Calibration plots demonstrate the monotonic characteristics of the TMPM models contrasted by the nonmonotonic features of the other prediction models. Severity measures were more accurate with the AIS lexicon rather than ICD-9. NISS proved superior to ISS in either lexicon. Since NISS is simpler to compute, it should replace ISS when a quick estimate of injury severity is required for AIS-coded injuries. Calibration curves suggest that the nonmonotonic nature of ISS may undermine its performance. TMPM demonstrated superior overall mortality prediction compared with all other models including ISS whether the AIS or ICD-9 lexicons were used. Because TMPM provides an absolute probability of death, it may allow clinicians to communicate more precisely with one another and with patients and families. Diagnostic study, level I; prognostic study, level II.
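For readers unfamiliar with the two model checks used above, the following sketch computes an AUROC and a binned calibration curve with scikit-learn on synthetic data; the simulated risks and outcomes are stand-ins, not TMPM or ISS output.

```python
"""Sketch of discrimination (AUROC) and calibration-curve checks on
synthetic mortality predictions (all data simulated)."""
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(1)
p_true = rng.uniform(0.01, 0.6, size=5000)               # "true" mortality risks
died = rng.binomial(1, p_true)                            # observed outcomes
p_model = np.clip(p_true + rng.normal(0, 0.05, p_true.size), 0.001, 0.999)

auc = roc_auc_score(died, p_model)                        # discrimination
obs, pred = calibration_curve(died, p_model, n_bins=10)   # calibration curve
print(f"AUROC = {auc:.3f}")
for o, p in zip(obs, pred):
    print(f"predicted {p:.2f}  observed {o:.2f}")
```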
McJimpsey, Erica L
2016-02-25
The prostate-specific antigen (PSA) assays currently employed for the detection of prostate cancer (PCa) lack the specificity needed to differentiate PCa from benign prostatic hyperplasia and have high false positive rates. The PSA calibrants used to create calibration curves in these assays are typically purified from seminal plasma and contain many molecular forms (intact PSA and cleaved subforms). The purpose of this study was to determine if the composition of the PSA molecular forms found in these PSA standards contribute to the lack of PSA test reliability. To this end, seminal plasma purified PSA standards from different commercial sources were investigated by western blot (WB) and in multiple research grade PSA ELISAs. The WB results revealed that all of the PSA standards contained different mass concentrations of intact and cleaved molecular forms. Increased mass concentrations of intact PSA yielded higher immunoassay absorbance values, even between lots from the same manufacturer. Standardization of seminal plasma derived PSA calibrant molecular form mass concentrations and purification methods will assist in closing the gaps in PCa testing measurements that require the use of PSA values, such as the % free PSA and Prostate Health Index by increasing the accuracy of the calibration curves.
NASA Astrophysics Data System (ADS)
McJimpsey, Erica L.
2016-02-01
The prostate-specific antigen (PSA) assays currently employed for the detection of prostate cancer (PCa) lack the specificity needed to differentiate PCa from benign prostatic hyperplasia and have high false positive rates. The PSA calibrants used to create calibration curves in these assays are typically purified from seminal plasma and contain many molecular forms (intact PSA and cleaved subforms). The purpose of this study was to determine if the composition of the PSA molecular forms found in these PSA standards contribute to the lack of PSA test reliability. To this end, seminal plasma purified PSA standards from different commercial sources were investigated by western blot (WB) and in multiple research grade PSA ELISAs. The WB results revealed that all of the PSA standards contained different mass concentrations of intact and cleaved molecular forms. Increased mass concentrations of intact PSA yielded higher immunoassay absorbance values, even between lots from the same manufacturer. Standardization of seminal plasma derived PSA calibrant molecular form mass concentrations and purification methods will assist in closing the gaps in PCa testing measurements that require the use of PSA values, such as the % free PSA and Prostate Health Index by increasing the accuracy of the calibration curves.
MacFarlane, Michael; Wong, Daniel; Hoover, Douglas A; Wong, Eugene; Johnson, Carol; Battista, Jerry J; Chen, Jeff Z
2018-03-01
In this work, we propose a new method of calibrating cone beam computed tomography (CBCT) data sets for radiotherapy dose calculation and plan assessment. The motivation for this patient-specific calibration (PSC) method is to develop an efficient, robust, and accurate CBCT calibration process that is less susceptible to deformable image registration (DIR) errors. Instead of mapping the CT numbers voxel-by-voxel with traditional DIR calibration methods, the PSC method generates correlation plots between deformably registered planning CT and CBCT voxel values, for each image slice. A linear calibration curve specific to each slice is then obtained by least-squares fitting, and applied to the CBCT slice's voxel values. This allows each CBCT slice to be corrected using DIR without altering the patient geometry through regional DIR errors. A retrospective study was performed on 15 head-and-neck cancer patients, each having routine CBCTs and a middle-of-treatment re-planning CT (reCT). The original treatment plan was re-calculated on the patient's reCT image set (serving as the gold standard) as well as the image sets produced by voxel-to-voxel DIR, density-overriding, and the new PSC calibration method. Dose accuracy of each calibration method was compared to the reference reCT data set using common dose-volume metrics and 3D gamma analysis. A phantom study was also performed to assess the accuracy of the DIR and PSC CBCT calibration methods compared with planning CT. Compared with the gold standard using reCT, the average dose metric differences were ≤ 1.1% for all three methods (PSC: -0.3%; DIR: -0.7%; density-override: -1.1%). The average gamma pass rates with thresholds 3%, 3 mm were also similar among the three techniques (PSC: 95.0%; DIR: 96.1%; density-override: 94.4%). An automated patient-specific calibration method was developed which yielded strong dosimetric agreement with the results obtained using a re-planning CT for head-and-neck patients. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
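A minimal sketch of the slice-wise calibration idea described above, assuming the deformably registered planning CT is already available as a voxel array: each CBCT slice gets its own least-squares linear fit, which is then applied to that slice. The arrays here are random placeholders, not clinical data.

```python
"""Illustrative slice-by-slice linear calibration of a CBCT volume against a
deformably registered planning CT (placeholder data, assumed workflow)."""
import numpy as np

rng = np.random.default_rng(2)
cbct = rng.normal(0.0, 300.0, size=(60, 128, 128))                 # CBCT volume (HU-like)
deformed_ct = 1.05 * cbct + 20.0 + rng.normal(0, 15, cbct.shape)   # registered planning CT

calibrated = np.empty_like(cbct)
for k in range(cbct.shape[0]):                  # one linear fit per image slice
    x = cbct[k].ravel()
    y = deformed_ct[k].ravel()
    a, b = np.polyfit(x, y, deg=1)              # least-squares slope/intercept
    calibrated[k] = a * cbct[k] + b             # correct values, keep CBCT geometry

print("slice 0 mean HU shift after calibration:",
      float(np.mean(calibrated[0] - cbct[0])))
```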
Zhu, Lin; Ruan, Jian-Qing; Li, Na; Fu, Peter P; Ye, Yang; Lin, Ge
2016-03-01
Nearly 50% of naturally-occurring pyrrolizidine alkaloids (PAs) are hepatotoxic, and the majority of hepatotoxic PAs are retronecine-type PAs (RET-PAs). However, quantitative measurement of PAs in herbs/foodstuffs is often difficult because most reference PAs are unavailable. In this study, a rapid, selective, and sensitive UHPLC-QTOF-MS method was developed for the estimation of RET-PAs in herbs without requiring corresponding standards. This method is based on our previously established characteristic and diagnostic mass fragmentation patterns and the use of retrorsine for calibration. The use of a single RET-PA (i.e. retrorsine) for constructing the calibration curve was justified by the high similarity, with no significant differences, among the calibration curves constructed from the peak areas of extracted-ion chromatograms of the fragment ions at m/z 120.0813 or 138.0919 versus the concentrations of five representative RET-PAs. The developed method was successfully applied to measure the total content of toxic RET-PAs of diverse structures in fifteen potential PA-containing herbs. Copyright © 2014 Elsevier Ltd. All rights reserved.
Querol, Jorge; Tarongí, José Miguel; Forte, Giuseppe; Gómez, José Javier; Camps, Adriano
2017-05-10
MERITXELL is a ground-based multisensor instrument that includes a multiband dual-polarization radiometer, a GNSS reflectometer, and several optical sensors. Its main goals are twofold: to test data fusion techniques, and to develop Radio-Frequency Interference (RFI) detection, localization and mitigation techniques. The former is necessary to retrieve complementary data useful to develop geophysical models with improved accuracy, whereas the latter aims at solving one of the most important problems of microwave radiometry. This paper describes the hardware design, the instrument control architecture, the calibration of the radiometer, and several captures of RFI signals taken with MERITXELL in an urban environment. The multiband radiometer has a dual linear polarization total-power radiometer topology, and it covers the L-, S-, C-, X-, K-, Ka-, and W-band. Its back-end stage is based on a spectrum analyzer structure which allows real-time signal processing to be performed, while the rest of the sensors are controlled by a host computer where the off-line processing takes place. The calibration of the radiometer is performed using the hot-cold load procedure, together with the tipping curves technique in the case of the five upper frequency bands. Finally, some captures of RFI signals are shown for most of the radiometric bands under analysis, which evidence the problem of RFI in microwave radiometry, and the limitations they impose on external calibration.
Querol, Jorge; Tarongí, José Miguel; Forte, Giuseppe; Gómez, José Javier; Camps, Adriano
2017-01-01
MERITXELL is a ground-based multisensor instrument that includes a multiband dual-polarization radiometer, a GNSS reflectometer, and several optical sensors. Its main goals are twofold: to test data fusion techniques, and to develop Radio-Frequency Interference (RFI) detection, localization and mitigation techniques. The former is necessary to retrieve complementary data useful to develop geophysical models with improved accuracy, whereas the latter aims at solving one of the most important problems of microwave radiometry. This paper describes the hardware design, the instrument control architecture, the calibration of the radiometer, and several captures of RFI signals taken with MERITXELL in an urban environment. The multiband radiometer has a dual linear polarization total-power radiometer topology, and it covers the L-, S-, C-, X-, K-, Ka-, and W-band. Its back-end stage is based on a spectrum analyzer structure which allows real-time signal processing to be performed, while the rest of the sensors are controlled by a host computer where the off-line processing takes place. The calibration of the radiometer is performed using the hot-cold load procedure, together with the tipping curves technique in the case of the five upper frequency bands. Finally, some captures of RFI signals are shown for most of the radiometric bands under analysis, which evidence the problem of RFI in microwave radiometry, and the limitations they impose on external calibration. PMID:28489056
A Comparison of Radiometric Calibration Techniques for Lunar Impact Flashes
NASA Technical Reports Server (NTRS)
Suggs, R.
2016-01-01
Video observations of lunar impact flashes have been made by a number of researchers since the late 1990's and the problem of determination of the impact energies has been approached in different ways (Bellot Rubio, et al., 2000 [1], Bouley, et al., 2012.[2], Suggs, et al. 2014 [3], Rembold and Ryan 2015 [4], Ortiz, et al. 2015 [5]). The wide spectral response of the unfiltered video cameras in use for all published measurements necessitates color correction for the standard filter magnitudes available for the comparison stars. An estimate of the color of the impact flash is also needed to correct it to the chosen passband. Magnitudes corrected to standard filters are then used to determine the luminous energy in the filter passband according to the stellar atmosphere calibrations of Bessell et al., 1998 [6]. Figure 1 illustrates the problem. The camera pass band is the wide black curve and the blue, green, red, and magenta curves show the band passes of the Johnson-Cousins B, V, R, and I filters for which we have calibration star magnitudes. The blackbody curve of an impact flash of temperature 2800K (Nemtchinov, et al., 1998 [7]) is the dashed line. This paper compares the various photometric calibration techniques and how they address the color corrections necessary for the calculation of luminous energy (radiometry) of impact flashes. This issue has significant implications for determination of luminous efficiency, predictions of impact crater sizes for observed flashes, and the flux of meteoroids in the 10s of grams to kilogram size range.
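The color-correction issue discussed above ultimately comes down to integrating an assumed 2800 K blackbody over the camera and filter passbands. The sketch below performs the band integration for rough boxcar approximations of the Johnson-Cousins filters; the band edges and temperature are simplifications for illustration, not the calibrations used in the paper.

```python
"""Sketch: band-integrated radiance of an assumed 2800 K blackbody over crude
boxcar approximations of the B, V, R, I passbands (illustrative only)."""
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl_m, T):
    """Spectral radiance B_lambda in W m^-2 sr^-1 m^-1."""
    return (2 * h * c**2 / wl_m**5) / np.expm1(h * c / (wl_m * k * T))

T_flash = 2800.0                                     # assumed flash temperature (K)
bands = {"B": (0.39e-6, 0.49e-6), "V": (0.50e-6, 0.59e-6),
         "R": (0.57e-6, 0.72e-6), "I": (0.72e-6, 0.90e-6)}   # boxcar approximations

for name, (lo, hi) in bands.items():
    wl = np.linspace(lo, hi, 500)
    band_radiance = np.trapz(planck(wl, T_flash), wl)        # integrate over the band
    print(f"{name}: {band_radiance:.3e} W m^-2 sr^-1")
```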
Calibration Uncertainties in the Droplet Measurement Technologies Cloud Condensation Nuclei Counter
NASA Astrophysics Data System (ADS)
Hibert, Kurt James
Cloud condensation nuclei (CCN) serve as the nucleation sites for the condensation of water vapor in Earth's atmosphere and are important for their effect on climate and weather. The influence of CCN on cloud radiative properties (aerosol indirect effect) is the most uncertain of quantified radiative forcing changes that have occurred since pre-industrial times. CCN influence the weather because intrinsic and extrinsic aerosol properties affect cloud formation and precipitation development. To quantify these effects, it is necessary to accurately measure CCN, which requires accurate calibrations using a consistent methodology. Furthermore, the calibration uncertainties are required to compare measurements from different field projects. CCN uncertainties also aid the integration of CCN measurements with atmospheric models. The commercially available Droplet Measurement Technologies (DMT) CCN Counter is used by many research groups, so it is important to quantify its calibration uncertainty. Uncertainties in the calibration of the DMT CCN counter exist in the flow rate and supersaturation values. The concentration depends on the accuracy of the flow rate calibration, which does not have a large (4.3 %) uncertainty. The supersaturation depends on chamber pressure, temperature, and flow rate. The supersaturation calibration is a complex process since the chamber's supersaturation must be inferred from a temperature difference measurement. Additionally, calibration errors can result from the Kohler theory assumptions, fitting methods utilized, the influence of multiply-charged particles, and calibration points used. In order to determine the calibration uncertainties and the pressure dependence of the supersaturation calibration, three calibrations are done at each pressure level: 700, 840, and 980 hPa. Typically 700 hPa is the pressure used for aircraft measurements in the boundary layer, 840 hPa is the calibration pressure at DMT in Boulder, CO, and 980 hPa is the average surface pressure at Grand Forks, ND. The supersaturation calibration uncertainty is 2.3, 3.1, and 4.4 % for calibrations done at 700, 840, and 980 hPa respectively. The supersaturation calibration change with pressure is on average 0.047 % supersaturation per 100 hPa. The supersaturation calibrations done at UND are 42-45 % lower than supersaturation calibrations done at DMT approximately 1 year previously. Performance checks confirmed that all major leaks developed during shipping were fixed before conducting the supersaturation calibrations. Multiply-charged particles passing through the Electrostatic Classifier may have influenced DMT's activation curves, which is likely part of the supersaturation calibration difference. Furthermore, the fitting method used to calculate the activation size and the limited calibration points are likely significant sources of error in DMT's supersaturation calibration. While the DMT CCN counter's calibration uncertainties are relatively small, and the pressure dependence is easily accounted for, the calibration methodology used by different groups can be very important. The insights gained from the careful calibration of the DMT CCN counter indicate that calibration of scientific instruments using complex methodology is not trivial.
NASA Astrophysics Data System (ADS)
Navarro, Jorge
The goal of the study presented here is to determine the best available nondestructive technique necessary to collect validation data as well as to determine burnup and cooling time of the fuel elements on-site at the Advanced Test Reactor (ATR) canal. This study makes a recommendation on the viability of implementing a permanent fuel scanning system at the ATR canal and leads to the full design of a permanent fuel scan system. The study first determined whether it was possible, and which equipment was necessary, to collect useful spectra from ATR fuel elements at the canal adjacent to the reactor. Once it was established that useful spectra can be obtained at the ATR canal, the next step was to determine which detector and which configuration was better suited to predict burnup and cooling time of fuel elements nondestructively. Three different detectors of High Purity Germanium (HPGe), Lanthanum Bromide (LaBr3), and High Pressure Xenon (HPXe) in two system configurations of above and below the water pool were used during the study. The data collected and analyzed were used to create burnup and cooling time calibration prediction curves for ATR fuel. The next stage of the study was to determine which of the three detectors tested was best suited for the permanent system. From spectra taken and the calibration curves obtained, it was determined that although the HPGe detector yielded better results, a detector that could better withstand the harsh environment of the ATR canal was needed. The in-situ nature of the measurements required a rugged fuel scanning system that is low in maintenance and easy to control. Based on the ATR canal feasibility measurements and calibration results, it was determined that the LaBr3 detector was the best alternative for canal in-situ measurements; however, in order to enhance the quality of the spectra collected using this scintillator, a deconvolution method was developed. Following the development of the deconvolution method for ATR applications, the technique was tested using one-isotope, multi-isotope, and fuel simulated sources. Burnup calibrations were performed using convoluted and deconvoluted data. The calibration results showed that burnup prediction by this method improves when deconvolution is used. The final stage of the deconvolution method development was to perform an irradiation experiment in order to create a surrogate fuel source to test the deconvolution method using experimental data. A conceptual design of the fuel scan system is the path forward, using the rugged LaBr3 detector in an above-the-water configuration together with deconvolution algorithms.
Bayesian inference of Calibration curves: application to archaeomagnetism
NASA Astrophysics Data System (ADS)
Lanos, P.
2003-04-01
The range of errors that occur at different stages of the archaeomagnetic calibration process are modelled using a Bayesian hierarchical model. The archaeomagnetic data obtained from archaeological structures such as hearths, kilns or sets of bricks and tiles, exhibit considerable experimental errors and are typically more or less well dated by archaeological context, history or chronometric methods (14C, TL, dendrochronology, etc.). They can also be associated with stratigraphic observations which provide prior relative chronological information. The modelling we describe in this paper allows all these observations, on materials from a given period, to be linked together, and the use of penalized maximum likelihood for smoothing univariate, spherical or three-dimensional time series data allows representation of the secular variation of the geomagnetic field over time. The smooth curve we obtain (which takes the form of a penalized natural cubic spline) provides an adaptation to the effects of variability in the density of reference points over time. Since our model takes account of all the known errors in the archaeomagnetic calibration process, we are able to obtain a functional highest-posterior-density envelope on the new curve. With this new posterior estimate of the curve available to us, the Bayesian statistical framework then allows us to estimate the calendar dates of undated archaeological features (such as kilns) based on one, two or three geomagnetic parameters (inclination, declination and/or intensity). Date estimates are presented in much the same way as those that arise from radiocarbon dating. In order to illustrate the model and inference methods used, we will present results based on German archaeomagnetic data recently published by a German team.
Demonstration of KHILS two-color IR projection capability
NASA Astrophysics Data System (ADS)
Jones, Lawrence E.; Coker, Jason S.; Garbo, Dennis L.; Olson, Eric M.; Murrer, Robert Lee, Jr.; Bergin, Thomas P.; Goldsmith, George C., II; Crow, Dennis R.; Guertin, Andrew W.; Dougherty, Michael; Marler, Thomas M.; Timms, Virgil G.
1998-07-01
For more than a decade, there has been considerable discussion about using different IR bands for the detection of low contrast military targets. Theory predicts that a target can have little to no contrast against the background in one IR band while having a discernible signature in another IR band. A significant amount of effort has been invested towards establishing hardware that is capable of simultaneously imaging in two IR bands to take advantage of this phenomenon. Focal plane arrays (FPA) are starting to materialize with this simultaneous two-color imaging capability. The Kinetic Kill Vehicle Hardware-in-the-loop Simulator (KHILS) team of the Air Force Research Laboratory and the Guided Weapons Evaluation Facility (GWEF), both at Eglin AFB, FL, have spent the last 10 years developing the ability to project dynamic IR scenes to imaging IR seekers. Through the Wideband Infrared Scene Projector (WISP) program, the capability to project two simultaneous IR scenes to a dual color seeker has been established at KHILS. WISP utilizes resistor arrays to produce the IR energy. Resistor arrays are not ideal blackbodies. The projection of two IR colors with resistor arrays, therefore, requires two optically coupled arrays. This paper documents the first demonstration of two-color simultaneous projection at KHILS. Agema cameras were used for the measurements. The Agema's HgCdTe detector has responsivity from 4 to 14 microns. A blackbody and two IR filters (MWIR equals 4.2 to 7.4 microns, LWIR equals 7.7 to 13 microns) were used to calibrate the Agema in two bands. Each filter was placed in front of the blackbody one at a time, and the temperature of the blackbody was stepped up in incremental amounts. The output counts from the Agema were recorded at each temperature. This calibration process established the radiance to Agema output count curves for the two bands. The WISP optical system utilizes a dichroic beam combiner to optically couple the two resistor arrays. The transmission path of the beam combiner provided the LWIR (6.75 to 12 microns), while the reflective path produced the MWIR (3 to 6.5 microns). Each resistor array was individually projected into the Agema through the beam combiner at incremental output levels. Once again the Agema's output counts were recorded at each resistor array output level. These projections established the resistor array output to Agema count curves for the MWIR and LWIR resistor arrays. Using the radiance to Agema counts curves, the MWIR and LWIR resistor array output to radiance curves were established. With the calibration curves established, a two-color movie was projected and compared to the generated movie radiance values. By taking care to correctly account for the spectral qualities of the Agema camera, the calibration filters, and the dichroic beam combiner, the projections matched the theoretical calculations. In the near future, a Lockheed-Martin Multiple Quantum Well camera with true two-color IR capability will be tested.
On the absolute calibration of SO2 cameras
Lübcke, Peter; Bobrowski, Nicole; Illing, Sebastian; Kern, Christoph; Alvarez Nieves, Jose Manuel; Vogel, Leif; Zielcke, Johannes; Delgados Granados, Hugo; Platt, Ulrich
2013-01-01
This work investigates the uncertainty of results gained through the two commonly used, but quite different, calibration methods (DOAS and calibration cells). Measurements with three different instruments, an SO2 camera, a NFOVDOAS system and an Imaging DOAS (I-DOAS), are presented. We compare the calibration-cell approach with the calibration from the NFOV-DOAS system. The respective results are compared with measurements from an I-DOAS to verify the calibration curve over the spatial extent of the image. The results show that calibration cells, while working fine in some cases, can lead to an overestimation of the SO2 CD by up to 60% compared with CDs from the DOAS measurements. Besides these errors of calibration, radiative transfer effects (e.g. light dilution, multiple scattering) can significantly influence the results of both instrument types. The measurements presented in this work were taken at Popocatepetl, Mexico, between 1 March 2011 and 4 March 2011. Average SO2 emission rates between 4.00 and 14.34 kg s−1 were observed.
2016-06-01
Total Package Procurement (TPP) when it was judged to be practicable and, when not, Fixed Price Incentive Fee (FPIF) or Cost Plus Incentive Fee (CPIF...development contracts in favor of CPIF. ( Cost Plus Award Fee may not have been included in the contracting play book yet.) As a general matter, Packard’s...Group CAPE Cost Assessment and Program Evaluation CD Compact Disc CE Current Estimate CLC Calibrated Learning Curve CPIF Cost Plus Incentive Fee
Hotta, Hiroki; Miki, Yuko; Kawaguchi, Yukiko; Tsunoda, Kin-Ichi; Nakaoka, Atsuko; Ko, Sho; Kimoto, Takashi
2017-01-01
Infrared waveguide spectroscopy using a sapphire rod coated with an amorphous fluoropolymer (Cytop, Asahi Glass Co., Ltd., Japan) has been developed in order to directly observe CO2 in aqueous solutions. Since the amorphous fluoropolymer has a relatively high gas permeability and is hydrophobic, aqueous CO2 permeates into the fluoropolymer coating film while water cannot penetrate it. Calibration curves for CO2 in both the gas phase and the aqueous solution showed good linearity.
Data user's notes of the radio astronomy experiment aboard the OGO-V spacecraft
NASA Technical Reports Server (NTRS)
Haddock, F. T.; Breckenridge, S. L.
1970-01-01
General information concerning the low-frequency radiometer, instrument package launching and operation, and scientific objectives of the flight are provided. Calibration curves and correction factors, with general and detailed information on the preflight calibration procedure are included. The data acquisition methods and the format of the data reduction, both on 35 mm film and on incremental computer plots, are described.
VOC identification and inter-comparison from laboratory biomass burning using PTR-MS and PIT-MS
C. Warneke; J. M. Roberts; P. Veres; J. Gilman; W. C. Kuster; I. Burling; R. Yokelson; J. A. de Gouw
2011-01-01
Volatile organic compounds (VOCs) emitted from fires of biomass commonly found in the southeast and southwest U.S. were investigated with PTR-MS and PIT-MS, which are capable of fast measurements of a large number of VOCs. Both instruments were calibrated with gas standards and mass dependent calibration curves are determined. The sensitivity of the PIT-MS linearly...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnell, E; Ferreira, C; Ahmad, S
Purpose: Accuracy of a RSP-HU calibration curve produced for proton treatment planning is tested by comparing the treatment planning system dose grid to physical doses delivered on film by a Mevion S250 double-scattering proton unit. Methods: A single batch of EBT3 Gafchromic film was used for calibration and measurements. The film calibration curve was obtained using Mevion proton beam reference option 20 (15cm range, 10cm modulation). Paired films were positioned at the center of the spread out Bragg peak (SOBP) in solid water. The calibration doses were verified with an ion chamber, including background and doses from 20cGy to 350cGy. Films were scanned in a flatbed Epson-Expression 10000-XL scanner, and analyzed using the red channel. A Rando phantom was scanned with a GE LightSpeed CT Simulator. A single-field proton plan (Eclipse, Varian) was calculated to deliver 171cGy to the pelvis section (heterogeneous region), using a standard 4×4cm aperture without compensator, 7.89cm beam range, and 5.36cm SOBP. Varied depths of the calculated distal 90% isodose-line were recorded and compared. The dose distribution from film irradiated between Rando slices was compared with the calculated plans using RIT v.6.2. Results: Distal 90% isodose-line depth variation between CT scans was 2mm on average, and 4mm at maximum. Fine calculation of this variation was restricted by the dose calculation grid, as well as the slice thickness. Dose differences between calibrated film measurements and calculated doses were on average 5.93cGy (3.5%), with the large majority of differences forming a normal distribution around 3.5cGy (2%). Calculated doses were almost entirely greater than those measured. Conclusion: The RSP-HU calibration curve is shown to produce distal depth variation within the margin of tolerance (±4.3mm) across all potential scan energies and protocols. Dose distribution calculation is accurate to 2–4% within the SOBP, including areas of high tissue heterogeneity.
Mello, Vinicius M; Oliveira, Flavia C C; Fraga, William G; do Nascimento, Claudia J; Suarez, Paulo A Z
2008-11-01
Three different calibration curves based on (1)H-NMR spectroscopy (300 MHz) were used for quantifying the reaction yield during biodiesel synthesis by esterification of fatty acid mixtures and methanol. For this purpose, the integrated intensities of the hydrogens of the ester methoxy group (3.67 ppm) were correlated with the areas related to the various protons of the alkyl chain (olefinic hydrogens: 5.30-5.46 ppm; aliphatic: 2.67-2.78 ppm, 2.30 ppm, 1.96-2.12 ppm, 1.56-1.68 ppm, 1.22-1.42 ppm, 0.98 ppm, and 0.84-0.92 ppm). The first curve was obtained using the peaks related to the olefinic hydrogens, a second with the paraffinic protons, and the third curve using the integrated intensities of all the hydrogens. A total of 35 samples were examined: 25 samples to build the three different calibration curves and ten samples to serve as external validation samples. The results showed no statistical differences among the three methods, and all presented prediction errors of less than 2.45% with a coefficient of variation (CV) of 4.66%. 2008 John Wiley & Sons, Ltd.
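A hedged sketch of the calibration-curve idea in the abstract above: the methoxy-to-olefinic integral ratio of standards with known ester content is fitted to a line and used to predict the yield of a new sample. All integral values below are invented for illustration.

```python
"""Sketch of an NMR integral-ratio calibration curve for ester yield
(standards and ratios are assumed example values)."""
import numpy as np

known_yield = np.array([10., 25., 40., 55., 70., 85., 100.])       # % ester
ratio = np.array([0.08, 0.21, 0.33, 0.46, 0.58, 0.72, 0.85])       # I_OCH3 / I_olefinic

slope, intercept = np.polyfit(ratio, known_yield, deg=1)           # linear calibration curve

sample_ratio = 0.50                                                 # ratio from a new spectrum
print(f"estimated yield: {slope * sample_ratio + intercept:.1f} %")
```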
Fuss, Martina; Sturtewagen, Eva; De Wagter, Carlos; Georg, Dietmar
2007-07-21
The suitability of radiochromic EBT film was studied for high-precision clinical quality assurance (QA) by identifying the dose response for a wide range of irradiation parameters typically modified in highly-conformal treatment techniques. In addition, uncertainties associated with varying irradiation conditions were determined. EBT can be used for dose assessment of absorbed dose levels as well as relative dosimetry when compared to absolute absorbed dose calibrated using ionization chamber results. For comparison, a silver halide film (Kodak EDR-2) representing the current standard in film dosimetry was included. As an initial step a measurement protocol yielding accurate and precise results was established for a flatbed transparency scanner (Epson Expression 1680 Pro) that was utilized as a film reading instrument. The light transmission measured by the scanner was found to depend on the position of the film on the scanner plate. For three film pieces irradiated with doses of 0 Gy, approximately 1 Gy and approximately 7 Gy, the pixel values measured in portrait or landscape mode differed by 4.7%, 6.2% and 10.0%, respectively. A study of 200 film pieces revealed an excellent sheet-to-sheet uniformity. On a long time scale, the optical development of irradiated EBT film consisted of a slow but steady increase of absorbance which was not observed to cease during 4 months. Sensitometric curves of EBT films obtained under reference conditions (SSD = 95 cm, FS = 5 x 5 cm(2), d = 5 cm) for 6, 10 and 25 MV photon beams did not show any energy dependence. The average separation between all curves was only 0.7%. The variation of the depth d (range 2-25 cm) in the phantom did not affect the dose response of EBT film. Also the influence of the radiation field size (range 3 x 3-40 x 40 cm(2)) on the sensitometric curve was not significant. For EDR-2 films maximum differences between the calibration curves reached 7-8% for X6MV and X25MV. Radiochromic EBT film, in combination with a flatbed scanner, presents a versatile system for high-precision dosimetry in two dimensions, provided that the intrinsic behaviour of the film reading device is taken into account. EBT film itself presents substantial improvements on formerly available models of radiographic and a radiochromic film and its dosimetric characteristics allow us to measure absorbed dose levels in a large variety of situations with a single calibration curve.
NASA Astrophysics Data System (ADS)
Fuss, Martina; Sturtewagen, Eva; DeWagter, Carlos; Georg, Dietmar
2007-07-01
The suitability of radiochromic EBT film was studied for high-precision clinical quality assurance (QA) by identifying the dose response for a wide range of irradiation parameters typically modified in highly-conformal treatment techniques. In addition, uncertainties associated with varying irradiation conditions were determined. EBT can be used for dose assessment of absorbed dose levels as well as relative dosimetry when compared to absolute absorbed dose calibrated using ionization chamber results. For comparison, a silver halide film (Kodak EDR-2) representing the current standard in film dosimetry was included. As an initial step a measurement protocol yielding accurate and precise results was established for a flatbed transparency scanner (Epson Expression 1680 Pro) that was utilized as a film reading instrument. The light transmission measured by the scanner was found to depend on the position of the film on the scanner plate. For three film pieces irradiated with doses of 0 Gy, ~1 Gy and ~7 Gy, the pixel values measured in portrait or landscape mode differed by 4.7%, 6.2% and 10.0%, respectively. A study of 200 film pieces revealed an excellent sheet-to-sheet uniformity. On a long time scale, the optical development of irradiated EBT film consisted of a slow but steady increase of absorbance which was not observed to cease during 4 months. Sensitometric curves of EBT films obtained under reference conditions (SSD = 95 cm, FS = 5 × 5 cm2, d = 5 cm) for 6, 10 and 25 MV photon beams did not show any energy dependence. The average separation between all curves was only 0.7%. The variation of the depth d (range 2-25 cm) in the phantom did not affect the dose response of EBT film. Also the influence of the radiation field size (range 3 × 3-40 × 40 cm2) on the sensitometric curve was not significant. For EDR-2 films maximum differences between the calibration curves reached 7-8% for X6MV and X25MV. Radiochromic EBT film, in combination with a flatbed scanner, presents a versatile system for high-precision dosimetry in two dimensions, provided that the intrinsic behaviour of the film reading device is taken into account. EBT film itself presents substantial improvements on formerly available models of radiographic and a radiochromic film and its dosimetric characteristics allow us to measure absorbed dose levels in a large variety of situations with a single calibration curve.
Honeybul, Stephen; Ho, Kwok M
2016-09-01
Predicting long-term neurological outcomes after severe traumatic brain injury (TBI) is important, but which prognostic model in the context of decompressive craniectomy has the best performance remains uncertain. This prospective observational cohort study included all patients who had severe TBI requiring decompressive craniectomy between 2004 and 2014, in the two neurosurgical centres in Perth, Western Australia. Severe disability, vegetative state, or death were defined as unfavourable neurological outcomes. Area under the receiver-operating-characteristic curve (AUROC) and slope and intercept of the calibration curve were used to assess discrimination and calibration of the CRASH (Corticosteroid-Randomisation-After-Significant-Head injury) and IMPACT (International-Mission-For-Prognosis-And-Clinical-Trial) models, respectively. Of the 319 patients included in the study, 119 (37%) had unfavourable neurological outcomes at 18 months after decompressive craniectomy for severe TBI. Both CRASH (AUROC 0.86, 95% confidence interval 0.81-0.90) and IMPACT full-model (AUROC 0.85, 95% CI 0.80-0.89) were similar in discriminating between favourable and unfavourable neurological outcome at 18 months after surgery (p=0.690 for the difference in AUROC derived from the two models). Although both models tended to over-predict the risks of long-term unfavourable outcome, the IMPACT model had a slightly better calibration than the CRASH model (intercept of the calibration curve=-4.1 vs. -5.7, and log likelihoods -159 vs. -360, respectively), especially when the predicted risks of unfavourable outcome were <80%. Both CRASH and IMPACT prognostic models were good in discriminating between favourable and unfavourable long-term neurological outcome for patients with severe TBI requiring decompressive craniectomy, but the calibration of the IMPACT full-model was better than the CRASH model. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
SU-E-T-223: Computed Radiography Dose Measurements of External Radiotherapy Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aberle, C; Kapsch, R
2015-06-15
Purpose: To obtain quantitative, two-dimensional dose measurements of external radiotherapy beams with a computed radiography (CR) system and to derive volume correction factors for ionization chambers in small fields. Methods: A commercial Kodak ACR2000i CR system with Kodak Flexible Phosphor Screen HR storage foils was used. Suitable measurement conditions and procedures were established. Several corrections were derived, including image fading, length-scale corrections and long-term stability corrections. Dose calibration curves were obtained for cobalt, 4 MV, 8 MV and 25 MV photons, and for 10 MeV, 15 MeV and 18 MeV electrons in a water phantom. Inherent measurement inhomogeneities were studied as well as directional dependence of the response. Finally, 2D scans with ionization chambers were directly compared to CR measurements, and volume correction factors were derived. Results: Dose calibration curves (0.01 Gy to 7 Gy) were obtained for multiple photon and electron beam qualities. For each beam quality, the calibration curves can be described by a single fit equation over the whole dose range. The energy dependence of the dose response was determined. The length scale on the images was adjusted scan-by-scan, typically by 2 percent horizontally and by 3 percent vertically. The remaining inhomogeneities after the system's standard calibration procedure were corrected for. After correction, the homogeneity is on the order of a few percent. The storage foils can be rotated by up to 30 degrees without a significant effect on the measured signal. First results on the determination of volume correction factors were obtained. Conclusion: With CR, quantitative, two-dimensional dose measurements with a high spatial resolution (sub-mm) can be obtained over a large dose range. In order to make use of these advantages, several calibrations, corrections and supporting measurements are needed. This work was funded by the European Metrology Research Programme (EMRP) project HLT09 MetrExtRT Metrology for Radiotherapy using Complex Radiation Fields.
Influence of LCD color reproduction accuracy on observer performance using virtual pathology slides
NASA Astrophysics Data System (ADS)
Krupinski, Elizabeth A.; Silverstein, Louis D.; Hashmi, Syed F.; Graham, Anna R.; Weinstein, Ronald S.; Roehrig, Hans
2012-02-01
The use of color LCDs in medical imaging is growing as more clinical specialties use digital images as a resource in diagnosis and treatment decisions. Telemedicine applications such as telepathology, teledermatology and teleophthalmology rely heavily on color images. However, standard methods for calibrating, characterizing and profiling color displays do not exist, resulting in inconsistent presentation. To address this, we developed a calibration, characterization and profiling protocol for color-critical medical imaging applications. Physical characterization of displays calibrated with and without the protocol revealed high color reproduction accuracy with the protocol. The present study assessed the impact of this protocol on observer performance. A set of 250 breast biopsy virtual slide regions of interest (half malignant, half benign) were shown to 6 pathologists, once using the calibration protocol and once using the same display in its "native" off-the-shelf uncalibrated state. Diagnostic accuracy and time to render a decision were measured. In terms of ROC performance, Az (area under the curve) calibrated = 0.8640; uncalibrated = 0.8558. No statistically significant difference (p = 0.2719) was observed. In terms of interpretation speed, mean calibrated = 4.895 sec, mean uncalibrated = 6.304 sec which is statistically significant (p = 0.0460). Early results suggest a slight advantage diagnostically for a properly calibrated and color-managed display and a significant potential advantage in terms of improved workflow. Future work should be conducted using different types of color images that may be more dependent on accurate color rendering and a wider range of LCDs with varying characteristics.
NASA Astrophysics Data System (ADS)
Bogani, F.; Borchi, E.; Bruzzi, M.; Leroy, C.; Sciortino, S.
1997-02-01
The thermoluminescent (TL) response of Chemical Vapour Deposited (CVD) diamond films to beta irradiation has been investigated. A numerical curve-fitting procedure, calibrated by means of a set of LiF TLD100 experimental spectra, has been developed to deconvolute the complex structured TL glow curves. The values of the activation energy and of the frequency factor related to each of the TL peaks involved have been determined. The TL response of the CVD diamond films to beta irradiation has been compared with the TL response of a set of LiF TLD100 and TLD700 dosimeters. The results have been discussed and compared in view of an assessment of the efficiency of CVD diamond films in future applications as in vivo dosimeters.
NASA Technical Reports Server (NTRS)
Green, Robert O.; Conel, James E.; Vandenbosch, Jeannette; Shimada, Masanobu
1993-01-01
We describe an experiment to calibrate the optical sensor (OPS) on board the Japanese Earth Resources Satellite-1 with data acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). On 27 Aug. 1992 both the OPS and AVIRIS acquired data concurrently over a calibration target on the surface of Rogers Dry Lake, California. The high spectral resolution measurements of AVIRIS have been convolved to the spectral response curves of the OPS. These data in conjunction with the corresponding OPS digitized numbers have been used to generate the radiometric calibration coefficients for the eight OPS bands. This experiment establishes the suitability of AVIRIS for the calibration of spaceborne sensors in the 400 to 2500 nm spectral region.
NASA Astrophysics Data System (ADS)
Petroselli, A.; Grimaldi, S.; Romano, N.
2012-12-01
The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model widely used to estimate losses and direct runoff from a given rainfall event, but its use is not appropriate at sub-daily time resolution. To overcome this drawback, a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), was recently developed, including the Green-Ampt (GA) infiltration model and aiming to distribute the information provided by the SCS-CN method over time. The main concept of the proposed mixed procedure is to use the initial abstraction and the total volume given by the SCS-CN method to calibrate the Green-Ampt soil hydraulic conductivity parameter. The procedure is applied here to a real case study, and a sensitivity analysis concerning the remaining parameters is presented; results show that the CN4GA approach is an ideal candidate for rainfall excess analysis at sub-daily time resolution, in particular for ungauged basins lacking discharge observations.
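For context, the quantities that CN4GA takes from the SCS-CN method (the initial abstraction Ia and the event runoff volume Q) follow from the standard Curve Number relations, sketched below with example values; the Green-Ampt conductivity would then be tuned so that the simulated infiltration excess reproduces Q over the event.

```python
"""Sketch of the standard SCS-CN relations that supply Ia and Q to CN4GA
(curve number and rainfall depth are example values)."""
def scs_cn_runoff(p_mm, cn, lam=0.2):
    """Direct runoff (mm) and initial abstraction (mm) for event rainfall p_mm."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = lam * s                      # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0, ia
    q = (p_mm - ia) ** 2 / (p_mm - ia + s)
    return q, ia

q, ia = scs_cn_runoff(p_mm=60.0, cn=75)
print(f"Ia = {ia:.1f} mm, direct runoff Q = {q:.1f} mm")
# In CN4GA, these two numbers constrain the Green-Ampt hydraulic conductivity
# so that the sub-daily infiltration model reproduces the same event totals.
```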
NASA Astrophysics Data System (ADS)
Pospichal, Bernhard; Küchler, Nils; Löhnert, Ulrich; Crewell, Susanne; Czekala, Harald; Güldner, Jürgen
2016-04-01
Ground-based microwave radiometers (MWR) are becoming widely used in atmospheric remote sensing and start to be routinely operated by national weather services and other institutions. However, common standards for calibration of these radiometers and a detailed knowledge about the error characteristics is needed, in order to assimilate the data into models. Intercomparisons of calibrations by different MWRs have rarely been done. Therefore, two calibration experiments in Lindenberg (2014) and Meckenheim (2015) were performed in the frame of TOPROF (Cost action ES1303) in order to assess uncertainties and differences between various instruments. In addition, a series of experiments were taken in Oklahoma in autumn 2014. The focus lay on the performance of the two main instrument types, which are currently used operationally. These are the MP-Profiler series by Radiometrics Corporation as well as the HATPRO series by Radiometer Physics GmbH (RPG). Both instrument types are operating in two frequency bands, one along the 22 GHz water vapour line, the other one at the lower wing of the 60 GHz oxygen absorption complex. The goal was to establish protocols for providing quality controlled (QC) MWR data and their uncertainties. To this end, standardized calibration procedures for MWR were developed and recommendations for radiometer users were compiled. We focus here mainly on data types, integration times and optimal settings for calibration intervals, both for absolute (liquid nitrogen, tipping curve) as well as relative (hot load, noise diode) calibrations. Besides the recommendations for ground-based MWR operators, we will present methods to determine the accuracy of the calibration as well as means for automatic data quality control. In addition, some results from the intercomparison of different radiometers will be discussed.
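As a simplified illustration of the tipping-curve check mentioned above, the sketch below converts sky brightness temperatures at several elevation angles to opacities and tests whether the opacity-versus-airmass line extrapolates to zero; the mean radiating temperature and the synthetic brightness temperatures are assumptions for the example, not TOPROF values.

```python
"""Sketch of a tipping-curve consistency check for a microwave radiometer
(mean radiating temperature and Tb values are assumed)."""
import numpy as np

t_cos = 2.73                          # cosmic background (K)
t_mr = 275.0                          # assumed mean radiating temperature (K)
elev_deg = np.array([90., 42., 30., 19.5, 14.5])
airmass = 1.0 / np.sin(np.deg2rad(elev_deg))

tau_zenith_true = 0.05                # synthetic zenith opacity
tb = t_mr - (t_mr - t_cos) * np.exp(-tau_zenith_true * airmass)   # synthetic sky Tb

tau = np.log((t_mr - t_cos) / (t_mr - tb))          # opacity at each airmass
slope, intercept = np.polyfit(airmass, tau, deg=1)  # opacity vs. airmass line
print(f"zenith opacity = {slope:.4f}, intercept = {intercept:.2e} (should be ~0)")
# A non-zero intercept indicates a brightness-temperature calibration error,
# which is what the tipping-curve procedure is used to detect and correct.
```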
Carosella, Victorio C; Navia, Jose L; Al-Ruzzeh, Sharif; Grancelli, Hugo; Rodriguez, Walter; Cardenas, Cesar; Bilbao, Jorge; Nojek, Carlos
2009-08-01
This study aims to develop the first Latin-American risk model that can be used as a simple, pocket-card graphic score at bedside. The risk model was developed on 2903 patients who underwent cardiac surgery at the Spanish Hospital of Buenos Aires, Argentina, between June 1994 and December 1999. Internal validation was performed on 708 patients between January 2000 and June 2001 at the same center. External validation was performed on 1087 patients between February 2000 and January 2007 at three other centers in Argentina. In the development dataset the area under the receiver operating characteristic (ROC) curve was 0.73 and the Hosmer-Lemeshow (HL) test was P=0.88. In the internal validation the ROC curve area was 0.77. In the external validation the ROC curve area was 0.81, but imperfect calibration was detected because the observed in-hospital mortality (3.96%) was significantly lower than in the development dataset (8.20%) (P<0.0001). Recalibration was done in 2007, showing an excellent level of agreement between the observed and predicted mortality rates on all patients (P=0.92). This is the first risk model for cardiac surgery developed in a Latin-American population with both internal and external validation. A simple graphic pocket-card score allows an easy bedside application with acceptable statistical precision.
NASA Astrophysics Data System (ADS)
Pool, Sandra; Viviroli, Daniel; Seibert, Jan
2017-11-01
Applications of runoff models usually rely on long and continuous runoff time series for model calibration. However, many catchments around the world are ungauged and estimating runoff for these catchments is challenging. One approach is to perform a few runoff measurements in a previously fully ungauged catchment and to constrain a runoff model by these measurements. In this study we investigated the value of such individual runoff measurements when taken at strategic points in time for applying a bucket-type runoff model (HBV) in ungauged catchments. Based on the assumption that a limited number of runoff measurements can be taken, we sought the optimal sampling strategy (i.e. when to measure the streamflow) to obtain the most informative data for constraining the runoff model. We used twenty gauged catchments across the eastern US, made the assumption that these catchments were ungauged, and applied different runoff sampling strategies. All tested strategies consisted of twelve runoff measurements within one year and ranged from simply using monthly flow maxima to a more complex selection of observation times. In each case the twelve runoff measurements were used to select 100 best parameter sets using a Monte Carlo calibration approach. Runoff simulations using these 'informed' parameter sets were then evaluated for an independent validation period in terms of the Nash-Sutcliffe efficiency of the hydrograph and the mean absolute relative error of the flow-duration curve. Model performance measures were normalized by relating them to an upper and a lower benchmark representing a well-informed and an uninformed model calibration. The hydrographs were best simulated with strategies including high runoff magnitudes as opposed to the flow-duration curves that were generally better estimated with strategies that captured low and mean flows. The choice of a sampling strategy covering the full range of runoff magnitudes enabled hydrograph and flow-duration curve simulations close to a well-informed model calibration. The differences among such strategies covering the full range of runoff magnitudes were small indicating that the exact choice of a strategy might be less crucial. Our study corroborates the information value of a small number of strategically selected runoff measurements for simulating runoff with a bucket-type runoff model in almost ungauged catchments.
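The two scores used in the study above can be written compactly; the sketch below computes the Nash-Sutcliffe efficiency of a hydrograph and the mean absolute relative error of the flow-duration curve on random placeholder series, not the study's catchment data.

```python
"""Sketch of the hydrograph and flow-duration-curve evaluation scores
(observed/simulated series are random placeholders)."""
import numpy as np

rng = np.random.default_rng(3)
obs = np.abs(rng.gamma(2.0, 2.0, size=365))             # daily runoff (mm/d)
sim = obs * (1 + rng.normal(0, 0.2, obs.size))          # a noisy "simulation"

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of the simulated hydrograph."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def fdc_mare(obs, sim):
    """Mean absolute relative error between sorted (flow-duration) curves."""
    o, s = np.sort(obs)[::-1], np.sort(sim)[::-1]
    return np.mean(np.abs(s - o) / o)

print(f"NSE = {nse(obs, sim):.3f}, FDC MARE = {fdc_mare(obs, sim):.3f}")
```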
Calibration-free optical chemical sensors
DeGrandpre, Michael D.
2006-04-11
An apparatus and method for taking absorbance-based chemical measurements are described. In a specific embodiment, an indicator-based pCO2 (partial pressure of CO2) sensor displays sensor-to-sensor reproducibility and measurement stability. These qualities are achieved by: 1) renewing the sensing solution, 2) allowing the sensing solution to reach equilibrium with the analyte, and 3) calculating the response from a ratio of the indicator solution absorbances which are determined relative to a blank solution. Careful solution preparation, wavelength calibration, and stray light rejection also contribute to this calibration-free system. Three pCO2 sensors were calibrated and each had response curves which were essentially identical within the uncertainty of the calibration. Long-term laboratory and field studies showed the response had no drift over extended periods (months). The theoretical response, determined from thermodynamic characterization of the indicator solution, also predicted the observed calibration-free performance.
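A hedged sketch of the blank-referenced absorbance ratio described in the abstract above; the wavelengths, intensities, and dark-count handling are illustrative assumptions, not specifications of the patented sensor.

```python
"""Sketch of a blank-referenced absorbance ratio for an indicator-based sensor
(intensity values and wavelengths are assumed examples)."""
import numpy as np

# Detector intensities (arbitrary units): blank (indicator reference) vs sample.
i_blank = {"lambda1": 52000.0, "lambda2": 48000.0}
i_sample = {"lambda1": 30500.0, "lambda2": 41200.0}
i_dark = 800.0                                    # dark counts, assumed

def absorbance(i_s, i_b, i_d=i_dark):
    """Absorbance of the sample relative to the blank at one wavelength."""
    return -np.log10((i_s - i_d) / (i_b - i_d))

a1 = absorbance(i_sample["lambda1"], i_blank["lambda1"])
a2 = absorbance(i_sample["lambda2"], i_blank["lambda2"])
print(f"absorbance ratio R = A1/A2 = {a1 / a2:.3f}")
# R is then converted to pCO2 through the indicator's thermodynamic response,
# which is what makes the sensor "calibration-free" in the abstract's sense.
```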
NASA Astrophysics Data System (ADS)
Munoz, Joshua
The primary focus of this research is evaluation of feasibility, applicability, and accuracy of Doppler Light Detection And Ranging (LIDAR) sensors as non-contact means for measuring track speed, distance traveled, and curvature. Speed histories, currently measured with a rotary, wheel-mounted encoder, serve a number of useful purposes, one significant use involving derailment investigations. Distance calculation provides a spatial reference system for operators to locate track sections of interest. Railroad curves, whose curvature is measured with an IMU, are monitored to maintain track infrastructure within regulations. Speed measured with high accuracy leads to high-fidelity distance and curvature data through utilization of processor clock rate and left- and right-rail speed differentials during curve navigation, respectively. Wheel-mounted encoders, or tachometers, provide a relatively low-resolution speed profile, exhibit increased noise with increasing speed, and are subject to the inertial behavior of the rail car, which affects output data. The IMU used to measure curvature is dependent on acceleration and yaw rate sensitivity and experiences difficulty in low-speed conditions. Preliminary system tests onboard a "Hy-Rail" utility vehicle capable of traveling on rail show speed capture is possible using the rails as the reference moving target and, furthermore, obtaining speed profiles from both rails allows for the calculation of speed differentials in curves to estimate degrees of curvature. Ground truth distance calibration and curve measurement were also carried out. Distance calibration involved placement of spatial landmarks detected by a sensor to synchronize distance measurements as a pre-processing procedure. Curvature ground truth measurements provided a reference system to confirm measurement results and observe alignment variation throughout a curve. Primary testing occurred onboard a track geometry rail car, measuring rail speed over substantial mileage in various weather conditions, providing high-accuracy data to further calculate distance and curvature along the test routes. Test results indicate the LIDAR system measures speed at higher accuracy than the encoder, free of the noise that grows with increasing speed. Distance calculation is also highly accurate, with results showing high correlation with encoder and ground truth data. Finally, curvature calculation using speed data is shown to have good correlation with IMU measurements and a resolution capable of revealing localized track alignments. Further investigations involve a curve measurement algorithm and speed calibration method independent of external reference systems, namely encoder and ground truth data. The speed calibration results show a high correlation with speed data from the track geometry vehicle. It is recommended that the study be extended to assess the LIDAR's sensitivity to car body motion in order to better isolate the embedded behavior in the speed and curvature profiles. Furthermore, in the interest of progressing the system toward a commercially viable unit, methods for self-calibration and pre-processing that allow for fully independent operation are highly encouraged.
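One plausible way to turn left- and right-rail speeds into degrees of curvature is the rigid-truck geometry sketched below in Python, where each rail's speed is proportional to its distance from the curve center; the gauge value, the 100 ft chord convention and the example speeds are assumptions for illustration and not necessarily the algorithm used in this work.

import numpy as np

GAUGE_FT = 4.7083          # standard track gauge, about 56.5 in, in feet (assumed)
CHORD_CONSTANT = 5729.58   # degree of curvature for a 100 ft chord (approximation)

def curve_from_rail_speeds(v_left, v_right, gauge_ft=GAUGE_FT):
    # Estimate curve radius and degree of curvature from left/right rail speeds,
    # assuming both speeds refer to the same instant on a rigid truck so that
    # speed scales with distance from the curve center.
    v_outer, v_inner = max(v_left, v_right), min(v_left, v_right)
    dv = v_outer - v_inner
    if dv <= 0:
        return np.inf, 0.0                     # tangent (straight) track
    radius_ft = 0.5 * gauge_ft * (v_outer + v_inner) / dv
    degrees = CHORD_CONSTANT / radius_ft
    return radius_ft, degrees

# Example: 40.0 mph on the outer rail, 39.7 mph on the inner rail.
radius, deg = curve_from_rail_speeds(40.0, 39.7)
print(f"radius ~ {radius:.0f} ft, curvature ~ {deg:.2f} degrees")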
Use of a Stanton Tube for Skin-Friction Measurements
NASA Technical Reports Server (NTRS)
Abarbanel, S. S.; Hakkinen, R. J.; Trilling, L.
1959-01-01
A small total-pressure tube resting against a flat-plate surface was used as a Stanton tube and calibrated as a skin-friction meter at various subsonic and supersonic speeds. Laminar flow was maintained for the supersonic runs at a Mach number M(sub infinity) of 2. At speeds between M(sub infinity) = 1.33 and M(sub infinity) = 1.87, the calibrations were carried out in a turbulent boundary layer. The subsonic flows were found to be in transition. The skin-friction readings of a floating-element type of balance served as the reference values against which the Stanton tube was calibrated. A theoretical model was developed which, for moderate values of the shear parameter tau, accurately predicts the performance of the Stanton tube in subsonic and supersonic flows. A "shear correction factor" was found to explain the deviations from the basic model when tau became too large. Compressibility effects were important only in the case of turbulent supersonic flows, and they did not alter the form of the calibration curve. The test Reynolds numbers, based on the distance from the leading edge and free-stream conditions, ranged from 70,000 to 875,000. The turbulent-boundary-layer Reynolds numbers, based on momentum thickness, varied between 650 and 2,300. Both laminar and turbulent velocity profiles were taken and the effect of pressure gradient on the calibration was investigated.
Kumar, Abhinav; Gangadharan, Bevin; Cobbold, Jeremy; Thursz, Mark; Zitzmann, Nicole
2017-09-21
LC-MS and immunoassay can detect protein biomarkers. Immunoassays are more commonly used but can potentially be outperformed by LC-MS. These techniques have limitations, including the necessity to generate separate calibration curves for each biomarker. We present a rapid mass spectrometry-based assay utilising a universal calibration curve. For the first time we analyse clinical samples using the HeavyPeptide IGNIS kit, which establishes a 6-point calibration curve and determines the biomarker concentration in a single LC-MS acquisition. IGNIS was tested using apolipoprotein F (APO-F), a potential biomarker for non-alcoholic fatty liver disease (NAFLD). Human serum and IGNIS prime peptides were digested and the IGNIS assay was used to quantify APO-F in clinical samples. Digestion of IGNIS prime peptides was optimised using trypsin and SMART Digest™. IGNIS was 9 times faster than the conventional LC-MS method for determining the concentration of APO-F in serum. APO-F decreased across NAFLD stages. Inter- and intra-day variation and post-preparation stability for one of the peptides were ≤13% coefficient of variation (CV). SMART Digest™ enabled complete digestion in 30 minutes compared to 24 hours using in-solution trypsin digestion. We have optimised the IGNIS kit to quantify APO-F as a NAFLD biomarker in serum using a single LC-MS acquisition.
Markgraf, Rainer; Deutschinoff, Gerd; Pientka, Ludger; Scholten, Theo; Lorenz, Cristoph
2001-01-01
Background: Mortality predictions calculated using scoring scales are often not accurate in populations other than those in which the scales were developed because of differences in case-mix. The present study investigates the effect of first-level customization, using a logistic regression technique, on discrimination and calibration of the Acute Physiology and Chronic Health Evaluation (APACHE) II and III scales. Method: Probabilities of hospital death for patients were estimated by applying APACHE II and III and comparing these with observed outcomes. Using the split sample technique, a customized model to predict outcome was developed by logistic regression. The overall goodness-of-fit of the original and the customized models was assessed. Results: Of 3383 consecutive intensive care unit (ICU) admissions over 3 years, 2795 patients could be analyzed, and were split randomly into development and validation samples. The discriminative powers of APACHE II and III were unchanged by customization (areas under the receiver operating characteristic [ROC] curve 0.82 and 0.85, respectively). Hosmer-Lemeshow goodness-of-fit tests showed good calibration for APACHE II, but insufficient calibration for APACHE III. Customization improved calibration for both models, with a good fit for APACHE III as well. However, fit was different for various subgroups. Conclusions: The overall goodness-of-fit of APACHE III mortality prediction was improved significantly by customization, but uniformity of fit in different subgroups was not achieved. Therefore, application of the customized model provides no advantage, because differences in case-mix still limit comparisons of quality of care. PMID:11178223
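First-level customization of a severity score is essentially a logistic re-estimation of the relationship between the original predicted risk and observed mortality; a minimal Python sketch with simulated data follows, and the use of the logit of the original prediction as the single covariate is an assumption consistent with, but not taken from, the abstract.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical development sample: original score-predicted probabilities
# and observed hospital mortality (0 = survived, 1 = died).
p_original = rng.uniform(0.02, 0.95, size=1400)
died = rng.binomial(1, np.clip(p_original * 0.8, 0, 1))   # simulated miscalibration

# First-level customization: logistic regression on the logit of the
# original prediction, re-estimating intercept and slope.
logit = np.log(p_original / (1.0 - p_original)).reshape(-1, 1)
model = LogisticRegression()
model.fit(logit, died)

p_customized = model.predict_proba(logit)[:, 1]
print("intercept %.3f, slope %.3f" % (model.intercept_[0], model.coef_[0][0]))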
Monitoring forest land from high altitude and from space
NASA Technical Reports Server (NTRS)
1972-01-01
The significant findings are reported for remote sensing of forest lands conducted during the period October 1, 1965 to December 31, 1972. Forest inventory research included the use of aircraft and space imagery for forest and nonforest land classification, and land use classification by automated procedures, multispectral scanning, and computerized mapping. Forest stress studies involved previsual detection of ponderosa pine under stress from insects and disease, bark beetle infestations in the Black Hills, and root disease impacts on forest stands. Standardization and calibration studies were made to develop a field test of an ERTS-matched four-channel spectrometer. Calibration of focal plane shutters and mathematical modeling of film characteristic curves were also studied. Documents published as a result of all forestry studies funded by NASA for the Earth Resources Survey Program from 1965 through 1972 are listed.
Calibration and comparison of accelerometer cut points in preschool children.
van Cauwenberghe, Eveline; Labarque, Valery; Trost, Stewart G; de Bourdeaudhuij, Ilse; Cardon, Greet
2011-06-01
The present study aimed to develop accelerometer cut points to classify physical activities (PA) by intensity in preschoolers and to investigate discrepancies in PA levels when applying various accelerometer cut points. To calibrate the accelerometer, 18 preschoolers (5.8 ± 0.4 years) performed eleven structured activities and one free play session while wearing a GT1M ActiGraph accelerometer using 15 s epochs. The structured activities were chosen based on the direct observation system Children's Activity Rating Scale (CARS) while the criterion measure of PA intensity during free play was provided using a second-by-second observation protocol (modified CARS). Receiver Operating Characteristic (ROC) curve analyses were used to determine the accelerometer cut points. To examine the classification differences, accelerometer data of four consecutive days from 114 preschoolers (5.5 ± 0.3 years) were classified by intensity according to previously published and the newly developed accelerometer cut points. Differences in predicted PA levels were evaluated using repeated measures ANOVA and Chi Square test. Cut points were identified at 373 counts/15 s for light (sensitivity: 86%; specificity: 91%; Area under ROC curve: 0.95), 585 counts/15 s for moderate (87%; 82%; 0.91) and 881 counts/15 s for vigorous PA (88%; 91%; 0.94). Further, applying various accelerometer cut points to the same data resulted in statistically and biologically significant differences in PA. Accelerometer cut points were developed with good discriminatory power for differentiating between PA levels in preschoolers and the choice of accelerometer cut points can result in large discrepancies.
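A minimal sketch of ROC-based cut-point selection of the kind described above, using simulated 15 s epochs; the counts, criterion labels and the Youden-index criterion are illustrative assumptions, not the study's data or exact procedure.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)

# Simulated 15 s epochs: accelerometer counts and a criterion label from
# direct observation (1 = at least moderate intensity, 0 = below moderate).
counts = np.concatenate([rng.normal(300, 150, 500), rng.normal(900, 300, 500)])
label = np.concatenate([np.zeros(500), np.ones(500)])

fpr, tpr, thresholds = roc_curve(label, counts)
youden = tpr - fpr
best = np.argmax(youden)

print(f"AUC = {roc_auc_score(label, counts):.2f}")
print(f"cut point ~ {thresholds[best]:.0f} counts/15 s "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")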
Lin, Jie; Carter, Corey A; McGlynn, Katherine A; Zahm, Shelia H; Nations, Joel A; Anderson, William F; Shriver, Craig D; Zhu, Kangmin
2015-12-01
Accurate prognosis assessment after non-small-cell lung cancer (NSCLC) diagnosis is an essential step for making effective clinical decisions. This study aimed to develop a prediction model with routinely available variables to assess prognosis in patients with NSCLC in the U.S. Military Health System. We used the linked database from the Department of Defense's Central Cancer Registry and the Military Health System Data Repository. The data set was randomly and equally split into a training set to guide model development and a testing set to validate the model prediction. Stepwise Cox regression was used to identify predictors of survival. Model performance was assessed by calculating the area under the receiver operating characteristic (ROC) curve and constructing calibration plots. A simple risk scoring system was developed to aid quick risk score calculation and risk estimation for NSCLC clinical management. The study subjects were 5054 patients diagnosed with NSCLC between 1998 and 2007. Age, sex, tobacco use, tumor stage, histology, surgery, chemotherapy, peripheral vascular disease, cerebrovascular disease, and diabetes mellitus were identified as significant predictors of survival. Calibration showed high agreement between predicted and observed event rates. The areas under the ROC curve reached 0.841, 0.849, 0.848, and 0.838 at 1, 2, 3, and 5 years, respectively. This is the first NSCLC prognosis model for quick risk assessment within the Military Health System. After external validation, the model can be translated into clinical use both as a web-based tool and through mobile applications easily accessible to physicians, patients, and researchers.
[Spectrometric assessment of thyroid depth within the radioiodine test].
Rink, T; Bormuth, F-J; Schroth, H-J; Braun, S; Zimny, M
2005-01-01
The aim of this study is the validation of a simple method for evaluating the depth of the target volume within the radioiodine test by analyzing the emitted iodine-131 energy spectrum. In a total of 250 patients (102 with a solitary autonomous nodule, 66 with multifocal autonomy, 29 with disseminated autonomy, 46 with Graves' disease, 6 for reducing goiter volume and 1 with only partly resectable papillary thyroid carcinoma), simultaneous uptake measurements in the Compton scatter (210 +/- 110 keV) and photopeak (364 -45/+55 keV) windows were performed over one minute 24 hours after application of the 3 MBq test dose, with subsequent calculation of the respective count ratios. Measurements with a water-filled plastic neck phantom were carried out to determine the relationship between these quotients and the average source depth and to obtain a calibration curve for calculating the depth of the target volume in the 250 patients for comparison with the sonographic reference data. Another calibration curve was obtained by evaluating the results of 125 randomly selected patient measurements to calculate the source depth in the other half of the group. The phantom measurements revealed a highly significant correlation (r = 0.99) between the count ratios and the source depth. Using these calibration data, a good relationship (r = 0.81, average deviation 6 mm corresponding to 22%) between the spectrometric and the sonographic depths was obtained. When using the calibration curve resulting from the 125 patient measurements, the average deviation in the other half of the group was only 3 mm (12%). There was no difference between the disease groups. The described method allows an easy-to-use depth correction of the uptake measurements, providing good results.
Cellular Oxygen and Nutrient Sensing in Microgravity Using Time-Resolved Fluorescence Microscopy
NASA Technical Reports Server (NTRS)
Szmacinski, Henryk
2003-01-01
Oxygen and nutrient sensing is fundamental to the understanding of cell growth and metabolism. This requires identification of optical probes and suitable detection technology without complex calibration procedures. Under this project Microcosm developed an experimental technique that allows for simultaneous imaging of intra- and inter-cellular events. The technique consists of frequency-domain Fluorescence Lifetime Imaging Microscopy (FLIM), a set of identified oxygen and pH probes, and methods for fabrication of microsensors. Specifications for electronic and optical components of FLIM instrumentation are provided. Hardware and software were developed for data acquisition and analysis. Principles, procedures, and representative images are demonstrated. Suitable lifetime-sensitive oxygen, pH, and glucose probes for intra- and extra-cellular measurements of analyte concentrations have been identified and tested. Lifetime sensing and imaging have been performed using PBS buffer, culture media, and yeast cells as model systems. Spectral specifications, calibration curves, and probe availability are also provided in the report.
Fracture resistance of a TiB2 particle/SiC matrix composite at elevated temperature
NASA Technical Reports Server (NTRS)
Jenkins, Michael G.; Salem, Jonathan A.; Seshadri, Srinivasa G.
1988-01-01
The fracture resistance of a commercial TiB2 particle/SiC matrix composite was evaluated at temperatures ranging from 20 to 1400 C. A laser interferometric strain gauge (LISG) was used to continuously monitor the crack mouth opening displacement (CMOD) of the chevron-notched and straight-notched, three-point bend specimens used. Crack growth resistance curves (R-curves) were determined from the load versus displacement curves and displacement calibrations. Fracture toughness, work-of-fracture, and R-curve levels were found to decrease with increasing temperature. Microstructure, fracture surface, and oxidation coat were examined to explain the fracture behavior.
Fracture resistance of a TiB2 particle/SiC matrix composite at elevated temperature
NASA Technical Reports Server (NTRS)
Jenkins, Michael G.; Salem, Jonathan A.; Seshadri, Srinivasa G.
1989-01-01
The fracture resistance of a commercial TiB2 particle/SiC matrix composite was evaluated at temperatures ranging from 20 to 1400 C. A laser interferometric strain gauge (LISG) was used to continuously monitor the crack mouth opening displacement (CMOD) of the chevron-notched and straight-notched, three-point bend specimens used. Crack growth resistance curves (R-curves) were determined from the load versus displacement curves and displacement calibrations. Fracture toughness, work-of-fracture, and R-curve levels were found to decrease with increasing temperature. Microstructure, fracture surface, and oxidation coat were examined to explain the fracture behavior.
NASA Astrophysics Data System (ADS)
Engeland, K.; Steinsland, I.; Petersen-Øverleir, A.; Johansen, S.
2012-04-01
The aim of this study is to assess the uncertainties in streamflow simulations when uncertainties in both observed inputs (precipitation and temperature) and streamflow observations used in the calibration of the hydrological model are explicitly accounted for. To achieve this goal we applied the elevation-distributed HBV model operating on daily time steps to a small high-elevation catchment in Southern Norway where the seasonal snow cover is important. The uncertainties in precipitation inputs were quantified using conditional simulation. This procedure accounts for the uncertainty related to the density of the precipitation network, but neglects uncertainties related to measurement bias/errors and eventual elevation gradients in precipitation. The uncertainties in temperature inputs were quantified using a Bayesian temperature interpolation procedure where the temperature lapse rate is re-estimated every day. The uncertainty in the lapse rate was accounted for, whereas the sampling uncertainty related to network density was neglected. For every day a random sample of precipitation and temperature inputs was drawn to be applied as inputs to the hydrologic model. The uncertainties in observed streamflow were assessed based on the uncertainties in the rating curve model. A Bayesian procedure was applied to estimate the probability for rating curve models with 1 to 3 segments and the uncertainties in their parameters. This method neglects uncertainties related to errors in observed water levels. Note that one rating curve was drawn to make one realisation of a whole time series of streamflow, thus the rating curve errors lead to a systematic bias in the streamflow observations. All these uncertainty sources were linked together in both calibration and evaluation of the hydrologic model using a DREAM-based MCMC routine. Effects of having less information (e.g. missing one streamflow measurement for defining the rating curve or missing one precipitation station) were also investigated.
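The rating curve underlying the streamflow uncertainty can be illustrated with a single-segment power-law fit in Python, a simplified stand-in for the multi-segment Bayesian treatment described above; the stage-discharge gaugings below are invented.

import numpy as np
from scipy.optimize import curve_fit

def rating_curve(h, a, h0, b):
    # Single-segment power-law rating curve Q = a * (h - h0)**b.
    return a * np.clip(h - h0, 1e-6, None) ** b

# Hypothetical stage (m) and discharge (m3/s) gaugings.
stage = np.array([0.42, 0.55, 0.71, 0.90, 1.10, 1.35, 1.62])
discharge = np.array([0.8, 1.9, 4.1, 7.9, 13.5, 22.0, 34.5])

params, cov = curve_fit(rating_curve, stage, discharge,
                        p0=[10.0, 0.2, 2.0], maxfev=10000)
a, h0, b = params
print(f"Q = {a:.2f} * (h - {h0:.2f})^{b:.2f}")

# Streamflow for an observed water level of 1.00 m under this rating curve.
print(f"Q(1.00 m) = {rating_curve(1.00, *params):.1f} m3/s")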
Fault detection and diagnosis of photovoltaic systems
NASA Astrophysics Data System (ADS)
Wu, Xing
The rapid growth of the solar industry over the past several years has expanded the significance of photovoltaic (PV) systems. One of the primary aims of research in building-integrated PV systems is to improve the system's efficiency, availability, and reliability. Although much work has been done on technological design to increase a photovoltaic module's efficiency, there is little research so far on fault diagnosis for PV systems. Faults in a PV system, if not detected, may not only reduce power generation, but also threaten the availability and reliability, effectively the "security" of the whole system. In this paper, first a circuit-based simulation baseline model of a PV system with maximum power point tracking (MPPT) is developed using MATLAB software. MATLAB is one of the most popular tools for integrating computation, visualization and programming in an easy-to-use modeling environment. Second, data from a PV system operating normally at variable surface temperatures and insolation levels are collected. The developed simulation model of the PV system is then calibrated and improved by comparing modeled I-V and P-V characteristics with measured I-V and P-V characteristics to make sure the simulated curves are close to the measured values from the experiments. Finally, based on the circuit-based simulation model, a PV model of various types of faults will be developed by changing conditions or inputs in the MATLAB model, and the I-V and P-V characteristic curves, and the time-dependent voltage and current characteristics of the fault modalities will be characterized for each type of fault. These will be developed as benchmark I-V or P-V, or prototype transient curves. If a fault occurs in a PV system, polling and comparing actual measured I-V and P-V characteristic curves with both normal operational curves and these baseline fault curves will aid in fault diagnosis.
Measurement of large steel plates based on linear scan structured light scanning
NASA Astrophysics Data System (ADS)
Xiao, Zhitao; Li, Yaru; Lei, Geng; Xi, Jiangtao
2018-01-01
A measuring method based on linear structured light scanning is proposed to achieve accurate measurement of the complex internal shape of large steel plates. Firstly, using a calibration plate with round marks, an improved line-scan calibration method is designed, through which the internal and external parameters of the camera are determined. Secondly, the images of the steel plates are acquired by a line-scan camera; the Canny edge detection method is then used to extract approximate contours of the steel plate images, and a Gaussian fitting algorithm is used to extract the sub-pixel edges of the contours. Thirdly, to address the problem of inaccurate restoration of contour size, the horizontal and vertical error curves of the images are obtained by measuring the distances between adjacent points on a grid of known dimensions. Finally, these horizontal and vertical error curves are used to correct the contours of the steel plates, and, combined with the internal and external calibration parameters, the size of the contours is calculated. The experimental results demonstrate that the proposed method achieves an error of 1 mm/m in a 1.2 m × 2.6 m field of view, which satisfies the demands of industrial measurement.
McJimpsey, Erica L.
2016-01-01
The prostate-specific antigen (PSA) assays currently employed for the detection of prostate cancer (PCa) lack the specificity needed to differentiate PCa from benign prostatic hyperplasia and have high false positive rates. The PSA calibrants used to create calibration curves in these assays are typically purified from seminal plasma and contain many molecular forms (intact PSA and cleaved subforms). The purpose of this study was to determine if the composition of the PSA molecular forms found in these PSA standards contribute to the lack of PSA test reliability. To this end, seminal plasma purified PSA standards from different commercial sources were investigated by western blot (WB) and in multiple research grade PSA ELISAs. The WB results revealed that all of the PSA standards contained different mass concentrations of intact and cleaved molecular forms. Increased mass concentrations of intact PSA yielded higher immunoassay absorbance values, even between lots from the same manufacturer. Standardization of seminal plasma derived PSA calibrant molecular form mass concentrations and purification methods will assist in closing the gaps in PCa testing measurements that require the use of PSA values, such as the % free PSA and Prostate Health Index by increasing the accuracy of the calibration curves. PMID:26911983
NASA Astrophysics Data System (ADS)
Tran, Quoc Anh; Chevalier, Bastien; Benz, Miguel; Breul, Pierre; Gourvès, Roland
2017-06-01
Recent technological developments of the light dynamic penetration test Panda 3 ® provide a dynamic load-penetration curve σp - sp for each impact. This curve is influenced by the mechanical and physical properties of the investigated granular media. In order to analyze and exploit the load-penetration curve, a numerical model of the penetration test using the 3D Discrete Element Method is proposed for reproducing tests under dynamic conditions in granular media. All impact parameters used in this model were first calibrated against the mechanical and geometrical properties of the hammer and the rod. There is good agreement between experimental results and those obtained from simulations in 2D or 3D. After creating a numerical sample, the Panda 3 ® test is simulated, and the dynamic load-penetration curve occurring at the tip can be measured directly for each impact. Using the force and acceleration measured in the top part of the rod, it is possible to separate the incident and reflected waves and then calculate the tip's load-penetration curve. The load-penetration curve obtained is qualitatively similar to that obtained in experimental tests. In addition, the frequency content of the simulated signals also agrees well with that measured in reality when the tip resistance is qualitatively similar.
Medellín-Azuara, Josué; Harou, Julien J; Howitt, Richard E
2010-11-01
Given the high proportion of water used for agriculture in certain regions, the economic value of agricultural water can be an important tool for water management and policy development. This value is quantified using economic demand curves for irrigation water. Such demand functions show the incremental contribution of water to agricultural production. Water demand curves are estimated using econometric or optimisation techniques. Calibrated agricultural optimisation models allow the derivation of demand curves using smaller datasets than econometric models. This paper introduces these subject areas then explores the effect of spatial aggregation (upscaling) on the valuation of water for irrigated agriculture. A case study from the Rio Grande-Rio Bravo Basin in North Mexico investigates differences in valuation at farm and regional aggregated levels under four scenarios: technological change, warm-dry climate change, changes in agricultural commodity prices, and water costs for agriculture. The scenarios consider changes due to external shocks or new policies. Positive mathematical programming (PMP), a calibrated optimisation method, is the deductive valuation method used. An exponential cost function is compared to the quadratic cost functions typically used in PMP. Results indicate that the economic value of water at the farm level and the regionally aggregated level are similar, but that the variability and distributional effects of each scenario are affected by aggregation. Moderately aggregated agricultural production models are effective at capturing average-farm adaptation to policy changes and external shocks. Farm-level models best reveal the distribution of scenario impacts. Copyright © 2009 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, S; Kim, K; Jung, H
Purpose: Small animal irradiators are used in preclinical studies to optimize new radiation therapy techniques, with the animals irradiated by whole- or partial-body exposure. In this study, the dosimetric characterization of a small animal irradiator was carried out in small fields using radiochromic films. Material & Methods: The study was performed on a commercial animal irradiator (XRAD-320, Precision x-ray Inc, North Brantford) with radiochromic films (EBT2, Ashland Inc, Covington). A calibration curve relating delivered dose and optical density (red channel) was generated, and the films were scanned with an Epson 1000XL scanner (Epson America Inc., Long Beach, CA). We evaluated the dosimetric characteristics of the irradiator at 260 kV using the various filters supplied by the manufacturer: F1 (2.0 mm aluminum; HVL about 1.0 mm Cu) and F2 (0.75 mm tin + 0.25 mm copper + 1.5 mm aluminum; HVL about 3.7 mm Cu). For each collimator size (3, 5, 7, and 10 mm) we calculated the percentage depth dose (PDD); the source-surface distance (SSD) was 17.3 cm, chosen considering the dose rate. Results: The films were irradiated at 260 kV and 10 mA, with the exposure time increased in 5 s intervals from 5 s to 120 s. The film calibration curve was fitted with a cubic function; the relationship between optical density and dose was Y = 0.1405X^3 − 2.916X^2 + 25.566X + 2.238 (R^2 = 0.994). Based on the calibration curve, we calculated the PDD for each filter and collimator size. When the PDD at a depth of 3 mm (chosen considering animal size) was compared, the difference across collimator sizes was 4.50% with no filter, 1.53% with F1, and within 2.17% with F2. Conclusion: We determined PDD curves for the small animal irradiator as a function of collimator size and filter type using radiochromic films. The resulting PDD curves make it possible to deliver a range of doses with this system.
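A sketch of a cubic optical-density-to-dose calibration like the one quoted above, in Python; the film readings and doses are invented, so the fitted coefficients will not reproduce the published ones.

import numpy as np

# Hypothetical calibration points: net optical density (red channel) vs. delivered dose (cGy).
optical_density = np.array([0.02, 0.05, 0.09, 0.14, 0.20, 0.27, 0.35])
dose_cgy = np.array([3.0, 8.0, 16.0, 27.0, 42.0, 60.0, 83.0])

# Cubic calibration curve dose = c3*OD^3 + c2*OD^2 + c1*OD + c0, as in the abstract.
coeffs = np.polyfit(optical_density, dose_cgy, deg=3)
calibration = np.poly1d(coeffs)

# Goodness of fit (R^2).
residuals = dose_cgy - calibration(optical_density)
r2 = 1 - np.sum(residuals**2) / np.sum((dose_cgy - dose_cgy.mean())**2)
print("dose(OD) =", calibration)
print(f"R^2 = {r2:.4f}")

# Convert a measured film optical density to dose with the fitted curve.
print(f"OD 0.23 -> {calibration(0.23):.1f} cGy")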
WE-D-17A-06: Optically Stimulated Luminescence Detectors as ‘LET-Meters’ in Proton Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granville, D; Sahoo, N; Sawakuchi, GO
Purpose: To demonstrate and evaluate the potential of optically stimulated luminescence (OSL) detectors (OSLDs) for measurements of linear energy transfer (LET) in therapeutic proton beams. Methods: Batches of Al2O3:C OSLDs were irradiated with an absorbed dose of 0.2 Gy in unmodulated proton beams of varying LET (0.67 keV/μm to 2.58 keV/μm). The OSLDs were read using continuous wave (CW-OSL) and pulsed (P-OSL) stimulation modes. We parameterized and calibrated three characteristics of the OSL signals as functions of LET: CW-OSL curve shape, P-OSL curve shape and the ratio of the two OSL emission band intensities (ultraviolet/blue ratio). Calibration curves were created for each of these characteristics to describe their behaviors as functions of LET. The true LET values were determined using a validated Monte Carlo model of the proton therapy nozzle used to irradiate the OSLDs. We then irradiated batches of OSLDs with an absorbed dose of 0.2 Gy at various depths in two modulated proton beams (140 MeV, 4 cm wide spread-out Bragg peak (SOBP) and 250 MeV, 10 cm wide SOBP). The LET values were calculated using the OSL response and the calibration curves. Finally, measured LET values were compared to the true values determined using Monte Carlo simulations. Results: The CW-OSL curve shape, P-OSL curve shape and the ultraviolet/blue ratio provided proton LET estimates within 12.4%, 5.7% and 30.9% of the true values, respectively. Conclusion: We have demonstrated that LET can be measured within 5.7% using Al2O3:C OSLDs in the therapeutic proton beams used in this investigation. From a single OSLD readout, it is possible to measure both the absorbed dose and LET. This has potential future applications in proton therapy quality assurance, particularly for treatment plans based on optimization of LET distributions. This research was partially supported by the Natural Sciences and Engineering Research Council of Canada.
Yamamoto, Yosuke; Terada, Kazuhiko; Ohta, Mitsuyasu; Mikami, Wakako; Yokota, Hajime; Hayashi, Michio; Miyashita, Jun; Azuma, Teruhisa; Fukuma, Shingo; Fukuhara, Shunichi
2017-01-01
Objective Diagnosis of community-acquired pneumonia (CAP) in the elderly is often delayed because of atypical presentation and non-specific symptoms, such as appetite loss, falls and disturbance in consciousness. The aim of this study was to investigate the external validity of existing prediction models and the added value of the non-specific symptoms for the diagnosis of CAP in elderly patients. Design Prospective cohort study. Setting General medicine departments of three teaching hospitals in Japan. Participants A total of 109 elderly patients who consulted for upper respiratory symptoms between 1 October 2014 and 30 September 2016. Main outcome measures The reference standard for CAP was chest radiograph evaluated by two certified radiologists. The existing models were externally validated for diagnostic performance by calibration plot and discrimination. To evaluate the additional value of the non-specific symptoms to the existing prediction models, we developed an extended logistic regression model. Calibration, discrimination, category-free net reclassification improvement (NRI) and decision curve analysis (DCA) were investigated in the extended model. Results Among the existing models, the model by van Vugt demonstrated the best performance, with an area under the curve of 0.75 (95% CI 0.63 to 0.88); the calibration plot showed good fit despite a significant Hosmer-Lemeshow test (p=0.017). Among the non-specific symptoms, appetite loss had a positive likelihood ratio of 3.2 (2.0–5.3), negative likelihood ratio of 0.4 (0.2–0.7) and OR of 7.7 (3.0–19.7). Addition of appetite loss to the model by van Vugt led to improved calibration (p=0.48), an NRI of 0.53 (p=0.019) and higher net benefit by DCA. Conclusions Information on appetite loss improved the performance of an existing model for the diagnosis of CAP in the elderly. PMID:29122806
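The decision curve analysis mentioned above compares models by their net benefit across risk thresholds; the Python sketch below applies the standard net-benefit formula, NB = TP/n - (FP/n) * pt/(1 - pt), to simulated predictions rather than the study data.

import numpy as np

def net_benefit(y_true, y_prob, threshold):
    # Net benefit of treating patients whose predicted risk exceeds the threshold.
    n = len(y_true)
    treat = y_prob >= threshold
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - (fp / n) * threshold / (1.0 - threshold)

rng = np.random.default_rng(3)
y_true = rng.binomial(1, 0.3, size=500)                       # simulated pneumonia status
y_prob = np.clip(0.3 + 0.4 * (y_true - 0.3) + rng.normal(0, 0.15, 500), 0.01, 0.99)

for pt in (0.1, 0.2, 0.3, 0.4):
    print(f"threshold {pt:.1f}: net benefit {net_benefit(y_true, y_prob, pt):+.3f}")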
Status of the TESS Science Processing Operations Center
NASA Technical Reports Server (NTRS)
Jenkins, Jon M.; Twicken, Joseph D.; Campbell, Jennifer; Tenebaum, Peter; Sanderfer, Dwight; Davies, Misty D.; Smith, Jeffrey C.; Morris, Rob; Mansouri-Samani, Masoud; Girouardi, Forrest;
2017-01-01
The Transiting Exoplanet Survey Satellite (TESS) science pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center based on the highly successful Kepler Mission science pipeline. Like the Kepler pipeline, the TESS science pipeline will provide calibrated pixels, simple and systematic error-corrected aperture photometry, and centroid locations for all 200,000+ target stars, observed over the 2-year mission, along with associated uncertainties. The pixel and light curve products are modeled on the Kepler archive products and will be archived to the Mikulski Archive for Space Telescopes (MAST). In addition to the nominal science data, the 30-minute Full Frame Images (FFIs) simultaneously collected by TESS will also be calibrated by the SPOC and archived at MAST. The TESS pipeline will search through all light curves for evidence of transits that occur when a planet crosses the disk of its host star. The Data Validation pipeline will generate a suite of diagnostic metrics for each transit-like signature discovered, and extract planetary parameters by fitting a limb-darkened transit model to each potential planetary signature. The results of the transit search will be modeled on the Kepler transit search products (tabulated numerical results, time series products, and pdf reports) all of which will be archived to MAST.
Distinguishing Biologically Relevant Hexoses by Water Adduction to the Lithium-Cationized Molecule.
Campbell, Matthew T; Chen, Dazhe; Wallbillich, Nicholas J; Glish, Gary L
2017-10-03
A method to distinguish the four most common biologically relevant underivatized hexoses, d-glucose, d-galactose, d-mannose, and d-fructose, using only mass spectrometry with no prior separation/derivatization step has been developed. Electrospray of a solution containing hexose and a lithium salt generates [Hexose+Li]+. The lithium-cationized hexoses adduct water in a quadrupole ion trap. The rate of this water adduction reaction can be used to distinguish the four hexoses. Additionally, for each hexose, multiple lithiation sites are possible, allowing for multiple structures of [Hexose+Li]+. Electrospray produces at least one structure that reacts with water and at least one that does not. The ratio of unreactive lithium-cationized hexose to total lithium-cationized hexose is unique for the four hexoses studied, providing a second method for distinguishing the isomers. Use of the water adduction reaction rate or the unreactive ratio provides two separate methods for confidently (p ≤ 0.02) distinguishing the most common biologically relevant hexoses using only femtomoles of hexose. Additionally, binary mixtures of glucose and fructose were studied. A calibration curve was created by measuring the reaction rate of various samples with different ratios of fructose and glucose. The calibration curve was used to accurately measure the percentage of fructose in three samples of high fructose corn syrup (<4% error).
Pressure-Water Content Relations for a Sandy, Granitic Soil Under Field and Laboratory Conditions
NASA Astrophysics Data System (ADS)
Chandler, D. G.; McNamara, J. M.; Gribb, M. M.
2001-12-01
A new sensor was developed to measure soil water potential in order to determine the predominant mechanisms of snowmelt delivery to streamflow. The sensors were calibrated for +50 to -300 cm for application on steep granitic slopes and deployed at three depths and two locations on a slope in a headwater catchment of the Idaho Batholith throughout the 2001 snowmelt season. Soil moisture was measured simultaneously with Water Content Reflectometers (Campbell Scientific, Logan, UT), which were calibrated in situ with Time Domain Reflectometry measurements. Sensor performance was evaluated in a laboratory soil column via side-by-side monitoring during injection of water with a cone permeameter. Soil characteristic curves were also determined for the field site by multi-step outflow tests. Comparison of the results from the field study to those from the laboratory experiment and to the characteristic curves demonstrates the utility of the new sensor for recording dynamic changes in soil water status. During snowmelt, the sensor responded to both matric potential and bypass-flow pore potential. Large shifts in the pressure record that correspond to changes in the infiltration flux indicate initiation and cessation of macropore flow. The pore pressure records may be used to document the frequency, timing and duration of bypass flow that are not apparent from the soil moisture records.
NASA Technical Reports Server (NTRS)
Schanzer, Dena; Staenz, Karl
1992-01-01
An Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data set acquired over Canal Flats, B.C., on 14 Aug. 1990, was used for the purpose of developing methodologies for surface reflectance retrieval using the 5S atmospheric code. A scene of Rogers Dry Lake, California (23 Jul. 1990), acquired within three weeks of the Canal Flats scene, was used as a potential reference for radiometric calibration purposes and for comparison with other studies using primarily LOWTRAN7. Previous attempts at surface reflectance retrieval indicated that reflectance values in the gaseous absorption bands had the poorest accuracy. Modifications to 5S to use 1 nm step size, in order to make fuller use of the 20 cm⁻¹ resolution of the gaseous absorption data, resulted in some improvement in the accuracy of the retrieved surface reflectance. Estimates of precipitable water vapor using non-linear least squares regression and simple ratioing techniques such as the CIBR (Continuum Interpolated Band Ratio) technique or the narrow/wide technique, which relate ratios of combinations of bands to precipitable water vapor through calibration curves, were found to vary widely. The estimates depended on the bands used for the estimation; none provided entirely satisfactory surface reflectance curves.
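The CIBR mentioned above ratios the radiance in a water-absorption band against a continuum interpolated linearly between two shoulder bands; the Python sketch below shows that ratio with invented band centers and radiances, with the subsequent conversion to precipitable water vapor left to model-derived calibration curves.

def cibr(l_absorption, l_shoulder1, l_shoulder2,
         w_absorption, w_shoulder1, w_shoulder2):
    # Continuum Interpolated Band Ratio: radiance in the absorption band divided
    # by the continuum radiance linearly interpolated (in wavelength) between
    # the two shoulder bands.
    frac = (w_absorption - w_shoulder1) / (w_shoulder2 - w_shoulder1)
    continuum = (1.0 - frac) * l_shoulder1 + frac * l_shoulder2
    return l_absorption / continuum

# Hypothetical radiances around the 940 nm water vapour feature.
ratio = cibr(l_absorption=21.4, l_shoulder1=30.2, l_shoulder2=28.8,
             w_absorption=940.0, w_shoulder1=865.0, w_shoulder2=1030.0)
print(f"CIBR = {ratio:.3f}")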
Pleural tissue hyaluronan produced by postmortem ventilation in rabbits.
Wang, P M; Lai-Fook, S J
2000-01-01
We developed a method that used Alcian blue bound to hyaluronan to measure pleural hyaluronan in rabbits postmortem. Rabbits were killed, then ventilated with 21% O2-5% CO2-74% N2 for 3 h. The pleural liquid was removed by suction and 5 ml of Alcian blue stock solution (0.33 mg/ml, pH 3.3) was injected into each chest cavity. After 10 min, the Alcian blue solution was removed and the unbound Alcian blue solution (supernatant) separated by centrifugation and filtration. The supernatant transmissibility (T) was measured spectrophotometrically at 613 nm. Supernatant Alcian blue concentration (Cab) was obtained from a calibration curve of T versus dilutions of stock solution Cab. Alcian blue bound to pleural tissue hyaluronan was obtained by subtracting supernatant Cab from stock solution Cab. Pleural tissue hyaluronan was obtained from a calibration curve of hyaluronan versus Alcian blue bound to hyaluronan. Compared with control rabbits, pleural tissue hyaluronan (0.21 +/- 0.04 mg/kg) increased twofold, whereas pleural liquid volume decreased by 30% after 3 h of ventilation. Pleural effusions present 3 h postmortem without ventilation did not change pleural tissue hyaluronan from control values. Thus ventilation-induced pleural liquid shear stress, not increased filtration, was the stimulus for the increased hyaluronan produced from pleural mesothelial cells.
LCMS analysis of fingerprints, the amino acid profile of 20 donors.
de Puit, Marcel; Ismail, Mahado; Xu, Xiaoma
2014-03-01
The analysis of amino acids present in fingerprints has been studied several times. In this paper, we report a method for the analysis of amino acids using fluorenylmethyloxycarbonyl chloride derivatization for LC separation and MS detection. We obtained good results with regard to the calibration curves and the limits of detection and quantification (LOD and LOQ) for the target compounds. The extraction of the amino acids from the substrates used proved to be very efficient. Analysis of the derivatized amino acids enabled us to obtain full amino acid profiles for 20 donors. The inter-donor variability is, as expected, rather large, with serine as the most abundant constituent, and when examining the total amino acid profile per donor, a characteristic pattern can be observed. Some amino acids were not detected in some donors or fell outside the range of the calibration curve, whereas others showed a surprisingly high amount of material in the deposition analyses. Further investigations will have to address the intra-donor variability of the amino acid profiles of fingerprints. Through the development of the analytical method and its application to the analysis of fingerprints, we were able to gain insight into the variability of fingerprint constituents between donors. © 2013 American Academy of Forensic Sciences.
Determination of Flavonoids in Wine by High Performance Liquid Chromatography
NASA Astrophysics Data System (ADS)
da Queija, Celeste; Queirós, M. A.; Rodrigues, Ligia M.
2001-02-01
The experiment presented is an application of HPLC to the analysis of flavonoids in wines, designed for students of instrumental methods. It is done in two successive 4-hour laboratory sessions. While the hydrolysis of the wines is in progress, the students prepare the calibration curves with standard solutions of flavonoids and calculate the regression lines and correlation coefficients. During the second session they analyze the hydrolyzed wine samples and calculate the concentrations of the flavonoids using the calibration curves obtained earlier. This laboratory work is very attractive to students because they deal with a common daily product whose components are reported to have preventive and therapeutic effects. Furthermore, students can execute preparative work and apply a more elaborate technique that is nowadays an indispensable tool in instrumental analysis.
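The calibration step the students carry out can be sketched in Python as an ordinary linear regression of peak area on standard concentration, followed by inversion for the wine sample; the concentrations and areas below are invented teaching values.

import numpy as np

# Standard solutions of a flavonoid (mg/L) and measured HPLC peak areas (invented).
conc_std = np.array([1.0, 2.5, 5.0, 10.0, 20.0])
area_std = np.array([15200, 37800, 76500, 151000, 302500])

# Regression line area = slope * conc + intercept, plus the correlation coefficient.
slope, intercept = np.polyfit(conc_std, area_std, deg=1)
r = np.corrcoef(conc_std, area_std)[0, 1]
print(f"area = {slope:.0f} * conc + {intercept:.0f}, r = {r:.4f}")

# Quantify the hydrolysed wine sample from its peak area using the calibration curve.
area_sample = 98000
conc_sample = (area_sample - intercept) / slope
print(f"flavonoid concentration ~ {conc_sample:.2f} mg/L")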
Light curves of flat-spectrum radio sources (Jenness+, 2010)
NASA Astrophysics Data System (ADS)
Jenness, T.; Robson, E. I.; Stevens, J. A.
2010-05-01
Calibrated data for 143 flat-spectrum extragalactic radio sources are presented at a wavelength of 850 µm covering a 5-yr period from 2000 April. The data, obtained at the James Clerk Maxwell Telescope using the Submillimetre Common-User Bolometer Array (SCUBA) camera in pointing mode, were analysed using an automated pipeline process based on the Observatory Reduction and Acquisition Control - Data Reduction (ORAC-DR) system. This paper describes the techniques used to analyse and calibrate the data, and presents the data base of results along with a representative sample of the better-sampled light curves. A re-analysis of previously published data from 1997 to 2000 is also presented. The combined catalogue, comprising 10493 flux density measurements, provides a unique and valuable resource for studies of extragalactic radio sources. (2 data files).
Statistical behavior of ten million experimental detection limits
NASA Astrophysics Data System (ADS)
Voigtman, Edward; Abraham, Kevin T.
2011-02-01
Using a lab-constructed laser-excited fluorimeter, together with bootstrapping methodology, the authors have generated many millions of experimental linear calibration curves for the detection of rhodamine 6G tetrafluoroborate in ethanol solutions. The detection limits computed from them are in excellent agreement with both previously published theory and with comprehensive Monte Carlo computer simulations. Currie decision levels and Currie detection limits, each in the theoretical, chemical content domain, were found to be simply scaled reciprocals of the non-centrality parameter of the non-central t distribution that characterizes univariate linear calibration curves that have homoscedastic, additive Gaussian white noise. Accurate and precise estimates of the theoretical, content domain Currie detection limit for the experimental system, with 5% (each) probabilities of false positives and false negatives, are presented.
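For the special case of a known slope and known, homoscedastic Gaussian blank noise, Currie's decision level and detection limit in the content domain reduce to scaled blank standard deviations divided by the calibration slope; the Python sketch below uses that simplification with alpha = beta = 0.05, whereas the paper's non-central-t treatment covers the realistic case of estimated calibration parameters.

from scipy.stats import norm

def currie_limits(sigma_blank, slope, alpha=0.05, beta=0.05):
    # Currie decision level and detection limit in the content domain,
    # assuming a known slope and known homoscedastic Gaussian noise;
    # with estimated parameters, t-based (non-central t) expressions apply instead.
    z_alpha = norm.ppf(1.0 - alpha)
    z_beta = norm.ppf(1.0 - beta)
    decision_level = z_alpha * sigma_blank / slope              # x_C
    detection_limit = (z_alpha + z_beta) * sigma_blank / slope  # x_D
    return decision_level, detection_limit

# Hypothetical fluorimeter calibration: slope in signal units per nM, blank noise in signal units.
x_c, x_d = currie_limits(sigma_blank=0.012, slope=0.85)
print(f"decision level ~ {x_c:.4f} nM, detection limit ~ {x_d:.4f} nM")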
Ikeda, Kayo; Ikawa, Kazuro; Yokoshige, Satoko; Yoshikawa, Satoshi; Morikawa, Norifumi
2014-12-01
A simple and sensitive gas chromatography-electron ionization-mass spectrometry (GC-EI-MS) method using dried plasma spot testing cards was developed for determination of valproic acid and gabapentin concentrations in human plasma from patients receiving in-home medical care. We have proposed that a simple, easy and dry sampling method is suitable for in-home medical patients for therapeutic drug monitoring. Therefore, in the present study, we used recently developed commercially available easy handling cards: Whatman FTA DMPK-A and Bond Elut DMS. In-home medical care patients can collect plasma using these simple kits. The spots of plasma on the cards were extracted into methanol and then evaporated to dryness. The residues were trimethylsilylated using N-methyl-N-trimethylsilyltrifluoroacetamide. For GC-EI-MS analysis, the calibration curves on both cards were linear from 10 to 200 µg/mL for valproic acid, and from 0.5 to 10 µg/mL for gabapentin. Intra- and interday precisions in plasma were both ≤13.0% (coefficient of variation), and the accuracy was between 87.9 and 112% for both cards within the calibration curves. The limits of quantification were 10 µg/mL for valproic acid and 0.5 µg/mL for gabapentin on both cards. We believe that the present method will be useful for in-home medical care. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Chambers, J. R.; Grafton, S. B.; Lutze, F. H.
1981-01-01
The test capabilities of the Stability Wind Tunnel of the Virginia Polytechnic Institute and State University are described, and calibrations for curved and rolling flow techniques are given. Oscillatory snaking tests to determine pure yawing derivatives are considered. Representative aerodynamic data obtained for a current fighter configuration using the curved and rolling flow techniques are presented. The application of dynamic derivatives obtained in such tests to the analysis of airplane motions in general, and to high angle of attack flight conditions in particular, is discussed.
Utsumi, Takanobu; Oka, Ryo; Endo, Takumi; Yano, Masashi; Kamijima, Shuichi; Kamiya, Naoto; Fujimura, Masaaki; Sekita, Nobuyuki; Mikami, Kazuo; Hiruta, Nobuyuki; Suzuki, Hiroyoshi
2015-11-01
The aim of this study is to validate and compare the predictive accuracy of two nomograms predicting the probability of Gleason sum upgrading between biopsy and radical prostatectomy pathology among representative patients with prostate cancer. We previously developed a nomogram, as did Chun et al. In this validation study, patients originated from two centers: Toho University Sakura Medical Center (n = 214) and Chibaken Saiseikai Narashino Hospital (n = 216). We assessed predictive accuracy using area under the curve values and constructed calibration plots to examine the behavior of each nomogram at each institution. Both nomograms showed a high predictive accuracy in each institution, although the calibration plots of the two nomograms underestimated the actual probability at Toho University Sakura Medical Center. Clinicians need to use calibration plots for each institution to correctly understand how each nomogram behaves for their patients, even if each nomogram has good predictive accuracy. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Control rod calibration and reactivity effects at the IPEN/MB-01 reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinto, Letícia Negrão; Gonnelli, Eduardo; Santos, Adimir dos
2014-11-11
Research aimed at improving the performance of neutron transport codes and the quality of nuclear cross section databases is very important to increase the accuracy of simulations and the quality of the analysis and prediction of phenomena in the nuclear field. In this context, relevant experimental data such as reactivity worth measurements are needed. Control rods may be made of several neutron absorbing materials that are used to adjust the reactivity of the core. For reactor operation, these experimental data are also extremely important: with them it is possible to estimate the reactivity worth associated with the movement of the control rod, understand the reactor response at each rod position, and operate the reactor safely. This work presents a temperature correction approach for the control rod calibration problem. It presents the control rod calibration data of the IPEN/MB-01 reactor, the integral and differential reactivity curves, and a theoretical analysis performed with the MCNP-5 reactor physics code, developed and maintained by Los Alamos National Laboratory, using the ENDF/B-VII.0 nuclear data library.
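The integral rod worth curve is the running integral of the differential worth over rod position; the Python sketch below performs that integration by the trapezoidal rule on an invented differential-worth profile, not on the IPEN/MB-01 measurements.

import numpy as np

# Hypothetical differential rod worth (pcm per % withdrawn) vs. rod position (% withdrawn).
position = np.linspace(0.0, 100.0, 21)
differential = 30.0 * np.sin(np.pi * position / 100.0) ** 2   # bell-shaped, peaks mid-stroke

# Integral worth curve: cumulative trapezoidal integral of the differential curve.
integral = np.concatenate(([0.0], np.cumsum(
    0.5 * (differential[1:] + differential[:-1]) * np.diff(position))))

print(f"total rod worth ~ {integral[-1]:.0f} pcm")
print(f"worth inserted between 40% and 60% withdrawn ~ "
      f"{integral[position.searchsorted(60.0)] - integral[position.searchsorted(40.0)]:.0f} pcm")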
Ajtony, Zsolt; Laczai, Nikoletta; Dravecz, Gabriella; Szoboszlai, Norbert; Marosi, Áron; Marlok, Bence; Streli, Christina; Bencs, László
2016-12-15
HR-CS-GFAAS methods were developed for the fast determination of Cu in domestic and commercially available Hungarian distilled alcoholic beverages (called pálinka), in order to decide if their Cu content exceeds the permissible limit, as legislated by the WHO. A few microliters of sample were dispensed directly into the atomizer. Graphite furnace heating programs, effects/amounts of the Pd modifier, alternative wavelengths (e.g., Cu I 249.2146 nm), external calibration and internal standardization methods were studied. Applying a fast graphite furnace heating program without any chemical modifier, the Cu content of a sample could be quantitated within 1.5 min. The detection limit of the method is 0.03 mg/L. Calibration curves are linear up to 10-15 mg/L Cu. Spike recoveries ranged from 89% to 119% with an average of 100.9±8.5%. Internal calibration could be applied with the assistance of Cr, Fe, and/or Rh standards. The accuracy of the GFAAS results was verified by TXRF analyses. Copyright © 2016 Elsevier Ltd. All rights reserved.
Pilkonis, Paul A.; Yu, Lan; Dodds, Nathan E.; Johnston, Kelly L.; Lawrence, Suzanne; Hilton, Thomas F.; Daley, Dennis C.; Patkar, Ashwin A.; McCarty, Dennis
2015-01-01
Background Two item banks for substance use were developed as part of the Patient-Reported Outcomes Measurement Information System (PROMIS®): severity of substance use and positive appeal of substance use. Methods Qualitative item analysis (including focus groups, cognitive interviewing, expert review, and item revision) reduced an initial pool of more than 5,300 items for substance use to 119 items included in field testing. Items were written in a first-person, past-tense format, with 5 response options reflecting frequency or severity. Both 30-day and 3-month time frames were tested. The calibration sample of 1,336 respondents included 875 individuals from the general population (ascertained through an internet panel) and 461 patients from addiction treatment centers participating in the National Drug Abuse Treatment Clinical Trials Network. Results Final banks of 37 and 18 items were calibrated for severity of substance use and positive appeal of substance use, respectively, using the two-parameter graded response model from item response theory (IRT). Initial calibrations were similar for the 30-day and 3-month time frames, and final calibrations used data combined across the time frames, making the items applicable with either interval. Seven-item static short forms were also developed from each item bank. Conclusions Test information curves showed that the PROMIS item banks provided substantial information in a broad range of severity, making them suitable for treatment, observational, and epidemiological research in both clinical and community settings. PMID:26423364
Development of a multichannel hyperspectral imaging probe for food property and quality assessment
NASA Astrophysics Data System (ADS)
Huang, Yuping; Lu, Renfu; Chen, Kunjie
2017-05-01
This paper reports on the development, calibration and evaluation of a new multipurpose, multichannel hyperspectral imaging probe for property and quality assessment of food products. The new multichannel probe consists of a 910 μm fiber as a point light source and 30 light receiving fibers of three sizes (i.e., 50 μm, 105 μm and 200 μm) arranged in a special pattern to enhance signal acquisitions over the spatial distances of up to 36 mm. The multichannel probe allows simultaneous acquisition of 30 spatially-resolved reflectance spectra of food samples with either flat or curved surface over the spectral region of 550-1,650 nm. The measured reflectance spectra can be used for estimating the optical scattering and absorption properties of food samples, as well as for assessing the tissues of the samples at different depths. Several calibration procedures that are unique to this probe were carried out; they included linearity calibrations for each channel of the hyperspectral imaging system to ensure consistent linear responses of individual channels, and spectral response calibrations of individual channels for each fiber size group and between the three groups of different size fibers. Finally, applications of this new multichannel probe were demonstrated through the optical property measurement of liquid model samples and tomatoes of different maturity levels. The multichannel probe offers new capabilities for optical property measurement and quality detection of food and agricultural products.
Kropat, G; Baechler, S; Bailat, C; Barazza, F; Bochud, F; Damet, J; Meyer, N; Palacios Gruson, M; Butterweck, G
2015-11-01
Swiss national requirements for measuring radon gas exposures demand a lower detection limit of 50 kBq h m(-3), representing the Swiss concentration average of 70 Bq m(-3) over a 1-month period. A solid-state nuclear track detector (SSNTD) system (Politrack, Mi.am s.r.l., Italy) has been acquired to fulfil these requirements. This work was aimed at the calibration of the Politrack system with traceability to international standards and the development of a procedure to check the stability of the system. A total of 275 SSNTDs was exposed to 11 different radon exposures in the radon chamber of the Secondary Calibration Laboratory at the Paul Scherrer Institute, Switzerland. The exposures ranged from 50 to 15000 kBq h m(-3). For each exposure of 20 detectors, 5 SSNTDs were used to monitor possible background exposures during transport and storage. The response curve and the calibration factor of the whole system were determined using a Monte Carlo fitting procedure. A device to produce CR39 samples with a reference number of tracks using a (241)Am source was developed for checking the long-term stability of the Politrack system. The characteristic limits for the detection of a possible system drift were determined following ISO Standard 11929. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Germanium resistance thermometer calibration at superfluid helium temperatures
NASA Technical Reports Server (NTRS)
Mason, F. C.
1985-01-01
The rapid increase in resistance of high purity semi-conducting germanium with decreasing temperature in the superfluid helium range of temperatures makes this material highly adaptable as a very sensitive thermometer. Also, a germanium thermometer exhibits a highly reproducible resistance versus temperature characteristic curve upon cycling between liquid helium temperatures and room temperature. These two factors combine to make germanium thermometers ideally suited for measuring temperatures in many cryogenic studies at superfluid helium temperatures. One disadvantage, however, is the relatively high cost of calibrated germanium thermometers. In space helium cryogenic systems, many such thermometers are often required, leading to a high cost for calibrated thermometers. The construction of a thermometer calibration cryostat and probe that allows six germanium thermometers to be calibrated at one time, thus effecting substantial savings in the purchase of thermometers, is considered.
The Importance of Calibration in Clinical Psychology.
Lindhiem, Oliver; Petersen, Isaac T; Mentch, Lucas K; Youngstrom, Eric A
2018-02-01
Accuracy has several elements, not all of which have received equal attention in the field of clinical psychology. Calibration, the degree to which a probabilistic estimate of an event reflects the true underlying probability of the event, has largely been neglected in the field of clinical psychology in favor of other components of accuracy such as discrimination (e.g., sensitivity, specificity, area under the receiver operating characteristic curve). Although it is frequently overlooked, calibration is a critical component of accuracy with particular relevance for prognostic models and risk-assessment tools. With advances in personalized medicine and the increasing use of probabilistic (0% to 100%) estimates and predictions in mental health research, the need for careful attention to calibration has become increasingly important.
Stratification of the severity of critically ill patients with classification trees
2009-01-01
Background Development of three classification trees (CT) based on the CART (Classification and Regression Trees), CHAID (Chi-Square Automatic Interaction Detection) and C4.5 methodologies for the calculation of probability of hospital mortality; the comparison of the results with the APACHE II, SAPS II and MPM II-24 scores, and with a model based on multiple logistic regression (LR). Methods Retrospective study of 2864 patients. Random partition (70:30) into a Development Set (DS) n = 1808 and Validation Set (VS) n = 808. Their properties of discrimination are compared with the ROC curve (AUC CI 95%), Percent of correct classification (PCC CI 95%); and the calibration with the Calibration Curve and the Standardized Mortality Ratio (SMR CI 95%). Results CTs are produced with a different selection of variables and decision rules: CART (5 variables and 8 decision rules), CHAID (7 variables and 15 rules) and C4.5 (6 variables and 10 rules). The common variables were: inotropic therapy, Glasgow, age, (A-a)O2 gradient and antecedent of chronic illness. In VS: all the models achieved acceptable discrimination with AUC above 0.7. CT: CART (0.75(0.71-0.81)), CHAID (0.76(0.72-0.79)) and C4.5 (0.76(0.73-0.80)). PCC: CART (72(69-75)), CHAID (72(69-75)) and C4.5 (76(73-79)). Calibration (SMR) better in the CT: CART (1.04(0.95-1.31)), CHAID (1.06(0.97-1.15) and C4.5 (1.08(0.98-1.16)). Conclusion With different methodologies of CTs, trees are generated with different selection of variables and decision rules. The CTs are easy to interpret, and they stratify the risk of hospital mortality. The CTs should be taken into account for the classification of the prognosis of critically ill patients. PMID:20003229
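Illustrative sketch (not the authors' code) of the two validation metrics used above, discrimination via the area under the ROC curve and calibration via the standardized mortality ratio (SMR = observed / expected deaths), computed on hypothetical predicted hospital-mortality probabilities:

import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 1, 0, 0, 1, 1, 0, 1])                   # observed in-hospital deaths (hypothetical)
p_pred = np.array([0.1, 0.8, 0.3, 0.2, 0.6, 0.9, 0.4, 0.5])   # model-predicted probabilities

auc = roc_auc_score(y_true, p_pred)    # discrimination
smr = y_true.sum() / p_pred.sum()      # calibration: observed deaths / sum of expected probabilities

print(f"AUC = {auc:.2f}, SMR = {smr:.2f}")   # SMR near 1 indicates good calibration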
Zeleny, Reinhard; Harbeck, Stefan; Schimmel, Heinz
2009-01-09
A liquid chromatography-electrospray ionisation tandem mass spectrometry method for the simultaneous detection and quantitation of 5-nitroimidazole veterinary drugs in lyophilised pork meat, the chosen format of a candidate certified reference material, has been developed and validated. Six analytes have been included in the scope of validation, i.e. dimetridazole (DMZ), metronidazole (MNZ), ronidazole (RNZ), hydroxymetronidazole (MNZOH), hydroxyipronidazole (IPZOH), and 2-hydroxymethyl-1-methyl-5-nitroimidazole (HMMNI). The analytes were extracted from the sample with ethyl acetate, chromatographically separated on a C18 column, and finally identified and quantified by tandem mass spectrometry in the multiple reaction monitoring mode (MRM) using matrix-matched calibration and ²H₃-labelled analogues of the analytes (except for MNZOH, where [²H₃]MNZ was used). The method was validated in accordance with Commission Decision 2002/657/EC, by determining selectivity, linearity, matrix effect, apparent recovery, repeatability and intermediate precision, decision limits and detection capabilities, robustness of sample preparation method, and stability of extracts. Recovery at the 1 µg/kg level was at 100% (estimates in the range of 101-107%) for all analytes, and repeatabilities and intermediate precisions at this level were in the range of 4-12% and 2-9%, respectively. Linearity of calibration curves in the working range 0.5-10 µg/kg was confirmed, with r values typically >0.99. Decision limits (CCα) and detection capabilities (CCβ) according to ISO 11843-2 (calibration curve approach) were 0.29-0.44 and 0.36-0.54 µg/kg, respectively. The method reliably identifies and quantifies the selected nitroimidazoles in the reconstituted pork meat in the low and sub-µg/kg range and will be applied in an interlaboratory comparison for determining the mass fraction of the selected nitroimidazoles in the candidate reference material currently developed at IRMM.
Unthank, Michael D.; Newson, Jeremy K.; Williamson, Tanja N.; Nelson, Hugh L.
2012-01-01
Flow- and load-duration curves were constructed from the model outputs of the U.S. Geological Survey's Water Availability Tool for Environmental Resources (WATER) application for streams in Kentucky. The WATER application was designed to access multiple geospatial datasets to generate more than 60 years of statistically based streamflow data for Kentucky. The WATER application enables a user to graphically select a site on a stream and generate an estimated hydrograph and flow-duration curve for the watershed upstream of that point. The flow-duration curves are constructed by calculating the exceedance probability of the modeled daily streamflows. User-defined water-quality criteria and (or) sampling results can be loaded into the WATER application to construct load-duration curves that are based on the modeled streamflow results. Estimates of flow and streamflow statistics were derived from TOPographically Based Hydrological MODEL (TOPMODEL) simulations in the WATER application. A modified TOPMODEL code, SDP-TOPMODEL (Sinkhole Drainage Process-TOPMODEL), was used to simulate daily mean discharges over the period of record for 5 karst and 5 non-karst watersheds in Kentucky in order to verify the calibrated model. A statistical evaluation of the model's verification simulations shows that calibration criteria, established by previous WATER application reports, were met, thus ensuring the model's ability to provide acceptably accurate estimates of discharge at gaged and ungaged sites throughout Kentucky. Flow-duration curves are constructed in the WATER application by calculating the exceedance probability of the modeled daily flow values. The flow-duration intervals are expressed as a percentage, with zero corresponding to the highest stream discharge in the streamflow record. Load-duration curves are constructed by applying the loading equation (Load = Flow*Water-quality criterion) at each flow interval.
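A minimal sketch of the two constructions described above, assuming only an array of daily mean streamflows: the flow-duration curve ranks flows and computes exceedance probability, and the load-duration curve multiplies each flow interval by a user-defined water-quality criterion. Variable names and values are illustrative, not part of the WATER application.

import numpy as np

daily_flow = np.random.lognormal(mean=3.0, sigma=1.0, size=60 * 365)   # synthetic daily discharge
criterion = 200.0   # hypothetical water-quality criterion (unit conversion omitted for brevity)

flows = np.sort(daily_flow)[::-1]                                      # highest discharge first
exceedance = np.arange(1, len(flows) + 1) / (len(flows) + 1) * 100     # percent; ~0 at the highest flow

load_duration = flows * criterion                                      # Load = Flow * criterion at each interval

# Plotting (exceedance, flows) gives the flow-duration curve;
# plotting (exceedance, load_duration) gives the corresponding load-duration curve.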
High-frequency measurements of aeolian saltation flux: Field-based methodology and applications
NASA Astrophysics Data System (ADS)
Martin, Raleigh L.; Kok, Jasper F.; Hugenholtz, Chris H.; Barchyn, Thomas E.; Chamecki, Marcelo; Ellis, Jean T.
2018-02-01
Aeolian transport of sand and dust is driven by turbulent winds that fluctuate over a broad range of temporal and spatial scales. However, commonly used aeolian transport models do not explicitly account for such fluctuations, likely contributing to substantial discrepancies between models and measurements. Underlying this problem is the absence of accurate sand flux measurements at the short time scales at which wind speed fluctuates. Here, we draw on extensive field measurements of aeolian saltation to develop a methodology for generating high-frequency (up to 25 Hz) time series of total (vertically-integrated) saltation flux, namely by calibrating high-frequency (HF) particle counts to low-frequency (LF) flux measurements. The methodology follows four steps: (1) fit exponential curves to vertical profiles of saltation flux from LF saltation traps, (2) determine empirical calibration factors through comparison of LF exponential fits to HF number counts over concurrent time intervals, (3) apply these calibration factors to subsamples of the saltation count time series to obtain HF height-specific saltation fluxes, and (4) aggregate the calibrated HF height-specific saltation fluxes into estimates of total saltation fluxes. When coupled to high-frequency measurements of wind velocity, this methodology offers new opportunities for understanding how aeolian saltation dynamics respond to variability in driving winds over time scales from tens of milliseconds to days.
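Hedged sketch of the four-step calibration outlined above, with invented numbers: (1) fit an exponential to the low-frequency (LF) trap profile of height-specific flux, (2) derive a calibration factor relating vertically integrated LF flux to high-frequency (HF) particle counts over the same interval, (3) scale the HF count series by that factor, and (4) treat the result as the calibrated total saltation flux (height aggregation is collapsed here for brevity).

import numpy as np
from scipy.optimize import curve_fit

z = np.array([0.05, 0.10, 0.20, 0.30, 0.45])        # trap heights (m), hypothetical
q_lf = np.array([8.0, 5.5, 2.6, 1.2, 0.4])          # LF height-specific flux (g m^-2 s^-1), hypothetical

expo = lambda zz, q0, zq: q0 * np.exp(-zz / zq)     # step 1: exponential flux profile
(q0, zq), _ = curve_fit(expo, z, q_lf, p0=(10.0, 0.1))

counts_lf_interval = 1.5e4                           # HF counts summed over the LF interval (hypothetical)
flux_lf_interval = q0 * zq                           # vertically integrated LF flux for that interval
cal_factor = flux_lf_interval / counts_lf_interval   # step 2: flux per detected particle count

counts_hf = np.random.poisson(6, size=25 * 60)       # step 3: 25 Hz count series for one minute (synthetic)
flux_hf = cal_factor * counts_hf                     # step 4: calibrated HF total saltation flux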
New robust bilinear least squares method for the analysis of spectral-pH matrix data.
Goicoechea, Héctor C; Olivieri, Alejandro C
2005-07-01
A new second-order multivariate method has been developed for the analysis of spectral-pH matrix data, based on a bilinear least-squares (BLLS) model achieving the second-order advantage and handling multiple calibration standards. A simulated Monte Carlo study of synthetic absorbance-pH data allowed comparison of the newly proposed BLLS methodology with constrained parallel factor analysis (PARAFAC) and with the combination multivariate curve resolution-alternating least-squares (MCR-ALS) technique under different conditions of sample-to-sample pH mismatch and analyte-background ratio. The results indicate an improved prediction ability for the new method. Experimental data generated by measuring absorption spectra of several calibration standards of ascorbic acid and samples of orange juice were subjected to second-order calibration analysis with PARAFAC, MCR-ALS, and the new BLLS method. The results indicate that the latter method provides the best analytical results in regard to analyte recovery in samples of complex composition requiring strict adherence to the second-order advantage. Linear dependencies appear when multivariate data are produced by using the pH or a reaction time as one of the data dimensions, posing a challenge to classical multivariate calibration models. The presently discussed algorithm is useful for these latter systems.
An Accurate Temperature Correction Model for Thermocouple Hygrometers
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Hashmi, Syed F.; Dallas, William J.; Krupinski, Elizabeth A.; Rehm, Kelly; Fan, Jiahua
2010-08-01
Our laboratory has investigated the efficacy of a suite of color calibration and monitor profiling packages which employ a variety of color measurement sensors. Each of the methods computes gamma correction tables for the red, green and blue color channels of a monitor that attempt to: a) match a desired luminance range and tone reproduction curve; and b) maintain a target neutral point across the range of grey values. All of the methods examined here produce International Color Consortium (ICC) profiles that describe the color rendering capabilities of the monitor after calibration. Color profiles incorporate a transfer matrix that establishes the relationship between RGB driving levels and the International Commission on Illumination (CIE) XYZ (tristimulus) values of the resulting on-screen color; the matrix is developed by displaying color patches of known RGB values on the monitor and measuring the tristimulus values with a sensor. The number and chromatic distribution of color patches varies across methods and is usually not under user control. In this work we examine the effect of employing differing calibration and profiling methods on rendition of color images. A series of color patches encoded in sRGB color space were presented on the monitor using color-management software that utilized the ICC profile produced by each method. The patches were displayed on the calibrated monitor and measured with a Minolta CS200 colorimeter. Differences in intended and achieved luminance and chromaticity were computed using the CIE DE2000 color-difference metric, in which a value of ΔE = 1 is generally considered to be approximately one just noticeable difference (JND) in color. We observed between one and 17 JND's for individual colors, depending on calibration method and target. As an extension of this fundamental work, we further improved our calibration method by defining concrete calibration parameters for the display, using the NEC wide-gamut puck, and verifying that those calibration parameters were met with the help of a state-of-the-art spectroradiometer (PR670). As a result of adding the PR670, together with an in-house developed method of profiling and characterization, ΔE, the color difference, appears to have improved considerably.
The curvature of sensitometric curves for Kodak XV-2 film irradiated with photon and electron beams.
van Battum, L J; Huizenga, H
2006-07-01
Sensitometric curves of Kodak XV-2 film, obtained in a time period of ten years with various types of equipment, have been analyzed both for photon and electron beams. The sensitometric slope in the dataset varies by more than a factor of 2, which is attributed mainly to variations in developer conditions. In the literature, the single hit equation has been proposed as a model for the sensitometric curve, with the sensitivity and the maximum optical density as its parameters. In this work, the single hit equation has been translated into a polynomial-like function with the sensitometric slope and curvature as its parameters. The model has been applied to fit the sensitometric data. If the dataset is fitted for each single sensitometric curve separately, a large variation is observed for both fit parameters. When sensitometric curves are fitted simultaneously it appears that all curves can be fitted adequately with a sensitometric curvature that is related to the sensitometric slope. When fitting each curve separately, apparently measurement uncertainty hides this relation. This relation appears to be dependent only on the type of densitometer used. No significant differences between beam energies or beam modalities are observed. Using the intrinsic relation between slope and curvature in fitting sensitometric data, e.g., for pretreatment verification of intensity-modulated radiotherapy, will increase the accuracy of the sensitometric curve. A calibration at a single dose point, together with a predetermined densitometer-dependent parameter ODmax, will be adequate to find the actual relation between optical density and dose.
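A minimal sketch of fitting a single-hit form, OD(D) = OD_max * (1 - exp(-D / D0)), to net optical density versus dose; the low-dose slope is then OD_max / D0, and slope and curvature are tied to the same two parameters, which is the kind of relation exploited above. Dose and density values below are invented.

import numpy as np
from scipy.optimize import curve_fit

dose = np.array([0, 10, 20, 40, 80, 160, 320])                # cGy, hypothetical
od_net = np.array([0.00, 0.09, 0.17, 0.32, 0.56, 0.90, 1.25]) # net optical density, hypothetical

single_hit = lambda D, od_max, D0: od_max * (1.0 - np.exp(-D / D0))
(od_max, D0), _ = curve_fit(single_hit, dose, od_net, p0=(1.5, 200.0))

slope0 = od_max / D0        # sensitometric slope in the low-dose limit
print(f"OD_max = {od_max:.2f}, slope at D=0 = {slope0:.4f} per cGy")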
NASA Astrophysics Data System (ADS)
Chen, Chun-Chi; Lin, Shih-Hao; Lin, Yi
2014-06-01
This paper proposes a time-domain CMOS smart temperature sensor featuring on-chip curvature correction and one-point calibration support for thermal management systems. Time-domain inverter-based temperature sensors, which exhibit the advantages of low power and low cost, have been proposed for on-chip thermal monitoring. However, the curvature is large for the thermal transfer curve, which substantially affects the accuracy as the temperature range increases. Another problem is that the inverter is sensitive to process variations, resulting in difficulty for the sensors to achieve an acceptable accuracy for one-point calibration. To overcome these two problems, a temperature-dependent oscillator with curvature correction is proposed to increase the linearity of the oscillatory width, thereby resolving the drawback caused by a costly off-chip second-order master curve fitting. For one-point calibration support, an adjustable-gain time amplifier was adopted to eliminate the effect of process variations, with the assistance of a calibration circuit. The proposed circuit occupied a small area of 0.073 mm2 and was fabricated in a TSMC CMOS 0.35-μm 2P4M digital process. The linearization of the oscillator and the effect cancellation of process variations enabled the sensor, which featured a fixed resolution of 0.049 °C/LSB, to achieve an optimal inaccuracy of -0.8 °C to 1.2 °C after one-point calibration of 12 test chips from -40 °C to 120 °C. The power consumption was 35 μW at a sample rate of 10 samples/s.
Hickey, Graeme L.; Grant, Stuart W.; Murphy, Gavin J.; Bhabra, Moninder; Pagano, Domenico; McAllister, Katherine; Buchan, Iain; Bridgewater, Ben
2013-01-01
OBJECTIVES Progressive loss of calibration of the original EuroSCORE models has necessitated the introduction of the EuroSCORE II model. Poor model calibration has important implications for clinical decision-making and risk adjustment of governance analyses. The objective of this study was to explore the reasons for the calibration drift of the logistic EuroSCORE. METHODS Data from the Society for Cardiothoracic Surgery in Great Britain and Ireland database were analysed for procedures performed at all National Health Service and some private hospitals in England and Wales between April 2001 and March 2011. The primary outcome was in-hospital mortality. EuroSCORE risk factors, overall model calibration and discrimination were assessed over time. RESULTS A total of 317 292 procedures were included. Over the study period, mean age at surgery increased from 64.6 to 67.2 years. The proportion of procedures that were isolated coronary artery bypass grafts decreased from 67.5 to 51.2%. In-hospital mortality fell from 4.1 to 2.8%, but the mean logistic EuroSCORE increased from 5.6 to 7.6%. The logistic EuroSCORE remained a good discriminant throughout the study period (area under the receiver-operating characteristic curve between 0.79 and 0.85), but calibration (observed-to-expected mortality ratio) fell from 0.76 to 0.37. Inadequate adjustment for decreasing baseline risk affected calibration considerably. DISCUSSIONS Patient risk factors and case-mix in adult cardiac surgery change dynamically over time. Models like the EuroSCORE that are developed using a ‘snapshot’ of data in time do not account for this and can subsequently lose calibration. It is therefore important to regularly revalidate clinical prediction models. PMID:23152436
Color calibration and color-managed medical displays: does the calibration method matter?
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Rehm, Kelly; Silverstein, Louis D.; Dallas, William J.; Fan, Jiahua; Krupinski, Elizabeth A.
2010-02-01
Our laboratory has investigated the efficacy of a suite of color calibration and monitor profiling packages which employ a variety of color measurement sensors. Each of the methods computes gamma correction tables for the red, green and blue color channels of a monitor that attempt to: a) match a desired luminance range and tone reproduction curve; and b) maintain a target neutral point across the range of grey values. All of the methods examined here produce International Color Consortium (ICC) profiles that describe the color rendering capabilities of the monitor after calibration. Color profiles incorporate a transfer matrix that establishes the relationship between RGB driving levels and the International Commission on Illumination (CIE) XYZ (tristimulus) values of the resulting on-screen color; the matrix is developed by displaying color patches of known RGB values on the monitor and measuring the tristimulus values with a sensor. The number and chromatic distribution of color patches varies across methods and is usually not under user control. In this work we examine the effect of employing differing calibration and profiling methods on rendition of color images. A series of color patches encoded in sRGB color space were presented on the monitor using color-management software that utilized the ICC profile produced by each method. The patches were displayed on the calibrated monitor and measured with a Minolta CS200 colorimeter. Differences in intended and achieved luminance and chromaticity were computed using the CIE DE2000 color-difference metric, in which a value of ΔE = 1 is generally considered to be approximately one just noticeable difference (JND) in color. We observed between one and 17 JND's for individual colors, depending on calibration method and target.
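Illustrative sketch of quantifying intended-versus-measured color differences as described above. For brevity it uses the simpler CIE76 (Euclidean CIELAB) distance rather than the CIEDE2000 metric used in the study; both are expressed in roughly JND-scaled units. The tristimulus values and the D65 white point below are example inputs, not measurements from the paper.

import numpy as np

def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):
    """Convert CIE XYZ to CIELAB relative to a reference white (D65 by default)."""
    t = np.asarray(xyz, dtype=float) / np.asarray(white)
    eps = (6.0 / 29.0) ** 3
    f = np.where(t > eps, np.cbrt(t), t / (3 * (6.0 / 29.0) ** 2) + 4.0 / 29.0)
    L = 116.0 * f[1] - 16.0
    a = 500.0 * (f[0] - f[1])
    b = 200.0 * (f[1] - f[2])
    return np.array([L, a, b])

intended = xyz_to_lab([41.2, 21.3, 1.9])   # XYZ of the patch as intended (hypothetical)
measured = xyz_to_lab([39.8, 20.5, 2.4])   # XYZ measured with the colorimeter (hypothetical)

delta_e76 = np.linalg.norm(intended - measured)   # ~1 corresponds to roughly one JND
print(f"Delta E (CIE76) = {delta_e76:.1f}")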
NASA Astrophysics Data System (ADS)
Rivers, Thane D.
1992-06-01
An Automated Scanning Monochromator was developed using an Acton Research Corporation (ARC) monochromator, an Ealing photomultiplier tube, and a Macintosh PC in conjunction with LabVIEW software. The LabVIEW Virtual Instrument written to operate the ARC monochromator is a mouse-driven, user-friendly program developed for automated spectral data measurements. Resolution and sensitivity of the Automated Scanning Monochromator System were determined experimentally. The automated monochromator was then used for spectral measurements of a platinum lamp. Additionally, the reflectivity curve for a BaSO4-coated screen has been measured. Reflectivity measurements indicate a large discrepancy with expected results. Further analysis of the reflectivity experiment is required for conclusive results.
Wafa Chouaib; Peter V. Caldwell; Younes Alila
2018-01-01
This paper advances the physical understanding of the flow duration curve (FDC) regional variation. It provides a process-based analysis of the interaction between climate and landscape properties to explain disparities in FDC shapes. We used (i) long term measured flow and precipitation data over 73 catchments from the eastern US. (ii) We calibrated the...
NASA Astrophysics Data System (ADS)
Wright, David; Thyer, Mark; Westra, Seth
2015-04-01
Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostics tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically orientated diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression, with inherent assumptions about the data, and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this study establish the feasibility and importance of including influential point detection diagnostics as a standard tool in hydrological model calibration. They provide the hydrologist with important information on whether model calibration is susceptible to a small number of highly influential data points. This enables the hydrologist to make a more informed decision of whether to (1) remove/retain the calibration data; (2) adjust the calibration strategy and/or hydrological model to reduce the susceptibility of model predictions to a small number of influential observations.
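A small sketch of the "case deletion" diagnostic described above, applied to a hypothetical stage-discharge rating curve Q = a * (h - h0)^b: each gauging is removed in turn, the curve is refitted, and the resulting change in mean predicted flow is recorded. Data and parameter values are invented.

import numpy as np
from scipy.optimize import curve_fit

stage = np.array([0.4, 0.6, 0.9, 1.3, 1.8, 2.4, 3.1])     # stage (m), hypothetical gaugings
flow = np.array([0.8, 2.1, 5.4, 12.0, 24.5, 45.0, 80.0])   # discharge (m^3/s), hypothetical

rating = lambda h, a, h0, b: a * np.clip(h - h0, 1e-6, None) ** b
p_all, _ = curve_fit(rating, stage, flow, p0=(10.0, 0.2, 2.0), maxfev=10000)
mean_all = rating(stage, *p_all).mean()

for i in range(len(stage)):
    keep = np.arange(len(stage)) != i
    p_i, _ = curve_fit(rating, stage[keep], flow[keep], p0=p_all, maxfev=10000)
    shift = 100.0 * (rating(stage, *p_i).mean() - mean_all) / mean_all
    print(f"deleting gauging {i}: mean-flow prediction changes by {shift:+.1f}%")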
AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves
NASA Astrophysics Data System (ADS)
Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.
2017-02-01
ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
The precision of a special purpose analog computer in clinical cardiac output determination.
Sullivan, F J; Mroz, E A; Miller, R E
1975-01-01
Three hundred dye-dilution curves taken during our first year of clinical experience with the Waters CO-4 cardiac output computer were analyzed to estimate the errors involved in its use. Provided that calibration is accurate and 5.0 mg of dye are injected for each curve, the percentage standard deviation of measurement using this computer is about 8.7%. Included in this are the errors inherent in the computer, errors due to baseline drift, errors in the injection of dye and actual variation of cardiac output over a series of successive determinations. The size of this error is comparable to that involved in manual calculation. The mean value of five successive curves will be within 10% of the real value in 99 cases out of 100. Advances in methodology and equipment are discussed which make calibration simpler and more accurate, and which should also improve the quality of computer determination. A list of suggestions is given to minimize the errors involved in the clinical use of this equipment. PMID:1089394
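A hedged sketch of the dye-dilution (Stewart-Hamilton) calculation that such cardiac output computers automate: cardiac output equals the injected dye mass divided by the area under the first-pass concentration-time curve. The curve below is synthetic, and the recirculation correction used in practice is omitted.

import numpy as np

dose_mg = 5.0                               # dye mass injected per curve, as in the study
t = np.arange(0, 30, 0.5)                   # time (s)
conc = 10.0 * (t / 6.0) * np.exp(-t / 6.0)  # mg/L, synthetic first-pass concentration curve

area = np.sum((conc[1:] + conc[:-1]) / 2.0 * np.diff(t))   # trapezoidal area, mg*s/L
co_l_per_min = dose_mg * 60.0 / area                        # Stewart-Hamilton relation

print(f"cardiac output ~ {co_l_per_min:.1f} L/min")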
High Performance Liquid Chromatography of Vitamin A: A Quantitative Determination.
ERIC Educational Resources Information Center
Bohman, Ove; And Others
1982-01-01
Experimental procedures are provided for the quantitative determination of Vitamin A (retinol) in food products by analytical liquid chromatography. Standard addition and calibration curve extraction methods are outlined. (SK)
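Hedged sketch of the standard-addition calculation referenced above: known spikes of retinol are added to aliquots of the sample, the detector response is regressed on the added concentration, and the original concentration is the magnitude of the x-intercept. The numbers are illustrative only.

import numpy as np

added = np.array([0.0, 1.0, 2.0, 3.0, 4.0])               # retinol added (µg/mL), hypothetical
peak_area = np.array([120.0, 205.0, 288.0, 371.0, 455.0])  # chromatographic peak areas, hypothetical

slope, intercept = np.polyfit(added, peak_area, 1)
c_sample = intercept / slope                                # concentration in the unspiked aliquot
print(f"retinol in sample ~ {c_sample:.2f} µg/mL")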
Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; ...
2017-09-07
In this paper, we demonstrate a statistical procedure for learning a high-order eddy viscosity model (EVM) from experimental data and using it to improve the predictive skill of a Reynolds-averaged Navier–Stokes (RANS) simulator. The method is tested in a three-dimensional (3D), transonic jet-in-crossflow (JIC) configuration. The process starts with a cubic eddy viscosity model (CEVM) developed for incompressible flows. It is fitted to limited experimental JIC data using shrinkage regression. The shrinkage process removes all the terms from the model, except an intercept, a linear term, and a quadratic one involving the square of the vorticity. The shrunk eddy viscosity model is implemented in an RANS simulator and calibrated, using vorticity measurements, to infer three parameters. The calibration is Bayesian and is solved using a Markov chain Monte Carlo (MCMC) method. A 3D probability density distribution for the inferred parameters is constructed, thus quantifying the uncertainty in the estimate. The phenomenal cost of using a 3D flow simulator inside an MCMC loop is mitigated by using surrogate models (“curve-fits”). A support vector machine classifier (SVMC) is used to impose our prior belief regarding parameter values, specifically to exclude nonphysical parameter combinations. The calibrated model is compared, in terms of its predictive skill, to simulations using uncalibrated linear and CEVMs. Finally, we find that the calibrated model, with one quadratic term, is more accurate than the uncalibrated simulator. The model is also checked at a flow condition at which the model was not calibrated.
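Illustrative sketch (not the authors' code) of the shrinkage step described above: a lasso regression drives most coefficients of a set of candidate eddy-viscosity terms to zero, leaving a sparse model analogous to the intercept, linear, and vorticity-squared terms retained in the paper. The feature matrix and target here are synthetic stand-ins.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 200
features = rng.normal(size=(n, 6))   # candidate tensor-basis terms (synthetic)
target = 1.0 + 2.5 * features[:, 0] + 0.8 * features[:, 3] ** 2 + 0.1 * rng.normal(size=n)

X = np.column_stack([features, features[:, 3] ** 2])   # append a quadratic "vorticity^2"-like term
model = Lasso(alpha=0.05).fit(X, target)

kept = np.flatnonzero(np.abs(model.coef_) > 1e-3)       # terms surviving the shrinkage
print("intercept:", round(model.intercept_, 2), "surviving terms:", kept)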
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan
In this paper, we demonstrate a statistical procedure for learning a high-order eddy viscosity model (EVM) from experimental data and using it to improve the predictive skill of a Reynolds-averaged Navier–Stokes (RANS) simulator. The method is tested in a three-dimensional (3D), transonic jet-in-crossflow (JIC) configuration. The process starts with a cubic eddy viscosity model (CEVM) developed for incompressible flows. It is fitted to limited experimental JIC data using shrinkage regression. The shrinkage process removes all the terms from the model, except an intercept, a linear term, and a quadratic one involving the square of the vorticity. The shrunk eddy viscosity model is implemented in an RANS simulator and calibrated, using vorticity measurements, to infer three parameters. The calibration is Bayesian and is solved using a Markov chain Monte Carlo (MCMC) method. A 3D probability density distribution for the inferred parameters is constructed, thus quantifying the uncertainty in the estimate. The phenomenal cost of using a 3D flow simulator inside an MCMC loop is mitigated by using surrogate models (“curve-fits”). A support vector machine classifier (SVMC) is used to impose our prior belief regarding parameter values, specifically to exclude nonphysical parameter combinations. The calibrated model is compared, in terms of its predictive skill, to simulations using uncalibrated linear and CEVMs. Finally, we find that the calibrated model, with one quadratic term, is more accurate than the uncalibrated simulator. The model is also checked at a flow condition at which the model was not calibrated.
Early Prediction of Intensive Care Unit-Acquired Weakness: A Multicenter External Validation Study.
Witteveen, Esther; Wieske, Luuk; Sommers, Juultje; Spijkstra, Jan-Jaap; de Waard, Monique C; Endeman, Henrik; Rijkenberg, Saskia; de Ruijter, Wouter; Sleeswijk, Mengalvio; Verhamme, Camiel; Schultz, Marcus J; van Schaik, Ivo N; Horn, Janneke
2018-01-01
An early diagnosis of intensive care unit-acquired weakness (ICU-AW) is often not possible due to impaired consciousness. To avoid a diagnostic delay, we previously developed a prediction model, based on single-center data from 212 patients (development cohort), to predict ICU-AW at 2 days after ICU admission. The objective of this study was to investigate the external validity of the original prediction model in a new, multicenter cohort and, if necessary, to update the model. Newly admitted ICU patients who were mechanically ventilated at 48 hours after ICU admission were included. Predictors were prospectively recorded, and the outcome ICU-AW was defined by an average Medical Research Council score <4. In the validation cohort, consisting of 349 patients, we analyzed performance of the original prediction model by assessment of calibration and discrimination. Additionally, we updated the model in this validation cohort. Finally, we evaluated a new prediction model based on all patients of the development and validation cohort. Of 349 analyzed patients in the validation cohort, 190 (54%) developed ICU-AW. Both model calibration and discrimination of the original model were poor in the validation cohort. The area under the receiver operating characteristics curve (AUC-ROC) was 0.60 (95% confidence interval [CI]: 0.54-0.66). Model updating methods improved calibration but not discrimination. The new prediction model, based on all patients of the development and validation cohort (total of 536 patients) had a fair discrimination, AUC-ROC: 0.70 (95% CI: 0.66-0.75). The previously developed prediction model for ICU-AW showed poor performance in a new independent multicenter validation cohort. Model updating methods improved calibration but not discrimination. The newly derived prediction model showed fair discrimination. This indicates that early prediction of ICU-AW is still challenging and needs further attention.
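Sketch of the external-validation checks described above, on hypothetical data: discrimination via the AUC-ROC, and calibration via the calibration slope and intercept (a logistic regression of the observed outcome on the logit of the predicted risk). Values and variable names are illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
p_pred = rng.uniform(0.05, 0.95, size=349)             # model-predicted ICU-AW risk (hypothetical)
y_obs = rng.binomial(1, np.clip(p_pred * 0.8, 0, 1))    # observed outcomes with synthetic miscalibration

auc = roc_auc_score(y_obs, p_pred)                      # discrimination

logit = np.log(p_pred / (1.0 - p_pred)).reshape(-1, 1)
recal = LogisticRegression().fit(logit, y_obs)          # recalibration regression
slope, intercept = recal.coef_[0][0], recal.intercept_[0]

# slope ~1 and intercept ~0 indicate good calibration; model-updating methods refit these.
print(f"AUC = {auc:.2f}, calibration slope = {slope:.2f}, intercept = {intercept:.2f}")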
DOE Office of Scientific and Technical Information (OSTI.GOV)
Navarro, Jorge
2013-12-01
The goal of the study presented is to determine the best available non-destructive technique for collecting validation data and for determining the burn-up and cooling time of fuel elements onsite at the Advanced Test Reactor (ATR) canal. This study makes a recommendation on the viability of implementing a permanent fuel scanning system at the ATR canal and leads to the full design of a permanent fuel scan system. The study consisted at first in determining whether it was possible, and which equipment was necessary, to collect useful spectra from ATR fuel elements at the canal adjacent to the reactor. Once it was established that useful spectra can be obtained at the ATR canal, the next step was to determine which detector and which configuration was better suited to predict burnup and cooling time of fuel elements non-destructively. Three different detectors, High Purity Germanium (HPGe), Lanthanum Bromide (LaBr3), and High Pressure Xenon (HPXe), in two system configurations, above and below the water pool, were used during the study. The data collected and analyzed were used to create burnup and cooling time calibration prediction curves for ATR fuel. The next stage of the study was to determine which of the three detectors tested was better suited for the permanent system. From the spectra taken and the calibration curves obtained, it was determined that although the HPGe detector yielded better results, a detector that could better withstand the harsh environment of the ATR canal was needed. The in-situ nature of the measurements required a rugged, low-maintenance, and easy-to-control fuel scanning system. Based on the ATR canal feasibility measurements and calibration results, it was determined that the LaBr3 detector was the best alternative for canal in-situ measurements; however, in order to enhance the quality of the spectra collected using this scintillator, a deconvolution method was developed. Following the development of the deconvolution method for ATR applications, the technique was tested using one-isotope, multi-isotope and simulated fuel sources. Burnup calibrations were performed using convoluted and deconvoluted data. The calibration results showed that burnup prediction by this method improves using deconvolution. The final stage of the deconvolution method development was to perform an irradiation experiment in order to create a surrogate fuel source to test the deconvolution method using experimental data. A conceptual design of the fuel scan system is the path forward, using the rugged LaBr3 detector in an above-the-water configuration with deconvolution algorithms.
PAPER-CHROMATOGRAM MEASUREMENT OF SUBSTANCES LABELLED WITH H³ (in German)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenzel, M.
1961-03-01
Compounds labelled with H³ can be detected with a paper chromatogram using a methane flow counter with a count yield of 1%. The yield can be estimated from the beta maximum energy. A new double counter was developed which increases the count yield to 2% and also considerably decreases the margin of error. Calibration curves with leucine and glucosamine show satisfactory linearity between measured and applied activity in the range from 4 to 50 × 10⁻³ μc of H³. (auth)
NASA Astrophysics Data System (ADS)
Hardee, John R.; Long, John; Otts, Julie
2002-05-01
A senior-level undergraduate laboratory experiment that demonstrates the use of solid-phase microextraction (SPME) and capillary gas chromatography-mass spectrometry (GC-MS) was developed for the quantitative determination of bromoform in swimming pool water. Bromoform was extracted by SPME from the headspace of vials containing sodium chloride-saturated swimming pool water. Bromoform concentrations were determined from comparisons of peak areas on a student-generated calibration curve. Students compared results to OSHA water and air exposure limits for bromoform.
Neural network modeling for surgical decisions on traumatic brain injury patients.
Li, Y C; Liu, L; Chiu, W T; Jian, W S
2000-01-01
Computerized medical decision support systems have been a major research topic in recent years. Intelligent computer programs were implemented to aid physicians and other medical professionals in making difficult medical decisions. This report compares three different mathematical models for building a traumatic brain injury (TBI) medical decision support system (MDSS). These models were developed based on a large TBI patient database. This MDSS accepts a set of patient data such as the types of skull fracture, Glasgow Coma Scale (GCS), episode of convulsion, and returns the chance that a neurosurgeon would recommend an open-skull surgery for this patient. The three mathematical models described in this report include a logistic regression model, a multi-layer perceptron (MLP) neural network and a radial-basis-function (RBF) neural network. Of the 12,640 patients selected from the database, a randomly drawn 9,480 cases were used as the training group to develop and train our models. The other 3,160 cases formed the validation group, which we used to evaluate the performance of these models. We used sensitivity, specificity, areas under the receiver-operating characteristic (ROC) curve and calibration curves as indicators of how accurate these models are in predicting a neurosurgeon's decision on open-skull surgery. The results showed that, assuming equal importance of sensitivity and specificity, the logistic regression model had a (sensitivity, specificity) of (73%, 68%), compared to (80%, 80%) from the RBF model and (88%, 80%) from the MLP model. The resultant areas under the ROC curve for logistic regression, RBF and MLP neural networks are 0.761, 0.880 and 0.897, respectively (P < 0.05). Among these models, the logistic regression has noticeably poorer calibration. This study demonstrated the feasibility of applying neural networks as the mechanism for TBI decision support systems based on clinical databases. The results also suggest that neural networks may be a better solution for complex, non-linear medical decision support systems than conventional statistical techniques such as logistic regression.
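Minimal sketch (not the authors' code) of comparing a logistic regression and an MLP classifier on the same prediction task, scored by area under the ROC curve as in the study. The features are randomly generated stand-ins for the TBI patient variables.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))   # stand-ins for GCS, fracture type, convulsion episode, ...
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=1.0, size=2000) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("logistic", LogisticRegression(max_iter=1000)),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")   # the non-linear model should score higher on this synthetic task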
A Bionic Polarization Navigation Sensor and Its Calibration Method.
Zhao, Huijie; Xu, Wujian
2016-08-03
The polarization patterns of skylight which arise due to the scattering of sunlight in the atmosphere can be used by many insects for deriving compass information. Inspired by insects' polarized light compass, scientists have developed a new kind of navigation method. One of the key techniques in this method is the polarimetric sensor which is used to acquire direction information from skylight. In this paper, a polarization navigation sensor is proposed which imitates the working principles of the polarization vision systems of insects. We introduce the optical design and mathematical model of the sensor. In addition, a calibration method based on variable substitution and non-linear curve fitting is proposed. The results obtained from the outdoor experiments provide support for the feasibility and precision of the sensor. The sensor's signal processing can be well described using our mathematical model. A relatively high degree of accuracy in polarization measurement can be obtained without any error compensation.
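A generic sketch of the kind of non-linear curve fit used in such calibrations (not the authors' exact model): channel intensity versus analyzer angle follows a Malus-type law I(θ) = a + b·cos(2(θ − φ)), and fitting recovers the polarization azimuth φ together with gain and offset terms. The data below are simulated.

import numpy as np
from scipy.optimize import curve_fit

theta = np.deg2rad(np.arange(0, 180, 10))        # analyzer angles (rad)
true_phi = np.deg2rad(37.0)
intensity = 1.0 + 0.6 * np.cos(2 * (theta - true_phi)) + 0.02 * np.random.randn(theta.size)

model = lambda th, a, b, phi: a + b * np.cos(2 * (th - phi))
(a, b, phi), _ = curve_fit(model, theta, intensity, p0=(1.0, 0.5, 0.0))

print(f"recovered azimuth ~ {np.degrees(phi) % 180:.1f} deg, degree of polarization ~ {b / a:.2f}")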
A Bionic Polarization Navigation Sensor and Its Calibration Method
Zhao, Huijie; Xu, Wujian
2016-01-01
The polarization patterns of skylight which arise due to the scattering of sunlight in the atmosphere can be used by many insects for deriving compass information. Inspired by insects’ polarized light compass, scientists have developed a new kind of navigation method. One of the key techniques in this method is the polarimetric sensor which is used to acquire direction information from skylight. In this paper, a polarization navigation sensor is proposed which imitates the working principles of the polarization vision systems of insects. We introduce the optical design and mathematical model of the sensor. In addition, a calibration method based on variable substitution and non-linear curve fitting is proposed. The results obtained from the outdoor experiments provide support for the feasibility and precision of the sensor. The sensor’s signal processing can be well described using our mathematical model. A relatively high degree of accuracy in polarization measurement can be obtained without any error compensation. PMID:27527171
Equilibrium Conformations of Concentric-tube Continuum Robots
Rucker, D. Caleb; Webster, Robert J.; Chirikjian, Gregory S.; Cowan, Noah J.
2013-01-01
Robots consisting of several concentric, preshaped, elastic tubes can work dexterously in narrow, constrained, and/or winding spaces, as are commonly found in minimally invasive surgery. Previous models of these “active cannulas” assume piecewise constant precurvature of component tubes and neglect torsion in curved sections of the device. In this paper we develop a new coordinate-free energy formulation that accounts for general preshaping of an arbitrary number of component tubes, and which explicitly includes both bending and torsion throughout the device. We show that previously reported models are special cases of our formulation, and then explore in detail the implications of torsional flexibility for the special case of two tubes. Experiments demonstrate that this framework is more descriptive of physical prototype behavior than previous models; it reduces model prediction error by 82% over the calibrated bending-only model, and 17% over the calibrated transmissional torsion model in a set of experiments. PMID:25125773
Vig, Attila; Igloi, Attila; Adanyi, Nora; Gyemant, Gyongyi; Csutoras, Csaba; Kiss, Attila
2010-10-01
An amperometric detector and an enzymatic reaction were combined for the measurement of L-ascorbic acid. The enzyme cell (containing immobilized ascorbate oxidase) was connected to a flow injection analyzer (FIA) system with a glassy carbon electrode as an amperometric detector. During optimization and measurements, two sample injectors were used, one before and one after the enzyme cell, thus eliminating the background interferences. Subtraction of the signal area given in the presence of enzyme from the one given in the absence of enzyme was applied for measuring analyte concentrations and calibration at 400 mV. The analysis capacity of the system is 25 samples/hour. The relative standard deviation (RSD) was below 5% (five replicates at a concentration of 400 μmol/L), linearity extended up to 400 μmol/L, the limit of detection (LOD) was 5 μmol/L, and the fit of the calibration curve in the 25-400 μmol/L range gave R² = 0.99.
A short static-pressure probe design for supersonic flow
NASA Technical Reports Server (NTRS)
Pinckney, S. Z.
1975-01-01
A static-pressure probe design concept was developed which has the static holes located close to the probe tip and is relatively insensitive to probe angle of attack and circumferential static hole location. Probes were constructed with 10 and 20 deg half-angle cone tips followed by a tangent conic curve section and a tangent cone section of 2, 3, or 3.5 deg, and were tested at Mach numbers of 2.5 and 4.0 and angles of attack up to 12 deg. Experimental results indicate that for stream Mach numbers of 2.5 and 4.0 and probe angle of attack within + or - 10 deg, values of stream static pressure can be determined from probe calibration to within about + or - 4 percent. Experimental results further indicated that if the probe is aligned within about 7 deg of the flow, the stream static pressure can be determined to within 2 percent from probe calibration.
Nuclear moisture-density evaluation.
DOT National Transportation Integrated Search
1964-11-01
This report presents the results of a series of calibration curves prepared by comparing the Troxler Nuclear Density-Moisture Gauge count ratios with conventional densities as obtained by the Soiltest Volumeter and the sand displacement methods....
Towards Robust Self-Calibration for Handheld 3d Line Laser Scanning
NASA Astrophysics Data System (ADS)
Bleier, M.; Nüchter, A.
2017-11-01
This paper studies self-calibration of a structured light system, which reconstructs 3D information using video from a static consumer camera and a handheld cross line laser projector. Intersections between the individual laser curves and geometric constraints on the relative position of the laser planes are exploited to achieve dense 3D reconstruction. This is possible without any prior knowledge of the movement of the projector. However, inaccurately extracted laser lines introduce noise in the detected intersection positions and therefore distort the reconstruction result. Furthermore, when scanning objects with specular reflections, such as glossy painted or metallic surfaces, the reflections are often extracted from the camera image as erroneous laser curves. In this paper we investigate how robust estimates of the parameters of the laser planes can be obtained despite noisy detections.
Calibration and validation of a general infiltration model
NASA Astrophysics Data System (ADS)
Mishra, Surendra Kumar; Ranjan Kumar, Shashi; Singh, Vijay P.
1999-08-01
A general infiltration model proposed by Singh and Yu (1990) was calibrated and validated using a split sampling approach for 191 sets of infiltration data observed in the states of Minnesota and Georgia in the USA. Of the five model parameters, fc (the final infiltration rate), So (the available storage space) and exponent n were found to be more predictable than the other two parameters: m (exponent) and a (proportionality factor). A critical examination of the general model revealed that it is related to the Soil Conservation Service (1956) curve number (SCS-CN) method and its parameter So is equivalent to the potential maximum retention of the SCS-CN method and is, in turn, found to be a function of soil sorptivity and hydraulic conductivity. The general model was found to describe infiltration rate with time varying curve number.
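A brief sketch of the SCS-CN relation to which the general model is shown to be linked: the potential maximum retention S follows from the curve number CN, and direct runoff Q follows from rainfall P (all depths in mm). This is the standard textbook form with example values, not the authors' calibration code.

def scs_cn_runoff(p_mm: float, cn: float, lambda_ia: float = 0.2) -> float:
    """Direct runoff (mm) from rainfall p_mm for curve number cn (SCS-CN method)."""
    s = 25400.0 / cn - 254.0     # potential maximum retention (mm)
    ia = lambda_ia * s           # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_cn_runoff(p_mm=75.0, cn=80))   # runoff estimate for a 75 mm storm on a CN 80 catchment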
Prototype Stilbene Neutron Collar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, M. K.; Shumaker, D.; Snyderman, N.
2016-10-26
A neutron collar using stilbene organic scintillator cells for fast neutron counting is described for the assay of fresh low enriched uranium (LEU) fuel assemblies. The prototype stilbene collar has a form factor similar to standard He-3 based collars and uses an AmLi interrogation neutron source. This report describes the simulation of list mode neutron correlation data on various fuel assemblies including some with neutron absorbers (burnable Gd poisons). Calibration curves (doubles vs 235U linear mass density) are presented for both thermal and fast (with Cd lining) modes of operation. It is shown that the stilbene collar meets or exceeds the current capabilities of He-3 based neutron collars. A self-consistent assay methodology, uniquely suited to the stilbene collar, using triples is described which complements traditional assay based on doubles calibration curves.
Measuring Breath Alcohol Concentrations with an FTIR Spectrometer
NASA Astrophysics Data System (ADS)
Kneisel, Adam; Bellamy, Michael K.
2003-12-01
An FTIR spectrometer equipped with a long-path gas cell can be used to measure breath alcohol concentrations in an instrumental analysis laboratory course. Students use aqueous ethanol solutions to make a calibration curve that relates absorbance signals of breath samples with blood alcohol concentrations. Students use their calibration curve to determine the time needed for their calculated blood alcohol levels to drop below the legal limit following use of a commercial mouthwash. They also calculate their blood alcohol levels immediately after chewing bread. The main goal of the experiment is to provide the students with an interesting laboratory exercise that teaches them about infrared spectrometers. While the results are meant to be only semiquantitative, they have compared well with results from other published studies. A reference is included that describes how to fabricate a long-path gas cell.
NASA Astrophysics Data System (ADS)
Wallis, Eric; Griffin, Todd M.; Popkie, Norm, Jr.; Eagan, Michael A.; McAtee, Robert F.; Vrazel, Danet; McKinly, Jim
2005-05-01
Ion Mobility Spectroscopy (IMS) is the most widespread detection technique in use by the military for the detection of chemical warfare agents, explosives, and other threat agents. Moreover, its role in homeland security and force protection has expanded due, in part, to its good sensitivity, low power, light weight, and reasonable cost. With the increased use of IMS systems as continuous monitors, it becomes necessary to develop tools and methodologies to ensure optimal performance over a wide range of conditions and extended periods of time. Namely, instrument calibration is needed to ensure proper sensitivity and to correct for matrix or environmental effects. We have developed methodologies to deal with the semi-quantitative nature of IMS that allow us to generate response curves providing a gauge of instrument performance and maintenance requirements. This instrumentation communicates with the IMS systems via a software interface that was developed in-house. The software measures system response, logs information to a database, and generates the response curves. This paper will discuss the instrumentation, software, data collected, and initial results from fielded systems.
Annually resolved atmospheric radiocarbon records reconstructed from tree-rings
NASA Astrophysics Data System (ADS)
Wacker, Lukas; Bleicher, Niels; Büntgen, Ulf; Friedrich, Michael; Friedrich, Ronny; Diego Galván, Juan; Hajdas, Irka; Jull, Anthony John; Kromer, Bernd; Miyake, Fusa; Nievergelt, Daniel; Reinig, Frederick; Sookdeo, Adam; Synal, Hans-Arno; Tegel, Willy; Wesphal, Torsten
2017-04-01
The IntCal13 calibration curve is mainly based on data measured by decay counting with a resolution of 10 years. Thus high frequency changes like the 11-year solar cycles or cosmic ray events [1] are not visible, or at least not to their full extent. New accelerator mass spectrometry (AMS) systems today are capable of measuring at least as precisely as decay counters [2], with the advantage of using 1000 times less material. The low amount of material required enables more efficient sample preparation. Thus, an annually resolved re-measurement of the tree-ring based calibration curve can now be envisioned. We will demonstrate with several examples the multitude of benefits resulting from annually resolved radiocarbon records from tree-rings. They will not only allow for more precise radiocarbon dating but also contain valuable new astrophysical information. The examples shown will additionally indicate that it can be critical to compare AMS measurements with a calibration curve that is mainly based on decay counting. We often see small offsets between the two measurement techniques, while the reason is yet unknown. [1] Miyake F, Nagaya K, Masuda K, Nakamura T. 2012. A signature of cosmic-ray increase in AD 774-775 from tree rings in Japan. Nature 486(7402):240-2. [2] Wacker L, Bonani G, Friedrich M, Hajdas I, Kromer B, Nemec M, Ruff M, Suter M, Synal H-A, Vockenhuber C. 2010. MICADAS: Routine and high-precision radiocarbon dating. Radiocarbon 52(2):252-62.
Photometric Calibration of Consumer Video Cameras
NASA Technical Reports Server (NTRS)
Suggs, Robert; Swift, Wesley, Jr.
2007-01-01
Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
Shen, P; Zhao, J; Sun, G; Chen, N; Zhang, X; Gui, H; Yang, Y; Liu, J; Shu, K; Wang, Z; Zeng, H
2017-05-01
The aim of this study was to develop nomograms for predicting prostate cancer and its zonal location using prostate-specific antigen density, prostate volume, and their zone-adjusted derivatives. A total of 928 consecutive patients with prostate-specific antigen (PSA) less than 20.0 ng/mL, who underwent transrectal ultrasound-guided transperineal 12-core prostate biopsy at West China Hospital between 2011 and 2014, were retrospectively enrolled. The patients were randomly split into a training cohort (70%, n = 650) and a validation cohort (30%, n = 278). The prediction models and the associated nomograms were built using the training cohort, while the validations of the models were conducted using the validation cohort. Univariate and multivariate logistic regression analyses were performed. Then, new nomograms were generated based on the multivariate regression coefficients. The discrimination power and calibration of these nomograms were validated using the area under the ROC curve (AUC) and the calibration curve. The potential clinical effects of these models were also tested using decision curve analysis. In total, 285 (30.7%) patients were diagnosed with prostate cancer. Among them, 131 (14.1%) and 269 (29.0%) had transition zone prostate cancer and peripheral zone prostate cancer, respectively. Each of the zone-adjusted derivative-based nomograms had an AUC greater than 0.75. All nomograms showed good calibration and much better net benefit than the default treat-all or treat-none scenarios in predicting prostate cancer in its different zones. Prostate-specific antigen density, prostate volume, and their zone-adjusted derivatives have important roles in detecting prostate cancer and its zonal location for patients with PSA 2.5-20.0 ng/mL. To the best of our knowledge, this is the first nomogram using these parameters to predict outcomes of 12-core prostate biopsy. These instruments can help clinicians to increase the accuracy of prostate cancer screening and to avoid unnecessary prostate biopsies. © 2017 American Society of Andrology and European Academy of Andrology.
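For readers who want to reproduce the general workflow, the following hedged Python sketch fits a logistic regression on simulated stand-in predictors, splits the data 70/30 as in the study, and reports the AUC and a calibration curve with scikit-learn; the variables and coefficients are invented and do not correspond to the published nomograms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)

# Synthetic stand-ins for PSA density, prostate volume and zone-adjusted
# derivatives (the real predictors are described in the abstract; the data
# here are simulated for illustration only).
n = 928
X = rng.normal(size=(n, 4))
logit = 0.9 * X[:, 0] - 0.6 * X[:, 1] + 0.4 * X[:, 2]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# 70/30 split into training and validation cohorts, as in the study design.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=1)

model = LogisticRegression().fit(X_tr, y_tr)
p_va = model.predict_proba(X_va)[:, 1]

# Discrimination (AUC) and calibration (observed vs. predicted frequencies).
print("AUC:", roc_auc_score(y_va, p_va))
obs, pred = calibration_curve(y_va, p_va, n_bins=10)
print(np.c_[pred, obs])
```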
NASA Astrophysics Data System (ADS)
Qi, Pan; Shao, Wenbin; Liao, Shusheng
2016-02-01
For quantitative defect detection research on heat transfer tubes in nuclear power plants (NPP), two parts of work are carried out, with cracks as the main research object. (1) Production optimization of the calibration tube. First, ASME, RSEM, and homemade crack calibration tubes are applied to quantitatively analyze defect depth on other designed crack test tubes, and the calibration tube that yields the more accurate quantitative results is identified. Based on that, a weighting analysis of the factors influencing quantitative crack-depth testing, such as crack orientation, length, and volume, can be undertaken, which will optimize the manufacturing technology of calibration tubes. (2) Quantitative optimization of crack depth. A neural network model with multiple calibration curves, adopted to optimize the measured depth of natural cracks generated in in-service tubes, shows preliminary ability to improve quantitative accuracy.
gPhoton: Time-tagged GALEX photon events analysis tools
NASA Astrophysics Data System (ADS)
Million, Chase C.; Fleming, S. W.; Shiao, B.; Loyd, P.; Seibert, M.; Smith, M.
2016-03-01
Written in Python, gPhoton calibrates and sky-projects the ~1.1 trillion ultraviolet photon events detected by the microchannel plates on the Galaxy Evolution Explorer Spacecraft (GALEX), archives these events in a publicly accessible database at the Mikulski Archive for Space Telescopes (MAST), and provides tools for working with the database to extract scientific results, particularly over short time domains. The software includes a re-implementation of core functionality of the GALEX mission calibration pipeline to produce photon list files from raw spacecraft data as well as a suite of command line tools to generate calibrated light curves, images, and movies from the MAST database.
Cai, Tommaso; Mazzoli, Sandra; Migno, Serena; Malossini, Gianni; Lanzafame, Paolo; Mereu, Liliana; Tateo, Saverio; Wagenlehner, Florian M E; Pickard, Robert S; Bartoletti, Riccardo
2014-09-01
To develop and externally validate a novel nomogram predicting recurrence risk probability at 12 months in women after an episode of urinary tract infection. The study included 768 women from Santa Maria Annunziata Hospital, Florence, Italy, affected by urinary tract infections from January 2005 to December 2009. Another 373 women with the same criteria enrolled at Santa Chiara Hospital, Trento, Italy, from January 2010 to June 2012 were used to externally validate and calibrate the nomogram. Univariate and multivariate Cox regression models tested the relationship between urinary tract infection recurrence risk and patient clinical and laboratory characteristics. The nomogram was evaluated by calculating concordance probabilities, as well as testing calibration of predicted urinary tract infection recurrence with observed urinary tract infections. Nomogram variables included: number of partners, bowel function, type of pathogens isolated (Gram-positive/negative), hormonal status, number of previous urinary tract infection recurrences and previous treatment of asymptomatic bacteriuria. Of the original development data, 261 out of 768 women presented at least one episode of recurrence of urinary tract infection (33.9%). The nomogram had a concordance index of 0.85. The nomogram predictions were well calibrated. This model showed high discrimination accuracy and favorable calibration characteristics. In the validation group (373 women), the overall c-index was 0.83 (P = 0.003, 95% confidence interval 0.51-0.99), whereas the area under the receiver operating characteristic curve was 0.85 (95% confidence interval 0.79-0.91). The present nomogram accurately predicts the recurrence risk of urinary tract infection at 12 months, and can assist in identifying women at high risk of symptomatic recurrence who may be suitable candidates for a prophylactic strategy. © 2014 The Japanese Urological Association.
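A minimal sketch of the same kind of analysis, assuming the lifelines package and simulated data in place of the clinical cohort: a Cox proportional hazards model is fitted and its concordance index computed, mirroring the c-index reported for the nomogram.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)

# Simulated development cohort: two illustrative predictors standing in for
# the nomogram variables (e.g. number of previous recurrences, hormonal
# status); time is months to UTI recurrence, event=1 if recurrence observed.
n = 768
x1 = rng.normal(size=n)
x2 = rng.binomial(1, 0.4, size=n)
hazard = np.exp(0.7 * x1 + 0.5 * x2)
time = rng.exponential(12.0 / hazard)
event = (time < 12.0).astype(int)          # administrative censoring at 12 months
time = np.minimum(time, 12.0)

df = pd.DataFrame({"x1": x1, "x2": x2, "time": time, "event": event})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")

# Concordance index (discrimination), analogous to the c-index of 0.85/0.83
# reported for the development and validation cohorts.
risk = cph.predict_partial_hazard(df)
print("c-index:", concordance_index(df["time"], -risk, df["event"]))
```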
Development of a prognostic model for predicting spontaneous singleton preterm birth.
Schaaf, Jelle M; Ravelli, Anita C J; Mol, Ben Willem J; Abu-Hanna, Ameen
2012-10-01
To develop and validate a prognostic model for prediction of spontaneous preterm birth. Prospective cohort study using data of the nationwide perinatal registry in The Netherlands. We studied 1,524,058 singleton pregnancies between 1999 and 2007. We developed a multiple logistic regression model to estimate the risk of spontaneous preterm birth based on maternal and pregnancy characteristics. We used bootstrapping techniques to internally validate our model. Discrimination (AUC), accuracy (Brier score) and calibration (calibration graphs and Hosmer-Lemeshow C-statistic) were used to assess the model's predictive performance. Our primary outcome measure was spontaneous preterm birth at <37 completed weeks. Spontaneous preterm birth occurred in 57,796 (3.8%) pregnancies. The final model included 13 variables for predicting preterm birth. The predicted probabilities ranged from 0.01 to 0.71 (IQR 0.02-0.04). The model had an area under the receiver operator characteristic curve (AUC) of 0.63 (95% CI 0.63-0.63), the Brier score was 0.04 (95% CI 0.04-0.04) and the Hosmer-Lemeshow C-statistic was significant (p<0.0001). The calibration graph showed overprediction at higher values of predicted probability. The positive predictive value was 26% (95% CI 20-33%) for the 0.4 probability cut-off point. The model's discrimination was fair and it had modest calibration. Previous preterm birth, drug abuse and vaginal bleeding in the first half of pregnancy were the most important predictors for spontaneous preterm birth. Although not applicable in clinical practice yet, this model is a next step towards early prediction of spontaneous preterm birth that enables caregivers to start preventive therapy in women at higher risk. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
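The internal validation step can be illustrated with a short, hedged Python sketch using Harrell-style bootstrap optimism correction on simulated data; the predictors and sample size are invented and the procedure is a generic stand-in for the bootstrapping described in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)

# Simulated cohort standing in for the registry data (illustration only).
n = 5000
X = rng.normal(size=(n, 5))
p = 1.0 / (1.0 + np.exp(-(-3.2 + 0.8 * X[:, 0] + 0.5 * X[:, 1])))
y = rng.binomial(1, p)

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print("Brier score:", brier_score_loss(y, model.predict_proba(X)[:, 1]))

# Harrell-style bootstrap: refit on each resample, compare the resample AUC
# with the AUC of that model on the original data; the mean difference is
# the optimism, subtracted from the apparent AUC.
optimism = []
for _ in range(100):
    idx = rng.integers(0, n, n)
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

print("optimism-corrected AUC:", apparent_auc - np.mean(optimism))
```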
An Accurate Projector Calibration Method Based on Polynomial Distortion Representation
Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua
2015-01-01
In structured light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on calibration plates and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by the curve fitting method. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method is able to avoid most of the disadvantages of traditional methods and achieves a higher accuracy. The proposed method is also practically applicable for evaluating the geometric optical performance of other optical projection systems. PMID:26492247
NASA Astrophysics Data System (ADS)
Eppeldauer, G. P.; Podobedov, V. B.; Cooksey, C. C.
2017-05-01
Calibration of the emitted radiation from UV sources peaking at 365 nm is necessary to verify the ASTM-required 1 mW/cm2 minimum irradiance in certain military material (ships, airplanes, etc.) tests. These UV "black lights" are applied for crack recognition using fluorescent liquid penetrant inspection. At present, these nondestructive tests are performed using Hg-lamps. Lack of a proper standard and the different spectral responsivities of the available UV meters cause significant measurement errors even if the same UV-365 source is measured. A pyroelectric radiometer standard with spectrally flat (constant) response in the UV-VIS range has been developed to solve the problem. The response curve of this standard, determined from spectral reflectance measurements, is converted into spectral irradiance responsivity with <0.5% (k=2) uncertainty as a result of using an absolute tie point from a Si-trap detector traceable to the primary standard cryogenic radiometer. The flat pyroelectric radiometer standard can be used to perform uniform integrated irradiance measurements from all kinds of UV sources (with different peaks and distributions) without using any source standard. Using this broadband calibration method, yearly spectral calibrations for the reference UV (LED) sources and irradiance meters are not needed. Field UV sources and meters can be calibrated against the pyroelectric radiometer standard for broadband (integrated) irradiance and integrated responsivity. Using the broadband measurement procedure, the UV measurements give uniform results with significantly decreased uncertainties.
Wang, Gang; Briskot, Till; Hahn, Tobias; Baumann, Pascal; Hubbuch, Jürgen
2017-03-03
Mechanistic modeling has been applied repeatedly and successfully in process development and control of protein chromatography. For each combination of adsorbate and adsorbent, the mechanistic models have to be calibrated. Some of the model parameters, such as system characteristics, can be determined reliably by applying well-established experimental methods, whereas others cannot be measured directly. In common practice of protein chromatography modeling, these parameters are identified by applying time-consuming methods such as frontal analysis combined with gradient experiments, curve-fitting, or the combined Yamamoto approach. For new components in the chromatographic system, these traditional calibration approaches have to be repeated. In the presented work, a novel method for the calibration of mechanistic models based on artificial neural network (ANN) modeling was applied. An in silico screening of possible model parameter combinations was performed to generate learning material for the ANN model. Once the ANN model was trained to recognize chromatograms and to respond with the corresponding model parameter set, it was used to calibrate the mechanistic model from measured chromatograms. The ANN model's capability of parameter estimation was tested by predicting gradient elution chromatograms. The time-consuming model parameter estimation process itself could be reduced to milliseconds. The functionality of the method was successfully demonstrated in a study with the calibration of the transport-dispersive model (TDM) and the stoichiometric displacement model (SDM) for a protein mixture. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
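A toy version of the ANN calibration idea is sketched below in Python: a simple forward model (a single Gaussian peak standing in for the transport-dispersive/SDM simulation) generates an in-silico screening set, an MLP is trained to map chromatograms back to model parameters, and the trained network then estimates parameters for a noisy "measured" chromatogram. All models and numbers are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)            # normalised elution time axis

def simulate_chromatogram(params):
    """Toy forward model: one Gaussian peak per parameter set.
    Stands in for the transport-dispersive / SDM simulation."""
    position, width = params
    return np.exp(-0.5 * ((t - position) / width) ** 2)

# In-silico screening: sample parameter combinations, simulate the
# corresponding chromatograms, and use them as ANN training material.
params = np.column_stack([rng.uniform(0.2, 0.8, 2000),
                          rng.uniform(0.02, 0.1, 2000)])
chromatograms = np.array([simulate_chromatogram(p) for p in params])

# Train the ANN to map a chromatogram back to the model parameters.
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
ann.fit(chromatograms, params)

# "Measured" chromatogram with unknown parameters -> near-instant estimate.
true_params = np.array([0.55, 0.05])
measured = simulate_chromatogram(true_params) + rng.normal(0, 0.01, t.size)
print("estimated:", ann.predict(measured.reshape(1, -1))[0], "true:", true_params)
```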
Salem, A A
2007-03-01
A new method for determining three phenoxy acids and one carbamate herbicide in water and soil samples using gas chromatography with mass spectrometric detection is described. Phenoxy acids are derivatized through a condensation reaction with a suitable aromatic amine. 1,1-Carbonyldiimidazole is used as a condensation reagent. Derivatization conditions are optimized with respect to the amount of analyte, amine, solvent, and derivatization reagent. The optimum derivatization yield is accomplished in acetonitrile. 4-Methoxyaniline is used as a derivatizing agent. The derivatives obtained are stable indefinitely. Enhancement in sensitivity is achieved by using the single-ion monitoring mass spectrometric mode. The effectiveness of the developed method is tested by determining the investigated compounds in water and soil samples. Analytes are concentrated from water samples using liquid-phase extraction (LPE) and solid-phase extraction (SPE). Soil samples are extracted using methanol. Detection limits of 1.00, 50.00, 100.00, and 1.00 ng/mL are obtained for 2-(1-methylethoxy)phenyl methylcarbamate (Baygon), 2-(3-chlorophenoxy)-propionic acid (Cloprop), 2,4,5-trichlorophenoxyacetic acid, and 4-(2,4-dichlorophenoxy)butyric acid, respectively. LPE for spiked water samples yields recoveries in the range of 60.6-95.7%, with relative standard deviation (RSD) values of 1.07-7.85%, using single-component calibration curves. Recoveries of 44.8-275.5%, with RSD values ranging from 1.43% to 8.61%, were obtained using mixed-component calibration curves. SPE from water samples and soil samples showed low recoveries, which is attributed to the weak sorption capabilities of soil and Al₂O₃.
Baena-Díez, José Miguel; Subirana, Isaac; Ramos, Rafael; Gómez de la Cámara, Agustín; Elosua, Roberto; Vila, Joan; Marín-Ibáñez, Alejandro; Guembe, María Jesús; Rigo, Fernando; Tormo-Díaz, María José; Moreno-Iribas, Conchi; Cabré, Joan Josep; Segura, Antonio; Lapetra, José; Quesada, Miquel; Medrano, María José; González-Diego, Paulino; Frontera, Guillem; Gavrila, Diana; Ardanaz, Eva; Basora, Josep; García, José María; García-Lareo, Manel; Gutiérrez-Fuentes, José Antonio; Mayoral, Eduardo; Sala, Joan; Dégano, Irene R; Francès, Albert; Castell, Conxa; Grau, María; Marrugat, Jaume
2018-04-01
To assess the validity of the original low-risk SCORE function without and with high-density lipoprotein cholesterol and of SCORE calibrated to the Spanish population. Pooled analysis with individual data from 12 Spanish population-based cohort studies. We included 30 919 individuals aged 40 to 64 years with no history of cardiovascular disease at baseline, who were followed up for 10 years for the causes of death included in the SCORE project. The validity of the risk functions was analyzed with the area under the ROC curve (discrimination) and the Hosmer-Lemeshow test (calibration), respectively. Follow-up comprised 286 105 person-years. Ten-year cardiovascular mortality was 0.6%. The ratio between estimated and observed cases was 9.1, 6.5, and 9.1 in men and 3.3, 1.3, and 1.9 in women for the original low-risk SCORE function without and with high-density lipoprotein cholesterol and the calibrated SCORE, respectively; the differences between predicted and observed mortality were statistically significant with the Hosmer-Lemeshow test (P < .001 in both sexes and with all functions). The area under the ROC curve with the original SCORE was 0.68 in men and 0.69 in women. All versions of the SCORE functions available in Spain significantly overestimate the cardiovascular mortality observed in the Spanish population. Despite the acceptable discrimination capacity, prediction of the number of fatal cardiovascular events (calibration) was significantly inaccurate. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
Calibration of a Fusion Experiment to Investigate the Nuclear Caloric Curve
NASA Astrophysics Data System (ADS)
Keeler, Ashleigh
2017-09-01
In order to investigate the nuclear equation of state (EoS), the relation between two thermodynamic quantities can be examined. The correlation between the temperature and excitation energy of a nucleus, also known as the caloric curve, has been previously observed in peripheral heavy-ion collisions to exhibit a dependence on the neutron-proton asymmetry. To further investigate this result, fusion reactions (78Kr + 12C and 86Kr + 12C) were measured; the beam energy was varied in the range 15-35 MeV/u in order to vary the excitation energy. The light charged particles (LCPs) evaporated from the compound nucleus were measured in the Si-CsI(Tl)/PD detector array FAUST (Forward Array Using Silicon Technology). The LCPs carry information about the temperature. The calibration of FAUST will be described in this presentation. The silicon detectors have resistive surfaces in perpendicular directions to allow position measurement of the LCPs to better than 200 μm. The resistive nature requires a position-dependent correction to the energy calibration to take full advantage of the energy resolution. The momentum is calculated from the energy of these particles and their position on the detectors. A parameterized formula based on the Bethe-Bloch equation was used to straighten the particle identification (PID) lines measured with the dE-E technique. The energy calibration of the CsI detectors is based on the silicon detector energy calibration and the PID. A precision slotted mask enables the relative positions of the detectors to be determined. DOE Grant: DE-FG02-93ER40773 and REU Grant: PHY - 1659847.
The effect of tropospheric fluctuations on the accuracy of water vapor radiometry
NASA Technical Reports Server (NTRS)
Wilcox, J. Z.
1992-01-01
Line-of-sight path delay calibration accuracies of 1 mm are needed to improve both angular and Doppler tracking capabilities. Fluctuations in the refractivity of tropospheric water vapor limit the present accuracies to about 1 nrad for the angular position and to a delay rate of 3 x 10^-13 sec/sec over a 100-sec time interval for Doppler tracking. This article describes progress in evaluating the limitations of the technique of water vapor radiometry at the 1-mm level. The two effects evaluated here are: (1) errors arising from tip-curve calibration of water vapor radiometers (WVRs) in the presence of tropospheric fluctuations and (2) errors due to the use of nonzero beamwidths for WVR horns. The error caused by tropospheric water vapor fluctuations during instrument calibration from a single tip curve is 0.26 percent in the estimated gain for a tip-curve duration of several minutes or less. This gain error causes a 3-mm bias and a 1-mm scale factor error in the estimated path delay at a 10-deg elevation per 1 g/cm^2 of zenith water vapor column density present in the troposphere during the astrometric observation. The error caused by WVR beam averaging of tropospheric fluctuations is 3 mm at a 10-deg elevation per 1 g/cm^2 of zenith water vapor (and is proportionally higher for higher water vapor content) for current WVR beamwidths (full width at half maximum of approximately 6 deg). This is a stochastic error which cannot be calibrated but which can be reduced to about half of its instantaneous value by time averaging the radio signal over several minutes. The results presented here suggest two improvements to WVR design: first, the gain of the instruments should be stabilized to 4 parts in 10^4 over a calibration period lasting 5 hours, and second, the WVR antenna beamwidth should be reduced to about 0.2 deg. This will reduce the error induced by water vapor fluctuations in the estimated path delays to less than 1 mm for the elevation range from zenith to 6 deg for most observation weather conditions.
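The tip-curve gain estimation can be illustrated with a simplified textbook model (not the article's exact formulation): the sketch below fits gain, receiver temperature, and zenith opacity to synthetic radiometer readings taken at several elevations, assuming an exponential atmospheric absorption law and an effective atmospheric temperature.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simplified tip-curve model (textbook form, not the article's exact
# formulation): the radiometer output V at elevation angle el is
#   V = g * (T_rec + T_atm * (1 - exp(-tau * A))),  A = 1 / sin(el),
# where g is the gain to be estimated, T_rec the receiver temperature,
# T_atm an effective atmospheric temperature and tau the zenith opacity.
T_ATM = 270.0  # K, assumed known here

def tip_model(airmass, gain, t_rec, tau):
    return gain * (t_rec + T_ATM * (1.0 - np.exp(-tau * airmass)))

# Synthetic tip curve: elevations 90..10 deg with a little measurement noise
# standing in for tropospheric water-vapor fluctuations.
rng = np.random.default_rng(0)
elev = np.deg2rad(np.array([90, 60, 40, 30, 20, 15, 10], float))
airmass = 1.0 / np.sin(elev)
true = dict(gain=1.02, t_rec=150.0, tau=0.05)
v = tip_model(airmass, **true) + rng.normal(0, 0.3, airmass.size)

popt, pcov = curve_fit(tip_model, airmass, v, p0=[1.0, 140.0, 0.04])
print("estimated gain:", popt[0], "fractional gain error:",
      abs(popt[0] - true["gain"]) / true["gain"])
```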
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Y. John
2016-06-15
Purpose: To obtain an improved precise gamma efficiency calibration curve of an HPGe (High Purity Germanium) detector with a new comprehensive approach. Methods: Both radioactive sources and Monte Carlo simulation (CYLTRAN) are used to determine the HPGe gamma efficiency for the energy range of 0–8 MeV. The HPGe is a GMX coaxial 280 cm³ N-type 70% gamma detector. Using the Momentum Achromat Recoil Spectrometer (MARS) at the K500 superconducting cyclotron of Texas A&M University, the radioactive nucleus ²⁴Al was produced and separated. This nucleus has positron decays followed by gamma transitions up to 8 MeV from ²⁴Mg excited states, which is used for the HPGe efficiency calibration. Results: With the ²⁴Al gamma energy spectrum up to 8 MeV, the efficiency for the 7.07 MeV gamma ray at 4.9 cm distance away from the radioactive source ²⁴Al was obtained at a value of 0.194(4)%, by carefully considering various factors such as positron annihilation, peak summing effect, beta detector efficiency and internal conversion effect. The Monte Carlo simulation (CYLTRAN) gave a value of 0.189%, which was in agreement with the experimental measurements. Applying this to different energy points, a precise efficiency calibration curve of the HPGe detector up to 7.07 MeV at 4.9 cm distance away from the source ²⁴Al was obtained. Using the same data analysis procedure, the efficiency for the 7.07 MeV gamma ray at 15.1 cm from the source ²⁴Al was obtained at a value of 0.0387(6)%. MC simulation gave a similar value of 0.0395%. This discrepancy led us to assign an uncertainty of 3% to the efficiency at 15.1 cm up to 7.07 MeV. The MC calculations also reproduced the intensity of the observed single- and double-escape peaks, provided that the effects of positron annihilation-in-flight were incorporated. Conclusion: The precision-improved gamma efficiency calibration curve provides more accurate radiation detection and dose calculation for cancer radiotherapy treatment.
VizieR Online Data Catalog: SNe II light curves & spectra from the CfA (Hicken+, 2017)
NASA Astrophysics Data System (ADS)
Hicken, M.; Friedman, A. S.; Blondin, S.; Challis, P.; Berlind, P.; Calkins, M.; Esquerdo, G.; Matheson, T.; Modjaz, M.; Rest, A.; Kirshner, R. P.
2018-01-01
Since all of the optical photometry reported here was produced as part of the CfA3 and CfA4 processing campaigns, see Hicken+ (2009, J/ApJ/700/331) and Hicken+ (2012, J/ApJS/200/12) for greater details on the instruments, observations, photometry pipeline, calibration, and host-galaxy subtraction used to create the CfA SN II light curves. (8 data files).
Revised landsat-5 thematic mapper radiometric calibration
Chander, G.; Markham, B.L.; Barsi, J.A.
2007-01-01
Effective April 2, 2007, the radiometric calibration of Landsat-5 (L5) Thematic Mapper (TM) data that are processed and distributed by the U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS) will be updated. The lifetime gain model that was implemented on May 5, 2003, for the reflective bands (1-5, 7) will be replaced by a new lifetime radiometric-calibration curve that is derived from the instrument's response to pseudoinvariant desert sites and from cross calibration with the Landsat-7 (L7) Enhanced TM Plus (ETM+). Although this calibration update applies to all archived and future L5 TM data, the principal improvements in the calibration are for the data acquired during the first eight years of the mission (1984-1991), where the changes in the instrument-gain values are as much as 15%. The radiometric scaling coefficients for bands 1 and 2 for approximately the first eight years of the mission have also been changed. Users will need to apply these new coefficients to convert the calibrated data product digital numbers to radiance. The scaling coefficients for the other bands have not changed.
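Applying the new coefficients amounts to a linear rescaling of the calibrated-product digital numbers; the short sketch below shows the generic DN-to-radiance conversion with placeholder gain and offset values (the actual revised L5 TM coefficients must be taken from the USGS documentation).

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Convert calibrated-product digital numbers to at-sensor spectral
    radiance, L = gain * DN + offset (units W m-2 sr-1 um-1).  The gain and
    offset values below are placeholders, not the actual revised L5 TM
    coefficients published by USGS."""
    return gain * np.asarray(dn, dtype=float) + offset

# Example for a single band with hypothetical rescaling coefficients.
dn = np.array([0, 64, 128, 255])
print(dn_to_radiance(dn, gain=0.76, offset=-1.5))
```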
NASA Astrophysics Data System (ADS)
Siwabessy, P. Justy W.; Tran, Maggie; Picard, Kim; Brooke, Brendan P.; Huang, Zhi; Smit, Neil; Williams, David K.; Nicholas, William A.; Nichol, Scott L.; Atkinson, Ian
2018-06-01
Spatial information on the distribution of seabed substrate types in high-use coastal areas is essential to support their effective management and environmental monitoring. For Darwin Harbour, a rapidly developing port in northern Australia, the distribution of hard substrate is poorly documented but known to influence the location and composition of important benthic biological communities (corals, sponges). In this study, we use angular backscatter response curves to model the distribution of hard seabed in the subtidal areas of Darwin Harbour. The angular backscatter response curve data were extracted from multibeam sonar data and analysed against backscatter intensity for sites observed from seabed video to be representative of "hard" seabed. Data from these sites were consolidated into an "average curve", which became a reference curve that was in turn compared to all other angular backscatter response curves using the Kolmogorov-Smirnov goodness-of-fit test. The output was used to generate interpolated spatial predictions of the probability of hard seabed (p-hard) and derived hard seabed parameters for the mapped area of Darwin Harbour. The results agree well with the ground-truth data, with an overall classification accuracy of 75% and an area under the curve of 0.79, and with modelled bed shear stress for the Harbour. Limitations of this technique are discussed with attention to discrepancies between the video and acoustic results, such as in areas where sediment forms a veneer over hard substrate.
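A rough sketch of the curve-comparison step, with synthetic angular backscatter curves and a simplified Kolmogorov-Smirnov-style distance (the published workflow compares the real response curves; here cumulative sums of toy curves are used purely for illustration):

```python
import numpy as np

def ks_distance(curve, reference):
    """Kolmogorov-Smirnov-style distance between an angular backscatter
    response curve and the 'hard seabed' reference curve.  Both are treated
    as distributions over incidence angle by normalising their cumulative
    sums (a simplification of the published workflow)."""
    c1 = np.cumsum(curve) / np.sum(curve)
    c2 = np.cumsum(reference) / np.sum(reference)
    return np.max(np.abs(c1 - c2))

angles = np.linspace(1, 60, 60)                    # incidence angles, degrees
reference = 30.0 - 0.15 * angles                   # mean curve from "hard" sites (synthetic)
soft_site = 22.0 - 0.25 * angles                   # a softer-seabed response (synthetic)

# Smaller distance -> more similar to the hard-seabed reference -> higher
# probability of hard seabed after rescaling distances across the survey.
print("hard-like site:", ks_distance(reference + 0.2, reference))
print("soft-like site:", ks_distance(soft_site, reference))
```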
Quantitative X-ray diffraction and fluorescence analysis of paint pigment systems : final report.
DOT National Transportation Integrated Search
1978-01-01
This study attempted to correlate measured X-ray intensities with concentrations of each member of paint pigment systems, thereby establishing calibration curves for the quantitative analyses of such systems.
NASA Astrophysics Data System (ADS)
de Jesus, Alexandre; Zmozinski, Ariane Vanessa; Damin, Isabel Cristina Ferreira; Silva, Márcia Messias; Vale, Maria Goreti Rodrigues
2012-05-01
In this work, a direct sampling graphite furnace atomic absorption spectrometry method has been developed for the determination of arsenic and cadmium in crude oil samples. The samples were weighed directly on the solid sampling platforms and introduced into the graphite tube for analysis. The chemical modifier used for both analytes was a mixture of 0.1% Pd + 0.06% Mg + 0.06% Triton X-100. Pyrolysis and atomization curves were obtained for both analytes using standards and samples. Calibration curves with aqueous standards could be used for both analytes. The limits of detection obtained were 5.1 μg kg⁻¹ for arsenic and 0.2 μg kg⁻¹ for cadmium, calculated for the maximum amount of sample that can be analyzed (8 mg and 10 mg for arsenic and cadmium, respectively). Relative standard deviations lower than 20% were obtained. For validation purposes, a calibration curve was constructed with the SRM 1634c and aqueous standards for arsenic, and the results obtained for several crude oil samples were in agreement according to a paired t-test. The result obtained for the determination of arsenic in the SRM against aqueous standards was also in agreement with the certified value. As there is no crude oil or similar reference material available with a certified value for cadmium, a digestion in an open vessel under reflux using a "cold finger" was adopted for validation purposes. The use of the paired t-test showed that the results obtained by direct sampling and digestion were in agreement at a 95% confidence level. Recovery tests were carried out with inorganic and organic standards and the results were between 88% and 109%. The proposed method is simple, fast and reliable, being appropriate for routine analysis.
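The calibration-against-aqueous-standards step and the detection-limit estimate can be sketched generically as follows; the absorbance values, masses, and blank readings are invented, and only the 3-sigma LOD convention and the linear fit mirror common GF AAS practice.

```python
import numpy as np

# Hypothetical aqueous-standard calibration for Cd by GF AAS: integrated
# absorbance (arbitrary units) vs. analyte mass (pg).  Values are invented
# for illustration; only the procedure mirrors the abstract.
mass_pg = np.array([0.0, 2.0, 5.0, 10.0, 20.0])
absorbance = np.array([0.002, 0.041, 0.100, 0.198, 0.401])

slope, intercept = np.polyfit(mass_pg, absorbance, 1)

# Limit of detection from the usual 3-sigma criterion: three times the
# standard deviation of repeated blank readings divided by the slope.
blank_readings = np.array([0.0021, 0.0018, 0.0024, 0.0019, 0.0023])
lod_pg = 3.0 * np.std(blank_readings, ddof=1) / slope
print("slope:", slope, "LOD (pg):", lod_pg)

# Quantify a sample from its absorbance via the calibration curve.
sample_abs = 0.150
print("sample mass (pg):", (sample_abs - intercept) / slope)
```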
Prospective comparison of severity scores for predicting mortality in community-acquired pneumonia.
Luque, Sonia; Gea, Joaquim; Saballs, Pere; Ferrández, Olivia; Berenguer, Nuria; Grau, Santiago
2012-06-01
Specific prognostic models for community-acquired pneumonia (CAP) have been developed to guide treatment decisions, such as the Pneumonia Severity Index (PSI) and the Confusion, Urea nitrogen, Respiratory rate, Blood pressure and age ≥ 65 years index (CURB-65). Additionally, general models are available, such as the Mortality Probability Model (MPM-II). So far, which score performs better in CAP remains controversial. The objective was to compare PSI, CURB-65 and the general model, MPM-II, for predicting 30-day mortality in patients admitted with CAP. Prospective observational study including all consecutive patients hospitalised with a confirmed diagnosis of CAP and treated according to the hospital guidelines. Comparison of the overall discriminatory power of the models was performed by calculating the area under the receiver operating characteristic curve (AUC ROC curve) and calibration through the goodness-of-fit test. One hundred and fifty-two patients were included (mean age 73.0 years; 69.1% male; 75.0% with more than one comorbid condition). Seventy-five percent of the patients were classified as high-risk subjects according to the PSI, versus 61.2% according to the CURB-65. The 30-day mortality rate was 11.8%. All three scores obtained acceptable and similar values of the AUC of the ROC curve for predicting mortality. Although all rules showed good calibration, it appeared to be better for CURB-65. CURB-65 also revealed the highest positive likelihood ratio. CURB-65 performs similarly to PSI or MPM-II for predicting 30-day mortality in patients with CAP. Consequently, this simple model can be regarded as a valid alternative to the more complex rules.
NASA Astrophysics Data System (ADS)
Bhattacharjee, S.; Dhar, S.; Acharyya, S. K.
2017-07-01
The primary and secondary stages of the uniaxial ratcheting curve for the C-Mn steel SA333 have been investigated. Stress controlled uniaxial ratcheting experiments were conducted with different mean stresses and stress amplitudes to obtain curves showing the evolution of ratcheting strain with number of cycles. In stage-I of the ratcheting curve, a large accumulation of ratcheting strain occurs, but at a decreasing rate. In contrast, in stage-II a smaller accumulation of ratcheting strain is found and the ratcheting rate becomes almost constant. Transmission electron microscope observations reveal that no specific dislocation structures are developed during the early stages of ratcheting. Rather, compared with the case of low cycle fatigue, it is observed that sub-cell formation is delayed in the case of ratcheting. The increase in dislocation density as a result of the ratcheting strain is obtained using the Orowan equation. The ratcheting strain is obtained from the shift of the plastic strain memory surface. The dislocation rearrangement is incorporated in a functional form of dislocation density, which is used to calibrate the parameters of a kinematic hardening law. The observations are formulated in a material model, plugged into the ABAQUS finite element (FE) platform as a user material subroutine. Finally the FE-simulated ratcheting curves are compared with the experimental curves.
Almeida, Luciano F; Vale, Maria G R; Dessuy, Morgana B; Silva, Márcia M; Lima, Renato S; Santos, Vagner B; Diniz, Paulo H D; Araújo, Mário C U
2007-10-31
The increasing development of miniaturized flow systems and the continuous monitoring of chemical processes require dramatically simplified and cheap flow schemes and instrumentation with large potential for miniaturization and consequent portability. For these purposes, the development of systems based on flow and batch technologies may be a good alternative. Flow-batch analyzers (FBA) have been successfully applied to implement analytical procedures such as titrations, sample pre-treatment, analyte addition and screening analysis. In spite of its favourable characteristics, the previously proposed FBA uses peristaltic pumps to propel the fluids, and this kind of propulsion presents high cost and large dimensions, making its miniaturization and portability unfeasible. To overcome these drawbacks, a low-cost, robust and compact FBA that does not rely on a peristaltic pump is proposed. It makes use of a lab-made piston coupled to a mixing chamber and a step motor controlled by a microcomputer. The piston-propelled FBA (PFBA) was applied for automatic preparation of calibration solutions for manganese determination in mineral waters by electrothermal atomic absorption spectrometry (ET AAS). Comparing the results obtained with two sets of calibration curves (five by manual and five by PFBA preparations), no significant statistical differences at a 95% confidence level were observed by applying the paired t-test. The standard deviations of the manual and PFBA procedures were always smaller than 0.2 and 0.1 μg L⁻¹, respectively. By using the PFBA it was possible to prepare about 80 calibration solutions per hour.
Fu, Jian; Li, Chen; Liu, Zhenzhong
2015-10-01
Synchrotron radiation nanoscale computed tomography (SR nano-CT) is a powerful analysis tool and can be used to perform chemical identification, mapping, or speciation of carbon and other elements together with X-ray fluorescence and X-ray absorption near edge structure (XANES) imaging. In practical applications, SR nano-CT often faces challenges due to misaligned geometry caused by axial vibration of the sample stage. Such vibration occurs quite frequently because of mechanical errors in manufacturing and assembly and thermal expansion during the time-consuming scanning. The axial vibration leads to structural overlap among neighboring layers and degrades imaging results by introducing artifacts into the nano-CT images. It becomes worse for samples with complicated axial structure. In this work, we analyze the influence of axial vibration on the nano-CT image by partial derivatives. Then, an axial vibration calibration method for SR nano-CT is developed and investigated. It is based on the cross-correlation of plane integral curves of the sample at different view angles. This work comprises a numerical study of the method and its experimental verification using a dataset measured with the full-field transmission X-ray microscope nano-CT setup at beamline 4W1A of the Beijing Synchrotron Radiation Facility. The results demonstrate that the presented method can handle the stage axial vibration. It works for random axial vibration and needs neither a calibration phantom nor additional calibration scanning. It will be helpful for the development and application of synchrotron radiation nano-CT systems.
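The core of the correction, cross-correlating plane-integral curves to find the axial shift between views, can be sketched in a few lines of NumPy; the Gaussian test curve and the three-row shift are synthetic stand-ins for real projection data.

```python
import numpy as np

def axial_shift(curve, reference):
    """Estimate the axial (vertical) shift of a projection relative to a
    reference view by cross-correlating their plane-integral curves
    (row sums of the projection), following the idea in the abstract.
    Returns the shift in detector rows."""
    c = curve - curve.mean()
    r = reference - reference.mean()
    xcorr = np.correlate(c, r, mode="full")
    return np.argmax(xcorr) - (len(reference) - 1)

# Synthetic plane-integral curve and a copy shifted by 3 rows to mimic
# sample-stage axial vibration between two view angles.
z = np.arange(200)
reference = np.exp(-0.5 * ((z - 100) / 20.0) ** 2)
vibrated = np.roll(reference, 3)

print("estimated axial shift (rows):", axial_shift(vibrated, reference))
```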
Taverniers, Isabel; Windels, Pieter; Vaïtilingom, Marc; Milcamps, Anne; Van Bockstaele, Erik; Van den Eede, Guy; De Loose, Marc
2005-04-20
Since the 18th of April 2004, two new regulations, EC/1829/2003 on genetically modified food and feed products and EC/1830/2003 on traceability and labeling of GMOs, are in force in the EU. This new, comprehensive regulatory framework emphasizes the need for an adequate tracing system. Unique identifiers, such as the transgene genome junction region or a specific rearrangement within the transgene DNA, should form the basis of such a tracing system. In this study, we describe the development of event-specific tracing systems for transgenic maize lines Bt11, Bt176, and GA21 and for canola event GT73. Molecular characterization of the transgene loci enabled us to clone an event-specific sequence into a plasmid vector, to be used as a marker, and to develop line-specific primers. Primer specificity was tested through qualitative PCRs and dissociation curve analysis in SYBR Green I real-time PCRs. The primers were then combined with event-specific TaqMan probes in quantitative real-time PCRs. Calibration curves were set up both with genomic DNA samples and the newly synthesized plasmid DNA markers. It is shown that cloned plasmid GMO target sequences are perfectly suitable as unique identifiers and quantitative calibrators. Together with an event-specific primer pair and a highly specific TaqMan probe, the plasmid markers form crucial components of a unique and straightforward tracing system for Bt11, Bt176, and GA21 maize and GT73 canola events.
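Quantification against such plasmid calibrators follows the usual real-time PCR standard-curve arithmetic; the sketch below uses invented Ct values and copy numbers to show the linear fit, the efficiency derived from the slope, and the back-calculation of an unknown sample.

```python
import numpy as np

# Hypothetical real-time PCR standard curve built from serial dilutions of
# the event-specific plasmid calibrator: log10(copy number) vs. measured Ct.
log10_copies = np.array([6, 5, 4, 3, 2], dtype=float)
ct_values    = np.array([18.1, 21.5, 24.9, 28.3, 31.7])

# Linear fit Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(log10_copies, ct_values, 1)

# PCR amplification efficiency from the slope (100% corresponds to -3.32).
efficiency = 10.0 ** (-1.0 / slope) - 1.0
print("slope:", slope, "efficiency:", efficiency)

# Quantify an unknown sample from its Ct using the calibration curve.
ct_unknown = 26.0
print("estimated copies:", 10.0 ** ((ct_unknown - intercept) / slope))
```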
NASA Astrophysics Data System (ADS)
Qian, Guian; Lei, Wei-Sheng; Niffenegger, M.; González-Albuixech, V. F.
2018-04-01
The work relates to the effect of temperature on the model parameters in local approaches (LAs) to cleavage fracture. According to a recently developed LA model, the physical consensus that plastic deformation is a prerequisite to cleavage fracture requires any LA model of cleavage fracture to take the initial yielding of a volume element as its threshold stress state for incurring cleavage fracture, in addition to the conventional practice of confining the fracture process zone within the plastic deformation zone. The physical consistency of the new LA model with the basic LA methodology and the differences between the new LA model and other existing models are interpreted. This new LA model is then adopted to investigate the temperature dependence of LA model parameters using circumferentially notched round tensile specimens. With published strength data as input, finite element (FE) calculations are conducted for elastic-perfectly plastic deformation and for realistic elastic-plastic hardening, respectively, to provide stress distributions for model calibration. The calibration results in temperature-independent model parameters. This leads to the establishment of a 'master curve' characteristic to synchronise the correlation between the nominal strength and the corresponding cleavage fracture probability at different temperatures. This 'master curve' behaviour is verified by strength data from three different steels, providing a new path to calculate cleavage fracture probability with significantly reduced FE effort.
Skin microrelief as a diagnostic tool (Conference Presentation)
NASA Astrophysics Data System (ADS)
Tchvialeva, Lioudmila; Phillips, Jamie; Zeng, Haishan; McLean, David; Lui, Harvey; Lee, Tim K.
2017-02-01
Skin surface roughness is an important property for differentiating skin diseases. Recently, roughness has also been identified as a potential diagnostic indicator in the early detection of skin cancer. Objective quantification is usually carried out by creating silicone replicas of the skin and then measuring the replicas. We have developed an alternative in-vivo technique to measure skin roughness based on laser speckle. Laser speckle is the interference pattern produced when coherent light is used to illuminate a rough surface and the backscattered light is imaged. Acquiring speckle contrast measurements from skin phantoms with controllable roughness, we created a calibration curve by linearly interpolating between measured points. This calibration curve accounts for internal scattering and is designed to evaluate skin microrelief whose root-mean-square roughness is in the range of 10-60 micrometers. To validate the effectiveness of our technique, we conducted a study to measure 243 skin lesions including actinic keratosis (8), basal cell carcinoma (24), malignant melanoma (31), nevus (73), squamous cell carcinoma (19), and seborrheic keratosis (79). The average roughness values ranged from 26 to 57 micrometers. Malignant melanoma was ranked as the smoothest and squamous cell carcinoma as the roughest lesion. An ANOVA test confirmed that malignant melanoma has significantly smaller roughness than other lesion types. Our results suggest that skin microrelief can be used to detect malignant melanoma from other skin conditions.
Development of neuraminidase detection using gold nanoparticles boron-doped diamond electrodes.
Wahyuni, Wulan T; Ivandini, Tribidasari A; Saepudin, Endang; Einaga, Yasuaki
2016-03-15
Gold nanoparticles-modified boron-doped diamond (AuNPs-BDD) electrodes, which were prepared with a self-assembly deposition of AuNPs at amine-terminated boron-doped diamond, were examined for voltammetric detection of neuraminidase (NA). The detection method was performed based on the difference of electrochemical responses of zanamivir at the gold surface before and after the reaction with NA in phosphate buffer solution (PBS, pH 5.5). A linear calibration curve for zanamivir in 0.1 M PBS in the absence of NA was achieved in the concentration range of 1 × 10⁻⁶ to 1 × 10⁻⁵ M (R² = 0.99) with an estimated limit of detection (LOD) of 2.29 × 10⁻⁶ M. Furthermore, using its reaction with 1.00 × 10⁻⁵ M zanamivir, a linear calibration curve of NA can be obtained in the concentration range of 0-12 mU (R² = 0.99) with an estimated LOD of 0.12 mU. High reproducibility was shown with a relative standard deviation (RSD) of 1.14% (n = 30). These performances could be maintained when the detection was performed in a mucin matrix. Comparison performed using gold-modified BDD (Au-BDD) electrodes suggested that the good performance of the detection method is due to the stability of the gold particle positions at the BDD surface. Copyright © 2016 Elsevier Inc. All rights reserved.
Xiu, Junshan; Dong, Lili; Qin, Hua; Liu, Yunyan; Yu, Jin
2016-12-01
The detection limit for trace metals in liquids has been greatly improved by laser-induced breakdown spectroscopy (LIBS) using a solid substrate. A paper substrate and a metallic substrate were used as solid substrates for the detection of trace metals in aqueous solutions and viscous liquids (lubricating oils), respectively. The matrix effect on the quantitative analysis of trace metals in the two types of liquids was investigated. For trace metals in aqueous solutions using the paper substrate, the calibration curves established for pure-solution and mixed-solution samples presented large variations in both slope and intercept for Cu, Cd, and Cr. Matrix effects among the different elements in mixed solutions were observed. However, good agreement was obtained between the measured and known values in real wastewater. For trace metals in lubricating oils, the matrix effect between the different oils is relatively small and reasonably negligible under the conditions of our experiment. A universal calibration curve can be established for trace metals in different types of oils. The two approaches verify that it is possible to develop a feasible and sensitive method with accurate results for the rapid detection of trace metals in industrial wastewater and viscous liquids by laser-induced breakdown spectroscopy. © The Author(s) 2016.
Smile effect detection for dispersive hyperspectral imager based on the doped reflectance panel
NASA Astrophysics Data System (ADS)
Zhou, Jiankang; Liu, Xiaoli; Ji, Yiqun; Chen, Yuheng; Shen, Weimin
2012-11-01
Hyperspectral imagers are now widely used in many fields, such as resource development and environmental monitoring. The reliability of the spectral data rests on instrument calibration. The smile effect, in which the wavelengths at the center pixels of the imaging spectrometer detector array differ from those at the marginal pixels, is a main factor in spectral calibration because it can degrade spectral data accuracy. When the spectral resolution is high, even a small smile can result in an obvious signal deviation near weak atmospheric absorption peaks. The traditional method of detecting smile is monochromator wavelength scanning, which is time-consuming and complex and cannot be used in the field or on a flying platform. We present a new smile detection method based on a holmium oxide panel, which is rich in absorption spectral features. A higher-spectral-resolution spectrometer and the imaging spectrometer under test acquired the optical signal from a Spectralon panel and the holmium oxide panel, respectively. The wavelength absorption peak positions of the column pixels are determined by a curve fitting method that includes a spectral response function sequence model and spectral resampling. An iteration strategy and the Pearson coefficient together are used to confirm the correlation between the measured and modeled spectral curves. The present smile detection method was applied to our designed imaging spectrometer, and the result shows that it can satisfy the precise smile detection requirements of a high-spectral-resolution imaging spectrometer.
The response of Kodak EDR2 film in high-energy electron beams.
Gerbi, Bruce J; Dimitroyannis, Dimitri A
2003-10-01
Kodak XV2 film has been a key dosimeter in radiation therapy for many years. The advantages of the recently introduced Kodak EDR2 film for photon beam dosimetry have been the focus of several IMRT verification dosimetry publications. However, no description of this film's response to electron beams exists in the literature. We initiated a study to characterize the response and utility of this film for electron beam dosimetry. We exposed a series of EDR2 films to 6, 9, 12, 16, and 20 MeV electrons in addition to 6 and 18 MV x rays to develop standard characteristic curves. The linac was first calibrated to ensure that the delivered dose was known accurately. All irradiations were done at dmax in polystyrene for both photons and electrons, all films were from the same batch, and were developed at the same time. We also exposed the EDR2 films in a solid water phantom to produce central axis depth dose curves. These data were compared against percent depth dose curves measured in a water phantom using an IC-10 ion chamber, Kodak XV2 film, and a PTW electron diode. The response of this film was the same for both 6 and 18 MV x rays, but showed an apparent energy-dependent enhancement for electron beams. The response of the film also increased with increasing electron energy. This caused the percent depth dose curves using film to be shifted toward the surface compared to the ion chamber data.
NASA Astrophysics Data System (ADS)
Zafar, Sufi; Lu, Minhua; Jagtiani, Ashish
2017-01-01
Field effect transistors (FET) have been widely used as transducers in electrochemical sensors for over 40 years. In this report, a FET transducer is compared with the recently proposed bipolar junction transistor (BJT) transducer. Measurements are performed on two chloride electrochemical sensors that are identical in all details except for the transducer device type. Comparative measurements show that the transducer choice significantly impacts the electrochemical sensor characteristics. Signal to noise ratio is 20 to 2 times greater for the BJT sensor. Sensitivity is also enhanced: BJT sensing signal changes by 10 times per pCl, whereas the FET signal changes by 8 or less times. Also, sensor calibration curves are impacted by the transducer choice. Unlike a FET sensor, the calibration curve of the BJT sensor is independent of applied voltages. Hence, a BJT sensor can make quantitative sensing measurements with minimal calibration requirements, an important characteristic for mobile sensing applications. As a demonstration for mobile applications, these BJT sensors are further investigated by measuring chloride levels in artificial human sweat for potential cystic fibrosis diagnostic use. In summary, the BJT device is demonstrated to be a superior transducer in comparison to a FET in an electrochemical sensor.
The role of a microDiamond detector in the dosimetry of proton pencil beams.
Gomà, Carles; Marinelli, Marco; Safai, Sairos; Verona-Rinati, Gianluca; Würfel, Jan
2016-03-01
In this work, the performance of a microDiamond detector in a scanned proton beam is studied and its potential role in the dosimetric characterization of proton pencil beams is assessed. The linearity of the detector response with absorbed dose and the dependence on dose rate were tested. The depth-dose curve and the lateral dose profiles of a proton pencil beam were measured and compared to reference data. The feasibility of calibrating the beam monitor chamber with a microDiamond detector was also studied. It was found that the detector reading is linear with the absorbed dose to water (down to a few cGy) and the detector response is independent of both the dose rate (up to a few Gy/s) and the proton beam energy (within the whole clinically relevant energy range). The detector showed a good performance in depth-dose curve and lateral dose profile measurements, and it might even be used to calibrate the beam monitor chambers, provided it is cross-calibrated against a reference ionization chamber. In conclusion, the microDiamond detector proved capable of performing an accurate dosimetric characterization of proton pencil beams. Copyright © 2015. Published by Elsevier GmbH.
Zafar, Sufi; Lu, Minhua; Jagtiani, Ashish
2017-01-01
Field effect transistors (FET) have been widely used as transducers in electrochemical sensors for over 40 years. In this report, a FET transducer is compared with the recently proposed bipolar junction transistor (BJT) transducer. Measurements are performed on two chloride electrochemical sensors that are identical in all details except for the transducer device type. Comparative measurements show that the transducer choice significantly impacts the electrochemical sensor characteristics. Signal to noise ratio is 20 to 2 times greater for the BJT sensor. Sensitivity is also enhanced: BJT sensing signal changes by 10 times per pCl, whereas the FET signal changes by 8 or less times. Also, sensor calibration curves are impacted by the transducer choice. Unlike a FET sensor, the calibration curve of the BJT sensor is independent of applied voltages. Hence, a BJT sensor can make quantitative sensing measurements with minimal calibration requirements, an important characteristic for mobile sensing applications. As a demonstration for mobile applications, these BJT sensors are further investigated by measuring chloride levels in artificial human sweat for potential cystic fibrosis diagnostic use. In summary, the BJT device is demonstrated to be a superior transducer in comparison to a FET in an electrochemical sensor. PMID:28134275
Lanfear, David E; Levy, Wayne C; Stehlik, Josef; Estep, Jerry D; Rogers, Joseph G; Shah, Keyur B; Boyle, Andrew J; Chuang, Joyce; Farrar, David J; Starling, Randall C
2017-05-01
Timing of left ventricular assist device (LVAD) implantation in advanced heart failure patients not on inotropes is unclear. Relevant prediction models exist (SHFM [Seattle Heart Failure Model] and HMRS [HeartMate II Risk Score]), but use in this group is not established. ROADMAP (Risk Assessment and Comparative Effectiveness of Left Ventricular Assist Device and Medical Management in Ambulatory Heart Failure Patients) is a prospective, multicenter, nonrandomized study of 200 advanced heart failure patients not on inotropes who met indications for LVAD implantation, comparing the effectiveness of HeartMate II support versus optimal medical management. We compared SHFM-predicted versus observed survival (overall survival and LVAD-free survival) in the optimal medical management arm (n=103) and HMRS-predicted versus observed survival in all LVAD patients (n=111) using Cox modeling, receiver-operator characteristic (ROC) curves, and calibration plots. In the optimal medical management cohort, the SHFM was a significant predictor of survival (hazard ratio=2.98; P <0.001; ROC area under the curve=0.71; P <0.001) but not LVAD-free survival (hazard ratio=1.41; P =0.097; ROC area under the curve=0.56; P =0.314). SHFM showed adequate calibration for survival but overestimated LVAD-free survival. In the LVAD cohort, the HMRS had marginal discrimination at 3 (Cox P =0.23; ROC area under the curve=0.71; P =0.026) and 12 months (Cox P =0.036; ROC area under the curve=0.62; P =0.122), but calibration was poor, underestimating survival across time and risk subgroups. In non-inotrope-dependent advanced heart failure patients receiving optimal medical management, the SHFM was predictive of overall survival but underestimated the risk of clinical worsening and LVAD implantation. Among LVAD patients, the HMRS had marginal discrimination and underestimated survival post-LVAD implantation. URL: http://www.clinicaltrials.gov. Unique identifier: NCT01452802. © 2017 American Heart Association, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granville, DA; Sahoo, N; Sawakuchi, GO
Purpose: To investigate the use of optically stimulated luminescence (OSL) detectors (OSLDs) for measurements of dose-averaged linear energy transfer (LET) in patient-specific proton therapy treatment fields. Methods: We used Al₂O₃:C OSLDs made from the same material as commercially available nanoDot OSLDs from Landauer, Inc. We calibrated two parameters of the OSL signal as functions of LET in therapeutic proton beams: the ratio of the ultraviolet and blue emission intensities (UV/blue ratio) and the OSL curve shape. These calibration curves were created by irradiating OSLDs in passively scattered beams of known LET (0.96 to 3.91 keV/µm). The LET values were determined using a validated Monte Carlo model of the beamline. We then irradiated new OSLDs with the prescription dose (16 to 74 cGy absorbed dose to water) at the center of the spread-out Bragg peak (SOBP) of four patient-specific treatment fields. From readouts of these OSLDs, we determined both the UV/blue ratio and OSL curve shape parameters. Combining these parameters with the calibration curves, we were able to measure LET using the OSLDs. The measurements were compared to the theoretical LET values obtained from Monte Carlo simulations of the patient-specific treatment fields. Results: Using the UV/blue ratio parameter, we were able to measure LET within 3.8%, 6.2%, 5.6% and 8.6% of the Monte Carlo value for each of the patient fields. Similarly, using the OSL curve shape parameter, LET measurements agreed within 0.5%, 11.0%, 2.5% and 7.6% for each of the four fields. Conclusion: We have demonstrated a method to verify LET in patient-specific proton therapy treatment fields using OSLDs. The possibility of enhancing the biological effectiveness of proton therapy treatment plans by including LET in the optimization has been previously shown. The LET verification method we have demonstrated will be useful in the quality assurance of such LET-optimized treatment plans. DA Granville received financial support from the Natural Sciences and Engineering Research Council of Canada.
A distance-independent calibration of the luminosity of type Ia supernovae and the Hubble constant
NASA Technical Reports Server (NTRS)
Leibundgut, Bruno; Pinto, Philip A.
1992-01-01
The absolute magnitude of SNe Ia at maximum is calibrated here using radioactive decay models for the light curve and a minimum of assumptions. The absolute magnitude parameter space is studied using explosion models and a range of rise times, and absolute B magnitudes at maximum are used to derive a range for H0 and the distance to the Virgo Cluster from SNe Ia. Rigorous limits on H0 of 45 and 105 km/s/Mpc are derived.
The Calibration of the Slotted Section for Precision Microwave Measurements
1952-03-01
Calibration Curve for Lossless Structures; B. The Correction Relations for Dissipative Structures; C. The Effect of an Error in the Variable Short... A discussion of probe effects and a method of correction for large insertion depths are given in the literature. This report is concerned... solely with error source (c)... The presence of the slot in the slotted section introduces effects: (a) the slot loads the waveguide...
Fast and accurate enzyme activity measurements using a chip-based microfluidic calorimeter.
van Schie, Morten M C H; Ebrahimi, Kourosh Honarmand; Hagen, Wilfred R; Hagedoorn, Peter-Leon
2018-03-01
Recent developments in microfluidic and nanofluidic technologies have resulted in development of new chip-based microfluidic calorimeters with potential use in different fields. One application would be the accurate high-throughput measurement of enzyme activity. Calorimetry is a generic way to measure activity of enzymes, but unlike conventional calorimeters, chip-based calorimeters can be easily automated and implemented in high-throughput screening platforms. However, application of chip-based microfluidic calorimeters to measure enzyme activity has been limited due to problems associated with miniaturization such as incomplete mixing and a decrease in volumetric heat generated. To address these problems we introduced a calibration method and devised a convenient protocol for using a chip-based microfluidic calorimeter. Using the new calibration method, the progress curve of alkaline phosphatase, which has product inhibition for phosphate, measured by the calorimeter was the same as that recorded by UV-visible spectroscopy. Our results may enable use of current chip-based microfluidic calorimeters in a simple manner as a tool for high-throughput screening of enzyme activity with potential applications in drug discovery and enzyme engineering. Copyright © 2017. Published by Elsevier Inc.
Hu, B.X.; He, C.
2008-01-01
An iterative inverse method, the sequential self-calibration method, is developed for mapping spatial distribution of a hydraulic conductivity field by conditioning on nonreactive tracer breakthrough curves. A streamline-based, semi-analytical simulator is adopted to simulate solute transport in a heterogeneous aquifer. The simulation is used as the forward modeling step. In this study, the hydraulic conductivity is assumed to be a deterministic or random variable. Within the framework of the streamline-based simulator, the efficient semi-analytical method is used to calculate sensitivity coefficients of the solute concentration with respect to the hydraulic conductivity variation. The calculated sensitivities account for spatial correlations between the solute concentration and parameters. The performance of the inverse method is assessed by two synthetic tracer tests conducted in an aquifer with a distinct spatial pattern of heterogeneity. The study results indicate that the developed iterative inverse method is able to identify and reproduce the large-scale heterogeneity pattern of the aquifer given appropriate observation wells in these synthetic cases. ?? International Association for Mathematical Geology 2008.
NASA Astrophysics Data System (ADS)
Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai
2016-05-01
The commonly employed calibration methods for laboratory-made spectrometers have several disadvantages, including poor calibration when the number of characteristic spectral peaks is low. Therefore, we present a wavelength calibration method using relative k-space distribution with low coherence interferometer. The proposed method utilizes an interferogram with a perfect sinusoidal pattern in k-space for calibration. Zero-crossing detection extracts the k-space distribution of a spectrometer from the interferogram in the wavelength domain, and a calibration lamp provides information about absolute wavenumbers. To assign wavenumbers, wavelength-to-k-space conversion is required for the characteristic spectrum of the calibration lamp with the extracted k-space distribution. Then, the wavelength calibration is completed by inverse conversion of the k-space into wavelength domain. The calibration performance of the proposed method was demonstrated with two experimental conditions of four and eight characteristic spectral peaks. The proposed method elicited reliable calibration results in both cases, whereas the conventional method of third-order polynomial curve fitting failed to determine wavelengths in the case of four characteristic peaks. Moreover, for optical coherence tomography imaging, the proposed method could improve axial resolution due to higher suppression of sidelobes in point spread function than the conventional method. We believe that our findings can improve not only wavelength calibration accuracy but also resolution for optical coherence tomography.
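The core of the calibration described above is extracting a relative k-space distribution from the zero crossings of a low-coherence interferogram. A minimal sketch of that step on synthetic data follows; the pixel-to-wavenumber mapping and fringe count are invented for illustration and do not represent the authors' spectrometer.

```python
import numpy as np

# Synthetic interferogram sampled uniformly in pixel index; in a real spectrometer the
# fringe is sinusoidal in wavenumber k, so zero-crossing positions map pixels to k.
pixels = np.arange(2048)
k_true = 1.0 + 0.0002 * pixels + 1e-8 * pixels**2      # assumed nonlinear pixel-to-k mapping
signal = np.sin(2 * np.pi * 50 * (k_true - k_true[0]) / (k_true[-1] - k_true[0]))

# Zero-crossing detection: find sample pairs where the sign changes, then refine the
# crossing position to sub-pixel precision by linear interpolation.
idx = np.where(np.diff(np.sign(signal)) != 0)[0]
crossings = idx - signal[idx] / (signal[idx + 1] - signal[idx])

# Successive zero crossings are equally spaced in k (half a fringe period apart), so
# assigning uniformly spaced k values to them yields the relative k-space distribution.
k_relative = np.arange(crossings.size)
print(crossings[:5], k_relative[:5])
```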
Magnetic nanoparticle temperature estimation.
Weaver, John B; Rauwerdink, Adam M; Hansen, Eric W
2009-05-01
The authors present a method of measuring the temperature of magnetic nanoparticles that can be adapted to provide in vivo temperature maps. Many of the minimally invasive therapies that promise to reduce health care costs and improve patient outcomes heat tissue to very specific temperatures to be effective. Measurements are required because physiological cooling, primarily blood flow, makes the temperature difficult to predict a priori. The ratio of the fifth and third harmonics of the magnetization generated by magnetic nanoparticles in a sinusoidal field is used to generate a calibration curve and to subsequently estimate the temperature. The calibration curve is obtained by varying the amplitude of the sinusoidal field. The temperature can then be estimated from any subsequent measurement of the ratio. The accuracy was 0.3 K between 20 and 50 °C using the current apparatus and half-second measurements. The method is independent of nanoparticle concentration and nanoparticle size distribution.
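To make the calibrate-then-invert idea concrete, the sketch below interpolates a measured harmonic ratio against a ratio-versus-temperature table. In the actual method the calibration curve is generated by varying the drive-field amplitude; the table and values here are invented stand-ins for illustration.

```python
import numpy as np

# Assumed calibration data: harmonic ratio (5th/3rd) at known temperatures (K).
temps_K = np.array([293.0, 300.0, 310.0, 320.0, 323.0])
ratio_cal = np.array([0.210, 0.204, 0.196, 0.189, 0.187])   # ratio decreases with temperature here

def estimate_temperature(ratio_measured):
    """Invert the calibration by interpolation (arrays reversed so x is increasing)."""
    return np.interp(ratio_measured, ratio_cal[::-1], temps_K[::-1])

print(estimate_temperature(0.200))   # a temperature between the calibration points
```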
Chandra Observations of SN 1987A: The Soft X-Ray Light Curve Revisited
NASA Technical Reports Server (NTRS)
Helder, E. A.; Broos, P. S.; Dewey, D.; Dwek, E.; McCray, R.; Park, S.; Racusin, J. L.; Zhekov, S. A.; Burrows, D. N.
2013-01-01
We report on the present stage of SN 1987A as observed by the Chandra X-Ray Observatory. We reanalyze published Chandra observations and add three more epochs of Chandra data to get a consistent picture of the evolution of the X-ray fluxes in several energy bands. We discuss the implications of several calibration issues for Chandra data. Using the most recent Chandra calibration files, we find that the 0.5-2.0 keV band fluxes of SN 1987A have increased by approximately 6 x 10(exp-13) erg s(exp-1)cm(exp-2) per year since 2009. This is in contrast with our previous result that the 0.5-2.0 keV light curve showed a sudden flattening in 2009. Based on our new analysis, we conclude that the forward shock is still in full interaction with the equatorial ring.
Response of TLD-100 in mixed fields of photons and electrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawless, Michael J.; Junell, Stephanie; Hammer, Cliff
Purpose: Thermoluminescent dosimeters (TLDs) are routinely used for dosimetric measurements of high energy photon and electron fields. However, TLD response in combined fields of photon and electron beam qualities has not been characterized. This work investigates the response of TLD-100 (LiF:Mg,Ti) to sequential irradiation by high-energy photon and electron beam qualities. Methods: TLDs were irradiated to a known dose by a linear accelerator with a 6 MV photon beam, a 6 MeV electron beam, and a NIST-traceable (60)Co beam. TLDs were also irradiated in a mixed field of the 6 MeV electron beam and the 6 MV photon beam. The average TLD response per unit dose of the TLDs for each linac beam quality was normalized to the average response per unit dose of the TLDs irradiated by the (60)Co beam. Irradiations were performed in water and in a Virtual Water™ phantom. The 6 MV photon beam and 6 MeV electron beam were used to create dose calibration curves relating TLD response to absorbed dose to water, which were applied to the TLDs irradiated in the mixed field. Results: TLD relative response per unit dose in the mixed field was less sensitive than the relative response in the photon field and more sensitive than the relative response in the electron field. Application of the photon dose calibration curve to the TLDs irradiated in a mixed field resulted in an underestimation of the delivered dose, while application of the electron dose calibration curve resulted in an overestimation of the dose. Conclusions: The relative response of TLD-100 in mixed fields fell between the relative response in the photon-only and electron-only fields. TLD-100 dosimetry of mixed fields must account for this intermediate response to minimize the estimation errors associated with calibration factors obtained from a single beam quality.
Response of TLD-100 in mixed fields of photons and electrons.
Lawless, Michael J; Junell, Stephanie; Hammer, Cliff; DeWerd, Larry A
2013-01-01
Thermoluminescent dosimeters (TLDs) are routinely used for dosimetric measurements of high energy photon and electron fields. However, TLD response in combined fields of photon and electron beam qualities has not been characterized. This work investigates the response of TLD-100 (LiF:Mg,Ti) to sequential irradiation by high-energy photon and electron beam qualities. TLDs were irradiated to a known dose by a linear accelerator with a 6 MV photon beam, a 6 MeV electron beam, and a NIST-traceable (60)Co beam. TLDs were also irradiated in a mixed field of the 6 MeV electron beam and the 6 MV photon beam. The average TLD response per unit dose of the TLDs for each linac beam quality was normalized to the average response per unit dose of the TLDs irradiated by the (60)Co beam. Irradiations were performed in water and in a Virtual Water™ phantom. The 6 MV photon beam and 6 MeV electron beam were used to create dose calibration curves relating TLD response to absorbed dose to water, which were applied to the TLDs irradiated in the mixed field. TLD relative response per unit dose in the mixed field was less sensitive than the relative response in the photon field and more sensitive than the relative response in the electron field. Application of the photon dose calibration curve to the TLDs irradiated in a mixed field resulted in an underestimation of the delivered dose, while application of the electron dose calibration curve resulted in an overestimation of the dose. The relative response of TLD-100 in mixed fields fell between the relative response in the photon-only and electron-only fields. TLD-100 dosimetry of mixed fields must account for this intermediate response to minimize the estimation errors associated with calibration factors obtained from a single beam quality.
LAMOST Spectrograph Response Curves: Stability and Application to Flux Calibration
NASA Astrophysics Data System (ADS)
Du, Bing; Luo, A.-Li; Kong, Xiao; Zhang, Jian-Nan; Guo, Yan-Xin; Cook, Neil James; Hou, Wen; Yang, Hai-Feng; Li, Yin-Bi; Song, Yi-Han; Chen, Jian-Jun; Zuo, Fang; Wu, Ke-Fei; Wang, Meng-Xin; Wu, Yue; Wang, You-Fen; Zhao, Yong-Heng
2016-12-01
The task of flux calibration for Large sky Area Multi-Object Spectroscopic Telescope (LAMOST) spectra is difficult due to many factors, such as the lack of standard stars, flat-fielding for a large field of view, and variation of reddening between different stars, especially at low Galactic latitudes. Poor selection, bad spectral quality, or extinction uncertainty of standard stars not only might induce errors to the calculated spectral response curve (SRC) but also might lead to failures in producing final 1D spectra. In this paper, we inspected spectra with Galactic latitude |b| ≥ 60° and reliable stellar parameters, determined through the LAMOST Stellar Parameter Pipeline (LASP), to study the stability of the spectrograph. To guarantee that the selected stars had been observed by each fiber, we selected 37,931 high-quality exposures of 29,000 stars from LAMOST DR2, with more than seven exposures for each fiber. We calculated the SRCs for each fiber for each exposure and calculated the statistics of SRCs for spectrographs with both the fiber variations and time variations. The result shows that the average response curve of each spectrograph (henceforth ASPSRC) is relatively stable, with statistical errors ≤10%. From the comparison between each ASPSRC and the SRCs for the same spectrograph obtained by the 2D pipeline, we find that the ASPSRCs are good enough to use for the calibration. The ASPSRCs have been applied to spectra that were abandoned by the LAMOST 2D pipeline due to the lack of standard stars, increasing the number of LAMOST spectra by 52,181 in DR2. Comparing those same targets with the Sloan Digital Sky Survey (SDSS), the relative flux differences between SDSS spectra and LAMOST spectra with the ASPSRC method are less than 10%, which underlines that the ASPSRC method is feasible for LAMOST flux calibration.
Brunet, Bertrand R.; Barnes, Allan J.; Scheidweiler, Karl B.; Mura, Patrick
2009-01-01
A sensitive and specific method is presented to simultaneously quantify methadone, heroin, cocaine and metabolites in sweat. Drugs were eluted from sweat patches with sodium acetate buffer, followed by SPE and quantification by GC/MS with electron impact ionization and selected ion monitoring. Daily calibration for anhydroecgonine methyl ester, ecgonine methyl ester, cocaine, benzoylecgonine (BE), codeine, morphine, 6-acetylcodeine, 6-acetylmorphine (6AM), heroin (5–1000 ng/patch) and methadone (10–1000 ng/patch) achieved determination coefficients of >0.995, and calibrators quantified to within ±20% of the target concentrations. Extended calibration curves (1000–10,000 ng/patch) were constructed for methadone, cocaine, BE and 6AM by modifying injection techniques. Within-run (N=5) and between-run (N=20) imprecisions were calculated at six control levels across the dynamic ranges with coefficients of variation of <6.5%. Accuracies at these concentrations were within ±11.9% of target. Heroin hydrolysis during specimen processing was <11%. This novel assay offers effective monitoring of drug exposure during drug treatment, workplace and criminal justice monitoring programs. PMID:18607576
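The daily-calibration acceptance checks mentioned above (determination coefficient and back-calculated calibrators within ±20% of target) can be sketched in a few lines. The calibrator levels and response ratios below are invented, not the assay's data.

```python
import numpy as np

# Assumed calibrators (ng/patch) and instrument response ratios (analyte / internal standard)
target = np.array([5, 20, 50, 100, 250, 500, 1000], dtype=float)
response = np.array([0.012, 0.047, 0.118, 0.239, 0.601, 1.195, 2.410])

slope, intercept = np.polyfit(target, response, 1)
r2 = np.corrcoef(target, response)[0, 1] ** 2          # determination coefficient

back_calc = (response - intercept) / slope             # re-quantify the calibrators
bias_pct = 100 * (back_calc - target) / target         # should fall within +/-20% of target

print(f"r^2 = {r2:.4f}")
print(np.round(bias_pct, 1))
```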
Meteor44 Video Meteor Photometry
NASA Technical Reports Server (NTRS)
Swift, Wesley R.; Suggs, Robert M.; Cooke, William J.
2004-01-01
Meteor44 is a software system developed at MSFC for the calibration and analysis of video meteor data. The dynamic range of the (8-bit) video data is extended by approximately 4 magnitudes for both meteors and stellar images using saturation compensation. Camera- and lens-specific saturation compensation coefficients are derived from artificial variable star laboratory measurements. Saturation compensation significantly increases the number of meteors with measured intensity and improves the estimation of meteoroid mass distribution. Astrometry is automated to determine each image's plate coefficient using appropriate star catalogs. The images are simultaneously intensity calibrated from the contained stars to determine the photon sensitivity and the saturation level referenced above the atmosphere. The camera's spectral response is used to compensate for stellar color index and typical meteor spectra in order to report meteor light curves in traditional visual magnitude units. Recent efforts include improved camera calibration procedures, long focal length "streak" meteor photometry, and two-station track determination. Meteor44 has been used to analyze data from the 2001, 2002, and 2003 MSFC Leonid observational campaigns as well as several lesser showers. The software is interactive and can be demonstrated using data from recent Leonid campaigns.
NASA Astrophysics Data System (ADS)
Saizu, Mirela Angela
2016-09-01
The developments of high-purity germanium detectors match very well the requirements of the in-vivo human body measurements regarding the gamma energy ranges of the radionuclides intended to be measured, the shape of the extended radioactive sources, and the measurement geometries. The Whole Body Counter (WBC) from IFIN-HH is based on an “over-square” high-purity germanium detector (HPGe) to perform accurate measurements of the incorporated radionuclides emitting X and gamma rays in the energy range of 10 keV-1500 keV, under conditions of good shielding, suitable collimation, and calibration. As an alternative to the experimental efficiency calibration method consisting of using reference calibration sources with gamma energy lines that cover all the considered energy range, it is proposed to use the Monte Carlo method for the efficiency calibration of the WBC using the radiation transport code MCNP5. The HPGe detector was modelled and the gamma energy lines of 241Am, 57Co, 133Ba, 137Cs, 60Co, and 152Eu were simulated in order to obtain the virtual efficiency calibration curve of the WBC. The Monte Carlo method was validated by comparing the simulated results with the experimental measurements using point-like sources. For their optimum matching, the impact of the variation of the front dead layer thickness and of the detector photon absorbing layers materials on the HPGe detector efficiency was studied, and the detector’s model was refined. In order to perform the WBC efficiency calibration for realistic people monitoring, more numerical calculations were generated simulating extended sources of specific shape according to the standard man characteristics.
Calibration and validation of TRUST MRI for the estimation of cerebral blood oxygenation
Lu, Hanzhang; Xu, Feng; Grgac, Ksenija; Liu, Peiying; Qin, Qin; van Zijl, Peter
2011-01-01
Recently, a T2-Relaxation-Under-Spin-Tagging (TRUST) MRI technique was developed to quantitatively estimate blood oxygen saturation fraction (Y) via the measurement of pure blood T2. This technique has shown promise for normalization of fMRI signals, for the assessment of oxygen metabolism, and in studies of cognitive aging and multiple sclerosis. However, a human validation study has not been conducted. In addition, the calibration curve used to convert blood T2 to Y has not accounted for the effects of hematocrit (Hct). In the present study, we first conducted experiments on blood samples under physiologic conditions, and the Carr-Purcell-Meiboom-Gill (CPMG) T2 was determined for a range of Y and Hct values. The data were fitted to a two-compartment exchange model to allow the characterization of a three-dimensional plot that can serve to calibrate the in vivo data. Next, in a validation study in humans, we showed that arterial Y estimated using TRUST MRI was 0.837±0.036 (N=7) during the inhalation of 14% O2, which was in excellent agreement with the gold-standard Y values of 0.840±0.036 based on Pulse-Oximetry. These data suggest that the availability of this calibration plot should enhance the applicability of TRUST MRI for non-invasive assessment of cerebral blood oxygenation. PMID:21590721
Jaramillo, Hector E; Gómez, Lessby; García, Jose J
2015-01-01
With the aim to study disc degeneration and the risk of injury during occupational activities, a new finite element (FE) model of the L4-L5-S1 segment of the human spine was developed based on the anthropometry of a typical Colombian worker. Beginning with medical images, the programs CATIA and SOLIDWORKS were used to generate and assemble the vertebrae and create the soft structures of the segment. The software ABAQUS was used to run the analyses, which included a detailed model calibration using the experimental step-wise reduction data for the L4-L5 component, while the L5-S1 segment was calibrated in the intact condition. The range of motion curves, the intradiscal pressure and the lateral bulging under pure moments were considered for the calibration. As opposed to other FE models that include the L5-S1 disc, the model developed in this study considered the regional variations and anisotropy of the annulus as well as a realistic description of the nucleus geometry, which allowed an improved representation of experimental data during the validation process. Hence, the model can be used to analyze the stress and strain distributions in the L4-L5 and L5-S1 discs of workers performing activities such as lifting and carrying tasks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganeshalingam, Mohan; Li Weidong; Filippenko, Alexei V.
We present BVRI light curves of 165 Type Ia supernovae (SNe Ia) from the Lick Observatory Supernova Search follow-up photometry program from 1998 through 2008. Our light curves are typically well sampled (cadence of 3-4 days) with an average of 21 photometry epochs. We describe our monitoring campaign and the photometry reduction pipeline that we have developed. Comparing our data set to that of Hicken et al., with which we have 69 overlapping supernovae (SNe), we find that as an ensemble the photometry is consistent, with only small overall systematic differences, although individual SNe may differ by as much as 0.1 mag, and occasionally even more. Such disagreement in specific cases can have significant implications for combining future large data sets. We present an analysis of our light curves which includes template fits of light-curve shape parameters useful for calibrating SNe Ia as distance indicators. Assuming the B - V color of SNe Ia at 35 days past maximum light can be presented as the convolution of an intrinsic Gaussian component and a decaying exponential attributed to host-galaxy reddening, we derive an intrinsic scatter of σ = 0.076 ± 0.019 mag, consistent with the Lira-Phillips law. This is the first of two papers, the second of which will present a cosmological analysis of the data presented herein.
NASA Astrophysics Data System (ADS)
Chris, Leong; Yoshiyuki, Yokoo
2017-04-01
Islands, which are concentrated in developing countries, have poor hydrological research data, which contributes to stress on hydrological resources due to unmonitored human influence and negligence. As studies in islands are relatively young, there is a need to understand these stresses and influences through building-block research specifically targeting islands. The flow duration curve (FDC) is a simple start-up hydrological tool that can be used in initial studies of islands. This study disaggregates the FDC into three sections (top, middle, and bottom), and in each section runoff is estimated with simple hydrological models. The study is based on the Hawaiian Islands, toward estimating runoff in ungauged island catchments in the humid tropics. Runoff estimations in the top and middle sections use the Curve Number (CN) method and the Regime Curve (RC), respectively. The bottom section is presented as a separate study from this one. The results showed that for the majority of the catchments the RC can be used for estimations in the middle section of the FDC. They also showed that in order for the CN method to make stable estimations, it had to be calibrated. This study identifies simple methodologies that can be useful for making runoff estimations in ungauged island catchments.
Zhang, Zhongheng; Hong, Yucai
2017-07-25
There are several disease severity scores being used for the prediction of mortality in critically ill patients. However, none of them was developed and validated specifically for patients with severe sepsis. The present study aimed to develop a novel prediction score for severe sepsis. A total of 3206 patients with severe sepsis were enrolled, including 1054 non-survivors and 2152 survivors. The LASSO score showed the best discrimination (area under the curve: 0.772; 95% confidence interval: 0.735-0.810) in the validation cohort as compared with other scores such as the simplified acute physiology score II, acute physiological score III, Logistic organ dysfunction system, sequential organ failure assessment score, and Oxford Acute Severity of Illness Score. The calibration slope was 0.889 and the Brier value was 0.173. The study employed a single-center database, the Medical Information Mart for Intensive Care III (MIMIC-III), for analysis. Severe sepsis was defined as infection and acute organ dysfunction. Clinical and laboratory variables used in clinical routines were included for screening. Subjects without missing values were included, and the whole dataset was split into training and validation cohorts. The score was named the LASSO score because variable selection was performed using the least absolute shrinkage and selection operator (LASSO) technique. Finally, the LASSO score was evaluated for its discrimination and calibration in the validation cohort. The study developed the LASSO score for mortality prediction in patients with severe sepsis. Although the score had good discrimination and calibration in a randomly selected subsample, external validations are still required.
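A minimal sketch of LASSO-style variable selection with an L1-penalized logistic regression and held-out discrimination (AUC) is given below. The data are simulated and the penalty strength is arbitrary; this is not the MIMIC-III analysis, only an illustration of the technique named in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for routine clinical/laboratory variables and mortality labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 30))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=3000) > 1.2).astype(int)

X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.3, random_state=0)

# The L1 (LASSO) penalty drives uninformative coefficients to exactly zero,
# performing variable selection as part of the fit.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_train, y_train)

auc = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
print(f"selected variables: {(model.coef_ != 0).sum()}, validation AUC: {auc:.3f}")
```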
Yarita, Takashi; Aoyagi, Yoshie; Otake, Takamitsu
2015-05-29
The impact of the matrix effect in GC-MS quantification of pesticides in food using the corresponding isotope-labeled internal standards was evaluated. A spike-and-recovery study of nine target pesticides was first conducted using paste samples of corn, green soybean, carrot, and pumpkin. The observed analytical values using isotope-labeled internal standards were more accurate for most target pesticides than that obtained using the external calibration method, but were still biased from the spiked concentrations when a matrix-free calibration solution was used for calibration. The respective calibration curves for each target pesticide were also prepared using matrix-free calibration solutions and matrix-matched calibration solutions with blank soybean extract. The intensity ratio of the peaks of most target pesticides to that of the corresponding isotope-labeled internal standards was influenced by the presence of the matrix in the calibration solution; therefore, the observed slope varied. The ratio was also influenced by the type of injection method (splitless or on-column). These results indicated that matrix-matching of the calibration solution is required for very accurate quantification, even if isotope-labeled internal standards were used for calibration. Copyright © 2015 Elsevier B.V. All rights reserved.
Results of the 1996 JPL Balloon Flight Solar Cell Calibration Program
NASA Technical Reports Server (NTRS)
Anspaugh, B. E.; Weiss, R. S.
1996-01-01
The 1996 solar cell calibration balloon flight campaign was completed with the first flight on June 30, 1996 and a second flight on August 8, 1996. All objectives of the flight program were met. Sixty-four modules were carried to an altitude of 120,000 ft (36.6 km). Full I-V curves were measured on 22 of these modules, and output at a fixed load was measured on 42 modules. These data were corrected to 28 °C and to 1 AU (1.496 x 10(exp 8) km). The calibrated cells have been returned to the participants and can now be used as reference standards in simulator testing of cells and arrays.
Lee, K R; Dipaolo, B; Ji, X
2000-06-01
Calibration is the process of fitting a model based on reference data points (x, y), then using the model to estimate an unknown x based on a new measured response, y. In a DNA assay, x is the concentration and y is the measured signal volume. A four-parameter logistic model has frequently been used for calibration of immunoassays when the response is optical density for enzyme-linked immunosorbent assay (ELISA) or adjusted radioactivity count for radioimmunoassay (RIA). Here, it is shown that the same model, or a linearized version of the curve, is equally useful for the calibration of a chemiluminescent hybridization assay for residual DNA in recombinant protein drugs and for calculation of performance measures of the assay.
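A sketch of a four-parameter logistic (4PL) calibration fit and the inverse prediction of an unknown concentration from a new response is shown below; the concentrations, signals, and starting values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: a = lower asymptote, d = upper asymptote,
    c = inflection concentration, b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Assumed reference points: concentration vs. measured signal.
conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
signal = np.array([55, 120, 380, 1100, 2900, 5200, 6800], dtype=float)

# Constrain all parameters to be positive so the power term stays well defined.
params, _ = curve_fit(four_pl, conc, signal, p0=[50, 1, 50, 7000], bounds=(0, np.inf))

def inverse_four_pl(y, a, b, c, d):
    """Invert the fitted model to estimate the unknown concentration from a response y."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(inverse_four_pl(1500.0, *params))
```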
Kokubun, Hideya; Ouki, Makiko; Matoba, Motohiro; Kubo, Hiroaki; Hoka, Sumio; Yago, Kazuo
2005-03-01
We developed an HPLC procedure using electrochemical detection for the quantitation of oxycodone and hydrocotarnine in the serum of cancer patients. An eluent of methanol:acetonitrile:5 mM pH 8 phosphate buffer (2:1:7) was used as the mobile phase. The calibration curve was linear in the range from 10 ng/mL to 100 ng/mL. The recovery of oxycodone and hydrocotarnine was 97.2% and 90.5%, respectively. The relative standard deviations within runs and between runs for the assay of oxycodone or hydrocotarnine were less than 4.8%. The method developed here was better than the method reported previously.
Ogawa, Kazuma; Kaneta, Takashi
2016-01-01
Microfluidic paper-based analytical devices (μPADs) were used to detect the iron ion content in the water of a natural hot spring in order to assess the applicability of this process to the environmental analysis of natural water. The μPADs were fabricated using a wax printer, with hydroxylamine added to the detection reservoirs to reduce Fe(3+) to Fe(2+), 1,10-phenanthroline for complex formation, and poly(acrylic acid) for ion-pair formation with an acetate buffer (pH 4.7). The calibration curve of Fe(3+) was linear from 100 to 1000 ppm in the semi-log plot, whereas the color intensity was proportional to the concentration of Fe(3+) from 40 to 350 ppm. The calibration curve fluctuated from day to day over four days of successive experiments, indicating that a calibration curve must be constructed each day. When freshly prepared μPADs were compared with stored ones, no significant difference was found. The μPADs were applied to the determination of Fe(3+) in a sample of water from a natural hot spring. Both the accuracy and the precision of the μPAD method were evaluated by comparisons with the results obtained via conventional spectrophotometry. The results of the μPADs were in good agreement with, but less precise than, those obtained via conventional spectrophotometry. Consequently, the μPADs offer advantages that include rapid and miniaturized operation, although the precision was poorer than that of conventional spectrophotometry.
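The semi-log calibration described above (color intensity linear in the logarithm of concentration) can be fitted and inverted as in the sketch below; the intensity readings are invented and only illustrate the form of the curve.

```python
import numpy as np

# Assumed color-intensity readings for Fe(3+) standards (ppm); the semi-log plot is
# taken to be linear, i.e. intensity ~ slope * log10(concentration) + intercept.
conc_ppm = np.array([100, 200, 400, 600, 800, 1000], dtype=float)
intensity = np.array([42, 55, 68, 75, 81, 85], dtype=float)

slope, intercept = np.polyfit(np.log10(conc_ppm), intensity, 1)

def concentration_from_intensity(i):
    """Invert the semi-log calibration for an unknown hot-spring sample."""
    return 10 ** ((i - intercept) / slope)

print(round(concentration_from_intensity(70.0), 1))   # ppm estimate
```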
Müller, Christoph; Vetter, Florian; Richter, Elmar; Bracher, Franz
2014-02-01
The occurrence of the bioactive components caffeine (xanthine alkaloid), myosmine and nicotine (pyridine alkaloids) in different edibles and plants is well known, but the content of myosmine and nicotine is still ambiguous in milk/dark chocolate. Therefore, a sensitive method for determination of these components was established, a simple separation of the dissolved analytes from the matrix, followed by headspace solid-phase microextraction coupled with gas chromatography-tandem mass spectrometry (HS-SPME-GC-MS/MS). This is the first approach for simultaneous determination of caffeine, myosmine, and nicotine with a convenient SPME technique. Calibration curves were linear for the xanthine alkaloid (250 to 3000 mg/kg) and the pyridine alkaloids (0.000125 to 0.003000 mg/kg). Residuals of the calibration curves were lower than 15%, hence the limits of detection were set as the lowest points of the calibration curves. The limits of detection calculated from linearity data were for caffeine 216 mg/kg, for myosmine 0.000110 mg/kg, and for nicotine 0.000120 mg/kg. Thirty samples of 5 chocolate brands with varying cocoa contents (30% to 99%) were analyzed in triplicate. Caffeine and nicotine were detected in all samples of chocolate, whereas myosmine was not present in any sample. The caffeine content ranged from 420 to 2780 mg/kg (relative standard deviation 0.1 to 11.5%) and nicotine from 0.000230 to 0.001590 mg/kg (RSD 2.0 to 22.1%). © 2014 Institute of Food Technologists®
Spectral Measurement of Watershed Coefficients in the Southern Great Plains
NASA Technical Reports Server (NTRS)
Blanchard, B. J. (Principal Investigator); Bausch, W.
1978-01-01
The author has identified the following significant results. It was apparent that the spectral calibration of runoff curve numbers cannot be achieved on watersheds where significant areas of timber were within the drainage area. The absorption of light by wet soil conditions restricts differentiation of watersheds with regard to watershed runoff curve numbers. It appeared that the predominant factor influencing the classification of watershed runoff curve numbers was the difference in soil color and its associated reflectance when dry. In regions where vegetation grows throughout the year, where wet surface conditions prevail, or where watersheds are timbered, there is little hope of classifying runoff potential with visible light alone.
Comet P/Halley 1910, 1986: An objective-prism study
NASA Technical Reports Server (NTRS)
Carsenty, U.; Bus, E. S.; Wyckoff, S.; Lutz, B.
1986-01-01
V. M. Slipher of the Lowell Obs. collected a large amount of spectroscopic data during the 1910 apparition of Halley's comet. Three of his post-perihelion objective-prism plates were selected, digitized, and subjected to modern digital data reduction procedures. Some of the important steps in the analysis were: (1) Density-to-intensity conversion, for which 1910 slit spectra of an Fe-arc lamp on similar plates (Sigma) were used to derive an average characteristic curve; (2) Flux calibration, using the fact that during the period June 2 to 7, 1910, P/Halley was very close (in angular distance) to the bright star Alpha Sex (A0III, V = 4.49), and the spectra of both star and comet were recorded on the same plates. The flux distribution of Alpha Sex was assumed to be similar to that of the standard star 58 Aql, and a sensitivity curve for the system was derived; (3) Atmospheric extinction correction, using the standard curve for the Lowell Obs.; (4) Solar continuum subtraction, using the standard solar spectrum binned to the spectral resolution. An example of a flux-calibrated spectrum of the coma (integrated over 87,000 km) before the subtraction of solar continuum is presented.
Xiu, Junshan; Liu, Shiming; Sun, Meiling; Dong, Lili
2018-01-20
The photoelectric performance of metal-ion-doped TiO2 films can be improved by changing the composition and concentration of the additive elements. In this work, TiO2 films doped with different Sn concentrations were obtained with the hydrothermal method. Qualitative and quantitative analysis of the Sn element in the TiO2 films was achieved with laser-induced breakdown spectroscopy (LIBS), with the calibration curves plotted accordingly. The photoelectric characteristics of TiO2 films doped with different Sn contents were observed with UV-visible absorption spectra and J-V curves. All results showed that Sn doping could red-shift the optical absorption and improve the photoelectric properties of the TiO2 films. When the Sn doping concentration in the TiO2 films was 11.89 mmol/L, as calculated from the LIBS calibration curves, the current density of the film was the largest, indicating the best photoelectric performance. This indicates that LIBS is a feasible measurement method that can be applied to the qualitative and quantitative analysis of additive elements in metal oxide nanometer films.
Biodosimetry of heavy ions by interphase chromosome painting
NASA Astrophysics Data System (ADS)
Durante, M.; Kawata, T.; Nakano, T.; Yamada, S.; Tsujii, H.
1998-11-01
We report measurements of chromosomal aberrations in peripheral blood lymphocytes from cancer patients undergoing radiotherapy treatment. Patients with cervix or esophageal cancer were treated with 10 MV X-rays produced at a LINAC accelerator, or high-energy carbon ions produced at the HIMAC accelerator at the National Institute for Radiological Sciences (NIRS) in Chiba. Blood samples were obtained before, during, and after the radiation treatment. Chromosomes were prematurely condensed by incubation in calyculin A. Aberrations in chromosomes 2 and 4 were scored after fluorescence in situ hybridization with whole-chromosome probes. Pre-treatment samples were exposed in vitro to X-rays, individual dose-response curves for the induction of chromosomal aberrations were determined, and used as calibration curves to calculate the effective whole-body dose absorbed during the treatment. This calculated dose, based on the calibration curve relative to the induction of reciprocal exchanges, has a sharp increase after the first few fractions of the treatment, then saturates at high doses. Although carbon ions are 2-3 times more effective than X-rays in tumor sterilization, the effective dose was similar to that of X-ray treatment. However, the frequency of complex-type chromosomal exchanges was much higher for patients treated with carbon ions than X-ray.
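In such biodosimetry studies the effective whole-body dose is obtained by inverting the individual dose-response (calibration) curve for aberration yield. The sketch below assumes a linear-quadratic calibration with invented coefficients and solves it for dose; it is not the patients' calibration data.

```python
import numpy as np

# Assumed individual calibration: exchanges per cell Y(D) = c + alpha*D + beta*D^2,
# fitted from the in vitro X-ray dose-response of the pre-treatment sample.
c, alpha, beta = 0.005, 0.04, 0.06    # illustrative coefficients, not patient data

def dose_from_yield(y):
    """Solve the linear-quadratic calibration for dose (Gy), taking the positive root."""
    # beta*D^2 + alpha*D + (c - y) = 0
    disc = alpha ** 2 - 4 * beta * (c - y)
    return (-alpha + np.sqrt(disc)) / (2 * beta)

# Aberration yield scored in lymphocytes sampled during treatment
print(round(dose_from_yield(0.35), 2))   # estimated effective whole-body dose in Gy
```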
Van Hooff, Robbert-Jan; Nieboer, Koenraad; De Smedt, Ann; Moens, Maarten; De Deyn, Peter Paul; De Keyser, Jacques; Brouns, Raf
2014-10-01
We evaluated the reliability of eight clinical prediction models for symptomatic intracerebral hemorrhage (sICH) and long-term functional outcome in stroke patients treated with thrombolytics according to clinical practice. In a cohort of 169 patients, 60 patients (35.5%) received IV rtPA according to the European license criteria. The remaining patients received off-label IV rtPA and/or were treated with intra-arterial thrombolysis. We used receiver operating characteristic curves to analyze the discriminative capacity of the MSS score, the HAT score, the SITS SICH score, the SEDAN score and the GRASPS score for sICH according to the NINDS and the ECASSII criteria. Similarly, the discriminative capacity of the s-TPI, the iScore and the DRAGON score was assessed for the modified Rankin Scale (mRS) score at 3 months poststroke. An area under the curve (c-statistic) >0.8 was considered to reflect good discriminative capacity. The reliability of the best performing prediction model was further examined with calibration curves. Separate analyses were performed for patients meeting the European license criteria for IV rtPA and patients outside these criteria. For prediction of sICH, c-statistics were 0.66-0.86, and the MSS yielded the best results. For functional outcome, c-statistics ranged from 0.72 to 0.86, with the s-TPI as best performer. The s-TPI had the lowest absolute error on the calibration curve for predicting excellent outcome (mRS 0-1) and catastrophic outcome (mRS 5-6). All eight clinical models for outcome prediction after thrombolysis for acute ischemic stroke showed fair predictive value in patients treated according to daily practice. The s-TPI had the best discriminatory ability and was well calibrated in our study population. Copyright © 2014 Elsevier B.V. All rights reserved.
SU-G-BRB-14: Uncertainty of Radiochromic Film Based Relative Dose Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devic, S; Tomic, N; DeBlois, F
2016-06-15
Purpose: Due to its inherently non-linear dose response, measurement of a relative dose distribution with radiochromic film requires measurement of absolute dose using a calibration curve following a previously established reference dosimetry protocol. On the other hand, a functional form that converts the inherently non-linear dose response curve of the radiochromic film dosimetry system into a linear one has been proposed recently [Devic et al, Med. Phys. 39, 4850–4857 (2012)]. However, the question remains what the uncertainty of a relative dose measured in this way would be. Methods: If the relative dose distribution is determined through the reference dosimetry system (conversion of the response into absolute dose using the calibration curve), the total uncertainty of the relative dose is calculated by summing in quadrature the total uncertainties of the doses measured at a given point and at the reference point. On the other hand, if the relative dose is determined using the linearization method, the new response variable is calculated as ζ = a(netOD)^n / ln(netOD). In this case, the total uncertainty in relative dose is calculated by summing in quadrature the uncertainties of the new response function (σζ) at a given point and at the reference point. Results: Except at very low doses, where the measurement uncertainty dominates, the total relative dose uncertainty is less than 1% for the linearized response method, compared with an almost 2% uncertainty level for the reference dosimetry method. The result is not surprising considering that the total uncertainty of the reference dose method is dominated by the fitting uncertainty, which is mitigated in the case of the linearization method. Conclusion: Linearization of the radiochromic film dose response provides a convenient and more precise method for relative dose measurements, as it does not require reference dosimetry and creation of a calibration curve. However, the linearity of the newly introduced function must be verified. Dave Lewis is inventor and runs a consulting company for radiochromic films.
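The two ingredients above, the linearizing response variable quoted in the abstract and the quadrature combination of relative-dose uncertainties, can be written down directly. The constants a and n are batch-specific fit parameters; the placeholder values and example uncertainties here are assumptions, not the study's numbers.

```python
import numpy as np

def zeta(net_od, a=1.0, n=2.0):
    """Linearized response variable as quoted in the abstract, zeta = a*netOD**n / ln(netOD).
    a and n are fit constants of the film dosimetry system; the defaults are placeholders."""
    return a * net_od ** n / np.log(net_od)

def relative_dose_uncertainty(sigma_point_pct, sigma_ref_pct):
    """Relative-dose uncertainty: the two individual relative uncertainties added in quadrature."""
    return np.hypot(sigma_point_pct, sigma_ref_pct)

# Example: ~0.7% uncertainty on the response at the point of interest and at the
# reference point combine to about 1% on the relative dose.
print(relative_dose_uncertainty(0.7, 0.7))
```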
A method for determination of [Fe3+]/[Fe2+] ratio in superparamagnetic iron oxide
NASA Astrophysics Data System (ADS)
Jiang, Changzhao; Yang, Siyu; Gan, Neng; Pan, Hongchun; Liu, Hong
2017-10-01
Superparamagnetic iron oxide nanoparticles (SPION), a kind of nanophase material, are widely used in biomedical applications such as magnetic resonance imaging (MRI), drug delivery, and magnetic-field-assisted therapy. The magnetic properties of SPION are closely connected with the crystal structure, namely the ratio of the Fe3+ and Fe2+ that form the SPION. A simple way to determine the Fe3+ and Fe2+ content is therefore important for studying the properties of SPION. This work describes a method for determination of the Fe3+ and Fe2+ ratio in SPION by UV-vis spectrophotometry based on the reaction of Fe2+ with 1,10-phenanthroline. A standard curve for Fe with R2 = 0.9999 is used to determine the content of Fe2+ and of total iron, with 2.5 mL of 0.01% (w/v) SPION digested by HCl, 10 mL of pH = 4.30 HOAc-NaAc buffer, 5 mL of 0.01% (w/v) 1,10-phenanthroline, and 1 mL of 10% (w/v) ascorbic acid for the total-iron determination performed independently. However, the presence of Fe3+ interferes with obtaining the actual value of Fe2+ (error close to 9%). We designed a calibration curve to eliminate this error by preparing a series of solutions with different [Fe3+]/[Fe2+] ratios. Through the calibration curve, the error between the measured value and the actual value can be reduced to 0.4%. The R2 of linearity of the method is 0.99441 and 0.99929 for Fe2+ and total iron, respectively. The errors in recovery accuracy and in inter-day and intra-day precision are both lower than 2%, which demonstrates the reliability of the determination method.
Leclerc, Francis; Duhamel, Alain; Deken, Valérie; Grandbastien, Bruno; Leteurtre, Stéphane
2017-08-01
A recent task force has proposed the use of Sequential Organ Failure Assessment in clinical criteria for sepsis in adults. We sought to evaluate the predictive validity for PICU mortality of the Pediatric Logistic Organ Dysfunction-2 and of the "quick" Pediatric Logistic Organ Dysfunction-2 scores on day 1 in children with suspected infection. Secondary analysis of the database used for the development and validation of the Pediatric Logistic Organ Dysfunction-2. Nine university-affiliated PICUs in Europe. Only children with hypotension-low systolic blood pressure or low mean blood pressure using age-adapted cutoffs-and lactatemia greater than 2 mmol/L were considered in shock. We developed the quick Pediatric Logistic Organ Dysfunction-2 score on day 1 including tachycardia, hypotension, and altered mentation (Glasgow < 11): one point for each variable (range, 0-3). Outcome was mortality at PICU discharge. Discrimination (Area under receiver operating characteristic curve-95% CI) and calibration (goodness of fit test) of the scores were studied. This study included 862 children with suspected infection (median age: 12.3 mo; mortality: n = 60 [7.0%]). Area under the curve of the Pediatric Logistic Organ Dysfunction-2 score on day 1 was 0.91 (0.86-0.96) in children with suspected infection, 0.88 (0.79-0.96) in those with low systolic blood pressure and hyperlactatemia, and 0.91 (0.85-0.97) in those with low mean blood pressure and hyperlactatemia; calibration p value was 0.03, 0.36, and 0.49, respectively. A Pediatric Logistic Organ Dysfunction-2 score on day 1 greater than or equal to 8 reflected an overall risk of mortality greater than or equal to 9.3% in children with suspected infection. Area under the curve of the quick Pediatric Logistic Organ Dysfunction-2 score on day 1 was 0.82 (0.76-0.87) with systolic blood pressure or mean blood pressure; calibration p value was 0.89 and 0.72, respectively. A score greater than or equal to 2 reflected a mortality risk greater than or equal to 19.8% with systolic blood pressure and greater than or equal to 15.9% with mean blood pressure. Among children admitted to PICU with suspected infection, Pediatric Logistic Organ Dysfunction-2 score on day 1 was highly predictive of PICU mortality suggesting its use to standardize definitions and diagnostic criteria of pediatric sepsis. Further studies are needed to determine the usefulness of the quick Pediatric Logistic Organ Dysfunction-2 score on day 1 outside of the PICU.
An Empirical Formula From Ion Exchange Chromatography and Colorimetry.
ERIC Educational Resources Information Center
Johnson, Steven D.
1996-01-01
Presents a detailed procedure for finding an empirical formula from ion exchange chromatography and colorimetry. Introduces students to more varied techniques including volumetric manipulation, titration, ion-exchange, preparation of a calibration curve, and the use of colorimetry. (JRH)
Two imaging techniques for 3D quantification of pre-cementation space for CAD/CAM crowns.
Rungruanganunt, Patchanee; Kelly, J Robert; Adams, Douglas J
2010-12-01
Internal three-dimensional (3D) "fit" of prostheses to prepared teeth is likely more important clinically than "fit" judged only at the level of the margin (i.e. marginal "opening"). This work evaluates two techniques for quantitatively defining 3D "fit", both using pre-cementation space impressions: X-ray microcomputed tomography (micro-CT) and quantitative optical analysis. Both techniques are of interest for comparison of CAD/CAM system capabilities and for documenting "fit" as part of clinical studies. Pre-cementation space impressions were taken of a single zirconia coping on its die using a low viscosity poly(vinyl siloxane) impression material. Calibration specimens of this material were fabricated between the measuring platens of a micrometre. Both calibration curves and pre-cementation space impression data sets were obtained by examination using micro-CT and quantitative optical analysis. Regression analysis was used to compare calibration curves with calibration sets. Micro-CT calibration data showed tighter 95% confidence intervals and was able to measure over a wider thickness range than for the optical technique. Regions of interest (e.g., lingual, cervical) were more easily analysed with optical image analysis and this technique was more suitable for extremely thin impression walls (<10-15μm). Specimen preparation is easier for micro-CT and segmentation parameters appeared to capture dimensions accurately. Both micro-CT and the optical method can be used to quantify the thickness of pre-cementation space impressions. Each has advantages and limitations but either technique has the potential for use as part of clinical studies or CAD/CAM protocol optimization. Copyright © 2010 Elsevier Ltd. All rights reserved.
Application of NIR laser diodes to pulse oximetry
NASA Astrophysics Data System (ADS)
Lopez Silva, Sonnia M.; Giannetti, Romano; Dotor, Maria L.; Sendra, Jose R.; Silveira, Juan P.; Briones, Fernando
1999-01-01
A transmittance pulse oximeter based on near-infrared laser diodes for monitoring arterial blood hemoglobin oxygen saturation has been developed and tested. The measurement system consists of the optical sensor, sensor electronics, acquisition board and personal computer. The system has been tested in a two-part experimental study involving human volunteers. A calibration curve was derived and healthy volunteers were monitored under normal and apnea conditions, both with the proposed system and with a commercial pulse oximeter. The obtained results demonstrate the feasibility of using a sensor with laser diodes emitting at specific near-infrared wavelengths for pulse oximetry.
Fiber Fabry-Perot interferometer sensor for measuring resonances of piezoelectric elements
NASA Astrophysics Data System (ADS)
da Silva, Ricardo E.; Oliveira, Roberson A.; Pohl, Alexandre A. P.
2011-05-01
The development of a fiber extrinsic Fabry-Perot interferometer for measuring vibration amplitude and resonances of piezoelectric elements is reported. The signal demodulation method based on the use of an optical spectrum analyzer allows the measurement of displacements and resonances with high resolution. The technique consists basically in monitoring changes in the intensity or the wavelength of a single interferometric fringe at a point of high sensitivity in the sensor response curve. For sensor calibration, three signal processing techniques were employed. Vibration amplitude measurement with 0.84 nm/V sensitivity and the characterization of the piezo resonance is demonstrated.
Measuring the nonlinear elastic properties of tissue-like phantoms.
Erkamp, Ramon Q; Skovoroda, Andrei R; Emelianov, Stanislav Y; O'Donnell, Matthew
2004-04-01
A direct mechanical system simultaneously measuring external force and deformation of samples over a wide dynamic range is used to obtain force-displacement curves of tissue-like phantoms under plain strain deformation. These measurements, covering a wide deformation range, then are used to characterize the nonlinear elastic properties of the phantom materials. The model assumes incompressible media, in which several strain energy potentials are considered. Finite-element analysis is used to evaluate the performance of this material characterization procedure. The procedures developed allow calibration of nonlinear elastic phantoms for elasticity imaging experiments and finite-element simulations.
SPECIFIC HEAT DATA ANALYSIS PROGRAM FOR THE IBM 704 DIGITAL COMPUTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roach, P.R.
1962-01-01
A computer program was developed to calculate the specific heat of a substance in the temperature range from 0.3 to 4.2 deg K, given temperature calibration data for a carbon resistance thermometer, experimental temperature drift, and heating period data. The specific heats calculated from these data are then fitted by a curve using the method of least squares, and the specific heats are corrected for the effect of the curvature of the data. The method, operation, program details, and program stops are discussed. A program listing is included. (M.C.G.)
NASA Astrophysics Data System (ADS)
Kaplan, D. A.; Reaver, N.; Hensley, R. T.; Cohen, M. J.
2017-12-01
Hydraulic transport is an important component of nutrient spiraling in streams. Quantifying conservative solute transport is a prerequisite for understanding the cycling and fate of reactive solutes, such as nutrients. Numerous studies have modeled solute transport within streams using the one-dimensional advection, dispersion and storage (ADS) equation calibrated to experimental data from tracer experiments. However, there are limitations to the information about in-stream transient storage that can be derived from calibrated ADS model parameters. Transient storage (TS) in the ADS model is most often modeled as a single process, and calibrated model parameters are "lumped" values that are the best-fit representation of multiple real-world TS processes. In this study, we developed a roving profiling method to assess and predict spatial heterogeneity of in-stream TS. We performed five tracer experiments on three spring-fed rivers in Florida (USA) using Rhodamine WT. During each tracer release, stationary fluorometers were deployed to measure breakthrough curves for multiple reaches within the river. Teams of roving samplers moved along the rivers measuring tracer concentrations at various locations and depths within the reaches. A Bayesian statistical method was used to calibrate the ADS model to the stationary breakthrough curves, resulting in probability distributions for both the advective and TS zone as a function of river distance and time. Rover samples were then assigned a probability of being from either the advective or TS zone by comparing measured concentrations to the probability distributions of concentrations in the ADS advective and TS zones. A regression model was used to predict the probability of any in-stream position being located within the advective versus TS zone based on spatiotemporal predictors (time, river position, depth, and distance from bank) and eco-geomorphological feature (eddies, woody debris, benthic depressions, and aquatic vegetation). Results confirm that TS is spatially variable as a function of spatiotemporal and eco-geomorphological features. A substantial number of samples with nearly equivalent chances of being from the advective or TS zones suggests that the distinction between zones is often poorly defined.
NASA Astrophysics Data System (ADS)
Supanitsky, A. D.; Etchegoyen, A.; Melo, D.; Sanchez, F.
2015-08-01
At present there are still several open questions about the origin of the ultra high energy cosmic rays. However, great progress in this area has been made in recent years due to the data collected by the present generation of ground based detectors like the Pierre Auger Observatory and Telescope Array. In particular, it is believed that the study of the composition of the cosmic rays as a function of energy can play a fundamental role for the understanding of the origin of the cosmic rays. The observatories belonging to this generation are composed of arrays of surface detectors and fluorescence telescopes. The duty cycle of the fluorescence telescopes is ∼10% in contrast with the ∼100% of the surface detectors. Therefore, the energy calibration of the events observed by the surface detectors is performed by using a calibration curve obtained from a set of high quality events observed in coincidence by both types of detectors. The advantage of this method is that the reconstructed energy of the events observed by the surface detectors becomes almost independent of simulations of the showers because just a small part of the reconstructed energy (the missing energy), obtained from the fluorescence telescopes, comes from simulations. However, the calibration curve obtained in this way depends on the composition of the cosmic rays, which can introduce biases in composition analyses when parameters with a strong dependence on primary energy are considered. In this work we develop an analytical method to study these effects. We consider AMIGA (Auger Muons and Infill for the Ground Array), the low energy extension of the Pierre Auger Observatory corresponding to the surface detectors, to illustrate the use of the method. In particular, we study the biases introduced by an energy calibration dependent on composition on the determination of the mean value of the number of muons, at a given distance to the showers axis, which is one of the parameters most sensitive to primary mass and has an almost linear dependence with primary energy.
Takada, Toshihiko; Yamamoto, Yosuke; Terada, Kazuhiko; Ohta, Mitsuyasu; Mikami, Wakako; Yokota, Hajime; Hayashi, Michio; Miyashita, Jun; Azuma, Teruhisa; Fukuma, Shingo; Fukuhara, Shunichi
2017-11-08
Diagnosis of community-acquired pneumonia (CAP) in the elderly is often delayed because of atypical presentation and non-specific symptoms, such as appetite loss, falls and disturbance in consciousness. The aim of this study was to investigate the external validity of existing prediction models and the added value of the non-specific symptoms for the diagnosis of CAP in elderly patients. Prospective cohort study. General medicine departments of three teaching hospitals in Japan. A total of 109 elderly patients who consulted for upper respiratory symptoms between 1 October 2014 and 30 September 2016. The reference standard for CAP was chest radiograph evaluated by two certified radiologists. The existing models were externally validated for diagnostic performance by calibration plot and discrimination. To evaluate the additional value of the non-specific symptoms to the existing prediction models, we developed an extended logistic regression model. Calibration, discrimination, category-free net reclassification improvement (NRI) and decision curve analysis (DCA) were investigated in the extended model. Among the existing models, the model by van Vugt demonstrated the best performance, with an area under the curve of 0.75(95% CI 0.63 to 0.88); calibration plot showed good fit despite a significant Hosmer-Lemeshow test (p=0.017). Among the non-specific symptoms, appetite loss had positive likelihood ratio of 3.2 (2.0-5.3), negative likelihood ratio of 0.4 (0.2-0.7) and OR of 7.7 (3.0-19.7). Addition of appetite loss to the model by van Vugt led to improved calibration at p=0.48, NRI of 0.53 (p=0.019) and higher net benefit by DCA. Information on appetite loss improved the performance of an existing model for the diagnosis of CAP in the elderly. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
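A toy illustration of the "added value" question, comparing discrimination of a baseline risk model with and without an extra binary symptom, is sketched below on simulated data; the variable names (e.g. appetite_loss), effect sizes, and data are invented and only mirror the structure of the analysis, not the study's models or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Simulated patients: a baseline risk score plus a binary symptom flag.
rng = np.random.default_rng(1)
n = 500
baseline_score = rng.normal(size=n)
appetite_loss = rng.binomial(1, 0.3, size=n)
p = 1 / (1 + np.exp(-(-1.5 + 1.0 * baseline_score + 1.2 * appetite_loss)))
cap = rng.binomial(1, p)                      # simulated pneumonia outcome

X_base = baseline_score.reshape(-1, 1)
X_ext = np.column_stack([baseline_score, appetite_loss])

# Compare apparent discrimination of the baseline and extended logistic models.
auc_base = roc_auc_score(cap, LogisticRegression().fit(X_base, cap).predict_proba(X_base)[:, 1])
auc_ext = roc_auc_score(cap, LogisticRegression().fit(X_ext, cap).predict_proba(X_ext)[:, 1])
print(f"baseline AUC {auc_base:.3f} vs extended AUC {auc_ext:.3f}")
```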
Manohar, Nivedh; Reynoso, Francisco J; Cho, Sang Hyun
2013-08-01
To develop a proof-of-principle L-shell x-ray fluorescence (XRF) imaging system that locates and quantifies sparse concentrations of gold nanoparticles (GNPs) using a benchtop polychromatic x-ray source and a silicon (Si)-PIN diode x-ray detector system. 12-mm-diameter water-filled cylindrical tubes with GNP concentrations of 20, 10, 5, 0.5, 0.05, 0.005, and 0 mg∕cm3 served as calibration phantoms. An imaging phantom was created using the same cylindrical tube but filled with tissue-equivalent gel containing structures mimicking a GNP-loaded blood vessel and approximately 1 cm3 tumor. Phantoms were irradiated by a 3-mm-diameter pencil-beam of 62 kVp x-rays filtered by 1 mm aluminum. Fluorescence∕scatter photons from phantoms were detected at 90° with respect to the beam direction using a Si-PIN detector placed behind a 2.5-mm-diameter lead collimator. The imaging phantom was translated horizontally and vertically in 0.3-mm steps to image a 6 mm×15 mm region of interest (ROI). For each phantom, the net L-shell XRF signal from GNPs was extracted from background, and then corrected for detection efficiency and in-phantom attenuation using a fluorescence-to-scatter normalization algorithm. XRF measurements with calibration phantoms provided a calibration curve showing a linear relationship between corrected XRF signal and GNP mass per imaged voxel. Using the calibration curve, the detection limit (at the 95% confidence level) of the current experimental setup was estimated to be a GNP mass of 0.35 μg per imaged voxel (1.73×10(-2) cm3). A 2D XRF map of the ROI was also successfully generated, reasonably matching the known spatial distribution as well as showing the local variation of GNP concentrations. L-shell XRF imaging can be a highly sensitive tool that has the capability of simultaneously imaging the spatial distribution and determining the local concentration of GNPs presented on the order of parts-per-million level within subcentimeter-sized ex vivo samples and superficial tumors during preclinical animal studies.
Manohar, Nivedh; Reynoso, Francisco J.; Cho, Sang Hyun
2013-01-01
Purpose: To develop a proof-of-principle L-shell x-ray fluorescence (XRF) imaging system that locates and quantifies sparse concentrations of gold nanoparticles (GNPs) using a benchtop polychromatic x-ray source and a silicon (Si)-PIN diode x-ray detector system. Methods: 12-mm-diameter water-filled cylindrical tubes with GNP concentrations of 20, 10, 5, 0.5, 0.05, 0.005, and 0 mg/cm3 served as calibration phantoms. An imaging phantom was created using the same cylindrical tube but filled with tissue-equivalent gel containing structures mimicking a GNP-loaded blood vessel and approximately 1 cm3 tumor. Phantoms were irradiated by a 3-mm-diameter pencil-beam of 62 kVp x-rays filtered by 1 mm aluminum. Fluorescence/scatter photons from phantoms were detected at 90° with respect to the beam direction using a Si-PIN detector placed behind a 2.5-mm-diameter lead collimator. The imaging phantom was translated horizontally and vertically in 0.3-mm steps to image a 6 mm × 15 mm region of interest (ROI). For each phantom, the net L-shell XRF signal from GNPs was extracted from background, and then corrected for detection efficiency and in-phantom attenuation using a fluorescence-to-scatter normalization algorithm. Results: XRF measurements with calibration phantoms provided a calibration curve showing a linear relationship between corrected XRF signal and GNP mass per imaged voxel. Using the calibration curve, the detection limit (at the 95% confidence level) of the current experimental setup was estimated to be a GNP mass of 0.35 μg per imaged voxel (1.73 × 10−2 cm3). A 2D XRF map of the ROI was also successfully generated, reasonably matching the known spatial distribution as well as showing the local variation of GNP concentrations. Conclusions: L-shell XRF imaging can be a highly sensitive tool that has the capability of simultaneously imaging the spatial distribution and determining the local concentration of GNPs present at parts-per-million levels within subcentimeter-sized ex vivo samples and superficial tumors during preclinical animal studies. PMID:23927295
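The linear calibration and 95%-confidence detection limit described in the two records above can be illustrated with a short script. This is only a sketch under stated assumptions: the numbers are placeholders, and the Currie-type criterion (3.29 times the regression scatter divided by the slope) is a common convention rather than necessarily the exact procedure used by the authors.

    import numpy as np

    # Placeholder calibration data: GNP mass per voxel (ug) vs corrected XRF signal
    mass = np.array([0.0, 0.05, 0.5, 5.0, 10.0, 20.0])
    signal = np.array([0.1, 0.6, 5.2, 51.0, 103.0, 198.0])

    slope, intercept = np.polyfit(mass, signal, 1)        # linear calibration curve
    residuals = signal - (slope * mass + intercept)
    sigma = residuals.std(ddof=2)                         # regression scatter
    detection_limit = 3.29 * sigma / slope                # Currie-type, ~95% confidence
    print(f"slope = {slope:.2f}, detection limit ~ {detection_limit:.3f} ug per voxel")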
Visible and near-infrared imaging spectrometer (VNIS) for in-situ lunar surface measurements
NASA Astrophysics Data System (ADS)
He, Zhiping; Xu, Rui; Li, Chunlai; Lv, Gang; Yuan, Liyin; Wang, Binyong; Shu, Rong; Wang, Jianyu
2015-10-01
The Visible and Near-Infrared Imaging Spectrometer (VNIS) onboard China's Chang'E 3 lunar rover is capable of acquiring full reflectance spectra of objects on the lunar surface in situ while simultaneously performing calibrations. VNIS uses non-collinear acousto-optic tunable filters and consists of a VIS/NIR imaging spectrometer (0.45-0.95 μm), a shortwave IR spectrometer (0.9-2.4 μm), and a calibration unit with dust-proofing functionality. Having undergone a full program of pre-flight ground tests, calibrations, and environmental simulation tests, VNIS entered orbit around the Moon on 6 December 2013 and landed on 14 December 2013 with Chang'E 3. The first operations of VNIS were conducted on 23 December 2013 and included several explorations and calibrations that yielded spectral images and spectral reflectance curves of the lunar soil in the Imbrium region. These measurements represent the first in situ spectral imaging detections on the lunar surface. This paper describes the VNIS characteristics, laboratory calibration, and in situ measurements and calibration on the lunar surface.
Calibration of areal surface topography measuring instruments
NASA Astrophysics Data System (ADS)
Seewig, J.; Eifler, M.
2017-06-01
The ISO standards related to the calibration of areal surface topography measuring instruments are the ISO 25178-6xx series, which defines the relevant metrological characteristics for the calibration of different measuring principles, and the ISO 25178-7xx series, which defines the actual calibration procedures. As the field of areal measurement is not yet fully standardized, there are still open questions to be addressed which are the subject of current research. On this basis, selected research results of the authors in this area are presented. These include the design and fabrication of areal material measures, illustrated by two examples: the direct laser writing of a stepless material measure for the calibration of the height axis, based on the Abbott curve, and the manufacturing of a Siemens star for determining the lateral resolution limit. Building on these results, a new definition for the resolution criterion, the small-scale fidelity, which is still under discussion, is also presented. Additionally, a software solution for automated calibration procedures is outlined.
NASA Astrophysics Data System (ADS)
Sun, Li-wei; Ye, Xin; Fang, Wei; He, Zhen-lei; Yi, Xiao-long; Wang, Yu-peng
2017-11-01
A hyperspectral imaging spectrometer has high spatial and spectral resolution, and its radiometric calibration requires knowledge of the calibration sources at a correspondingly high spectral resolution. To satisfy this requirement, an on-orbit radiometric calibration method is designed in this paper. The calibration chain is based on the spectral inversion accuracy of the calibration light source. A genetic algorithm program is used to optimize the channel design of the transfer radiometer and to account for the degradation of the halogen lamp, thus realizing high-accuracy inversion of the spectral curve over the whole working time. The experimental results show that the average root mean squared error is 0.396%, the maximum root mean squared error is 0.448%, and the relative errors at all wavelengths are within 1% in the spectral range from 500 nm to 900 nm over a 100 h operating time. The design lays a foundation for the high-accuracy calibration of imaging spectrometers.
Importance of Calibration Method in Central Blood Pressure for Cardiac Structural Abnormalities.
Negishi, Kazuaki; Yang, Hong; Wang, Ying; Nolan, Mark T; Negishi, Tomoko; Pathan, Faraz; Marwick, Thomas H; Sharman, James E
2016-09-01
Central blood pressure (CBP) independently predicts cardiovascular risk, but calibration methods may affect accuracy of central systolic blood pressure (CSBP). Standard central systolic blood pressure (Stan-CSBP) from peripheral waveforms is usually derived with calibration using brachial SBP and diastolic BP (DBP). However, calibration using oscillometric mean arterial pressure (MAP) and DBP (MAP-CSBP) is purported to provide more accurate representation of true invasive CSBP. This study sought to determine which derived CSBP could more accurately discriminate cardiac structural abnormalities. A total of 349 community-based patients with risk factors (71±5 years, 161 males) had CSBP measured by brachial oscillometry (Mobil-O-Graph, IEM GmbH, Stolberg, Germany) using 2 calibration methods: MAP-CSBP and Stan-CSBP. Left ventricular hypertrophy (LVH) and left atrial dilatation (LAD) were measured based on standard guidelines. MAP-CSBP was higher than Stan-CSBP (149±20 vs. 128±15 mm Hg, P < 0.0001). Although they were modestly correlated (rho = 0.74, P < 0.001), the Bland-Altman plot demonstrated a large bias (21 mm Hg) and limits of agreement (24 mm Hg). In receiver operating characteristic (ROC) curve analyses, MAP-CSBP significantly better discriminated LVH compared with Stan-CSBP (area under the curve (AUC) 0.66 vs. 0.59, P = 0.0063) and brachial SBP (0.62, P = 0.027). Continuous net reclassification improvement (NRI) (P < 0.001) and integrated discrimination improvement (IDI) (P < 0.001) corroborated superior discrimination of LVH by MAP-CSBP. Similarly, MAP-CSBP better distinguished LAD than Stan-CSBP (AUC 0.63 vs. 0.56, P = 0.005) and conventional brachial SBP (0.58, P = 0.006), whereas Stan-CSBP provided no better discrimination than conventional brachial BP (P = 0.09). CSBP is calibration dependent and when oscillometric MAP and DBP are used, the derived CSBP is a better discriminator for cardiac structural abnormalities. © American Journal of Hypertension, Ltd 2016. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
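The Bland-Altman comparison of the two calibration methods reduces to the bias and 95% limits of agreement of the paired differences. A minimal sketch, with hypothetical array names, is:

    import numpy as np

    def bland_altman(map_csbp, stan_csbp):
        """Bias and 95% limits of agreement between two paired CSBP estimates."""
        a, b = np.asarray(map_csbp, float), np.asarray(stan_csbp, float)
        diff = a - b
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        return bias, (bias - half_width, bias + half_width)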
Photometric behavior and general characteristics of the nova HR Delphini
NASA Astrophysics Data System (ADS)
Raikova, D.
The light curve and the B-V color-index curve of HR Del were constructed on the basis of published UBV observations. From the normal color indices, the effective photosphere temperature and radius were determined using calibrations for normal stars. As the brightness reached its peak, the effective photosphere was expanding with a velocity of approximately 23 km/s, which is more than 10 times less than the gas velocity. This phenomenon is explained by decreasing continuous opacity as the ejected gas expands.
Wang, Hai-Feng; Lu, Hai; Li, Jia; Sun, Guo-Hua; Wang, Jun; Dai, Xin-Hua
2014-02-01
The present paper reports the differential scanning calorimetry-thermogravimetry curves and the infrared (IR) absorption spectra recorded under a temperature program, analyzed with a combined simultaneous thermal analysis-IR spectrometer. The gaseous products of coal were identified by IR spectrometry. The paper focuses on the high-temperature combustion-IR absorption method, a convenient and accurate method that measures the sulfur content of coal indirectly through determination of the sulfur dioxide content in the mixed gas products by IR absorption. It was demonstrated that, when the instrument was calibrated with various pure sulfur-containing compounds and certified reference materials (CRMs) for coal, there was a large deviation in the measured sulfur contents, indicating that the difference in chemical speciation of sulfur between the CRMs and the analyte results in a systematic error. The time-IR absorption curve was used to analyze the composition of sulfur released at low and high temperatures, and the sulfur content of the coal sample was then determined using a coal CRM with a similar sulfur composition; the systematic error due to the difference in sulfur speciation between the CRM and the analyte was thereby eliminated. In addition, in this high-temperature combustion-IR absorption method, the masses of CRM and analyte were adjusted so that their sulfur masses were equal, and the CRM and the analyte were measured alternately. This single-point calibration reduced the effect of IR detector drift and improved the repeatability of the results, compared with the conventional multi-point calibration based on calibration curves of signal intensity versus sulfur mass. The sulfur contents (and standard deviations) of an anthracite coal and a low-sulfur bituminous coal determined by this modified method were 0.345% (0.004%) and 0.372% (0.008%), respectively. The expanded uncertainty (U, k=2) of the sulfur contents of the two coal samples was evaluated to be 0.019% and 0.021%, respectively. Two main modifications, namely calibration using a coal CRM with a similar composition of low-temperature and high-temperature sulfur, and single-point calibration alternating the CRM and the analyte, give the high-temperature combustion-IR absorption method an accuracy clearly better than that of the ASTM method. This modified method therefore has good potential for the analysis of sulfur content.
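The single-point calibration step described above (CRM and sample measured alternately, with sulfur masses matched) amounts to simple ratio arithmetic. The sketch below is an assumed formulation, not the authors' exact procedure, and all names are hypothetical.

    def sulfur_by_single_point(area_sample, mass_sample, area_crm, mass_crm, w_crm):
        """Single-point calibration of sulfur content against a coal CRM.
        area_*: integrated SO2 IR absorption signal; mass_*: weighed mass (g);
        w_crm: certified sulfur mass fraction of the CRM (e.g. 0.00345 for 0.345%)."""
        sensitivity = area_crm / (mass_crm * w_crm)   # signal per unit sulfur mass
        return area_sample / (sensitivity * mass_sample)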
Refinement of moisture calibration curves for nuclear gage.
DOT National Transportation Integrated Search
1973-01-01
Over the last three years the Virginia Highway Research Council has directed a research effort toward improving the method of determining the moisture content of soils with a nuclear gage. The first task in this research was the determination of the ...
Validation of pavement performance curves for the mechanistic-empirical pavement design guide.
DOT National Transportation Integrated Search
2009-02-01
The objective of this research is to determine whether the nationally calibrated performance models used in the Mechanistic-Empirical : Pavement Design Guide (MEPDG) provide a reasonable prediction of actual field performance, and if the desired accu...
SENSITIVITY ANALYSIS OF THE USEPA WINS PM 2.5 SEPARATOR
Factors affecting the performance of the US EPA WINS PM2.5 separator have been systematically evaluated. In conjunction with the separator's laboratory calibrated penetration curve, analysis of the governing equation that describes conventional impactor performance was used to ...
Towards a global network of gamma-ray detector calibration facilities
NASA Astrophysics Data System (ADS)
Tijs, Marco; Koomans, Ronald; Limburg, Han
2016-09-01
Gamma-ray logging tools are applied worldwide. At various locations, calibration facilities are used to calibrate these gamma-ray logging systems. Several attempts have been made to cross-correlate well known calibration pits, but this cross-correlation does not include calibration facilities in Europe or private company calibration facilities. Our aim is to set up a framework that makes it possible to interlink all calibration facilities worldwide by using `tools of opportunity' - tools that have been calibrated in different calibration facilities, whether this usage was on a coordinated basis or by coincidence. To compare the measurements of different tools, it is important to understand the behaviour of the tools in the different calibration pits. Borehole properties, such as diameter, fluid, casing and probe diameter, strongly influence the outcome of gamma-ray borehole logging. Logs need to be properly calibrated and compensated for these borehole properties in order to obtain in-situ grades or to do cross-hole correlation. Some tool providers provide tool-specific correction curves for this purpose. Others rely on reference measurements against sources of known radionuclide concentration and geometry. In this article, we present an attempt to set up a framework for transferring `local' calibrations to be applied `globally'. This framework includes corrections for any geometry and detector size to give absolute concentrations of radionuclides from borehole measurements. This model is used to compare measurements in the calibration pits of Grand Junction, located in the USA; Adelaide (previously known as AMDEL), located in Adelaide, Australia; and Stonehenge, located at Medusa Explorations BV in the Netherlands.
Martini, Alberto; Gupta, Akriti; Lewis, Sara C; Cumarasamy, Shivaram; Haines, Kenneth G; Briganti, Alberto; Montorsi, Francesco; Tewari, Ashutosh K
2018-04-19
To develop a nomogram for predicting side-specific extracapsular extension (ECE) for planning nerve-sparing radical prostatectomy. We retrospectively analysed data from 561 patients who underwent robot-assisted radical prostatectomy between February 2014 and October 2015. To develop a side-specific predictive model, we considered the prostatic lobes separately. Four variables were included: prostate-specific antigen; highest ipsilateral biopsy Gleason grade; highest ipsilateral percentage core involvement; and ECE on multiparametric magnetic resonance imaging (mpMRI). A multivariable logistic regression analysis was fitted to predict side-specific ECE. A nomogram was built based on the coefficients of the logit function. Internal validation was performed using 'leave-one-out' cross-validation. Calibration was graphically investigated. Decision curve analysis was used to evaluate the net clinical benefit. The study population consisted of 829 side-specific cases, after excluding negative biopsy observations (n = 293). ECE was reported on mpMRI and final pathology in 115 (14%) and 142 (17.1%) cases, respectively. Among these, mpMRI was able to predict ECE correctly in 57 (40.1%) cases. All variables in the model except highest percentage core involvement were predictors of ECE (all P ≤ 0.006). All variables were considered for inclusion in the nomogram. After internal validation, the area under the curve was 82.11%. The model demonstrated excellent calibration and improved clinical risk prediction, especially when compared with relying on mpMRI prediction of ECE alone. When retrospectively applying the nomogram-derived probability, using a 20% threshold for performing nerve-sparing, nine out of 14 positive surgical margins (PSMs) at the site of ECE were above the threshold. We developed an easy-to-use model for the prediction of side-specific ECE, and hope it serves as a tool for planning nerve-sparing radical prostatectomy and in the reduction of PSMs in future series. © 2018 The Authors BJU International © 2018 BJU International Published by John Wiley & Sons Ltd.
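Decision curve analysis, used above to evaluate the net clinical benefit of the nomogram, can be sketched as below. This is the generic Vickers-Elkin net-benefit formula, offered as an illustration rather than the authors' implementation; y and p are hypothetical outcome and predicted-probability arrays.

    import numpy as np

    def net_benefit(y, p, thresholds):
        """Net benefit of a prediction model across decision thresholds (all < 1)."""
        y, p = np.asarray(y), np.asarray(p)
        n = len(y)
        nb = []
        for t in thresholds:
            treat = p >= t
            tp = np.sum(treat & (y == 1))   # true positives at this threshold
            fp = np.sum(treat & (y == 0))   # false positives at this threshold
            nb.append(tp / n - fp / n * t / (1 - t))
        return np.array(nb)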
Choo, Min Soo; Jeong, Seong Jin; Cho, Sung Yong; Yoo, Changwon; Jeong, Chang Wook; Ku, Ja Hyeon; Oh, Seung-June
2017-04-01
We aimed to externally validate the prediction model we developed for having bladder outlet obstruction (BOO) and requiring prostatic surgery using 2 independent data sets from tertiary referral centers, and also aimed to validate a mobile app for using this model through usability testing. Formulas and nomograms predicting whether a subject has BOO and needs prostatic surgery were validated with an external validation cohort from Seoul National University Bundang Hospital and Seoul Metropolitan Government-Seoul National University Boramae Medical Center between January 2004 and April 2015. A smartphone-based app was developed, and 8 young urologists were enrolled for usability testing to identify any human factor issues of the app. A total of 642 patients were included in the external validation cohort. No significant differences were found in the baseline characteristics of major parameters between the original cohort (n=1,179) and the external validation cohort, except for the maximal flow rate. Predictions of requiring prostatic surgery in the validation cohort showed a sensitivity of 80.6%, a specificity of 73.2%, a positive predictive value of 49.7%, a negative predictive value of 92.0%, and an area under the receiver operating characteristic curve of 0.84. The calibration plot indicated that the predictions corresponded well with the observed outcomes. The decision curve analysis also showed a high net benefit. Similar evaluation results using the external validation cohort were seen in the predictions of having BOO. Overall results of the usability test demonstrated that the app was user-friendly with no major human factor issues. External validation of this newly developed prediction model demonstrated a moderate level of discrimination, adequate calibration, and high net benefit gains for predicting both having BOO and requiring prostatic surgery. In addition, the smartphone app implementing the prediction model was user-friendly with no major human factor issues.
Development and Validation of GC-ECD Method for the Determination of Metamitron in Soil
Tandon, Shishir; Kumar, Satyendra; Sand, N. K.
2015-01-01
This paper aims at developing and validating a convenient, rapid, and sensitive method for the estimation of metamitron in soil samples. Determination and quantification were carried out by gas chromatography on a microcapillary column with an electron capture detector. The compound was extracted from soil using methanol and cleaned up by C-18 SPE. After optimization, the method was validated by evaluating the analytical curves, linearity, limits of detection and quantification, precision (repeatability and intermediate precision), and accuracy (recovery). Recovery values ranged from 89 to 93.5% within 0.05-2.0 µg L−1 with an average RSD of 1.80%. The precision (repeatability) ranged from 1.7034 to 1.9144% and the intermediate precision from 1.5685 to 2.1323%. The retention time was 6.3 minutes, and the minimum detectable and quantifiable limits were 0.02 ng mL−1 and 0.05 ng g−1, respectively. Good linearity (R2 = 0.998) of the calibration curves was obtained over the range from 0.05 to 2.0 µg L−1. The results indicated that the developed method is rapid and easy to perform, making it applicable for analysis in large pesticide monitoring programmes. PMID:25733978
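The linearity, LOD and LOQ figures quoted above follow from an ordinary least-squares calibration. A minimal sketch with placeholder standards is given below; the 3.3s/slope and 10s/slope expressions are the common ICH-style estimates and may differ from the authors' exact calculation.

    import numpy as np

    conc = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0])         # ug L-1, placeholder
    area = np.array([110., 221., 545., 1090., 2180., 4330.])  # peak areas, placeholder

    slope, intercept = np.polyfit(conc, area, 1)
    pred = slope * conc + intercept
    r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

    s_y = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))  # residual SD
    lod = 3.3 * s_y / slope
    loq = 10.0 * s_y / slope
    print(f"R2 = {r2:.4f}, LOD ~ {lod:.3f}, LOQ ~ {loq:.3f} ug L-1")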
A new calibration code for the JET polarimeter.
Gelfusa, M; Murari, A; Gaudio, P; Boboc, A; Brombin, M; Orsitto, F P; Giovannozzi, E
2010-05-01
An equivalent model of the JET polarimeter is presented, which overcomes the drawbacks of previous versions of the fitting procedures used to provide calibrated results. First, the signal processing electronics were simulated to confirm that they are still working within the original specifications. Then the effective optical path of both the vertical and lateral chords was implemented to produce the calibration curves. This modelling approach yields a single procedure that can be applied to any manual calibration and remains valid until the following one. The optical model of the chords is then applied to derive the plasma measurements. The results are in good agreement with the estimates of the most advanced full-wave propagation code available and have been benchmarked against other diagnostics. The devised procedure has proved to work properly also for the most recent campaigns and high-current experiments.
Grant, S.W.; Hickey, G.L.; Carlson, E.D.; McCollum, C.N.
2014-01-01
Objective/background A number of contemporary risk prediction models for mortality following elective abdominal aortic aneurysm (AAA) repair have been developed. Before a model is used either in clinical practice or to risk-adjust surgical outcome data it is important that its performance is assessed in external validation studies. Methods The British Aneurysm Repair (BAR) score, Medicare, and Vascular Governance North West (VGNW) models were validated using an independent prospectively collected sample of multicentre clinical audit data. Consecutive data on 1,124 patients undergoing elective AAA repair at 17 hospitals in the north-west of England and Wales between April 2011 and March 2013 were analysed. The outcome measure was in-hospital mortality. Model calibration (observed to expected ratio with chi-square test, calibration plots, calibration intercept and slope) and discrimination (area under receiver operating characteristic curve [AUC]) were assessed in the overall cohort and procedural subgroups. Results The mean age of the population was 74.4 years (SD 7.7); 193 (17.2%) patients were women and the majority of patients (759, 67.5%) underwent endovascular aneurysm repair. All three models demonstrated good calibration in the overall cohort and procedural subgroups. Overall discrimination was excellent for the BAR score (AUC 0.83, 95% confidence interval [CI] 0.76–0.89), and acceptable for the Medicare and VGNW models, with AUCs of 0.78 (95% CI 0.70–0.86) and 0.75 (95% CI 0.65–0.84) respectively. Only the BAR score demonstrated good discrimination in procedural subgroups. Conclusion All three models demonstrated good calibration and discrimination for the prediction of in-hospital mortality following elective AAA repair and are potentially useful. The BAR score has a number of advantages, which include being developed on the most contemporaneous data, excellent overall discrimination, and good performance in procedural subgroups. Regular model validations and recalibration will be essential. PMID:24837173
Grant, S W; Hickey, G L; Carlson, E D; McCollum, C N
2014-07-01
A number of contemporary risk prediction models for mortality following elective abdominal aortic aneurysm (AAA) repair have been developed. Before a model is used either in clinical practice or to risk-adjust surgical outcome data it is important that its performance is assessed in external validation studies. The British Aneurysm Repair (BAR) score, Medicare, and Vascular Governance North West (VGNW) models were validated using an independent prospectively collected sample of multicentre clinical audit data. Consecutive data on 1,124 patients undergoing elective AAA repair at 17 hospitals in the north-west of England and Wales between April 2011 and March 2013 were analysed. The outcome measure was in-hospital mortality. Model calibration (observed to expected ratio with chi-square test, calibration plots, calibration intercept and slope) and discrimination (area under receiver operating characteristic curve [AUC]) were assessed in the overall cohort and procedural subgroups. The mean age of the population was 74.4 years (SD 7.7); 193 (17.2%) patients were women and the majority of patients (759, 67.5%) underwent endovascular aneurysm repair. All three models demonstrated good calibration in the overall cohort and procedural subgroups. Overall discrimination was excellent for the BAR score (AUC 0.83, 95% confidence interval [CI] 0.76-0.89), and acceptable for the Medicare and VGNW models, with AUCs of 0.78 (95% CI 0.70-0.86) and 0.75 (95% CI 0.65-0.84) respectively. Only the BAR score demonstrated good discrimination in procedural subgroups. All three models demonstrated good calibration and discrimination for the prediction of in-hospital mortality following elective AAA repair and are potentially useful. The BAR score has a number of advantages, which include being developed on the most contemporaneous data, excellent overall discrimination, and good performance in procedural subgroups. Regular model validations and recalibration will be essential. Copyright © 2014 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
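The calibration and discrimination statistics reported in the two records above (observed-to-expected ratio and AUC) can be summarised with a few lines of code. This is a generic sketch, not the study's analysis script; y and p are hypothetical arrays of observed in-hospital mortality and model-predicted risk.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def external_validation_summary(y, p):
        """Observed/expected ratio (calibration-in-the-large) and AUC (discrimination)."""
        y, p = np.asarray(y, float), np.asarray(p, float)
        o_e = y.mean() / p.mean()
        auc = roc_auc_score(y, p)
        return o_e, auc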
Results of the 1999 JPL Balloon Flight Solar Cell Calibration Program
NASA Technical Reports Server (NTRS)
Anspaugh, B. E.; Mueller, R. L.; Weiss, R. S.
2000-01-01
The 1999 solar cell calibration balloon flight campaign consisted of two flights, which occurred on June 14, 1999, and July 6, 1999. All objectives of the flight program were met. Fifty-seven modules were carried to an altitude of approximately 120,000 ft (36.6 km). Full I-V curves were measured on five of these modules, and output at a fixed load was measured on forty-three modules (forty-five cells), with some modules repeated on the second flight. These data were corrected to 28 °C and to 1 AU (1.496 × 10^8 km). The calibrated cells have been returned to their owners and can now be used as reference standards in simulator testing of cells and arrays.
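The correction of balloon-flight measurements to 28 °C and 1 AU mentioned above combines an inverse-square scaling of the irradiance with a linear temperature correction. The sketch below is an assumed form of that correction, not the documented JPL procedure; the temperature coefficient must come from the cell data sheet.

    AU_KM = 1.496e8  # km

    def correct_isc(isc_meas, temp_c, sun_dist_km, alpha_amp_per_degc=0.0):
        """Scale a measured short-circuit current to 28 C and 1 AU (assumed model)."""
        isc = isc_meas * (sun_dist_km / AU_KM) ** 2      # inverse-square to 1-AU irradiance
        isc += alpha_amp_per_degc * (28.0 - temp_c)      # linear temperature correction
        return isc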
Attaining the Photometric Precision Required by Future Dark Energy Projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stubbs, Christopher
2013-01-21
This report outlines our progress towards achieving the high-precision astronomical measurements needed to derive improved constraints on the nature of the Dark Energy. Our approach to obtaining higher precision flux measurements has three basic components: 1) determination of the optical transmission of the atmosphere; 2) mapping out the instrumental photon sensitivity function vs. wavelength, calibrated by referencing the measurements to the known sensitivity curve of a high-precision silicon photodiode; and 3) using the self-consistency of stellar spectra to achieve precise color calibrations.
Demystifying liver iron concentration measurements with MRI.
Henninger, B
2018-06-01
This Editorial comment refers to the article: Non-invasive measurement of liver iron concentration using 3-Tesla magnetic resonance imaging: validation against biopsy. D'Assignies G, et al. Eur Radiol Nov 2017. • MRI is a widely accepted reliable tool to determine liver iron concentration. • MRI cannot measure iron directly, it needs calibration. • Calibration curves for 3.0T are rare in the literature. • The study by d'Assignies et al. provides valuable information on this topic. • Evaluation of liver iron overload should no longer be restricted to experts.
Analyzing the biosensor signal in flows: studies with glucose optrodes.
Kivirand, K; Floren, A; Kagan, M; Avarmaa, T; Rinken, T; Jaaniso, R
2015-01-01
Responses of enzymatic bio-optrodes in a flow regime were studied and an original model was proposed with the aim of establishing a reliable method for quick determination of biosensor signal parameters, applicable for biosensor calibration. A dual-optrode glucose biosensor, comprising a glucose bio-optrode and a reference oxygen optrode, both placed into identical flow channels, was developed and used as a model system. The signal parameters of this biosensor at different substrate concentrations did not depend on the speed of the probe flow and could be determined from the initial part of the biosensor's transient-phase signal, providing a valuable tool for rapid analysis. In addition, the model helped to design the biosensor system with a reduced impact of enzyme inactivation on the system stability (a 20% decrease of the enzyme activity led to only a 1% decrease in the slope of the calibration curve) and hence to significantly prolong the effective lifetime of the bio-optrodes. Copyright © 2014 Elsevier B.V. All rights reserved.
Fresh broad (Vicia faba) tissue homogenate-based biosensor for determination of phenolic compounds.
Ozcan, Hakki Mevlut; Sagiroglu, Ayten
2014-08-01
In this study, a novel fresh broad (Vicia faba) tissue homogenate-based biosensor for the determination of phenolic compounds was developed. The biosensor was constructed by immobilizing tissue homogenate of fresh broad (Vicia faba) onto a glassy carbon electrode. For biosensor stability, general immobilization techniques were used to secure the fresh broad tissue homogenate in a gelatin-glutaraldehyde cross-linking matrix. In the optimization and characterization studies, the amounts of fresh broad tissue homogenate and gelatin, the glutaraldehyde percentage, the optimum pH, temperature and buffer concentration, thermal stability, interference effects, linear range, storage stability, repeatability and sample applications (wine, beer, fruit juices) were investigated. In addition, the detection ranges of thirteen phenolic compounds were obtained with the help of the calibration graphs. A typical calibration curve for the sensor revealed a linear range of 5-60 μM catechol. In reproducibility studies, the coefficient of variation (CV) and standard deviation (SD) were calculated as 1.59% and 0.64 × 10−3 μM, respectively.
Dantan, N; Frenzel, W; Küppers, S
2000-05-31
Flow injection methods utilising the Karl Fischer (KF) reaction with spectrophotometric and potentiometric detection are described for the determination of the trace water content in various organic solvents. Optimisation of the methods resulted in an accessible (linear) working range of 0.01-0.2% water for many of the solvents studied, with a typical precision of 1-2% R.S.D. Only 50 μl of organic solvent was injected and the sampling frequency was about 120 samples per hour. Since the slopes of the calibration curves differed between solvents, appropriate calibration was required for each. Problems associated with spectrophotometric detection and caused by refractive index changes were pointed out, and a nested-loop configuration was proposed to overcome this kind of interference. The potentiometric method with a novel flow-through detector cell was shown to surpass the performance of spectrophotometric detection in every respect. The characteristics of the procedures developed make them well suited for on-line monitoring of technical solvent distillations in an industrial plant.
Cappellari, Manuel; Turcato, Gianni; Forlivesi, Stefano; Zivelonghi, Cecilia; Bovi, Paolo; Bonetti, Bruno; Toni, Danilo
2018-02-01
Symptomatic intracerebral hemorrhage (sICH) is rare but is the most feared complication of intravenous thrombolysis for ischemic stroke. We aimed to develop and validate a nomogram for individualized prediction of sICH in intravenous thrombolysis-treated stroke patients included in the multicenter SITS-ISTR (Safe Implementation of Thrombolysis in Stroke-International Stroke Thrombolysis Register). All patients registered in the SITS-ISTR by 179 Italian centers between May 2001 and March 2016 were originally included. The main outcome measure was sICH per the European Cooperative Acute Stroke Study II definition (any type of intracerebral hemorrhage with increase of ≥4 National Institutes of Health Stroke Scale score points from baseline or death <7 days). On the basis of the multivariate logistic model, the nomogram was generated. We assessed the discriminative performance by using the area under the receiver-operating characteristic curve and the calibration of the risk prediction model by using the Hosmer-Lemeshow test. A total of 15 949 patients with complete data for generating the nomogram was randomly dichotomized into training (3/4; n=12 030) and test (1/4; n=3919) sets. After multivariate logistic regression, 10 variables remained independent predictors of sICH to compose the STARTING-SICH (systolic blood pressure, age, onset-to-treatment time for thrombolysis, National Institutes of Health Stroke Scale score, glucose, aspirin alone, aspirin plus clopidogrel, anticoagulant with INR ≤1.7, current infarction sign, hyperdense artery sign) nomogram. The area under the receiver-operating characteristic curve of STARTING-SICH was 0.739. Calibration was good (P=0.327 for the Hosmer-Lemeshow test). The STARTING-SICH is the first nomogram developed and validated in a large SITS-ISTR cohort for individualized prediction of sICH in intravenous thrombolysis-treated stroke patients. © 2018 American Heart Association, Inc.
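The Hosmer-Lemeshow statistic used above to assess calibration groups patients by deciles of predicted risk and compares observed with expected events. A minimal sketch follows; it is the textbook formulation (chi-square with g-2 degrees of freedom by convention), not the authors' code, and y and p are hypothetical arrays.

    import numpy as np
    from scipy.stats import chi2

    def hosmer_lemeshow(y, p, groups=10):
        """Hosmer-Lemeshow goodness-of-fit over risk deciles."""
        y, p = np.asarray(y, float), np.asarray(p, float)
        order = np.argsort(p)
        stat = 0.0
        for idx in np.array_split(order, groups):
            obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
            stat += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
        return stat, chi2.sf(stat, groups - 2)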
González, Oskar; van Vliet, Michael; Damen, Carola W N; van der Kloet, Frans M; Vreeken, Rob J; Hankemeier, Thomas
2015-06-16
The possible presence of matrix effect is one of the main concerns in liquid chromatography-mass spectrometry (LC-MS)-driven bioanalysis due to its impact on the reliability of the obtained quantitative results. Here we propose an approach to correct for the matrix effect in LC-MS with electrospray ionization using postcolumn infusion of eight internal standards (PCI-IS). We applied this approach to a generic ultraperformance liquid chromatography-time-of-flight (UHPLC-TOF) platform developed for small-molecule profiling with a main focus on drugs. Different urine samples were spiked with 19 drugs with different physicochemical properties and analyzed in order to study the matrix effect (in absolute and relative terms). Furthermore, calibration curves for each analyte were constructed and quality control samples at different concentration levels were analyzed to check the applicability of this approach in quantitative analysis. The matrix effect profiles of the PCI-ISs were different: this confirms that the matrix effect is compound-dependent, and therefore the most suitable PCI-IS has to be chosen for each analyte. Chromatograms were reconstructed using analyte and PCI-IS responses, which were used to develop an optimized method that compensates for variation in ionization efficiency. The approach presented here dramatically improved the results in terms of matrix effect. Furthermore, calibration curves of higher quality are obtained, the dynamic range is enhanced, and the accuracy and precision of QC samples are improved. The use of PCI-ISs is a very promising step toward an analytical platform free of matrix effect, which can make LC-MS analysis even more successful, adding a higher reliability in quantification to its intrinsic high sensitivity and selectivity.
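The reconstruction of chromatograms from analyte and PCI-IS responses can be pictured as a point-by-point normalisation: because the internal standards are infused at a constant rate, any dip in their signal reflects ionization suppression and can be divided out. The sketch below is one assumed way to do this, not the authors' algorithm; the trace names are hypothetical.

    import numpy as np

    def pci_corrected_trace(analyte_trace, pci_is_trace, eps=1e-9):
        """Correct an analyte chromatogram using a postcolumn-infused internal standard."""
        analyte_trace = np.asarray(analyte_trace, float)
        pci_is_trace = np.asarray(pci_is_trace, float)
        ref = np.median(pci_is_trace)                    # nominal, unsuppressed PCI-IS level
        return analyte_trace * ref / (pci_is_trace + eps)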
Leteurtre, Stéphane; Leclerc, Francis; Wirth, Jessica; Noizet, Odile; Magnenant, Eric; Sadik, Ahmed; Fourier, Catherine; Cremer, Robin
2004-01-01
Introduction Two generic paediatric mortality scoring systems have been validated in the paediatric intensive care unit (PICU). Paediatric RISk of Mortality (PRISM) requires an observation period of 24 hours, and PRISM III measures severity at two time points (at 12 hours and 24 hours) after admission, which represents a limitation for clinical trials that require earlier inclusion. The Paediatric Index of Mortality (PIM) is calculated 1 hour after admission but does not take into account the stabilization period following admission. To avoid these limitations, we chose to conduct assessments 4 hours after PICU admission. The aim of the present study was to validate PRISM, PRISM III and PIM at the time points for which they were developed, and to compare their accuracy in predicting mortality at those times with their accuracy at 4 hours. Methods All children admitted from June 1998 to May 2000 in one tertiary PICU were prospectively included. Data were collected to generate scores and predictions using PRISM, PRISM III and PIM. Results There were 802 consecutive admissions with 80 deaths. For the time points for which the scores were developed, observed and predicted mortality rates were significantly different for the three scores (P < 0.01) whereas all exhibited good discrimination (area under the receiver operating characteristic curve ≥0.83). At 4 hours after admission only the PIM had good calibration (P = 0.44), but all three scores exhibited good discrimination (area under the receiver operating characteristic curve ≥0.82). Conclusions Among the three scores calculated at 4 hours after admission, all had good discriminatory capacity but only the PIM score was well calibrated. Further studies are required before the PIM score at 4 hours can be used as an inclusion criterion in clinical trials. PMID:15312217
Analysis of PVC plasticizers in medical devices and infused solutions by GC-MS.
Bourdeaux, Daniel; Yessaad, Mouloud; Chennell, Philip; Larbre, Virginie; Eljezi, Teuta; Bernard, Lise; Sautou, Valerie
2016-01-25
In 2008, di-(2-ethylhexyl) phthalate (DEHP) was categorized as CMR 1B under the CLP regulations and its use in PVC medical devices (MD) was called into question by the European authorities. This resulted in the commercialization of PVC MDs plasticized with the DEHP alternative plasticizers tri-octyl trimellitate (TOTM), di-(2-ethylhexyl) terephthalate (DEHT), di-isononyl cyclohexane-1,2-dicarboxylate (DINCH), di-isononyl phthalate (DINP), di-(2-ethylhexyl) adipate (DEHA), and acetyl tri-n-butyl citrate (ATBC). The data available on the migration of these plasticizers from the MDs are too limited to ensure their safe use. We therefore developed a versatile GC-MS method to identify and quantify both these newly used plasticizers and DEHP in MDs and to assess their migration abilities in simulant solution. The use of cubic calibration curves and the optimization of the analytical method by an experimental design allowed us to lower the limit of plasticizer quantification. It also allowed wide calibration curves to be established that were adapted to this quantification in MDs during migration tests, irrespective of the amount present, and while maintaining good precision and accuracy. We then tested the developed method on 32 PVC MDs used in our hospital and evaluated the plasticizer release from a PVC MD into a simulant solution during a 24 h migration test. The results showed a predominance of TOTM in PVC MDs accompanied by DEHP (<0.1% w/w), DEHT, and sometimes DEHA. The migration tests showed a difference in migration ability between the plasticizers and a non-linear kinetic release. Copyright © 2015 Elsevier B.V. All rights reserved.
INFLUENCE OF MATERIAL MODELS ON PREDICTING THE FIRE BEHAVIOR OF STEEL COLUMNS.
Choe, Lisa; Zhang, Chao; Luecke, William E; Gross, John L; Varma, Amit H
2017-01-01
Finite-element (FE) analysis was used to compare the high-temperature responses of steel columns with two different stress-strain models: the Eurocode 3 model and the model proposed by the National Institute of Standards and Technology (NIST). The comparisons were made in three different phases. The first phase compared the critical buckling temperatures predicted using forty-seven sets of column data from five different laboratories. The slenderness ratios varied from 34 to 137, and the applied axial load was 20-60 % of the room-temperature capacity. The results showed that the NIST model predicted the buckling temperature as accurately as or more accurately than the Eurocode 3 model for four of the five data sets. In the second phase, thirty unique FE models were developed to analyze the W8×35 and W14×53 column specimens with a slenderness ratio of about 70. The column specimens were tested under steady-heating conditions with a target temperature in the range of 300-600 °C. The models were developed by combining the material model, temperature distributions in the specimens, and the numerical scheme for non-linear analyses. Overall, the models with the NIST material properties and the measured temperature variations showed results comparable to the test data. The deviations in the results from two different numerical approaches (modified Newton-Raphson vs. arc-length) were negligible. The Eurocode 3 model made conservative predictions of the behavior of the column specimens since its retained elastic moduli are smaller than those of the NIST model at elevated temperatures. In the third phase, the column curves calibrated using the NIST model were compared with those prescribed in the ANSI/AISC-360 Appendix 4. The calibrated curve significantly deviated from the current design equation with increasing temperature, especially for slenderness ratios from 50 to 100.
Information Management Systems in the Undergraduate Instrumental Analysis Laboratory.
ERIC Educational Resources Information Center
Merrer, Robert J.
1985-01-01
Discusses two applications of Laboratory Information Management Systems (LIMS) in the undergraduate laboratory. They are the coulometric titration of thiosulfate with electrogenerated triiodide ion and the atomic absorption determination of calcium using both analytical calibration curve and standard addition methods. (JN)
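The standard-addition determination mentioned in the abstract above extrapolates the response of spiked aliquots back to zero added analyte. A minimal sketch (assuming a constant final volume, with hypothetical numbers) is:

    import numpy as np

    def standard_addition(added_conc, response):
        """Unknown concentration from the x-intercept of response vs added concentration."""
        slope, intercept = np.polyfit(added_conc, response, 1)
        return intercept / slope

    # e.g. standard_addition([0, 1, 2, 3], [0.21, 0.42, 0.63, 0.85]) ~ 0.98 (units of added_conc)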
The Measurement of Magnetic Fields
ERIC Educational Resources Information Center
Berridge, H. J. J.
1973-01-01
Discusses five experimental methods used by senior high school students to provide an accurate calibration curve of magnet current against the magnetic flux density produced by an electromagnet. Compares the relative merits of the five methods, both as measurements and from an educational viewpoint. (JR)
Diagnosing Prion Diseases: Mass Spectrometry-Based Approaches
USDA-ARS?s Scientific Manuscript database
Mass spectrometry is an established means of quantitating the prions present in infected hamsters. Calibration curves relating the area ratios of the selected analyte peptides and their oxidized analogs to stable isotope labeled internal standards were prepared. The limit of detection (LOD) and limi...